another bytecode interpreter ?
Posted Aug 22, 2013 0:21 UTC (Thu) by wahern (subscriber, #37304)
In reply to: another bytecode interpreter ? by aliguori
Parent article: The return of nftables
http://www.itu.int/ITU-T/asn1/uses/
It's a darned shame that ASN.1 support isn't more widespread in the FOSS world. Even Perl modules are crappy. OpenSSL has a fairly complete library, but it's a gigantic PITA to use, like most FOSS ASN.1 tools. Really, ASN.1 was meant to be automatically compiled from the message description to code, with your application code manipulating a real data structure. This is the best open source ASN.1 project I'm aware of:
http://lionet.info/asn1c/compiler.html
It supports streaming parsing and composition without being tied to any I/O model. The only downside is that strings and arrays are always dynamically allocated, which makes constructing and destroying messages fairly verbose, especially if you care about malloc failure. Some proprietary ASN.1 compilers support fixed-length arrays, which make life a little easier when you're dealing with several simple string fields or lists with a small, finite length limit, and which allow message caches with simpler initialization.
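To make the "compiled from the message description" workflow concrete, here is a minimal, made-up ASN.1 module of the kind asn1c consumes; the module name, fields, and SIZE constraints are invented for illustration:

```asn1
-- Hypothetical module; all names and constraints are illustrative.
FirewallLog DEFINITIONS AUTOMATIC TAGS ::= BEGIN

    LogEntry ::= SEQUENCE {
        timestamp   INTEGER (0..4294967295),
        iface       UTF8String (SIZE(1..16)),
        verdict     ENUMERATED { accept(0), drop(1), reject(2) },
        payloadLen  INTEGER OPTIONAL
    }

END
```

From a description like this, asn1c generates a C struct for each type plus BER/DER/PER encode and decode routines, so application code manipulates a real data structure rather than hand-parsing bytes. The SIZE constraint is exactly the kind of bound that some proprietary compilers turn into a fixed-length field instead of a malloc'd one.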
Posted Aug 22, 2013 3:51 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
BTW, if something was initially designed by telecom guys, that alone is a great reason to avoid it like the plague.
Posted Aug 22, 2013 8:52 UTC (Thu) by gdt (subscriber, #6284)
Your "telecom guys" comment serves no purpose. It references an issue from twenty years ago. These days at IETF meetings you're just as likely to see a "telecom guy" arguing against some hacked-together draft and asking that time be taken to do it better, with the IP equipment vendors opposing that for "time to market" reasons.
Posted Aug 22, 2013 9:17 UTC (Thu) by dlang (guest, #313)
This is like the claims that transparent disk compression was made obsolete by faster disks (including, but not limited to, SSDs). In some cases that's true and the system has better things to spend its processor time on, but in other cases the system has more processing power than it needs while waiting for I/O, so spending even a substantial amount of processing power to cut down on the amount of I/O needed can be a substantial win.
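The CPU-for-I/O tradeoff can be sketched with Python's zlib: higher compression levels burn more cycles but shrink the data that has to cross the slow device. The payload and levels below are illustrative, not from the comment:

```python
# Trade CPU cycles for I/O volume: compress before writing/sending.
import time
import zlib

# Highly repetitive payload, the kind that compresses well.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 2000

for level in (1, 6, 9):  # fast, default, thorough
    start = time.perf_counter()
    packed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(payload)} -> {len(packed)} bytes "
          f"in {elapsed * 1000:.2f} ms")
```

Whether the cycles are worth it depends on exactly the balance dlang describes: if the CPU would otherwise idle waiting on the disk or NIC, the reduced transfer size is nearly free.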
Posted Aug 22, 2013 10:25 UTC (Thu) by khim (subscriber, #9252)
Single-core CPU performance has basically flatlined: nine years ago the top-of-the-line CPU was a Pentium 4 570J @ 3.8GHz, while today it's a Core i7 4770K @ 3.9GHz. Micro-architectural improvements mean that today's Core i7 is faster than an identically-clocked Pentium 4 from last decade, but the difference is not striking. Meanwhile Ethernet went from 10GbE to 100GbE, USB went from 480Mbit/s to 10Gbit/s, and even PCI Express went from 250MB/s to 985MB/s per lane! Sure, if you include the number of cores in your analysis you'll find that CPUs cope more or less fine - but latency is often the limiting factor in communication protocols, and SMP is not much help there.
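The gap khim describes is easy to quantify from the figures in the comment: a back-of-the-envelope comparison of growth ratios over roughly the same nine-year span.

```python
# Growth ratios computed from the figures cited above.
cpu_growth = 3.9 / 3.8        # P4 570J -> Core i7 4770K clock, ~1.03x
ethernet_growth = 100 / 10    # 10GbE -> 100GbE, 10x
usb_growth = 10_000 / 480     # 480 Mbit/s -> 10 Gbit/s, ~20.8x
pcie_growth = 985 / 250       # per-lane MB/s, ~3.9x

print(f"CPU clock:  {cpu_growth:.2f}x")
print(f"Ethernet:   {ethernet_growth:.1f}x")
print(f"USB:        {usb_growth:.1f}x")
print(f"PCIe/lane:  {pcie_growth:.1f}x")
```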
Posted Aug 22, 2013 12:57 UTC (Thu) by intgr (subscriber, #39733)
Depends on what you mean by "striking". I find that current server CPUs are 3-5 times faster at single-threaded workloads than 8-year-old single-core ones. Every generation of Intel processors still has performance gains of 10% or so while consuming less power.
I see your point about the overhead of complex protocol encodings. But if we go back to the original topic of firewalling: increasing network speeds would not be such a big problem if we weren't still stuck with packet sizes that were designed for 10 Mbps networks.
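The packet-size point matters because a firewall's work scales with packet rate, not byte rate. A quick calculation (assuming a standard 1500-byte Ethernet MTU and ignoring framing overhead for brevity) shows how the per-packet load explodes at modern link speeds:

```python
# Packets per second at full load for a 1500-byte MTU.
MTU_BITS = 1500 * 8

for name, bits_per_sec in (("10 Mbps", 10e6), ("10 Gbps", 10e9)):
    pps = bits_per_sec / MTU_BITS
    print(f"{name}: ~{pps:,.0f} packets/s")
```

The same MTU that meant under a thousand packets per second on a 10 Mbps link means close to a million per second at 10 Gbps, each needing a rule-set evaluation.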
Posted Aug 22, 2013 22:05 UTC (Thu) by dlang (guest, #313)
It's also _extremely_ unlikely that you are going to need to limit your computation to a single core. Using multiple cores is trivial if you have multiple communication streams to process (put each stream on its own core, or the processing of data from each interface on its own core, etc.). But even if you have one stream to process, you can almost always find a way to split the workload across multiple cores (one core works on what you are sending now, a second works on what you will be sending in a few hundred ms, etc.),
so the move to multiple cores does end up helping the processing of things like this.
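A sketch of that single-stream split, assuming the per-chunk work is independent (the chunk size and the use of zlib as a stand-in workload are my choices, not dlang's). zlib releases the GIL, so even Python threads genuinely spread this across cores:

```python
# Split one stream into chunks and process them in parallel.
import zlib
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk: bytes) -> bytes:
    # Stand-in for real per-chunk work (compress, checksum, parse...).
    return zlib.compress(chunk)

def process_stream(stream: bytes, chunk_size: int = 65536) -> list:
    chunks = [stream[i:i + chunk_size]
              for i in range(0, len(stream), chunk_size)]
    # Each worker handles a different slice of the same stream.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_chunk, chunks))

data = bytes(range(256)) * 4096          # 1 MiB of sample data
parts = process_stream(data)
print(f"{len(parts)} chunks processed independently")
```

The same shape works for dlang's other case (one worker per stream or per interface); the caveat is that splitting helps throughput, not the per-message latency khim mentioned.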
Posted Aug 22, 2013 12:40 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
As for compactness, GZIP+XML is about as compact as ASN.1 BER for most use cases. Google's protobufs are also pretty compact.
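A rough illustration of that claim, with a made-up record format (the field names and the hand-rolled struct packing stand in for a compact binary encoding like BER or protobuf; exact sizes will vary):

```python
# Compare gzipped XML against a hand-rolled compact binary packing.
import gzip
import struct

records = [(i, i * 2, f"user{i}") for i in range(1000)]

xml = "<records>" + "".join(
    f'<r id="{i}" score="{s}" name="{n}"/>' for i, s, n in records
) + "</records>"

# Stand-in for a compact binary encoding: two big-endian u32s plus
# a NUL-terminated name per record.
binary = b"".join(
    struct.pack(">II", i, s) + n.encode() + b"\0" for i, s, n in records
)

gz_xml = gzip.compress(xml.encode())
print(f"raw XML:     {len(xml)} bytes")
print(f"gzipped XML: {len(gz_xml)} bytes")
print(f"binary:      {len(binary)} bytes")
```

Because XML's tag and attribute names are so repetitive, gzip squeezes most of the verbosity back out, which is why the gzipped document ends up in the same ballpark as the binary form.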
Posted Aug 29, 2013 14:24 UTC (Thu) by moltonel (subscriber, #45207)
Concerning CPU usage: while it's hard to compare (the ASN.1 codec gives you many data validations for free that you have to write yourself with the likes of protobuf and msgpack), it is nothing to be ashamed of. And while some people have 10Gbit/s to play with, others are more concerned with the per-MB data-roaming fee that is drilling a hole through their wallet.
Posted Aug 22, 2013 14:43 UTC (Thu) by dw (guest, #12017)
Also, to my knowledge, nobody has yet shipped a Protocol Buffers implementation riddled with security holes due to specification complexity, although that might just be because nobody has thought to look there yet..
