Nftables: Not addressing VJ channels or userspace tcp
Posted Mar 24, 2009 19:42 UTC (Tue) by hisdad (subscriber, #5375)Parent article: Nftables: a new packet filtering engine
The performance improvement from VJ channels and user-space TCP can be large.
The catch is how to do firewalling with them.
Updating to a new codebase that still doesn't address this is of limited use.
--John
Posted Mar 25, 2009 1:17 UTC (Wed)
by ras (subscriber, #33059)
[Link] (14 responses)
Of the three of them, iptables is probably the least efficient way of doing things, since it provides no form of table lookup. Thus you get one linear list of rules which always takes O(n) to process. The others can do table lookups. So it makes sense to replace iptables. But ....
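To make the table-lookup point concrete, here is a sketch using named sets. Note this is modern nft syntax, which postdates the engine discussed here, and the table, set, and addresses are illustrative; the idea is that one set lookup replaces a linear run of per-address rules:

```shell
# Create a table and an input chain (names are illustrative).
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy accept; }'

# A named set of addresses: lookup cost does not grow with set size
# the way a flat list of per-address rules does.
nft add set inet filter blocked '{ type ipv4_addr; }'
nft add element inet filter blocked '{ 198.51.100.1, 198.51.100.2 }'

# A single rule consults the whole set.
nft add rule inet filter input ip saddr @blocked drop
```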
Yet almost everyone uses iptables to get the job done. Why? Documentation. Rusty's user-level documentation for ipchains and later iptables is possibly the best of any kernel documentation I have seen. It sets the standard, and is a fine example others should follow.
The netfilter developers on the other hand have an appalling record for doing documentation. It lies at the other end of the spectrum - possibly the worst documentation for any kernel tool. This tradition started with Alexey, who didn't bother to write a single line of doco on how to use the tc command. Well possibly one line - something along the lines of "see the usage message". His followers, Patrick included, have continued with that fine tradition. As far as I can tell they have never written a single line of user documentation. Thus the man entry for tc on Debian Lenny was written 8 years ago, by someone who trawled the kernel code to figure out how it worked. Recent additions by Jared and Patrick, such as mirred and redirect, don't appear in any "official" documentation. The only information we get is quick HOWTO's posted to mailing lists.
If you want to really use this stuff at all you have to troll with google looking for either these snippets, or more commonly HOWTO's posted by others who have stumbled across the solution. Sadly the HOWTO's are often misleading, as you would expect from something developed by finding something close to what they wanted then fiddling to make it work for their particular application. They don't have much of an idea as to what is happening underneath, so their explanations as to why it works tend toward useless. If you really want to use this stuff to its full potential then you only have one choice: read the kernel source.
This may sound like just someone bitching, but if nftables gets in, history will almost certainly repeat itself. Alexey decided that doing traffic control on ingress didn't make sense. Nonetheless, people wanted to do it, and eventually the IMQ module made it possible. IMQ had fairly good documentation. But the netfilter guys didn't like the way IMQ did things, so it never made it into the kernel. However the persistent popularity of IMQ eventually pushed them into providing a way to do incoming traffic control within the existing framework. Only, they never did bother documenting it, so figuring out how to use it takes a fair amount of effort. Needless to say IMQ lives on.
And Patrick introduced HFSC, a replacement for CBQ and HTB. Technically, HFSC is better than either - it is a nice qdisc. But naturally documentation is sparse, scattered across the internet, and mostly not written by Patrick.
In a nutshell, I'd take iptables with documentation over nftables without doco any day, regardless of how much better nftables is technically.
At a higher level, Linux is in dire need of a change to Dave Miller's patch acceptance policy. It should go something like this: if you make a kernel change that shows up at user level (eg by changes to the tc command), then the patch will only be accepted if there is a patch for the man page, and preferably a patch to an "official" tutorial somewhere giving examples of typical usage.
Posted Mar 25, 2009 2:20 UTC (Wed)
by dskoll (subscriber, #1630)
[Link] (9 responses)
Posted Mar 25, 2009 20:22 UTC (Wed)
by nix (subscriber, #2304)
[Link] (8 responses)
I tried to learn it, gave up when I realised how undocumented most of it was, and ended up delegating the job to a Cisco router, which probably does a worse job of it than the kernel could but at least is documented.
Posted Mar 27, 2009 4:46 UTC (Fri)
by rusty (guest, #26)
[Link] (4 responses)
Alexey did write documentation - it's called ip-cref.tex and it's in the source distribution of iproute2.
That said, I'd be happy to write a HOWTO for nftables, once it's ready for non-devs. Only takes about a week to write a decent HOWTO.
Posted Mar 27, 2009 6:26 UTC (Fri)
by ras (subscriber, #33059)
[Link]
But ip is a relatively simple command compared to tc, and all he provided for it was README.iproute2+tc. I guess without it, I would not have known the purpose of the tc command. For the rest I had to read the source.
> That said, I'd be happy to write a HOWTO for nftables
Bugger future messes these people might create, how about fixing the ones they have left behind? I wrote my own doco when I decided to use tc in anger: http://www.stuart.id.au/russell/files/tc/doc so I am not asking for someone to do something I haven't done myself. And before you ask the obvious question: after dealing with the kernel devs once, I think Conroy must be easier to deal with. Perhaps I exaggerate, but only by a tiny bit.
Posted Mar 27, 2009 18:09 UTC (Fri)
by kaber (guest, #18366)
[Link] (1 responses)
Posted Mar 27, 2009 23:52 UTC (Fri)
by ras (subscriber, #33059)
[Link]
Writing reference documentation (like a man page) is something programmers do well. Writing concise (as in containing few redundancies), complete and unambiguous explanations is what most programmers do naturally - and it normally comes easily to them regardless of the language, C or English. I suspect this is why the man page system is one of the best sources of documentation around. There are notable exceptions of course - like the sudo man page, but even that manages to be clearer than some ISO specs I have read.
HOWTO's and tutorials - well rusty seems to have a remarkable talent for it, but he is the exception rather than the rule. The best HOWTO's seem to come from the users.
I see below you are talking of adding nftables as a classifier. This is even better news. It is a way of evolving the mess we have now into something sane.
Posted Jun 4, 2014 19:49 UTC (Wed)
by pgoetz (subscriber, #4931)
[Link]
Posted Mar 27, 2009 21:51 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (2 responses)
But are you saying you'd be better off if there were no traffic control system? Because I believe that's what dskoll's point is: a patch that adds a feature should be rejected unless it additionally adds documentation to help people use it.
There is plenty of open source code I don't use because of lack of quality documentation, but I never fault the person who made the code available to me.
Posted Mar 28, 2009 0:59 UTC (Sat)
by ras (subscriber, #33059)
[Link] (1 responses)
There is no doubt in this case I _am_ faulting the person who provided the code. Well, to be more accurate, I fault the project. The project should not accept the code without documentation.
Evidently you think this is unreasonable. But my attitude is actually worse, if that is possible - I hold different projects to different standards. I am perfectly happy with a buggy, poorly documented 1000 line utility I found on sourceforge. Yet when it comes to large projects like samba, apache and gcc I expect so much more. I actually expect code and documentation that is at least as good as I get from a commercial vendor. Can you believe that! I actually expect the open source process to produce a better quality product than something I pay money for!
Well unreasonable or not, I write the occasional open source program and I hold myself to those standards. You sort of have to really, because if you produce crap everyone can see it - you ain't some anonymous coder hiding behind an organisation.
Still, I don't produce code as good as I see in the kernel. Patrick, rusty and friends - they hold each other to the highest standards. New code that makes it into the kernel core is usually beautifully written. And the kernel project has a savage review process that ensures it stays that way.
Yet no matter how beautiful the code is, there is no point to it if nobody uses it. And that is the situation netfilter finds itself in. Out of all the potential uses it might have, it is deployed in but a fraction of them. This is because figuring out how to use it is a huge effort. Because there is no doco you have to read the source to get a true understanding of how it works. Thus only C programmers who are prepared to trawl the kernel and iproute2 source really have a clue. Actually it is even worse than that. In the case of the schedulers the code is (necessarily) so complex that it only gets you part way there. You have to read the original academic papers on the algorithms used. (HTB, being the only scheduler written outside of the kernel team, is the one exception.) The situation is so bad it's even defeated all the book writers.
So, returning to the original point, we have arguably the largest open source project of them all, the kernel, churning out code almost no one uses because they don't regard documentation as an important part of the final product. And yeah, I think this means the kernel development process is broken. And yes, I hold the people who drive that development process accountable. They could do so much better.
Posted Apr 16, 2009 6:23 UTC (Thu)
by zmi (guest, #4829)
[Link]
I tried it, but didn't understand it really. There was a package "wondershaper" on SUSE Linux, but still I never managed to do much more than add a port to "higher priority".
Having a clean, logical and documented integration of traffic shaping and firewalling would be a good reason to switch to nftables.
Of the GUI tools I really like "fwbuilder", although it lacks the ability to define subroutines so you could compress several checks into one call and thus make the overall ruleset smaller and cleaner.
Posted Mar 25, 2009 3:11 UTC (Wed)
by dlang (guest, #313)
[Link] (3 responses)
on just about every real-world ruleset I've needed to deal with I was able to split the ruleset up through multiple tables/chains and not only speed up the processing, but make the ruleset smaller and easier to understand.
it's like the complaint about the inability to log and drop in one command. create a separate chain called LOGDROP with two rules, the first unconditionally logs the packet, the second unconditionally drops the packet. then in your rules where you want to log and drop you don't need two conditionals, you just do -j LOGDROP and it does both.
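A sketch of that LOGDROP pattern (the chain name and log prefix here are illustrative):

```shell
# Create the user-defined chain once.
iptables -N LOGDROP
iptables -A LOGDROP -j LOG --log-prefix "dropped: "
iptables -A LOGDROP -j DROP

# Any rule can now log and drop with a single match and a single jump.
iptables -A INPUT -s 203.0.113.0/24 -j LOGDROP
```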
I've had firewall rulesets drop from 2000+ lines to <200 lines by using fairly simple tricks like having one set of rules that just examines the source/destination IP addresses and jumps to another chain that doesn't look at IP addresses, but only considers ports.
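The splitting trick above can be sketched like this (chain names and networks are illustrative): one rule matches the addresses, then jumps to a chain that only looks at ports, so each packet is checked against the address once instead of in every combined rule.

```shell
# A chain that only considers ports, never addresses.
iptables -N FROM_DMZ
iptables -A FROM_DMZ -p tcp --dport 80  -j ACCEPT
iptables -A FROM_DMZ -p tcp --dport 443 -j ACCEPT
iptables -A FROM_DMZ -j DROP

# One address check dispatches to it, replacing many
# address+port rules in a single flat list.
iptables -A FORWARD -s 192.0.2.0/24 -j FROM_DMZ
```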
very few people seem to realize the power that comes from creating your own chains and splitting different types of checking between them.
nftables could be a significant win on the performance side, but to get there it really should start out by replicating the XXtables functionality that exists today so that users don't _need_ to care about nftables.
Posted Mar 25, 2009 4:02 UTC (Wed)
by ras (subscriber, #33059)
[Link]
However, that is just my reading of the political wind. Personally I would not care if one day nftables just replaced iptables. It would not be a huge job to just redo my firewalls - if there was documentation.
As for speed, I think that is a minor issue compared to the code duplication. If you really want a fast firewall you could use a u32 ingress filter now for many purposes. And that is a problem. These layers all implementing similar functions bloat the kernel, slow things down and complicate things immensely. Networking is hard enough without having several different ways of doing things.
Posted Mar 28, 2009 0:04 UTC (Sat)
by Nelson (subscriber, #21712)
[Link] (1 responses)
That's only partially true. Every maintained, real-world firewall that is run by a more programmery type of admin is this way. Just about all the others are complete cluster-Fs.
I've seen more than a few firewalls where people started adding special rules for things, with no documentation and then it changes hands, and after a while it's big, ugly, and nobody knows why it does what it does and they're afraid to change it.
FWIW, a compiler can optimize those big ugly ones down to the minimal set, and it can also optimize them for efficiency. It's just about a perfect compiler problem from a textbook. Since that's the case, while it's nice to let the programmer configure the tables and define why packets should flow through the rules of different ones, it seems like there is a good case to build that compiler and just solve it for everyone.
I've developed a number of products that make use of netfilter in different ways. I like the goals of NFtables. Some of the netfilter plugins are kind of calcified, some are useless, not all are IPv6 compliant; it's worth cleaning it all up and setting a new benchmark.
Posted Apr 6, 2009 8:14 UTC (Mon)
by dlang (guest, #313)
[Link]
the point being that you don't have to throw out the current system to get that, you 'just' need to create the tools to analyse the rulesets and optimize them.
nftables may have some real benefits, but a lot of what's being claimed for it could be done with iptables today, and doing it requires very similar tools be written in either case (the difference being if it compiles down to iptables commands or to nftables commands)
it has been commented that iptables was intended to be the assembly language level with the expectation that higher level languages would be written to compile down to it. In practice it is used directly.
it looks like nftables is intended to be the machine language level, making it unsuitable (and effectively unusable) for admins directly, with the expectation that higher level languages will be written to compile down to it.
since the higher level tools were never created for iptables I am concerned that they won't be for nftables, which is why I was calling for the minimum to be a compiler for the existing iptables functions.
people talk a lot about the need for 'high level' firewall/router controls, but as far as I know, nobody has ever produced a usable set of such 'high level' controls. every attempt that I have seen ends up oversimplifying things to the point that they are usable for a very small set of tasks, and as soon as you need _anything_ outside of that set, you have to throw out the 'simple' tool and go back to the low-level tool.
Posted Mar 26, 2009 22:51 UTC (Thu)
by zlynx (guest, #2285)
[Link]
If you do not trust your user-space for some reason, then the thing to do would be to force applications to communicate through a user-space daemon process. You would lose performance, just like forcing graphics apps to use the X server instead of direct rendering.
A separate piece of hardware for doing firewall is usually a better idea and if you care about performance enough, you would have one anyway.