|| ||"Wael Noureddine" <wael-AT-chelsio.com>|
|| ||"Jonathan Corbet" <corbet-AT-lwn.net>|
|| ||Re: Article on TOE|
|| ||Wed, 31 Aug 2005 10:10:18 -0700|
We found your article on "Linux and TCP Offload Engines" very
interesting. The article discussed the submitted Chelsio TOE patch and
compiled a list of the objections raised by the stack maintainers. We
hope to be given the opportunity to provide some information regarding
the patch, and to clarify some of the points made.
As you have noted, the patch itself is really minimal. All in all, a
dozen or so lines of actual code will be needed for 2.6.14 to provide
generic, vendor-independent support for TOE. In any case, we have
resources committed to handling any future maintenance work. Therefore,
it should have very little impact on the maintenance of the stack.
The maintainers' apprehension regarding TOE in the Linux stack is well
known, and it shows up in the list of objections. Before we answer the
objections listed in last week's article, it is important to stress the
following points:
1) In addition to full offload, a TOE provides all the functions of a
regular NIC, including checksum offload and LSO for non-offloaded
traffic. A TOE can be operated as a NIC without any changes.
2) Today, you can buy a 10 Gbps TOE at virtually no price premium
compared to a 10 Gbps NIC. You're basically getting the additional
features for free.
3) Adding TOE support to the stack does not bypass the software stack;
it merely adds the option of enabling additional functionality when
needed. TOE is a performance enhancement which should be available to
users who need it.
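As an illustration of point 1, on Linux the stateless offloads of an
offload-capable adapter can typically be inspected and toggled with
ethtool; the interface name eth0 below is a placeholder, and this is a
generic sketch, not a description of any particular vendor's driver:

```shell
# List the adapter's current offload settings (checksumming, TSO, etc.)
ethtool -k eth0

# Disable TCP segmentation offload (LSO/TSO) for non-offloaded traffic
ethtool -K eth0 tso off

# Disable receive and transmit checksum offload, falling back to the
# software stack for checksum computation
ethtool -K eth0 rx off tx off
```

With all such features off, the adapter behaves as a plain NIC, which is
the fallback mode the letter refers to.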
Now, to the objections:
* The maintenance issue has been mentioned above, and looking at the
patch itself should address any concerns in that area. Questions,
comments, or suggestions regarding it are more than welcome and
appreciated; if there is anything that can be done to further improve
this aspect, please let us know.
* Netfilter support is not really bypassed, and connection acceptance
can still be subjected to the regular checks. Also, keep in mind that a
TOE is there to speed up the particular connections which require it;
the rest of the traffic is still fully processed by the software stack.
* Traffic rate control at 10 Gbps speeds is really not practical in
software today. Without arguing over if and when that will become
possible, the Chelsio TOE provides rate control in hardware today, so no
functionality is lost in that regard. Clearly, this will depend on each
vendor's implementation, but this is all about choice.
* The security and patching issue depends on each vendor's approach to
handling flaws. However, given that a TOE can be disabled at any time,
one can fall back entirely on the software stack while awaiting a fix.
There is no impact compared to regular NICs, besides the temporary loss
of the offload benefit.
* TOE performance has been questioned in the past, and perhaps rightly
so. However, this appears to have changed recently. The Chelsio TOE
holds the Internet2 Land Speed Record (7.5 Gbps over 33,000 km), where
it maxed out the PCI-X bus over the required distance using 1,500-byte
frames. This is just one indication; other independent tests, by the Los
Alamos lab and OSU for example, showed that TOE provides about twice the
throughput at half the CPU utilization of a regular NIC for data
transfers, and a 60% to 1000% improvement in Web server capacity. These
improvements were obtained without fully utilizing the TOE's
capabilities, such as zero copy.
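As a back-of-the-envelope check on what such a record run demands of
TCP, the bandwidth-delay product implied by 7.5 Gbps over 33,000 km can
be computed; the fiber propagation speed of roughly 2×10^8 m/s is an
assumption for illustration:

```python
# Bandwidth-delay product for a 7.5 Gbps transfer over a 33,000 km path.
# Assumes light propagates in fiber at about 2e8 m/s (roughly 2/3 of c).
FIBER_SPEED_M_S = 2.0e8

distance_m = 33_000 * 1000      # 33,000 km path length in meters
throughput_bps = 7.5e9          # 7.5 Gbps sustained throughput

rtt_s = 2 * distance_m / FIBER_SPEED_M_S       # round-trip time in seconds
bdp_bytes = throughput_bps * rtt_s / 8         # bits in flight, as bytes

print(f"RTT ~ {rtt_s * 1000:.0f} ms, window ~ {bdp_bytes / 1e6:.0f} MB")
```

The result, on the order of a 300 MB TCP window kept in flight, shows
why such long-distance runs stress window scaling and buffering far
beyond what ordinary LAN traffic does.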
* It is clear that no one would want to design a 100Mbps TOE today, but
it is also a question whether anyone still has an original 100Mbps
adapter from 1993 in their current system. Technology advances will
obsolete everything we're building now, and in that regard the TOE is no
different from a regular NIC. Assuming you still have the 100Mbps TOE
you bought 10 years ago, you could just disable the offload and use it
as a NIC.
* It is important to stress that the TOE patent issue is being taken out
of context when it comes to full offload. The patents in question are
for the partial offload approach which has been taken by Microsoft.
Full offload is not, and cannot be, patented, as legal studies have
determined.
* Stateless offload is an option which may work out for some
applications and users. However, the performance gap is still
considerable. Adding CPUs, or waiting for CPUs to get faster, are
suggestions which ignore the cost side of the equation. It is best to
leave such considerations to the users, who have to optimize their own
cost/performance trade-offs.
* TOE opponents rely on the observation that CPU speeds tend to catch
up with network speeds, obviating the need for TOE. However, the very
fact that TOE comes up recurrently, and ever more pressingly, indicates
that this gap is periodic and that it gets more serious every time.
Today, the performance gap is being filled by exotic interconnects such
as InfiniBand, while TCP/IP over Ethernet lags in performance.
Dismissing this market as niche and insignificant would be ignoring
market realities. As recent studies have shown, a TOE makes TCP/IP over
Ethernet a competitive alternative again.
It is important to mention that there are many unacknowledged benefits
to performing TCP processing in hardware, including microsecond
granularity retransmission and rate control, and receive data
re-assembly offload. These capabilities turn out to be very useful when
operating the latest low latency 10 Gbps Ethernet switches-on-a-chip,
which tend to have limited buffering resources and may consequently drop
packets. In addition, a TOE can handle essential TCP features, such as
timestamps, which are usually turned off due to their processing cost at
10 Gbps rates. A TOE will also most likely be required
to enable other technologies such as iSCSI, which is expected to gain
widespread use as a storage networking protocol.
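For reference, the timestamp option mentioned above is a host-wide
tunable on Linux; the following is a generic sketch of how an
administrator would inspect and disable it (the write requires root):

```shell
# Check whether TCP timestamps (RFC 1323) are currently enabled (1 = on)
sysctl net.ipv4.tcp_timestamps

# Disable them, as some high-speed deployments do to cut per-packet cost
sysctl -w net.ipv4.tcp_timestamps=0
```

Offloading this processing to hardware would let such deployments keep
timestamps enabled rather than trading them away for throughput.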
TOE's performance has been independently demonstrated by end users, and
the technology can be integrated into Linux with relatively little
effort compared to other options being considered. There are no real
technical reasons for denying TCP offload its place as a useful option,
which users who require high performance should have today. It is our
hope that the remaining objections can be addressed to the satisfaction
of everyone, and to the benefit of the users of TCP/IP over Ethernet.
Comments (4 posted)
From: Gervase Markham <gerv-AT-mozilla.org>
Subject: Free Software And Trademarks
Date: Wed, 31 Aug 2005 22:53:23 +0100
Unfortunately, I went on holiday soon after John Morris' letter on
Trademarks and F/OSS was published in August 18th's LWN, and did not
have a chance to reply immediately. But, as the Mozilla Foundation's
management of the Firefox trademark has been the catalyst for many
recent discussions on the topic, and I am their first point of contact
for trademark issues, I feel I should respond.
Before I begin, I should correct the thesis of the opening paragraph,
which seems to underlie much of what follows. The Foundation did
not establish a wholly-owned subsidiary Corporation to "make themselves
compatible with the rest of the corporate world", no matter what ZDNet
may think. We did it chiefly because there are rules in the USA about
the sources of income for a tax-exempt entity which we were not able to
meet with our current mixture of income sources.
In my view, the general idea of trademarks - that you can label a
product with a name or icon which represents a level of quality in the
mind of the public - is entirely compatible with the principles of Free
Software. Just as some free software licences require appropriate credit
to be given to authors, so it should also be possible to require that
distinguishing marks be removed (assuming that functionality would not
be affected thereby) if the author thinks that a derivative product does
not reflect well on their original efforts.
However, as has been pointed out many times, the way trademark law is
structured makes it a challenge to maintain one's trademark without
inconveniencing, even if just a little, those who wish to use it. This
is unfortunate, but I don't think it's insurmountable if one is careful.
Firefox has an almost uniquely strong (among free software projects)
need for a solid trademark, due to a combination of factors:
- Firefox is by far the most-used piece of consumer free software;
- Firefox is extremely popular on Windows, and among people I describe
as those for whom "computing is not their main focus in life";
- Firefox's brand is very well known and respected;
- Firefox is used for financial transactions.
These points together mean that there is a great deal of unscrupulous
interest in our product and brand. Without a strong trademark a
nefarious person could, for example, modify Firefox to send them any
login details for a long list of banks, put up a build and buy Google
Ads saying "Official Firefox Download Site!". As the code is Free, the
only way to prevent such a scenario is to use trademark law - we can't
stop them doing a trojaned build, but we can stop them putting our good
name on it.
The interaction of trademarks with free software in such a high profile
way is a new thing. We are still trying to work out how to manage the
Firefox trademark in a way which protects our nearly 100,000,000 users
and potential users from scenarios such as this one, but yet does not
unduly inconvenience people on the same side as us - our developers,
quality Linux distributions, OEMs, etc. I welcome any constructive input
as to how we can better achieve this without losing control of the mark.
Comments (7 posted)
From: Alex Fernandez <alejandrofer-AT-gmail.com>
Subject: The dismal state of proprietary corporate security
Date: Tue, 30 Aug 2005 21:12:00 +0200
As free software speeds along, more and more happy users live in a
world without proprietary offerings. Sheltered from serious security
problems, using libre-and-gratis software which also happens to be
more reliable, and in charge of their own machines; they tend to
misunderstand what is happening on the other side of the fence. This
letter is an attempt to let them peek within, without feeling the pain
themselves.
First, a disclaimer. I live in Spain, not the world center of
information technologies but probably closer to the third world of
computing. I have however worked for large multinationals, and on
occasion with some European partners and research facilities. My
impressions are based on first-hand experience, and may therefore be
biased by my own career. Your mileage may (and hopefully will) vary.
Now, what is happening on proprietary corporate networks? 'Despair'
would be an understatement: given that the dominant operating system
family is so inherently insecure, corporate IT departments have mostly
quit trying to provide such extravagant facilities as private e-mail.
In the trade-off between privacy and security, privacy has all but
lost -- taking security down with it, of course.
I have experienced workplaces where private accounts do not exist;
instead, people log on to whatever computer they are assigned to,
using the machine id or e-mail handle as username and trivial
passwords. It is against policy to change these passwords. User
documents do not of course travel with the user, but have to be
carried painfully since folder sharing is not allowed and USB ports
are disabled. Administrative rights for the computer are never granted
by the IT department (the old "systems and networks"); their staff has
acknowledged that it is too labor-intensive to administer the network
in any sensible way, so they just replace hardware and format hard
drives. By the way, the IT staff act as a natural barrier against any
sensible request, such as installing software required for work. It is not
easy to work this way, having no control of your own computer; luckily
hacks are available that grant full administrative rights to any
machine, at which point you are on your own.
Mind you, this is in companies specialized in software development.
Where any source code control exists at all, seldom is it anything
beyond CVS. Usernames are again trivial as are passwords, so the
repository is usually wide open to anyone who happens to be on the
right side of the firewall. The only solution ever considered is to
switch to proprietary source code control systems. E-mail is similarly
unprotected; that is when you don't find random mail folders available
on network disks. By the way, the certificates used for remote access
to the intranet are usually expired or not accepted by common browsers,
making remote access brittle.
As a last straw, network topologies are difficult to understand, with
egress filtering (a pet peeve of mine) the only reliable constant.
Those responsible for "peripheral defenses" have not yet understood
that limiting the destination port of outgoing connections usually
serves no good purpose; it is a giant leap they will never be ready to
take.
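The kind of destination-port egress filtering criticized here can be
illustrated with a hypothetical iptables policy; the port choices are
illustrative only, not taken from any real deployment:

```shell
# Hypothetical egress-filtering rules of the kind the letter objects to:
# allow outbound web and DNS traffic, drop everything else leaving hosts.
iptables -A OUTPUT -p tcp --dport 80  -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -p udp --dport 53  -j ACCEPT
iptables -A OUTPUT -j DROP
```

Since malware can simply tunnel over the permitted ports (80, 443), such
rules mostly inconvenience legitimate users, which is the letter's point.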
So, the corporate response to the invasion of malware and security
holes has been to give up. No security for anyone means that security
cannot be breached; any problem will be handled as a matter of policy.
Next time you see Microsoft's (or, for that matter, anyone else's)
claims of a secure operating system, try to view them as tranquilizers,
administered intravenously to IT managers who get fits every time they
see a new intrusion; when they wake up, they will start looking for a
new software product to protect them, or for new features to cut down
on.
Thanks for your attention,
Comments (none posted)
Page editor: Jonathan Corbet