Way back in the Good Old Days, when your editor was a VMS system
administrator, the ARPAnet was small, slow, and extremely limited in
access. While some of the cooler folks were building USENET around uucp-based
links, Unix systems were still not readily available to most of us.
But we were beginning to get personal computers, bulletin-board systems
were being set up, and the fortunate among us could afford 1200 baud
modems. The tool of choice for using those modems in those days was often
a little program called "Kermit," which was freely downloadable even then.
Your editor doesn't remember when he last used a modem, but he
was still interested to see that this 30-year-old project is about to go
through a final transition; it will, among other things, be free software.
For many systems in those days - especially systems without a government
budget behind them - the "network interface" was the serial port. That's
how we connected to The Computer at work; that's also how we got onto the
early services which were available in those days. The first thing any
RS232 networking user needed was a way to move keystrokes between one
interface (where they sat) and the interface connected to the modem. Unix
systems often came with a tool called "cu" for that purpose, but,
even on those systems, users tended to gravitate toward a newish tool
called Kermit instead.
Kermit has its roots at Columbia University, where it was developed as a
way to communicate between computers on a heterogeneous network. It would
fit easily onto a floppy and was (unlike cu) easy to set up and
use; one just needed to figure out how many data bits, how many stop bits,
and what kind of
parity to use (RS232 is a fun "standard"), type an appropriate ATD command
at the modem, and go. Kermit could even handle things like translating
between different character sets; talking to that EBCDIC mainframe was not
a problem.
Over a short period of time, Kermit developed some reasonable file transfer
capabilities. Its protocol was efficient enough, but it was also
designed to deal with serial ports which might interpret control characters
in unexpected ways, turn eight-bit data into seven-bit data, and more. The
robustness of the protocol meant it "just worked" between almost any two
arbitrary types of machines. So it's not surprising that, in those
pre-Internet days, Kermit became a popular way of moving files around.
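That robustness came from a layer of byte quoting: Kermit's classic scheme sends control characters as a printable two-character sequence and hides the eighth bit behind another prefix, so every byte on the wire is plain seven-bit ASCII. The following is a simplified sketch in Python; the prefix characters follow Kermit's defaults, but the real protocol's prefix negotiation and packet framing are omitted:

```python
# Sketch of Kermit-style byte quoting (simplified). A control character
# is sent as '#' followed by the character XOR 0x40; a byte with the
# high bit set gets an '&' eighth-bit prefix first. The prefixes
# themselves are quoted so decoding is unambiguous.

CTL_PREFIX = ord('#')
BIT8_PREFIX = ord('&')

def encode(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        if b & 0x80:                      # strip the eighth bit, mark it
            out.append(BIT8_PREFIX)
            b &= 0x7F
        if b < 0x20 or b == 0x7F or b in (CTL_PREFIX, BIT8_PREFIX):
            out.append(CTL_PREFIX)        # quote controls and the prefixes
            if b < 0x20 or b == 0x7F:
                b ^= 0x40                 # make the control char printable
        out.append(b)
    return bytes(out)

def decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]; i += 1
        high = 0
        if b == BIT8_PREFIX:              # restore the eighth bit later
            high = 0x80
            b = data[i]; i += 1
        if b == CTL_PREFIX:
            b = data[i]; i += 1
            if b not in (CTL_PREFIX, BIT8_PREFIX):
                b ^= 0x40                 # undo the control transformation
        out.append(b | high)
    return bytes(out)
```

The expansion costs some bandwidth, but the payoff is that the encoded stream survives seven-bit links and control-character-eating serial drivers alike.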
Columbia distributed a number of versions of Kermit in source form; it
could be had from bulletin board sites, DECUS tapes, and more. The program
was never released as free software, though. When Kermit was first coming
into use, free software licenses as such didn't exist yet. The university
considered releasing the code into the public domain but decided that it
wasn't a good idea:
Because we wanted Kermit software to be shared openly, we did not
place our Kermit programs into the public domain. While this might
seem contradictory, we felt that by copyrighting the programs, we
could prevent them from being taken by entrepreneurs and sold as
commercial products, which seemed necessary since we had heard
stories of other universities that had been enjoined from using
programs which they themselves had written by firms that had taken
their public domain work and copyrighted it for themselves.
The license for Kermit varied over time, but was always of the "you may
make noncommercial use of this program" variety. The final version of the
Kermit license allowed bundling the program with free operating
systems, but did not allow modifications to the source without permission.
As a result, despite the fact that Kermit's license allowed distribution
with systems like Linux, most distributors eventually shied away from it
because it was not truly free software.
Anybody other than free operating system projects wanting to distribute
Kermit commercially had to buy per-seat licenses (at a cost of $3-10 each).
Kermit has proved remarkably durable over the years. During Kermit's
lifetime, the Internet has taken over, even our phones can run ssh, and RS232-based
communications have mostly fallen by the wayside. Kernel developers are
still known to use serial consoles for some types of debugging, but your editor
would predict that a substantial portion of the rest of LWN's readership
has never had to wonder where that null modem cable went to. Kermit's user
base must certainly be shrinking, but it is
still being maintained and sold to those who need it.
Or, at least, it was; in March the university announced that it
was unable to continue supporting the Kermit project. One assumes that
commercial license sales had finally dropped to the point where they
weren't worth the administrative overhead of dealing with them. As of
July 31, 2011, the university will put no further development effort
into Kermit and will no longer provide support and maintenance services. A
three-decade project is coming to a close.
Columbia University is doing one more thing with Kermit before the end,
though - it is releasing the software under the BSD license. C-Kermit 9.0 will carry
that license; the first beta release was made on June 15. The 9.0
release will also support the FORCE-3 packet protocol ("for use under
severe conditions"), improved scripting, various fixes, and more.
So the 9.0 release, presumably scheduled for sometime close to the
July 31 deadline, will not just have new features; it will be free
software for the first time.
As a result of this change,
Kermit may soon show up in a distribution repository near you; most Linux
users are, at this point, unlikely to care much. But, for many of us,
there will yet come a time when the only way to talk to a system of
interest is through a serial connection. Kermit is far from the only
option we have at this point, needless to say, but it's a good one.
Kermit's hop into the free software community is more than welcome.
Comments (20 posted)
Secure communication requires encryption keys, but key management and
distribution are difficult problems to solve. They are also important problems to
solve; solutions that can be easily used by those
who are not technically inclined are especially needed.
Turning encrypted communications into the default, rather than an option
used mostly by the technically savvy, will go a long way toward protecting
users from criminals or repressive regimes. Even if these secure
communications methods are only used by a subset of internet users, that
subset truly needs these tools.
For most users, the only encrypted communication they use is SSL/TLS for
accessing web pages served over HTTPS. The keys used for HTTPS are
normally stored on the server side in the form of certificates that are
presented to the browser when the connection is made. In order to
establish the identity of the remote server, to avoid phishing-style
attacks, those certificates are signed by certificate authorities (CAs) so that
the browser can verify the key that it receives.
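In code terms, the browser-side checks boil down to two requirements: the certificate must chain up to a trusted CA, and it must name the host being contacted. The defaults of Python's standard-library ssl module illustrate both (the actual connection is sketched only in comments, since it needs a live server):

```python
# A client-side TLS verification sketch using Python's stdlib ssl
# module. create_default_context() loads the system's trusted CA
# certificates and enables both chain and hostname verification.
import ssl

ctx = ssl.create_default_context()

# The server's certificate must be signed by a trusted CA...
assert ctx.verify_mode == ssl.CERT_REQUIRED
# ...and must match the hostname we asked for.
assert ctx.check_hostname

# A real connection would wrap a socket like so (hostname illustrative):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         cert = tls.getpeercert()   # the CA-vouched identity
```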
This scheme suffers from a number of problems, but it works well enough in
practice so that, by and large, users can safely enter credit card and
other personal information into the sites. It relies on centralized
authorities in the form of the CAs, however, which can be a barrier to web site
owners or users who might otherwise be inclined to serve content over
HTTPS. In addition, trusting CAs has its own set of dangers. But, contrary to some of the
big web players' beliefs, the web is not the only form of communication.
For the most part, things like instant messages, email, peer-to-peer data
transfers, and voice over IP
(VoIP) are all transmitted over the internet in the clear. Solutions exist
to encrypt those communications, but they are not widely used, at least
partly because of the key problem. Each protocol generally has its own
idea of key
formats and storage, as well as how to exchange those keys.
For efforts like the Freedom
Box, it will be extremely important to find a solution to this key
problem, so that users can more-or-less effortlessly communicate in
private. If the various applications that use keys could agree on common key
types and formats, it would reduce the complexity. That may be difficult
for a number of reasons, some technical, others social, so perhaps finding
a way to collect all of a user's keys into a single "key bundle" is a more
achievable goal.
Generally, encryption these days uses public key cryptography, which means
that there are actually two keys being used. One is the public key that
can be widely disseminated (thus its name), while the other is a private
key that must be kept secure. A key bundle would thus have
two (or more) parts: a public bundle that's shown to the rest of the world for
communication and authentication, and a private bundle that's kept secret. This
private bundle would require the
strongest protections against loss or theft.
Generating the keys, and agreeing upon some kind of bundle format, are
fairly straightforward technical problems. Those can be solved rather
easily. But the real problems will lie in educating users about keys, the
difference between public and private keys, and how to properly protect
their private keys. User interfaces that hide most of that complexity will
be very important, but there will be things that users need to
understand, chief among them the importance of keeping the private bundle
secure.
Storing the keys
It would be a simpler problem to solve if the private bundle could be kept,
securely, in one location, but that's not really a workable scenario.
Users will want (or need) to communicate from a wide variety of devices,
desktops, laptops, tablets, mobile phones, etc., and will want to maintain
their identities (i.e. keys) when using any or all of them. Storing the
bundles at some "trusted" location on the internet ("in the cloud" in the
parlance of our times) might be possible but it would also centralize their
location. If some entity can cut off access to your key bundle, it
leaves you without a way to communicate securely. Spreading the bundles
out to various locations, and caching them locally, would reduce those
problems somewhat. Obviously, storing the bundles anywhere (including
locally) will require that the bundles themselves be encrypted—using a strong
password—which leads to another set of potential problems, of course.
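As a sketch of what passphrase protection of a private bundle might involve, the following uses only Python's standard library: PBKDF2 derives a key from the passphrase, and an HMAC detects a wrong passphrase or a tampered bundle. The bundle format and function names here are invented for illustration; a real tool would also encrypt the private material with an authenticated cipher such as AES-GCM rather than storing it directly:

```python
# Hypothetical "key bundle" protection sketch. The private material is
# bound to a passphrase-derived key via HMAC; actual encryption of the
# material is left out (a real implementation would use AES-GCM or
# similar from a proper crypto library).
import hashlib
import hmac
import os

def seal_bundle(private_part: bytes, passphrase: str) -> dict:
    salt = os.urandom(16)                 # fresh salt per bundle
    iterations = 200_000                  # slow down guessing attacks
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                              salt, iterations)
    tag = hmac.new(key, private_part, "sha256").hexdigest()
    return {
        "salt": salt.hex(),
        "iterations": iterations,
        "private": private_part.hex(),    # would be ciphertext in reality
        "hmac": tag,
    }

def open_bundle(bundle: dict, passphrase: str) -> bytes:
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                              bytes.fromhex(bundle["salt"]),
                              bundle["iterations"])
    data = bytes.fromhex(bundle["private"])
    expected = hmac.new(key, data, "sha256").hexdigest()
    if not hmac.compare_digest(expected, bundle["hmac"]):
        raise ValueError("wrong passphrase or corrupted bundle")
    return data
```

The salt and iteration count travel with the bundle, so it can be opened on any device; everything then hinges on the strength of the passphrase, which is exactly the "set of potential problems" mentioned above.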
Since these private key bundles will be so important, there will need to be
some way to back them up, which would be an advantage of the cloud
scenario. Losing one's well-established identity could be a very serious
problem. A worse problem would be if someone of malicious intent were to
gain control over the keys. For either of those problems, some kind of
revocation mechanism for the corresponding public keys would need to be
available.
There are already some solutions to these problems, but, like much of the
rest of the key management landscape, they tend to be key- and
application-specific. For example, GNU Privacy Guard (GPG) public keys are
often registered at a key
server so that encrypted email can be decrypted and verified. Those
keys can also be revoked by registering a revocation certificate with the
key server. But, once again, key servers centralize things to some
extent. Likewise, SSL/TLS certificates can be revoked by way of a
revocation list that is issued by a CA.
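The common thread in those mechanisms is that revocation amounts to a lookup against a list of key fingerprints, however that list is distributed. A minimal local model (names invented, fingerprint scheme simplified) might look like:

```python
# Toy revocation model: keys are identified by a hash of their public
# material, and a revocation list is just a set of fingerprints. Key
# servers and CA revocation lists distribute the equivalent data at
# scale; this local version is purely illustrative.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    # Real systems hash a canonical encoding of the key; raw bytes
    # stand in for that here.
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

revoked: set[str] = set()

def revoke(public_key_bytes: bytes) -> None:
    revoked.add(fingerprint(public_key_bytes))

def is_usable(public_key_bytes: bytes) -> bool:
    return fingerprint(public_key_bytes) not in revoked
```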
Most of the above is concerned with the difficulties in maintaining the private
bundle, but there are problems to be solved on the public key side as
well. Gathering a database of the public keys of friends, family,
colleagues, and even enemies will be important in order to communicate
securely with them. But, depending on how that public key is obtained, it
may not necessarily correspond to the individual in question.
It is not difficult to generate a key that purports to be associated with
any arbitrary person you choose. The SSL/TLS certificate signing process
is set up to explicitly deal with that problem by having the CAs vouch that
a given certificate corresponds to a particular site. That kind of
centralized authority is not a desirable trait for user-centric systems,
however, so something like GPG's (and others') "web of trust" is a
better model. Essentially, the web of trust allows the user to determine
how much trust to place in a particular key, but it requires a fair amount
of user knowledge and diligence that may make it too complex for many users.
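The web-of-trust calculation itself is easy to model. In this toy version (thresholds borrowed loosely from GPG's defaults, names invented), a key is accepted as valid if it carries a signature from one fully trusted introducer or from two marginally trusted ones:

```python
# Toy GPG-style trust computation. Each signer of a key has a trust
# level assigned by the user; a key is valid with one "full" signature
# or two "marginal" ones. GPG's real model adds depth limits and
# configurable thresholds.
FULL, MARGINAL = "full", "marginal"

def key_is_valid(signers, trust):
    """signers: iterable of key ids that signed the key in question;
    trust: dict mapping key id -> FULL or MARGINAL."""
    full = sum(1 for s in signers if trust.get(s) == FULL)
    marginal = sum(1 for s in signers if trust.get(s) == MARGINAL)
    return full >= 1 or marginal >= 2
```

The mechanics are trivial; the hard part, as described above, is getting users to assign those trust levels thoughtfully in the first place.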
Free software can make a difference
As can be seen, there are a large number of hurdles that need to be cleared
in order to make secure communication both ubiquitous and (relatively)
simple to use. Key management has always been the Achilles' heel of public
key cryptography, and obvious solutions are not ready to hand. But they
are important problems to solve, even though they may only be used by a
minority of internet users. For many, the additional hassle required to
communicate securely may outweigh any concern that their communications
could be intercepted. For others, who are working to undermine repressive
regimes for example, making it all Just Work
will be very important.
This is clearly an area where free software solutions make sense.
Proprietary software companies may be able to solve some of these problems,
but the closed-source nature of their code will make it very worrisome to use for
anyone with a life-and-death need for it. There have just been far too
many indications that governments can apply pressure to have "backdoors"
inserted into communications channels. With open standards and freely
available code, though, activists and others can have a reasonable assurance
that no such backdoors are present.
These problems won't be solved overnight, nor will they all be solved at
once. Public key cryptography has been with us for a long time without
anyone successfully addressing the key management problem.
Recent events in the Middle East and elsewhere have shown that internet
communication can play a key role in thwarting repressive regimes. Making those
communications more secure will further those aims.
Comments (11 posted)
Eight months after Stormy Peters left
the post to join Mozilla, the GNOME Foundation has chosen Karen Sandler
as its new executive director. Sandler is leaving a position with the
Software Freedom Law Center (SFLC) as general counsel and starting with the
Foundation June 21, but will be working part-time for each organization
during the transition.
Prior to joining the SFLC, Sandler was an associate with Gibson, Dunn & Crutcher LLP in New York and Clifford Chance in New York and London. Taking up a position as executive director seems a slight departure from practicing law, even if focused on free software — so what made Sandler choose to pursue working with the GNOME Foundation? Sandler says the appeal is GNOME itself: "It's an incredibly important and impressive software that's been entering a critical time. I can't wait to be a part of that and assist the GNOME community to develop and grow."
She does acknowledge that it's a departure from focusing on legal issues, but says that she's looking forward to the change. "As a lawyer you're generally working to avoid pitfalls and anticipate the worst case scenarios and I'm excited to help much more proactively than that."
So what will Sandler actually be doing as executive director?
When Peters held the role, she says that she started by "asking
everyone in the GNOME community what they thought I should do."
During her time, Peters says that she ran the Foundation's day-to-day operations, served as a contact point for questions, helped build the marketing team and travel committee, and "helped make events happen" though she says events mostly happened thanks to the GNOME community.
Sandler, taking a cue from Peters, says that she'll ask what people think
she'll be working on, but hopes to spend at least some time on advocacy:
I think as with any ED, one of my main roles will be as a point person for
the organization, both as a spokesperson and as someone who is dedicated
to listening on its behalf. A lot of organization, facilitation and
coordination will undoubtedly come from those roles. I think there's a
real opportunity for advocacy and I hope to have the time to really focus
on that.
She notes that this isn't dissimilar to what she's been doing for the SFLC:
My work as General Counsel for SFLC is substantively similar to what I hope
I'll be doing for GNOME - a lot of advocacy, organizing, coordination and
even fundraising. And of course it's moving from one service role to
another. Instead of serving my clients in their legal needs, I'll be
serving the GNOME community.
Sandler also points out that there are likely to be a lot of housekeeping tasks that have gathered dust since Peters left for Mozilla and her first days will be "spent getting up to speed, getting to know people, taking care of the administrative backlog and ramping up for the Desktop Summit (and of course following up on items raised during the Summit). I'll also look to renew relationships and generally try to immerse myself in all things GNOME."
Peters left in November 2010, and the search committee for the executive
director was announced
at the end of December. The committee included Peters, IBM's Robert Sutor,
GNOME Foundation director Germán Póo-Caamaño, Kim
Weins of OpenLogic, Jonathan Blandford of Red Hat, Luis Villa of Mozilla,
former GNOME board member Dave Neary, and Bradley Kuhn of the Software
Freedom Conservancy (SFC). However, Kuhn says that he stepped down from the
committee once Sandler emerged as a serious candidate, as they had worked
closely together at the SFLC and SFC; Kuhn also considers her a personal
friend.
One thing Sandler won't be doing is driving the technical direction of GNOME. Sandler says that, like Peters, she has "a limited role" in the technical direction of GNOME, and says "I'd support whatever the community and release team decided."
Another large part of the role is fundraising. The executive director is the point person for the advisory board and works to encourage members to sign up and donate not only the advisory board fees, but also to contribute to specific events like GNOME Hackfests.
Financially, the GNOME Foundation is doing well enough. In April 2009
there was a concern
that the foundation would be unable to continue the executive director
position due to lack of funds. In that message, John Palmieri said that Peters had managed to recruit a number of new corporate sponsors, but "we are still projecting that without a significant influx of steady contributions we will be unable to keep an Executive Director on the payroll without cutting into the activities budget." The foundation started leaning heavily on its Friends of GNOME program for individual donations, and doubled advisory board fees for corporate members from $10,000 per year to $20,000.
Despite the increased fees and additional income from Friends of GNOME, the budget for 2011 shows a decline in income of about $52,000, while the proposed expenditures are higher by about $64,000. Though it's worth noting the expenditures will likely be lower than planned, as it appears the budget was prepared with the expectation an executive director would be hired by March.
The GNOME Foundation budget for 2011 is $518,000 — with $145,510
earmarked for employees, and the executive director position is the largest
part of that budget. (The GNOME Foundation also has a part-time system
administrator.) So, is the executive director position the best use of
GNOME Foundation resources? Sandler says yes:
While responsible for fundraising, the position covers itself but there's a
lot more that the position contributes. There's loads of house keeping in
running a nonprofit, holding events, facilitating work and coordinating
communication amongst all participants in the community. I think it's also
really important to have someone speaking and advocating for the project.
Sandler is joining the GNOME Foundation at an interesting time. The
GNOME community is looking a bit fragmented at the moment. The GNOME 3.0
release has gotten mixed reviews, some users are feeling dissatisfied with the lack of
ability to modify GNOME to suit their needs, and the relationship
between GNOME and Canonical is strained at best. The GNOME Shell has not been widely embraced — Fedora 15 has shipped GNOME Shell, but Canonical has gone its own way with Unity, and other GNOME-centric distributions like Linux Mint chose to sit out the GNOME 3.0 release and ship GNOME 2.32.
In short, some would say that GNOME as a project has seen better
days. Sandler is not convinced of that:
GNOME 3 has been controversial, but I think that's an exaggeration [that the project has seen better days]. I (and a whole lot of others based on the press I've been reading) think that the rewrite is really great. Some of the choices made in the rewrite were strong decisions and make for a different desktop experience but all were made with what is best for the user in mind. Some people will object to change no matter what it is - you can't make everyone happy. But you can never move forward if you are not prepared to take a few risks, even if it means some of your users will stay with the old version for a while. Honestly, I think GNOME 3 will win users over as it gets worked into the main distros, but it will take time for that to happen completely.
I've also read that some of the changes coming for GNOME 3 are geared towards developers, which hopefully will make it easier to write great applications for GNOME 3, not to mention just the attractiveness of the platform overall. As GNOME 3 applications improve so will adoption.
Whether GNOME 3 has time to evolve is another question. The Linux
desktop, on traditional PCs and laptops, simply is not gaining much
traction beyond its current audience. Linux is being used in a number
of mobile systems that target end users, but GNOME in its entirety is not
yet there. Sandler says that she believes GNOME 3 will make GNOME more
relevant on mobile devices:
It looks great and is designed to be easy to use, not to mention the fact
that it already has some touchscreen functionality. I think [...] that
there's the potential for a lot of change in the way users think about the
desktop but I believe the GNOME 3 rewrite is well positioned to roll with
those changes. It's probably worth noting that some mobile vendors are
using GNOME technologies (if not GNOME 3 yet).
Sandler is also unwilling to give up on the PC desktop just now. "It's also probably worth noting that desktop computing is still how most people use their computers (I think people forget that sometimes)!"
As for GNOME 3's slow adoption and potential fragmentation, Sandler says
that the transition is "still underway and it will take time to see
how things really shake out." She says that "strong
decisions" will always alienate some people, but she hopes that
GNOME can restore relationships. "I think that ultimately good
software and good community will fuel increased participation rather than
the fragmentation that seems [to] be arising now."
The upcoming Desktop Summit may be a good opportunity to mend some fences, says Sandler. "The GNOME board tells me that there's already a full list of sponsors and attendees for the Desktop Summit (and that Canonical in particular is planning to attend and sponsor the event). I believe that developers in the different distros are all talking to GNOME and I hope that we'll only see more cooperation going forward."
Ultimately, Sandler says she's optimistic about GNOME: "Coming to GNOME is obviously a vote of confidence from me personally. I love my work at SFLC and am only persuaded to leave because I think it's a great opportunity to be a part of a great organization."
One thing is certain, working with GNOME at this stage is likely to be an interesting job. We wish Sandler the best in her new role, and thank her for taking the time to talk with LWN ahead of the announcement.
Comments (19 posted)
Page editor: Jonathan Corbet