Your editor did much of his early programming on a large, 60-bit computer.
"Large" as in "you could walk around inside it." Its six-bit character
set was challenged by exotic characters - like lower case. But it sure had
a fast card reader. Your editor has started a few articles by saying that
recent "progress" has made things worse, rather than better, but he won't
be saying that this time.
By the early 1980s, 32-bit systems had taken over much of the computing
world. And, with certain exceptions, 32 bits has been the way of things
for a good two decades. Processor speeds have gone up by three orders of
magnitude, as have disk sizes; main memory has grown by even a bit more.
But most systems sold today still use 32-bit words and addresses. The fact
is, 32 bits suffice for almost every quantity we need to manipulate with
computers. The exception, increasingly, is memory. We have hit the point
where we are running out of address space. The need to work with ever more
memory to run our increasingly bloated applications will eventually push
much of the industry over to 64-bit processors.
Your editor decided to be ahead of the curve, for once. So he ordered up a
new motherboard and Athlon64 processor. Before the process was done, he
also ended up buying a new video card, power supply, and disk drive. In
fact, the only original component left in the case (a holdover from when LWN
thought it might be a training company) is the diskette drive. But, the
new system is now up and running, and your editor has had a chance to get a
feel for what the 64-bit world has to offer.
The hardest question, perhaps, was the choice of distribution to run. The
new system replaces a Debian unstable box, so Debian was the obvious first
choice. The state of the Debian x86_64 port is a
little discouraging, however. Installation requires starting with the
basic x86 distribution, coming up with 64-bit versions of gcc and glibc,
building a new 64-bit kernel, booting that, and piecing together the rest
of the system with the other x86_64 packages that have become available.
More than ten years ago, your editor converted, by hand, his first Linux
box from a.out to ELF binaries; installing Debian x86_64 looks like a
similar process. Somehow, what looked like an interesting and instructive
adventure in the early 1990s is distinctly less appealing now.
MandrakeSoft and SUSE both offer x86_64 versions of their distributions. The
Gentoo port seems to be coming along reasonably well, but some time spent
digging through the Gentoo package
database shows that much of the software base still lacks x86_64 support. Your editor,
in the end, went with the Fedora Core 2 test 2 release, at least for
now. FC2t2 gives good visibility into the development process (as do
Mandrake and Gentoo), a familiar Red Hat core, and the ability to play
around with some bleeding-edge features like SELinux. It also is designed
around the 2.6 kernel, which is an important feature.
When one leaves the x86 mainstream, it does not take long to realize that
the well-trodden pathways have been left behind. Mirrors for the x86_64
architecture are relatively scarce and often behind the times. Most
applications do not, yet, come prebuilt for this architecture.
Documentation on how to get x86_64 systems up and running is minimal. It
is all a bit of an adventure.
That said, the FC2t2 distribution works well - as well as could be expected
on any architecture for a development release. And the really nice thing
about the x86_64 architecture is that most 32-bit x86 binaries work just fine,
as long as you have 32-bit versions of the relevant libraries around. That
fact alone makes the transition to this architecture relatively easy.
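A quick way to see which world a given binary lives in is to look at its ELF header; a small sketch (the helper function name is our own):

```shell
# The fifth byte of an ELF file (the EI_CLASS field) is 01 for a
# 32-bit binary and 02 for a 64-bit one.
elf_class() {
    case "$(head -c 5 "$1" | od -An -tx1 | awk '{print $5}')" in
        01) echo "32-bit ELF" ;;
        02) echo "64-bit ELF" ;;
        *)  echo "not an ELF file" ;;
    esac
}

elf_class /bin/ls
```

Running ldd on a 32-bit binary then shows whether the matching 32-bit libraries are actually installed.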
The need for 32-bit libraries complicates system administration, however.
An x86_64 Fedora system has many duplicated packages installed, and working
with rpm can, occasionally, be a bit confusing. The rpm interface was not,
perhaps, designed for dealing with a world where two packages have the same
name and version number, but are still distinct. Unless you plan to leave
the 32-bit world behind entirely, however, you will need two versions of
the libraries. Chances are that most x86_64 systems will want to run
32-bit binaries for some time - in some cases, they perform better, and, in
any case, some programs in FC2t2 (e.g. OpenOffice.org) are still built that
way.
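One way to tell the twin packages apart is to make rpm print the architecture explicitly (glibc here is just an example package):

```shell
# "rpm -q glibc" prints two identical-looking lines when both the
# i386 and x86_64 packages are installed; adding %{arch} to the
# query format distinguishes them.
rpm -q --queryformat '%{name}-%{version}-%{release}.%{arch}\n' glibc
```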
Building applications can also be a bit of a challenge, at least at first.
Quite a few makefiles and configure scripts assume that libraries live in
/usr/lib. On a Fedora system, /usr/lib has the 32-bit
versions of the libraries; the native versions live in
/usr/lib64. A makefile which uses the default gcc (which compiles
in 64-bit mode) and tries to explicitly link against things in
/usr/lib will fail. Once you learn to recognize this problem, it
gets easy to fix.
Your editor was naturally interested in performance issues. To that end,
he built a version of bzip2 in both 64-bit and 32-bit mode and compared the
results. Both compression and decompression ran about 10% faster in the
64-bit mode. Better performance is generally expected in the processor's
native mode, mainly due to the additional registers available there (the
x86_64 architecture doubles the number of general-purpose registers).
The executable size and memory usage in 64-bit mode were
larger, but not by much. A second test, using the SoundTouch
library yielded a surprise, however: changing the tempo of a large sound
file ran in less than 1/5 the time in 32-bit mode. The Athlon64 processor,
it would seem, runs certain operations far more slowly in 64-bit mode; your
editor has not, yet, had the time to track this one down.
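Comparisons like these are easy to reproduce with a small timing loop; a rough sketch (GNU date is assumed, and the bzip2 binaries named in the comment are hypothetical):

```shell
# Run a command several times and report the average wall-clock
# time in milliseconds (uses GNU date's %N nanosecond field).
avg_ms() {
    runs=$1; shift
    start=$(date +%s%N)
    i=0
    while [ "$i" -lt "$runs" ]; do
        "$@" > /dev/null 2>&1
        i=$((i + 1))
    done
    end=$(date +%s%N)
    echo $(( (end - start) / runs / 1000000 ))
}

# Typical use, comparing the two builds on the same input:
#   avg_ms 5 ./bzip2-64 -kf big.file
#   avg_ms 5 ./bzip2-32 -kf big.file
```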
Despite the paucity of mirrors, the glitches, and the surprises, the x86_64
platform makes for a very nice Linux system. The kernel support for this
architecture is outstanding, the performance is good, and the expanded
address space renders concepts like "high memory" obsolete. After all,
we'll never need more memory than can be addressed with 64 bits...
Seriously, however, this architecture has helped to realize one of the
great promises of Linux: a freedom of choice in hardware as well as
software. 64-bit systems are now available at a price even an LWN editor
can afford. This editor, who just shifted his old Pentium 450 box over to
sacrificial kernel testing duty, is distinctly less grumpy.
A new version of the much-hyped Nvu "Web Authoring System" is out, as well
as an updated version of the popular Bluefish editor. Since Web development
is an essential component of Linux's success on the desktop, we
thought we'd take a look at these two releases as a gauge of Web
development tools available for Linux users.
The Nvu web site promises "A
complete Web Authoring System for Linux
Desktop users to rival programs like FrontPage and Dreamweaver." How
close does Nvu come to delivering on that promise?
To evaluate Nvu, one must first install the software. At the time of this
writing, the Nvu website offers packages for Lindows, Fedora Core 2 test 1
and Windows. Other interested parties must compile the application from
source. While this does not usually present a major hurdle for Linux users,
Nvu is not available in anything so straightforward as a source
tarball. The instructions, such as they are, direct the user to pull
Mozilla from CVS, save a modified .mozconfig into the Mozilla source
directory, download a separate patch from Nvu and finally compile the
software. One almost gets the impression that the Nvu developers are
looking to make life difficult for non-Lindows users.
After jumping through the numerous hoops required to compile Nvu, we set
about evaluating the software. Since Nvu is derived from Mozilla's
Composer, we decided to open both applications up side-by-side to see what
improvements had been made to Composer. Nvu is not drastically different
from Composer, but there are a few new features worth noting. Nvu has some
obvious cosmetic differences, and offers an improved tabbed interface for
multiple document editing. It also includes a "Site Manager" Sidebar, which
is not available in Composer.
Another feature touted for Nvu is the ability to create templates that have
read-only sections and editable sections. Unfortunately, our attempts to
work with templates were less than successful. After creating and saving a
template, an attempt to create a new document based on a simple template
caused Nvu to promptly crash.
Nvu also includes "CaScadeS," a CSS editor that allows fine-grained control
over the styles applied to elements in your documents. The feature is
interesting, but slightly counter-intuitive. To invoke the editing menu for
a specific element, the user must right-click on that element in a list
shown at the bottom of the editor; a user who does not know to look there
is likely to miss the feature entirely. Once discovered, it is easy to
use, but it would be much more intuitive if the user were able to
right-click on the element itself in the editing pane to bring up the
CaScadeS menu.
Nvu shows a great deal of promise, but it's not quite ready for a showdown
with Macromedia's Dreamweaver.
The Bluefish Web development
tool takes a different approach with its
"What You See is What You Need" interface. Users who wish to try out the
recent 0.13 release will appreciate that Bluefish is provided in a
straightforward source tarball. Unlike Nvu's, Bluefish's feature set is
more appropriate for the experienced Web developer working on more advanced
projects, including dynamic sites that make use of PHP, Perl, Python and
other scripting languages. Bluefish includes syntax highlighting for a host
of languages; everything from HTML to ColdFusion is represented.
It takes some time to fully explore Bluefish and all its features. Bluefish
provides a number of wizards and dialogs that make it much easier to add
forms, tables and so forth to a document. This writer particularly likes
Bluefish's custom menu, which allows the user to create their own dialogs
to generate snippets of code. The "Quickbar," which allows users to add
frequently-used buttons from other toolbars, is also a favorite.
Bluefish offers Web developers as much, or as little, assistance as they
need. A user can opt to use Bluefish as a souped-up text editor with
excellent syntax highlighting, or rely on Bluefish to generate much of
their code through wizards and dialogs.
Another nice thing about Bluefish is that it integrates well with other
tools that Web developers often use. Users can pipe their files in Bluefish
through HTML Tidy, Weblint and other programs to validate their HTML, or
easily configure Bluefish to open their work in their browser(s) of choice.
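The same check is available outside the editor, too; assuming HTML Tidy is installed, a quick command-line validation looks like this:

```shell
# Write a small test page, then ask tidy to check it.  tidy exits
# with 0 for clean markup, 1 for warnings only, and 2 for errors.
cat > page.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
<head><title>Test page</title></head>
<body><p>Hello, web.</p></body>
</html>
EOF
tidy -q -e page.html || echo "tidy found problems (exit status $?)"
```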
Despite the low version number, Bluefish is fairly mature and very
stable. It's well worth a look for users who want a flexible Web
development tool.
There are, of course, a number of other open source Web development tools
for Linux. The Screem Web development package is fairly popular, as is
Quanta Plus, which we touched on when KDE 3.2
was released. For many, no IDE or GUI-based tool can replace Emacs or Vim
for churning out websites.
None of the tools available for Linux are quite as slick and polished as
Dreamweaver, but there are certainly plenty of options for users who are
looking for a suitable open source Web development tool.
The CEO of Green Hills Software, a proprietary embedded software company,
has sent out an
amazing press release
on how the use of free software in defense
systems "violates every principle of security." The PR tells us about how
"developers in Russia and China" are contributing to Linux, and the
horrible fate that awaits us:
Linux in the defense environment is the classic Trojan horse
scenario -- a gift of 'free' software is being brought inside our
critical defenses. If we proceed with plans to allow Linux to run
these defense systems without demanding proof that it contains no
subversive or dangerous code waiting to emerge after we bring it
inside, then we invite the fate of Troy.
The strident tone of the release, combined with the focus on threats from
Russia and China, makes it look like something from the Reagan
administration. It's hard to take this thing seriously.
The press release has been quickly written off as a desperate outburst from
a proprietary company that is losing business to Linux. And that is
probably exactly what it is. It would be interesting to hear how Green
Hills would explain this
Cisco security alert which came out on the same day as the anti-Linux
press release. Some of Cisco's products, it would seem, were shipped with a back
door which gives attackers full access; "there is no workaround." It is
also worth noting that the InterBase backdoor
existed in the proprietary product for years, but was discovered when the
product went open source. The remote shutdown "feature" found in a number
of software products is also relevant here. Proprietary software is not
immune to backdoors and Trojan horses; indeed, the opaque nature of
closed-source programs would seem to encourage that sort of misfeature.
Another point worthy of note: attempts to place back doors in free software
have mostly been carried out via the distribution network. Last year's kernel backdoor attempt tried to slip the code
in after compromising a CVS server. Trojan horse attacks on tcpdump, sendmail, OpenSSH, and others have worked by corrupting
distribution files, again via a compromised server. On the other hand, it
is very hard to find any record of an attempt to insert any sort of back
door via the free software development process. Such an attack, it would
seem, is not that easy to carry out; if it were, why would attackers prefer
direct assaults on infrastructure and distribution files - an approach
which is certain to lead to quick detection?
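Corrupted distribution files are exactly what published checksums (and, better yet, detached signatures) exist to catch; a minimal sketch with a stand-in tarball:

```shell
# On a trusted machine, publish a checksum alongside the release...
echo "pristine release" > release.tar.gz
md5sum release.tar.gz > MD5SUMS

# ...and verify it after downloading.  A swapped-in trojaned copy
# fails the check.
md5sum -c MD5SUMS
echo "trojaned release" > release.tar.gz
md5sum -c MD5SUMS || echo "checksum mismatch: do not install"
```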
The free software development process is, perhaps, more robust than its
detractors would have people believe. But, once we're done patting
ourselves on the back (and let's not be too long about it) we have to face
a fundamental fact: code containing security vulnerabilities is committed
to project repositories every day. These vulnerabilities do not result
from deliberate attacks; they are, instead, simple bugs. But they get
into the code base, despite our heavily promoted review process.
It is also true that, sooner or later, somebody will certainly attempt to
get bad code accepted by a free software project. That code may contain a
back door, or it may be one of those "intellectual property" violations
that some people would so dearly love to find in Linux. Given that we
prove on a daily basis that insecure code is able to survive our
development process, how confident are we, really, that we'll trap a
deliberate, well-hidden hole? There are reasons to believe that our
processes are better than the proprietary variety; at least some outsiders
are looking at the code, and the chances that a backdoor will lurk for
years are small. But we cannot simply write off this threat; sooner or
later, it is going to come back to us.
Page editor: Jonathan Corbet