For some years now, the Python development community has been talking about
"Python 3000," the far-future release which would allow a complete
rethinking of the language to fix the various annoyances which had built up over
time. On December 3, that talk came to fruition with the release of
Python 3.0. The new release is the end result of a great deal of thought and development; it
represents the vision Guido van Rossum and company have for the language
into the indefinite future. Now that it's out, the Python community as a
whole appears to have stopped for a "now what?" moment.
The wider Python development community appears to be split into three camps on Python 3.0;
the situation amusingly resembles the classic folk tale "Goldilocks and the
three bears." One set (the "too large" crowd) seems to think that an
incompatible version of Python should never have been released, that
languages should stay compatible forever. Another group ("too small") can
handle the idea of an incompatible transition, but thinks that the Python
community should have added more shiny features to the language while they
were at it. And, of course, there's a "just right" crowd taking the
position that the changes in Python 3 are just about as they should
be. See this
discussion by James Bennett for a well-argued description of the "just
right" position.
Time will tell which position is closest to reality. If the "too large"
group is right, Python 3 (or Python in general) will fade away as
developers, unhappy with the break, move to a language they like better.
If Python 3 is too small, there will be strong pressure for a
Python 4 in the too-near future. Your editor, though, thinks that the
Python community has come pretty close to getting it right. Things that
truly needed to be fixed got fixed, but the Python developers resisted the
temptation to try to do too much. They watched, from a safe distance, what
happened with the Mozilla rewrite and Perl 6, and wisely concluded
that their lives - and the lives of those who use Python - would be better
if they avoided a similar experience. So they limited their goals and were
able to get the job done in a reasonable amount of time.
Except, of course, that the job is not really done. To begin with, the
presence of a few difficulties with the 3.0 release should not surprise
anybody. The developers forgot to remove the deprecated cmp()
function, with the result that newly-converted code may come to depend on
it. There are some performance issues. A couple of other features are not
working quite right. Getting Unicode truly straightened out may take a
while yet - a problem which is certainly not unique to Python. The list
seems to be quite short given that this is a
major release of a complex programming language, but there are still things
to fix. So there will almost certainly be a 3.0.1 release before the end
of the year, and a 3.0.2 in (approximately) February.
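The cmp() leftover matters for porting because Python 3 removes both the builtin and the cmp= argument to sorting functions; the documented replacement is a key function. A minimal sketch (the example data is illustrative):

```python
# Python 3 drops the cmp() builtin and the cmp= argument to list.sort();
# comparison logic moves into a key function, called once per element.
words = ["banana", "Apple", "cherry"]

# Python 2 style, no longer valid in Python 3:
#   words.sort(cmp=lambda a, b: cmp(a.lower(), b.lower()))

# Python 3 style:
words.sort(key=str.lower)
print(words)  # ['Apple', 'banana', 'cherry']
```

Code that accidentally keeps using the leftover cmp() builtin in 3.0 will break when it is removed in a point release, which is why catching such dependencies early is worthwhile.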
Meanwhile, the Python hackers have made it clear that the 2.x version of the language
will be supported for some years yet. Version 2.6, available now,
includes a number of features aimed at making the eventual port to 3.0
easier. As the porting projects get serious, other ways to help that
process will become clear; there will be an eventual 2.7 release which
incorporates those lessons wherever possible. A 2.8 release further down
the road has not been ruled out. The current plan seems to be to maintain
Python 2.x for at least the next three years.
That is good because, for many Python developers, it is not yet really time
to make the jump to 3.0. The core language appears to be in
reasonably good shape, but a language like Python involves much more than
the core. Most non-trivial code makes heavy use of the wide variety of
Python libraries, and, at this point, many or most of those libraries do
not support Python 3. So, now is a good time for library maintainers
to be looking at moving to 3.0, but application developers who try to
port their code now are likely to run into frustration. Porting smaller
programs or subsystems as an exercise in learning the new language may make
sense, but complex application porting probably cannot happen for a little
while yet.
What distributors should be doing is another question. So far, it would
appear that only Fedora is having a (public) discussion on how to handle
the Python 3 transition - see this
thread - and they don't really know what they are going to do yet.
Fedora's maintainers, it seems, would prefer to stay with Python 2 for
the indefinite future; the chances of Python 3 making an appearance in
Fedora 11 are quite small. There is a strong wish to avoid
maintaining both 2.x and 3.x on the same distribution release; they would
rather make a clean switch.
Your editor suspects that the flag-day approach to the language transition
is not going to work. There are a lot of packages which need to be ported,
and many of the people doing the porting would appreciate support from
their distributor. Red Hat dragged its feet for a long time on the
transition to Python 2, with the result that many users had to build
and install the newer version of the language themselves. For Fedora to do
the same with Python 3 is a sure path toward user frustration.
That said, keeping both versions of the language around is not a task for
the faint of heart. Installing a different version of Python itself is
quite easy. Keeping a whole set of modules for multiple versions is
distinctly less so. This will be especially true for Fedora; some other
distributions (especially the Debian-derived ones) have better mechanisms
for (and experience in) maintaining multiple versions of core system tools.
The reluctance on the part of the Fedora developers to take on this work
is thus unsurprising. Perhaps this would be a good opportunity for offers
of help from the wider Fedora community.
It may well take a couple of years, but this transition will eventually be
made and people will eventually wonder what all the fuss was about. And,
when it's done, we'll have a cleaner, more maintainable, more
Unicode-rational version of an important programming language to work
with. That, one hopes, will be worth the short-term pain involved in
getting there.
(For more information, see the Python3000 FAQ,
currently under development.)
The KDE office application suite, KOffice, is getting closer to its 2.0
release. Beta 3 was announced
November 19, with another beta due any day. The final release is expected
early next year, so it seems like a good time to take it for a spin.
The beta releases are available for Kubuntu
Intrepid Ibex (8.10), making it relatively easy to try out. There are
also openSUSE and Debian packages available as well as source code (of
course). The author didn't look forward to trying to build KOffice on his
normal Fedora 9 desktop, so borrowing an Intrepid laptop from the wife was in
order; after that, enabling the "Unsupported Updates" repository and installing the
koffice-kde4 package (which didn't seem to work through the GUI, but
apt-get worked just fine) was all that it took.
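For reference, the apt-get route amounts to something like the following (a sketch, assuming the "Unsupported Updates" repository has already been enabled; the package name is as given above):

```shell
# Refresh the package lists after enabling the repository, then
# install the KOffice 2 beta package.
sudo apt-get update
sudo apt-get install koffice-kde4
```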
The initial impression was a bit rocky as most of the small handful of ODF
files that were
opened caused KOffice to crash. It is a beta, though, so some of that is
to be expected. Trying again with the imminent Beta 4 and filing bugs for
failures should be high on the author's list. The one presentation file
that successfully opened in KPresenter seemed to have lost much of the
formatting that was present in the original, which was also disheartening.
It should be noted that the author is hardly an office suite "power user".
Normally, OpenOffice.org is used for minimal business documents (invoices
mainly), simple spreadsheets (expense reports, football pools), and boring,
bullet-list slides for
presentations (as anyone who has been to one will attest). By and large,
these simple needs are met by OpenOffice, with the added bonus of being
mostly able to open the various Microsoft-format documents that
unfortunately cross the desktop. Any other office suite with similar
capabilities would serve just as well.
Opening spreadsheets in KSpread provided the most reliable experience when
opening existing documents, but there were still a number of problems.
Formulas did not calculate automatically regardless of the auto-recalculate
setting, but the data was there, unlike some of the other document types.
KWord seemed to be unable to open any of the ODF documents tried, crashing
in all cases. One "handy" .doc file opened, but the formatting and
contents were mangled; OpenOffice can reproduce the formatting of that
document pretty well. KWord also crashed on exit from that
document. Perhaps betas are not the place to try opening documents one
cares about.
There clearly are many new features
in KOffice 2.0, but the major ones, porting to KDE4/Qt4 and using the Flake object
library throughout, are infrastructural in nature—they aren't
obvious to users. Much like KDE 4.0, it would appear that KOffice 2.0 is a
launching pad for subsequent releases.
There is an emphasis on a consistent user interface between the various
applications which does stand out when using KOffice. For better or worse,
the OpenOffice interface is fairly consistent between applications as well,
but seems more cluttered, or more poorly organized somehow. Using Flake
everywhere will be a boon to those who are power users as it treats
everything as a "shape" that can be transformed (via scale, rotate, skew) and
moved between any of the separate applications. Vector graphics can
cohabitate with raster graphics and text easily.
Using KOffice 2.0 is fairly straightforward for simple tasks. It is
noticeably slower than OpenOffice on the same hardware. Opening files,
even empty documents, seems to take an inordinate amount of time. Even
moving around within KSpread or KWord seemed sluggish.
Presumably these are things that will be fixed; whether that happens in the
next few months or waits for KOffice 2.1 remains to be seen. This beta gives the
impression of great promise, but not yet a very usable tool.
Of course, there is more to KOffice than just the three applications
mentioned. The database application Kexi is not yet part of the KOffice
2.0 release, nor is the Visio-like flowchart program Kivio. Two drawing
applications, Karbon14 for vectors and Krita for raster graphics, have been
released with the beta. Other than a quick startup to see if the interface
was consistent with the rest of the suite—it was—the author
didn't try them. The same goes for KPlato, the project management and
planning application, though it has a rather different look—no
toolboxes on the right hand side—likely because of its very different
purpose.
Perhaps unfairly, the author expected a bit more from this beta release.
It would seem there is still a fair amount of work to do before the final
2.0 version, but there are still a few months left. For whatever reason,
previous attempts to use KOffice had always caused the author to quickly
switch back to OpenOffice. Even with all of the problems encountered, this
KOffice—or, more likely, 2.1—somehow seems a more plausible
switch. Another look in a few more months is likely called for.
Science fiction writer Vernor Vinge is best-known for novels like A
Fire Upon the Deep and Rainbows End, as well as for the concept of the
Singularity -- the idea that, in the next couple of decades, humans
will become or create a super-human intelligence. What is less well-known
is that Vinge has been a free software supporter since the earliest days of
the Free Software Foundation (FSF). He has served several times on the jury
for the FSF Awards and spoke at an FSF-sponsored event held last month in
San Diego to coincide with the LISA conference. As someone who deals
regularly with large scale speculations, Vinge places free software in a
larger historical context. He even speculates that free software may be one
of the factors that will shortly bring about the Singularity.
Part of Vinge's interest in free software is personal. A mathematician and computer scientist, he quickly found that the rise of proprietary software greatly increased the difficulties of teaching.
"When I looked at contracts and user-agreements," he
recalls, "the legalese was extraordinarily intimidating, not just
because it was complicated, but because it actually seemed to restrict
things to the point where it was really difficult to imagine how a student
could follow the agreement and still do a project. So the openness that was
in the GNU General Public License (GPL) was really very, very
welcome." Vinge soon got into the habit of giving students "a
little spiel about the GPL" and encouraging them to license their
projects under the GPL.
"If they did that," he says, "that would mean I would be
able to use their stuff in later projects with other students. And a very
large percentage of students in most classes thought it was a cool enough
idea that they actually did use [the GPL] in their projects."
The historical trend to cooperative infrastructure
However, as important as free software may have been to Vinge in his teaching, what seems to interest him the most is placing free software in a broader historical context. Early on, Vinge came to view free software -- and, later on, the Internet and social networking applications that it was instrumental in creating -- as part of a historical trend toward creating an increasingly elaborate "infrastructure of trust and cooperation" that increases the rate of technological advance.
Vinge says: "There are business inventions of the last 2000 years
like the widespread use of loans and credit, the use of insurance, the use
of limited liability corporations, all of which involve at least at the
beginning, a leap of trust." To Vinge, free software, the Internet and
social networking are simply the latest extensions to the infrastructure
created from such institutions. What these institutions all have in common
is that they allow people to interact in more creative and productive
ways.
More specifically, he sees free software as the natural and more logical
extension of the insight that had produced the shareware culture a few
years before the start of the GNU Project and the FSF. With
the emergence of the personal computer, entrepreneurs were finding that
"the barriers to entry were so low that you didn't need a lot of the
overhead that was involved in commercial stuff, and you might just be able
to get away with trusting people to pay you. There was much blind feeling
around the concept of producing stuff in some sort of context that was
different from cars."
According to Vinge, what the GPL and the software and institutions that
have grown up around it have produced is "a platform for experimenting with
social invention. In the 20th and 19th century, if you wanted to experiment
with a new infrastructure for people to interact in, in most cases, like
with the railroads, you needed enormous effort. And now -- we can actually
do social experiments -- cooperative experiments -- much more cheaply, and
you can design ways for people to interact based on just the software
guiding what the interactions are like."
Vinge acknowledges that the consequences have not always been beneficial.
"One thing the last ten years have proved is that we seem to be very
bad at thinking how stuff can be abused," he says, no doubt thinking
of such phenomena as crackers and online predators. "Any time you
can make something a hundred or a thousand times cheaper than it was
before, there are probably side-effects. But there's a tendency when
something works really, really well to push it hard and deliberately avoid
thinking about side-effects."
Still, the main change has been beneficial overall in Vinge's view. In
particular, he says: "One nice thing is that the price of failure is
a lot lower than what you might imagine in the 19th century. Say someone
spent ten million 1850 dollars to make steam-powered dirigibles. Now, it
doesn't work, and you've just spent a lot of money, and you don't have
anything except a lot of ruined effort. Now, there's still ruined effort if
something doesn't work out, but you can retarget or repurpose much more
easily, and you can justify taking much larger leaps of faith than you
could in 1850." The result is that more experimentation, and more and
quicker development becomes possible.
In this view, free software represents the most advanced realization yet
of the possibilities inherent in computer technology. "It's an
interesting, science-fictiony, parallel-world story to imagine what would
have happened if Richard Stallman hadn't come along with the GPL," says
Vinge. "Without Richard Stallman's insight, I think we would have
eventually got something like what we got with free software, but it would
have been a very interesting muddle. [The process] could have gone for
years, and it could easily have gone on so many years that it impacted the
era in which really large stuff can be built in the free model. So,
overall, I think we would have got something, but, even now, the low
overhead involved and even the insight that comes from the GPL would not be
there."
In other words, the GPL and modern computer structures are all "in
the tradition of the last few centuries. They're taking the traditions that
we saw with the industrial revolution and adding several layers of
magnitude to that flexibility."
Bringing on The Singularity
Although speculation is part of Vinge's stock in trade as an SF novelist,
he is cautious about predicting the future. "I always rush to say,
'Terrible things could happen!'" he says. "A giant meteor could hit the
earth, or a civil war could happen."
However, caution aside, Vinge does concede that "we have the tools to
keep running along the same lines for some time. And, in the absence of
disaster, it quickly runs to the point where you're talking about stuff
that's of the same significance as the rise of the human race within the
animal kingdom." In other words, the Singularity arrives.
Vinge does not offer a map of exactly how free software and its
infrastructure will lead to the Singularity. But, given the probable
inability of humans to understand super-human intelligence, he
should not be expected to do so. "It's easy to imagine," he says, "but you
run out of adjectives and high-sounding words that could mean anything to
someone like us." All that can really be said is that, as the latest
manifestation of the historical trend to increasingly complex cooperative
infrastructures, free software plays a large role in creating a future in
which the Singularity becomes increasingly inevitable.
"I think that's going to happen in the relatively near historical
future," says Vinge. "And these sorts of trends are all
consistent with that possibility."
Meanwhile, Vinge is personally content with the improvements that have come
to free software in the last couple of years. He is particularly pleased
that you can download and install a stable and easy-to-use operating system
in an afternoon. "If you look back over the last ten years, you see how
easy it's become to do things," he says. "It's silly to put a number to this,
but it's ten or a hundred times easier now. I can remember spending days
getting PPP to work. And now, you just plug this cable into that socket,
and it works. I feel much more able to do what I have to do without having
to worry very much, without having Catch-22s nibble me to death. Things
have really come together in a coherent and useful way."
Page editor: Jake Edge