The next battle in the war for software and data freedom is likely to be in
the online services realm. There are already calls for legislation to
govern what Gmail and Facebook can do with your data, along with efforts to
provide free alternatives to some popular web applications. Coming at the
problem from a different direction, the Forkolator project is looking toward a
world where free web applications are not only free to change, but where
those changes are immediately available for use on the same site.
Many of the web applications that people use today are not free in any
sense other than price. There are also lots of applications that are free
software – Wikipedia and Wordpress are often used as examples –
but changing the source code for them does little to change the user's
experience, because the service controls the software version that they
run. This is as it should be; few would argue that Wikipedia
should be forced to run some modified version of its code. Vast
quantities of collaboratively developed data reside there, however, and
any modified version of Wikipedia would want to access that data. Currently, one
could work with the Wikipedia folks to get the change integrated into their
codebase and eventually rolled-out for users, or one could fork the project.
The Forkolator vision – at this point it is not much more than that
– is to provide a third choice. In a mockup of the
Wordpress management interface, Forkolator founder Erik Pukinskis added
a "fork this page" button. Somewhere down the road, if Wordpress were written to support
Forkolator, that button would instantiate a copy of the server code,
running on the same server with access to all the same data. It
would then allow a user to change the underlying code to fix a bug or add a
feature, which would then run live in that instance. Users who accessed
the weblog or management screen would use the updated code.
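To make the idea concrete, here is a deliberately naive Python sketch (Forkolator itself is PHP, and nothing below is its actual design): each fork is a user-modified copy of the rendering code, instantiated on demand and run against the shared site data.

```python
# Hypothetical sketch of per-fork dispatch in a Forkolator-style host.
# All names and the exec-based mechanism are invented for illustration.

# Shared data that every fork can see; in the Forkolator vision this
# would be the site's real database.
SHARED_DATA = {"posts": ["first post", "second post"]}

# Each fork is a user-modified copy of the page-rendering source code.
FORKS = {
    "original": "def render(data):\n    return '\\n'.join(data['posts'])",
    "newest-first": "def render(data):\n    return '\\n'.join(reversed(data['posts']))",
}

def run_fork(fork_id):
    """Compile the chosen fork's code and run it against the shared data."""
    namespace = {}
    exec(FORKS[fork_id], namespace)   # instantiate this fork's version
    return namespace["render"](SHARED_DATA)
```

A user who forks the page edits only their entry in the code store; every fork still renders the same live data, which is the property that distinguishes this model from a conventional fork.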
Obviously, people who are able to host their own Wordpress instances can
do this already – it is free software, after all. What
may be missing is the collaborative environment that a blog hosted at
wordpress.com provides. Wordpress is free software, but wordpress.com
does not provide a free, as in freedom, service. Likewise for Wikipedia,
most of the value is in the site itself and the data; even forking it only
gives a static version at the point of the fork. The Forkolator concept
would provide another level of freedom: users could have their
own views of Wikipedia running side-by-side with the standard code, letting
readers decide which they preferred.
At the moment, Forkolator is a PHP application providing a web-based
integrated development environment (IDE) that can itself be forked and
modified live. It serves as a kind of proof of concept, though an IDE
running in the browser may not be the ideal development environment. Ruby on Rails already has
Heroku, which shares many traits with the
Forkolator vision. The focus of Heroku seems to be avoiding the pain of
deploying an individual web application rather than Forkolator's explicit
push for freedom in the web services arena.
The problems inherent in allowing users to modify the function of a
server-side application are legion. Forkolator advocate Sandy Armstrong
calls the problems "staggering", and they are: providing security, privacy,
and stability while still allowing user modification is uncharted
territory. Solving those problems in a sensible fashion will make or break
the project and it is far from clear that they can be solved.
There is talk that some of the problems inherent in the model could be
solved in the same way that wiki defacements are handled: by the community.
If a rogue user modified the web application to be a spambot, for example,
other users could shut down or quarantine the fork. Data access is another
area that will need close attention. Obviously the application needs read
and write access to the database, but how can you keep rogue applications
from trashing the data for everyone else? This goes well beyond defacing
individual pages; wholesale removal of all content could be effected by a
malicious application. The Forkolator team will need to come up with ways
to deal with all of these kinds of problems and more.
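As a thought experiment only, the two safeguards mentioned above, community quarantine of rogue forks and protection of the shared data, might reduce to something like this Python sketch; the class, names, and policy here are all hypothetical, not anything the Forkolator project has specified.

```python
# Hypothetical sketch: forks get mediated data access, and the
# community can quarantine a rogue fork. Nothing here is real
# Forkolator code; every name is invented for illustration.

QUARANTINED = set()        # forks the community has shut down
DATA = {"page1": "original content", "page2": "more content"}

class ForkDataAccess:
    """Mediated access handed to each fork instead of the raw data store."""
    def __init__(self, fork_id):
        self.fork_id = fork_id

    def write(self, key, value):
        if self.fork_id in QUARANTINED:
            raise PermissionError(f"fork {self.fork_id} is quarantined")
        DATA[key] = value

    def delete(self, key):
        # Wholesale removal is exactly the attack described above, so
        # destructive operations are simply refused in this sketch.
        raise PermissionError("forks may not delete shared content")

access = ForkDataAccess("spambot")
access.write("page1", "spam")     # allowed, until the community reacts
QUARANTINED.add("spambot")        # other users quarantine the rogue fork
```

The interesting design question, which this sketch dodges, is who gets to add a fork to the quarantine set and how such decisions are reviewed.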
Forkolator is in its infancy – perhaps gestation is more accurate
– with an enormous number of serious technical hurdles to overcome,
but it does provide an interesting view of how free web services could
work. It is not a model that all web applications will adopt, with good
reason, but for sites that are largely collaborative in nature, it could make a
great deal of sense. Whether Forkolator, Heroku, or some other framework
can actually deliver the vision remains to be seen. We will be watching.
The Ninth Real-Time Linux Workshop, held in early November in Linz, Austria,
provides a look into the current direction of realtime Linux research as
well as applications of the technology. LinuxDevices has collected the
available papers from the workshop, which make for interesting reading.
Roughly half of the papers cover applications, from robotics to train
monitoring, while the other half cover realtime development and measuring
the impacts of various techniques.
Realtime Linux solutions have branched out quite a bit since the original
RTLinux. Because that solution is patented, now owned by Wind River, and largely unmaintained, various other
solutions are maturing. In addition, the realtime preemption (RT_PREEMPT)
patches are also making their way into
the mainline kernel. For "hard" realtime, guarantees must be made about
the interrupt (and other) latencies in the system; so far Linux with
RT_PREEMPT has not been proven to make those guarantees. It does provide a
solution described by some of the authors as "good enough"
for many hard realtime applications, however.
Several of the papers covered various aspects of the performance of the
RT_PREEMPT kernel. Worst-case latencies for low-end PowerPC and
ARM processors (suitable for embedded applications) were measured and
reported. Two different clock frequencies were used for each processor
to determine if there was a simple relationship between processor speed and
latency: "A better realtime behavior cannot be achieved by simply
choosing a processor with a higher clock frequency."
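The sampling idea behind such measurements is simple, even though the papers did it properly with instrumented kernels and real embedded hardware. This user-space Python sketch is an illustration of the principle only, not the authors' method: wake up periodically, record how late each wakeup was, and report the worst case and average.

```python
# Toy illustration of wakeup-latency sampling. Real measurements are
# done in or near the kernel (e.g. with high-resolution timers); this
# merely shows the worst-case/average bookkeeping the papers report.
import time

def measure_wakeup_latency(period_s=0.001, samples=100):
    """Sleep for a fixed period repeatedly and record how late we wake."""
    latencies = []
    for _ in range(samples):
        target = time.monotonic() + period_s
        time.sleep(period_s)
        # Overshoot past the intended wakeup time is the latency.
        latencies.append(time.monotonic() - target)
    return max(latencies), sum(latencies) / len(latencies)

worst, avg = measure_wakeup_latency()
print(f"worst-case: {worst * 1e6:.0f} us, average: {avg * 1e6:.0f} us")
```

The gap between the average and the worst case is the whole point: hard realtime is about bounding the maximum, and the papers' finding was that a faster clock does not automatically shrink it.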
Another paper measured the impact of RT_PREEMPT on general system
performance to gauge the cost of those kernel changes. They found
"no significant impact of [RT_PREEMPT] on the general performance of the system
unlike the preempt patches of earlier kernel versions." They also
measured latencies and jitter to try to determine its suitability for hard
realtime tasks, finding that even though there are no guaranteed worst case
latencies, RT_PREEMPT kernels are not "definitely unsuitable".
The third paper measuring performance looked at the performance
characteristics of an RT_PREEMPT kernel on an industrial controller
board. In addition, the measurements were validated using a paint robot.
Their conclusion provides a nice summary of the progress the Linux kernel
has made for realtime applications:
Linux has for a long time proven that its stability is excellent, and now
we see that the real-time performance is really moving towards other
commercial real-time operating systems. The ability to be able to run a
real-time application on the same processor as other standard applications
is a winning combination. This is really what favors Linux as a real-time
operating system compared to other dedicated real-time operating systems.
Research into how to effectively use multi-processor and multi-core systems
for realtime tasks was the topic of another of the presentations, which
described a kernel modification that implements pluggable schedulers. It
was created to test different kinds of scheduling policies to discover
which algorithms work best for realtime applications on multiple
processors.
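The pluggable-scheduler idea can be sketched as a registry of interchangeable policies behind one interface, so that different algorithms can be swapped in and compared. The policies and task layout below are invented for illustration; they are not the project's actual interface.

```python
# Illustrative sketch of pluggable scheduling policies behind a common
# interface. Task tuples are (name, deadline, priority); both fields
# and both policies are invented for this example.

SCHEDULERS = {}

def register(name):
    """Decorator that installs a policy under a name in the registry."""
    def wrap(fn):
        SCHEDULERS[name] = fn
        return fn
    return wrap

@register("edf")
def earliest_deadline_first(tasks):
    # Classic EDF: run whichever task's deadline is nearest.
    return min(tasks, key=lambda t: t[1])

@register("fixed-priority")
def fixed_priority(tasks):
    # Static priorities: lower number means more urgent.
    return min(tasks, key=lambda t: t[2])

tasks = [("audio", 5, 1), ("logging", 9, 3), ("motor", 3, 2)]
for name, policy in SCHEDULERS.items():
    print(name, "->", policy(tasks)[0])
```

With the same task set, the two policies pick different tasks, which is exactly the kind of comparison a pluggable framework makes cheap to run.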
An area that generally receives little notice in the realtime community is
disk I/O, but a paper
presented there looks to change that. The authors looked at existing I/O
schedulers for realtime systems and found them lacking – the models
used are too simplistic and do not take into account prefetching and
write-caching. They implemented a more realistic model into an I/O
scheduler for RTLinux and report their results.
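The basic tension such a scheduler must resolve, request deadlines versus seek efficiency, can be shown with a toy Python policy. This is invented for illustration and is not the paper's model, which additionally accounts for prefetching and write-caching.

```python
# Toy realtime I/O scheduling policy: serve the request with the
# earliest deadline, breaking ties by shortest seek distance from the
# current head position. Invented for illustration only.

def schedule(requests, head):
    """Pick the next request by (deadline, seek distance)."""
    return min(requests,
               key=lambda r: (r["deadline"], abs(r["sector"] - head)))

reqs = [
    {"sector": 100, "deadline": 5},
    {"sector": 10,  "deadline": 5},
    {"sector": 50,  "deadline": 2},
]
print(schedule(reqs, head=20))   # the deadline-2 request wins
```

The paper's point is that a model this simple misstates real disk behavior: with prefetching and write-caching, the cost of a request depends on more than its seek distance.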
The XtratuM "nanokernel" is a
virtualization solution used in realtime applications. Linux has
also been ported to run on XtratuM for the x86 architecture, which allows
it to run alongside a realtime OS. Two papers were concerned with
XtratuM, one covering a FIFO
implementation between XtratuM domains, allowing communication between
guest OSes. The other covered porting it to another architecture.
Another OS presented at the workshop is a compatible replacement for
RTLinux, allowing applications built
for that platform to run unchanged. It uses an entirely different
technique, implementing the kernel system calls itself, rather than using
the Linux kernel. This makes the connection to Linux a bit tenuous, but
because it avoids the RTLinux patents and is LGPL licensed, it may be a
useful migration path for RTLinux users.
The participation of universities in the workshop is something that stands
out right away. The vast majority of the papers came from universities,
mostly European – unsurprising given the location – but from
China and Mexico as well. LWN raised some questions about the lack of
university participation in Linux development back in July; perhaps part
of the answer lies in the realtime realm. It is unclear how much of the
code will actually
reach the mainline, but the number of University participants in the
workshop is impressive.
This article just notes some of the papers presented; for those interested,
there is much more available. The papers covering various applications where
realtime Linux is actually being used are very detailed. We can expect to
see Linux used more frequently in these kinds of applications in the future.
The GNOME Foundation is charged with several tasks, including serving as
the official voice of the project, coordinating releases, deciding which
projects fit under the GNOME umbrella, supporting events, and more. Once a
year, a board of directors is chosen by the Foundation's members. This
time around, there are ten candidates
running for the seven available positions. This
election may seem like another boring bureaucratic exercise, but its
results are important: GNOME is the desktop used by a great many free
software users, and it is the platform supported by the Free Software
Foundation.
In a number of ways, this seems like one of the more tense elections of its
kind in our community. A number of items discussed last year (such as the
hiring of a business development manager and/or executive director) remain
undone. The workings of the board seem distant and obscure to some GNOME
developers. There are clear
tensions between some of the project's leaders. Criticism of the
project's participation in the OOXML standardization process seems unlikely
to let up anytime soon. And there seems to be a general sense of
frustration that the board's members are too busy to get things done and too unwilling to delegate things to others. It's also worth noting that the winners will be serving a relatively long term; a change in the Foundation's bylaws means that the next election will happen sometime around June, 2009.
Given that, the themes which have come out in the electoral debate should
be clear. How should the whole OOXML participation process have been
handled? What should be done with the Foundation's money (about $150,000
in the bank and $50,000 in receivables, according to the minutes from a recent board meeting)?
How should GNOME push forward into interesting areas, such as mobile
applications and web-hosted services? And how can the board become more
effective than it has been in the past?
Along with deciding on these issues, the new board will have one other new
decision ahead of it. Until very recently, the Foundation has operated
under a single president: a certain Miguel de Icaza. Miguel has been
absent from the GNOME development community for some time, and many of the
developers in that community have not found themselves in agreement with
the public positions he has taken. The current board has convinced
Miguel to resign the presidency, and has changed the Foundation's by-laws
to the effect
that, in the future, the president will be appointed by the board. The
interim president will be Quim Gil.
In that context, here are a few selections from recent statements by this
year's candidates.
I think it is an important part of the Foundation to encourage new
people to get involved with volunteer aspects of the community. I
would like to encourage more participation from communities that
are not so well represented today. For example, users with
accessibility needs. I think having someone on the board with
accessibility experience is important to foster these sorts of [...]
I think it would add value to spend more on marketing and on
evangelical community building opportunities. For example, Windows
and MacOS have flashy "Welcome to the desktop" presentations.
Perhaps it is time for the GNOME community to find ways to better [...]
One tipping point for GNOME would be when the membership/community
stops thinking of the board as visionaries who set the direction and
happenings of the project and starts seeing that it's just a set of
trusted people who volunteered to do the boring and frustrating
tasks (take my word for that) that are so essential to the project
but no-one else is doing. [...]
As for the issue of single standards, I hate it when people use
standardization as a tool to take advantage over their competitors.
"I got here first, so you can't" is exactly what's broken about the
patent system right now. Think about it.
Personally, I would not mind it if GNOME were more compatible with
web services; however, I would not want a desktop which is
dependent on them. A danger of an online desktop would be the
dependency on non libre software services where we are not invited
to make changes. [...]
There are important topics like the Online Desktop and OOXML which
many are interested in; however, I would like to bring to
everyone's attention that GNOME accessibility could be positioned
as a clear winner over Windows's MSAA and KDE accessibility, but
instead GNOME's accessibility is on the defensive. From an
accessibility perspective, GNOME could be winning the hearts and
minds of corporations and government agencies; however, GNOME
accessibility is being threatened by the deprecation of Orbit2 &
its migration to DBus, and the migration of Microsoft's UIA to
GNU/Linux. Why regress and/or re-engineer when we can beat the [...]
[T]he Online Desktop could be the one thing that will tip the scale
when users choose their desktop environment. I've had the
opportunity to see a few demos and was fairly impressed with its
potential. I believe that it is not up to the Board to decide on
the implementation or even which tools/languages to use, but serve
as a facilitator and guiding light to make sure that the project
stays on track and focused... GNOME users have become used to
expect innovation and great software in every release, so the
Online Desktop could definitely provide that extra buzz!
I'd like to see more support going for the guys behind Abiword,
Glom, Gnumeric, Epiphany, etc... Open Office and Firefox are GREAT
examples of good software but I happen to believe that we already
have great software in our code base that has been delegated to
second place. How about we promote an event where people who are
involved with the software mentioned before plus anyone who can be
of help and offer insight can sit down and jot down what needs to
be done in order to bring them out of the closet?
I see the GNOME Online push as pulling us into the Wild West of the
Web platform where everyone is staking their claims and there is
yet to be monopolies to stifle innovation. Sure Google is big but
sites like Facebook and Wikipedia were able to emerge. The only
way to defeat entrenched adversaries in business is to outflank
them with disruptive technology. Microsoft did it to IBM with the
Desktop, Google did it to Microsoft with web search and we have the
chance to bring in integrated Open Source web applications to the
mix and even define a new era of Open Services.
Well one weak point is the board seems almost foreign to the every
day GNOME contributor. People vote and pretty much forget about
the inner workings until Slashdot gets a hold on some
sensationalized story and a press release is put out and still to
the outside world the role of the foundation is unclear. It is
hard to figure out weak points because it is hard to see exactly
what the foundation does. I would fix this by communicating any
decision, from the mundane to the sensational, in an easy to digest
format on my blog. Meeting minutes and press releases are just not
enough. Active engagement of the community is a must.
I think the Online Desktop initiative is a great opportunity for us
to widen the scope of the GNOME project from a specific desktop
environment to a broader user experiences set. This means taking
advantage of this huge amount of funny, socially powerful, useful
information and services available on the Web. Embracing Online
Desktop also means trying to bring a new set of goals to GNOME
which are related to a more social and entertaining user
experience, something that, in my opinion, has been lacking in
GNOME for a long time.
I think the most serious problem about GNOME Foundation
participation on ECMA TC45-M was that it wasn't properly explained
and clarified to the community at the time it started. The
statement came after a lot of noise.
About the GNOME Foundation being part of the OOXML ECMA committee:
I've supported this decision and I still do. If we can have someone
asking for clarifications and maybe even have the ability to
improve the format, it'd be wrong to not do it and just complain
about the format. We want our users to read their files, and some
will have OOXML files. This means I'll want our applications to be
able to read such files, and therefore that a better documentation
of the format is good.
We've seen this year that hiring an "executive director" is hard,
very hard. I'm hopeful that hiring a sysadmin would be
(comparatively) easier. And I'm also hopeful that we can get some
funding to hire the sysadmin. So my plan is to hire a sysadmin
using part of what we have in our bank account now and using some
new funding, and keep enough cash so that we can hire an "executive
director" too. It might sound too ambitious, but I think it's
doable and that it's the best way to go.
Diego Escalante Urrelo
Support initiatives in Latin America for getting people involved as
users and developers. Concretely, I would like to "deploy" 2 or 3
of our rockstars next year to a LA-tour; as seen on the marketing-list
and later gugmasters, the idea has had a positive response. I would like
to serve as a
direct link to this initiative and hopefully other similar ones.
I would have included a line in all-caps saying "GNOME Foundation
doesn't like OOXML, we have someone in the committee because
standard or not Ms is gonna push it everywhere, so we are taking
the chance to ask questions and raise concern on all the problems
we can find."
I'll be running again for the Board this year. This will be an
unusual candidacy. I will not be running to do various and sundry
board tasks; I'll be running to do exactly one thing: legal work. A
vote for me is a vote that says 'Luis should be the coordinator of
all GNOME-related legal issues.'
I think it is inevitable that GNOME, or GNOME partners, will be
offering web-backed services to GNOME users. My personal vision for
that is to dot the i's and cross the t's on the legal parts: to
make sure that as we sail into uncharted waters, the rights of
GNOME users and contributors are being protected.
I wish [the statement on OOXML] were more explicit about how the
Foundation feels that the ODF folks have been undermining the
standards process. It isn't obvious to everyone that ODF shares
much of the blame for the politicization of the process, so the
statements about that in the statement are a little vague.
It is ISO's role to facilitate the development of standards in a
coherent, transparent manner, not to determine the market demand
for a given standard. I think it's extremely short-sighted to
protest OOXML on the basis of "competing standards" given that
standards exist for technologies that we are very likely to want
true Free standards for in the future – for example, video encoders.
We must have a full time staff member to manage any further hires,
as there is no way our part time administrator should have to deal
with any duties related to management. So, of the two, I'd prefer a
full time, management capable hire before a sysadmin hire.
Ballots must be returned by December 9, and the initial results from
the election are due to be announced on December 11; stay tuned.
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: ITU getting serious about botnets; New vulnerabilities in firefox, kernel, openldap, wireshark...
- Kernel: Tightening symbol exports; kmemcheck; System call updates: indirect(), timerfd(), and hijack().
- Distributions: openSUSE seeks new design for the YaST Control Center; MontaVista Carrier Grade Linux 5.0; Pie Box Enterprise Linux 4AS U6; SUSE Linux Enterprise Real Time 10; CentOS on your laptop; sidux turns 1
- Development: How The Backup Process Has Changed, Unofficial Gnuradio Documentation,
GCC 4.3.0 Status Report, Perl 6 on Parrot Roadmap,
new versions of BusyBox, Samba, GooPackage, OSsonar, 2step, mnoGoSearch,
Quixote, Graphviewer, Icarus Verilog, Atlas-C++, FluidSynth, HylaFAX,
Firefox, Myna, zenTrack, xMarkup.
- Press: Desktop Linux survey trends, Windows vs. Linux security, Akademy-es coverage,
Realtime Linux Workshop papers, Novell gets green light on SCO, running a
business on Linux, interviews with Linus and Google's Andy Rubin,
Firefox 3 Beta 1 review, Linux Audio Editors.
- Announcements: EFF on telecom lobbying, GNOME foundation elections, GNOME Foundation on OOXML,
OLPC sued in Nigeria, Ulrich Drepper's complete memory article,
Google Highly Open Participation Contest, HCAR cfp, Linux Clusters cfp,
ACIS cfp, NLUUG cfp, SCALE cfp, FOSS.IN schedule, LF Collaboration Summit,
OO.o Community Forum, LugRadio, PyCon podcasts.