The next battle in the war for software and data freedom is likely to be in the online services realm. There are already calls for legislation to govern what Gmail and Facebook can do with your data along with efforts to provide free alternatives to some popular web applications. Coming at the problem from a different direction, the Forkolator project is looking toward a world where free web applications are not only free to change, but those changes are immediately available to use on the same site.
Many of the web applications that people use today are not free in any sense other than price. There are also plenty of applications that are free software – Wikipedia and Wordpress are often used as examples – but changing their source code does little to change the user's experience, because the service controls the software version that it runs. This is as it should be; few would argue that Wikipedia should be forced to run someone's modified version of its code. However, vast quantities of collaboratively developed data reside there that any modified version of Wikipedia would want to access. Currently, one could work with the Wikipedia folks to get a change integrated into their codebase and eventually rolled out for users, or one could fork the project.
The Forkolator vision – at this point it is not much more than that – is to provide a third choice. In a mockup of the Wordpress management interface, Forkolator founder Erik Pukinskis added a "fork this page" button. Somewhere down the road, if Wordpress were written to support Forkolator, that button would instantiate a copy of the server code, with access to all the same data. A user could then change the underlying code to fix a bug or add a feature, and the modified code would run live in that instance. Users who accessed the weblog or management screen would use the updated code.
Obviously, people who are able to host their own Wordpress instances can do this already – it is free software, after all. What may be missing is the collaborative environment that a blog hosted at wordpress.com provides. Wordpress is free software, but wordpress.com does not provide a free, as in freedom, service. Likewise for Wikipedia: most of the value is in the site itself and the data; even forking the code only yields a static snapshot at the point of the fork. The Forkolator concept would provide another level of freedom; one could have one's own view of Wikipedia running side-by-side with the standard code, allowing users to decide which they preferred.
At the moment, Forkolator is a PHP application that provides a web-based integrated development environment (IDE) which can be forked and modified live. It is a kind of proof of concept, though an IDE running in the browser may not be the ideal development environment. Ruby on Rails already has Heroku, which shares many traits with the Forkolator vision, but Heroku's focus seems to be avoiding the pain of deploying an individual web application rather than Forkolator's explicit push for freedom in the web services arena.
The problems inherent in allowing users to modify the function of a server-side application are legion. Forkolator advocate Sandy Armstrong calls them "staggering", and they are: providing security, privacy, and stability while still allowing user modification is uncharted territory. Solving those problems in a sensible fashion will make or break the project, and it is far from clear that they can be solved.
There is talk that some of the problems inherent in the model could be solved in the same way that wiki defacements are handled: by the community. If a rogue user modified the web application to be a spambot, for example, other users could shut down or quarantine the fork. Data access is another area that will need close attention. Obviously the application needs read and write access to the database, but how can rogue applications be kept from trashing the data for everyone else? This goes well beyond defacing individual pages; wholesale removal of all content could be effected by a malicious application. The Forkolator team will need to come up with ways to deal with all of these kinds of problems and more.
Forkolator is in its infancy – perhaps gestation is more accurate – with an enormous number of serious technical hurdles to overcome, but it does provide an interesting view of how free web services could work. It is not a model that all web applications will adopt, with good reason, but for sites that are largely collaborative in nature, it could make a great deal of sense. Whether Forkolator, Heroku, or some other framework can actually deliver the vision remains to be seen. We will be watching.
The Ninth Real-Time Linux Workshop, held in early November in Linz, Austria, provides a look into the current direction of realtime Linux research, as well as applications of the technology. LinuxDevices has collected the available papers from the workshop, which make for interesting reading. Roughly half of the papers cover applications, from robotics to train monitoring, while the other half cover realtime development and measurements of the impact of various techniques.
Realtime Linux solutions have branched out quite a bit since the original RTLinux. Because that solution is patented – the patent is now owned by Wind River – and largely unmaintained, various other solutions are maturing. In addition, the realtime preemption (RT_PREEMPT) patches are making their way into the mainline kernel. For "hard" realtime, guarantees must be made about the interrupt (and other) latencies in the system; so far, Linux with RT_PREEMPT has not been proven to make those guarantees. It does, however, provide a solution described by some of the authors as "good enough" for many hard realtime applications.
Several of the papers covered various aspects of the performance of the RT_PREEMPT kernel. Worst-case latencies for low-end PowerPC and ARM processors (suitable for embedded applications) were measured and reported. Two different clock frequencies were used for each processor to determine if there was a simple relationship between processor speed and latency: "A better realtime behavior cannot be achieved by simply choosing a processor with a higher clock frequency."
Another paper measured the impact of RT_PREEMPT on general system performance to gauge the cost of those kernel changes. The authors found "no significant impact of [RT_PREEMPT] on the general performance of the system unlike the preempt patches of earlier kernel versions." They also measured latencies and jitter to try to determine its suitability for hard realtime tasks, finding that, even though there are no guaranteed worst-case latencies, RT_PREEMPT kernels are not "definitely unsuitable".
The third performance paper looked at the characteristics of an RT_PREEMPT kernel on an industrial controller board; the measurements were then validated using a paint robot. The authors' conclusion provides a nice summary of the progress the Linux kernel has made for realtime applications.
Research into how to effectively use multi-processor and multi-core systems for realtime tasks was the topic of another of the presentations. LITMUS^RT is a kernel modification that implements pluggable schedulers. It was created to test different kinds of scheduling policies, to discover which algorithms work best for realtime applications on multiple processors.
Disk I/O is an area that generally receives little notice in the realtime community, but one paper looks to change that. The authors examined existing I/O schedulers for realtime systems and found them lacking – the models used are too simplistic, failing to take prefetching and write-caching into account. They implemented a more realistic model in an I/O scheduler for RTLinux and reported their results.
The XtratuM "nanokernel" is a virtualization solution used in realtime applications. Linux has been ported to run on XtratuM for the x86 architecture, which allows it to run alongside a realtime OS. Two papers were concerned with XtratuM: one covered a FIFO implementation between XtratuM domains, allowing communication between guest OSes; the other covered porting XtratuM to the PowerPC architecture.
PaRTiKle OS is a compatible replacement for RTLinux, allowing applications built for that platform to run unchanged. It uses an entirely different technique, implementing the kernel system calls itself rather than using the Linux kernel. That makes the connection to Linux a bit tenuous, but, because it avoids the RTLinux patents and is LGPL-licensed, it may be a useful migration path for RTLinux users.
The participation of universities at the workshop is something that stands out right away. The vast majority of the papers came from universities, mostly European – unsurprising given the location – but from China and Mexico as well. LWN raised some questions about the lack of university participation in Linux development back in July; perhaps part of the answer lies in the realtime realm. It is unclear how much of this code will actually reach the mainline, but the number of university participants in the workshop is impressive.
This article notes just some of the papers presented; for those interested, there is much more available. The papers covering various applications where realtime Linux is actually being used are very detailed. We can expect to see Linux used more frequently in these kinds of applications in the future.

The election for the GNOME Foundation's board of directors is underway, with ten candidates running for the seven available positions. This election may seem like another boring bureaucratic exercise, but its results are important: GNOME is the desktop used by a great many free software users, and it is the platform supported by the Free Software Foundation.
In a number of ways, this seems like one of the more tense elections of its kind in our community. A number of items discussed last year (such as the hiring of a business development manager and/or executive director) remain undone. The workings of the board seem distant and obscure to some GNOME developers. There are clear tensions between some of the project's leaders. Criticism of the project's participation in the OOXML standardization process seems unlikely to let up anytime soon. And there seems to be a general sense of frustration that the board's members are too busy to get things done and too unwilling to delegate things to others. It's also worth noting that the winners will be serving a relatively long term; a change in the Foundation's bylaws means that the next election will happen sometime around June, 2009.
Given that, the themes which have come out in the electoral debate should be clear. How should the whole OOXML participation process have been handled? What should be done with the Foundation's money (about $150,000 in the bank and $50,000 in receivables, according to the minutes from a recent board meeting)? How should GNOME push forward into interesting areas, such as mobile applications and web-hosted services? And how can the board become more effective than it has been in the past?
Along with deciding on these issues, the new board will have one other new decision ahead of it. Until very recently, the Foundation has operated under a single president: a certain Miguel de Icaza. Miguel has been absent from the GNOME development community for some time, and many of the developers in that community have not found themselves in agreement with the public positions he has taken. The current board has convinced Miguel to resign the presidency, and has changed its practices to the effect that, in the future, the president will be appointed by the board. The interim president will be Quim Gil.
In that context, here are a few selections from recent statements by this year's candidates.
As for the issue of single standards, I hate it when people use standardization as a tool to take advantage over their competitors. "I got here first, so you can't" is exactly what's broken about the patent system right now. Think about it.
There are important topics like the Online Desktop and OOXML which many are interested in; however, I would like to bring to everyone's attention that GNOME accessibility could be positioned as a clear winner over Windows's MSAA and KDE accessibility, but instead GNOME's accessibility is on the defensive. From an accessibility perspective, GNOME could be winning the hearts and minds of corporations and government agencies; however, GNOME accessibility is being threatened by the deprecation of Orbit2 & its migration to DBus, and the migration of Microsoft's UIA to GNU/Linux. Why regress and/or re-engineer when we can beat the competition now?
Ballots must be returned by December 9, and the initial results from the election are due to be announced on December 11; stay tuned.
Page editor: Jonathan Corbet
Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds