The software side of the OLPC project is just as interesting as the hardware. The project has been occasionally criticized, though, for concentrating on hardware and being slow to get its software together. Much of this criticism is not really warranted; work on the Sugar environment has been underway for quite some time, and there are a number of interesting applications coming together for this platform. In an area or two, however, it does seem like problems are being addressed a little later than might have been optimal.
One of those areas, as evidenced by a series of discussions on the project's mailing list, is the issue of software updates. The OLPC project plans to deploy millions of laptops into environments where skilled system administrators are scarce. It seems certain that, sooner or later, there will be a need to update the software installed on those systems - perhaps urgently. It is reasonable to expect that the children using these laptops might just not be entirely diligent in checking for and installing updates. So something with a relatively high degree of automation will be required.
There are some additional complications which must be taken into account. The OLPC project has decided to dispense with Linux-style package managers in favor of a whole-image approach. OLPC has the resources to fund some fairly strong servers and network bandwidth, but putting together the resources which can handle pushing an update to millions of laptops at the same time might still be a challenge. In fact, simply coping with update-availability queries from that many laptops would require significant resources. So how will OLPC handle software updates? It turns out that they still don't really know.
Discussions started when Alexander Larsson showed up on the list with an announcement that he was working on the software update task. His proposal was an interesting combination of tools. In this scheme, a system image looks a lot like a git repository; it contains a "manifest" which (like a git index) has a list of files associated with SHA1 hashes of their contents. Updating a system involves getting a new manifest, seeing which files have changed, grabbing their contents, and dropping them in place. The actual safe updating of the system image is done by way of the Bitfrost security model which was announced last February.
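Neither Alex's code nor the actual OLPC manifest format is public here, but the manifest idea can be sketched in a few lines of Python (the file layout and function names are purely illustrative):

```python
import hashlib
import os

def build_manifest(root):
    """Map each file's relative path to the SHA1 of its contents,
    much like a git index."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def changed_files(old, new):
    """Files whose hashes differ (or which are new) must be fetched;
    everything else can be reused from the existing image."""
    return [path for path, sha in new.items() if old.get(path) != sha]
```

Updating a system then reduces to fetching the new manifest, computing the changed set, and retrieving only those files.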
Alex's proposal uses the Avahi resource discovery protocol to find updates. Once one system on a given network (often the school server) obtains a copy of the update, it advertises it via Avahi. All laptops on the network can then notice the availability of the update and apply it. Once a laptop has the update, it, too, can make that update available over the mesh network, facilitating the distribution of the update to all systems on the net.
Ivan Krstić, the author of Bitfrost, has a different approach. It starts by taking advantage of one of the OLPC's more controversial features: the phone-home protocol. Laptops have to make regular contact with special servers to check whether they have been stolen; laptops which have been reported stolen can be shut down hard by the anti-theft server. Ivan's update proposal has the laptops checking for software updates while doing the "am I stolen?" check; the servers will be able to reply that the laptop remains with its owner, but that it is running old software and should update.
If the laptop needs an update, it will attempt to obtain the necessary files (using rsync) from the school server. If these attempts fail for a day or so, the laptop will eventually fall back to an "upstream master server" for the update files. The use of rsync allows updates to be transferred in a relatively bandwidth-friendly manner. Only changed parts of changed files need be transmitted over the net. It also has the advantage of being a known quantity; there is no doubt that rsync can be made to work in this setting. There is some concern that rsync tends to be resource intensive on the server side, meaning that those upstream master servers would probably have to be relatively powerful systems. If all goes well, though, the load on those servers would be mitigated by distributing updates through the school servers and staggering updates over time.
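A rough sketch of that fallback logic, with hypothetical server addresses and illustrative rsync flags (the actual OLPC invocation was not published), might look like:

```python
import subprocess

# Hypothetical server URLs - the real deployment details were not public.
SCHOOL_SERVER = "rsync://schoolserver.local/updates/"
MASTER_SERVER = "rsync://master.laptop.org/updates/"

def choose_source(first_failure, now, fallback_after=24 * 3600):
    """Use the school server normally; fall back to the upstream
    master only once local attempts have been failing for about a day."""
    if first_failure is not None and now - first_failure >= fallback_after:
        return MASTER_SERVER
    return SCHOOL_SERVER

def fetch_update(source, dest, runner=subprocess.call):
    """Transfer only the changed parts of changed files via rsync's
    delta-transfer algorithm; the flags here are illustrative."""
    return runner(["rsync", "-a", "--partial", "--timeout=300", source, dest])
```

Staggering `first_failure` across laptops is what keeps the load on the master servers manageable.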
Ivan's proposal has also been criticized because it requires the use of central servers rather than distributing updates through the mesh network; he has defended that design choice on the list.
As an aside, this conversation also brought out some serious unhappiness about the use of Linux-VServer in Bitfrost. The (seemingly permanent) out-of-tree status of Linux-VServer makes it harder to support over the long term; it seems that the project may well move to a different solution once it has shipped its first set of systems.
Back on the update front, yet another proposal was posted by C. Scott Ananian. In this scheme, each laptop will occasionally poll a master server to see if an update is available; this poll might take the form of a DNS lookup. The more systems there are on the local network, the less frequently these polls will happen.
If a laptop discovers that an update is available, it will start pulling it down from the master server. This update will be divided into a number of small chunks, each of which is independently checksummed and signed. As those chunks come in, the receiving laptop will send them out to a multicast address on the local mesh; all other laptops in the area should then see it and grab a copy as it goes by. Once all of the required pieces have been received, the update can be applied. If a laptop misses a segment as it goes by, it will eventually time out and start actively grabbing (and rebroadcasting) pieces itself.
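The chunking scheme might look something like the following sketch (the chunk size and class names are invented for illustration; a real implementation would sign the checksum list as well):

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # illustrative; the real chunk size was not specified

def split_update(data, chunk_size=CHUNK_SIZE):
    """Divide an update image into independently verifiable chunks.
    A real implementation would also sign the checksum list."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    checksums = [hashlib.sha1(c).hexdigest() for c in chunks]
    return chunks, checksums

class Receiver:
    """Collects chunks overheard on the multicast channel, in any
    order, and reports which pieces must still be fetched."""
    def __init__(self, checksums):
        self.checksums = checksums
        self.chunks = {}

    def receive(self, index, chunk):
        # Corrupt or forged chunks are silently discarded; only data
        # matching the (signed) checksum list is kept.
        if hashlib.sha1(chunk).hexdigest() == self.checksums[index]:
            self.chunks[index] = chunk

    def missing(self):
        return [i for i in range(len(self.checksums)) if i not in self.chunks]

    def assemble(self):
        # Only valid once every piece has arrived.
        return b"".join(self.chunks[i] for i in range(len(self.checksums)))
```

Because each chunk verifies independently, a laptop can accept pieces from any neighbor on the mesh without trusting it.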
Which approach will be adopted is not clear; if the project has decided on a proposal (or a combination of them), that decision has not been posted on a public list. Time is tight, though, and a rock-solid solution will have to be in place before the first production systems ship. It is, after all, risky to count on being able to fix the remote update system (remotely) after the fact.
For a more general view of the state of OLPC software, take a look at this message from Walter Bender (the OLPC president for software and content). A lot is happening, but a number of desired features (including the famous "view source" key) will not be functioning when the first systems ship. The OLPC software, he says, is a work in progress - much like the rest of our software. The "progress" part is clearly happening, though, and OLPC appears to be on course to deliver a system which will bring computing power and network connectivity to millions of children - and which will change our views of how that should be done.
In early 2005, before the rewrite process really took off, Eben Moglen gave a talk which discussed what was coming. There were, he said, four completely different sets of goals which a new license had to meet.
That is a wide set of criteria to satisfy; this is not a challenge that just anybody would want to take on.
Regardless of what one thinks of the final result, one cannot fault Eben Moglen for not having thought hard about the process. Several committees were formed to represent the interests of different constituencies. Lawyers from all over the world were called together to work on language with truly global applicability. Major industry players were brought together on regular conference calls to discuss the progress of the license. Several draft releases were made - each with supporting documentation - and a mechanism by which anybody could make comments was created. Meetings were held all over the planet.
The process would appear to have met all of the objectives set out for it. The language of GPLv2 is very much oriented toward U.S. law; GPLv3 makes it global. The free software industry, for the most part, has made a show of welcoming the new license; this appears to be a code of conduct that it can live with. The people who identify themselves strongly with the free software movement seem to be quite happy with this license. And, one expects, Richard Stallman is not overly displeased with what he got.
Others in the community have been very vocally unhappy with GPLv3. To them, this license overreaches, trying to regulate how people use the software instead of just how they distribute it. It has too many legal kludges and special cases. It has, in the view of some people, failed to live up to the Free Software Foundation's promise that revisions of the GPL would be "similar in spirit" to GPLv2. Instead, they say, the FSF has taken this rewrite as an opportunity to force its views on a world which may not otherwise be ready to adopt those views.
The good news is that those people, and the projects they represent, need not move to GPLv3. Version 2 of the license remains valid and usable; despite its American-style language it appears to be enforceable over much of the world. Nobody is trying to force any project to change to a license it does not like.
Expect spirited discussions within some projects as they try to decide whether to move to the new license or not. But the wider discussion is done, and GPLv3 is a reality. It will take years to see what the effect of this new license is. The patent licensing and anti-DRM clauses may well cause some companies to reconsider the use of free software in their products; in the worst case we could be seeing the beginning of the BSD comeback. As worst cases go, that one can only be seen as relatively benign.
This rewrite has probably gone as well as it could have, given the parameters within which the FSF operates. Never before has the FSF sought so much input - and actually acted on it. Whether one likes the end result or not, it is appropriate to thank the FSF for putting in its best effort, and especially to thank Eben Moglen for devoting so much of his life to such a difficult project.
Greg Kroah-Hartman has been digging through the kernel source repositories for statistics much like your editor has. The resulting numbers are similar, though Greg has cranked through the full 2+ years of history in the mainline git repository and thus has a longer-term view. Among other things, he concluded that, over that period, the kernel developers have averaged almost three changes per hour - every hour. About 2000 lines of code are added every day. That is a pace of development matched by few - if any - projects anywhere in the world. Greg also notes that the number of developers involved is growing with each release. This, he says, is a good sign: the kernel community is bringing in new developers, which is important to keeping the process healthy.
Those interested in the detailed numbers can find them in Greg's paper (all of the OLS papers are available online). What many people found as interesting as the numbers, however, was Greg's chain-of-trust poster. He took the signed-off-by path from every patch in 2.6.22 and plotted all of them as a big graph. The result, showing the approximately 900 developers who got patches into 2.6.22, was a plot some 40 feet long which crashed almost every printer he tried to print it on. The plot for the entire git repository history would have been nice, but, Greg says, it would have printed out at 250 feet.
One might have expected the plot to look like a nice, neat tree showing how patches move up through the subsystem maintainers toward the mainline. In fact, says Greg, it's "a mess." The interactions between kernel developers are broad and do not fit into any sort of simple hierarchy; it is a loose and flexible system. Greg encouraged all developers represented on the plot to sign their little bubbles; after the poster has run up some frequent-flier miles and acquired enough signatures, it will be auctioned off for some good cause. Over the course of the conference, just over 100 developers added their signatures.
Jon "maddog" Hall is not quite the ubiquitous figure at Linux conferences that he was a few years ago. So it was nice to see him show up at OLS this year. Maddog remains an engaging and amusing speaker. His topic this time was how we are really going to get Linux systems to the masses - especially in the urban environments which house much of the population in the developing world. His answer is thin clients. He would like to see most users working with small, low-power, fanless boxes with a nice screen and the ability to talk with a central server which hosts software, user files, and more. All running Linux, of course.
His vision for where this could go is ambitious: he would like to see 150 million of these thin clients deployed in Brazil, for example, supported by as many as 2 million servers. This would bring affordable computing to almost all of Brazil's city dwellers in an ecologically sensible way while providing about 2 million technical jobs. And it could all be done through private initiatives. If this sort of development can be made to happen, says Maddog, we may truly achieve the potential offered by computers and by free software.
Martin Bligh has an interesting job: he gets to find out what causes the occasional machine to go wrong in the middle of the massive Google network. It can be a real pain when, on occasion, one machine out of thousands will crash or slow way down in a non-reproducible way. And only in production, of course. Martin described a few such problems and how they were tracked down through the use of a set of tracing tools used at Google. Finding this kind of problem requires the ability to collect data in a flexible manner without disrupting ongoing operations. Google has developed the tools to do this sort of tracing; much of the resulting work will be merged into the LTTng project and made available to the community.
The keynote speaker this year was James Bottomley, who spoke on the topics of diversity and evolution. Diversity is the stream of new ideas which are always being directed toward any active free software project; evolution is the (sometimes harsh) process which selects the ideas that actually work. Evolution in this context selects mostly for the patience and innovation of the development community - not necessarily for the usefulness of a given patch. KAIO (kernel asynchronous I/O support) was given as an example here.
Maintainers play a vital part in the evolutionary process. The key to being a good maintainer - one who helps move the community forward - is to not reject changes out of hand but to work with developers to bring things up to kernel standards. Being a maintainer, says James, is not about saying "no"; it is about saying "no, but..."
Fragmentation is often raised by proprietary vendors as a way of scaring people away from Linux. Bringing up fragmentation is a way of calling up memories of the Unix wars, where fragmentation truly was a damaging phenomenon for just about everybody involved. In the free software world, though, we don't have fragmentation; instead, we have forking. James claims that forking is an essential source of diversity; it's necessary for continued innovation. No project, he says, is truly open unless it can fork. In the end, openness and evolution drive forks to merge back together, propagating the good ideas that resulted.
One final topic was nearly inevitable: closed-source drivers. Unlike some other speakers, James was unwilling to characterize such drivers as being either illegal or immoral. Instead, he looked at the costs involved in keeping drivers closed source - costs for both the vendor and the users - and concluded that closed-source drivers are simply "bloody stupid." Happily, he says, some vendors are figuring this out. He announced that Adaptec has become the first vendor to make use of the Linux Foundation's NDA program to provide information for the creation of free drivers for its products.
This year marks the first time in some years that the Kernel Summit was not held just before the Linux Symposium started; many people expressed concerns that kernel developers would stay away this year and OLS would not be as interesting an event. There was a reduction in the number of high-profile kernel developers this year, though quite a few were still in evidence. The 100 signatures on the 2.6.22 poster make an effective demonstration that OLS is able to attract kernel developers even without the summit. One result of the change may be that a few more relatively new and inexperienced developers were able to present this year; that should be seen as a good thing.
Something that fewer people worried about, but which may have hurt the conference more, was the absence of the desktop developers summit. Desktop developers were generally absent, making the 2007 Linux Symposium, if anything, even more kernel-centered than in previous years. Bringing together developers from all over our wider community is an important function of a conference; one hopes that the desktop folks will be back next year.
On the other hand, it was a pleasure to see the large "Linux on Cell" contingent sent by Sony. The embracing of Linux by a company which has not always been known for its openness can only be a good thing, and nobody was complaining about the frequent giveaways of Playstation 3 systems - though your editor, with his usual luck, failed to win one. The Cell architecture seems destined to do interesting things, especially if the companies which are working with it continue to promote and support the use of Linux.
Back to the topic of next year: 2008 will be the tenth Linux Symposium; the organizers are clearly already thinking about how they can make it the best one yet. There is thought of moving it out of Ottawa to another Canadian city, and some possible changes to the organization of the event, including a track-oriented schedule and tutorial days, have been mentioned. This is all good; OLS is probably due for a makeover after all of these years. The 2007 event has shown that OLS can be successful on its own, without leaning on the kernel summit; perhaps 2008 will show us where this important community event can go in the future.
Honeypots, hosts specifically set up to attract abuse, have been around since at least 1990. Typically, they have been used to detect attacks against various network services, such as SMTP or SSH, but have not been very successful at detecting a wide range of web application attacks. Open proxy honeypots provide a more attractive target for malicious web traffic. Combining several open proxies leads to the Distributed Open Proxy Honeypots (DOPH) project, which centralizes the monitoring of open proxies installed all over the globe.
Standard honeypot techniques do not provide much of interest to a web attacker; there is no high-profile website to deface or high-value information stored there. The honeypot is unlikely to be able to respond correctly to attempts to probe for vulnerable web applications. This makes it difficult to gather information on the variety of web attacks that are being used "in the wild". What is needed is a way to listen in on malicious traffic, which is exactly what a proxy can do.
Probably the most famous open proxy was the default configuration of sendmail (before version 8.9.0 in 1998) which would forward email to and from any destination. Before the explosion of spam, it was considered neighborly to relay mail for anyone who asked.
A system configured as an open proxy for web traffic can record information about what it sees; with luck, some portion of it will be malicious. But there is a subtle problem with this approach: the proxy host may be facilitating attacks on vulnerable web servers - attacks which appear to originate with the proxy. There is also concern that recording the "conversation" could run afoul of wiretapping laws. These problems require an open proxy honeypot - at least one that wants to avoid legal trouble - to take some steps to minimize them.
Informing someone that you are recording is typically enough to avoid wiretapping violations, so the DOPH project uses two separate warnings. The first is on the proxy host's webpage, but since most malicious users will never see that page, an additional warning was added to the HTTP headers returned by the host. Typically only programs see those headers, but it is, at least, an attempt to inform the recorded party.
A much more difficult problem is stopping "bad" traffic while proxying "good" traffic. The proxy must seem to function correctly or it will never be used, but honeypot operators are interested in stopping web abuse, so they want to minimize the chances of being used in a real attack. It is a very fine line: they want the bad traffic to study, but not to pass on.
The DOPH project uses the ModSecurity module for the Apache web server to filter content based on a set of rules maintained by Got Root. The rules specify the signatures of various attacks, causing ModSecurity to flag them as it inspects the website traffic. To try to fool attackers and/or their programs, an HTTP 200 (OK) status is returned when an attack is detected. The ModEvasive Apache module is also used to detect and stop the proxy from being used in a denial of service attack.
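The effect of such rules can be illustrated with a toy Python filter; the signatures below are simplistic stand-ins for the Got Root rule set, and the return values are invented for the sketch:

```python
import re

# Illustrative signatures only; the real rule set is far larger and
# more sophisticated.
ATTACK_SIGNATURES = [
    re.compile(r"union[\s+]+select", re.IGNORECASE),  # SQL injection probe
    re.compile(r"<script\b", re.IGNORECASE),          # cross-site scripting
    re.compile(r"\.\./\.\./"),                        # directory traversal
]

def proxy_request(request, forward, log):
    """Log every request; when a signature matches, answer with a fake
    HTTP 200 instead of forwarding, so the attack tool sees 'success'
    while the real target is never touched."""
    log.append(request)
    for sig in ATTACK_SIGNATURES:
        if sig.search(request):
            return 200, "OK (decoy)"
    return forward(request)
```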
Fully configured versions of the proxy are available from the project as VMware images that can be run using the "free as in beer" VMware server software. The DOPH proxy communicates back to a central data collection server, sending the ModSecurity audit log information. This allows the project to aggregate the information to determine what kinds of attacks are currently ongoing. A Web Security Threat Report (PDF), covering the first few months of the project, was released in April. Seven geographically diverse hosts participated during the first reporting period; the project is always looking for more people willing to run proxy hosts to increase its data gathering abilities.
Open proxies are used by attackers to mask their true location. It is not uncommon for a chain of proxies to be used, as that makes it more difficult to track back to the originator. If the chain crosses borders, using proxy servers in different countries, each with its own laws and procedures for access to server log files, tracking becomes that much harder. The DOPH project does not specify how it publicizes its proxies - that might be giving too much information to attackers - but during the first four months of 2007, its servers handled around a million web requests, of which roughly 20% were malicious or suspicious.
Attackers are likely to get more sophisticated over time, and their tools will get better at recognizing these kinds of techniques, but there is still value in gathering the data. The proxy techniques will evolve as well, allowing statistics to be gathered and new attacks to be spotted. As attackers recognize the threat, they will be more inclined to use proxies in an attempt to mask their location, which provides a kind of feedback loop driving more traffic to the honeypots. Open proxy honeypots cannot and will not fool all attackers, but they provide a way to study some of them.
|Created:||June 28, 2007||Updated:||December 23, 2008|
|Description:||Avahi is vulnerable to a local denial of service that can be caused by making an erroneous call to the assert() function.|
|Package(s):||c-ares||CVE #(s):||CVE-2007-3152 CVE-2007-3153|
|Created:||June 28, 2007||Updated:||July 3, 2007|
|Description:||Versions of the c-ares DNS library below 1.4.0 are vulnerable to application DNS cache poisoning caused by a predictable DNS "Transaction ID" field in a DNS query.|
|Created:||July 2, 2007||Updated:||March 27, 2008|
|Description:||The Firebird DBMS has a buffer overflow vulnerability involving the processing of connect requests with an overly large p_cnct_count value. Remote attackers can send a specially crafted request to the server in order to potentially execute arbitrary code with the permissions of the Firebird user.|
|Created:||July 2, 2007||Updated:||July 3, 2007|
|Description:||The fireflier-server interactive firewall rule creation tool has a vulnerability in the way that it uses temporary files. The vulnerability may be used locally to remove arbitrary files from the system.|
|Created:||June 28, 2007||Updated:||February 27, 2008|
|Description:||The gimp image editor has several vulnerabilities, including a problem where it can open PSD files with excessive dimensions and a possible stack overflow in the Sunras loader.|
|Created:||July 4, 2007||Updated:||July 4, 2007|
|Description:||The GNU C library (prior to version 2.5-r4) suffers from an integer overflow vulnerability in the dynamic linker which could, maybe, be exploited to run code with root privileges.|
|Created:||July 2, 2007||Updated:||July 3, 2007|
|Description:||The gsambad GTK+ configuration tool for samba uses temporary files unsafely. A local attacker can use this vulnerability to truncate arbitrary files.|
|Created:||June 29, 2007||Updated:||July 3, 2007|
|Description:||Kazuhiro Nishiyama found a vulnerability in hiki, a Wiki engine written in Ruby, which could allow a remote attacker to delete arbitrary files which are writable to the Hiki user, via a specially crafted session parameter.|
|Created:||July 2, 2007||Updated:||July 3, 2007|
|Description:||The unicon-imc2 Chinese input method library does not safely use an environment variable. It is possible to use this to cause a buffer overflow and execute arbitrary code.|
|Package(s):||wireshark||CVE #(s):||CVE-2007-3390 CVE-2007-3392 CVE-2007-3393|
|Created:||June 28, 2007||Updated:||February 27, 2008|
|Description:||The wireshark network traffic analyzer has three vulnerabilities that can be used to create a denial of service. These include off-by-one overflows in the iSeries dissector, vulnerabilities in the MMS and SSL dissectors that can cause an infinite loop and an off-by-one overflow in the DHCP/BOOTP dissector.|
Page editor: Jonathan Corbet
Brief items

2.6.22-rc7 was released by Linus on July 1. "It's hopefully (almost certainly) the last -rc before the final 2.6.22 release, and we should be in pretty good shape. The flow of patches has really slowed down and the regression list has shrunk a lot." This is the last chance to test 2.6.22 and find bugs before they slip into the final release.
As of this writing, about 60 patches have been merged into the mainline git repository after -rc7. They are mostly fixes, but there is also the removal of a large set of private ioctl() functions from the libertas (OLPC) wireless driver.
The current -mm tree is 2.6.22-rc6-mm1. Anybody wanting to build and test this tree should certainly read Andrew's notes at the top of the announcement. Recent changes to -mm include kgdb support for several architectures, tickless support for the x86_64 architecture, the ability to force-enable the HPET timer even when the BIOS leaves it disabled, an updated file POSIX capabilities patch, and Intel IOMMU support.
Kernel development news
SELinux is refused because the shandy mixer opened a beer object and
shandy inherited beer typing
AppArmor gets drunk because /shandy and /beer are clearly different
The first talk (on the tickless kernel) was supposed to be given by Suresh Siddha, who was unable to attend the event; Len Brown presented it in his place. The dynamic tick patches have been covered here before. The talk was not really about how these patches work, but, instead, about the work which remains to be done to take full advantage of the tickless design. It seems that the work which has been done so far is just the beginning.
The problem is that, on a system used by Suresh and company, the average processor sleep time was still less than 1ms even after the dynamic tick code was enabled. Given that one of the driving reasons for dynamic tick was to let the processor sleep for long periods of time - thus saving power - this is a disappointing result. It turns out that there is a lot which can be done to improve the situation, though.
Step number one is to address a kernel-space problem: there are a lot of active kernel timers which tend to spread out over time. As a result, the kernel wakes up much more often than it would if the timers were sufficiently well coordinated to expire at the same time whenever possible. As it happens, many kernel timers do not need great precision; a timer which fires some number of milliseconds later than scheduled is not a problem. So, if the kernel could defer some timers to fire at the same time as others, it can reduce the number of wakeups. The deferrable timers patch does exactly that; the round_jiffies() function added in 2.6.19 can also help the kernel line up events. Adding this code brought the average sleep time up to 20ms, with the system handling 90 interrupts per second.
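The payoff from aligning imprecise timers can be shown with a small sketch in the spirit of round_jiffies() (Python here purely for illustration - the kernel code, of course, works in jiffies):

```python
import math

def round_expiry(expiry, period=1.0):
    """Round a timer's expiry up to the next whole period, in the
    spirit of the kernel's round_jiffies()."""
    return math.ceil(expiry / period) * period

def count_wakeups(expiries, coalesce):
    """Each distinct expiry time costs one processor wakeup; aligning
    imprecise timers maps many scattered expiries onto a few wakeups."""
    if coalesce:
        expiries = [round_expiry(t) for t in expiries]
    return len(set(expiries))
```

Six timers scattered over two seconds cost six wakeups; rounded to whole-second boundaries, they cost three - and the processor sleeps longer between each.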
Next is the problem of hardware timers. On the i386 architecture, the preferred timer is the local APIC (LAPIC) timer, which is built into the processor and very fast to program. Unfortunately, putting the processor into a deep sleep also puts the LAPIC timer to sleep, a situation Len compared to unplugging one's alarm clock before going to bed. In either case, oversleeping can be the unwanted result. The programmable interval timer (PIT) remains awake and is easily used, but it has a maximum event time of 27ms. If one wants the processor to sleep for longer than that, another solution must be found. That solution is the high-precision event timer (HPET), which has a maximum interval of at least three seconds. Getting access to the HPET can be hard, though; good BIOS support is spotty at best and the HPET is often disabled. If it can be forced on, however, the system can go to an average sleep period of about 56ms, handling 32 interrupts per second.
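The timer-selection constraints Len described can be condensed into a sketch; the sleep limits are the figures quoted in the talk, while the decision logic itself is a simplification:

```python
# Figures from the talk: the PIT can program events at most ~27ms out,
# while the HPET can manage at least three seconds.
PIT_MAX_SLEEP = 0.027
HPET_MAX_SLEEP = 3.0

def pick_timer(sleep_wanted, deep_sleep, hpet_available):
    """Choose a wakeup source for an idle processor. The LAPIC timer
    is preferred, but it stops in deep sleep states - the unplugged
    alarm clock - so a processor that wants to sleep deeply must fall
    back to the HPET (if the BIOS allows) or the short-range PIT."""
    if not deep_sleep:
        return "lapic", sleep_wanted
    if hpet_available:
        return "hpet", min(sleep_wanted, HPET_MAX_SLEEP)
    return "pit", min(sleep_wanted, PIT_MAX_SLEEP)
```

Without the HPET, a half-second sleep gets truncated to 27ms - which is why forcing the HPET on makes such a difference.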
Even better is to get the HPET out of the "legacy mode" currently used by Linux. This mode is simple to use, but it requires the rebroadcasting of timer interrupts on multiprocessor systems. But the HPET can work with per-CPU channels, eliminating this problem. The result: average sleep time grows to 74ms.
At this point, the problem moves to user space. Since the release of powertop, there has been a lot of progress in this area; user-space applications which cause frequent wakeups unnecessarily stand out immediately and can be fixed. But, as Len noted, "user space still sucks."
One gets the sense that Len is a little tired of people complaining about ACPI in Linux. His response was a talk on "ten ACPI myths" - though the list had grown to twelve by then.
#1: There is no benefit to enabling ACPI. Len's answer to this had two parts, the first of which being that, increasingly, there is no alternative. The older APM interface is deprecated, and, in particular, Microsoft's Vista has removed APM support altogether. So, soon, there will be no hardware support for APM at all; it is a dead standard. The MPS standard (used for discovering processors) is also old and dying. Like it or not, ACPI is needed to be able to make use of one's hardware.
On the positive side, using ACPI gives better access to hardware features like software-enabled power, sleep, and lid buttons. Smart battery status information becomes available, as well as the potential for reduced power consumption and better battery life. True hotplug and (especially) docking support also become possible with ACPI.
#2: Suspend-to-disk problems are ACPI's fault. In fact, ACPI is a very small part of the suspend-to-disk process - everything else is in other parts of the kernel code. If you have suspend-to-disk problems, suggests Len, "complain to Pavel [Machek], not me."
#3: If the extra buttons don't work, it's ACPI's fault. The issue here is that support for "hotkeys" is not actually a part of the ACPI specification. All of those extra buttons found on laptops are vendor-specific added features. The reverse-engineered drivers currently found in the kernel are a "heroic effort," but they should not be necessary. Vendors should be supplying drivers for their hardware.
#4: Boot problems with ACPI enabled are ACPI's fault. Len allows that this one might just be true some of the time. But disabling ACPI at boot time also disables other hardware features - the IO-APIC in particular. So any problems associated with those other parts of the system will be masked by turning off ACPI. It may look like ACPI was the problem, but the truth is more complicated.
#5: ACPI issues are due to sub-standard platform BIOS. It turns out that there are three general sources of ACPI incompatibilities. Just one of them is the BIOS violating the ACPI specification; incompatibilities which don't break Windows will often slip through the testing process. The firmware developer kit produced by Intel can help in this regard. Another source of problems is differing interpretations of the specification, which is a long and complex document. The Linux ACPI developers have been working to help clarify the specification when this sort of problem arises. Finally, there can also simply be bugs in the Linux-specific code.
#6: The Linux community cannot help to improve the ACPI specification. In fact, the ACPI team has been submitting improvements, mostly in the form of "specification clarifications." Many of those have been incorporated and shipped with specification updates.
#7: The ACPI code changes a lot but is not getting better. Intel has put together a test suite with over 2000 tests; ACPI changes must now pass that suite before being merged. The number of new bug reports has been dropping - though, perhaps, more slowly than one might like.
#8: ACPI is slow and bad for high-performance CPU governors. The ACPI interpreter is not used in performance-critical paths, and, thus, cannot be slowing things down. ACPI's role is in the setup and configuration process.
#9: The speedstep-centrino governor is faster than acpi-cpufreq. The acpi-cpufreq governor has seen considerable improvements, and is now able to access MSRs in a fast and (more importantly) supportable way. So its performance is where it should be, and the speedstep-centrino governor is scheduled for removal.
#10: More CPU idle power states are better. This may be true for any given processor, but you cannot compare processors on the basis of how many idle states they provide. All that really matters is how much power you save when you use those states.
#11: Processor throttling will save energy. The problem here is a confusion of "power" and "energy." A throttled processor may draw less power, but it has to run longer to accomplish the same work. So throttling the processor (while maintaining the same voltage) may have the effect of increasing energy use rather than reducing it. The better approach is almost always to run at the fastest clock frequency afforded by the current voltage level and get the work done quickly; Len characterized this as the "race to idle."
There are second-order effects to consider; in particular, batteries will last longer if they are discharged over longer periods of time. A throttled processor may also run cooler, allowing fans to be turned off. Throttling may be necessary for temperature regulation. But, from an energy-savings perspective, these are truly second-order effects.
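The power-versus-energy distinction is easy to work through with toy numbers. The wattages and times below are invented purely for illustration; only the shape of the arithmetic matters.

```python
# "Race to idle" arithmetic with invented numbers: a CPU drawing 30 W at
# full speed, 20 W when throttled, and 2 W when idle, running a job that
# takes 10 s at full speed, with a 20 s window before the next job arrives.

def energy_joules(active_watts, active_secs, idle_watts, idle_secs):
    # Energy is power integrated over time; here, two constant segments.
    return active_watts * active_secs + idle_watts * idle_secs

WINDOW = 20.0  # seconds available before the next piece of work

# Full speed: finish in 10 s, then idle for the remaining 10 s.
full_speed = energy_joules(30.0, 10.0, 2.0, WINDOW - 10.0)   # 300 + 20 = 320 J

# Throttled to half speed: the same work takes all 20 s, with no idle time.
throttled = energy_joules(20.0, 20.0, 2.0, 0.0)              # 400 J

# Throttling lowered the instantaneous power draw, but raised total energy.
assert throttled > full_speed
```

With these made-up numbers, racing to idle costs 320 J against 400 J for the throttled run, despite the throttled processor's lower power draw.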
#12: I can't contribute to improving ACPI in Linux. Like any other project, Linux ACPI would love to have more developers. And, failing that, one can always test kernels and report bugs. There is, in reality, plenty of opportunity for improving the ACPI code.
Len's final talk moved away from power consumption toward its effects - the generation of heat, in particular. The creation of excess heat is not a welcome behavior in any device, but it becomes especially undesirable in handheld devices. Devices which make the user's hand sweat are less fun to use; those which get too hot to hold comfortably can be entirely unusable. So temperature management is important. But the nature of these devices can make thermal regulation tricky: there's no room for fans in a Linux-powered cellular phone, and the dissipation of heat can be hard in general.
The ACPI 3.0 specification includes a complicated thermal model. The device is divided up into zones, and each component has its thermal contribution to each zone characterized. Implementing this specification is a complex and difficult task - enough so that the Linux ACPI developers have no intention of doing it. They will, instead, focus on something simpler.
That something is the ACPI 2.0 thermal model. It includes thermal zones, each of which comes with temperature sensors and a set of trip points. The "critical shutdown" trip point is set somewhere just short of where the device begins to melt; should things get that warm, the device just needs to turn itself off as quickly as possible. Various other trip points will be encountered first; they should bring about increasingly strong measures for controlling temperature. These can include turning on fans (if they exist), throttling devices, or suspending the system to disk. ACPI 2.0 includes an embedded controller which monitors the system's temperature sensors and sends events to the CPU when something interesting happens.
The in-progress thermal regulation code just uses the existing critical shutdown mechanism built into ACPI. There is also support for some of the passive trip points which bring about CPU throttling. For the non-processor thermal zones, though, the best thing to do is to let user space figure out how to respond, so that's what the ACPI code will do. There will be a netlink interface through which temperature events can be sent, and a set of sysfs directories for reading sensor values. The sysfs tree will also include control files which can be used by a user-space daemon to throttle specific devices in response to temperature events.
In the end, the kernel is really just a conduit, conducting events and control settings between the components of the device and user space. There were some questions on whether there will be a standardized set of sysfs knobs for every device; the answer appears to be "no." Each device is different, with its own control parameters; it is hard to create any sort of standard which can describe them all. Beyond that, the target environment is embedded devices, each of which is unique. It is expected that each device will have its own special-purpose management daemon designed especially for it, so there is no real benefit in trying to make things generic.
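To make the division of labor concrete, here is a sketch of the trip-point logic such a user-space policy daemon might implement. The trip-point temperatures and action names are all hypothetical; on a real system they would come from the platform's ACPI tables by way of the sysfs interface described above.

```python
# A minimal model of ACPI 2.0-style trip-point handling: increasingly
# aggressive measures are taken as the temperature climbs toward the
# critical shutdown point. All thresholds and actions here are invented.

def pick_response(temp_c, trip_points):
    """Return the strongest action whose trip point has been crossed.

    trip_points is a list of (threshold_celsius, action) pairs sorted by
    ascending threshold.
    """
    action = "none"
    for threshold, name in trip_points:
        if temp_c >= threshold:
            action = name
    return action

TRIPS = [
    (60, "fan_on"),              # active cooling, if a fan exists
    (75, "throttle_cpu"),        # passive cooling via throttling
    (90, "suspend_to_disk"),     # drastic, but preserves state
    (100, "critical_shutdown"),  # just short of where the device melts
]

assert pick_response(55, TRIPS) == "none"
assert pick_response(80, TRIPS) == "throttle_cpu"
assert pick_response(101, TRIPS) == "critical_shutdown"
```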
The impression one gets from all of these talks is that quite a bit is happening in the power management area - a part of Linux which, for some time, has been seen as falling short of what it really needs to be. The increasing use of Linux in embedded systems can only help in this regard; a number of vendors have a strong interest in improved support for intelligent use of power. Given time and continued work, power management may soon be one of those past problems which is no longer an issue.

The completely fair scheduler (CFS) patch continues to develop; the current version, as of this writing, is v18. One aspect of CFS behavior is seen as a serious shortcoming by many potential users, however: it only implements fairness between individual processes. If 50 processes are trying to run at any given time, CFS will carefully ensure that each gets 2% of the CPU. It could be, however, that one of those processes is the X server belonging to Alice, while the other 49 are part of a massive kernel build launched by Karl the opportunistic kernel hacker, who logged in over the net to take advantage of some free CPU time. Assuming that allowing Karl on the system is considered fair at all, it is reasonable to say that his 49 compiler processes should, as a group, share the processor with Alice's X server. In other words, X should get 50% of the CPU (if it needs it) while all of Karl's processes share the other 50%.
This type of scheduling is called "group scheduling"; Linux has never really supported it in any of its schedulers. It would be nice if a "completely fair scheduler" headed for the mainline were completely fair in this regard as well. Thanks to work by Srivatsa Vaddagiri and others, things may well happen in just that way.
The first part of Srivatsa's work was merged into v17 of the CFS patch. It creates the concept of a "scheduling entity" - something to be scheduled, which might not be a process. This work takes the per-process scheduling information and packages it up within a sched_entity structure. In this form, it is essentially a cleanup - it encapsulates the relevant information (a useful thing to do in its own right) without actually changing how the CFS scheduler works.
Group scheduling is implemented in a separate set of patches which are not yet part of the CFS code. These patches turn a scheduling entity into a hierarchical structure. There can now be scheduling entities which are not directly associated with processes; instead, they represent a specific group of processes. Each scheduling entity of this type has its own run queue within it. All scheduling entities also now have a parent pointer and a pointer to the run queue into which they should be scheduled.
By default, processes are at the top of the hierarchy, and each is scheduled independently. A process can be moved underneath another scheduling entity, though, essentially removing it from the primary run queue. When that process becomes runnable, it is put on the run queue associated with its parent scheduling entity.
When the scheduler goes to pick the next task to run, it looks at all of the top-level scheduling entities and takes the one which is considered most deserving of the CPU. If that entity is not a process (it's a higher-level scheduling entity), then the scheduler looks at the run queue contained within that entity and starts over again. Things continue down the hierarchy until an actual process is found, at which point it is run. As the process runs, its runtime statistics are collected as usual, but they are also propagated up the hierarchy so that its CPU usage is properly reflected at each level.
So now the system administrator can create one scheduling entity for Alice, and another for Karl. All of Alice's processes are placed under her representative scheduling entity; a similar thing happens to all of the processes in Karl's big kernel build. The CFS scheduler will enforce fairness between Alice and Karl; once it decides who deserves the CPU, it will drop down a level and perform fair scheduling of that user's processes.
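The descent through the hierarchy can be modeled in a few lines. This is a toy sketch, not the actual CFS code: "most deserving" is reduced here to "least accumulated CPU time," a stand-in for CFS's real virtual-runtime bookkeeping.

```python
# A toy model of hierarchical group scheduling: a scheduling entity is
# either a process (a leaf) or a group with its own run queue, and the
# scheduler descends from the top until it reaches an actual process.

class Entity:
    def __init__(self, name, children=None):
        self.name = name
        self.cpu_time = 0.0             # accumulated usage at this level
        self.children = children or []  # empty for an actual process

def pick_next(run_queue):
    entity = min(run_queue, key=lambda e: e.cpu_time)
    if entity.children:                 # a group: descend into its queue
        return pick_next(entity.children)
    return entity                       # a process: run it

def account(process, delta, parents):
    # Usage propagates up the hierarchy so each level remains fair.
    process.cpu_time += delta
    for p in parents:
        p.cpu_time += delta

# Alice's lone X server vs. Karl's 49 compilers, grouped under one entity.
x_server = Entity("X")
compilers = [Entity("cc%d" % i) for i in range(49)]
karl = Entity("karl", children=compilers)
top = [x_server, karl]

# Simulate 100 scheduling rounds of 1 ms each.
for _ in range(100):
    chosen = pick_next(top)
    parents = [karl] if chosen is not x_server else []
    account(chosen, 1.0, parents)

# Fairness holds between Alice and Karl, not between individual processes.
assert x_server.cpu_time == karl.cpu_time == 50.0
```

After 100 one-millisecond rounds, the X server and Karl's group have each received 50ms, with the 49 compilers splitting Karl's half among themselves.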
The creation of the process hierarchy need not be done on a per-user basis; processes can be organized in any way that the administrator sees fit. The grouping could be coarser; for example, on a university machine, all students could be placed in one group and faculty in another. Or the hierarchy could be based on the type of process: there could be scheduling entities representing system daemons, interactive tools, monster cranker CPU hogs, etc. There is nothing in the patch which limits the ways in which processes can be grouped.
One remaining question might be: how does the system administrator actually cause this grouping to happen? The answer is in the second part of the group scheduling patch, which integrates scheduling entities with the process container mechanism. The administrator mounts a container filesystem with the cpuctl option; scheduling groups can then be created as directories within that filesystem. Processes can be moved into (and out of) groups using the usual container interface. So any particular policy can be implemented through the creation of a simple, user-space daemon which responds to process creation events by placing newly-created processes in the right group.
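What that administration might look like can be sketched as follows, assuming the container-filesystem interface described above. A temporary directory stands in for the real mount point so the sketch can run anywhere; the directory-and-tasks-file layout follows the container patches, but the exact names should be treated as assumptions.

```python
import os
import tempfile

# Stand-in for the mounted container filesystem, e.g. something like:
#   mount -t container -o cpuctl none /containers
root = tempfile.mkdtemp()

def create_group(name):
    # Creating a directory creates a scheduling group; in the real
    # filesystem the control files would appear automatically.
    os.mkdir(os.path.join(root, name))
    open(os.path.join(root, name, "tasks"), "w").close()

def move_task(pid, group):
    # Writing a PID to the group's tasks file reparents that process's
    # scheduling entity under the group.
    with open(os.path.join(root, group, "tasks"), "a") as f:
        f.write("%d\n" % pid)

create_group("alice")
create_group("karl")
move_task(1234, "alice")                 # Alice's X server (PID invented)
for pid in range(2000, 2049):            # Karl's 49 compile jobs
    move_task(pid, "karl")
```

A small daemon watching process-creation events could apply the same two calls to implement whatever grouping policy the administrator wants.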
In its current form, the container code only supports a single level of group hierarchy, so a two-level scheme (divide users into administrators, employees, and guests, then enforce fairness between users in each group, for example) cannot be implemented. This appears to be a "didn't get around to it yet" sort of limitation, though, rather than something which is inherent in the code.
With this feature in place, CFS will become more interesting to a number of potential users. Those users may have to wait a little longer, though. The 2.6.23 merge window will be opening soon, but it seems unlikely that this work will be considered ready for inclusion at that time. Maybe 2.6.24 will be a good release for people wanting a shiny, new, group-aware scheduler.

The proposed fallocate() system call was covered here back in March. Since then there has been quite a bit of discussion, but there is still no fallocate() system call in the mainline - and it's not clear that there will be in 2.6.23 either. There is a new version of the fallocate() patch in circulation, so it seems like a good time to catch up with what is going on.
Back in March, the proposed interface was:
long fallocate(int fd, int mode, loff_t offset, loff_t len);
It turns out that this specific arrangement of parameters is hard to support on some architectures - the S/390 architecture in particular. Various alternatives were proposed, but getting something that everybody liked proved difficult. In the end, the above prototype is still being used. The S/390 architecture code will have to do some extra bit shuffling to be able to implement this call, but that apparently is the best way to go.
That does not mean that the interface discussions are done, though. The current version of the patch supports four possibilities for mode; the two of most interest here are FA_DEALLOCATE and FA_UNRESV_SPACE, both of which remove previously-allocated blocks from a file.
As an example of how the last two operations differ, consider what happens if an application uses fallocate() to remove the last block from a file. If that block was removed with FA_DEALLOCATE, a subsequent attempt to read that block will return no data - the offset where that block was is now past the end of the file. If, instead, the block is removed with FA_UNRESV_SPACE, an attempt to read it will return a block full of zeros.
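The difference can be made concrete on a toy in-memory "file." The mode names come from the patch under discussion, but the functions below are a simulation of the described semantics, not real fallocate() calls.

```python
# Model a file as a list of blocks and simulate the two deallocation modes
# described above on the file's last block.

BLOCK = 4096

def fa_deallocate(blocks, index):
    # Removing the last block shortens the file: the offset where the
    # block was is now past the end of the file.
    del blocks[index]

def fa_unresv_space(blocks, index):
    # The block's space is given back, but the file size is unchanged;
    # the result is a hole.
    blocks[index] = None

def read_block(blocks, index):
    if index >= len(blocks):
        return b""               # past end of file: no data
    if blocks[index] is None:
        return b"\0" * BLOCK     # a hole reads back as zeros
    return blocks[index]

f1 = [b"a" * BLOCK, b"b" * BLOCK]
fa_deallocate(f1, 1)
assert read_block(f1, 1) == b""             # no data past EOF

f2 = [b"a" * BLOCK, b"b" * BLOCK]
fa_unresv_space(f2, 1)
assert read_block(f2, 1) == b"\0" * BLOCK   # a block full of zeros
```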
It turns out that there are some differing opinions on how this interface should work. A trivial change which has been requested is that the FA_ prefix be changed to FALLOC_ - this change is likely to be made. But it seems that there are a number of other flags that people would like to see as well.
All told, it's a significant number of new features - enough that some people are starting to wonder if fallocate() is the right approach after all. Christoph Hellwig, in particular, has started to complain; he suggests adding something small which would be able to implement posix_fallocate() and no more. Block deletion, he says, is a different function and should be done with a different system call, and the other features need more thought (and aggressive weeding). So it's unclear where this patch set will go and whether it will be considered ready for 2.6.23.
Patches and updates
Core kernel code
Filesystems and block I/O
Virtualization and containers
Page editor: Jonathan Corbet
News and Editorials
Package management is one of the key defining characteristics of a distribution. The question of where package management is going should be of interest to anyone involved with a distribution or administering a Unix-based box of any sort. In many distributions, package management appears to have reached a near standstill. For example, the RPM format has hardly changed in years. In Gentoo, however, ongoing development of package management is so popular that three separate, actively developed package managers exist.
Over the past couple of years, many developers have grown increasingly unsatisfied with Gentoo's default package manager, Portage. Portage is a high-level interface to Gentoo's package format, a series of scripts called ebuilds. Unfortunately, Portage was never really designed as a whole; features have been added ad hoc over the course of many years. Today, it is extremely difficult to add features to Portage or interface with it, because of complex interdependencies and an essentially nonexistent API. Consequently, two groups of developers decided to start fresh with two separate projects: Paludis and Pkgcore.
Paludis is implemented in C++ and bash, with a C++ API and an optional Ruby scripting API. One of the biggest features that Portage lacks but Paludis supports is the ability to remove all unused dependencies of a package when removing that package. It also has a much more flexible configuration system, user-definable hooks into the build process, user-defined sets of packages, and clean support for multiple repositories. In Portage, secondary repositories (called "overlays") are second-class citizens. Furthermore, Paludis adds a number of features, requested by Gentoo developers for years, which bring more flexibility to how dependencies can be specified. Paludis itself is divided into a number of modules.
Paludis includes experimental Portage support as of the end of March. This means you can try it out without wasting time migrating config files over, which significantly lowers its barrier to adoption.
Pkgcore is implemented in Python, the same language as Portage, with a few time-critical modules in C. It was designed so that there is no reason it has to be Gentoo-specific—it could easily support other package formats. Its philosophy is to maintain complete backwards compatibility with Portage while recoding it in a clean, maintainable, extensible fashion. Some of the code written for Pkgcore has been pulled back into Portage, such as the cache-handling code. The 0.3 release finally reached a point of usability, adding frontends with comprehensible output—one mirroring Portage and another mirroring Paludis. Despite being written in Python, it runs surprisingly fast—a good example that not all programs written in high-level languages need be slow. The Pkgcore API is also viewable online, and the project ships a number of utilities.
A couple of interesting features Pkgcore has are N-parent inheritance of eclasses (a Portage feature that allows inheritance to be used in bash code) and an ebuild daemon. The daemon has a number of benefits including near-linear scaling to multiple processors for some tasks—Pkgcore's home page cites ~90% scaling on a quad Pentium 3. And of course, one benefit over Paludis is that you don't need to use the occasionally less-than-speedy g++ to compile it.
Pkgcore and Paludis seem fairly well-matched in the features department. They both support sets, the additional dependency flexibility, integrated checking for security vulnerabilities, and Portage's on-disk format. Another useful feature they both support is the ability to restrict which packages may be installed based on their licenses. This gives users the choice of how free they want their installations to be, from FSF-compliant to packed with proprietary software. Both projects have active teams of between 5 and 10 developers each. In comparison, Portage is primarily maintained by potential masochist Zac Medico—a glance through the ChangeLog showed that he was the only committer since January.
The advent of multiple package managers accelerated Gentoo's need to adopt a formal Package Manager Specification. In the past, new features or breaks in backwards compatibility in Portage simply forced a wait of roughly six months, at which point it was assumed that nobody was using the old Portage versions anymore. The problems with that approach should be readily apparent. When new package managers came along, additional questions arose about which aspects of ebuild behavior are intrinsic and which are Portage-specific details. With only one implementation and no spec, it is hard to draw a line.
Together, these two developments motivated the creation of an Ebuild API, or EAPI. The current generation will be EAPI=0, which is being documented in a formal specification. Once this spec is done, Gentoo will have a process in place for dealing with ebuilds using new features, and for dealing with compatibility breaks, by setting in each ebuild the EAPI that ebuild supports. This will enable near-instant use of new features that Gentoo developers have been awaiting for years, as well as agreement on how all of these package managers must behave in common and where they have the flexibility to differ.
New Releases

Slackware 12.0 has been released. "Well folks, it's that time to announce a new stable Slackware release again. So, without further ado, announcing Slackware version 12.0! Since we've moved to supporting the 2.6 kernel series exclusively (and fine-tuned the system to get the most out of it), we feel that Slackware 12.0 has many improvements over our last release (Slackware 11.0) and is a must-have upgrade for any Slackware user."

The second Ubuntu "Gutsy Gibbon" test release is also out. "Tribe 2 is the second in a series of milestone CD images that will be released throughout the Gutsy development cycle. The Tribe images are known to be reasonably free of show-stopper CD build or installer bugs, while representing a very recent snapshot of Gutsy."
Distribution News

"GCC 4.2 was released on May 13 and has been in unstable since roughly that time. The default version of gfortran was recently switched to 4.2 and the Debian GCC maintainers would like to move to 4.2 as the default compiler in unstable for all architectures and for all languages with the exception of Java (which will follow later). This message describes the plan to make this transition possible."
New Distributions

Fluxbuntu is an LPAE-standard compliant, Ubuntu-based distribution. It is lightweight, swift and efficient. These features support the Fluxbuntu Linux Project's goal of running on a wide range of mobile devices and computers (low-end & high-end). According to the Release Schedule, the first test release is expected to coincide with the Gutsy Tribe 3 release on July 19. ["LPAE" appears to stand for "lightweight, productive, agile, and efficient" - we had to look it up too.]
Distribution Newsletters

Full Circle Magazine, the independent magazine for the Ubuntu community, is online. Topics include: Flavour of the Month - Kubuntu, How-To - Ubuntu on the Intel Mac Mini, Virtual Private Networking, Learning Scribus Part 2 and Ubuntu for your Grandma!, Review - System 76 Darter, Top 5 - Widgets and MyDesktop, MyPC and more!

DistroWatch Weekly for July 2, 2007 is out. "The release of the General Public Licence version 3 and the new Linux edition of Google Desktop were the primary generators of headlines on most Linux news sites during the past week. In contrast, all was quiet on the distribution development front, with only Dreamlinux, Scientific Linux and a few minor projects announcing new stable releases. But don't despair; this week's DistroWatch Weekly is still packed with interesting topics, including an interview with Clement Lefebvre from Linux Mint, a rebuttal by John Murga from the Puppy Linux forums, and information about some other interesting news of the week, such as the new PC-BSD LiveCD and the latest version of the GNU/Linux distro timeline. And if you are looking for something to test and play with during the slow months of July and August, don't miss the new distributions section which presents no fewer than 6 (six!) new distro projects that were submitted to DistroWatch last week."
Distribution meetings

"Mako gave a fantastic feelgood talk about how Debian is really interesting to all sorts of people from outside the direct field of computing, like sociologists, lawyers, voting reform advocates, etc. It made us all proud to be part of Debian, and of course gave us an insight into how what we do affects the world at large."
Newsletters and articles of interest

One article takes a quick look at the release of Scientific Linux 4.5. "The Scientific Linux project last week announced the release of Scientific Linux 4.5, an install-only distribution rebuilt from source code for Red Hat Enterprise Linux 4. It features a 2.6.18 kernel, GNOME default desktop, multilingual support, and Xen paravirtual guest capabilities."
Distribution reviews

One review looks at Yoper 3.0. "Yoper claims to be a high-performance Linux distribution optimized for newer processors. It incorporates components from other distros, but its packages have been built from scratch to provide enhanced performance. I tested a beta of Yoper 3.0 on my desktop a year ago and was so impressed that when 3.0 was released this month, I installed it on my new Hewlett-Packard Pavilion dv6105 notebook. Using it, however, left me disappointed."

Another review covers TinyMe, a scaled-down version of PCLinuxOS 2007. "TinyME might make a good start for a server as all the important LAMP packages are in the PCLOS repositories as well. One doesn't need all the extra goodies that come with the big desktops these days for a server and LXDE would be good for those that like graphical server tools such as webmin. I didn't have an older computer handy on which to test it, but I imagine it would be great for it. PCLOS developers build support for about everything into their kernels and LXDE only requires a Pentium II and 128 MB ram if one wishes to use apps like Firefox or OpenOffice.org. It is said that LXDE alone can run in as little as 64 MB ram."
Page editor: Rebecca Sobol
Tim Bray explains his reasons for creating mod_atom.
Features and goals of the mod_atom project, along with its current development status, are spelled out in Bray's writeup. The author is requesting comments and contributions, and a project TODO list has been published for those who are interested in lending a hand.
Clusters and Grids

Allmydata-Tahoe 0.4 has been announced: "We are pleased to announce the release of version 0.4 of Allmydata-Tahoe, a secure, decentralized storage grid under a free-software licence. This is the follow-up to v0.3, which was released June 6, 2007."
Filesystem Utilities

A new version of GNU ddrescue has been announced. "GNU ddrescue copies data from one file or block device (hard disc, cdrom, etc) to another, trying hard to rescue data in case of read errors. The basic operation of ddrescue is fully automatic. That is, you don't have to wait for an error, stop the program, read the log, run it in reverse mode, etc."
Mail Software

The use of mailbox.py for processing email is discussed in an O'Reilly article. "Archived mail can be stored using many different file formats. The mailbox module in the Python standard library supports reading and modifying five different formats, all formats that are primarily used on Unix systems. The mailbox module was greatly enhanced in Python 2.5. For a long time the mailbox module only supported reading mailboxes, not modifying them. Gregory K. Johnson, as his project for Google's 2005 Summer of Code, wrote code for adding and deleting messages; these new features went into Python 2.5, released in September 2006."
Networking Tools

A new version of jwhois has been announced. "This is jwhois, an improved Whois client capable of selecting the Whois server to query based on a flexible configuration file using either regular expressions or CIDR blocks."
Web Site Development

The latest Django status update covers recent developments in the Django web development platform.
Audio Applications

A new version of Ardour, a multi-track audio workstation, has been released. See the Changes document for a list of new features and bug fixes.

QjackCtl 0.2.23 has also been released; it is "the one first ever introducing explicit JACK MIDI support (JACK >= 0.107.0)."
Calendar Software

The Mozilla Calendar Project has released Lightning 0.5 and Sunbird 0.5. "Notable improvements include a polished user interface, automatic data migration from iCal and Evolution, improved printing, better integration of Lightning into Mozilla Thunderbird and support for Google Calendar (via the Provider for Google Calendar extension)".
Desktop Environments

An article recommends ten ways to improve the GNOME desktop. "We love GNOME. Sometime around 2.6 it started becoming really, really damned good, and a lot faster and more responsive. All kinds of nice things like Network Manager, the Nautilus CD burner and the SFTP support popped up. It helps that most major Linux apps like Firefox, Evolution, GAIM, and OpenOffice use the same toolkit and themes too. Obviously we're not alone either: Ubuntu, RHEL and SuSE all use GNOME by default. Here's a bunch of ideas to improve it." (Found on GnomeDesktop.org).
Desktop Publishing

The second release candidate of LyX 1.5.0 is available. "We expect this to be the last release before 1.5.0, and until the first stable release only critical bugs and regressions will be addressed. We encourage users to try this release candidate and report any feedback or problems to lyx-devel at lists.lyx.org. Compared with the first release candidate we have mostly fixed bugs and polished the graphical interface."
Electronics

A new version of gEDA/gaf has been announced. "The focus of this release was bug fixing. This is also the first release created using git."

Kicad, an electronic schematic and printed circuit CAD application, is out with bug fixes and other enhancements to the pcbnew and eeschema components.
Financial Applications

A new version of SQL-Ledger, a web-based accounting package, is out with various enhancements. See the What's New document for details.
Games

A new release has been announced on the WorldForge virtual world game site. "This version is the first to use CEGUI 0.5 and Ogre 1.4. It also includes a new entity editing framework which allows for real time authoring of the world."
Graphics

A new cairo release is out. "This is the fifth update in cairo's stable 1.4 series. It comes roughly three weeks after the 1.4.8 release. The most significant change in this release is a fix to avoid an X error in certain cases, (that were causing OpenOffice.org to crash in Fedora). There is also a semantic change to include child window contents when using an xlib surface as a source, an optimization when drawing many rectangles, and several minor fixes."
GUI Packages

An article covers the release of the Qyoto C#/Mono bindings for Qt 4.3. "After the recent final release of QtJambi, Trolltech's Java bindings, I'm pleased to announce another new member of the Qt bindings family, the Qyoto C#/Mono bindings for Qt 4.3, which are available for download on the Qyoto/Kimono site, where there is also a help forum for your Qyoto programming questions." The article also mentions the release of QtRuby 1.4.9.
Interoperability

A new Wine release has been announced. Changes include: many MSHTML improvements, a few more sound fixes, many Direct3D fixes, and lots of bug fixes.
Music Applications

A Quickstart Guide and Primer have been published for the Aeolus organ synthesizer application.
Office Applications

A new version of GNU gv has been announced. "GNU gv allows to view and navigate through PostScript and PDF documents on an X display by providing a user interface for the ghostscript interpreter. gv is an improved derivation of Timothy O. Theisen's Ghostview, developed by Johannes Plass."
Speech Software

A new version of the eSpeak text to speech converter is out. Changes include a move to GPLv3, bug fixes, language improvements, a new options parameter, and new breath attributes.
Web Browsers

One item notes the final update to the Firefox 1.5 series. "Users of Mozilla Firefox 1.5.0.x have been offered a major update to Mozilla Firefox 2.0.0.x via the automatic update notification. As reported earlier, this is the last release from the Firefox 1.5 Branch. As per the ReleaseRoadmap policy, the previous release of Firefox (1.5 in this case) is supported for six months beyond the release of a major revision (2.0 in this case)."

Mozilla has announced the release of Gran Paradiso Alpha 6. "New features in this development milestone of Mozilla Firefox 3 include an upgraded SQLite engine, improved cookie performance, support for site-specific text size preference and various Gecko 1.9 bug fixes. Some of the changes in Alpha versions of Gecko 1.9 affect the web and platform compatibility of Gran Paradiso Alpha 6."
Miscellaneous

A new version of GNU Cpio, a classic Unix application for archiving files, is out with a bug fix.

A WengoPhone update has also been released: "This is a bugfix release of the 2.1 series of the WengoPhone, which fixes a number of important problems, and updates the translations of 13 languages, bringing the number of fully translated languages to 15."
Languages and Tools
Lisp

A new Lisp release has been announced. "This version improves interrupt safety and bignum printing performance, has some bug fixes, and more."
Libraries

A new version of GNU libmatheval is available. "GNU libmatheval is a library that makes it possible to calculate mathematical expressions for given variable values and to calculate expression's derivative with respect to a given variable. The library supports arbitrary variable names in expressions, decimal constants, basic unary and binary operators and elementary mathematical functions."
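The announcement describes the library's behavior without showing code. As a rough illustration of that behavior only (this is deliberately not libmatheval's C API, and it approximates the derivative numerically, where libmatheval differentiates symbolically), here is a small Python sketch:

```python
# Illustration only: mimics what GNU libmatheval offers (evaluate an
# expression for given variable values; differentiate with respect to
# a variable), not its actual C interface.
import ast
import math

def evaluate(expr, **variables):
    """Evaluate an arithmetic expression string with variable bindings."""
    env = dict(variables)
    env.update(sin=math.sin, cos=math.cos, exp=math.exp, log=math.log)
    tree = ast.parse(expr, mode="eval")  # accepts expressions, not statements
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, env)

def derivative(expr, var, at, h=1e-6):
    """Central-difference approximation of d(expr)/d(var) at a point."""
    lo = evaluate(expr, **{**at, var: at[var] - h})
    hi = evaluate(expr, **{**at, var: at[var] + h})
    return (hi - lo) / (2 * h)

print(evaluate("x*x + 2*x", x=3.0))                        # 15.0
print(round(derivative("x*x + 2*x", "x", {"x": 3.0}), 6))  # 8.0
```

The real library parses the expression once into its own tree and returns an exact symbolic derivative; the numerical approximation here is only a stand-in for that step.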
Page editor: Forrest Cook
Linux in the news
Recommended Reading

A transcription is available of a talk that FSF attorney Eben Moglen gave to the Scottish Society for Computers and Law on June 26. "The theme is the connection between GPLv3, mathematics, and the sharing of human knowledge. As a jumping off point, he asks us to imagine a world in which arithmetic has become property. Even stating that diminishes it, actually."

Another author writes about the impermanence of proprietary data formats. "I think proprietary record formats will present a problem for historians. Perhaps not in the short-term, but certainly in the medium to long term (and remember I'm talking about hundreds if not thousands of years now). Imagine that some historian in 500 years time discovers Vice President Cheney's "undisclosed location" and finds his secret laptop computer. "Finally," the historian thinks, "we will know who advised this administration about energy policy!" as he swims back to the surface of the ocean above the Washington monument. Unfortunately it turns out the data was written in the "Word-mangler for Windows 2002" format, for which no specifications were ever published, and which was deliberately designed to be difficult for the competition to read."
Trade Shows and Conferences

One article covers the beginning of the aKademy 2007 conference. "aKademy 2007 has started! Saturday, the first day of the conference, brought us many talks about various topics, ranging from very technical to more practically oriented. These talks are so content-rich that our coverage of the user conference will require several consecutive articles. Read on for the first aKademy 2007 Report, the First Impression."

Further coverage looks at the Keynotes and the Tracks. "Saturday opened with Lars Knoll, talking about KDE from the perspective of a troll. Trolltech employs over 50 full-time developers on Qt itself, accompanied by an assortment of testers and support personnel. Following the ideas behind 'extreme programming', Qt employs extensive code reviews and an incremental design." The official KDE Conference Press Brochure [PDF] is also available.

Another article covers the first day of the Ottawa Linux Symposium. "The opening day of the 9th annual Ottawa Linux Symposium (OLS) began with Jonathan Corbet, of Linux Weekly News and his now familiar annual Linux Kernel Report, and wrapped up with a reception put on by Intel where they displayed hardware prototypes for upcoming products."

A separate report covers some talks at OLS. "The ninth annual OLS has begun in Ottawa's sweltering summer heat. There are as many as three different talks and two different tutorial topics being presented in each time slot. This is a summary of the talks I attended in day 1." These talks include The Kernel Report - Jon Corbet, KVM: The Kernel-Based Virtual Machine - Avi Kivity, Kernel Support For Stackable File Systems - Josef Sipek, and more.

Coverage continues with day two of the Ottawa Linux Symposium ("OLS topics on day two including Linux Kernel Development, EXT4, Cell Broadband Engine, Debugging Google clusters and LinuxBIOS.") and a summary from the third day: "OLS topics on day three including Lguest, SMB2, Large memory allocations and Concurrent Pagecache."

Finally, one article covers a talk by Greg Kroah-Hartman at OLS.
"As the number of Linux kernel contributors continues to grow, core developers are finding themselves mostly managing and checking, not coding, said Greg Kroah-Hartman, maintainer of USB and PCI support in Linux and co-author of Linux Device Drivers, in a talk at the Linux Symposium in Ottawa Thursday."
Companies

An article notes the availability of Google Desktop for Linux. "Google's popular search application that indexes data on a computer, rather than online, is now available for Linux machines after the company's latest beta release. The Linux version of Google Desktop joins a fully complete Windows program and a Mac version that is currently also in beta. It features all the indexing and searching features seen on other platforms but lacks some of the frills of the Windows application."

Another article reports on Red Hat's latest financial results. "The big Linux-business question of the latest financial quarter was: Would Red Hat be battered by Oracle? Knocked around by Microsoft and its new Linux partners, Novell, Xandros, and Linspire? Daunted by a Sun revival? Or would the Raleigh, NC-based Linux company turn in a great quarter? And, the answer is, with total revenue of $118.9 million, an increase of 42 percent from the year ago quarter and up 7 percent from the prior quarter, Red Hat is back to kicking rump and taking names in business Linux."

A third article reports on licensing talks between Red Hat Inc. and Microsoft Corp. "Red Hat Inc. Chief Executive Matthew Szulik said his company last year held talks with Microsoft Corp. over a patent agreement that broke down before the software giant signed a deal with Red Hat rival Novell Inc. The developer of Linux software has yet to sign such a deal, which could see Novell, its biggest rival, woo customers away from Red Hat and work on product development and sales with the world's No.1 software maker."
Interviews

An interview with Mauricio Fernandez is available. "Last week, Mauricio Fernandez announced a new Ruby to OCaml bridge that he's working on, called rocaml. With the growing interest in functional languages in the Ruby world, this seemed like the sort of thing I needed to talk to him about, so I sent off a quick set of questions, and this is what I heard back."
Resources

Part one of a series by Jack Herrington on Google Gears is available. "Web applications are great, that is until you go off the grid. As more and more Ajax-driven tools are created that mimic desktop applications through web interfaces, the ability to use those applications once the Wi-Fi signal is lost becomes more important. Jack Herrington gives us an introduction to Google Gears, a tool that allows just that kind of functionality."

Also available is part one of a Linux Journal series on troubleshooting Linux audio systems. "I have a friend who has had nothing but nightmares result from his attempts at setting up the fabled low-latency high-performance Linux audio system. In sympathy with his plight I present here a primer in three parts for troubleshooting common and uncommon problems with the Linux sound system. Parts 1 & 2 will present programs used to analyze and configure your audio setup. Part 3 will list the most frequently encountered problems along with their suggested solutions."

Bill Walton works with Ruby on Rails in an O'Reilly article. "Paul and CB are back, and this time CB wants Paul to convince the Boss to try a new approach to testing, one that leverages the powerful tools Rails can offer. In the latest installment of Bill Walton's monthly series, you'll learn how to build effective testing into your Rails projects."
Reviews

One review looks at a new release of the ATI Control Panel. "NINE MONTHS ago I wrote with surprise about how ATI's Linux Mobility drivers "didn't suck" yet how the Control Panel sucked. AMD has surprisingly made my complaints obsolete. The latest Catalyst for Linux package on AMD's ATI/Linux support page at the time of this writing is version 8.38.6, a 51MB+ download released six days ago, and which I have been running so far for five days with my testing workhorse, the Gateway 7422 notebook which sports one ATI Mobility Radeon 9600 chipset with 64MB of video memory."

Another article takes a look at the options in OpenOffice.org Calc. "Like other OpenOffice.org applications, Calc has several dozen options in how it is formatted and operates. These options are available from Tools -> Options -> OpenOffice.org Calc. Thanks to OpenOffice.org's habit of sharing code between applications, some of the tabs for these options resemble those found in other OpenOffice.org applications. Others are unique to Calc and the business of spreadsheets. Either way, the more you know about Calc's options, the more you can take control of your work."
Miscellaneous

An article notes that the state of Massachusetts has added Ecma-376 Office Open XML to the list of potentially acceptable "open formats". "OpenXML doesn't belong on any list of usable standards until it is one, a real one, where the playing field is even. Instead, I gather from Weir's description that it's like traveling to a new town and asking for a map, but the directions are written in such a way that only longtime dwellers can read and follow them. You as a newcomer have no way to understand them and hence find your way around. If the directions say, "Go right when you get to the road where Nellie used to live until she married that musician and moved to Memphis," you don't know Nelly or where she lived before."
Non-Commercial announcements

Veteran FFII campaigner Benjamin Henrion, founder of the noOOXML.org site, explains: "Microsoft is spending millions on rent-a-crowd support for international certification for its proprietary Office format, OOXML. But we already have an ISO standard for word processing, called ODF (Open Document Format). OOXML is Microsoft's attempt to subvert this existing standard, to keep its strangle-hold on the world of documents. It's time for activists across the world to stand up, to reach out to their national ISO bodies, and to explain why Microsoft's format is not open, not a standard, and not XML."

Another announcement poses six questions on Microsoft's "Open Office XML" format which, the authors believe, any agency pondering standardizing on that format should be able to answer. "MS-OOXML is accompanied by an unusually complex and narrow 'covenant not to sue' instead of the typical patent grant. Because of its complexity, it does not seem clear how much protection from prosecution for compatibility it will truly provide... Does your national standardisation body have its own, independent legal analysis about the exact nature of the grant to certify whether it truly covers the full spectrum of all possible MS-OOXML implementations?"

The final text of the GPLv3 is now available. There have been some changes since the "last call" draft, but they are mostly minor tweaks. Click below for the press release.

Another article asks about Tivoization and the iPhone. ""Tivoization" is a term coined by the FSF to describe devices that are built with free software, but that use technical measures to prevent the user from making modifications to the software -- a fundamental freedom for free software users -- and an attack on free software that the GPLv3 will put a stop to. The iPhone is leaving people questioning: Does it contain GPLed software? What impact will the GPLv3 have on the long-term prospects for devices like the iPhone that are built to keep their owners frustrated?"

A final announcement reports that KDAB has become a new patron of KDE.
"The KDE e.V. and KDAB are happy to announce continued collaboration on the Free Desktop, with KDAB becoming the latest new Patron of KDE. KDAB is known for its high-quality software services." OpenMoko -- together with all of you in the community -- will design, from the ground up, open devices and write the free software platform that powers them. FIC will build the hardware and help us set phones free around the world. This is about the most perfect relationship we can think of."
Commercial announcements

Apatar announced the launch of its new open-source, on-demand data integration software tools and services. "Apatar helps users integrate information between databases, files, and applications. Imagine you could visually design (drag and drop) a workflow to exchange data and files between files (Microsoft Excel spreadsheets, CSV/TXT files), databases (such as MySQL, Microsoft SQL, Oracle), applications (Salesforce.com, SugarCRM), and the top Web 2.0 destinations (Flickr, RSS feeds, Amazon S3), all without having to write a single line of code. Users install a visual job designer application to create integration jobs called DataMaps, link data between the source(s) and the target(s), and schedule one-time or recurring data transformations. Imagine this capability fits cleanly and quickly into your projects."

Funambol announced the launch of the myFUNAMBOL portal. "Funambol, the mobile open source software company, announced it will begin inviting consumers today to join the new myFUNAMBOL portal to access free mobile email, contacts and calendars on everyday cell phones. myFUNAMBOL also provides the first over-the-air mobile contacts application for the new iPhone, which demonstrates the pace of innovation available with open source."

Linspire announced a collaboration with Microsoft involving document translation software. "Linspire, Inc., developer of the Linspire commercial and Freespire community desktop Linux operating systems, today announced it will join the current efforts to improve the ability of OpenOffice.org users to work with the Office Open XML format by increasing the interoperability between ODF and Open XML. Linspire is joining with others who have signed on to this effort, including Novell and Xandros, to create bi-directional open source translators for word processing, spreadsheets and presentations between ODF and Open XML."
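The "DataMap" idea Apatar describes is a source, a target, and a transform step in between. As a purely hypothetical miniature of that pattern (the table name, schema, and function are invented for illustration and have nothing to do with Apatar's actual engine), the following Python sketch moves rows from a CSV source into a SQLite target, normalizing a field along the way:

```python
# Hypothetical miniature of a source-to-target integration job; the
# 'contacts' table and its schema are invented for this example.
import csv
import io
import sqlite3

def run_datamap(csv_text, db):
    """Load contact rows from CSV text into a SQLite 'contacts' table."""
    db.execute("CREATE TABLE IF NOT EXISTS contacts (name TEXT, email TEXT)")
    for row in csv.DictReader(io.StringIO(csv_text)):
        # transform step: trim whitespace, normalize email to lowercase
        db.execute("INSERT INTO contacts VALUES (?, ?)",
                   (row["name"].strip(), row["email"].strip().lower()))
    db.commit()

db = sqlite3.connect(":memory:")
run_datamap("name,email\nAda Lovelace, ADA@EXAMPLE.COM\n", db)
print(db.execute("SELECT * FROM contacts").fetchall())
# [('Ada Lovelace', 'ada@example.com')]
```

A product like Apatar wraps this kind of job in a visual designer and a scheduler; the sketch only shows the data flow itself.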
Mandriva has announced a major migration project: "The Ministry of Agriculture and Fisheries chose to migrate its local servers (about 400 machines) from Windows NT Server to Mandriva Corporate Server 4.0, within two years. This migration comes with a complete range of personalized services (training and support). Mandriva was chosen to provide these services, which will continue over a period of 18 months and will potentially involve more than 200 people in the ministry."

Plans have been announced to form an industry-wide consortium that will support the OpenSAF (Service Availability Forum) project. "The company also announced the first release of the open source code related to the project. The consortium also will manage any future development of the OpenSAF code base. Leading companies including Ericsson, HP and Nokia Siemens Networks have expressed support for this initiative."

The release of OpenSceneGraph 2.0 has been announced. "OpenSceneGraph 2.0, written entirely in Standard C++ and built upon OpenGL, offers developers working in the visual simulation, game development, virtual reality, scientific visualization and modeling markets a real-time visualization tool which rivals established commercial scene graph toolkits in functionality and performance."

Also announced is the availability of i-flex FLEXCUBE for Corebanking on the IBM System z mainframe platform. ""Financial institutions are balancing requirements to control global costs while enhancing responsiveness to rapidly capitalize on new growth opportunities," said Rajesh Hukku, Senior Vice President and General Manager, Oracle Financial Services Global Business Unit and Chairman at i-flex solutions.
"Oracle's increased leverage of the i-flex core banking applications to provide support for zLinux reflects the company's commitment to provide financial services companies with applications and additional mainframe options that help accelerate development of new products and services while and enhancing the value of existing systems."" announced the addition of GPLv3 detection to its IP Amplifier intellectual property detection and reporting solution. "Palamida(TM), the leader in software risk management solutions for open source, today announced that it has enhanced IP Amplifier, the company's flagship intellectual property detection and reporting solution, with the addition of the GPL v3 analyzer functionality. Further expanding their solutions and services, Palamida has also created a comprehensive GPL v3 online educational resource repository, http://gpl3.palamida.com, to assist organizations looking to implement software licensed under GPL v3." Valentina technology release 3.1 provides many improvements to the technologies that set Valentina apart from other databases. These features include: Link Refactoring Commands. New native API methods VLink2.CopyLinksTo() and VLink2.CopyLinksFrom(), plus SQL COPY LINKS are powerful and flexible methods to convert from relational database M:M schemas to much, much faster Binary Link M:M. The same techniques support translations between Relational FK, Binary Links and ObjectPtrs."
New Books

New OpenSceneGraph books have been published. The titles include the OpenSceneGraph Quick Start Guide and the OpenSceneGraph Reference Manual v1.2.
Event Reports

Proceedings from a recent conference are now available online. Whether or not you were at the event, the papers are a useful reference on the topics which were discussed.

A report on the O'Reilly Tools of Change for Publishing (TOC) conference is also available. "TOC was the first of its kind: a conference dedicated to utilizing technology to create and maximize publishing opportunities. Drawing nearly 500 attendees, this event was produced by O'Reilly Media, Inc. Attendees included book publishers, editors, marketing and production managers, publishing consultants, authors, and business managers in publishing."
Upcoming Events"We are very happy to welcome Mandriva as a silver sponsor for aKademy 2007", said Jonathan Riddell of the aKademy Team. "As a long term supporter and distributor of KDE the summit organising team is looking forward to giving our developers, contributors and industry partners at the conference a special present from Mandriva."" RailsConf Europe, taking place 17-19 September in Berlin, is being co-presented by Ruby Central, Inc. and O'Reilly Media, Inc. This three day event, held at the Maritim proArte Hotel, is dedicated entirely to Ruby on Rails. The Ruby on Rails development framework, only three years old, has gone from cult favorite to major player in the web development world."
|PostgreSQL 8.2 Bootcamp at the Big Nerd Ranch||Atlanta, USA|
|IV GUADEC-ES||Granada, Spain|
|DIMVA 2007||Lucerne, Switzerland|
|July 14||UK Gentoo Meeting 2007||London, UK|
|GNOME Users' And Developers' European Conference||Birmingham, England|
|GCC and GNU Toolchain Developers' Summit||Ottawa, Canada|
|Ubuntu Live||Portland, OR, USA|
|O'Reilly Open Source Convention||Portland, OR, USA|
|Asterisk Bootcamp with Jared Smith at Big Nerd Ranch||Atlanta, USA|
|Open Group Enterprise Architecture Practitioners Conference||Austin, TX, USA|
|Ninth course on the Exim mail transfer agent||Cambridge, UK|
|Black Hat USA 2007||Las Vegas, NV, USA|
|Ruby on Rails Bootcamp at the Big Nerd Ranch||Atlanta, USA|
|Wikimania 2007 (Annual Wikimedia conference)||Taipei, Taiwan|
|DefCon 15||Las Vegas, NV, USA|
|LinuxWorld Conference & Expo||San Francisco, CA, USA|
|16th USENIX Security Symposium||Boston, MA, USA|
|LinuxWorld Conference and Expo||San Francisco, CA, USA|
|Flash Memory Summit 2007||Santa Clara, CA, USA|
|7as Jornadas Regionales de Software Libre||Córdoba, Argentina|
|Chaos Communication Camp||Finow airport, Germany|
|August 10||August Penguin 2007||Tel Aviv, Israel|
|August 11||Picn*x XVI - The Linux 16th Anniversary Picnic||Sunnyvale, CA, USA|
|Virtual FudCon8||Online, IRC|
|Scientific Tools for Python||Pasadena, CA, USA|
|August 19||Open Source Health Informatics Working Group||Brisbane, Australia|
|PHP Training at the Big Nerd Ranch||Atlanta, USA|
|DallasCon 2007-cancelled||Dallas, Texas, USA|
|Python 3000 Sprint||Mountain View and Chicago, USA|
|Summercon 2007||Atlanta, GA, USA|
|FrOSCon 2007||Sankt Augustin (near Bonn), Germany|
|International Computer Music Conference 2007||Copenhagen, Denmark|
|KVM Forum 2007||Tucson, AZ, United States|
|September 1||ENOS 2007||Caldas da Rainha, Leiria, Portugal|
|LinuxConf Europe 2007||Cambridge, England|
|HITBSecConf2007||Kuala Lumpur, Malaysia|
|RAID 2007||Gold Coast, QL, Australia|
|2007 Linux Kernel Developers Summit||Cambridge, UK|
|Office 2.0 Conference||San Francisco, CA, USA|
|Intelligent Data Acquisition and Advanced Computing Systems||Dortmund, Germany|
|LinuxWorld China 2007||Beijing, China|
|LinuxChix Brasil||Asa Sul, Brazil|
|GITEX Technology Week||Dubai, United Arab Emirates|
|PyCon UK 2007||Birmingham, UK|
If your event does not appear here, please tell us about it.
Copyright © 2007, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds