The opening plenary session on the second day of the 2011 Linux Filesystem,
Storage, and Memory Management workshop was led by Michael Cornwall, the
global director for technology standards at IDEMA, a standards organization
for disk drive manufacturers. Thirteen years ago, while working for a hardware
manufacturer, Michael had a hard time finding the right person to talk to in the
Linux community to get support for his company's hardware. Years later,
that problem still exists; there is no easy way for the hardware industry
to work with the Linux community, with the result that Linux has far
less influence than it should. His talk covered the changes that are
coming in the storage industry and how the Linux community can get involved
to make things work better.
The International Disk Drive Equipment and Materials Association works to
standardize disk drives and the many components found therein. While some
may say that disk drives - rotating storage - are on their way out, the
fact of the matter is that the industry shipped a record 650 million
drives last year and is on track to ship one billion drives in 2015. This
is an industry which is not going away anytime soon.
Who drives this industry? There are, he said, four companies which control
the direction the disk drive industry takes: Dell, Microsoft, HP, and EMC.
Three of those companies ship Linux, but Linux is not represented in the
industry's planning at all.
One might be surprised by what's found inside contemporary drives. There
is typically a multi-core ARM processor similar to those found in
cellphones, and up to 1GB of RAM. That ARM processor is capable, but it
still has a lot of dedicated hardware help; special circuitry handles
error-correcting code generation and checking, protocol implementation,
buffer management, sequential I/O detection, and more. Disk drives are
small computers running fairly capable operating systems of their own.
The programming interface to disk drives has not really changed in
almost two decades: drives still offer a randomly-accessible array of 512-byte
blocks, addressable by logical block address. The biggest problem on
the software side is trying to move the heads as little as possible. The
hardware has advanced greatly over these years, but it is still stuck with
"an archaic programming architecture." That architecture is going to have
to change in the coming years, though.
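The "archaic" interface in question is simple enough to sketch: to the host, the drive is nothing but a flat array of fixed-size sectors indexed by logical block address (LBA). A minimal illustration in Python, where the device path is a placeholder (opening a real disk node requires root privileges):

```python
# Minimal sketch of the traditional block interface: the drive presents a
# flat array of fixed-size sectors, indexed by logical block address.
import os

SECTOR_SIZE = 512              # the traditional sector size

def read_lba(fd, lba, count=1):
    """Read `count` sectors starting at logical block address `lba`."""
    return os.pread(fd, count * SECTOR_SIZE, lba * SECTOR_SIZE)

def write_lba(fd, lba, data):
    """Write whole sectors starting at logical block address `lba`."""
    assert len(data) % SECTOR_SIZE == 0
    return os.pwrite(fd, data, lba * SECTOR_SIZE)

# fd = os.open("/dev/sda", os.O_RDONLY)   # placeholder device node
# mbr = read_lba(fd, 0)                   # LBA 0 holds the MBR partition table
```

Everything the operating system knows about head movement, track layout, and caching has to be inferred from behind this address-array abstraction, which is exactly the mismatch the talk went on to describe.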
The first significant change has been deemed the "advanced format" - a
fancy term for 4K sector drives. Christoph
Hellwig asked for the opportunity to chat with the marketing person who
came up with that name; the rest of us can only hope that the conversation
will be held on a public list so we can all watch. The motivation behind
the switch to 4K sectors is greater error-correcting code (ECC)
efficiency. By using ECC to protect larger sectors, manufacturers can gain
something like a 20% increase in capacity.
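The arithmetic behind that gain can be illustrated with toy numbers: per-sector overhead (gap, sync mark, and ECC bytes) is charged once per sector, and one 4K sector needs far fewer ECC bytes than the eight 512-byte sectors it replaces. All of the overhead figures below are rough illustrative assumptions, not vendor data; the actual gain depends on the ECC scheme and other format overheads.

```python
# Back-of-the-envelope look at why bigger sectors pay off. The byte
# counts are illustrative assumptions only, not real drive-format data.

GAP_SYNC = 30      # assumed gap + sync-mark bytes per sector
ECC_512 = 50       # assumed ECC bytes protecting a 512-byte sector
ECC_4K = 100       # assumed ECC bytes protecting a 4096-byte sector

legacy = 8 * (512 + GAP_SYNC + ECC_512)    # eight classic sectors
advanced = 4096 + GAP_SYNC + ECC_4K        # one "advanced format" sector

print(f"platter bytes per 4KB of user data: {legacy} vs {advanced}")
print(f"media reclaimed: {1 - advanced / legacy:.1%}")
```

With these made-up overheads the saving comes out near 10%; manufacturers' 20% figure presumably reflects the real ECC codes and format overheads involved.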
The developer who has taken the lead in making 4K-sector disks work
with Linux is Martin Petersen; he complained that he has found it to be
almost impossible to work on new technologies with the industry.
Prototypes from manufacturers worked fine with Linux, but the first
production drives to show up failed outright. Even with his "800-pound
Oracle hat" on, he has a hard time getting a response to problems.
"Welcome," Michael responded, "to the hard drive business." More
seriously, he said that there needs to be a "Linux certified" program for
hardware, which probably needs to be driven by Red Hat to be taken
seriously in the industry. Others agreed with this idea, adding that, for
this program to be truly effective, vendors like Dell and HP would have to
start requiring Linux certification from their suppliers.
4K-sector drives bring a number of interesting challenges beyond the
increased sector size. Windows 2000 systems will not properly align
partitions by default, so some manufacturers have created
off-by-one-alignment drives to compensate. Others have stuck with normal
alignment, and it's not always easy to tell the two types of drive apart.
Meanwhile, in response to requests from Microsoft and Dell, manufacturers
are also starting to ship native 4K drives which do not emulate 512-byte
sectors at all. So there is a wide variety of hardware to try to deal
with. There is an evaluation kit
available for developers who would like to work with the newer drives.
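For developers wanting to know which variety they are facing, the Linux block layer already exports what the kernel knows about a drive's sector sizes through sysfs; a 512-byte-emulating 4K drive reports a logical size of 512 but a physical size of 4096. A small sketch ("sda" is a placeholder disk name):

```python
# Query the kernel's view of a disk's sector sizes via sysfs.
from pathlib import Path

SYSFS = Path("/sys/block")

def block_sizes(disk, sysfs=SYSFS):
    """Return the (logical, physical) sector sizes the kernel reports."""
    q = sysfs / disk / "queue"
    return (int((q / "logical_block_size").read_text()),
            int((q / "physical_block_size").read_text()))

# logical, physical = block_sizes("sda")
# if physical > logical:
#     print(f"{physical}-byte sectors behind {logical}-byte emulation")
```

Each partition additionally exports an alignment_offset attribute (e.g. /sys/block/sda/sda1/alignment_offset), which is non-zero for the off-by-one-alignment drives mentioned above.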
The next step would appear to be "hybrid drives" which combine rotating
storage and flash in the same package. The first generation of these
drives did not do very well in the market; evidently Windows took over
control of the flash portion of the drive, defeating its original purpose,
so no real performance benefit was seen. There is a second generation
coming which may do better; they have more flash storage (anywhere from 8GB
to 64GB) and do not allow the operating system to mess with it, so they
should perform well.
Ted Ts'o expressed concerns that these drives may be optimized for
filesystems like VFAT or NTFS; such optimizations tend not to work well
when other filesystems are used. Michael replied that this is part of the
bigger problem: Linux filesystems are not visible to the manufacturers.
Given a reason to support ext4 or btrfs the vendors would do so; it is,
after all, relatively easy for the drive to look at the partition table and
figure out what kinds of filesystem(s) it is dealing with. But the vendors
have no idea of what demand may exist for which specific Linux filesystems,
so support is not forthcoming.
A little further in the future is "shingled magnetic recording" (SMR). This
technology eliminates the normal guard space between adjacent tracks on the
disk, yielding another 20% increase in capacity. Unfortunately, those
guard tracks exist for a reason: they allow one track to be written without
corrupting the adjacent track. So an SMR drive cannot just rewrite one
track; it must rewrite all of the tracks in a shingled range. What that
means, Michael said, is that large sequential writes "should have
reasonable performance," while small, random writes could perform poorly.
The industry is still trying to figure out how to make SMR work well. One
possibility would be to create separate shingled and non-shingled regions
on the drive. All writes would initially go to a non-shingled region, then
be rewritten into a shingled region in the background. That would
necessitate the addition of a mapping table to find the real location of
each block. That idea caused some concerns in the audience; how can I/O
patterns be optimized if the connection between the logical block address
and the location on the disk is gone?
The answer seems to be that, as the drive rewrites the data, it will put it
into something resembling its natural order and defragment it. That whole
process depends on the drive having enough idle time to do the rewriting
work; it was said that most drives are idle over 90% of the time, so that
should not be a problem. Cloud computing and virtualization might make
that harder; their whole purpose is to maximize hardware utilization, after
all. But the drive vendors seem to think that it will work out.
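The staging-then-rewrite scheme under discussion can be modeled in a few lines. This is purely a conceptual sketch of the idea as described, not any vendor's firmware design:

```python
# Toy model of the proposed SMR scheme: every write lands in a fast
# non-shingled staging area, a mapping table tracks where each logical
# block really lives, and idle-time "cleaning" folds staged blocks into
# the shingled region in LBA order. Purely illustrative.

class SMRModel:
    def __init__(self):
        self.location = {}                  # lba -> "staging" | "shingled"

    def write(self, lba):
        # Writes go to the non-shingled region first, so a small random
        # write never forces a rewrite of a whole shingled track range.
        self.location[lba] = "staging"

    def clean(self):
        # Background rewrite during idle time: move staged blocks into
        # the shingled region in LBA order, restoring "natural" layout.
        for lba in sorted(self.location):
            self.location[lba] = "shingled"

drive = SMRModel()
for lba in (42, 7, 19):                     # scattered random writes
    drive.write(lba)
drive.clean()                               # idle-time defragmentation
```

The mapping table is exactly what worried the audience: until clean() runs, the logical block address says nothing about where the data physically sits.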
Michael presented four different options for the programming interface to
SMR drives. The first was traditional hard drive emulation with remapping
as described above; such drives
will work with all systems, but they may have performance problems.
Another possibility is "large block SMR": a drive which does all transfers
in large blocks - 32MB at a time, for example. Such drives would not be
suitable for all purposes, but they might work well in digital video
recorders or backup applications. Option three is "emulation with hints,"
allowing the operating system to indicate which blocks should be stored
together on the physical media. Finally, there is the full object storage approach where the drive knows
about logical objects (files) and tries to store them contiguously.
How well will these drives work with Linux? It is hard to say; there is
currently no Linux representation on the SMR technical committee. These
drives are headed for market in 2012 or 2013, so now is the time to try to
influence their development. The committee is said to be relatively open,
with open mailing lists, no non-disclosure agreements, and no oppressive
patent-licensing requirements, so it shouldn't be hard for Linux developers
to get involved.
Beyond SMR, there is the concept of non-volatile RAM (NV-RAM). An NV-RAM
device is an array of traditional dynamic RAM combined with an
equally-sized flash array and a board full of capacitors. It operates as
normal RAM but, when the power fails, the charge in the capacitors is used
to copy the RAM data over to flash; that data is restored when the power
comes back. High-end storage systems have used NV-RAM for a while, but it
is now being turned into a commodity product aimed at the larger market.
NV-RAM devices currently come in three forms. The first looks like a
traditional disk drive, the second is a PCI-Express card with a special
block driver, and the third is "NV-DIMM," which goes directly onto the
system's memory bus. NV-DIMM has a lot of potential, but is also the
hardest to support; it requires, for example, a BIOS which understands the
device, will not trash its contents with a power-on memory test, and which
does not interleave cache lines across NV-DIMM devices and regular memory.
So it is not something which can just be dropped into any system.
Looking further ahead, true non-volatile memory is coming around 2015. How
will we interface to it, Michael asked, and how will we ensure that the
architecture is right? Dell and Microsoft asked for native 4K-sector
drives and got them. What, he asked, does the Linux community want? He
recommended that the kernel community form a five-person committee to talk
to the hard disk drive industry. There should also be a list of developers
who should get hardware samples. And, importantly, we should have a
well-formed opinion of what we want. Given those, the industry might just
start listening to the Linux community; that could only be a good thing.
Nokia announced that it would be pursuing an open
governance model for Qt in June 2010 at Akademy. After nearly a year
of discussions and preparation, Thiago Macieira provided an update at Camp KDE outlining the governance model
that Nokia would be pursuing and the next steps. Though Macieira did not have all of the details, it seems that Qt will be reasonably open for a project that began life as a non-free toolkit.
Macieira, a senior product manager for Qt Software under Nokia, noted
early in the talk that this was really another step in a long process of
opening up Qt: from its humble beginnings as a non-free toolkit to
an open source license, then the GPL, then the addition of LGPLv2.1,
and finally the opening of all development to the public. Where once there were only daily
snapshots, and before that even less visibility, now, Macieira says, "it's not really news" that Nokia continues to open up Qt development and governance.
Not surprising, perhaps, but still newsworthy and interesting to the KDE developers who depend heavily on Qt and may have been quite worried about its future following what Macieira called "the events of February 11," when Nokia announced its partnership with Microsoft.
Why is Nokia doing this? Macieira says it's in Nokia's best interest:
Qt is growing "faster than what we can or should grow," a reference
to Nokia's Qt R&D team. He continued:
It's in our best interest that others use Qt but don't depend on us doing
everything for them. [We] don't want to do everything, can't do everything
people want to do with Qt, let people join in and do what they need.
What will be happening now? Macieira says that Qt will be developed
using open participation, which is "what Qt developers have always wanted, not what KDE has always wanted, what our engineers have always wanted." He noted that the model that Nokia had chosen for Qt would be more akin to the way the Linux kernel team works, with a distributed approval system, with public discussions and decision-making on mailing lists. He noted that with KDE development "everybody works on what they want to work on" which wouldn't work well for Qt development.
Though KDE is a very visible consumer of Qt with a long history of
working with Nokia (and before it Trolltech) on opening Qt, KDE was
not the only project that has influenced its open governance model.
Macieira noted that MeeGo and Qxt have also been influences.
Code will make it into Qt after it has been approved by maintainers and has passed regression testing; anything that doesn't pass will be backed out. Historically, says Macieira, Qt's developers were often reluctant to accept changes because accepting them meant signing up to maintain them. No longer — when developers propose changes, "it's your responsibility now, if you broke a test, it's your responsibility to go fix it."
As for the actual "governance," Macieira says that Qt will not have an elected board or anything like that — simply a tiered system that starts (at the bottom) with contributors, then approvers, then maintainers who have responsibility for a given module or port (such as Webkit), and finally the "Chief Troll" who will lead the maintainers. The Chief Troll, of course, would be analogous to Linus Torvalds. Who will be the Chief Troll? Macieira wouldn't say, but said they "have an idea" who would be taking the troll role.
Macieira says that the timeline for the announcement is "within
the next two months" but it could be sooner. He says that Nokia
now needs to contact the people that it has been considering and
asking if they're willing to take on the maintainer and Chief Troll
roles. "As soon as we get the people to say yes, we'll probably
announce," he said.
Filling out the project
Macieira says that the system will be bootstrapped by Nokia and many positions will be filled by people already in the organization, though he also said that the company would ask external contributors to be maintainers as well. He said that maintainers would "naturally appear" from the people contributing to modules, and that there would likely be changes after a shaking out period where maintainers had a chance to establish themselves. Some might decide, for example, that they didn't wish to keep the responsibility.
There will be other roles as well — QA, release management, project management, community management, and so on. This was a bit sketchy, but it seems to follow the model used by many companies that sponsor FOSS development projects. He asked the community to "see how it goes" and "there will be adjustments." He also invited the audience to the upcoming Qt Contributor's Summit to meet other developers and participate in the process.
Though Nokia is committing to a more open governance model, it's worth noting that the company is not turning everything over to the community. The company will continue to hold the trademark, and it will continue to ask developers to sign its contributor agreement so that the company can continue to offer Qt under commercial license as well as under the GPLv3 and LGPLv2.1.
However, Macieira says "there's no ownership here" aside from the trademark. Contributors must grant Nokia the right to use their code, but the contributor agreement doesn't require signing copyright over to Nokia; contributors retain their copyright. Macieira says that, because Qt is under the LGPL, "anybody can take it elsewhere" if they're unhappy with the way the community is run, the direction of the project, or anything else. However, he says that Nokia "wants to make it so that this community is attractive, and that people can come and work with us. Your needs can be met inside so you don't have to fork."
After the presentation, I asked Macieira to identify the biggest
hurdle for Qt open governance. He said there was not a single major
issue, but "a lot of small issues" that could derail the
project. In particular, he cited the lengthy process of opening Qt:
We've been at this a long time and we're risking losing the
interest and participation of key influential people. Without them, we
may be unable to convince [people] that this is a legitimate effort and
to get the necessary training of people external to Nokia.
KDE and Nokia
How does the KDE community feel about this? Cornelius Schumacher, a KDE
developer and president of KDE e.V., was at Macieira's presentation. He says that it will "make it easier for us to directly contribute to Qt, and participate in the maintenance of our foundational toolkit." Schumacher also credits Nokia for being very open about the process and inviting the community to participate. Though there are a number of details that need to be worked out, he says that he's optimistic that it will work out well.
That seems to sum up the feeling of most of the audience — Nokia seems to have quite a bit of goodwill in the KDE community and seems to be on the road to a model that will work well for the larger Qt community. Macieira emphasized a number of times that the governance model that was outlined is simply what Nokia thinks will work based on its observation of other well-functioning communities and feedback it has received in the process of moving to an open governance model. Macieira asks the community to work with it and see what works, and what doesn't.
Nokia may not be going quite as far as some community members would wish, but it does seem that the company is making a very good faith effort and satisfying most of the community's concerns. The devil, of course, is in the details — it will be interesting to see who Nokia appoints as maintainers and "Chief Troll," and how many of the decision-makers initially are from outside the Nokia corporate walls. A clearer picture should be available after the Qt Contributor's Summit in Berlin in June.
Ken Starks of the HeliOS Project delivered the keynote talk at the second annual Texas Linux Fest (TXLF) in Austin on Saturday. HeliOS is a not-for-profit initiative that refurbishes computers and gives them to economically-disadvantaged schoolkids in the Austin area — computers running Linux. Starks had words for the audience on the value of putting technology into young hands, as well as a challenge to open source developers to re-think some of their assumptions about users — based on what HeliOS has learned giving away and supporting more than one thousand Linux systems.
How HeliOS works
Starks led off by giving the audience an overview of HeliOS, both its mission and how it operates in practice. It is under the federal non-profit umbrella of Software In the Public Interest (SPI), which supports Debian, Freedesktop.org, and many other projects. The program started in 2005, and since then has given away more than 1200 computers (some desktops, some laptops) to Austin-area children and their families.
The families are important in discussing HeliOS's work, Starks said, because the 1200 number only counts the child "receiving" the computer. When siblings, parents, and other family members are included, he estimates that more than 4000 people are using HeliOS's machines.
The hardware itself is donated by area businesses and individuals. But the project does not accept just any old end-of-life machines. The goal is to provide the recipient with a working, useful system, so the project only accepts donations of recent technology. At present, that means desktops with Pentium 4 or Athlon XP processors and newer (at 2GHz and above), 1GB of RAM or more, with 40GB of storage. The full list of accepted hardware reveals some additional restrictions that the project must make (it no longer accepts CRT monitors for liability and transportation reasons) as well as predictable pain points, such as 3D-capable graphics cards. Starks has said in the past that roughly one third of all computers donated to HeliOS must have their graphics card replaced in order to be useful on a modern desktop.
Referrals come from a variety of sources, including teachers, social workers, police officers, and even hospitals. Starks and HeliOS volunteers make a visit to the home to get to know the family and scope out the child's situation before making a donation commitment. A family that can afford a high-priced monthly cable bill, he suggested, might get a call back in a few days recommending that they lower their cable package and purchase a reduced-price computer from HeliOS instead. But a computer is always in tow for the first visit, ready for immediate delivery.
Volunteers assemble and repair each PC, then install HeliOS's own custom Linux distribution — currently an Ubuntu remix tailored to include educational software, creative and music applications, and a few games. The team delivers and sets up the computer in the family's home, providing basic training for everyone in the household. They continue to stay involved with the families to provide support as needed. Support for the hardware and the Linux distribution, that is.
Periodically, HeliOS receives a call from a recipient's family member asking for help with a copy of Windows that they installed after erasing Linux from the machine. The child never removes Linux, Starks said, only a parent, and the support call almost always means trouble with viruses, malware, or driver incompatibility. At that point, HeliOS politely refuses to support the Windows OS, but will gladly reinstall Linux. This type of event is a rarity; Starks mentions on his blog that it happened just eight times in 2010, out of 296 Linux computers. It never matters to the kids what OS is on the computer, he said; they are simply "jacked" to be finally entering the world of computer ownership.
But Linux is not merely a cost-saving compromise HeliOS uses to make
ends meet (although Microsoft did offer the project licenses for Windows XP
at a reduced rate of US $50 apiece). The project includes virtual machine software in its distribution, and has a license donated by CodeWeavers to install Crossover Pro for those occasions when a specific Windows application is required, Starks said. The real reason Linux is the operating system of choice is that it allows the children to do more and learn more than they can with a closed, limited, and security-problem-riddled alternative. Our future scientists and engineers are the students learning about technology as children today, he said, and HeliOS wants them to know how Linux and free software can change that future.
What HeliOS can teach the developer community
Over six years of providing Linux computers to area schoolkids (five of the oldest are just now entering graduate school), Starks said, the project has amassed lots of data on how children and new users use computers, which allows him to give the developer community feedback that it won't hear otherwise. The open source community creates a lot of islands, he said — KDE island and GNOME island, for example. But the most troubling pair is User island and Developer island, between which people only talk through slow and ineffective message-in-a-bottle means. Because open source lacks the inherent profit motivation that pushes proprietary software developers to keep working past the "works for me" point, too many projects reach the "good enough" stage and stop.
Starks explored several examples of the user/developer disconnect, starting with the humorous indecipherable-project-name problem. He listed around a half-dozen applications that HeliOS provides in its installs, but with names he said reinforce the impression that Linux is not only created by geeks, but for geeks: Guayadeque, Kazehakase, Gwibber, Choqok, Pidgin, Compiz, and ZynAddSubFX. The pool of available project names may be getting low, he admitted, but he challenged developers to remember that when they introduce a new user to the system, they are implicitly asking the user to learn a whole new language. When there is no "cognitive pathway" between the name of the application and what it does, learning the new environment is needlessly hard.
He then presented several usability problems that stem from poor defaults, lack of discoverability, and confusing built-in help. In OpenOffice.org Writer, for example, most users simply choose File -> Save, unaware that the default file format is incompatible with Microsoft Word, which starts a day-long firestorm for the user when they email the file to a friend and it is mysteriously unusable to the recipient. The lxBDPlayer media player — in addition to making the awkward-name list — confronts the user with a palette of Unix-heavy terminology such as "mount points" and "paths" even within its GUI.
Time ran short, so Starks skipped over a few slides, but he does blog about many of the same issues, further citing the experience of HeliOS computer families. The message for developers was essentially to rethink the assumptions that they make about the user. For example, it is common to hear the 3D graphics-card requirement of both Ubuntu's Unity and GNOME 3's Shell defended by developers because "most people" have new enough hardware. Starks touched on that issue briefly as well as in a February blog post, and might amend that defense to say "most middle-class people" have new enough hardware. Most users do not have any problem with the application name GIMP, but Starks asks the developers to consider what it is like when he has to introduce the application to a child wearing leg braces. Most developers think their interface is usable, but Starks asks them to try to remember what it was like when they used Linux — or any computer — for the very first time.
Starks concluded his talk by assuring the audience that the example projects he talked about were chosen just to stir up the pot, not to cause any real offense. He poked fun at the Ubuntu Customization Kit's acronym UCK, for example, but said HeliOS is indebted to it for allowing the project to create all of its custom software builds. Indeed, Starks can dial up his "curmudgeonly" persona at will to make a humorous point (as he did many times), but also switch right back into diplomatic mode when he needs to. He ended the talk by thanking the open source community for all of its hard work. "Sure, we give away computers, but without what you do, we give away empty shells," he said.
Starks believes in the mission of the HeliOS project because the next generation will discover and innovate more than the past two generations combined — and they will be able to do it because they will learn about technology using the software created by the community. It is a humbling and exciting future to contemplate, he said, and one that, if the developer community stops to consider it, makes for a far better incentive to innovate than the profit motivation that drives the proprietary competition.
I am part of the organizing team for TXLF, so I can tell you that among the reasons the team invited Starks to deliver the keynote this year were the opportunity to present a "Linux story" from outside the typical IT closet environment and the major distributions, and Starks's ability to present a challenge to the community. He certainly delivered on both counts. What remains an open question is whether that challenge gets taken seriously, or gets lost in the well-oiled machinery of the release cycle.
After all, most of us have heard the "project name" dilemma before, and yet it remains a persistent problem. Is the fact that HeliOS has hands-on, real-world examples of new users being put off by application names going to prompt any project to re-evaluate its name? Who knows. It is easy to dismiss Starks's stories as anecdotal (and he readily admits that his data is not controlled or scientific), but the project does install around 300 Linux computers per year, in the field.
In the meantime, it is good to know that the project will keep up that
work. Starks took time out of his allotment to present volunteer Ron West
with the "HeliOS volunteer of the year" award, and mention some of the
ongoing work the initiative is currently engaged in. It recently moved
into a new building, and has started The Austin Prometheus
Project to try to raise funds to provide Internet service to HeliOS
kids, 70 percent of whom have no Internet connection. Of course, that
statistic flies in the face of yet another assumption the development
community makes all the time about always-on connectivity. I suppose the
challenges never end.
Page editor: Jonathan Corbet