Leading items
On GNOME and its Foundation: an interview with Luis Villa
LWN recently posted a brief article on the GNOME Foundation's plea for support to help it get through a difficult year. Some of the comments on that news questioned the role of the foundation and its executive director. In response, the Foundation offered to make a board member - Luis Villa - available for an interview. Luis quickly answered our questions, despite being in the middle of final exams at the time; some people, it seems, will do anything to get out of studying. The result is an interesting view into the state of the GNOME project and where it is heading.

LWN: Could you tell us about your involvement with GNOME and the board?
What does the GNOME board do?
On the support side, we take a look at what our community and corporate partners are working on, and try to match people, projects, and resources. The biggest part of that, historically, has been getting everyone together at GUADEC. In the past few years we've been trying to expand that - we've done more events and hackfests; we've helped out with marketing; we've started giving grants for certain kinds of hacking (primarily a11y [accessibility]); and we've tried to make resources available to spur work on GNOME Mobile and other subprojects.
On the stewardship side, the Foundation owns the GNOME trademark, controls GNOME funds, and generally manages other resources (technically we own several servers, for example, though in practice they all live in other people's colos.) And technically most GNOME teams (like the release team) report to the board, though in practice we have a very, very light hand on the tiller.
One thing we don't do, very explicitly, is technical leadership. That comes from the community.
With all this under the Foundation's purview, the board ends up making a number of small decisions that matter to GNOME, and in practice, we do a lot of the work of the Foundation as well.
The GNOME Foundation recently posted a budget and announced that, if funding is not found from somewhere, the foundation would have to cut either the executive director position or the activities budget. In your opinion, how dire is the budget forecast, and how did this situation come to be?
How it came to be is fairly straightforward. After we cut our last director's salary from the budget, we ran a large surplus for several years. It was hard for us as an essentially all-volunteer organization to actually spend this money - organizing events and doing coordination is really time-consuming, and frankly isn't something that we (as hackers) are terribly great at even if it were our full-time job. At the same time, we felt there was a need there for more events, resources, etc., and there seemed to be a willingness on the part of our corporate partners to invest even more if we could give them a way to do it.
So last year the board felt that it was time to expand. We grew our investments in things like hackfests. We also decided to hire a new ED who could help us do more for our developer community and for our users, and help us grow financially. We knew that this extra salary and extra spending would put us in the red for a few years. But we thought that this was a classic 'spend money to make money' situation - we thought the investment in events and in Stormy would allow us to reach more sponsors and would bring more value to our existing sponsors.
Our timing, obviously, couldn't have been worse - we hired Stormy in July, just as the recession began to break. So the investment hasn't paid off like we thought it would. We have increased the number of sponsors we've got, and many of our existing sponsors have increased their level of investment, so it hasn't been all bad, but definitely not enough. And obviously under the economic circumstances it isn't going to get any easier. Hence the message to our membership you referred to.
Stormy has been the executive director since last July. Can you summarize what she has done for the Foundation since then? Why does the Foundation need an executive director?
We're seeing lots of the former and some of the latter already with Stormy, and I fully expect to see more of it. I won't bore your readers with the full list, but among other things she's helped us expand our fundraising, helped organize events (inc. GUADEC and hackfests), improved communications with our advisory board, helped restart our marketing group, dealt with some legal questions, helped broker a deal to upgrade our bugzilla, and worked on a plan to hire a sysadmin. So I think our initial decision to make this investment and take the risk was the right one. Of course, whether it makes sense long-term is still an open question - we will have to balance our budget eventually.
Some commenters on LWN have suggested Stormy's first responsibility should be to raise enough money to pay for her own existence. Does the GNOME board see things that way?
In the past, you've expressed concerns that a poorly-handled GNOME 3 initiative could encounter the same difficulties as KDE 4. How do you feel about where the GNOME 3 effort is going?
I think GNOME 3 ran the same risk as KDE 4 when we were focusing on gtk 3 as the driver behind GNOME 3. But we're focusing now on what users are going to see - on the new Shell, and on Zeitgeist. I don't think either of those are perfect, by any stretch, but I think they have at least the potential to offer a really compelling answer to the question of 'why should I use this?' The KDE team, by the way, is moving in that direction as well - I think their social desktop work, for example, has the potential to offer a very compelling story for users. If I were them, once that is mature and well-integrated I'd go ahead and call that KDE 5. Whether GNOME or KDE, that kind of user-focused, problem-solving feature is way more important than what version of the toolkit you build on.
The recent discussion of the one-slider GNOME volume control has brought back charges that the GNOME project values simplicity over giving control to the user. Is that your view of the GNOME project? Why do you think GNOME continues to have that reputation?
The long, and more serious answer is, well, long. There are a couple aspects of our philosophy that cause this problem:
(1) One aspect of our philosophy is that we always prefer to fix underlying problems instead of papering them over in the UI. As someone put it c. 2001, 'many options in a lot of our tools are really a switch that means 'work around this bug.'' Our philosophy is that you should fix the bug instead of adding the option. As a result, some of our software, particularly when it is very new, can be a real pain if it turns out you were relying on those bugs or on workarounds for those bugs.
Network Manager was like that for a long time - it worked on the majority of hardware and use cases, but certainly not all of it, so people kept screaming for new options. But the developers stuck with it, introducing new features only when they were sure they could do it as automagically as possible, and fixing bugs at lower levels instead of hacking around them at the UI level. And the entire Linux platform - for GNOME users and for non-GNOME users - is better now because we've forced wireless drivers to fix their bugs instead of providing workarounds in the UI. As a result, we've now got a tool that is reliable for virtually everyone and simple to use. Still not perfect, but I think comparable in ease-of-use and power with anything on any OS. I think the volume control will eventually be the same way, though admittedly it seems rough enough that I'm not sure I would have shipped it quite yet if it were my call.
(2) Another aspect of our philosophy is that options have a cost. For developers, they have a cost in QA; they have a cost in debugging; they have a cost in maintenance. Everyone who has done QA in free software has piles of stories about the horrors of debugging something because all the options weren't set just right. So we think that overall we make more software, and better software, by focusing in this way. More importantly, for users, options have a cognitive cost. It takes time and mental effort to figure these things out; time and effort that could be better spent doing the things you use a computer for - working on projects; talking with your friends; or whatever. You or I, who are experts and have used Linux as part of our day job every day for over a decade now, don't notice this cost. But for people who view Linux as a means to an end - getting their other work done - these costs are present every time they try to mess with the system. Again, why does my girlfriend want to see 8 volume switches when she goes to play her music? She just wants one, just like she just wanted her networking to work - and now it does.
(3) Finally, we believe that you can't make software that pleases everyone. You can make software that pleases experts, but most of the time non-experts hate that software. (Office, for example, was like this for a long time.) We're unabashedly trying to make software that works well for average users and not experts. We hope, obviously, that experts will use it, like it, and help us make it even better. (For example, you could help us work on a better plugin infrastructure so that we could move more options into plugins, like Firefox does ;) But if you like spending hours tweaking things so that you feel like you have more 'control', then yeah - it might be better for everyone if we just agree to disagree.
Obviously, I think these are all reasonable and important parts of our software philosophy; I think it means we make better software. If everyone understood them, we would still have some disagreements, but the disagreements would be made on more substantive grounds, with better understanding of the tradeoffs involved. We'd really want to see people criticize us on solid grounds - like, did we switch to the new volume control too early? how can we enable experts in ways that don't have big costs? - rather than on what we think of as fairly unreasonable grounds like 'I want my switches back.' For those who do want to understand this philosophy better, I'd recommend reading chapter five of the 37 Signals book 'Getting Real' - I don't agree with all of it, but that's the best reference I can think of for how we feel about features.
Is there anything else you'd like to tell LWN's readers?
Past that... I'm sure I'll think of something about an hour after the article goes up ;)
Your hour starts now :). Thanks to Luis for taking the time to answer our questions in such depth.
Open fonts at Libre Graphics Meeting 2009
École Polytechnique in Montreal played host to the fourth annual Libre Graphics Meeting (LGM) May 6 through 9, gathering around 100 developers and users of free graphics software from across the globe to collaborate, discuss, and learn. One of the biggest topics of the week was free and open fonts: their licensing, design, and integration with the free software desktop. In just a few short months, the release of Firefox 3.5 will push the issue into the forefront courtesy of Web Fonts, and the free software community aims to be ready.
![Dave Crossland](https://static.lwn.net/images/lgm2009-1_sm.jpg)
Dave Crossland and Nicholas Spalinger of the Open Font Library (OFLB) project each delivered a talk about OFLB (Crossland on the project's web site relaunch, and Spalinger on the challenges it faces moving forward), but the importance of free-as-in-freedom fonts permeated several other talks as well. Developer Pierre Marchand demonstrated changes in an upcoming revision of his FontMatrix application, and Chris Lilley of the World Wide Web Consortium (W3C) spoke about Web Fonts and other developments in CSS3.
Additionally, the "users" represented at LGM included graphic artists, but also professionals deeply invested in free font support for open source software — including XeTeX creator and Mozilla's font specialist Jonathan Kew, Brussels-based design agency Open Source Publishing, and Kaveh Bazargan, whose company uses free software to handle typesetting and file conversion for major academic publishing houses like the Institute of Physics and Nature.
A free font and free software primer
As with software, the main front in the battle over free fonts is licensing. Historically, digital type foundries like Adobe and Monotype have sold proprietary fonts to graphic design houses and publishers under very restrictive licensing terms that prohibit all redistribution. Freely redistributable fonts have existed for years, but licensing them in a free software context can be complicated, too.
When a font is used solely to produce printed output, licensing is not a problem, but when the font must be embedded inside another digital file (such as a PDF), licensing complications arise because fonts contain executable code in addition to the glyphs themselves - hinting, for example, which algorithmically adjusts the width and height of glyph strokes to align with the pixel grid of the display device in order to optimize sharpness. Including the font inside another document format that contains executable code, such as PDF or PostScript, arguably makes the resulting document a derivative work of the font.
A "font exception clause" for the GPL was written to allow font designers to license their creations under GPL-compatible terms without activating the GPL for all documents embedding the font. That solution did not catch on with type designers for a number of reasons, including the naming conventions of the type design world — where derivative fonts customarily do not reuse the upstream font's name to avoid confusion. Nonprofit linguistics organization SIL International created the simpler, font-specific Open Font License (OFL) to address designers' concerns while permitting redistribution, modification, and extension. The Open Font Library project was started to foster the creation and distribution of high-quality free fonts under the OFL.
OFLB has grown steadily since its inception, presently hosting around 100 fonts, but the project anticipates a sea change when Firefox 3.5 is publicly released this spring. Firefox 3.5 will add support for Web Fonts via the @font-face CSS rule, which allows a web page to specify text display using any font accessible via an HTTP URI. Before @font-face, the only fonts a page could realistically select through CSS were the ten "core fonts for the Web" from Microsoft: Andale Mono, Arial, Comic Sans, Courier New, Georgia, Impact, Times New Roman, Trebuchet MS, Verdana, and the always popular Webdings.
Because commercial type foundries by and large still object to redistribution of their products — even for display purposes only — the advent of @font-face marks a tremendous opportunity for OFLB and free fonts in general.
OFLB gets a redesigned site
Crossland previewed OFLB's newly revamped web site, which has been overhauled both visually and technologically. Donations paid for a professional redesign meant to appeal to graphic designers regardless of their interest in free software principles, and the new site runs on the ccHost content management system developed by Creative Commons.
The OFLB site will allow type designers to upload their fonts for public consumption; users will search and download them, and can re-upload "remixes" of the originals. Font "remixes" are expected to center around filling in missing glyphs, allowing the OFLB community to flesh out support for non-Latin alphabets, but remixes that make aesthetic changes to the original are also supported. In keeping with the OFL, remixes and originals will be cross-linked to each other, but remixes will have to choose a distinct name.
The new site will foster Web Font usage by allowing direct linking to its resources in @font-face directives. Each font's page contains the required CSS code snippet for simple copy-and-pasting into a page or template. OFLB has also worked to get its online library directly integrated into the font editing application FontForge. Crossland noted that although proprietary web page design software like Dreamweaver is popular with graphic designers, no comparable GUI tool is common among free software users, who tend to create sites with content management systems (CMS). The project is interested in integrating OFLB support into open source CMSes such as WordPress or Drupal that support theming, but nothing is in the works yet.
![Pierre Marchand](https://static.lwn.net/images/lgm2009-2_sm.jpg)
Between talks, discussion turned to the possibility of integrating features from Marchand's FontMatrix into the OFLB site. FontMatrix is a tool for maintaining large collections of fonts, selectively activating only those needed so as to conserve memory and make selection easier within design applications, but Marchand has added more and more diagnostic features to the program with each revision. The new version of FontMatrix he demonstrated can explore font metadata in depth, allowing searching through font collections based on such facets as language support, style, weight, license, and creator. The OFLB site could re-use some of that code to empower visitors to search its font collection in ways more powerful than today's tag-based browsing.
Growing the free font tent
![Collaboration Lab](https://static.lwn.net/images/lgm2009-4_sm.jpg)
Spalinger's OFLB talk focused on the challenges the project faces, including the possibility that users will attempt to upload fonts to the site that they do not own, such as proprietary fonts from commercial foundries. The project is debating how best to manage the site to ensure that only properly attributed, OFL-licensed work is submitted. Lilley observed that it may not be the project's legal responsibility to police the site, but only to respond appropriately when a type designer registers a complaint. Crossland concurred with that sentiment, but added that the project also wants to establish a bright line between its service, which aims to provide a designer-friendly, high-quality collection, and the scores of low-quality "free font" sites that garner little credibility or trust because of their policies.
Crossland added that one possibility would be to approach commercial foundries and offer to perform font fingerprinting on their products using FontMatrix's tools, then alert the foundries if a possible match was uploaded. Kew thought this approach unlikely to succeed, suggesting instead that it was better to do the reverse: make a public feed available of the fingerprints of the OFLB fonts, then respond to questions and concerns of the foundries if they detect a problem.
Other concerns include proposals for font file formats that include DRM — such as Microsoft's Embedded OpenType — and how best to encourage font designers to collaboratively extend OFLB fonts (such as adding new alphabets) without creating a glut of remixes for each source font that are never merged back into the upstream original.
Conclusion
Back in April, Mark Pilgrim famously ranted at the foundries for their stubbornness and refusal to acknowledge the importance of Web Fonts. Crossland referenced Pilgrim's comments in his talk, observing that the ability of @font-face to disrupt the legacy foundries' business model was a golden opportunity for OFLB and, by extension, free software. The foundries think that @font-face will cannibalize sales, but the end users who see the type displayed via @font-face were never the foundries' customers to begin with. The graphic designers are the customers, and graphic designers love fonts. If the foundries offer them nothing for use in Web Fonts, OFLB may well be their only option.
Other LGM sessions over the four-day event featured updates from major open source graphics and design applications like Scribus, Inkscape, and Gimp, research and technical demonstrations, and debates on critical issues such as usability, the rise of non-free web applications, and combining free software with profitability. All of the conference presentations and Q&A sessions were recorded by Bazargan, and are now available online in multiple video formats.
NLUUG: The bright future of Linux filesystems
As the maintainer for the ext4 file system, Ted Ts'o was the perfect speaker to open the recent NLUUG Spring Conference with the theme "File systems and storage". In his keynote at the conference in the Netherlands, he placed into context some developments and changes in file system and storage technologies.
His central question was: why has there been a flowering of new file systems showing up in Linux in the last 18 months? New file systems that have recently become available in the mainline kernel include ext4, btrfs, and UBIFS. The next Linux kernel release, 2.6.30, adds three new file systems: Nilfs, Pohmelfs, and exofs (formerly known as osdfs). Ts'o said that "it's now a fairly exciting time for file systems" and he added that this is partly thanks to Sun: "Sun woke up the field with their file system ZFS and they should deserve credit for it. Before the appearance of ZFS, the development of file systems virtually stood still for decades." At the moment, the Linux kernel tree lists 65 file systems, although most of them are optimized for a specific task and are not much used. Ts'o sees this as an opportunity for developers to experiment and innovate.
Of course the development of all these file systems doesn't come out of the blue. They are driven by new developments in storage technology, such as the advent of solid state drives (SSDs), data integrity fields, and 4K sectors. SSDs in particular have changed a lot in the storage stack: "The shift from relatively slow hard disks to fast SSDs means that many assumptions in the storage stack don't hold anymore." Even though Ts'o expects SSDs not to replace hard disks completely, he sees the shift as an interesting opportunity: "This spurs a lot of development, as people are finally talking about changing storage interfaces."
One change that is happening now is the shift in hard drives from 512-byte physical sectors to 4K. The 512-byte sector abstraction has been around for decades and is not easy to change, because the transition affects a lot of subsystems that currently do not handle a 4K sector size. For example, the partitioning system and the bootloader both require changes, because they rely on the convention that partitions start at the 63rd sector of the drive, which is misaligned with the 4K sector boundary. A proposed solution is to offset the 512-byte logical sectors so that the first logical sector starts at the second 512-byte slice of the first physical 4K sector; with that shift, sector 63 lands on a 4K boundary. However, Microsoft Windows spoils the party: newer versions start their partitions at a 1MB boundary, which is incompatible with this "odd-aligned" scheme. According to Ts'o, this is one of the reasons why storage vendors like to talk to open source projects: they want to move forward instead of holding on to legacy solutions. It remains to be seen whether Windows will join the party.
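A quick back-of-the-envelope check makes the conflict concrete. This is purely illustrative arithmetic, not code from the talk; a minimal sketch in Python:

```python
# Sector-alignment arithmetic behind the 4K transition (illustrative only).
SECTOR = 512      # legacy logical sector size, in bytes
PHYS = 4096       # new physical sector size, in bytes

# DOS-era convention: the first partition begins at logical sector 63.
print((63 * SECTOR) % PHYS)          # 3584 -> not on a 4K boundary

# "Odd-aligned" workaround: shift logical sectors by one 512-byte slice, so
# sector 63 starts at byte (63 + 1) * 512 = 32768, a multiple of 4096.
print(((63 + 1) * SECTOR) % PHYS)    # 0 -> aligned

# Newer Windows versions instead start partitions at 1MB (logical sector 2048),
# which is aligned on an unshifted drive but misaligned on an odd-aligned one.
print((2048 * SECTOR) % PHYS)        # 0 -> aligned without the shift
print(((2048 + 1) * SECTOR) % PHYS)  # 512 -> misaligned with the shift
```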
Another change that Ts'o deems important is object-based storage. Instead of presenting the abstraction of an array of blocks, addressed by their index in the array (as traditional storage systems do), an object store presents the abstraction of a collection of objects, addressed by a unique id. If the operating system uses object-based storage, it stores an object with an id, without having to know overly low-level details such as the sector or cylinder of the block on the hard drive. When the operating system wants to read the object later, it only has to know the object's id. Ts'o sees many advantages in this approach: "With object-based storage, the operating system can push more intelligence into the hard disk, which is better placed anyway to make intelligent decisions and improve performance."
Ts'o also notes that abstractions such as disks, RAID, logical volume management, and file systems are blending into each other more and more. "Maybe those different interfaces don't make sense anymore? ZFS figured this out very well by building all those interfaces under the umbrella of the file system, and btrfs will do something similar." But he warns that this doesn't mean that people should settle on ZFS or btrfs: "I hope that developers will keep exploring abstractions to find the right interfaces." Ts'o also expressed his hope that the license incompatibility between ZFS (CDDL) and Linux (GPL) would get fixed.
As a typical example of the proliferation of specialized file systems, Jörn Engel talked at the NLUUG conference about LogFS, his scalable file system for flash devices. Because most current file systems are designed for use on rotating drives, and because flash-based storage has some quirks, Engel decided to design a file system explicitly for flash. He started with a Fast File System (FFS) style design and adjusted many of the algorithms to work better with flash. For example, for copy-on-write, FFS rewrites blocks in place after the copy. Because flash storage cannot simply be overwritten, a flash block must be erased and rewritten in two separate steps, a requirement which can cause serious performance problems. Engel's solution was to use a log-structured design instead. Another issue was that the journal is written to storage frequently. Because there are limits to the number of times a block of flash memory can be erased and rewritten reliably, Engel's solution is to move the journal from time to time.
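To make the erase-before-write constraint concrete, here is a toy sketch in Python. It is purely illustrative and has nothing to do with the actual LogFS code; it only shows why appending to a log sidesteps rewriting flash pages in place:

```python
# Toy model of NAND flash: a page can only be programmed while erased, and
# erasing happens one whole block at a time (and wears the block out).
PAGES_PER_BLOCK = 4
NUM_BLOCKS = 8

pages = [None] * (PAGES_PER_BLOCK * NUM_BLOCKS)   # None means "erased"

def program(page_no, data):
    # Overwriting a programmed page in place is not possible; the whole
    # surrounding block would have to be erased first.
    assert pages[page_no] is None, "must erase the whole block first"
    pages[page_no] = data

# Log-structured approach: never rewrite in place; always append, and keep an
# index recording where the newest copy of each logical page lives.
head = 0      # next free page in the log
index = {}    # logical page number -> physical page holding the latest copy

def log_write(logical_page, data):
    global head
    program(head, data)
    index[logical_page] = head   # any older copy simply becomes garbage
    head += 1                    # a cleaner would later reclaim garbage blocks

def log_read(logical_page):
    return pages[index[logical_page]]

log_write(0, "version 1")
log_write(0, "version 2")        # an update is just another append; no erase
print(log_read(0))               # -> "version 2"
```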
Engel said that LogFS is almost ready for use. He is still chasing one hard-to-replicate bug, but, after that, he plans to submit the code for inclusion in the Linux kernel tree. LogFS should be better than JFFS2 on larger devices, because JFFS2 stores no filesystem directory tree on the device. This means that JFFS2 has to perform a time- and memory-consuming scan when it mounts the file system, building the directory tree at that time. Putting the tree on the device, as LogFS does, reduces mount time and memory requirements.
At the NLUUG Spring Conference a lot of recent developments were discussed, not only regarding file systems, as Ts'o showed, but also higher in the storage stack. Michael Adam, for example, stressed that Samba, which started as a free re-implementation of Microsoft's SMB/CIFS networking protocol, allows for setting up a clustered CIFS server, a feature that current Microsoft servers do not offer.
The NLUUG Spring Conference was an interesting event thanks to the breadth of the topics presented. On the one hand there were introductory talks about the possibilities of ZFS, the virtual filesystem libferris and practical experiences with WebDAV. On the other hand, visitors could get some first-hand and highly specific information about the future direction of projects like DRBD, device-mapper and LogFS. This way, the conference had something for everyone: it gave a broad overview of the current state of the art in file systems and storage, while providing enough technical details for those interested in it. At least your author came home with a better understanding of file systems and storage in the Linux ecosystem.
Page editor: Jonathan Corbet