
Fighting small bugs

July 21, 2009

This article was contributed by Bruce Byfield

Paper cuts, points of pain, obstacles and annoyances — whatever description you prefer, in the last six weeks, small bugs have started receiving closer attention from developers of the free software desktop. In Ubuntu, they are the focus of One Hundred Paper Cuts, and in Fedora of Fit and Finish, but in both cases, the goal is the same: to significantly improve the user experience by tackling bugs that can be quickly corrected. As an important side effect, these efforts are also allowing free software developers to approach usability in new ways.

Concentrated efforts at bug-squashing have a long history in free software development, so the idea of focusing on small bugs probably has multiple origins. However, one origin is the 0.8 release of GNOME Do in January 2009. The release fixed 111 bugs — over three times the number fixed in the 0.8.1 release — and received enthusiastic feedback from users.

When GNOME Do leader David Siegel joined the Design and User Experience team at Canonical a few weeks after the 0.8 Do release, he took the small bug meme with him. The Design and User Experience team is the group within Canonical whose task is to realize Mark Shuttleworth's challenge of improving the Ubuntu desktop, and Siegel soon saw similarities to Do: "We started to notice all these small things that were in each release that were never getting fixed," Siegel said. "And I said to Ivanka Majic, who's the leader of the Design team, 'We need a way to red flag these things that are obviously wrong, and make sure they are fixed before they go out.'" Within a couple of months, Siegel found himself leading One Hundred Paper Cuts.

According to Red Hat employee Matthias Clasen, a similar situation exists in Fedora. "There is a challenge in working on the Fedora desktop between feeling squeezed to finish cool new features in time for the next release, and fighting to get Rawhide [Fedora's development branch] into a somewhat working state, with too little time to devote to fit and finish" — that is, to polishing and removing rough edges. Like Siegel, Clasen now finds himself at the head of an effort to provide that missing attention to detail.

Whether Fedora and Ubuntu influenced each other in these developments is uncertain. However, in both cases, the advantage of focusing on small bugs is obvious. As Siegel explained, "They're low-hanging fruit. They allow us to quickly, inexpensively improve the user experience. We don't have to create new interfaces; we just have to fix these tiny, trivial bugs. It's just a small component, but it's something that can have an immediate impact."

Same problem, different approaches

Despite the similarity of the problems, Ubuntu and Fedora have organized their solutions in somewhat different ways. At Canonical, the One Hundred Paper Cuts project set a goal of addressing one hundred bugs during the development cycle of the upcoming "Karmic Koala" release, which is scheduled for October 2009. To narrow the focus enough to make it manageable, the project decreed that the bugs would center around what Siegel calls "the space between applications" or features such as the panel and the Nautilus file browser. Users were invited to report a paper cut bug via Ubuntu's Launchpad, and the initial one hundred paper cuts were chosen from the several thousand that were submitted.

For this first cycle, the project chose (despite its name) to divide the work into ten rounds of eleven bugs each: ten in each round concerning Ubuntu's main GNOME desktop, and one concerning Kubuntu, the KDE variant of Ubuntu. The project also defined its scope precisely. According to Siegel, a paper cut is a bug that users would encounter with a default installation from a Live CD. Although it would not actually prevent users from completing everyday tasks, it might cause momentary annoyance or distraction.

Theoretically, a paper cut must be correctable by one developer in one day. But, in practice, Siegel said, "We set a sort of gray area. For example, some of the paper cuts we've confirmed and want to fix for Karmic really take weeks to fix, but the work is going to take place anyway, and the paper cut is just a little bit of extra work beyond that."

By contrast, Fedora's Fit and Finish project chose a less formal approach. As Clasen explained, Fedora's Quality Assurance team was already in the habit of holding test days, in which participants prepared by downloading designated software and discussing what they found on IRC. Fit and Finish simply decided to hold its own test days during the development of Fedora 12, based on input submitted to the project's Bugzilla or by email. For each test day, instructions for participation are listed on a separate page, and both bugs and "things that work right" are summarized on the page after the discussion.

One difference from Ubuntu is that Fedora chose "to focus on user tasks, as opposed to individual features," according to Clasen. Nor did Fit and Finish limit the number of bugs to be covered in a single test day or to be fixed. However, Clasen did add, "I realize that numbers — like the points awarded by bugzilla.gnome.org — can be a strong motivation, so we may revisit this at some point."

Early results

To date, Fit and Finish has held just one test day, on the subject of display configuration, although pages for the topics of batteries and suspend, and peripherals, are already posted. Clasen noted that this first day did not have "an overwhelming participation." He added, though, that the relatively low turnout was probably due to the fact that many desktop developers were at the Gran Canaria Desktop Summit that was in progress on the same day. A week later, the bugs arising from the testing day have been filed and assigned to a tester, but none have been closed.

In comparison, One Hundred Paper Cuts has had two rounds so far, and the bugs to tackle in each future round are already posted. Of the eleven bugs in the first round, seven are now listed as fixed, one as having had a fix committed, and another two as "in progress," with only one listed as incomplete. In the second round, which finished on July 11, two are marked as fixed, another three have a fix committed but not yet applied, four are in progress, and two are listed as confirmed. Early indications are that One Hundred Paper Cuts is producing quicker results, possibly because its goals are better defined.

However, a disadvantage of the One Hundred Paper Cuts approach is the appearance it creates that some bugs are being given special treatment. For this reason, Siegel felt compelled to emphasize that "Paper cuts are just an additional level of importance being attached to bugs. A lot of people whose bugs get rejected for paper cut status will get angry and frustrated and say, 'Does this mean that the bug's not going to get fixed?' But it just means it's not going to be the focus. Many, many bugs will be fixed for Karmic; this set of one hundred is just getting a little extra push and a little extra attention."

As might be expected given Fit and Finish's narrower topic, the bugs it generated fall into recognizable categories. "Naturally, a lot of the bugs that we have collected on that day are X driver bugs, bugs in the GNOME modules that play a role in display configuration (the display capplet, gnome-settings-daemon, and libgnome-desktop)," said Clasen. "Another cluster of bugs has to do with applications that have some sort of 'presentation mode'" that requires a dual-monitor setup.

"In terms of their severity," Clasen added, "the issues ranged from minor UI annoyances (wrong colors, too-large fonts) to feature improvements (make the gthumb slideshow mode display on a connected projector) to serious bugs (rotating a second monitor renders the screen at an offset)."

In comparison, bugs addressed by One Hundred Paper Cuts tend to be of low or medium severity, and more diverse. Many, though, center on Nautilus: the names given to menu items, and the composition and behavior of the path bar and toolbar. Others, no doubt inspired by Canonical's efforts to improve notifications, center on system messages, some asking for messages that explain themselves more clearly, others for the removal or rewording of unclear ones.

Usability testing at last

At this point, any evaluation of Fit and Finish or One Hundred Paper Cuts must be tentative. Both are scheduled to be evaluated after the development versions they are focused upon are officially released. However, one early problem that has already emerged is that the upstream project — GNOME — does not give the small bugs the same priority that Ubuntu and Fedora are assigning them. "Our goal is to get them fixed on a weekly basis," Siegel said, "But an upstream bug will just sit there. They don't have the same sense of urgency. That's been just a little bit frustrating, because priorities are different."

Still, while the workflow in the two projects may be refined by internal review, talking to the organizers of these small-bug projects leaves the impression that neither effort is going away. As Siegel described the situation, such projects represent "the biggest bang for your development buck." In other words, they produce quick results that are obvious to the average user. Another advantage of these projects is that they encourage user participation in the development cycle. As a result of One Hundred Paper Cuts, "A lot of people are submitting their first time bugs," Siegel observed — an advantage that, after his experience with GNOME Do, he describes as "definitely calculated."

In much the same way, Clasen observed that "test days also serve as 'meet our user' days. While the audience is far from unbiased — most participants are certainly tech-savvy fedora-devel-list readers — having written use cases helps a lot when trying to shed the developer perspective." An important aspect of this exchange is that it encourages developers to look beyond their own sets of packages, as happened when X Window developer Adam Jackson fixed and enhanced GNOME's handling of monitors.

In the short term, too, such projects have the advantage of encouraging more people to try development releases — a goal that many projects often find elusive. Referring to Fedora's development release, Clasen says, "If Rawhide is more stable, more people will use it, broadening our tester base. And if we don't have to fight Rawhide breakage, we have more time to devote to the user experience issues identified by our test days."

Usability testing has always been difficult in free software, partly because people rarely meet face to face and partly because it requires a trained perspective to do well. But with projects like Fit and Finish and One Hundred Paper Cuts, free software may have just discovered its own method for approaching usability issues and for giving people a chance to learn by doing. In the end, this encouragement of usability testing might be as significant a result of the small bug meme as the improvements it brings to the desktop.



Fighting small bugs

Posted Jul 23, 2009 4:51 UTC (Thu) by xav (guest, #18536) [Link]

s/Klasen/Clasen/ ?

Fighting small bugs

Posted Jul 23, 2009 13:08 UTC (Thu) by mclasen@redhat.com (subscriber, #31786) [Link]

Yeah, I'd like to have my C back, please :-)

Fighting small bugs

Posted Jul 23, 2009 13:14 UTC (Thu) by jake (editor, #205) [Link]

> Yeah, I'd like to have my C back, please :-)

Fixed now, sorry about that!

jake

Fighting massive data loss bugs

Posted Jul 23, 2009 6:59 UTC (Thu) by Cato (subscriber, #7643) [Link]

This is important stuff, and well worth supporting. Right now, however, I'm far more concerned with the really huge bugs - the ones that lose large amounts of data. Here's a recent example...

A relative's PC runs Ubuntu 8.04 (stable version) - a couple of weeks ago there was a filesystem corruption on the root FS, which uses ext3 on LVM. The kernel remounted the FS as readonly, but without notifying the user. I noticed this only recently, so I did a remote login - there weren't any hardware or kernel errors in the logs, only the remount message. The system was still usable and bootable at that point.

I ran e2fsck to fix the block device (FS was still read-only) - thousands of errors were found. One of these must have been in a key library, so that executing any command failed, although most files were still there. I now have to spend at least a day driving over there, re-installing Ubuntu, recovering from backups, etc.

This PC was built a year ago, with only high quality components (robust PSU, good motherboard capacitors, etc) and a conservative setup, including a UPS, and I chose Linux largely so I could maintain it remotely without hassles. Clearly this has not worked...

There are several things wrong here:

- the data loss bug itself - given the reports on Launchpad I strongly suspect a kernel or e2fsck bug, probably the latter. This is the second time I've had data loss due to an ext3 corruption without hardware errors - both times on LVM, perhaps that's a factor. I don't think I've ever lost a whole filesystem on Windows with FAT or NTFS other than with hardware errors.

- the fact that a stable version of a major Linux distro can have such major data loss problems over a year after its release

- (somewhat on topic) lack of an unmissable and persistent notification to the user (or ideally a remote administrator) that a significant error has happened (kernel noticing FS corruption and remounting the FS read-only). In fact, https://bugs.launchpad.net/bugs/28622, which covers this, could be called a 'papercut' bug if the consequences weren't more serious.

- the lack of really good and low cost online remote backup for Linux - I use SpiderOak which worked well in this case, and I find better than JungleDisk or Dropbox, but on another PC it has silently not done any backups for over a month.

- lack of continuous fscks (with meaningful notification to user or administrator) for PCs that are left switched on most of the time.

I use Linux and ext3 because I like reliability - I'm really stunned to find a massive data loss bug like this in 2009 on such a mature filesystem in a stable distro. Obviously such bugs are hard to reproduce but they are reported a lot.

Ubuntu 9.04 Jaunty apparently has a post-release kernel update that *introduces* a new ext3 data loss bug, yet this update has not been pulled... https://bugs.launchpad.net/ubuntu/+source/linux/+bug/346691 has the details. I really like Ubuntu but the non-handling of this data loss regression is rather horrifying - simply removing the updated kernel would be enough. I just installed Ubuntu 9.04 on a friend's PC last weekend - fortunately it was only as a recovery OS alongside Windows, in light of this.

Fighting massive data loss bugs

Posted Jul 23, 2009 7:16 UTC (Thu) by Cato (subscriber, #7643) [Link]

Correction: the Ubuntu Jaunty data loss bug mentioned in last paragraph is not ext3 related - most likely it's in the ata_piix module.

Fighting massive data loss bugs

Posted Jul 23, 2009 11:06 UTC (Thu) by michaeljt (subscriber, #39183) [Link]

Quite agree, although I don't see a contradiction (in my own experience, the "paper cut" sort of bugs are the ones that you fix while you are taking a break from tracking down the serious ones). My own priority here would be, on the one hand bugs with serious consequences (see your example), and on the other, easily fixed bugs which have a reasonable chance of being noticed by more than one or two people. Again, my experience is that you can often quickly tell the last category when a group of users start collaborating over the bugtracker and successfully isolate the bug enough to make it trivial to fix.

Fighting massive data loss bugs

Posted Jul 23, 2009 11:09 UTC (Thu) by michaeljt (subscriber, #39183) [Link]

Slightly (but only slightly) off-topic, I feel that it is a shame that people suggesting ways non-programmers can contribute to free software don't emphasise this sort of collaboration enough. It can be done with minimal (though not non-existent) technical skills and no knowledge of programming, and I'm pretty sure that seeing bugs fixed as a result of this sort of work is the sort of rewarding experience that makes people want to use free software.

Fighting massive data loss bugs

Posted Jul 23, 2009 14:04 UTC (Thu) by Baylink (subscriber, #755) [Link]

I could ask why the root FS was on an LVM, but I probably shouldn't. :-)

Fighting massive data loss bugs

Posted Jul 23, 2009 16:55 UTC (Thu) by Cato (subscriber, #7643) [Link]

Fair point, but isn't LVM supposed to be production quality these days? This particular corruption doesn't have the indicators of LVM being involved (such as writing beyond the end of the volume). Mostly I thought by avoiding RAID and LVM snapshots it was safe enough. However, I will remove LVM on the recovered machine, mostly so I can enable ext3 barriers, which LVM doesn't permit.

Most of the FS corruption reports I've seen don't mention LVM, so I suspect a bug elsewhere. One report had repeatable corruption on VMware VMs with Ubuntu 8.04 guests, for example, and other corruption reports are on ext3, XFS, JFS, etc, with common factor being an Intel chipset. Quite a few are probably due to dodgy hardware, which makes this hard to pin down. In fact for all I know this is due to bad RAM or a hard disk problem that doesn't appear in the system logs.

Assuming LVM is stable, it's quite easy to recover an LVM machine these days - Knoppix, SystemRescueCD and others support LVM. It's only if there's disk corruption that LVM makes things harder, but that's what backups are for.

I am discovering that SpiderOak is not as good at recovery as it should be - client doesn't work on one machine, and the web download feature generates an invalid ZIP for one directory... So I'm open to recommendations for inexpensive online backup for Linux machines that don't involve rolling your own (I've already done that and want something easier to maintain - but I may go with rsnapshot in future just to avoid the hassles of backup services that don't quite work the way they should).

Fighting massive data loss bugs

Posted Jul 23, 2009 18:07 UTC (Thu) by Baylink (subscriber, #755) [Link]

I suppose it is, but personally, the only unusual thing I want / and /boot on is "real" hardware RAID 1. When things start to go to hell, the fewer distractions you have, the better off you are.

I'm a sysadmin; I'm paid to be paranoid.

Fighting massive data loss bugs

Posted Jul 23, 2009 22:54 UTC (Thu) by Cato (subscriber, #7643) [Link]

I've now had a look at the machine - there are two disks which have mostly LVMed partitions, and one disk is showing classic signs of LVM errors resulting in ext3 corruption (not the one with the root FS, but one with several LVMs for local backup). I used SystemRescueCD initially and the LVM commands showed the LVM state was quite messed up, plus log messages like this:

Jul 23 19:06:57 sysresccd attempt to access beyond end of device
Jul 23 19:06:57 sysresccd sda: rw=0, want=198724030, limit=66055248
Jul 23 19:07:20 sysresccd attempt to access beyond end of device
Jul 23 19:07:20 sysresccd sda: rw=0, want=198723798, limit=66055248

One weird thing is that the LVM on the main disk that hosted the root FS didn't show any LVM related errors, but that was the one with the major corruption. Of course the backup LVM had major corruption but I wasn't focusing on that. In fact a generally odd thing is that the Ubuntu logs didn't show any errors on either disk (i.e. ext3 or LVM type errors), apart from the 'FS remounted' one, yet SystemRescueCD showed them right away.

Another weird thing is that despite the root FS being remounted read-only, the logs in /var were still being written to for 10 days after the first root corruption - surely this is a bug as it can only increase FS corruption.

I haven't yet run a memory test but the system doesn't show any other signs of bad RAM such as randomly crashing applications. The logs also don't show any disk hardware errors.

Anyway, the lesson is simple: never, ever use LVM again. Gparted is pretty good these days for resizing/moving partitions, and the time I have saved on LVM is far less than the hassle of this recovery exercise.

Sorry for going so far off topic, but perhaps LWN would like to write a piece on data loss bugs and how best the community should address them - maybe starting with LVM...

Fighting massive data loss bugs

Posted Jul 23, 2009 22:56 UTC (Thu) by dlang (✭ supporter ✭, #313) [Link]

is /var a separate mount? if so it should keep going even if / gets remounted ro

if the underlying device becomes ro, the OS can buffer writes in ram that it wants to get to the filesystem, but can't because it's ro.

this causes more lost data, but not more corruption.

Fighting massive data loss bugs

Posted Jul 23, 2009 23:27 UTC (Thu) by Cato (subscriber, #7643) [Link]

In this case there was no separate /var FS, and the updates to /var/log/messages have persisted across at least one reboot. So somehow the root FS was mounted readonly (I got errors on trying to write to files so it really was readonly), yet the log files were being updated on /var ...

Fighting massive data loss bugs

Posted Jul 25, 2009 20:22 UTC (Sat) by Cato (subscriber, #7643) [Link]

Now that I've rebuilt the PC... I actually lost the contents of two filesystems (root and backup), each on a separate physical disk, and hosted on LVM logical volumes within separate volume groups. There weren't any physical disk errors, or noticeable errors relating to PATA/SATA cables, memory, etc. The only common factor is that both LVM VGs were, well, handled by LVM. It's also suspicious that only one LVM FS was uncorrupted, plus all of the non-LVM FSs.

I suspect that some combination of disk write caching plus LVM and possibly ext3 caused these problems. At least some of the problem was purely at the LVM level, since I couldn't even access the VGs on the backup disk, and got LVM errors.

In the hope that it helps someone else:

- To help avoid integrity problems in future, I used the ext3 'data=journal,barriers=1' options in fstab, and also used tune2fs to set the journal_data option on the root FS (the only way that worked for root). I also disabled disk-level write caching with 'hdparm -W0 /dev/sdX' on both hard disks. This will have some performance cost but this PC is ridiculously fast for light email and web surfing anyway.

- I've dropped SpiderOak for online backup - it didn't back up most of the files (on two PCs, in different ways), generated a corrupt ZIP file on recovering some files via web interface, and the GUI client got stuck recovering files, and generally makes it hard to track backups/restores.

- I have implemented local backups with rsnapshot, which is really outstanding for multi-version, rsync-based backups, and will extend this for online backups, possibly using DAR to encrypt and compress for remote backups.

- Sbackup (Simple Backup) is great for really quick backup setup (literally 2 minutes to install, configure and have first backup running), but I wouldn't rely on that alone.
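[Editor's note: the fstab/tune2fs/hdparm recipe in the first item above can be sketched as a script. The device names below are examples only, and note that the ext3 mount option is actually spelled "barrier=1" (singular).]

```shell
# Sketch of the hardening steps above. Device names are hypothetical;
# note the ext3 mount option is spelled "barrier=1" (singular).
ROOT_DEV=/dev/mapper/vg0-root    # example root logical volume

# fstab line enabling full data journalling plus write barriers:
FSTAB_LINE="$ROOT_DEV / ext3 data=journal,barrier=1 0 1"
echo "$FSTAB_LINE"

# The remaining steps need root and real hardware, so they are shown
# as comments only:
#   tune2fs -o journal_data "$ROOT_DEV"   # set journalling mode in the superblock (root FS)
#   hdparm -W0 /dev/sda                   # turn off the drive's volatile write cache
```

On a real system, the generated line would replace the existing root entry in /etc/fstab, and the hdparm setting typically needs reapplying at each boot, for example from an init script.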

Also, if you haven't used etckeeper before, it's worth a try - version control for the whole of /etc using git, hg, bzr, or darcs, and also tracks APT package installs that generate /etc changes. Great if you need to replicate some or all of the setup at a later date.
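[Editor's note: etckeeper is essentially a thin layer that keeps /etc in a VCS and commits automatically around package operations. A minimal sketch of the core behavior, using plain git on a scratch directory rather than the real /etc; file name and commit message are illustrative.]

```shell
# Minimal sketch of what etckeeper automates, using plain git on a
# scratch directory instead of the real /etc.
set -e
SCRATCH=$(mktemp -d)
cd "$SCRATCH"

git init -q .                              # roughly: etckeeper init
git config user.email admin@example.com
git config user.name "Example Admin"

echo "PermitRootLogin no" > sshd_config    # a config change in "/etc"
git add -A
git commit -q -m "daily autocommit"        # roughly: etckeeper commit

git log --oneline
```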

Fighting massive data loss bugs

Posted Jul 23, 2009 21:42 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

My #1 guess would be failing RAM. DIMMs don't often go bad, but it does happen and there is nothing in most PCs that will detect it, you just start to see the wrong bits, and of course most of those bits are either coming from or going to files, so it's easy to blame the filesystem.

Filesystem bugs are like any other bugs, they tend to be repeatable, they do something stupid and wrong but not entirely ridiculous (e.g. they don't flip a few bits in the middle of a file, but overwrite an entire block with something else) and so on. If you see weird problems, and especially if you see problems that don't have any clear pattern, that's _much_ more likely to be bad RAM.

My personal paper cut

Posted Jul 23, 2009 14:06 UTC (Thu) by Baylink (subscriber, #755) [Link]

is the complete lack of user-level (or even administrator-level) manageability of file associations in Firefox... all the way up to 3.5.

Go ahead: make your firefox use some program to play .gsm audio files; I dare you.

My personal paper cut

Posted Jul 23, 2009 17:35 UTC (Thu) by jimparis (subscriber, #38647) [Link]

Hmm?
Seems easy.
First I run:
$ sudo apt-get install sox libsox-fmt-gsm
Then I find some random website:
http://fox-den.com/ASTERISK/sounds/2005-07-30-new/gsm/
I click on a file.
It pops up a dialog: "What should Iceweasel do with this file?"
I select "browse"
I type "/usr/bin/play"
I click "Do this automatically for files like this from now on"
I hit OK
I hear the sound.
I click another file.
I hear the sound.
What problem are you having?

My personal paper cut

Posted Jul 23, 2009 17:41 UTC (Thu) by Baylink (subscriber, #755) [Link]

The problem I'm having is that, um, it doesn't do that.

That's what *I'd* expect, too. Sometimes, with some point releases, one some platforms, that's what I get. Sometimes not. Doesn't seem deterministic.

And since the very simple "Add" button is missing from whatever version of that association mapping dialogue you have, there's no manual way to fix it if that happens to you... as it happened to us.

My personal paper cut

Posted Jul 23, 2009 17:48 UTC (Thu) by jimparis (subscriber, #38647) [Link]

I'm not sure what you mean by the "add" button. I can go into Edit -> Preferences -> Applications, scroll down to "GSM file", and change the Action to anything, including setting it back to "Always ask".

I'm still unclear on what your actual problem is, besides "sometimes it doesn't work". What exactly, on your system, happens when you click on a GSM file? Does it open with an application that you don't like, does it only offer to save to disk, or does it not do anything? Do you have something like mozplugger installed that is taking over the association? Do you have an audio/x-gsm entry in your .mailcap file or in /etc/mailcap?

My personal paper cut

Posted Jul 23, 2009 17:58 UTC (Thu) by Baylink (subscriber, #755) [Link]

The problem is "scroll down to GSM file".

I *very often* (nearly always) *do not have* GSM file in that list, on a fresh install of Firefox. And if the type of file you have isn't in the list and you aren't presented with the "open with" option, then you're screwed.

I know that's not supposed to happen, but it does, and not infrequently.

Additionally, the dialog of which you speak -- and this complaint is closer in spirit to the sort of thing they're actually looking for --

1) has 2 columns, with an immovable divider
2) cannot be resized
3) lists the human readable name of the filetype (which is almost always useless) followed by the mimetype (which is very often useless, and is often clipped off partially or completely because of 1 and 2) and does not show the only thing you can actually see -- the file extension -- at all.

My personal paper cut

Posted Jul 23, 2009 18:10 UTC (Thu) by jimparis (subscriber, #38647) [Link]

I also did not have "GSM file" on that list, until I clicked one for the first time. And the behavior of clicking on a file (that is not on that list) is to pop up that dialog box where you pick an application, in my experience.

So your problem is that nothing at all happens -- you click the file and it behaves as if you didn't click the file?

Which means we're back to my previous questions -- do you have something like mozplugger installed (check about:plugins), or do you have an action defined in one of the mailcap files (grep gsm /etc/mailcap ~/.mailcap)?

(Are you actually interested in fixing this or just complaining about it? I'm trying to help but you're really not giving enough information for me to provide any useful advice).

My personal paper cut

Posted Jul 30, 2009 15:22 UTC (Thu) by nye (guest, #51576) [Link]

My 'solution' is, whenever Firefox pops up a dialogue asking how to open a file, to choose '/usr/bin/xdg-open'. This allows me to use whatever file handler I define in my DE, neatly bypassing Firefox's hilariously bad 'open with' system.

My personal paper cut

Posted Jul 30, 2009 16:27 UTC (Thu) by bronson (subscriber, #4806) [Link]

haha, this is brilliant! And obvious -- in all these years of pain (papercut-quality pain) I'm surprised I didn't think of this.

Good call nye.

My personal paper cut

Posted Jul 24, 2009 16:30 UTC (Fri) by cortana (subscriber, #24596) [Link]

Having to send a file browser to /usr/bin to find an executable to handle a file type is insane. There are 3240 files in that directory on my system. How a non-expert user is supposed to cope is beyond me--if they don't give up after Firefox 'crashes' (freezes while stat'ing all the files in there) first.

Compare with the user experience in Epiphany, which actually makes an effort to integrate with the GNOME desktop; the user is presented with the option to open the file with their default handler, or save it to disk. The default handler is determined via the freedesktop.org MIME spec, which is the same thing used by everything else in GNOME. An administrator may change that for a particular user, or all users, using the standard Unixoid methods outlined in the spec.

Where Epiphany falls down is in letting the user change their preferred handler for a file type; there is no UI for that yet. So the user would have to save the file, then change the handler for it in Nautilus. Eventually, hopefully Epiphany will be enhanced to ask the MIME database who could handle the MIME type it's been served, and give the user a nice list of all the programs that can handle it, in the same way that Nautilus does. Nothing's perfect, eh? :)

My personal paper cut

Posted Jul 24, 2009 16:49 UTC (Fri) by jimparis (subscriber, #38647) [Link]

Forgive me if I'm dense, but I'm still not seeing what Firefox is doing wrong here. As far as I can tell, it's following the exact same method as everyone else -- it uses /etc/mime.types to map mime-types to file extensions, and it uses /etc/mailcap and ~/.mailcap to choose a default application. If you want to change the default application, then that's the only time you need to open up the file browser -- and I rarely use it to actually browse /usr/bin, I just type the desired executable name directly.

(Incidentally, the Freedesktop.org spec only seems to cover mime types, not mailcap. I don't see which of their specs covers default applications?)
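[Editor's note: the mime.types/mailcap lookup described above can be demonstrated with a throwaway mailcap file; the entries below are examples, not system defaults.]

```shell
# Demonstrate the /etc/mailcap-style lookup described above, using a
# throwaway mailcap file (entries are examples, not system defaults).
set -e
MAILCAP=$(mktemp)
cat > "$MAILCAP" <<'EOF'
audio/x-gsm; /usr/bin/play %s; description="GSM audio"
application/pdf; evince %s
EOF

# The handler is the second semicolon-separated field of the first
# line whose MIME type matches; %s stands for the downloaded file.
handler=$(grep '^audio/x-gsm;' "$MAILCAP" | head -1 | cut -d';' -f2 | sed 's/^ *//')
echo "handler: $handler"
rm -f "$MAILCAP"
```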

My personal paper cut

Posted Jul 24, 2009 16:52 UTC (Fri) by jimparis (subscriber, #38647) [Link]

By the way, that's not to say Firefox can't use improvements in this area. For example, bug https://bugzilla.mozilla.org/show_bug.cgi?id=83305 has been an occasional issue for me for many years now -- with upstream's response being basically "blah, I can't be bothered to fix this". One of these days I'll be motivated to fix it myself, but my brief glances into the Firefox code always scare me...

My personal paper cut

Posted Jul 24, 2009 23:06 UTC (Fri) by cortana (subscriber, #24596) [Link]

The problem is that you are an expert, whereas most users--for instance, my parents are not. :)

They will be presented with a file selection dialog box and have no idea that they are supposed to go to /usr/bin, and then wait for Firefox to unfreeze, and then pick one of thousands of similarly named items that have absolutely no connection with what they want to do.

For example, opening a PDF document... how is a normal person supposed to know to select evince? :(

As for the freedesktop.org spec... I actually mis-spoke (typed?) earlier. The spec I mentioned allows programs to declare MIME types (that is, provide a mapping from MIME type to human-readable description). It serves a similar purpose to /etc/mime.types, except that it is more modular (it allows applications to define new MIME types) and it allows for the MIME types to have human-readable descriptions, localized to many different languages.

The spec I should have mentioned is the Desktop Entry spec; this is where applications ship .desktop files (in /usr/share/applications and other places) that specify (among other things) which MIME types an application may handle. It is similar in purpose to the mailcap mechanism, but again it is more modular and allows internationalization, as well as desktop integration (e.g., application menu entries are derived from the .desktop files).

So, Firefox should be reading these .desktop files and offering the user's default handler for a file, along with a selection of other applications that declare that they handle the MIME type. On my system:

$ grep application/pdf /usr/share/applications/mimeinfo.cache
application/pdf=evince.desktop;gimp.desktop;

My personal paper cut

Posted Jul 24, 2009 20:50 UTC (Fri) by nix (subscriber, #2304) [Link]

Whatever FF is doing that makes it freeze, it's not statting:

nix@hades 40 /home/nix% /usr/bin/time stat /usr/bin/* >/dev/null
0.21user 0.14system 0:00.78elapsed 46%CPU (0avgtext+0avgdata
0maxresident)k
0inputs+0outputs (1major+335minor)pagefaults 0swaps

Maybe it's running file(1) or libmagic on every single one? That could
look like a freeze if you didn't notice the disk pounding away:

nix@hades 41 /home/nix% /usr/bin/time file /usr/bin/* >/dev/null
0.17user 0.58system 0:26.16elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+735minor)pagefaults 0swaps

(This is on a system with 3977 binaries in /usr/bin.)

My personal paper cut

Posted Jul 24, 2009 21:18 UTC (Fri) by jimparis (subscriber, #38647) [Link]

> (This is on a system with 3977 binaries in /usr/bin.)

Now it's a challenge :)

$ ls /usr/bin | wc -l
2660
$ ssh psy ls /usr/bin | wc -l
3180
$ ssh bucket ls /usr/bin | wc -l
2221
$ ssh neurosis ls /usr/bin | wc -l
2605
$ ssh oldneurosis ls /usr/bin | wc -l
4036

Finally!

My personal paper cut

Posted Jul 24, 2009 22:08 UTC (Fri) by nix (subscriber, #2304) [Link]

Aha. I just installed KDE4 on that machine (in parallel with KDE3, OK,
yes, I'm reaching):

nix@hades 3 /home/nix% ls -l /usr/bin | wc -l
4102

:)

(is this the single most pointless contest that has ever been carried out
on LWN? I bet I have more symlinks in /usr/bin than you: 4099...)

Copyright © 2009, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds