
Left by Rawhide

By Jonathan Corbet
July 16, 2012
Your editor has long made an effort to keep a variety of Linux distributions around; this is done both to avoid seeming to endorse any particular distribution and to have a better idea of what the various distributors are up to. The main desktop system, though, has been running Fedora Rawhide for many years. That particular era appears to be coming to an end; it is worthwhile to look at why that is happening and how it reflects on how the Fedora project operates.

Rawhide, as it happens, is older than Fedora; it was originally launched in August, 1998—almost exactly 14 years ago. Its purpose was to give Red Hat Linux users a chance to test out releases ahead of time and report bugs; it could also have been seen as an attempt to attract users who would otherwise lean toward a distribution like Debian unstable. Rawhide was not a continuously updated distribution; it got occasional "releases" on a schedule determined by Red Hat. One could argue that Fedora itself now plays the role that Red Hat had originally envisioned for Rawhide. But Rawhide persists for those who find Fedora releases to be far too stable, boring, and predictable.

The Rawhide distribution does provide occasional surprises, to the point that any rational person should almost certainly not consider running it on a machine that is needed for any sort of real work. But, at its best, Rawhide is an ideal tool for LWN editors, a group that has not often been accused of being overly rational. Running Rawhide provides a front-seat view into what the development community is up to; fresh software shows up there almost every day. And it can be quite fresh; Fedora developers will often drop beta-level software into Rawhide with the idea of helping to stabilize it before it shows up in finished form as part of a future Fedora release. With Rawhide, you can experience future software releases while almost never having to figure out how to build some complex project from source.

Rawhide also helps one keep one's system problem diagnosis and repair skills up to date—usually at times when one would prefer not to need to exercise such skills. But that's just part of the game.

In the early days of Fedora, Rawhide operated in a manner similar to Debian unstable, but with a shorter release cycle. When a given Fedora release hit feature freeze, Rawhide would freeze and the flow of scary new packages into the distribution would stop. Except, of course, when somebody put something badly broken in anyway, just to make sure everybody was still awake. While the Fedora release stabilized, developers would accumulate lots of new stuff for the next release; it would all hit the Rawhide repository shortly after the stable release was made. One quickly learned to be conservative about Rawhide updates during the immediate post-release period; things would often be badly broken. So it seemed to many that Rawhide was a little too raw during parts of the cycle while being too frozen and boring at other times.

Sometime around 2009, the project came up with the "no frozen Rawhide" idea. The concept was simple: rather than stabilize Fedora releases in the Rawhide repository, each stable release would be branched off Rawhide around feature-freeze time. So Rawhide could continue forward in its full rawness while the upcoming release stabilized on a separate track. It was meant to be the best of both worlds: the development distribution could continue to advance at full speed without interfering with (or getting interference from) the upcoming release. It may be exactly that, but this decision has changed the nature of the Rawhide distribution in fundamental ways.

In May, 2011, Matthew Miller asked the fedora-devel list: "is Rawhide supposed to be useful?" He had been struggling with a problem that had bitten your editor as well: the X server would crash on startup, leaving the system without a graphical display. The fact that Rawhide broke in such a fundamental way was not particularly surprising; Rawhide is supposed to break in horrifying ways occasionally. The real problem is that Rawhide stayed broken for a number of weeks; the responsible developer, it seems, had simply forgotten about the problem. Said developer had clearly not been running Rawhide on his systems; this was the sort of problem that tended to make itself hard to forget for people actually trying to use the software.

So your editor asked: could it be that almost nobody is actually running Rawhide anymore? The fact that it could be unusably broken for weeks without an uproar suggested that the actual user community was quite small. One answer that came back read: "In the week before F15 change freeze, are you really surprised that nobody's running the F16 dumping ground?" At various times your editor has, in response to Rawhide bug reports, been told that running Rawhide is a bad idea (example, another example, yet another example). There seems to be a clear message that, not only are few people running Rawhide, but nobody is really even supposed to be running it.

The new scheme shows its effects in other ways as well. Bug fixes can be slow to make it into Rawhide, even after the bug has been fixed in the current release branch. Occasionally, the "stable" branch has significantly newer software than Rawhide does; Rawhide can become a sort of stale backwater at times. It is not surprising that Fedora developers are strongly focused on doing a proper job with the stable release; that bodes well for the project as a whole. But this focus has come at the expense of the Rawhide branch, which is now seen, by some developers at least, as a "dumping ground."

Recently, your editor applied an update that brought about the familiar "GNOME just forgot all your settings" pathology, combined with the apparent loss of the ability to fix those settings. It was necessary to return to xmodmap commands to put the control key where $DEITY (in the form of the DEC VT100 designers) meant it to be, for example. Some time had passed before this problem was discovered, so the obvious first step was to update again, get current, and see if the problem had gone away. Alas, that was just when Rawhide exploded in a fairly spectacular fashion, with an update leaving the system corrupted and unable to boot. Not exactly the fix that had been hoped for. Fortunately, many years of experience have taught the value of exceptionally good backups, but the episode as a whole was not fun.

But what was really not fun was the ensuing discussion. Chuck Forsberg made the reasonable-sounding suggestion that perhaps developers could be bothered to see if their packages actually work before putting them into Rawhide. Adam Williamson responded:

That's not how Rawhide works. The images in the Rawhide tree are automatically generated. There's no testing or release process. They just get built periodically. If they work, great. If they don't, no-one guaranteed that they would.

This, in your editor's eyes, is not the description of a distribution that is actually meant to be used by real people.

The interesting thing is that Fedora developers seem to be mostly happy with how Rawhide is working. It gives them a place to stage longer-term changes and see how they play with the rest of the system. Problems can often be found early in the process so that the next Fedora development cycle can start in at least a semi-stable condition. By looking at Rawhide occasionally, developers can get a sense for what their colleagues are up to and what they may have to cope with in the future.

In other words, Rawhide seems to have evolved into a sort of distribution-level equivalent to the kernel's linux-next tree. Developers put future stuff into it freely, stand back, and watch how the monster they have just created behaves for a little while. But it is a rare developer indeed who actually does real work with linux-next kernels or tries to develop against them. Producing kernels that people actually use is not the purpose of linux-next, and, it seems, producing a usable distribution is not Rawhide's purpose.

This article was meant to be a fierce rant on how the Fedora developers should never have had the temerity to produce a development distribution that fails to meet your editor's specific needs. But everybody who read it felt the need to point out that, actually, the Fedora project is not beholden to those needs. If the current form of Rawhide better suits the project's needs and leads to better releases, then changing Rawhide was the right thing for the project to do.

Your editor recognizes that, and would like to express his gratitude for years of fun Rawhide roller coaster rides. But it also seems like time to move on to something else that better suits current needs. What the next distribution will be has yet to be decided, though. One could just follow the Fedora release branches and get something similar to old-style Rawhide with less post-release mess, but perhaps it's time for a return to something Debian-like or to go a little further afield. However things turn out, it should be fun finding a new distribution to get grumpy about.



Selective upgrading of packages

Posted Jul 16, 2012 16:46 UTC (Mon) by epa (subscriber, #39769) [Link] (13 responses)

Fedora's six-month cycle keeps it fairly current but sometimes you want the very latest version of something or other. If it's already in Rawhide you can download the source package and rebuild it - after downloading, rebuilding, and installing all the packages it depends on, recursively. This kind of 'dependency hell' for binary packages went away when tools like apt and yum became common. But rebuilding from source is still a largely manual operation in the rpm world. If tools had better support for selectively upgrading certain packages (either as binaries, or rebuilding from source) then you could largely get the best of both worlds by running ordinary Fedora releases and occasionally sucking down an update from Rawhide. Things would still break, of course, but perhaps not as much as running Rawhide for everything.
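
As a rough sketch of that manual rebuild on a stable release (yumdownloader and yum-builddep come from yum-utils; the package name is illustrative):

    # grab the source package, with a Rawhide source repository enabled
    yumdownloader --source somepackage

    # install its declared build dependencies
    sudo yum-builddep somepackage-*.src.rpm

    # rebuild the binary packages locally
    rpmbuild --rebuild somepackage-*.src.rpm

Any build dependency that is itself too old has to be handled the same way, recursively; nothing automates that loop.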

I believe apt and dpkg offer something like this where you can run Debian stable and pick certain packages from unstable?

Selective upgrading of packages

Posted Jul 16, 2012 17:01 UTC (Mon) by drag (guest, #31333) [Link] (8 responses)

Apt offers pinning support. That will allow you to mix and match packages based on weights you assign them. It's a useful feature. You can even 'weight' a group of packages in such a manner that it allows you to 'roll back' packages to earlier releases and such things.
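
A minimal sketch of such a pin, assuming both suites appear in sources.list (the suite names and priorities are illustrative, not a recommendation):

    Explanation: track stable by default
    Package: *
    Pin: release a=stable
    Pin-Priority: 700

    Explanation: keep unstable installable on explicit request only (< 500)
    Package: *
    Pin: release a=unstable
    Pin-Priority: 200

With that in /etc/apt/preferences, apt-get -t unstable install somepackage pulls just that one package (plus whatever dependencies it insists on) from unstable.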

But in terms of 'stable', using apt-pinning to pull in packages from Unstable or testing is not useful. The problem you run into is that when Debian upgrades some lower-level package, they recompile everything and then have the new packages depend on the new lower-level package. This is probably not necessary as long as the developers of the low-level packages are not huge dicks about breaking ABIs, but it does avoid the need for Debian to care when libraries don't bother to stay compatible with themselves.

The effect of this is that when you pull in packages from Unstable you will be forced to upgrade huge swaths of your OS.

If you want to mix and match packages, the best approach I have found is to use backports.debian.org and/or backport the packages yourself using deb-src lines. Despite what the Gentoo folks may say, Debian makes handling/building/installing source-based packages fairly easy. Using those approaches I have always been successful.

Although I expect that forcing dpkg to install while ignoring dependency tracking will probably also work fairly well in many cases.
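
For the backports route mentioned above, a squeeze-era sketch (the archive layout has moved around since, so treat the URL as illustrative):

    # /etc/apt/sources.list addition
    deb http://backports.debian.org/debian-backports squeeze-backports main

    # backports are pinned low by default, so each package is opt-in:
    apt-get -t squeeze-backports install somepackage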

Selective upgrading of packages

Posted Jul 16, 2012 17:26 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

That's why Debian also has Backports repository with new packages built against the previous distribution. It's a really nice thing if you need just a few packages on top of the stable distribution.

And actually using the Unstable distribution in Debian is much less scary than running Rawhide.

Selective upgrading of packages

Posted Jul 16, 2012 18:11 UTC (Mon) by drag (guest, #31333) [Link] (2 responses)

The downside is that if you are interested in the 'latest and greatest' then Debian Unstable can actually be behind the regular Fedora releases. Not always and not with everything, but it's quite common.

Selective upgrading of packages

Posted Jul 16, 2012 21:20 UTC (Mon) by awesomeman (guest, #85116) [Link] (1 responses)

This happens around freezes of testing, because updates to testing are preferably staged through unstable; that means core packages can't be upgraded in unstable during that period, or it would greatly complicate releasing.

Selective upgrading of packages

Posted Jul 16, 2012 22:54 UTC (Mon) by drag (guest, #31333) [Link]

Well, when you have a Fedora release it will generally be out ahead of Debian Unstable for a while. After a couple of months Unstable catches up, but then eventually Fedora has another release.

I use Debian Unstable and Fedora on my desktops. To me they seem roughly equivalent in terms of 'rawness' and goals even though they take different approaches.

To find an equivalent for Rawhide you'd have to look at mixing Debian Unstable with Experimental.

Selective upgrading of packages

Posted Jul 16, 2012 20:55 UTC (Mon) by epa (subscriber, #39769) [Link]

Thanks for the info. If it is common for mass rebuilds to happen for library updates then there is a need for pinning packages at the source-package level. That would reduce the churn from updating one package to a new version.

Selective upgrading of packages

Posted Jul 16, 2012 20:58 UTC (Mon) by foom (subscriber, #14868) [Link] (1 responses)

I find selectively pulling packages from unstable quite useful, and usable, at least on a server. Generally, it will want to upgrade a few core packages like libc/libstdc++, but I don't find that to be a problem. It doesn't actually cause a need to upgrade vast swaths of the OS, only the few core packages, and I trust that those likely still work. :)

It's possible that the interdependence of desktop packages might be greater and make it infeasible to usefully do this for a non-server package without upgrading almost the entire OS; I haven't really tried that.

Selective upgrading of packages

Posted Jul 16, 2012 23:04 UTC (Mon) by drag (guest, #31333) [Link]

It depends.

If the package doesn't want lots of dependencies then I'll pull it straight from Unstable. If it's something that lots of other packages depend on, I usually won't do that and will compile from source instead.

This is usually how it goes for me when I install Debian stable and find out the software I want to run wants newer versions of something-or-other than Debian provides. In ranking from preferable to not:

1. Check backports.debian.org.
2. See if something can be pulled from testing without pulling in a lot of dependencies.
3. Use apt-get source and related tools to compile packages (a sketch follows below).
4. Upgrade to testing or unstable.
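
A sketch of step 3, assuming deb-src lines for unstable in sources.list and an illustrative package name:

    # install build dependencies (the unstable source may want newer ones)
    sudo apt-get build-dep somepackage

    # fetch the source from unstable and build binary packages in one go
    apt-get source -b somepackage/unstable

    # install what came out
    sudo dpkg -i somepackage_*.deb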

Selective upgrading of packages

Posted Jul 17, 2012 10:57 UTC (Tue) by pkern (subscriber, #32883) [Link]

The problem you run into is that when Debian upgrades some lower-level package, they recompile everything and then have the new packages depend on the new lower-level package. This is probably not necessary as long as the developers of the low-level packages are not huge dicks about breaking ABIs, but it does avoid the need for Debian to care when libraries don't bother to stay compatible with themselves.

That's pretty uncommon, especially now that we have symbol files at least for C libraries. The "recompile everything" normally only happens when the ABI is broken, which causes the binary package name to change. As we're pretty anal about ABIs it's not uncommon that we point upstream to breakage.

It is true, however, that some libraries declare that anything compiled against them needs at least that same version of the library at run time. That's one route one can take without symbol files; it means you do not need to manually check for ABI additions. But we don't do mass rebuilds for those new versions; it's just that packages happen to link against them when they are built and then inherit those dependencies.

Selective upgrading of packages

Posted Jul 16, 2012 17:26 UTC (Mon) by dwmw2 (subscriber, #2063) [Link] (3 responses)

You can use yum to upgrade specific binary packages from Rawhide. For example yum --releasever=rawhide update openconnect should pull in the specific package and any of its dependencies. And if its list of dependencies comprises the whole world including glibc, you get to say 'no' and rebuild from source instead.

Note that this doesn't always work. Some GNOME projects are deliberately shipping with broken dependencies so the Rawhide packages aren't marked as requiring new libraries even though they do.

Selective upgrading of packages

Posted Jul 17, 2012 13:05 UTC (Tue) by Yenya (subscriber, #52846) [Link] (2 responses)

There is another problem in rawhide: it is not even usable for "let's see whether the latest-greatest version of this package fixes my problem". After reproducing my problem with rawhide packages and reporting the bug, I was told "it is already fixed in Koji". And indeed, there was a several-days-old package in Koji which contained the fix.

So as a side note, the whole mirroring of gigabytes of rawhide is pretty useless even for testing, because rawhide, from the tester's point of view, is already outdated.

Selective upgrading of packages

Posted Jul 17, 2012 20:54 UTC (Tue) by nim-nim (subscriber, #34454) [Link]

If you're serious about reporting bugs (not just running the latest and greatest) you should point yum directly at the koji repos. That way you can test packages without the rawhide lag.

The infrastructure cost should be repaid in useful bug reports, though.
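
A sketch of what that could look like; the URL follows Koji's repos/<tag>/latest layout, but the exact tag to use for current Rawhide is an assumption worth verifying:

    # /etc/yum.repos.d/koji-rawhide.repo (illustrative)
    [koji-rawhide]
    name=Koji Rawhide buildroot
    baseurl=http://koji.fedoraproject.org/repos/rawhide/latest/$basearch/
    enabled=0
    # buildroot packages are not signed
    gpgcheck=0

    # then pull individual packages on demand:
    # yum --enablerepo=koji-rawhide update somepackage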

Selective upgrading of packages

Posted Jul 18, 2012 6:21 UTC (Wed) by michich (guest, #17902) [Link]

It's unusual that a several-days-old build was in Koji without reaching Rawhide. I thought Rawhide gets updated with the latest Koji builds every day.

By your reasoning you'd have to conclude that testing of anything is always useless, because the developer could respond with "I already fixed the bug in my local git tree" or "I already thought about this bug and have a fix planned in my head". The fix propagation delay can simply never reach zero.

Left by Rawhide

Posted Jul 16, 2012 16:47 UTC (Mon) by killefiz (subscriber, #8542) [Link]

@corbet - the comment you're referring to isn't about RPM packages that aren't expected to be tested before they hit rawhide but about the netinst images that are built nightly.

I'd say that nonworking install/netinstall images are something to expect in rawhide. Nonworking packages (except for the occasional dependency issues) should be (and are mostly) avoided by packagers.

--
sven (Fedora contributor)

Left by Rawhide

Posted Jul 16, 2012 16:51 UTC (Mon) by marduk (subscriber, #3831) [Link] (18 responses)

You should take a good look at Gentoo. Gentoo, IMO, offers the best of both worlds (well, three worlds): it has "stable", "testing", and "overlay" (or third-party) repositories for software.

For example, if you are a GNOME user in the stable tree you are running GNOME 2.32. If you are in the testing tree you are using GNOME 3.4 (well, most of it). And if you are on the gnome overlay you are running some parts of 3.5. Best of all, you can mix and match these (to some degree). Let's say you want to run GNOME 3 but want to stick with a "stable" kernel; you can do that. Gentoo also keeps multiple versions of the same package at a time, so it essentially allows you to "downgrade" certain packages. For example, a few days ago I updated to gtk-3.5 and discovered it broke a lot of things that my mostly GNOME 3.4 system utilized. I simply "masked" the 3.5* versions of gtk and then did an update, and it downgraded me to the latest gtk-3.4*. When I feel safe/brave, I can later unmask gtk-3.5 and try it again.
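
The mask described there is a one-line affair; a sketch with an illustrative atom:

    # /etc/portage/package.mask -- hold gtk+ below the 3.5 series
    >=x11-libs/gtk+-3.5.0

    # a world update then offers the downgrade to the newest unmasked 3.4.x:
    # emerge -uDv world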

It also allows you to experiment with stuff and go back fairly easily. For example, I wanted to play with systemd, so I changed some USE flags and converted my system to systemd, rebooted, and there I was running systemd. I lived with systemd for a few days until I decided there were still some things that needed work (e.g. sometimes when I boot I cannot log in... it seems like some services are not being started, or are starting in the wrong order). So I'm giving up on systemd for now and went back to Gentoo's openrc (I had created a new USE file for systemd, so I just had to move that file out of the way), rebooted again, and I'm back on openrc. I'll try systemd in another month or so once the devs have it better integrated.
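
A sketch of the USE twiddling involved, done globally in make.conf for simplicity (the commenter used a separate USE file he could move aside):

    # /etc/portage/make.conf -- USE here stacks on the profile's defaults
    USE="systemd"

    # rebuild whatever the flag change touches:
    # emerge -uDv --newuse world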

You can also stick with a mostly stable system but run a few things from the unstable tree (e.g. if you always want the latest Postgres or the latest Firefox). You can even run "live" ebuilds (packages that pull straight from upstream's VCS and build on the fly) if you are brave.
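
Cherry-picking those few unstable packages is a matter of keywording them; a sketch with illustrative atoms (older portage versions call this file package.keywords):

    # /etc/portage/package.accept_keywords
    # accept the unstable (~arch) versions of just these packages
    dev-db/postgresql-server ~amd64
    www-client/firefox ~amd64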

Gentoo makes this all easy (well, easy if you know Gentoo). And you start to feel more like *you* are in control of what's on your system, instead of the distro's developers or some build bot deciding what you're going to be running.

Left by Rawhide

Posted Jul 16, 2012 18:06 UTC (Mon) by nikarul (subscriber, #4462) [Link] (5 responses)

I'll second this recommendation. I've been running Gentoo much like our editor has been running Rawhide these past few years. Every once in awhile things break, sometimes spectacularly, and it definitely keeps your system diagnosis skills sharp. But for the most part, it works very well and gives you more control over what's going onto your system.

I will add a couple of caveats. Obviously this requires a system with good CPU and memory resources for the amount of package building you do. And I do maintain a rather large RAM drive for building all but the largest packages (I'm looking at you, LibreOffice). I've had more than one hard drive fall in the past to the onslaught that is 'emerge -uDv world'.

Left by Rawhide

Posted Jul 16, 2012 21:50 UTC (Mon) by jackb (guest, #41909) [Link] (3 responses)

Every once in awhile things break, sometimes spectacularly, and it definitely keeps your system diagnosis skills sharp. But for the most part, it works very well and gives you more control over what's going onto your system.

I haven't used a traditional Linux distribution in over 10 years; I've been using Linux from Scratch followed by Gentoo. It's difficult for me to remember what it's like not to have that control.

Do regular distributions break less frequently than Gentoo or do they just break in different ways?

Left by Rawhide

Posted Jul 17, 2012 1:45 UTC (Tue) by kenmoffat (subscriber, #4807) [Link]

Hmm, as a Linux From Scratch (and Beyond-*) user and editor, I've often broken my builds when updating. Occasionally, I even have to use multiple-version workarounds: currently, an older ffmpeg for transcode (which we've now dropped) and for gst-ffmpeg. Yes, I know the gst-ffmpeg devs dislike using the system ffmpeg, but their version of ffmpeg was *so* old last time I looked.

I've also seen problems in specific packages (e.g. abiword with some past versions of libxml2). So, I expect that from time to time there will be *some* breakage on my desktop. But, that's "my system, my rules, my breakage."

I don't expect my changes in the books to break functionality - if they do, I try to fix the problem. I had assumed that all distros took a similar "we don't deliberately break it, but if it's broken we will try to fix it" attitude. Sounds as if I'm too much of an optimist.

ĸen

Left by Rawhide

Posted Jul 17, 2012 13:25 UTC (Tue) by rsidd (subscriber, #2582) [Link]

I run Gentoo on my android tablet (arm) in a chroot. I even compiled libreoffice. Together with a keyboard-case it makes a fine laptop.

Left by Rawhide

Posted Jul 17, 2012 15:11 UTC (Tue) by drag (guest, #31333) [Link]

> It's difficult for me to remember what it's like not to have that control

It feels exactly the same.

Left by Rawhide

Posted Jul 17, 2012 10:29 UTC (Tue) by Trelane (subscriber, #56877) [Link]

> Obviously this requires a system which good CPU and memory resources for the amount of package building you do.

I run Gentoo on my Atom boxes (netbook, nettop). It works fine. (Anecdotally, better [faster/cooler] than prebuilt distros, perhaps due to -Os -march=atom. Could be confirmation bias, though.) Just don't expect huge things (kernel, Mozilla, LibreOffice) to compile instantly.
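
For the curious, that tuning might look like this in make.conf (illustrative; -march=atom needs GCC 4.5 or newer):

    # /etc/portage/make.conf
    CFLAGS="-Os -march=atom -pipe"
    CXXFLAGS="${CFLAGS}"
    # keep build parallelism modest on a small Atom box
    MAKEOPTS="-j2"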

Left by Rawhide

Posted Jul 17, 2012 6:50 UTC (Tue) by djc (subscriber, #56880) [Link]

+1. It sounds like Gentoo would fit the bill for what you need quite well. The unstable Gentoo tree gets very new stuff, but there are a lot of people who actually run the unstable tree (including many of the devs), so I doubt most things would remain broken that long. As others noted, it's also relatively easy to run a stable tree (which should still be relatively modern), with granular settings to decide what you want to pick up from the unstable tree.

Left by Rawhide

Posted Jul 17, 2012 7:56 UTC (Tue) by gnu_andrew (guest, #49515) [Link] (9 responses)

This is exactly why I use Gentoo. It baffles me how people can handle having a regular slew of new binary blobs to update to, with no idea of the changes contained within. I guess I just don't trust developers that much with my system, and experience has generally proved me correct.

For me, there are two models I can work with; the Gentoo model, which is close to how I believe FOSS should work, giving you full control over which updates to bring in and allowing you to create your own unique system, and the RHEL/Debian Stable model, which is like a FOSS Windows/MacOS model where everything stays visibly the same (bar security updates and major fixes) and then you choose some apocalyptic moment to do that big upgrade to the new shiny major version. Fedora, Ubuntu, etc have something in between which just doesn't work for me.

Left by Rawhide

Posted Jul 17, 2012 11:56 UTC (Tue) by jwakely (subscriber, #60262) [Link]

> It baffles me how people can handle having a regular slew of new binary blobs to update to, with no idea of the changes contained within.

Would you prefer if the people writing kernel drivers for you, or writing the compiler you use, or breaking your desktop environment for you, stopped doing that because they spent their time reviewing all the code in the updates to their own systems? (OK, maybe the desktop guys ;-)

As someone who produces FOSS as well as consuming it I spend far too much of my own time working on a single project, I have no desire to review all the other projects I rely on. I'd rather just take the updates and live with the occasional fits of rage when something seemingly stupid gets done to my system!

Left by Rawhide

Posted Jul 17, 2012 15:57 UTC (Tue) by drag (guest, #31333) [Link] (5 responses)

> It baffles me how people can handle having a regular slew of new binary blobs to update to, with no idea of the changes contained within.

Generally speaking people keep change logs.

> I guess I just don't trust developers that much with my system, and experience has generally proved me correct.

So the developers that write your software are not good enough to compile it?

What a bizarre concept.

And the developers that write the scripts you use to compile everything, and blindly download and execute via portage, are going to do a much better job?

> For me, there are two models I can work with; the Gentoo model, which is close to how I believe FOSS should work, giving you full control over which updates to bring in and allowing you to create your own unique system,

I don't want full control. I want to hand control over to people that are competent and know what they are doing better than myself. Instead of micromanaging everything I want to only be forced to manage what really matters to me and the task at hand.

Left by Rawhide

Posted Jul 18, 2012 20:24 UTC (Wed) by nix (subscriber, #2304) [Link] (4 responses)

I don't want full control. I want to hand control over to people that are competent and know what they are doing better than myself.
And that's a fundamental difference that you'll never resolve by arguing about it. I'm a control freak -- I got involved with computers in the first place (aged six) entirely because the things were perfect servants that would do precisely what you asked (as long as you could describe it well enough) and could in principle be completely understood. As a consequence, I consider binary-package distros to be a violation of the fundamental reason I use computers in the first place -- control.

But I'm also quite aware that most people just consider the things tools, not totally controllable havens, and I can understand why if you think of your computer as a tool you might be willing to prioritize things that make it a better tool over control issues you don't care about.

Left by Rawhide

Posted Jul 20, 2012 5:28 UTC (Fri) by dirtyepic (guest, #30178) [Link] (3 responses)

Well put. When I first started using Linux I tried several binary distros but I kept finding myself attempting to compile mplayer from source and installing gcc prereleases. After a few Linux From Scratch installs I realized I would never be able to maintain an entire distro on my own and still have time for extracurricular activities like sleeping. So I switched to Gentoo, where I can focus my attention on what interests me and leave the rest to be managed by someone else who knows a lot more about those areas.

Gentoo is a distro for the obsessive compulsive. Most people don't get it, and they never will, and that's fine.

Left by Rawhide

Posted Jul 20, 2012 12:42 UTC (Fri) by nix (subscriber, #2304) [Link]

When I first started using Linux I tried several binary distros but I kept finding myself attempting to compile mplayer from source and installing gcc prereleases. After a few Linux From Scratch installs I realized I would never be able to maintain an entire distro on my own and still have time for extracurricular activities like sleeping.
And they say free software is not a drug. :)

Left by Rawhide

Posted Jul 20, 2012 22:49 UTC (Fri) by man_ls (guest, #15091) [Link] (1 responses)

Meh. With Debian you also have control over the parts that you find interesting; you can compile from source and generate your own packages, but the default is to leave things alone. To our beloved editor: do not listen to these guys; for regular obsessive people Gentoo gets old pretty quickly. You sound like you want Debian testing: all of the fun but little unpredictability!

Left by Rawhide

Posted Jul 21, 2012 3:16 UTC (Sat) by dirtyepic (guest, #30178) [Link]

It was between Debian and Gentoo and maybe Slackware, but Gentoo's install was closer to LFS' so I tried it first. Now they made me a developer so I suppose I should stick around. For what it's worth, I agree that our editor would probably be happier with Debian or another of the suggestions.

Some people buy a boat to relax. Some people build one in their basement. In the end it's just personal preference.

Left by Rawhide

Posted Jul 17, 2012 17:19 UTC (Tue) by raven667 (subscriber, #5198) [Link] (1 responses)

> It baffles me how people can handle having a regular slew of new binary blobs to update to, with no idea of the changes contained within.

If you are just blindly compiling the source you download without a detailed audit then there is no functional or security difference. It sounds like you are ascribing magical properties to the source code, as if compiling makes your system more "pure".

Left by Rawhide

Posted Jul 18, 2012 3:24 UTC (Wed) by AngryChris (guest, #74783) [Link]

It sounds like he's going to end up here: http://funroll-loops.info/

Left by Rawhide

Posted Jul 20, 2012 14:20 UTC (Fri) by andika (guest, #47219) [Link]

Interesting alternative. It seems I have to try Gentoo for testing GNOME development builds. My build attempts on Fedora Rawhide, Debian Sid, and openSUSE 12 got stuck; let's see whether Gentoo avoids that.

Left by Rawhide

Posted Jul 16, 2012 17:00 UTC (Mon) by dowdle (subscriber, #659) [Link] (3 responses)

Jon, I can certainly understand your frustration and I'm glad you held back and turned away from rantiness... and I do understand why you have decided to move on from Rawhide. I've never been brave enough to run it myself.

My question/comment though is... that it is fairly well known that a lot of original development goes on in Fedora... so it would be interesting for you to perhaps sample a few of the other major distributions to see how true that Fedora fact holds up. I'm guessing that it will indeed hold up and that most other distributions will be somewhat boring by comparison... but then again, what do I know?!?

I do occasionally download the Fedora nightly iso builds... but they seem to have hiatuses... and not being that familiar with the nitty-gritty of their release cycle, I'm guessing that is normal. Following their development releases is a good way to keep up on the development since those are made from Rawhide (I think)... and at least you know they build... and are installable... unless they aren't. That's kind of the way I've been doing some Rawhide tasting. If desired, you can still periodically sample the development builds without being committed to anything... and using a VM means you don't have to worry about breaking something you might care about.

In any event, getting a better handle on the development going on elsewhere is certainly going to be a good thing... but I do enjoy your glimpses into Fedora so I also wish some retention of that was possible. Phoronix also seems to cover the internals of Fedora development some, although mainly via mailing lists... and hey, it is easy enough for those interested to subscribe themselves, right?

Thanks for the article and all past articles regarding Rawhide. They will be missed.

Left by Rawhide

Posted Jul 17, 2012 7:56 UTC (Tue) by ovitters (guest, #27950) [Link] (2 responses)

It would be nice if LWN evaluated Mageia for a while. Mageia's "Rawhide" is called Cauldron. You're expected to follow mageia-dev, as planned changes are announced there (plus after-the-fact announcements in case breakage was unplanned).

I don't have any major issues running Cauldron, aside from the periodic big changes. For big changes we have an updates_testing channel, to test e.g. a new xorg or a new systemd. It is rarely used, though.

Packagers not running Cauldron are the exception (though take into account the size of the distribution; Fedora is much bigger).

Note that at the moment I find Cauldron really unstable compared to usual. This is due to two big problems: Postfix doesn't start, and as we're switching to systemd only, GDM doesn't seem to notice that you logged out.

Left by Rawhide

Posted Jul 18, 2012 16:40 UTC (Wed) by halfline (guest, #31920) [Link] (1 responses)

Left by Rawhide

Posted Jul 27, 2012 1:29 UTC (Fri) by ovitters (guest, #27950) [Link]

We have GDM 3.5.4.2 and this patch is included. I'll ask someone to file another bug + check if they really are running the latest systemd.

Left by Rawhide

Posted Jul 16, 2012 17:09 UTC (Mon) by drag (guest, #31333) [Link]

Distro hopping is miserable. Most distributions suck and the internet is full of all sorts of terrible and misleading advice on the subject. I pity you.

I suggest sticking with the Fedora six-month releases and pulling in packages from Koji if you are interested in looking at something newer than what is offered.

Left by Rawhide

Posted Jul 16, 2012 17:10 UTC (Mon) by louie (guest, #3285) [Link] (10 responses)

If the current form of Rawhide better suits the project's needs and leads to better releases, then changing Rawhide was the right thing for the project to do.

It only suits the project's needs well because the project, collectively, is a total failure on the QA front; so what's one more source of failure and wasted opportunity? This is not a new problem. My first publicly recorded mini-rant on the subject is from 2007, and it was already a long-standing problem at that time. As I said then:

Let me be clear- I feel that quality is one of the biggest possible advantages free software can have over proprietary software, specifically because you can have hundreds or thousands of people testing the latest code every day. If you're not taking advantage of that, you're throwing away one of the biggest advantages we have over proprietary software.

(emphasis added)

With a useless Rawhide, Fedora continues to throw away that potential QA advantage. But that's nothing new.

Left by Rawhide

Posted Jul 16, 2012 17:23 UTC (Mon) by dowdle (subscriber, #659) [Link] (3 responses)

louie, So given your dismal opinion of Fedora / Rawhide, what have you moved on to? What distro is the most stable? ...staying somewhat bleeding edge for desktop stuff? ...doing a lot of original development work on many small and some major packages? I'm guessing the answer is different for each question but maybe not.

Fedora definitely isn't for everyone but I prefer it and refer to it as the "innovator distro". We all have our preferences and that is one reason I'm glad so many distros exist.

People sometimes freak out about the number of Linux distros but I ask them why aren't there multiple flavors of Windows and Mac available... besides the few releases provided by their makers? The answer is obvious, because they don't allow third parties to remix and release like Linux does... but you have to wonder just how many Windows there would be and how many Mac OS X's there would be if people were allowed to remix it. My guess is that they would have the same "problem" we do if only they were allowing it.

Left by Rawhide

Posted Jul 16, 2012 21:17 UTC (Mon) by louie (guest, #3285) [Link] (1 responses)

I still use Fedora (not Rawhide, which I would prefer to use). Fedora itself is a perfectly competent distro, with both practical and moral aims that work for me. I just think that treating Rawhide as a dumping ground squanders an opportunity to much better achieve those practical/pragmatic aims.

Left by Rawhide

Posted Jul 18, 2012 3:32 UTC (Wed) by AngryChris (guest, #74783) [Link]

This is where I'm at right now. I swap between Debian and Fedora every couple of years. What amazes me is how good Fedora is when it appears (to me) to be so grossly mismanaged. I don't see how Rawhide can be remotely useful for "seeing how things work together" if no one is running it (i.e., making those things work together). As recently as last year, the policy board mailing list was overrun with talk of mission statements. It's like watching Initech (from Office Space) actually publishing great software.

I have a feeling that most good Free software is good in spite of how the project is run rather than because of it (for any given Free software project).

Left by Rawhide

Posted Jul 17, 2012 9:12 UTC (Tue) by viiru (subscriber, #53129) [Link]

> People sometimes freak out about the number of Linux distros but I ask
> them why aren't there multiple flavors of Windows and Mac available...
> besides the few releases provided by their makers? The answer is obvious,
> because they don't allow third parties to remix and release like Linux
> does... but you have to wonder just how many Windows there would be and
> how many Mac OS X's there would be if people were allowed to remix it. My
> guess is that they would have the same "problem" we do if only they were
> allowing it.

There are already nearly endless numbers of custom Windows installer CD images, with different sets of updates and additional drivers added. Distributing those is of course illegal, but since there is a need (there is plenty of hardware where the standard images won't even boot, many manufacturers no longer provide install disks, and even if they did most users lose those within the first week of ownership) there is a service.

Left by Rawhide

Posted Jul 16, 2012 21:22 UTC (Mon) by johannbg (guest, #65743) [Link] (5 responses)

"It only suits the project's needs well because the project, collectively, is a total failure on the QA front"

Care to elaborate on that comment?

Btw without proper test cases and without proper debugging procedures hundreds or thousands of people testing the latest code every day is meaningless...

Left by Rawhide

Posted Jul 16, 2012 21:29 UTC (Mon) by louie (guest, #3285) [Link] (3 responses)

Btw without proper test cases and without proper debugging procedures hundreds or thousands of people testing the latest code every day is meaningless...

People testing the code gives you data on the parts of the software that people actually use. That's certainly not perfect, and shouldn't be the end-all/be-all of testing, but having people use and report back on the parts that get used is absolutely the best software testing methodology there is, period.

Left by Rawhide

Posted Jul 16, 2012 21:34 UTC (Mon) by johannbg (guest, #65743) [Link] (2 responses)

And this is relevant to my comment how?

Left by Rawhide

Posted Jul 16, 2012 21:53 UTC (Mon) by louie (guest, #3285) [Link] (1 responses)

hundreds or thousands of people testing the latest code every day is meaningless...

Hundreds or thousands of people testing the code is never, ever meaningless. If you think it is, your QA processes are broken.

Context?

Posted Jul 23, 2012 11:06 UTC (Mon) by gmatht (guest, #58961) [Link]

I am not sure what the precise context is here. But if the context is having hundreds of thousands of people test software that is known not to work at all, then you aren't going to get much useful feedback. In many projects there is a "latest software that passed basic automated tests" tree, which may be subtly broken but will at least partially work. I don't see any point in having hundreds of thousands of people test anything newer than that; having a thousand "nothing works" bug reports show up at the same time doesn't really help anyone.

Left by Rawhide

Posted Jul 17, 2012 0:02 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

Users and test cases are different and provide different kinds of information. Test cases are great because they provide reliable, high quality information, but they're limited; nobody has enough resources to test every configuration their users are going to try out. User reports are messy, but they're documentation of real problems that your QA has to fix. The continued existence of user error reports is proof that the kind of disciplined testing represented by proper test cases and debugging procedures is inadequate. If it were enough, no problems would ever get through for users to find them.

This isn't new

Posted Jul 16, 2012 17:17 UTC (Mon) by dwmw2 (subscriber, #2063) [Link] (27 responses)

I'm not sure I'd agree with the assertion that the 'No Frozen Rawhide' policy has "changed the nature of the Rawhide distribution in fundamental ways".

What goes into Rawhide does go into the next Fedora release. It's not like linux-next, where stuff is only supposed to be present if it's going to be sent to Linus in the next merge window; with Rawhide, that actually is true. What's pushed to Rawhide now will be in Fedora 18, unless it's subsequently updated again.

The only thing that changed is that in the past, Rawhide was frozen while a Fedora release was being stabilised. During that period, there was nowhere that people could commit the shiny new exciting stuff. This led to a huge amount of changes hitting Rawhide when the floodgates were opened just after a release.

One of the things that the 'No Frozen Rawhide' policy was supposed to do was remove that damming effect, and improve stability in the "immediate post-release period" that you mention.

Also, users have always been told that "if it breaks in Rawhide you get to keep both pieces"; that isn't new either. And it's always been variable — some developers really don't care about their packages in Rawhide until a new release is imminent, while others do try to keep it working.

I believe that Fedora policy does forbid having a package in the stable distro which is newer than the one in Rawhide. It's the same policy that forbids Fedora N from having a newer version of a package than Fedora N+1. Upgrades should, well, upgrade.

I don't think there's anything really new here. You take your life in your hands when you run Rawhide, and you've been fairly lucky if it's been consistently usable for you for a number of years. I note your 'exploded' link actually refers to a mail describing how to fix things after the problematic update, thus implying that a full restore from backup shouldn't have been necessary?

This isn't new

Posted Jul 16, 2012 17:32 UTC (Mon) by corbet (editor, #1) [Link] (25 responses)

The thing that I think has changed is that developers are not actually running Rawhide and are less concerned with whether it works or not. It has changed in my experience. It's not that things break; one expects that. It's what happens afterward that is different.

And no, the instructions to fix the system did not work for me. My life does not currently allow time to figure out what else was hosed; I really had to just give up on it.

This isn't new

Posted Jul 16, 2012 18:49 UTC (Mon) by jspaleta (subscriber, #50639) [Link] (3 responses)

Have you looked at using the following scenario?

run N release
run N+1 pre-release branch when it gets branched from rawhide
stay on N+1 pre-release branch through official N+1 release
jump to N+2 pre-release branch when it gets branched from rawhide

Would this be closer to matching your expectations?

-jef

This isn't new

Posted Jul 16, 2012 19:00 UTC (Mon) by corbet (editor, #1) [Link] (1 responses)

That possibility is mentioned in the conclusion. Certainly it would be the easiest change to make from running full frontal Rawhide.

This isn't new

Posted Jul 16, 2012 19:06 UTC (Mon) by jspaleta (subscriber, #50639) [Link]

bah sorry about that, reading is hard.

-jef"ELACKOFCOFFEE"spaleta

This isn't new

Posted Jul 17, 2012 6:40 UTC (Tue) by ghane (guest, #1805) [Link]

I am doing this with Ubuntu. Soon after 11.04 was released, I shifted to 11.10 pre-alphas, updating daily; and so on.

I am now on quantal 12.10. Things break every couple of weeks, but logging into console and running an update again has fixed those.

--
Sanjeev

This isn't new

Posted Jul 16, 2012 20:14 UTC (Mon) by dwmw2 (subscriber, #2063) [Link] (13 responses)

You may be right that what happens after a breakage is different — but that was always extremely variable from one developer to another anyway. So you'd need a reasonable sample size, over time, to really be sure of that.

I'm also not sure that such a change, if it's real, could necessarily be blamed on the no-frozen-rawhide thing. There have been other changes in Fedora packaging over time. There's much more of a focus on "packagers" these days, and much less on "package maintainers" who actually get their hands dirty and work on the code. This naturally translates into a much less happy experience for those who file bugs.

This isn't new

Posted Jul 16, 2012 21:42 UTC (Mon) by johannbg (guest, #65743) [Link] (3 responses)

That's my assessment as well, in that the quality of the components in the distribution has been thrown overboard for the quantity of components in the distribution...

This isn't new

Posted Jul 17, 2012 9:24 UTC (Tue) by drag (guest, #31333) [Link] (2 responses)

Which is yet another reason why distributions need to keep their scope limited, maintain standardized APIs across distributions, and have the upstream developers themselves package the software.

There is no way that you can expect an OS to bundle everything that everybody could possibly want to run on it. Extensibility is important, and users lack the knowledge, time, and drive to port and compile the software they want to use themselves.

This isn't new

Posted Jul 17, 2012 19:40 UTC (Tue) by epa (subscriber, #39769) [Link]

But equally, you can't expect an upstream developer to package the program for every possible distribution and architecture. Even expecting the upstream developer to support two is a losing proposition, since most people will have one distro that they use regularly.

Is there some way the distros and the upstreams can meet halfway? Source code could be distributed in a standard source package format which, as well as the usual configure script, contains metadata about what files are built, what libraries are required, and so on. This in turn could generate source packages for rpm, dpkg, Gentoo, and so on using translator tools maintained by each distro. This would not be appropriate for core packages, which usually involve a lot of distro-specific hacking, but for the long tail of applications and libraries it could bridge the gap between "distro must package all possible programs" and "upstream must package for all possible distros".
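
Purely as an illustration of the idea (this format is hypothetical, not any existing tool's), the distro-neutral metadata might look like:

    # frobnicate.meta -- hypothetical distro-neutral source metadata
    name: frobnicate
    version: 1.4
    build: ./configure --prefix=/usr && make
    requires: libpng >= 1.5, zlib
    installs:
      /usr/bin/frobnicate
      /usr/share/man/man1/frobnicate.1

A per-distro translator would then emit a spec file or a debian/ directory from that description.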

This isn't new

Posted Jul 20, 2012 10:57 UTC (Fri) by misc (subscriber, #73730) [Link]

You've never tried to say "no" to a packager. Most of them go through a frenzied phase of "let's add as much stuff as we can to the distro, because users ask for it and threaten to go elsewhere". Then that turns into "we need more packagers to take care of all this", then "we need more QA and people doing the boring bits", except the boring stuff doesn't get done, because it's boring.

A solution would be to take resources into account as a whole, i.e., you cannot upload a package unless there are enough people to take care of QA and updates.

But such views are far from popular, IMHO (and most users do not care, just as most do not care about sustainability).

This isn't new

Posted Jul 17, 2012 17:14 UTC (Tue) by nim-nim (subscriber, #34454) [Link] (8 responses)

This time (and all the previous times Rawhide spectacularly failed) the breakage came from Red Hat-written code pushed into Rawhide by the people who wrote it.

It's not pure packagers who are the problem. They are actually very careful about what they push, precisely because they're not sure they'll be able to fix it if it breaks. The problem is people who write code and package it themselves, feel they won't ever make a mistake, and figure that if it breaks anyway they owe nothing to the Rawhide guinea pigs, who are only there to help them meet coding deadlines (a gross generalization; the dracut people are more aware than most of the risks, though they do have to contend with abrupt systemd changes nowadays).

That's the typical problem you get in any project where devs perform operational jobs, if you didn't make sure beforehand that they understand that operations means having real users and fixing the problems you cause as soon as you cause them (unlike broken code, which can languish in a VCS for days without consequences).

This isn't new

Posted Jul 18, 2012 5:08 UTC (Wed) by nim-nim (subscriber, #34454) [Link]

Case in point:

* Tue Jul 17 2012 020-96.git20120717 ← dracut working again
- disabled systemd in the initramfs, until it works correctly

* Wed Jul 11 2012 020-84.git20120711
- add back "--force" to switch-root, otherwise systemd umounts /run

* Wed Jul 11 2012 020-83.git20120711
- more systemd journal fixes
- nfs module fix
- install also /lib/modprobe.d/*
- fixed dracut-shutdown service
- safeguards for dracut-install
- for --include also copy symlinks

* Tue Jul 10 2012 020-72.git20120710
- stop journal rather than restart
- copy over dracut services to /run/systemd/system

* Tue Jul 10 2012 020-70.git20120710
- more systemd unit fixups
- restart systemd-journald in switch-root post
- fixed dracut-install loader ldd error message

* Mon Jul 09 2012 020-64.git20120709
- fixed plymouth install
- fixed resume
- fixed dhcp
- no dracut systemd services installed in the system

* Mon Jul 09 2012 020-57.git20120709
- more fixups for systemd-udevd unit renaming

* Mon Jul 09 2012 020-55.git20120709
- require systemd >= 186
- more fixups for systemd-udevd unit renaming

* Mon Jul 09 2012 020-52.git20120709
- fixed prefix in 01-dist.conf

* Fri Jul 06 2012 020-51.git20120706 ← start of huge breakage
- cope with systemd-udevd unit renaming
- fixed network renaming
- removed dash module

And:

* Tue Jul 03 2012 Lennart Poettering - 186-1
- New upstream release

This isn't new

Posted Jul 18, 2012 19:17 UTC (Wed) by nix (subscriber, #2304) [Link] (6 responses)

Ah, software developer optimism. You gotta love it.

(Of course *I* would never suffer from such a delusion. My code simply has no bugs. I know because I fixed all the bugs I found, and now I can find no more bugs so I know there must be no more bugs. Also, the Earth has a green sky and six retrograde moons.)

This isn't new

Posted Jul 20, 2012 3:27 UTC (Fri) by doogie (guest, #2445) [Link] (3 responses)

The only code that is bug free is that which is not yet written.

This isn't new

Posted Jul 20, 2012 8:47 UTC (Fri) by liw (subscriber, #6379) [Link] (2 responses)

Code not yet written doesn't run, which is a bug. ;-)

This isn't new

Posted Jul 20, 2012 9:24 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (1 responses)

Code not written has no users and thus by definition it is perfect.

That's the uncertainty principle applied to software: bugs not observed do not exist (yet)

This isn't new

Posted Jul 20, 2012 12:41 UTC (Fri) by nix (subscriber, #2304) [Link]

Not so. I currently suspect there is at least one bug in part of my code that has never run, but I am attacking it later, when other bugs are fixed so code flow can reach that bug and prove its existence. This bug is in an indeterminate state. :)

Schneier's law of software?

Posted Jul 22, 2012 4:40 UTC (Sun) by Max.Hyre (subscriber, #1054) [Link] (1 responses)

Sounds as if we have a programming equivalent of Schneier's Law:
Any programmer can write a program so good he can't find any bugs in it.

Schneier's law of software?

Posted Jul 23, 2012 17:45 UTC (Mon) by Tet (guest, #5433) [Link]

It pretty much already exists, and long predates Schneier's:
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

-- Brian Kernighan, "The Elements of Programming Style"

This isn't new

Posted Jul 16, 2012 21:29 UTC (Mon) by johannbg (guest, #65743) [Link] (1 responses)

I disagree with the statement that we have fewer maintainers running Rawhide than we did a while back. I would argue instead that we have roughly the same number (and probably more or less the same maintainers), but the number of packages has increased, which might give the impression that the number of maintainers running Rawhide has decreased.

This isn't new

Posted Jul 17, 2012 13:33 UTC (Tue) by dwmw2 (subscriber, #2063) [Link]

FWIW I almost never run Rawhide, and I almost never did. The no-frozen-rawhide change made no difference to me. The only time in history that I recall any of my machines spending much time running Rawhide was when we were doing the PowerPC port to new machines. And that was mostly installer testing.

This isn't new

Posted Jul 16, 2012 21:49 UTC (Mon) by ceplm (subscriber, #41334) [Link] (3 responses)

I think the point is not that Rawhide is now broken (after all, it is supposed to eat your babies, right?), but that, because almost nobody uses it, it is so often broken for a long, long time; not many developers use it on their production machines. Old-time Rawhide used to be broken often (I was told that "there are days when Rawhide actually boots up" when I asked about its usefulness in approximately the FC6 time frame), but it was also rapidly fixed, because the developer's own machine was broken as well.

Not saying that this is good or bad (yes, I was using that "update at Alpha time" policy as well when I was still using Fedora as my main system; now I have to use RHEL for work-related reasons), just clarifying what I read in the original article.

Matěj

This isn't new

Posted Jul 17, 2012 17:02 UTC (Tue) by nim-nim (subscriber, #34454) [Link] (2 responses)

Over the years, some Fedora developers have successfully lobbied to impress the "Rawhide eats babies" and "so what if it's broken, it's Rawhide" attitudes on Fedora users.

It wasn't always the case. A decade ago there was some pride in keeping Rawhide usable (and using it oneself). It's on the (then rhl) list that I first read the term "dogfoodable" (about Rawhide).

It's sad that, in their eagerness not to be held accountable for what they put in rawhide, some packagers are poisoning the well from which Fedora itself (and its derivatives) comes.

This isn't new

Posted Jul 17, 2012 20:40 UTC (Tue) by dwmw2 (subscriber, #2063) [Link] (1 response)

I agree. I'm not trying to defend this situation — just pointing out that I don't think it's caused by the 'No Frozen Rawhide' thing.

This isn't new

Posted Jul 18, 2012 7:25 UTC (Wed) by ceplm (subscriber, #41334) [Link]

I would go even further ... even if Rawhide quality really went down (I don't use Rawhide myself, so I cannot testify to that), and even if such a decrease in quality was caused by the current Fedora development workflow (and that's a big if), it still doesn't follow that the change in workflow wasn't worth a possible increase in the overall quality of released Fedoras (if that happened). Maybe we just have to adjust our upgrading policy to the new reality, and the total may still be worth it.

And yes, I am nevertheless still longing for those days when men were men, women were women, and both of them (maybe helping each other) used Rawhide. ;)

This isn't new

Posted Jul 17, 2012 19:24 UTC (Tue) by nwnk (guest, #52271) [Link]

I don't think that's entirely fair. The class of xserver ABI bug cited in the "Is Rawhide Supposed To Be Useful" thread you linked pretty much can't happen anymore, like, ever again. If I didn't care, I wouldn't have fixed it. (I will happily admit it was not fixed on a timescale I feel was acceptable, but I made a whole other point about that in the thread, which nobody bit on then and which I don't expect anybody to bite on now.)

I don't think it's reasonable to just toss rawhide to the "it will eat your data" wolves. But I also don't think it's reasonable to expect it to be your daily driver if you're not willing to do some of your own dirty work (and others'). That's what releases are for. That's the entire point of calling them releases: they are things you can expect to consume without needing to think or worry about them.

I can see why rawhide isn't meeting your needs, but I'm a bit unsure what would.

Definitely not new; definitely always been an awful idea

Posted Jul 16, 2012 21:25 UTC (Mon) by louie (guest, #3285) [Link]

Also, users have always been told that "if it breaks in Rawhide you get to keep both pieces"; that isn't new either. And it's always been variable — some developers really don't care about their packages in Rawhide until a new release is imminent, while others do try to keep it working.

While it is true that this has always been the policy, it has also always been an awful policy, pretty much guaranteeing that only the clinically insane, like our beloved editor ;), perform QA on Rawhide.

Perhaps more importantly, it also has the awful side-effect of supporting waterfall-y (read: bad) developer habits. All developers should always be encouraged, by hook and by crook, to keep HEAD buildable and runnable. Explicitly telling users "HEAD will frequently not be runnable" implicitly tells developers that it is perfectly fine to leave HEAD in a non-runnable state. That kind of attitude leads to worse software.

[Not that you don't sometimes need to break HEAD; but it should be something done only when the other options are very bad, and ideally after much testing on a branch. Same should apply to Rawhide.]

Left by Rawhide

Posted Jul 16, 2012 22:10 UTC (Mon) by LDXcw (subscriber, #57072) [Link] (2 responses)

I can recommend Arch Linux as your next distro. Its packages are very recent, and you can make them even more recent by enabling the testing repos.
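As a rough sketch, enabling that is a small pacman.conf change; the Include paths below are the stock ones, and [testing] must be listed before [core] so that its packages take precedence:

    # /etc/pacman.conf (sketch: enabling Arch's testing repository)
    # [testing] must appear above [core] for its newer packages to win.
    [testing]
    Include = /etc/pacman.d/mirrorlist

    [core]
    Include = /etc/pacman.d/mirrorlist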

Left by Rawhide

Posted Jul 17, 2012 1:04 UTC (Tue) by philanecros (subscriber, #66913) [Link] (1 response)

Allan recently emphasized that the [testing] repo is for testing, although it usually works well.
http://allanmcrae.com/2012/07/the-arch-linux-testing-repo...

Left by Rawhide

Posted Aug 1, 2012 13:07 UTC (Wed) by philomath (guest, #84172) [Link]

Even without [testing], Arch is a true rolling release.

I second LDXcw's advice; Arch seems to fit the bill best.

Left by Rawhide

Posted Jul 16, 2012 22:12 UTC (Mon) by hlandgar (subscriber, #39579) [Link] (3 responses)

I started with Red Hat 7 but left after 9 and found Gentoo; I have been running Gentoo ever since. I continuously update and carry hard disks from old computers to new ones. The current one has 6 cores, 24G of memory, and 3 big monitors. Nothing will make you better at fixing any Linux problem than understanding and running Gentoo.

Left by Rawhide

Posted Jul 17, 2012 9:01 UTC (Tue) by drag (guest, #31333) [Link] (2 responses)

> Nothing will make you better at fixing any Linux problem than understanding and running Gentoo.

So you mean that Gentoo is continuously broken?

Because otherwise there are lots of better ways to learn about 'fixing Linux' than pressing 'enter' and watching text scroll by for 2-3 hours.

Left by Rawhide

Posted Jul 18, 2012 20:20 UTC (Wed) by nix (subscriber, #2304) [Link]

Well, source and from-scratch distros can teach you about the general layout of the system and its components, which can be useful when localizing problems later. That's as far as I'd go though.

Left by Rawhide

Posted Sep 30, 2012 9:57 UTC (Sun) by Duncan (guest, #6647) [Link]

You are laboring under a seriously dated misconception.

You don't hit enter and watch text scroll by for hours (unless you want to); on a modern multi-core system with a few gigs of RAM (a few dollars' worth), you hit enter, hit your virtual-desktop/terminal switch key, and go about your business just as you normally would. When you get to a convenient break in your normal work, you switch back, see where it's at, and check whether it's time to run the config updater.

Really, just as with binary distros, the real human time of an upgrade mostly comes AFTER the actual update: checking that any customized config still works with the new version, and figuring out where the upstream developer moved the functionality you depended on in the previous version. That's the same whether the distribution you run is binary, scripted-build-from-source, or LFS; the difference is whether it's an all-at-once flag-day upgrade or a rolling-upgrade distro with individual updates as they come in. (Personally, I like the rolling upgrade: when there's a problem with an update, it's much easier to isolate, since only a few packages updated at once. YMMV.)

Now, if you're running an old 586-era system with perhaps an eighth of a gig of RAM, yes, building from source is going to make using the system for much else somewhat more difficult, but those days are long gone on anything half-current. Even the original single-core Atoms do reasonably well (they /do/ have hyperthreading), as long as they have a gig of RAM to play with, the build is set to idle/batch priority (nice of 19), and you don't go overboard on the number of parallel build jobs. And if your system really IS a sub-gigahertz, sub-gigabyte Pentium-era machine, there's STILL no reason to sit there watching text scroll by unless you get a kick out of it; just schedule the updates for overnight or whatever.
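As a rough illustration of that setup, a minimal Portage configuration might look something like the sketch below; the specific values are assumptions to tune for your own hardware, not recommendations:

    # /etc/portage/make.conf (sketch; values are illustrative assumptions)
    MAKEOPTS="-j4"                 # some parallelism, but don't saturate the box
    PORTAGE_NICENESS="19"          # run builds at the lowest CPU priority
    EMERGE_DEFAULT_OPTS="--jobs=2 --load-average=4"   # back off under load

With something like this in place, an emerge can grind away on another virtual desktop while normal work continues.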

My experience with Debian Unstable

Posted Jul 17, 2012 20:24 UTC (Tue) by BrucePerens (guest, #2510) [Link] (4 responses)

I've run Debian unstable on systems for... oh gosh, could it be 21 years?

I had one day of downtime due to a software issue in that time. And I recently discovered that one of my workstations would not boot unstable, due to some issue with grub or the kernel that I haven't investigated.

That's a pretty good record.

My experience with Debian Unstable

Posted Jul 17, 2012 20:29 UTC (Tue) by BrucePerens (guest, #2510) [Link] (3 responses)

I think it's 19 years.

My experience with Debian Unstable

Posted Jul 17, 2012 23:08 UTC (Tue) by pr1268 (guest, #24648) [Link]

I was going to say... I don't even think Linus' famous Usenet post is 21 years old (although it will be next month!).

My experience with Debian Unstable

Posted Jul 18, 2012 11:24 UTC (Wed) by hummassa (subscriber, #307) [Link] (1 response)

I ran Debian unstable on my desktop for something like ten years, and I had some breakage -- something like a workday of downtime every three months or so. Not that I am complaining too much -- even Ubuntu used to give me half a workday of downtime every dist-upgrade (though I must point out that the last three or four were flawless), probably due to the half-dozen PPAs I still use.

My experience with Debian Testing

Posted Jul 19, 2012 6:29 UTC (Thu) by alison (subscriber, #63752) [Link]

I've been running Debian Testing for about 18 months and have had only one serious problem during that time. Testing has new updates every day, and config files do move around and change, but for the most part I have newish packages and few problems.

Left by Rawhide

Posted Jul 18, 2012 2:27 UTC (Wed) by proyvind (guest, #74683) [Link] (3 responses)

Interesting article!

The "lack" of bureacracy and somewhat anarchistic nature of the almost as old, yet always way more open (almost anarchistic, "almost"..;) nature of Mandriva cooker is and has always been the our key to success for us to remain extremely dynamic and even quite competetive even throughout the worst of times with only a fragments of resources! :o)

Something broken in cooker simply could not remain broken for very long, since we have a policy of no package ownership: if something broke, anyone could fix it immediately, without any boring bureaucracy or territorial egos blocking the fix! (Of course, I cannot claim that we don't actually have any such people. :p)

https://twitter.com/proyvind/status/225413532110426112

Left by Rawhide

Posted Jul 18, 2012 20:25 UTC (Wed) by tonyblackwell (guest, #43641) [Link] (2 responses)

While I wish Mandriva the very best and congratulate them on keeping updates flowing in trying circumstances, Mageia is working very well, with a dynamic community. Do I understand correctly that Mandriva's current plan is to base its future commercial server products on Mageia code?

Left by Rawhide

Posted Jul 18, 2012 21:23 UTC (Wed) by misc (subscriber, #73730) [Link] (1 response)

Yes.

The current Mandriva installer (based on a live CD) is far from suitable for a server (no automated installation, among other things, nor a network one). The lack of resources to do security updates (or updates in cooker at all) would be another problem. For example, there are lots of old packages according to reports like http://youri.zarb.org/demo/mandriva/updates.html
Puppet is outdated (and has several CVEs), mantis is outdated (several CVEs too), tomcat suffers from CVE-2011-3375, sympa suffers at least from CVE-2012-2352, etc. And those are issues in the current development version, so unless someone fixes them, they will end up in the next stable release.

Most server package maintainers, if not all except those paid to work on Mandriva, are now working on Mageia, and an independent body is the best way to share the work and the burden of maintaining a distribution. (That's why Mandriva finally moved to follow in Mageia's footsteps by having a separate entity for the community, split from the company: otherwise, people are not really eager to work for free if they (wrongly) feel they cannot decide anything.)

Left by Rawhide

Posted Jul 24, 2012 0:38 UTC (Tue) by proyvind (guest, #74683) [Link]

@misc: I'd love to answer this here, in great detail and in public, addressing your claims and revealing what really happened around the truly awkward (and obviously slightly clueless) decision of jmcroset regarding Mageia..

Yet professional integrity, respect for relationships, and whatnot dictate that I not do so..

I'd rather discuss these things in private with you (and we clearly have a *LOT* to discuss, amounting to two years of history now.. :/ )

Anyway, on the matter of updates..

Currently Mageia is having a tough and tiring time with its inability to actually find (& hire) someone willing to fill the role of release support manager.

So far, AFAIK, both Stew Benedict (in the past) and David Walser, the current (de facto?) responsible person, have acknowledged this, with the latter even planning to get back to being involved with cooker shortly..

Anyway, I'm on the last night of my vacation in Amsterdam and really want to have a blast of a final night to end it all.. :)

I'll get back to ye all soon..

--
Uttermost sincerely and absolutely truly dearly,
Per Øyvind Karlsen
Mandriva Linux Project Leader
http://www.linkedin.com/in/proyvind

Left by Rawhide

Posted Jul 25, 2012 0:58 UTC (Wed) by KaiRo (subscriber, #1987) [Link]

It looks like all the somewhat major development distros have been mentioned here as proposed alternatives so far, with one exception, and I rather wonder why.
I'm using openSUSE Factory, and it seems to me that it's fairly close to the original Rawhide: it gets new versions early, is mostly usable for production (I'm using it on my primary desktop system, though I don't dare to on the laptop I travel with), and mostly freezes around release time of the main distro (which annoys me slightly at times). Also, there's a pretty open community around it.
An interesting step that has been taken at openSUSE is that packages are not dropped into the Factory distribution right away, but are first put into development repositories, where they can go through build and even functional tests if needed and wanted; only then are they submitted to the actual distribution, as sketched below (OBS tends to be helpful in managing that process).
I don't want to advertise yet another distro, but if you are looking for something closer to the original Rawhide and are willing to investigate other distros, you should take a look at openSUSE Factory as well. The fact that it is also rpm-based is probably helpful too. ;-)
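For those who haven't seen that workflow, here is a rough sketch of what it can look like with the osc client; the project and package names are hypothetical examples, not real Factory packages:

    # Sketch of the OBS devel-repository flow (names are made-up examples).
    osc branch devel:languages:python python-example   # branch into a home project
    osc checkout home:me:branches:devel:languages:python python-example
    cd home:me:branches:devel:languages:python/python-example
    # ... update the spec file and sources, then do a local test build:
    osc build openSUSE_Factory x86_64
    osc commit -m "update to 1.2.3"
    # once it is accepted into the devel project, submit it on toward Factory:
    osc submitrequest devel:languages:python python-example openSUSE:Factory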

Why some updates are delayed

Posted Jul 26, 2012 2:50 UTC (Thu) by brunowolff (guest, #71160) [Link]

The reason you see some fixes delayed is that rawhide inherits from the branched repo or, if there isn't currently one, from the updates repo of the last release. So if a package that hasn't been built for rawhide is broken, and a fix goes to updates-testing of the branched or last release, the fixed version will not be inherited into rawhide. (The sketch below shows one way to inspect this.)

I would prefer that rawhide inherit from updates-testing, though ideally packagers would be building updates for rawhide as well as for the last release. (I believe that is official policy, though I have seen packagers recommend just the opposite to save work.)
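As a rough illustration of that inheritance, one can poke at it with the koji client; the tag names below are assumptions from the Fedora 17/18 era, not verified values:

    # Sketch: inspecting Koji tag inheritance (tag names are assumed examples).
    koji list-tag-inheritance f18            # what the rawhide build tag inherits from
    koji latest-build f18 somepackage        # latest build visible in rawhide
    koji latest-build f17-updates somepackage  # latest build in the last release's updates
    # If rawhide's latest build is older, the fix has landed only in the branch.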

I do think the suggestion some people have made, to stay with the branched releases, is a good one, though arguably alpha freeze might be a better time to switch than the moment of branching.

Rawhide seems best for two kinds of people: those working on new stuff, who should be doing some amount of eating their own dog food, and those willing to be canaries and save other people grief by catching problems early. I have unusual setups and figure that problems triggered by them are likely to slip through if I don't catch them early, so I might as well be running prerelease stuff (rawhide and/or branched) and try to save other people from the grief I'll get either way.


Copyright © 2012, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds