Left by Rawhide
Rawhide, as it happens, is older than Fedora; it was originally launched in August, 1998—almost exactly 14 years ago. Its purpose was to give Red Hat Linux users a chance to test out releases ahead of time and report bugs; it could also have been seen as an attempt to attract users who would otherwise lean toward a distribution like Debian unstable. Rawhide was not a continuously updated distribution; it got occasional "releases" on a schedule determined by Red Hat. One could argue that Fedora itself now plays the role that Red Hat had originally envisioned for Rawhide. But Rawhide persists for those who find Fedora releases to be far too stable, boring, and predictable.
The Rawhide distribution does provide occasional surprises, to the point that any rational person should almost certainly not consider running it on a machine that is needed for any sort of real work. But, at its best, Rawhide is an ideal tool for LWN editors, a group that has not often been accused of being overly rational. Running Rawhide provides a front-seat view into what the development community is up to; fresh software shows up there almost every day. And it can be quite fresh; Fedora developers will often drop beta-level software into Rawhide with the idea of helping to stabilize it before it shows up in finished form as part of a future Fedora release. With Rawhide, you can experience future software releases while almost never having to figure out how to build some complex project from source.
Rawhide also helps one keep one's system problem diagnosis and repair skills up to date—usually at times when one would prefer not to need to exercise such skills. But that's just part of the game.
In the early days of Fedora, Rawhide operated in a manner similar to Debian unstable, but with a shorter release cycle. When a given Fedora release hit feature freeze, Rawhide would freeze and the flow of scary new packages into the distribution would stop. Except, of course, when somebody put something badly broken in anyway, just to make sure everybody was still awake. While the Fedora release stabilized, developers would accumulate lots of new stuff for the next release; it would all hit the Rawhide repository shortly after the stable release was made. One quickly learned to be conservative about Rawhide updates during the immediate post-release period; things would often be badly broken. So it seemed to many that Rawhide was a little too raw during parts of the cycle while being too frozen and boring at other times.
Sometime around 2009, the project came up with the "no frozen Rawhide" idea. The concept was simple: rather than stabilize Fedora releases in the Rawhide repository, each stable release would be branched off Rawhide around feature-freeze time. So Rawhide could continue forward in its full rawness while the upcoming release stabilized on a separate track. It was meant to be the best of both worlds: the development distribution could continue to advance at full speed without interfering with (or getting interference from) the upcoming release. It may be exactly that, but this decision has changed the nature of the Rawhide distribution in fundamental ways.
In May, 2011, Matthew Miller asked the fedora-devel list: "is Rawhide supposed to be useful?" He had been struggling with a problem that had bitten your editor as well: the X server would crash on startup, leaving the system without a graphical display. The fact that Rawhide broke in such a fundamental way was not particularly surprising; Rawhide is supposed to break in horrifying ways occasionally. The real problem is that Rawhide stayed broken for a number of weeks; the responsible developer, it seems, had simply forgotten about the problem. Said developer had clearly not been running Rawhide on his systems; this was the sort of problem that tended to make itself hard to forget for people actually trying to use the software.
So your editor asked: could it be that almost nobody is actually running Rawhide anymore? The fact that it could be unusably broken for weeks without an uproar suggested that the actual user community was quite small. One answer that came back read: "In the week before F15 change freeze, are you really surprised that nobody's running the F16 dumping ground?" At various times your editor has, in response to Rawhide bug reports, been told that running Rawhide is a bad idea (example, another example, yet another example). There seems to be a clear message that, not only are few people running Rawhide, but nobody is really even supposed to be running it.
The new scheme shows its effects in other ways as well. Bug fixes can be slow to make it into Rawhide, even after the bug has been fixed in the current release branch. Occasionally, the "stable" branch has significantly newer software than Rawhide does; Rawhide can become a sort of stale backwater at times. It is not surprising that Fedora developers are strongly focused on doing a proper job with the stable release; that bodes well for the project as a whole. But this focus has come at the expense of the Rawhide branch, which is now seen, by some developers at least, as a "dumping ground."
Recently, your editor applied an update that brought about the familiar "GNOME just forgot all your settings" pathology, combined with the apparent loss of the ability to fix those settings. It was necessary to return to xmodmap commands to put the control key where $DEITY (in the form of the DEC VT100 designers) meant it to be, for example. Some time had passed before this problem was discovered, so the obvious first step was to update again, get current, and see if the problem had gone away. Alas, that was just when Rawhide exploded in a fairly spectacular fashion, with an update leaving the system corrupted and unable to boot. Not exactly the fix that had been hoped for. Fortunately, many years of experience have taught the value of exceptionally good backups, but the episode as a whole was not fun.
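The xmodmap commands alluded to here are presumably the classic caps-lock-to-control swap; the article does not show the exact incantation, but a sketch of what such an ~/.Xmodmap typically contains:

```
! ~/.Xmodmap sketch: put Control where the VT100 designers meant it,
! on the key that now carries Caps Lock
remove Lock = Caps_Lock
remove Control = Control_L
keysym Caps_Lock = Control_L
add Control = Control_L
```

Loading it with "xmodmap ~/.Xmodmap" applies the change to the running X server.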
But what was really not fun was the ensuing discussion. Chuck Forsberg made the reasonable-sounding suggestion that perhaps developers could be bothered to see if their packages actually work before putting them into Rawhide. Adam Williamson responded:
This, in your editor's eyes, is not the description of a distribution that is actually meant to be used by real people.
The interesting thing is that Fedora developers seem to be mostly happy with how Rawhide is working. It gives them a place to stage longer-term changes and see how they play with the rest of the system. Problems can often be found early in the process so that the next Fedora development cycle can start in at least a semi-stable condition. By looking at Rawhide occasionally, developers can get a sense for what their colleagues are up to and what they may have to cope with in the future.
In other words, Rawhide seems to have evolved into a sort of distribution-level equivalent to the kernel's linux-next tree. Developers put future stuff into it freely, stand back, and watch how the monster they have just created behaves for a little while. But it is a rare developer indeed who actually does real work with linux-next kernels or tries to develop against them. Producing kernels that people actually use is not the purpose of linux-next, and, it seems, producing a usable distribution is not Rawhide's purpose.
This article was meant to be a fierce rant on how the Fedora developers should never have had the temerity to produce a development distribution that fails to meet your editor's specific needs. But everybody who read it felt the need to point out that, actually, the Fedora project is not beholden to those needs. If the current form of Rawhide better suits the project's needs and leads to better releases, then changing Rawhide was the right thing for the project to do.
Your editor recognizes that, and would like to express his gratitude for years of fun Rawhide roller coaster rides. But it also seems like time to move on to something else that better suits current needs. What the next distribution will be has yet to be decided, though. One could just follow the Fedora release branches and get something similar to old-style Rawhide with less post-release mess, but perhaps it's time for a return to something Debian-like or to go a little further afield. However things turn out, it should be fun finding a new distribution to get grumpy about.
Posted Jul 16, 2012 16:46 UTC (Mon)
by epa (subscriber, #39769)
[Link] (13 responses)
I believe apt and dpkg offer something like this where you can run Debian stable and pick certain packages from unstable?
Posted Jul 16, 2012 17:01 UTC (Mon)
by drag (guest, #31333)
[Link] (8 responses)
But if you are running 'stable', using apt-pinning to pull in packages from Unstable or Testing is not very useful. The problem you run into is the way Debian handles dependencies: when some lower-level package is upgraded, everything is recompiled against it, and the new packages then depend on the new lower-level package. This is probably not strictly necessary as long as the developers of the low-level packages are careful about not breaking ABIs, but it does avoid the need for Debian to care when libraries don't bother to stay compatible with themselves.
The effect of this is that when you pull in packages from Unstable you will be forced to upgrade huge swaths of your OS.
If you want to mix and match packages, the best approach I have found is to use backports.debian.org and/or backport the packages yourself using deb-src entries. Despite what the Gentoo folks may say, Debian makes handling, building, and installing source-based packages fairly easy. Using those approaches I have always been successful.
Although I expect that forcing dpkg to install a package while ignoring dependency tracking will also work fairly well in many cases.
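For readers unfamiliar with apt-pinning, a minimal sketch of the mechanism under discussion (the priorities shown are illustrative): with an /etc/apt/preferences like the following, and both stable and unstable listed in sources.list, the system tracks stable by default while individual packages can still be pulled in with "apt-get -t unstable install package":

```
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=unstable
Pin-Priority: 200
```

As noted above, though, a package pulled in this way can drag large parts of unstable along with it through library dependencies.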
Posted Jul 16, 2012 17:26 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
And actually using the Unstable distribution in Debian is much less scary than running Rawhide.
Posted Jul 16, 2012 18:11 UTC (Mon)
by drag (guest, #31333)
[Link] (2 responses)
Posted Jul 16, 2012 21:20 UTC (Mon)
by awesomeman (guest, #85116)
[Link] (1 responses)
Posted Jul 16, 2012 22:54 UTC (Mon)
by drag (guest, #31333)
[Link]
I use Debian Unstable and Fedora on my desktops. To me they seem roughly equivalent in terms of 'rawness' and goals, even though they take different approaches.
To find an equivalent for Rawhide you'd have to look at mixing Debian Unstable with Experimental.
Posted Jul 16, 2012 20:55 UTC (Mon)
by epa (subscriber, #39769)
[Link]
Posted Jul 16, 2012 20:58 UTC (Mon)
by foom (subscriber, #14868)
[Link] (1 responses)
It's possible that the interdependence of desktop packages is greater, making it infeasible to do this usefully for a non-server package without upgrading almost the entire OS; I haven't really tried that.
Posted Jul 16, 2012 23:04 UTC (Mon)
by drag (guest, #31333)
[Link]
If the package doesn't want lots of dependencies then I'll pull down straight from Unstable. If it's something that lots of other packages depend on I usually won't do it and will source code compile.
This is usually how it goes for me when I install Debian stable and find out the software I want to run wants newer versions of something-or-other then Debian provides. In ranking from preferable to not:
1. Check backports.debian.org
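That first option amounts to adding the backports repository and installing from it explicitly; a sketch for a squeeze-era system (release names would differ for other versions):

```
# added to /etc/apt/sources.list:
deb http://backports.debian.org/debian-backports squeeze-backports main
```

followed by "apt-get update" and "apt-get -t squeeze-backports install <package>"; backported packages are never installed unless explicitly requested.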
Posted Jul 17, 2012 10:57 UTC (Tue)
by pkern (subscriber, #32883)
[Link]
That's pretty uncommon, especially now that we have symbol files, at least for C libraries. The "recompile everything" case normally only happens when the ABI is broken, which causes the binary package name to change. As we're pretty strict about ABIs, it's not uncommon for us to point upstream at breakage. It is true, however, that some libraries declare that if you compile against a given version, you need at least that same version of the library at run time; that's one approach you can take without symbol files, since it means you do not need to manually check for ABI additions. But we don't do mass rebuilds for those new versions; it's just that packages happen to link against them when they are built and then inherit those dependencies.
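The symbol files mentioned here are the debian/*.symbols files shipped alongside a library package; a sketch for a hypothetical library "libfoo":

```
libfoo.so.1 libfoo1 #MINVER#
 foo_init@Base 1.0
 foo_frobnicate@Base 1.2
```

Each symbol records the first upstream version that provided it, so a program using only foo_init gets a dependency on "libfoo1 (>= 1.0)" rather than on whatever version happened to be current at build time.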
Posted Jul 16, 2012 17:26 UTC (Mon)
by dwmw2 (subscriber, #2063)
[Link] (3 responses)
Note that this doesn't always work. Some GNOME projects are deliberately shipping with broken dependencies so the Rawhide packages aren't marked as requiring new libraries even though they do.
Posted Jul 17, 2012 13:05 UTC (Tue)
by Yenya (subscriber, #52846)
[Link] (2 responses)
So as a side note, the whole mirroring of gigabytes of rawhide is pretty useless even for testing, because rawhide, from the tester's point of view, is already outdated.
Posted Jul 17, 2012 20:54 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link]
Though the infra price should be paid in useful bug reports
Posted Jul 18, 2012 6:21 UTC (Wed)
by michich (guest, #17902)
[Link]
By your reasoning you'd have to conclude that testing of anything is always useless, because the developer could respond with "I already fixed the bug in my local git tree" or "I already thought about this bug and have a fix planned in my head". The fix propagation delay can simply never reach zero.
Posted Jul 16, 2012 16:47 UTC (Mon)
by killefiz (subscriber, #8542)
[Link]
I'd say that nonworking install/netinstall images are something to expect in rawhide. Nonworking packages (except for the occasional dependency issues) should be (and are mostly) avoided by packagers.
Posted Jul 16, 2012 16:51 UTC (Mon)
by marduk (subscriber, #3831)
[Link] (18 responses)
For example, if you are a GNOME user in the stable tree you are running GNOME 2.32. If you are in the testing tree you are using GNOME 3.4 (well, most of it). And if you are on the gnome overlay you are running some parts of 3.5. Best of all, you can mix and match these (to some degree). Let's say you want to run GNOME 3 but want to stick with a "stable" kernel; you can do that. Gentoo also keeps multiple versions of the same package at a time, so it essentially allows you to "downgrade" certain packages. For example, a few days ago I updated to gtk-3.5 and discovered it broke a lot of things that my mostly GNOME 3.4 system utilized. I simply "masked" the 3.5* versions of gtk and then did an update, and it downgraded me to the latest gtk-3.4*. When I feel safe/brave, I can later unmask gtk-3.5 and try it again.
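The masking step described here is, in Gentoo terms, a one-line entry in /etc/portage/package.mask (the version shown is illustrative):

```
# keep gtk+ on the 3.4 series until the 3.5 fallout settles
>=x11-libs/gtk+-3.5.0
```

A subsequent update then treats the latest unmasked 3.4.x version as the target; removing the line re-enables the 3.5 series.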
It also allows you to experiment with stuff and go back fairly easily. For example, I wanted to play with systemd, so I changed some USE flags and converted my system to systemd, rebooted, and there I was running systemd. I lived with systemd for a few days until I decided there were still some things that needed work (e.g. sometimes when I boot I cannot log in... it seems like some services are not being started, or are starting in the wrong order). So I'm giving up on systemd for now and went back to Gentoo's openrc (I had created a new USE file for systemd, so I just had to move that file out of the way), rebooted again, and I'm back on openrc. I'll try systemd in another month or so once the devs have it better integrated.
You can also stick with a mostly stable system but run a few things from the unstable tree (e.g. if you always want the latest Postgres or the latest Firefox). You can even run "live" ebuilds (packages that pull straight from upstream's VCS and build on-the-fly) if you are brave.
Gentoo makes this all easy (well, easy if you know Gentoo). And you start to feel more like *you* are in control of what's on your system, instead of the distro's developers or some build bot deciding what you're going to be running.
Posted Jul 16, 2012 18:06 UTC (Mon)
by nikarul (subscriber, #4462)
[Link] (5 responses)
I will add a couple of caveats. Obviously this requires a system with good CPU and memory resources for the amount of package building you do. And I do maintain a rather large RAM drive for building all but the largest packages (I'm looking at you, LibreOffice). I've had more than one hard drive fail in the past to the onslaught that is 'emerge -uDv world'.
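The RAM drive arrangement mentioned here is commonly done by mounting a tmpfs over portage's build directory; a sketch of the relevant /etc/fstab line (the size and ownership options are guesses that would need tuning per machine):

```
tmpfs   /var/tmp/portage   tmpfs   size=8G,uid=portage,gid=portage,mode=775,noatime   0 0
```

Packages too large to build in RAM (LibreOffice, as noted) can be redirected back to disk via portage's per-package environment support (package.env).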
Posted Jul 16, 2012 21:50 UTC (Mon)
by jackb (guest, #41909)
[Link] (3 responses)
I haven't used a traditional Linux distribution in over 10 years; I've been using Linux from Scratch followed by Gentoo. It's difficult for me to remember what it's like not to have that control. Do regular distributions break less frequently than Gentoo or do they just break in different ways?
Posted Jul 17, 2012 1:45 UTC (Tue)
by kenmoffat (subscriber, #4807)
[Link]
I've also seen problems in specific packages (e.g. abiword with some past versions of libxml2). So, I expect that from time to time there will be *some* breakage on my desktop. But, that's "my system, my rules, my breakage."
I don't expect my changes in the books to break functionality - if they do, I try to fix the problem. I had assumed that all distros took a similar "we don't deliberately break it, but if it's broken we will try to fix it" attitude. Sounds as if I'm too much of an optimist.
ĸen
Posted Jul 17, 2012 13:25 UTC (Tue)
by rsidd (subscriber, #2582)
[Link]
Posted Jul 17, 2012 15:11 UTC (Tue)
by drag (guest, #31333)
[Link]
It feels exactly the same.
Posted Jul 17, 2012 10:29 UTC (Tue)
by Trelane (subscriber, #56877)
[Link]
I run Gentoo on my atom boxes (netbook, nettop). It works fine. (Anecdotally, better [faster/cooler] than prebuilt distros, perhaps due to -Os -march=atom. Could be confirmation bias, though.) Just don't expect huge things (kernel, mozilla, LibreOffice) to compile instantly.
Posted Jul 17, 2012 6:50 UTC (Tue)
by djc (subscriber, #56880)
[Link]
Posted Jul 17, 2012 7:56 UTC (Tue)
by gnu_andrew (guest, #49515)
[Link] (9 responses)
For me, there are two models I can work with. The Gentoo model, which is close to how I believe FOSS should work, gives you full control over which updates to bring in and allows you to create your own unique system. The RHEL/Debian Stable model is like a FOSS Windows/MacOS model, where everything stays visibly the same (bar security updates and major fixes) until you choose some apocalyptic moment to do that big upgrade to the new shiny major version. Fedora, Ubuntu, etc. have something in between which just doesn't work for me.
Posted Jul 17, 2012 11:56 UTC (Tue)
by jwakely (subscriber, #60262)
[Link]
Would you prefer if the people writing kernel drivers for you, or writing the compiler you use, or breaking your desktop environment for you, stopped doing that because they spent their time reviewing all the code in the updates to their own systems? (OK, maybe the desktop guys ;-)
As someone who produces FOSS as well as consuming it I spend far too much of my own time working on a single project, I have no desire to review all the other projects I rely on. I'd rather just take the updates and live with the occasional fits of rage when something seemingly stupid gets done to my system!
Posted Jul 17, 2012 15:57 UTC (Tue)
by drag (guest, #31333)
[Link] (5 responses)
Generally speaking people keep change logs.
> I guess I just don't trust developers that much with my system, and experience has generally proved me correct.
So the developers that write your software are not good enough to compile it?
What a bizarre concept.
And the developers that write the scripts you use to compile everything, and blindly download and execute via portage, are going to do a much better job?
> For me, there are two models I can work with; the Gentoo model, which is close to how I believe FOSS should work, giving you full control over which updates to bring in and allowing you to create your own unique system,
I don't want full control. I want to hand control over to people who are competent and know what they are doing better than I do. Instead of micromanaging everything, I want to be forced to manage only what really matters to me and the task at hand.
Posted Jul 18, 2012 20:24 UTC (Wed)
by nix (subscriber, #2304)
[Link] (4 responses)
But I'm also quite aware that most people just consider the things tools, not totally controllable havens, and I can understand why if you think of your computer as a tool you might be willing to prioritize things that make it a better tool over control issues you don't care about.
Posted Jul 20, 2012 5:28 UTC (Fri)
by dirtyepic (guest, #30178)
[Link] (3 responses)
Gentoo is a distro for the obsessive compulsive. Most people don't get it, and they never will, and that's fine.
Posted Jul 20, 2012 12:42 UTC (Fri)
by nix (subscriber, #2304)
[Link]
Posted Jul 20, 2012 22:49 UTC (Fri)
by man_ls (guest, #15091)
[Link] (1 responses)
Posted Jul 21, 2012 3:16 UTC (Sat)
by dirtyepic (guest, #30178)
[Link]
Some people buy a boat to relax. Some people build one in their basement. In the end it's just personal preference.
Posted Jul 17, 2012 17:19 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (1 responses)
If you are just blindly compiling the source you download without a detailed audit, then there is no functional or security difference. It sounds like you are ascribing magical properties to the source code, as if compiling makes your system more "pure".
Posted Jul 18, 2012 3:24 UTC (Wed)
by AngryChris (guest, #74783)
[Link]
Posted Jul 20, 2012 14:20 UTC (Fri)
by andika (guest, #47219)
[Link]
Posted Jul 16, 2012 17:00 UTC (Mon)
by dowdle (subscriber, #659)
[Link] (3 responses)
My question/comment though is... it is fairly well known that a lot of original development goes on in Fedora... so it would be interesting for you to perhaps sample a few of the other major distributions and see how well that holds up. I'm guessing that it will indeed hold up, and that most other distributions will be somewhat boring by comparison... but then again, what do I know?!?
I do occasionally download the Fedora nightly iso builds... but they seem to have hiatuses... and not being that familiar with the nitty-gritty of their release cycle, I'm guessing that is normal. Following their development releases is a good way to keep up on the development since those are made from Rawhide (I think)... and at least you know they build... and are installable... unless they aren't. That's kind of the way I've been doing some Rawhide tasting. If desired, you can still periodically sample the development builds without being committed to anything... and using a VM means you don't have to worry about breaking something you might care about.
In any event, getting a better handle on the development going on elsewhere is certainly going to be a good thing... but I do enjoy your glimpses into Fedora so I also wish some retention of that was possible. Phoronix also seems to cover the internals of Fedora development some, although mainly via mailing lists... and hey, it is easy enough for those interested to subscribe themselves, right?
Thanks for the article and all past articles regarding Rawhide. They will be missed.
Posted Jul 17, 2012 7:56 UTC (Tue)
by ovitters (guest, #27950)
[Link] (2 responses)
I don't have any major issues running Cauldron, aside from the periodical big changes. For big changes we have an updates_testing channel, to test e.g. new xorg or new systemd. Those are used rarely though.
Packagers not running Cauldron are the exception (though take into account the size of the distribution; Fedora is much bigger).
Note that at the moment I find Cauldron really unstable compared to usual. This is due to two big problems: Postfix doesn't start, and, as we're switching to systemd only, GDM doesn't seem to notice that you logged out.
Posted Jul 18, 2012 16:40 UTC (Wed)
by halfline (guest, #31920)
[Link] (1 responses)
https://bugzilla.gnome.org/show_bug.cgi?id=677556
(and the older:
http://cgit.freedesktop.org/systemd/systemd/commit/?id=8c...
)
Posted Jul 27, 2012 1:29 UTC (Fri)
by ovitters (guest, #27950)
[Link]
Posted Jul 16, 2012 17:09 UTC (Mon)
by drag (guest, #31333)
[Link]
I suggest sticking with the Fedora six-month releases and pulling in packages from Koji if you are interested in something newer than what is offered.
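Pulling an individual package from Koji looks roughly like this (a sketch; the tag and build NVR are illustrative, with the NVR borrowed from the dracut changelog quoted elsewhere in this thread):

```shell
# find the newest build of the package in the tag of interest
koji latest-build f18 dracut
# fetch its binary RPMs for this architecture
koji download-build --arch=x86_64 dracut-020-96.git20120717.fc18
# install them locally, letting yum resolve dependencies
yum localinstall ./dracut-*.rpm
```

Unlike tracking Rawhide wholesale, this keeps the rest of the system on the stable release.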
Posted Jul 16, 2012 17:10 UTC (Mon)
by louie (guest, #3285)
[Link] (10 responses)
It only suits the project's needs well because the project, collectively, is a total failure on the QA front, so what's one more source of failure and wasted opportunity? This is not a new problem. My first publicly recorded mini-rant on the subject is from 2007, and it was already a long-standing problem at that time. As I said then (emphasis added): With a useless Rawhide, Fedora continues to throw away that potential QA advantage. But that's nothing new.
Posted Jul 16, 2012 17:23 UTC (Mon)
by dowdle (subscriber, #659)
[Link] (3 responses)
Fedora definitely isn't for everyone but I prefer it and refer to it as the "innovator distro". We all have our preferences and that is one reason I'm glad so many distros exist.
People sometimes freak out about the number of Linux distros, but I ask them why there aren't multiple flavors of Windows and Mac available... besides the few releases provided by their makers. The answer is obvious: they don't allow third parties to remix and release the way Linux does... but you have to wonder just how many versions of Windows and of Mac OS X there would be if people were allowed to remix them. My guess is that they would have the same "problem" we do if only they allowed it.
Posted Jul 16, 2012 21:17 UTC (Mon)
by louie (guest, #3285)
[Link] (1 responses)
Posted Jul 18, 2012 3:32 UTC (Wed)
by AngryChris (guest, #74783)
[Link]
I have a feeling that most good Free software is good in spite of how the project is run rather than because of it (for any given Free software project).
Posted Jul 17, 2012 9:12 UTC (Tue)
by viiru (subscriber, #53129)
[Link]
There are already nearly endless numbers of custom Windows installer CD images, with different sets of updates and additional drivers added. Distributing those is of course illegal, but since there is a need (there is plenty of hardware where the standard images won't even boot, many manufacturers no longer provide install disks, and even if they did, most users lose those within the first week of ownership) there is a service.
Posted Jul 16, 2012 21:22 UTC (Mon)
by johannbg (guest, #65743)
[Link] (5 responses)
Care to elaborate on that comment?
Btw without proper test cases and without proper debugging procedures hundreds or thousands of people testing the latest code every day is meaningless...
Posted Jul 16, 2012 21:29 UTC (Mon)
by louie (guest, #3285)
[Link] (3 responses)
People testing the code gives you data on the parts of the software that people actually use. That's certainly not perfect, and shouldn't be the end-all/be-all of testing, but having people use and report back on the parts that get used is absolutely the best software testing methodology there is, period.
Posted Jul 16, 2012 21:34 UTC (Mon)
by johannbg (guest, #65743)
[Link] (2 responses)
Posted Jul 16, 2012 21:53 UTC (Mon)
by louie (guest, #3285)
[Link] (1 responses)
Hundreds or thousands of people testing the code is never, ever meaningless. If you think it is, your QA processes are broken.
Posted Jul 23, 2012 11:06 UTC (Mon)
by gmatht (guest, #58961)
[Link]
Posted Jul 17, 2012 0:02 UTC (Tue)
by rgmoore (✭ supporter ✭, #75)
[Link]
Users and test cases are different and provide different kinds of information. Test cases are great because they provide reliable, high quality information, but they're limited; nobody has enough resources to test every configuration their users are going to try out. User reports are messy, but they're documentation of real problems that your QA has to fix. The continued existence of user error reports is proof that the kind of disciplined testing represented by proper test cases and debugging procedures is inadequate. If it were enough, no problems would ever get through for users to find them.
Posted Jul 16, 2012 17:17 UTC (Mon)
by dwmw2 (subscriber, #2063)
[Link] (27 responses)
What goes into Rawhide does go into the next Fedora release. It's like linux-next, where stuff is supposed to be present only if it's going to be sent to Linus in the next merge window; with Rawhide, though, it actually is true: what's pushed to Rawhide now will be in Fedora 18, unless it's subsequently updated again.
The only thing that changed is that in the past, Rawhide was frozen while a Fedora release was being stabilised. During that period, there was nowhere that people could commit the shiny new exciting stuff. This led to a huge amount of changes hitting Rawhide when the floodgates were opened just after a release.
One of the things that the 'No Frozen Rawhide' policy was supposed to do was remove that damming effect, and improve stability in the "immediate post-release period" that you mention.
Also, users have always been told that "if it breaks in Rawhide you get to keep both pieces"; that isn't new either. And it's always been variable — some developers really don't care about their packages in Rawhide until a new release is imminent, while others do try to keep it working.
I believe that Fedora policy does forbid having a package in the stable distro which is newer than the one in Rawhide. It's the same policy that forbids Fedora N from having a newer version of a package than Fedora N+1. Upgrades should, well, upgrade.
I don't think there's anything really new here. You take your life in your hands when you run Rawhide, and you've been fairly lucky if it's been consistently usable for you for a number of years. I note your 'exploded' link actually refers to a mail describing how to fix things after the problematic update, thus implying that a full restore from backup shouldn't have been necessary?
Posted Jul 16, 2012 17:32 UTC (Mon)
by corbet (editor, #1)
[Link] (25 responses)
And no, the instructions to fix the system did not work for me. My life does not currently allow time to figure out what else was hosed; I really had to just give up on it.
Posted Jul 16, 2012 18:49 UTC (Mon)
by jspaleta (subscriber, #50639)
[Link] (3 responses)
run N release
Would this be closer to matching your expectations?
-jef
Posted Jul 16, 2012 19:00 UTC (Mon)
by corbet (editor, #1)
[Link] (1 responses)
Posted Jul 16, 2012 19:06 UTC (Mon)
by jspaleta (subscriber, #50639)
[Link]
-jef"ELACKOFCOFFEE"spaleta
Posted Jul 17, 2012 6:40 UTC (Tue)
by ghane (guest, #1805)
[Link]
I am now on quantal 12.10. Things break every couple of weeks, but logging into console and running an update again has fixed those.
Posted Jul 16, 2012 20:14 UTC (Mon)
by dwmw2 (subscriber, #2063)
[Link] (13 responses)
I'm also not sure that such a change, if it's real, could necessarily be blamed on the no-frozen-rawhide thing. There have been other changes in Fedora packaging over time. There's much more of a focus on "packagers" these days, and much less on "package maintainers" who actually get their hands dirty and work on the code. This naturally translates into a much less happy experience for those who file bugs.
Posted Jul 16, 2012 21:42 UTC (Mon)
by johannbg (guest, #65743)
[Link] (3 responses)
Posted Jul 17, 2012 9:24 UTC (Tue)
by drag (guest, #31333)
[Link] (2 responses)
There is no way that you can expect an OS to bundle everything that everybody could possibly want to run on it. Extensibility is important, and users lack the knowledge, time, and drive to port and compile the software they want to use themselves.
Posted Jul 17, 2012 19:40 UTC (Tue)
by epa (subscriber, #39769)
[Link]
Is there some way the distros and the upstreams can meet halfway? Source code could be distributed in a standard source package format which, as well as the usual configure script, contains metadata about what files are built, what libraries are required, and so on. This in turn could generate source packages for rpm, dpkg, Gentoo, and so on, using translator tools maintained by each distro. This would not be appropriate for core packages, which usually have a lot of distro-specific hacking, but for the long tail of applications and libraries it could bridge the gap between 'distro must package all possible programs' and 'upstream must package for all possible distros'.
Posted Jul 20, 2012 10:57 UTC (Fri)
by misc (subscriber, #73730)
[Link]
A solution would be to take resources into account as a whole, i.e., you cannot upload a package unless there are enough people to take care of QA and updates.
But such a view is far from popular, IMHO (and most users do not care, just as most do not care about sustainability).
Posted Jul 17, 2012 17:14 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (8 responses)
It's not the pure packagers who are the problem. They are actually very careful about what they push, precisely because they're not sure they'll be able to fix it if it breaks. The problem is people who write code and package it themselves, feel they'll never make a mistake, and figure that if it breaks they owe nothing to the Rawhide guinea pigs, who are only there to help them meet coding deadlines (a gross generalization; the dracut people are more aware than most of the risks, though they do have to contend with abrupt systemd changes nowadays).
That's the typical problem you get in any project where devs perform operational jobs, if you didn't make sure beforehand that they understand operations means they have real users and must fix the problems they cause as soon as they cause them (unlike broken code, which can languish in a VCS for days without consequences).
Posted Jul 18, 2012 5:08 UTC (Wed)
by nim-nim (subscriber, #34454)
[Link]
* Tue Jul 17 2012 020-96.git20120717 ← dracut working again
* Wed Jul 11 2012 020-84.git20120711
* Wed Jul 11 2012 020-83.git20120711
* Tue Jul 10 2012 020-72.git20120710
* Tue Jul 10 2012 020-70.git20120710
* Mon Jul 09 2012 020-64.git20120709
* Mon Jul 09 2012 020-57.git20120709
* Mon Jul 09 2012 020-55.git20120709
* Mon Jul 09 2012 020-52.git20120709
* Fri Jul 06 2012 020-51.git20120706 ← start of huge breakage
And:
* Tue Jul 03 2012 Lennart Poettering - 186-1
Posted Jul 18, 2012 19:17 UTC (Wed)
by nix (subscriber, #2304)
[Link] (6 responses)
(Of course *I* would never suffer from such a delusion. My code simply has no bugs. I know because I fixed all the bugs I found, and now I can find no more bugs so I know there must be no more bugs. Also, the Earth has a green sky and six retrograde moons.)
Posted Jul 20, 2012 3:27 UTC (Fri)
by doogie (guest, #2445)
[Link] (3 responses)
Posted Jul 20, 2012 8:47 UTC (Fri)
by liw (subscriber, #6379)
[Link] (2 responses)
Posted Jul 20, 2012 9:24 UTC (Fri)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
That's the uncertainty principle applied to software: bugs not observed do not exist (yet)
Posted Jul 20, 2012 12:41 UTC (Fri)
by nix (subscriber, #2304)
[Link]
Posted Jul 22, 2012 4:40 UTC (Sun)
by Max.Hyre (subscriber, #1054)
[Link] (1 responses)
Posted Jul 23, 2012 17:45 UTC (Mon)
by Tet (guest, #5433)
[Link]
-- Brian Kernighan, "The Elements of Programming Style"
Posted Jul 16, 2012 21:29 UTC (Mon)
by johannbg (guest, #65743)
[Link] (1 responses)
Posted Jul 17, 2012 13:33 UTC (Tue)
by dwmw2 (subscriber, #2063)
[Link]
Posted Jul 16, 2012 21:49 UTC (Mon)
by ceplm (subscriber, #41334)
[Link] (3 responses)
Not saying that it is good or bad (yes, I was using that "update in time of Alpha" policy as well when I was still using Fedora as my main system; now I have to use RHEL for work-related reasons), just clarifying what I read in the original article.
Matěj
Posted Jul 17, 2012 17:02 UTC (Tue)
by nim-nim (subscriber, #34454)
[Link] (2 responses)
It wasn't always the case. A decade ago there was some pride in keeping rawhide usable (and in using it oneself). It was on the (then rhl) list that I first read the term 'dogfoodable' (about rawhide).
It's sad that in their eagerness not to be held accountable for what they put in rawhide some packagers are poisoning the well from which Fedora itself (and derivatives) come.
Posted Jul 17, 2012 20:40 UTC (Tue)
by dwmw2 (subscriber, #2063)
[Link] (1 responses)
Posted Jul 18, 2012 7:25 UTC (Wed)
by ceplm (subscriber, #41334)
[Link]
I would even go further ... even if Rawhide quality really did go down (I don't use Rawhide myself, so I cannot testify to that), and even if such a decrease in quality was caused by the current Fedora development work-flow (and that's a big if), it still doesn't follow that the change in work-flow wasn't worth a possible increase in the overall quality of released Fedoras (if that happened). Maybe we just have to adjust our upgrading policy to the new reality, and the total may still be worth it. And yes, nevertheless I am still longing for those days when men were men, women were women, and both of them (maybe helping each other) used Rawhide. ;)
Posted Jul 17, 2012 19:24 UTC (Tue)
by nwnk (guest, #52271)
[Link]
I don't think it's reasonable to just toss rawhide to the "it will eat your data" wolves. But I also don't think it's reasonable to expect it to be your daily driver if you're not willing to do some of your own dirty work (and others'). That's what releases are for. That's the entire point of calling them releases: they are things you can expect to consume without needing to think or worry about.
I can see why rawhide isn't meeting your needs, but I'm a bit unsure what would.
Posted Jul 16, 2012 21:25 UTC (Mon)
by louie (guest, #3285)
[Link]
While it is true that this has always been the policy, it has also always been an awful policy, pretty much guaranteeing that only the clinically insane, like our beloved editor ;), perform QA on Rawhide.
Perhaps more importantly, it also has the awful side-effect of supporting waterfall-y (read: bad) developer habits. All developers should always be encouraged, by hook and by crook, to keep HEAD buildable and runnable. Explicitly telling users "HEAD will frequently not be runnable" implicitly tells developers that it is perfectly fine to leave HEAD in a non-runnable state. That kind of attitude leads to worse software.
[Not that you don't sometimes need to break HEAD; but it should be something done only when the other options are very bad, and ideally after much testing on a branch. Same should apply to Rawhide.]
Posted Jul 16, 2012 22:10 UTC (Mon)
by LDXcw (subscriber, #57072)
[Link] (2 responses)
Posted Jul 17, 2012 1:04 UTC (Tue)
by philanecros (subscriber, #66913)
[Link] (1 responses)
Posted Aug 1, 2012 13:07 UTC (Wed)
by philomath (guest, #84172)
[Link]
I second LDXcw's advice. Arch seems to fit the bill best.
Posted Jul 16, 2012 22:12 UTC (Mon)
by hlandgar (subscriber, #39579)
[Link] (3 responses)
Posted Jul 17, 2012 9:01 UTC (Tue)
by drag (guest, #31333)
[Link] (2 responses)
So you mean that Gentoo is continuously broken?
Because otherwise there are lots of better ways to learn about 'fixing Linux' than pressing 'enter' and watching text scroll by for 2-3 hours.
Posted Jul 18, 2012 20:20 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Sep 30, 2012 9:57 UTC (Sun)
by Duncan (guest, #6647)
[Link]
You don't hit enter and watch text scroll by for hours (unless you want to); on a modern multi-core system with a few dollars/gigs worth of RAM, you hit enter, hit your virtual-desktop/terminal switch key, and go about your business just as you normally would. When you get to a convenient break in your normal work, you switch back and see where it's at and if it's time to run the config updater.
Really, just as with binary distros, the real human time of an upgrade is generally AFTER the actual update: checking that any customized config still works for the new version, and figuring out where the upstream dev moved the functionality you depended on in the previous version. That's the same regardless of the distribution you run (binary, scripted-build-from-source, or LFS), with the difference being whether it's an all-at-once flag-day upgrade or a rolling-upgrade distro with individual updates as they come in. (Personally, I like the rolling upgrade: when there's a problem with an update, it's much easier to isolate, since only a few packages updated at once and one of them must be the cause. YMMV.)
Now if you're running an old 586-era system with perhaps an eighth of a gig of RAM, yeah, building from source is going to make using the system for much else somewhat more difficult, but those days are long gone on anything half-current. Even the original single-core Atoms do reasonably well (they /do/ have hyperthreading), as long as they have a gig of RAM to play with, the build is set to idle/batch priority (nice of 19), and you don't go overboard on the number of parallel build jobs. And if your system really IS a sub-gigahertz, sub-gigabyte pentium-era machine, there's STILL no reason to sit there watching text scroll by unless you get a kick out of it or something; just schedule the updates for overnight or whatever.
Posted Jul 17, 2012 20:24 UTC (Tue)
by BrucePerens (guest, #2510)
[Link] (4 responses)
I had one day down due to a software issue in that time. And I have recently discovered that one of my workstations would not boot unstable due to some issue with grub or the kernel, not investigated.
That's a pretty good record.
Posted Jul 17, 2012 20:29 UTC (Tue)
by BrucePerens (guest, #2510)
[Link] (3 responses)
Posted Jul 17, 2012 23:08 UTC (Tue)
by pr1268 (guest, #24648)
[Link]
I was going to say... I don't even think Linus' famous Usenet post is 21 years old (although it will be next month!).
Posted Jul 18, 2012 11:24 UTC (Wed)
by hummassa (subscriber, #307)
[Link] (1 responses)
Posted Jul 19, 2012 6:29 UTC (Thu)
by alison (subscriber, #63752)
[Link]
Posted Jul 18, 2012 2:27 UTC (Wed)
by proyvind (guest, #74683)
[Link] (3 responses)
The "lack" of bureaucracy and the somewhat anarchistic ("almost" anarchistic.. ;) nature of the almost-as-old, yet always far more open Mandriva cooker has always been the key to our success, keeping us extremely dynamic and even quite competitive throughout the worst of times with only a fragment of the resources! :o)
Something broken in cooker would simply not have been able to remain broken for very long, as we have a policy of no package ownership; if something broke, anyone would have been able to fix it immediately, without any boring bureaucracy or territorial egos blocking it! (Of course, I cannot claim that we don't actually have any of those people. :p)
Posted Jul 18, 2012 20:25 UTC (Wed)
by tonyblackwell (guest, #43641)
[Link] (2 responses)
Posted Jul 18, 2012 21:23 UTC (Wed)
by misc (subscriber, #73730)
[Link] (1 responses)
The current Mandriva installer (based on the livecd) is far from suitable for a server (no automated installation, among other things, nor a network one). Lack of resources to do security updates (or updates at all in cooker) would be another issue. For example, there are lots of old packages according to reports like http://youri.zarb.org/demo/mandriva/updates.html
Most server package maintainers, if not all except those paid to work on Mandriva, are now working on Mageia, and an independent body is the best way to share the work and the burden of maintaining a distribution (and that's why Mandriva finally moved to follow in Mageia's steps by having a separate entity for the community, split from the company; otherwise, people are not really eager to work for free if they (wrongly) feel they cannot decide anything).
Posted Jul 24, 2012 0:38 UTC (Tue)
by proyvind (guest, #74683)
[Link]
Yet professional integrity, respect for relations, and whatnot dictate that I not..
I'd rather discuss these things in private with you (and we clearly have a *LOT* to discuss, amounting to two years of history now.. :/
Anyways, on the matter of updates..
Currently Mageia is having a tough and tiring time with its inability to find (& hire) someone actually willing to fill the role of release support manager.
So far, AFAIK, both Stew Benedict (in the past) and the current (de facto?) responsible person, David Walser, have acknowledged this, with the latter even planning on getting back to being involved with cooker again shortly..
Anyways, I'm on my last night of vacation in Amsterdam now and really want to end it with a blast.. :)
I'll get back to ye all soon..
--
Posted Jul 25, 2012 0:58 UTC (Wed)
by KaiRo (subscriber, #1987)
[Link]
Posted Jul 26, 2012 2:50 UTC (Thu)
by brunowolff (guest, #71160)
[Link]
I would prefer that rawhide inherit from updates-testing, though ideally packagers would also be building updates in rawhide as well as the last release. (I believe that is official policy, though I have seen packagers recommend just the opposite to save work.)
I do think the suggestion some people have made, to stay with the branched releases is a good one. Though arguably alpha freeze might be a better time to switch rather than right at branch.
Rawhide seems best for people who are either working on new stuff (and should be doing some amount of eating their own dog food) or willing to be canaries and save other people grief by catching problems early. I find I have unusual setups and figure problems triggered by them are likely to slip through if I don't catch them early, so I might as well be running prerelease stuff (either rawhide and/or branched) and try to save other people from the grief I'll get either way.
Selective upgrading of packages
2. See if something can be pulled from testing without pulling in a lot of dependencies.
3. Use apt-source and related items to compile packages.
4. upgrade to testing or unstable.
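For item 2, one common mechanism (a sketch of the usual approach, not necessarily this commenter's exact setup) is APT pinning: with an /etc/apt/preferences along these lines, the system tracks stable by default, while individual packages can still be pulled from testing on request:

```
# /etc/apt/preferences: stable wins by default (priority 700);
# testing is visible but never installed automatically (priority 400)
Package: *
Pin: release a=stable
Pin-Priority: 700

Package: *
Pin: release a=testing
Pin-Priority: 400
```

A single package can then be pulled explicitly with something like `apt-get -t testing install somepackage` (package name illustrative); APT brings in only the dependencies the testing version strictly requires.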
Selective upgrading of packages
The problem you run into is the way Debian does dependencies: when they upgrade some lower-level package, they recompile everything and then have the new packages depend on the new lower-level package. This is probably not necessary as long as the developers of the low-level packages are not huge dicks about breaking ABIs, but it does avoid the need for Debian to care when libraries don't bother to stay compatible with themselves.
You can use yum to upgrade specific binary packages from Rawhide. For example yum --releasever=rawhide update openconnect should pull in the specific package and any of its dependencies. And if its list of dependencies comprises the whole world including glibc, you get to say 'no' and rebuild from source instead.
Selective upgrading of packages
Left by Rawhide
sven (Fedora contributor)
Left by Rawhide
Every once in a while things break, sometimes spectacularly, and it definitely keeps your system-diagnosis skills sharp. But for the most part it works very well and gives you more control over what's going onto your system.
Left by Rawhide
I don't want full control. I want to hand control over to people who are competent and know what they are doing better than I do.
And that's a fundamental difference that you'll never resolve by arguing about it. I'm a control freak -- I got involved with computers in the first place (aged six) entirely because the things were perfect servants that will do precisely what you ask (as long as you can describe it well enough) and can in principle be completely understood. As a consequence, I consider binary-package distros to be a violation of the fundamental reason I use computers in the first place -- control.
Left by Rawhide
When I first started using Linux I tried several binary distros but I kept finding myself attempting to compile mplayer from source and installing gcc prereleases. After a few Linux From Scratch installs I realized I would never be able to maintain an entire distro on my own and still have time for extracurricular activities like sleeping.
And they say free software is not a drug. :)
Meh. With Debian you also have control over the parts that you find interesting, you can compile from source and generate your own packages, but the default is to leave things alone.
To our beloved editor: do not listen to these guys; for regular obsessive people, Gentoo gets old pretty quickly. You sound like you want Debian testing: all of the fun but little unpredictability!
Left by Rawhide
If the current form of Rawhide better suits the project's needs and leads to better releases, then changing Rawhide was the right thing for the project to do.
Let me be clear: I feel that quality is one of the biggest possible advantages free software can have over proprietary software, specifically because you can have hundreds or thousands of people testing the latest code every day. If you're not taking advantage of that, you're throwing away one of the biggest advantages we have over proprietary software.
Left by Rawhide
> them why aren't there multiple flavors of Windows and Mac available...
> besides the few releases provided by their makers? The answer is obvious,
> because they don't allow third parties to remix and release like Linux
> does... but you have to wonder just how many Windows there would be and
> how many Mac OS X's there would be if people were allowed to remix it. My
> guess is that they would have the same "problem" we do if only they were
> allowing it.
Left by Rawhide
Btw, without proper test cases and proper debugging procedures, hundreds or thousands of people testing the latest code every day is meaningless...
Left by Rawhide
hundreds or thousands of people testing the latest code every day is meaningless...
Left by Rawhide
Context?
Left by Rawhide
I'm not sure I'd agree with the assertion that the 'No Frozen Rawhide' policy has "changed the nature of the Rawhide distribution in fundamental ways".
This isn't new
The thing that I think has changed is that developers are not actually running Rawhide and are less concerned with whether it works or not. It has changed in my experience. It's not that things break, one expects that. It's what happens afterward that is different.
This isn't new
run N+1 pre-release branch when it gets branched from rawhide
stay on N+1 pre-release branch through official N+1 release
jump to N+2 pre-release branch when it gets branched from rawhide
That possibility is mentioned in the conclusion. Certainly it would be the easiest change to make from running full frontal Rawhide.
This isn't new
Sanjeev
This isn't new
- disabled systemd in the initramfs, until it works correctly
- add back "--force" to switch-root, otherwise systemd umounts /run
- more systemd journal fixes
- nfs module fix
- install also /lib/modprobe.d/*
- fixed dracut-shutdown service
- safeguards for dracut-install
- for --include also copy symlinks
- stop journal rather than restart
- copy over dracut services to /run/systemd/system
- more systemd unit fixups
- restart systemd-journald in switch-root post
- fixed dracut-install loader ldd error message
- fixed plymouth install
- fixed resume
- fixed dhcp
- no dracut systemd services installed in the system
- more fixups for systemd-udevd unit renaming
- require systemd >= 186
- more fixups for systemd-udevd unit renaming
- fixed prefix in 01-dist.conf
- cope with systemd-udevd unit renaming
- fixed network renaming
- removed dash module
- New upstream release
This isn't new
Sounds as if we have a programming equivalent of Schneier's Law:
Schneier's law of software?
Any programmer can write a program so good he can't find any bugs in it.
It pretty much already exists, and long predates Schneier's:
Schneier's law of software?
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
This isn't new
FWIW, I almost never run Rawhide, and I almost never did; no-frozen-rawhide made no difference to me. The only time in history that I recall any of my machines spending much time running rawhide was when we were doing the PowerPC port to new machines, and that was mostly installer testing.
This isn't new
I agree. I'm not trying to defend this situation — just pointing out that I don't think it's caused by the 'No Frozen Rawhide' thing.
This isn't new
Definitely not new; definitely always been an awful idea
Also, users have always been told that "if it breaks in Rawhide you get to keep both pieces"; that isn't new either. And it's always been variable — some developers really don't care about their packages in Rawhide until a new release is imminent, while others do try to keep it working.
Left by Rawhide
http://allanmcrae.com/2012/07/the-arch-linux-testing-repo...
Left by Rawhide
I've run Debian unstable on systems for... oh gosh, could it be 21 years?
My experience with Debian Unstable
My experience with Debian Testing
Left by Rawhide
Puppet is outdated (and has several CVEs), mantis is outdated (several CVEs too), tomcat is suffering from CVE-2011-3375, sympa is suffering at least from CVE-2012-2352, etc. And those are issues in the current development version, so unless someone fixes them, they will end up in the next stable release.
Left by Rawhide
Uttermost sincerely and absolutely truly dearly,
Per Øyvind Karlsen
Mandriva Linux Project Leader
http://www.linkedin.com/in/proyvind
Left by Rawhide
I'm using openSUSE Factory and it seems to me that it's fairly near to the original Rawhide, getting new versions early, being mostly usable for production (I'm using it on my primary desktop system, not daring on the for-travel laptop though), and mostly freezing around release time of the main distro (which annoys me slightly at times). Also, there's a pretty open community around it.
An interesting step that has been made at openSUSE is that packages are not dropped into the Factory distro right away; they are first put into development repositories, where they can go through build and even functional tests if needed and wanted, and only then are they submitted to the actual distro (OBS tends to be helpful in managing that process).
I don't want to advertise yet another distro, but if you are looking for something nearer to the original Rawhide and you are willing to investigate other distros, you should take a look at openSUSE Factory as well. That one also being rpm-based is probably helpful as well. ;-)
Why some updates are delayed