
Between Fedora 12 and 13

By Jonathan Corbet
November 30, 2009
In many minds, the Fedora 12 release is likely to remain forever associated with the project's ill-advised decision to allow any local user to install packages without the root password. That mistake is now in the past and, in any case, there is far more to Fedora 12 than this particular problem. In this article, your editor looks at the quality of the Fedora 12 release and ponders what Fedora 13 may bring.

The Fedora brand has suffered a bit in recent times; the Fedora 10 and Fedora 11 releases have not proved to be the most stable distributions ever. Some users have begun to lament the passing of the days of Red Hat Linux, when quality control seemingly had a higher priority. Said users may well have never lived through the RHL 4.0 and 5.0 transitions, which were not the most rock-solid systems on the planet. Today's Fedora releases are far larger than Red Hat Linux ever was, and they are more stable than many RHL releases were. We have made progress over time.

Still, recent Fedora releases have had some users wondering if it might not be time to move on to some other distribution. For some of those users, Fedora 12 might be the release which forces the decision one way or another. With this thought in the back of his mind, your editor proceeded to upgrade two systems (one from Fedora 10, the other from 11) to the current release.

As an aside, it should be said that the Fedora "preupgrade" feature is a nice addition. It's still not quite a Debian-style online upgrade, but preupgrade does the work of collecting all the needed package files while the system is operating normally, only requiring that the system be taken down for the actual upgrade operation. No need to burn DVDs. It makes the whole process easier, at least when it works; some users are still reporting problems with preupgrade. It worked flawlessly for your editor, in any case.

Fedora 12, once installed, made an immediate impression: a great many little irritations have gone away. Printing works - every time. The laptop suspends and resumes much more quickly, and it has lost its "you have to resume me twice before I'll stay resumed" behavior. NetworkManager no longer comes up with "network disabled" and it responds far more quickly to network changes. The GNOME desktop even remembered most of its pre-upgrade settings - an unexpected bonus. And so on. From your editor's point of view, the Fedora developers have used the F12 development cycle to fix a big pile of problems, and they would appear to have been kind enough to avoid adding a pile of new problems to replace the old ones. In summary: Fedora 12 is the solid release that this project really needed to create. Compared to that, the new features in F12 (and there are many) are of secondary importance.

While your editor has seen similar comments from others, it's worth noting that not all users are 100% pleased. If people are having trouble with F12, chances are it has to do with graphic adapters. One user went so far as to suggest the cancellation of Fedora 13 so that the developers could work on fixing F12 graphics problems. That seems unlikely to happen, but there is an awareness within the development community that the graphics experience is still not quite what it should be.

Dave Airlie explained the priorities used by the development team when addressing problems. Issues which prevent the system from booting normally are at the top of the list, as are those which keep a normal desktop from working. Unfortunately for certain classes of users, the lowest-priority items are non-GNOME desktops and arbitrary 3D applications. So the above-mentioned user, who was running into trouble getting Blender to work, may have to wait a while for a complete fix. There are also known issues with the Nouveau driver. Users having difficulties with proprietary graphics drivers are, of course, entirely on their own.

In the end, Linux graphics is still a work in progress. There have been a lot of advances in this area, but the job will not be done for a little while yet.

So what comes next? Fedora 13 is tentatively scheduled for release on May 11, 2010. The proposed feature list for this release is just beginning to come together, and some possible features (such as Btrfs-based rollbacks) do not yet appear there. Unsurprisingly, improvements to the Nouveau and Radeon graphics drivers are on the list. Better online telephony support is a possibility for F13 as well.

Another important feature which is likely to appear in Fedora 13 is the Python 3 language. The current plan is to package Python 3 in a way that allows it to be installed alongside Python 2.6 without interference between the two - an important point, since a number of crucial Fedora scripts are written in Python 2. It looks like the only place where non-interference is hard to implement is when attempting to run both within the same address space. That may seem like a strange thing to do - and it is, until you try to run both mod_python and mod_python3 within a web server. Most users are unlikely to notice or install the python3 package with Fedora 13, but it will provide a base for the gradual migration of programs written in Python.
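The point of the parallel packaging is that scripts can be ported gradually. As a rough illustration (the function names here are made up for the example, not taken from any Fedora package), code written to the common subset of the two languages runs unchanged under either interpreter during the transition:

```python
# A minimal sketch of a script written to run unchanged under both
# Python 2.6 and Python 3 -- the kind of gradual migration that parallel
# python/python3 packaging is meant to enable.
from __future__ import print_function  # gives 2.6 the 3.x print()

import sys


def interpreter_label():
    """Return a short label such as 'Python 2' or 'Python 3'."""
    return "Python %d" % sys.version_info[0]


def averages(rows):
    """Compute per-row averages while avoiding 2.x-only idioms
    (dict.iteritems(), implicit integer division, and so on)."""
    return [sum(r) / float(len(r)) for r in rows]


if __name__ == "__main__":
    print(interpreter_label())
    print(averages([[1, 2, 3], [4, 5]]))
```

Running the same file with `python` and with `python3` exercises the two side-by-side stacks without either interfering with the other.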

Fedora 13 users can also look forward to RPM 4.8.0, with a long list of new features. The RPM developers are looking for especially brave and well-backed-up testers to help find any remaining problems before inflicting it upon the more cowardly folks who merely run Rawhide.

Finally, the Fedora developers would like F13 to be a higher-quality release than F12, even though F12 is looking good. To that end, they have started a quality assurance retrospective page, reviewing how the QA process went for F12 and how it can be improved the next time around.

There has been speculation that Fedora 12 will be the release picked by Red Hat to serve as the base for Red Hat Enterprise Linux 6. Despite its remaining problems, F12 should serve well in that role; it is one of the best of the recent Fedora offerings. The challenge for the project now, of course, is to carry that success forward into subsequent releases while simultaneously incorporating all of the new software that the development community is so busily producing. Whether 13 will prove to be a lucky number for Fedora remains to be seen, but F12 seems like a good starting point and the project seems determined to do even better the next time around.



Between Fedora 12 and 13

Posted Nov 30, 2009 22:32 UTC (Mon) by kragil (guest, #34373) [Link] (14 responses)

RH is an American company, so they would never base RH6 on a version 13.
But I guess that RH6 will release after F13 and most of the fixes that went into F13 will eventually end up in RH6.

(BTW I tried F12 KDE SIG on my netbook but the for once really good Kubuntu 9.10 had the edge in almost all regards.)

Between Fedora 12 and 13

Posted Dec 1, 2009 19:04 UTC (Tue) by Trelane (guest, #56877) [Link] (6 responses)

RH is an American company, so they would never base RH6 on a version 13.
Why not?

Between Fedora 12 and 13

Posted Dec 1, 2009 19:40 UTC (Tue) by kragil (guest, #34373) [Link] (5 responses)

"13" is considered "unlucky".

That is why things like this happen: http://apcmag.com/microsoft_to_skip_unlucky_office_13.htm

I think they do it because of their customers. Superstition is an art form in the US.

Between Fedora 12 and 13

Posted Dec 1, 2009 20:24 UTC (Tue) by Trelane (guest, #56877) [Link] (4 responses)

I am aware of "unlucky 13", but disagree that "Superstition is an art form in the US." Perhaps they won't base it on 13, but I doubt it. I know it makes people feel good to bash on the US, but I get rather tired of it.

Between Fedora 12 and 13

Posted Dec 2, 2009 17:40 UTC (Wed) by rvfh (guest, #31018) [Link] (3 responses)

Yeah, 13 is a number avoided in many places in Europe too. Many hotels don't have room 13 for example.

AIUI, it all comes from the Last Supper, so most Christian countries (and that's a lot!) may have this superstition (the 13th is a traitor [Judas], or the 13th will die [Jesus]).

Between Fedora 12 and 13

Posted Dec 3, 2009 18:28 UTC (Thu) by JoeBuck (subscriber, #2330) [Link] (2 responses)

My Chinese colleagues take lucky and unlucky numbers far more seriously than Americans do, and the Chinese government does so as well. In particular, 8 is such a lucky number that the Beijing Olympics opening ceremonies began at 8/8/08 at 8 minutes and 8 seconds after 8pm local time. The number 4 is very unlucky as it sounds like the word for death.

Between Fedora 12 and 13

Posted Dec 4, 2009 10:28 UTC (Fri) by SimonKagstrom (guest, #49801) [Link]

Now that you mention it, Microsoft's decision to skip Office 13 and go for Office 14 instead doesn't sound like a very wise move :-).

Maybe they should jump directly to Office 18 instead!

unlucky numbers

Posted Dec 5, 2009 1:26 UTC (Sat) by giraffedata (guest, #1954) [Link]

My Chinese colleagues take lucky and unlucky numbers far more seriously than Americans do

That's my experience too.

Chinese American business people nearly wet themselves back when "888" was introduced as a toll-free telephone area code.

If you don't know Mandarin but listen to Mandarin TV a lot (as I do, due to my Chinese roommate), the most common phrase you hear is "yi ba ba ba," which appears at the end of most commercials and means "1-888".

Between Fedora 12 and 13

Posted Dec 3, 2009 17:11 UTC (Thu) by Richard_DCS (guest, #56565) [Link] (6 responses)

The biggest reason that they should not base it on version 13 is that RHEL6 is already 9 months late on their self-proclaimed 18-24 month release cycle.

RHEL lifecycle

Posted Dec 3, 2009 19:36 UTC (Thu) by kragil (guest, #34373) [Link] (3 responses)

_I_ think RH silently went from time-based to feature-based.
I guess they want a kernel with a fairly ready Btrfs and some other stuff (KVM, maybe RT, and things higher up the stack.)
If you think about it, it makes sense. Selling an OS with ext4 in 2012? I don't think so. KVM has to replace the current Xen etc.

IMHO going feature based and even longer release cycles makes sense for an enterprise distro with a subscription business model. There is always Fedora (if you are into BDSM that is .. JK)

And I think the lifetime of RH5 will be extended even more.

RHEL lifecycle

Posted Dec 3, 2009 21:23 UTC (Thu) by sbergman27 (guest, #10767) [Link] (2 responses)

I think you are right about the incursion of feature-based thinking. Once a time-based
release cycle gets long enough, it starts getting harder to decide to release without some
highly desired feature. Because you know you are going to have to live with that for two
years. Of course, over the course of the extended release cycle, new things arise that you
come to think of as "must have". And you sure don't want to release without those.

Debian learned this lesson with the Sarge development cycle. The Linux kernel devs
learned this with the 2.5.x development cycle.

And the solution, in both cases, has been *more frequent releases*. And it is a solution
which has worked pretty well. I certainly don't mean that Red Hat needs to go to the usual
6 month release cycle. (Which is too rapid for any distro, IMO. Personally, I think Gnome,
KDE, Xorg, and most distros should target 9 months. But that's another post.) But RH could
benefit from dropping their 18-24 month target to 12-18 months.

With a 12-18 month release cycle, they could afford to go ahead and release the
improvements that are ready... and let the rest wait another 12 months, or so. It wouldn't
be so painful to release without everything on the current list of desired features. And 12-18
months is still long enough to preserve good QA.

It would, however, leave them with more releases to support simultaneously. Such is life, I
guess.

RHEL lifecycle

Posted Dec 3, 2009 22:33 UTC (Thu) by kragil (guest, #34373) [Link] (1 responses)

Well, there is a whole ecosystem around RHEL, which RH probably wants to preserve. More frequent releases would demand faster certification from Oracle, IBM, SAP etc., which is very unlikely; they mostly only certify new products. Customers would be unhappy.
Yearly releases would only work if RH provided the whole stack, but for most people they don't. I think they want to get there, but so far ..

I think most customers are happy with RHEL5 and are only willing to switch for compelling features. So if a 2-year update cycle won't offer those, RH will adopt a longer cycle, which is what has happened, I guess.

IMO this is RH's new release policy:

"Gather enough features that would compel our customers to switch and that we can't realistically backport, and then release (and adapt the lifetime of products accordingly; 8 or 9 years max)."

Maybe that is the crux of the subscription model, you have to do what your customers want (I know how strange that sounds.)

RHEL lifecycle

Posted Dec 3, 2009 23:33 UTC (Thu) by sbergman27 (guest, #10767) [Link]

"""
Maybe that is the crux of the subscription model, you have to do what your customers want
"""

Then if they've decided to change policy, they need to at least stop actively
advertising 18-24 month release cycles in their *current* sales material. Even if they are not
ready to actually announce a change yet. Unless they think their customers *want* to be
deceived.

Between Fedora 12 and 13

Posted Dec 3, 2009 19:58 UTC (Thu) by sbergman27 (guest, #10767) [Link] (1 responses)

Indeed. A predictable release cycle of 18-24 months was one of Red Hat's major selling
points for RHEL. And the RHEL5 General Overview document still claims 18-24 months,
even as they cavalierly violate that stated policy.

http://www.redhat.com/f/pdf/rhel/rhel5_overview.pdf

I used to use some CentOS for XDMCP servers. And on a release cycle of 18-24 months, that
works out OK. Though I tend to prefer to update at 12 month intervals. And they were doing
pretty well until now. Oh, RHEL5 was a bit late. But not enough to worry about. But I'm
really surprised that Red Hat is being as cavalier as they are in violating their stated policy.

Now that all my CentOS installations are purely servers, of the conventional sort, I don't
mind it that much. But it really is a bit of an issue for people who chose RHEL/CentOS for
desktop related work. Granted they've updated some of the desktop apps. But those are
apps that an admin would have been able to update fairly easily anyway. (And in fact, I
used to do a Firefox/Thunderbird/OO.o facelift toward the end of the release cycle, anyway.)
But it looks like RHEL/CentOS desktop folks are likely to be stuck with creaky old GNOME
2.16 and kernel 2.6.18 for some time to come, even if RHEL 6 is based upon F12.

Lest this post seem too negative (as some here seem to have developed a hair trigger
when I criticize things in the RH world) I will say that to their credit, the 7 year support cycle
for both server and desktop pretty much blows all competing distros out of the water. Even
Ubuntu's 3yr desktop/5 yr server support cycle for LTS releases does not match that. So
good on Red Hat for that. Presumably, that is a promise that they will not choose to break.
At least, I would be very surprised if they did.

Between Fedora 12 and 13

Posted Dec 11, 2009 17:04 UTC (Fri) by damentz (subscriber, #41789) [Link]

Who says RHEL needs to be based off a version of Fedora? They just pick software that is better than what they were using in the previous version of RHEL.

Fedora is a testing ground for new features, not the new RHEL core packages.

Between Fedora 12 and 13

Posted Nov 30, 2009 23:00 UTC (Mon) by smoogen (subscriber, #97) [Link] (3 responses)

Wow, that brought back memories... When I started at Red Hat, Red Hat Linux 4.0 was considered the worst release it had ever done.... 4.1 and 4.2 were both basically seen as stabilizers. Then we released 5.0 and all new levels of bugginess were found due to going to glibc-2 and compiler changes. Funnily enough, Red Hat Linux 5.1 looked like it was going to be worse (it shipped with a problem that caused Netscape to segfault). RHL-5.2 was very stable and I remember running it until about the middle of 6.1 testing (6.0 was not too stable either.)

However, the number of packages in these releases could fit onto 1-2 CDs without high compression. [The current live CDs should hold more packages thanks to higher compression levels that weren't used back then because of CPU limitations.]

Between Fedora 12 and 13

Posted Dec 1, 2009 2:18 UTC (Tue) by mattdm (subscriber, #18) [Link] (2 responses)

This pattern continues -- 6.2 was a very solid release, and then 7 was awful -- but usable again after 7.2.

Then things got all strange with a 7.3 release, but we're back to awful at 8.0. (No offense to all you Red Hat engineers who worked on this -- I'm sure it wasn't your fault.)

But then RHL 9 was very, very nice. I mean, so nice that it took me, like, a decade to retire it at BU.

But then, if we pretend FC1 and FC2 were like Red Hat 10.0 and 10.1, it all makes sense again, with FC3 being the first really good Fedora.

(I don't think the progression follows after that, but I've been running rawhide since Fedora 7, so I'm a bit out of touch.)

Between Fedora 12 and 13

Posted Dec 2, 2009 11:45 UTC (Wed) by error27 (subscriber, #8346) [Link]

Everything before RH8 was ugly as pants. RH7.1 and 7.2 were crashy as well as being horrible to look at. RH7.3 was pretty stable.

RH8 was a drastic improvement so far as the UI was concerned. It had that one bug in libc which broke external MySQL connections (bug id 77467). The way I fixed it was to pull in some kind of grotty third-party version of libc. But I was so happy about the UI fixes that I was willing to overlook any other problems.

Between Fedora 12 and 13

Posted Dec 3, 2009 20:26 UTC (Thu) by sbergman27 (guest, #10767) [Link]

This was all pretty much intended. Oh, certainly RH wanted every release to be stable. But
the X.0 releases got a bunch of new features, and the X.1 and X.2 releases were stabilizing
releases. This was all long before Fedora. But one could roughly equate RH X.0 to a Fedora
release, and RH X.1 and X.2 to RHEL releases. The mapping is not perfect, but you get the
idea. Incidentally, there was an enterprise 6.2 release (6.2E). It was the forerunner of RHEL.
And RH 7.2 was the basis for RHEL2.1. (Amusingly, the first RHEL release was 2.1, because
the enterprise shuns both 1.x releases, and x.0 releases. 2.1 was the first marketable
number available!)

The transition from the old libc to glibc in RH5.0 is one I remember well. Folks grumbling
about Xorg driver regressions today really have no idea. I don't think that *anything* since
then has been so disruptive. I still have a RH 5.0 box up in my storage room. And when I
pass it I get a sort of feeling of nostalgia, with an underlying case of mild heebie-jeebies.

And the strange thing about it is... it seems like only yesterday. Cliché, I know. But it really
does seem like only a year or two ago.

Depends

Posted Nov 30, 2009 23:53 UTC (Mon) by bojan (subscriber, #14302) [Link] (6 responses)

I guess the experience depends on the hardware as well. For me, F-12 is a major regression:

- 3D graphics doesn't work without KMS
- 3D graphics + KMS + hibernate/thaw or suspend/resume crashes the system
- my network keeps disconnecting

Yeah, I got hit big time, because I have Intel 945GM/GMS graphics and Broadcom BCM4401-B0 NIC. So, several times a day, I have to start my network card again. And, of course, I have to suffer through slowness of 2D and metacity, instead of having smooth 3D graphics and compiz.

YMMV, as they say. But, I'll suffer through it until it gets fixed - it's not the end of the world.

PS. Yes, bugs have been filed etc.

Depends

Posted Dec 1, 2009 0:05 UTC (Tue) by jspaleta (subscriber, #50639) [Link] (2 responses)

I'd appreciate a reference to the intel 945 bug ticket number.

-jef

Depends

Posted Dec 1, 2009 2:54 UTC (Tue) by bojan (subscriber, #14302) [Link] (1 responses)

https://bugzilla.redhat.com/show_bug.cgi?id=537494

Other folks reported other bugs related to Intel graphics. For instance:

https://bugzilla.redhat.com/show_bug.cgi?id=523646

Depends

Posted Dec 1, 2009 4:05 UTC (Tue) by bojan (subscriber, #14302) [Link]

BTW, looks like this is the workaround for slowness:

gconftool-2 -s /apps/compiz/general/screen0/options/sync_to_vblank -t bool false

Also, kernels after -155 appear to have the fix. Will test.

Depends

Posted Dec 4, 2009 21:37 UTC (Fri) by Tet (subscriber, #5433) [Link] (2 responses)

For me, F-12 is a major regression

Heh. For me it's the total opposite. It's a huge step forward. F10 and F11 were dire releases, and I was seriously thinking of looking elsewhere. But F12 is the best release for a while, and is enough to keep me on Fedora for now. Yes, there are problems. KMS doesn't work, and audio doesn't work out of the box. But I can get it to the point where I can do the basics (read my email, surf the web, print, listen to music) with only minor hiccoughs. With F10 and F11, I couldn't get that far, no matter how much effort I put in. I'm starting to think Fedora works in cycles of threes. FC6 was great, F9 was OK and F12 is looking good for me. I just hope I don't have to wait until F15 to get another decent one.

Depends

Posted Dec 5, 2009 0:24 UTC (Sat) by nix (subscriber, #2304) [Link] (1 responses)

"The [Fedorans] did everything in threes."

Depends

Posted Dec 10, 2009 14:52 UTC (Thu) by fatrat (guest, #1518) [Link]


A quote that made my day ;)

RHEL6

Posted Nov 30, 2009 23:56 UTC (Mon) by Felix_the_Mac (guest, #32242) [Link] (18 responses)

Linux 2.6.18 (the RHEL5 kernel) was released on 2006-09-16. (Arrr!)
RHEL5 was released on 2007-03-14.

There has been mounting speculation that RHEL6 would be based upon F10, F11, F12 ....

But, as an example of the doubt, Jonathan is suggesting here that RHEL6 may be based on F12, whereas Wikipedia (bastion of reliable information) says:
"Red Hat Enterprise Linux 6, will be based on Fedora 14, supposed to arrive in the third quarter of 2011, no codename has been finalized"

My questions:
1. With the rapid pace of Kernel development how long can Red Hat feasibly stick with 2.6.18?
2. From a marketing perspective how long can they stick with 2.6.18?
3. Why are they not officially telling us their product plan?
4. How come nobody is leaking?
5. Is there something in recent kernels that is stopping them from using more recent releases? e.g. CFS?
6. And while we're at it ... whatever happened to the Red Hat desktop?

Enquiring minds want to know!

RHEL6

Posted Dec 1, 2009 0:31 UTC (Tue) by tialaramex (subscriber, #21167) [Link] (8 responses)

Actually Red Hat currently supports a 2.4 series kernel in RHEL 3, and will do so into the middle of next year.

It is possible that Red Hat thinks supporting four releases at once was too much, and has decided, but not yet announced, that it will release RHEL less frequently, but with long phase 1 (new hardware and features) support. This might be a good choice because as the product matures, Microsoft have seen that customers become more and more resistant to upgrading. Rightly or wrongly they think it's an added cost (not a software cost, both Red Hat and Microsoft offer products where you don't pay for the specific version you use - but a training and testing cost) for no value.

So whereas you might have found customers would start using RHEL 4 on some production systems six months after it came out, and were using RHEL 5 within a year, Red Hat may have data that suggests if RHEL 6 was available today, it wouldn't drive any sales until 2011 anyway. So why spend the extra money on engineering? Wait until customers are begging for the new features, right?

Fedora offers them an advantage here. If Microsoft has some trouble that leaves them without even a beta of the new Windows Server for an extra year, they've got no way to put (server) features out there for customers to see. With Fedora, Red Hat gets an opportunity to show the way every six months, which must be good for customer confidence, while at the same time the rapid turnover of Fedora releases minimises ongoing engineering support overhead & makes it worthless for the roles where you might plausibly sell an RHEL license.

RHEL6

Posted Dec 1, 2009 10:17 UTC (Tue) by miguelzinho (guest, #40535) [Link] (6 responses)

Thank you very much! Finally, someone who understands that Fedora is a perpetual beta for Red Hat products.

Fedora a beta for RHEL?

Posted Dec 1, 2009 19:19 UTC (Tue) by dowdle (subscriber, #659) [Link] (4 responses)

Fedora is considered the "upstream" of RHEL... and insofar as any "upstream" is a perpetual beta of a "downstream", ok... call it whatever you want. I guess that makes Ubuntu a perpetual beta of Debian? No? Why not?

In any event, there are a large number of differences between Fedora and RHEL. Take the biggest difference: the number of packages. If you want to call Fedora a beta for RHEL, call 1/10th of it a beta for RHEL, because the other 9/10ths aren't even part of RHEL. I'm just guessing with those percentages... I haven't actually run the exact package numbers, but you get the point.

Fedora a beta for RHEL?

Posted Dec 1, 2009 23:56 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

~2500 binary packages in RHEL vs. ~16000 in Fedora.

Fedora a beta for RHEL?

Posted Dec 3, 2009 5:52 UTC (Thu) by qg6te2 (guest, #52587) [Link]

It should be mentioned that Fedora's EPEL effort provides a large set of extra packages for RHEL.

While these extra packages are not officially part of RHEL, they considerably increase the number of packages directly usable on RHEL.

Fedora a beta for RHEL?

Posted Dec 3, 2009 0:21 UTC (Thu) by xoddam (subscriber, #2322) [Link] (1 responses)

No, Debian *unstable* is a perpetual beta of Ubuntu.

Kinda.

Fedora a beta for RHEL?

Posted Dec 4, 2009 19:32 UTC (Fri) by misiu_mp (guest, #41936) [Link]

No, Debian unstable is a perpetual beta of Debian stable.
Ubuntu is just a perpetual beta.
(evil me)

RHEL6

Posted Dec 3, 2009 20:52 UTC (Thu) by sbergman27 (guest, #10767) [Link]

We're not that rare.

RHEL6

Posted Dec 3, 2009 20:59 UTC (Thu) by sbergman27 (guest, #10767) [Link]

"""
It is possible that Red Hat thinks supporting four releases at once was too much, and has
decided, but not yet announced, that it will release RHEL less frequently,
"""

The proper way to have made that change would have been to make it effective for RHEL7.
They had already promised 18-24 month releases to RHEL5 customers. And, in fact, they are still
claiming an 18-24 month release cycle in their current sales literature, even as they blithely
disregard it:

http://www.redhat.com/f/pdf/rhel/rhel5_overview.pdf

RHEL6

Posted Dec 1, 2009 3:44 UTC (Tue) by qg6te2 (guest, #52587) [Link]

If one has a bit of time to wade through Red Hat's Bugzilla, it can be noticed that RH folks are busy making a RHEL 6 beta.

A reasonable guesstimate is that RHEL 6 will be a hybrid between F-12 and F-13. That is, F-12 will be the base system, with some newer packages from F-13. A beta will be released roughly around the same time as F-13, partly in order to make use of community feedback (i.e. bug reports) on F-13.

It's likely that either the beta or details of the final product will be announced at RH's summit in 2010.

RHEL6

Posted Dec 1, 2009 8:58 UTC (Tue) by lkundrak (subscriber, #43452) [Link] (3 responses)

Ever bothered to look at the RHEL 5 (5.4) kernel? You'll get the impression that it's not quite 2.6.18 anymore.

RHEL6

Posted Dec 1, 2009 9:45 UTC (Tue) by Felix_the_Mac (guest, #32242) [Link] (1 responses)

Yes, I appreciate that, it was embedded within my question:
"With the rapid pace of Kernel development how long can Red Hat feasibly stick with 2.6.18?"

RHEL6

Posted Dec 1, 2009 16:22 UTC (Tue) by ewan (guest, #5533) [Link]

They can keep calling the RHEL kernel 2.6.18 indefinitely.

RHEL6

Posted Dec 19, 2009 11:00 UTC (Sat) by jengelh (subscriber, #33263) [Link]

Yeah. It's so much not 2.6.18 that most things that are designed to compile with a genuine 2.6.18 will outright fail.

RHEL6

Posted Dec 1, 2009 13:51 UTC (Tue) by ceplm (subscriber, #41334) [Link]

> And while we're at it ... whatever happened to the Red Hat desktop?

While I cannot comment on the other ones, I am not sure what you mean here ... there is a commercially available desktop product http://www.redhat.com/rhel/desktop/ and of course Fedora serves pretty well as a desktop OS. I would be very sorry if it didn't, because that's what I am employed to develop ;).

RHEL-based kernels?

Posted Dec 1, 2009 19:26 UTC (Tue) by dowdle (subscriber, #659) [Link]

LWN has run a few articles regarding "Enterprise" kernels that address your questions. You seem to be overlooking the fact that Red Hat releases an updated RHEL-5 version about every 6 months with a new 2.6.18 branch... and in doing so they always back-port drivers, some features, and security fixes. In the last few updates they have also seen fit to re-base some desktop apps on newer versions... or upgrade an app if upstream has discontinued the base they were using (take Firefox in RHEL4, for example).

RHEL's 2.6.18 is really 2.6.18 but with a significant amount of code from newer releases mixed in.

So to answer your question... how long can they continue to use it? That is like asking how long people can live in a house. As long as the house is continuously remodelled to meet the needs of the residents, it can go on until the planet gets destroyed.

RHEL6 - Fedora speculation

Posted Jan 30, 2010 22:45 UTC (Sat) by jroysdon (guest, #63273) [Link]

I've done a bit of my own speculation as to RHEL6-to-Fedora versioning. I tend to agree with another poster here, based on the evidence I cite, that RHEL6 will be based on F12 and/or a hybrid of F12 with F13 refreshes.

RHEL6

Posted Apr 13, 2010 5:08 UTC (Tue) by antus (guest, #65245) [Link]

Keeping kernel 2.6.18 as a base (read: the same API) does have its advantages in some commercial settings. One of the suckiest things about Fedora for me personally is running binary nvidia drivers, which break often when new kernels come out.

By contrast, we maintain a commercial telephony IVR based on RHEL3. It's still in production, still works, and the company that owns it has no interest in updating, with the exception of security updates. Due to Red Hat maintaining the same kernel release, I was able to install yum on this machine, "yum update" over 600 packages (including the kernel), reboot, and have the machine come straight back up and start taking phone calls, even without rebuilding the proprietary drivers for the IVR hardware.

A lot of purists will hate that, but in an unbiased world where you just want it to work (enterprise), it can be a major plus.

Even so, we are running RHEL 3, 4 and 5 systems now and I'm looking forward to RHEL6 for new installs. The old ones won't be updated so long as they still serve their purpose.

Between Fedora 12 and 13

Posted Dec 1, 2009 0:16 UTC (Tue) by jspaleta (subscriber, #50639) [Link] (31 responses)

So... Dave was nice enough to lay out what the priorities are for Red Hat-staffed manhours when it comes to video driver development work. The real question is: is there a business interest that cares enough about 3D support to staff manhours to work on it? Novell? Intel? ... dare I say it... Canonical? Other than Red Hat, who's funding video driver work, and are any of them actually prioritizing 3D support at the top of their staffing?

Dave goes on to say that 3-D support is a good candidate for someone in the community to step up and contribute to improve.

http://thread.gmane.org/gmane.linux.redhat.fedora.devel/1...

-jef

Between Fedora 12 and 13

Posted Dec 1, 2009 0:40 UTC (Tue) by foom (subscriber, #14868) [Link]

> dare I say it... Canonical

Indeed, this does seem like something that ought to be right up near the top of their prioritized list
of things to fund work on, considering the Desktop focus.

Between Fedora 12 and 13

Posted Dec 1, 2009 1:01 UTC (Tue) by roblucid (guest, #48964) [Link]

Novell's got a commercial desktop, and has a team doing the ATI RadeonHD driver.

Intel uses Linux internally for CAD; presumably they tend to use lots of integrated graphics, but at least they have worked to update the video stack.

With X & KMS, plus the improved graphics memory management technologies, it seems unreasonable to expect non-glitchy 3D in releases made less than 8 weeks after a new kernel version.

If the bugs stay unfixed in 3 or 4 months, then I'll really start to be concerned.

The reality is, most users go get the blobs from Nvidia & ATI/AMD, with Nvidia rated as less trouble because it installs more reliably.

Between Fedora 12 and 13

Posted Dec 1, 2009 1:51 UTC (Tue) by drag (guest, #31333) [Link] (22 responses)

Well... 3D graphics is important for a lot of reasons.

Think about the things you've been reading lately:

TTM (and Intel's GEM) --- A unified memory management system is required for many basic features in OpenGL 2+ and other 3D-related APIs. (Nvidia's proprietary drivers have had this feature since day 1.)

KMS --- Required for unified memory management. It also effectively solves the problems Linux users experience when doing things like presentations on projectors (given the previous inability to hotplug monitors and such). Previously you'd have to do things like edit your Xorg.conf and all that happy crap. Plus having it in the kernel makes it useful for things other than just X, makes it faster, makes it more reliable, and is much easier to deal with.

No-DDX --- Getting rid of Device Dependent X drivers is necessary for very good unified memory management. It'll make it easier to improve performance for desktops that have a mixture of 2D and 3D elements (although in micro-benchmarks it probably won't look like a big difference). It makes it possible to support multiple logged-in GUI users in a meaningful way. It will eliminate the requirement to run X as setuid root. It'll make getting working 2D AND 3D drivers vastly easier. Etc. Etc.

Gallium --- Provides a unified driver framework on top of the unified memory management framework. It'll allow Linux to have multiple effective ways to access the processing power of GPUs, make writing drivers easier, make performance improvements easier, etc. Right now Linux only really supports EXA and OpenGL in a useful manner, with OpenGL being barely useful (with open source drivers). If you want GLSL shaders done right, media encoding/decoding, raytracing acceleration, OpenCL, or any other GPU-related technology working well, Gallium will need to be perfected. Of course this has to wait for all the other building blocks above to get done and mature a bit first.

You have a couple major things happening:

Video cards no longer support 2D acceleration in any way, shape, or manner. No 2D engines. On the newest ATI cards it's only done through firmware emulation, and that will end pretty quickly once ATI stops having to give a shit about XP support.

All 'acceleration' will actually be done through the '3D pipelines', which are slowly turning into little more than a pure software solution optimized to run across your CPU and your GPU. The GPU is nothing more than dozens or even hundreds of little 'microprocessor cores' that are programmable in how they are accessed and what they are used for.

And eventually the GPU will be integrated as a core on your CPU to save expense and improve performance, and it should be available to all applications for doing all sorts of calculations beyond those designed just for rendering something on your monitor.

----------------------------------

Basically the business model is:

"If you give a shit about anything remotely to do with Linux combined with graphics in any way shape or manner, plus you care about Linux being able to effectively use the multiprocessor machines of the future (GPGPU) then you should really really really really really care about this sort of thing. It is exceptionally critical and without this Linux will not be able to do even relatively basic functions that users of other OSes will take for granted".

Keep in mind that the technical equivalents of everything I described up there have been present in Windows since Windows XP came out, more or less. Some of it was not really native to Windows until Vista, but it's all been in their drivers. (well none of the GP-GPU type stuff.)

So Novell, Red Hat, Canonical, and anybody who hopes that Linux will be competitive into the next decade should be willing to put money and developer time behind this sort of thing.

Right now the champions of Linux graphics seem to be Red Hat, VMware, Intel, and ATI, since I think they are the ones putting the most money into it. But I am not sure... I am sure that somebody involved in X development would have a much better idea.

Forgive me if I said something ignorant above, I could be quite mistaken about specifics, but I should be generally right.

3D overload

Posted Dec 1, 2009 8:12 UTC (Tue) by eru (subscriber, #2753) [Link] (11 responses)

I fear you are right about the future, but I find all this emphasis on 3D pretty annoying, a waste of resources. Who cares about 3D? Gamers and those working on CAD. For me and (gazing around) everyone in the big open-plan office around me, good 2D acceleration with maybe some live video support thrown in (for those internal corporate clips featuring flying logos and the talking head of some boss) is all that is required. It's just text and pictures. Stuff that graphics cards circa 1999 already did pretty well...

Compiz and similar? Seriously, what are they for? I tried, for all of 5 minutes, felt nauseated and went back to ye olde desktop.

3D overlord

Posted Dec 1, 2009 8:50 UTC (Tue) by kragil (guest, #34373) [Link] (2 responses)

Well, Compiz is not the real goal. Having fast smooth non-flickering non-tearing graphics and tapping into the power of the GPU is.
The GPU is a powerful friend for doing a lot of things.
On Windows, IE and Firefox will render their web pages with the GPU, which provides a lot of acceleration.

I'm not 100% sure, but couldn't almost all of the things X does be done by the GPU? (Drawing, hinting, font rendering, etc.) Wouldn't that be something?

And I grow quite tired of the memory consumption whining. If done right, only the memory on the graphics card will be used, and not having 512MB of graphics memory lying around dormant is only a good thing in my book. (Win7 already does it that way... Vista didn't.)

@drag: Thankfully Windows XP support will be with us for years and years to come. Hell, it is still being sold. So before 2015 we don't have to worry that much. Linux is lucky in that regard, but that is just for the fairly broken status quo.

3D overlord

Posted Dec 1, 2009 9:16 UTC (Tue) by eru (subscriber, #2753) [Link] (1 responses)

And I grow quite tired of the memory consumption whining. [...]

Actually, when writing about resources, I wasn't whining about memory consumption (this time... normally I whine about it a lot). What I meant was the extensive need for bright minds' time to debug 3D support and (even worse) reverse-engineer proprietary 3D hardware and drivers for things like Nouveau.

But I guess I just have to stop worrying and learn to love 3D cards.

3D overlord

Posted Dec 1, 2009 17:11 UTC (Tue) by drag (guest, #31333) [Link]

Well, don't worry too much about the concentration on '3D'. The GPU is really just a bunch of tiny processor cores. We (in Linux) can't use them for anything too substantial in distros because the proprietary software, well, is proprietary, and the open source stuff is not up to snuff yet. Think of it like a math co-processor in a 286 machine.

The 'fixed function' "OpenGL Accelerator" or "DirectX Accelerator" style graphics card, which was designed specifically to accelerate one or two APIs, started dying off with the introduction of things like the GeForce 256. Ever since then they have been growing more and more general-purpose. The DDX and Mesa DRI drivers were designed specifically for that sort of fixed-function model, where you hand off certain OpenGL functions to hardware. This is one of the reasons that hardware support is so slow in coming and it's so difficult to make the drivers stable and useful.

The GeForce 256 was the first major step away from that sort of concept, and it was introduced around mid-1999, which goes to show you how far behind Linux really is.

Nowadays they can be used for just about anything: accelerating media decoding/encoding, (certain types of) super-fast floating point calculations for scientific work, and stuff like that. Hell, I expect that they could possibly be used for some sort of crypto or random number generation. The push for the '3D' desktop is partially just a vehicle for change: to get users to actually start caring about and heavily using 3D stuff, especially open source drivers. The unified memory management and related items (KMS/DRI2/etc.) are a basic dependency for Linux to take advantage of the power the GPU can unlock. Besides all that, of course, it will help with security, since we can separate graphical users from root in a much more substantial way, and it will help make the desktop more attractive due to the increase in performance, stability, and that sort of thing.

3D overload

Posted Dec 1, 2009 11:58 UTC (Tue) by tialaramex (subscriber, #21167) [Link] (3 responses)

2D is just a special case of 3D with very boring matrices. The "2D acceleration" in older chips tended to mean acceleration of the Windows GDI functions, whereas at least "3D acceleration" is more generally applicable stuff.
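The "boring matrices" point can be illustrated with a toy sketch (plain Python, no real GPU involved, names are my own): a 2D translate-and-scale is just the kind of 4x4 homogeneous matrix a 3D pipeline multiplies by anyway, with the z terms left as identity.

```python
# Toy illustration (not real GPU code): a 2D transform expressed as the
# 4x4 homogeneous matrix a 3D pipeline would use. The z row/column is
# identity -- the "very boring" part.

def mat_vec(m, v):
    """Multiply a 4x4 matrix by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Scale x by 2, translate by (10, 5); z and w untouched.
transform = [
    [2.0, 0.0, 0.0, 10.0],
    [0.0, 1.0, 0.0,  5.0],
    [0.0, 0.0, 1.0,  0.0],  # boring z
    [0.0, 0.0, 0.0,  1.0],
]

point = [3.0, 4.0, 0.0, 1.0]      # a "2D" point: z = 0, w = 1
print(mat_vec(transform, point))  # -> [16.0, 9.0, 0.0, 1.0]
```

A 3D engine runs exactly this multiply per vertex, which is why running "2D" through the 3D pipeline costs nothing extra.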

AMD/ATI do make a chip with no 3D features, they sell it for inclusion in rack servers which they rightly theorise will be connected to an LCD panel for maybe 5 minutes if they're being installed by someone who is new to the business and doesn't have a reliable network based auto-installer yet. It has basically the same framebuffer setup and so on as their 3D chips, just no 3D. I would not be surprised to discover that this is actually just as expensive to make, and is done purely because 3D drivers are notoriously complicated, therefore unreliable and no-one wants their web server crashed with an Oops message saying the 3D rendering engine lost an interrupt. Not providing the hardware is a 100% effective way to prevent people installing unreliable 3D drivers on their server.

If you're able to convince business PC makers that these framebuffer-only chips are the Right Thing for corporate desktops, you may be onto something. But I suspect that with e.g. Excel and Powerpoint already taking advantage of the 3D acceleration, you won't get much traction.

3D overload

Posted Dec 3, 2009 20:38 UTC (Thu) by anton (subscriber, #25547) [Link] (2 responses)

We have older servers with the ATI Rage XL, and middle-aged servers with the ATI ES1000. I once ran X on the ES1000, and X recognized it as a Radeon 7000 or some such. We have free accelerated 3D drivers for the Radeon 7000. IIRC I even tried 3D, and it worked.

BTW, normally we run our servers in text mode, not for fear of getting an oops from the 3D driver (if the X server crashes, we don't care), but because we only need the console when we boot the thing, and when it does not react to the network. And then we want to see what the kernel said on the console, and X having blanked the screen or displaying anything but the console is not very helpful.

3D overload

Posted Dec 5, 2009 1:29 UTC (Sat) by tialaramex (subscriber, #21167) [Link] (1 responses)

“I once ran X on the ES1000, and X recognized it as a Radeon 7000 or somesuch. We have free accelerated 3D drivers for the Radeon 7000. IIRC I even tried 3D, and it worked.”

I guess everybody has their memory play tricks on them sometimes. The ES1000 has no 3D capability. If you don't believe me you might read ATI's specifications for it, or the source of the free driver you're talking about.

3D overload

Posted Dec 16, 2009 19:03 UTC (Wed) by daenzer (subscriber, #7050) [Link]

> The ES1000 has no 3D capability.

Actually, at least initially the 3D hardware was there, just not validated during production (probably that's cheaper than actually removing the hardware). So with luck it might work at least to some degree, but if it breaks you get to keep both pieces. The X.Org radeon driver currently disables all functionality using 3D hardware by default on these cards but it can be enabled via xorg.conf options for giggles (the extremely low video memory bandwidth probably precludes any non-trivial 3D usage anyway).

3D overload

Posted Dec 1, 2009 23:35 UTC (Tue) by bojan (subscriber, #14302) [Link] (2 responses)

> but I find all this emphasis on 3D pretty annoying, a waste of resources

90% or more of silicon real estate of graphics chips these days is dedicated to 3D. Using that to display 2D is actually faster and works better than using the 2D hardware. This whole thing is not about 3D effects, 3D for CAD or anything like that. It is about using the hardware so that the regular desktop works smoothly.

For instance, I use compiz but I do not enable any of the fancy 3D effects (e.g. wobbly windows, workspaces on the cube), because they are annoying to me. However, windows move better around the screen with compiz (because it's using 3D hardware to draw them), zooming is better, small objects are displayed more precisely (just look at the workspace switcher), scrolling is faster, CPU utilisation is lower etc.

3D overload

Posted Dec 2, 2009 19:34 UTC (Wed) by drag (guest, #31333) [Link] (1 responses)

From what I understand, for video cards still containing 2D acceleration, those 2D acceleration processors are little more than copy-and-paste versions of the processors developed over a decade ago, when people still cared about 2D performance.

Once popular gaming switched gears from 2D side-scrollers to 3D shoot-em-ups, all development on 2D processor cores halted completely for mainstream systems. I figure all advancements in graphics and rendering acceleration have occurred on the '3D' side of the hardware since around 1998 or so.

3D overload

Posted Dec 3, 2009 11:19 UTC (Thu) by nix (subscriber, #2304) [Link]

There's another reason why graphics card vendors only pay attention to 3D performance: even with old cards like the Radeon r100 series, 2D performance is limited by memory bandwidth these days, not by GPU speed. Screen updates are pretty much instantaneous as long as the CPU can get the data to the card fast enough. Graphics card vendors are paying attention to 3D because there's nothing left for them to do in the 2D arena.
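The bandwidth claim is easy to sanity-check with back-of-the-envelope numbers (the figures below are illustrative assumptions, not measurements): redrawing a full 32-bit 1920x1080 screen 60 times a second moves only about half a gigabyte per second, a small fraction of even an old card's memory bandwidth.

```python
# Back-of-the-envelope: how much memory traffic does plain 2D need?
# All figures here are assumptions for illustration.

width, height = 1920, 1080   # full-HD framebuffer
bytes_per_pixel = 4          # 32-bit color
refresh_hz = 60              # redraw the whole screen every frame

traffic = width * height * bytes_per_pixel * refresh_hz
print(f"{traffic / 1e9:.2f} GB/s")  # ~0.50 GB/s
```

Even a circa-2000 card with a few GB/s of memory bandwidth has headroom to spare for that, which is why there is little left to optimize on the 2D side.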

3D overload

Posted Dec 6, 2009 1:23 UTC (Sun) by numasan (guest, #62353) [Link]

"Who cares about 3D? "

Well, I and a whole industry do. I guess you don't watch "blockbuster" movies if you feel nauseated by Compiz, but over 90% of all visual effects in big-budget movies are created (not just rendered) on Linux, and great OpenGL acceleration is very much needed. Right now Nvidia is king with their proprietary driver, and will be for some time yet.

As others have stated, 3D is not only for games and Compiz (which I don't use). Perhaps you could lump CAD and DCC together, but with a modern graphics card and Blender you have the tools to create without needing exclusively high-end hardware. Unfortunately the open source stack is not yet capable or stable enough for this task. I personally hope that 3D will get a lot more attention, especially for serious work. What good is a "multi-teraFLOP" GPU when it can only drive wobbly windows?

About Fedora 12: we are currently running F10 on our graphics workstations, albeit very customized. I'm planning to evaluate F12 soonish but have thought about looking for another distro... F10 is working decently for us though.

Between Fedora 12 and 13

Posted Dec 1, 2009 8:29 UTC (Tue) by roblucid (guest, #48964) [Link] (1 responses)

> Video cards no longer support 2D acceleration in any way shape or manner.
> No 2D engines. On the newest ATI cards it's only done through firmware
> emulation, and that will end pretty quickly in itself once ATI stops
> having to give a shit about XP support.

> All 'acceleration' will actually be done through the '3D pipelines' which
> is slowly turning into little more then a pure software solution that is
> optimized to run your CPU and your GPU. The GPU being nothing more then
> dozens or even hundreds of little 'micro processor cores' that are
> programmable in how they are accessed and what they are used for.

But Windows 7 has, as a new feature, 2D acceleration for GDI graphics as XP had. That'll have to be supported for a while, you know. They have now implemented a new accelerated framework for drawing text, and plan to add support for accelerated text to IE9 to improve scrolling.

Why can't a generalised GPU accelerate 2D? In general, users don't give a monkey's whether it's done in pure hardware or hardware + software; it just needs to do the drawing primitives fast and offload stuff from main memory. The 3D solutions today are basically hardware + software on the cards, so accelerating 2D ought to be very doable.

Between Fedora 12 and 13

Posted Dec 1, 2009 17:59 UTC (Tue) by drag (guest, #31333) [Link]

You're right, of course.

Modern (or at least future) video cards accelerate 2D operations through the 3D pipelines. From the programmer's perspective they can still be programming GDI or EXA or XRender, but from the driver's perspective you'll still need to support 3D operations and have a unified memory management scheme for all the different APIs you want to support.

What I was talking about is merely that the 2D processor engines you traditionally targeted through 2D drivers like 'nv' or other X.org DDX stuff are going away.

Between Fedora 12 and 13

Posted Dec 1, 2009 9:06 UTC (Tue) by luya (subscriber, #50741) [Link] (4 responses)

"Video cards no longer support 2D acceleration in any way shape or manner. No 2D engines. On the newest ATI cards it's only done through firmware emulation, and that will end pretty quickly in itself once ATI stops having to give a shit about XP support. "

Depends on the video card, because the PowerVR series (found in the majority of mobile devices, including the Apple iPhone and Nokia N900) supports 2D acceleration due to its tile-based architecture.

Between Fedora 12 and 13

Posted Dec 1, 2009 13:18 UTC (Tue) by nix (subscriber, #2304) [Link] (1 responses)

Um, just about every graphics card out there has some sort of tiling (it's useful for locality-of-reference because it means that things close in space are close in memory). That doesn't mean it necessarily supports any 2D acceleration to speak of.

Between Fedora 12 and 13

Posted Dec 3, 2009 9:09 UTC (Thu) by luya (subscriber, #50741) [Link]

Except the majority of modern video cards do not use the deferred rendering method (i.e. rendering only visible pixels); they use immediate mode rendering. Deferred rendering reduces memory usage and can save power as well, which explains why the PowerVR chipset has become the de facto standard in the mobile world. Check out: http://www.imgtec.com/powervr/powervr-technology.asp

The video card of the old Dreamcast used the same method, being the predecessor of those chipsets.

Between Fedora 12 and 13

Posted Dec 1, 2009 18:15 UTC (Tue) by drag (guest, #31333) [Link] (1 responses)

The stuff for embedded systems is going to be trailing PC-based systems by a few years.

The PowerVR situation is one of the major things that sucks about Linux on ARM. With Linux on x86 you can get open source 3D acceleration through Intel and, in a way, through ATI pretty easily. But to do that on ARM still requires very proprietary drivers, as far as I know.

Between Fedora 12 and 13

Posted Dec 3, 2009 9:16 UTC (Thu) by luya (subscriber, #50741) [Link]

Maybe somebody should bring back the old Kyro (PowerVR Series 3) module that was removed from kernel 2.6.

Between Fedora 12 and 13

Posted Dec 4, 2009 17:35 UTC (Fri) by leoc (guest, #39773) [Link] (2 responses)

I am no expert, but it seems to me that if/when the graphics functions do move into the CPU, it would by necessity have to be much more open than the situation we have now.

For example, how could ATI/AMD keep its video technology a "secret" without hindering the CPU as a result? Would you have to load a "proprietary blob" to be able to run anything on it under Linux?

Between Fedora 12 and 13

Posted Dec 4, 2009 22:39 UTC (Fri) by nix (subscriber, #2304) [Link] (1 responses)

Um, CPUs already contain secret proprietary magic uploadable binary blobs.
Neither AMD nor Intel publicise the format of their CPU microcode.

Between Fedora 12 and 13

Posted Dec 11, 2009 21:21 UTC (Fri) by leoc (guest, #39773) [Link]

Yes, but once they are loaded you have access to the instruction set of the CPU, unlike say with NVIDIA video cards that hide everything behind the closed driver.

Between Fedora 12 and 13

Posted Dec 4, 2009 21:31 UTC (Fri) by Tet (subscriber, #5433) [Link] (4 responses)

Dave was nice enough to lay out what the priorities are for Red Hat staffed man-hours when it comes to video driver development work

Yes. But that is also somewhat worrying, given that I reported a complete failure a month ago (as in X is completely unusable -- it crashes reliably at startup), and nothing's come of that yet. Can they really have that many other critical X bugs that they haven't been able to look at it in a month?

Between Fedora 12 and 13

Posted Dec 4, 2009 21:49 UTC (Fri) by jspaleta (subscriber, #50639) [Link] (1 responses)

bug number?

-jef

Between Fedora 12 and 13

Posted Dec 9, 2009 18:28 UTC (Wed) by Tet (subscriber, #5433) [Link]

Bug 533030

Between Fedora 12 and 13

Posted Dec 13, 2009 18:02 UTC (Sun) by AdamW (subscriber, #48457) [Link]

"Can they really have that many other critical X bugs that they haven't been able to look at it in a month?"

Yes.

http://bugz.fedoraproject.org/xorg-x11-drv-intel
http://bugz.fedoraproject.org/xorg-x11-drv-ati
http://bugz.fedoraproject.org/xorg-x11-drv-nouveau

Graphics development is hard. This is nothing specific to Fedora, all distros have this many graphics bugs, more or less.

Between Fedora 12 and 13

Posted Dec 13, 2009 18:03 UTC (Sun) by AdamW (subscriber, #48457) [Link]

Oh, and we did 'take a look at it': Ben Skeggs is RH/Fedora's nouveau developer. If you can't get logs, there's nothing we can do to tell what's going wrong. You could boot to runlevel 3 and try startx manually.

Between Fedora 12 and 13

Posted Dec 13, 2009 18:00 UTC (Sun) by AdamW (subscriber, #48457) [Link]

Jef: you'll note that 'enough 3D for actually-important things like composited window managers' is in the priority list. It's the highest priority besides 'getting the card working at all'. The future of GNOME is gnome-shell, so RH needs at least that much '3D' functionality to work on all major cards. That's the main reason we hired Ben Skeggs.

And...heh, Canonical. That was a good one.

Between Fedora 12 and 13

Posted Dec 1, 2009 10:48 UTC (Tue) by armijn (subscriber, #3653) [Link] (2 responses)

If you would have done a fresh install (like I did) you would have noticed that the network actually is disabled after installation. Sigh.

Between Fedora 12 and 13

Posted Dec 1, 2009 11:12 UTC (Tue) by michich (guest, #17902) [Link] (1 responses)

What medium did you install from? The DVD? I believe not enabling network interfaces by default is the intended behaviour for this type of installation.

Between Fedora 12 and 13

Posted Dec 1, 2009 13:58 UTC (Tue) by armijn (subscriber, #3653) [Link]

I did indeed install from the DVD. I just wanted to point out that corbet was wrong to claim it had been fixed (he explicitly mentioned not having to burn DVDs, and sees it as a bug).

Two Python versions

Posted Dec 1, 2009 19:25 UTC (Tue) by brouhaha (subscriber, #1698) [Link] (3 responses)

The current plan is to package Python 3 in a way that allows it to be installed alongside Python 2.6 without interference between the two
Why is it feasible to do this now, when the maintainers said that it was completely infeasible to package both Python 2.4 and 2.5 without interference when 2.5 appeared?

Two Python versions

Posted Dec 1, 2009 23:53 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link]

It wasn't impossible. It was just difficult. Python 2 and Python 3 are going to have to remain in parallel as incompatible languages for a long time to come, and this made it feasible for someone to volunteer to get the work done. It is a cost vs. benefit equation.

Two Python versions

Posted Dec 2, 2009 19:10 UTC (Wed) by nevyn (guest, #33129) [Link] (1 responses)

I'm still not convinced it's workable, and have said so in the BZ (I'm the previous python Fedora maintainer). But the new python maintainer disagrees, it's his problem so he gets to decide his way.

However there are also a couple of "advantages" this time around:

1. They are _just_ doing python3 for F13, no extra modules. This cuts both ways, it won't be useful for much (IMO) ... but it also shouldn't break much.

2. python3 is _vastly_ different to python2, so people won't mind as much if Fedora ends up treating them as two separate languages that happen to end/start in close proximity.

3. The python maintainer is doing it full time, so if the world blows up he might have the time to fix everything.
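The "vastly different" point is visible even in trivial code. A small sketch (run under Python 3; examples chosen by me, not from the thread) of two behavior changes that make the languages parallel rather than upgradable in place:

```python
# Two of the many Python 2 -> 3 behavior changes that force treating
# them as separate, parallel-installed languages.

# 1. Integer division: 1/2 was 0 in Python 2, is 0.5 in Python 3.
assert 1 / 2 == 0.5
assert 1 // 2 == 0      # the old truncating behavior now needs //

# 2. Text vs. bytes: Python 3 strings are unicode, and mixing them
#    with bytes is an error instead of a silent coercion.
try:
    "abc" + b"def"      # Python 2 would have coerced this
except TypeError:
    print("str + bytes raises TypeError on Python 3")
```

Since such code can silently change meaning rather than merely fail, a distro cannot simply swap one interpreter for the other, which is the whole argument for shipping both side by side.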

Two Python versions

Posted Dec 13, 2009 18:06 UTC (Sun) by AdamW (subscriber, #48457) [Link]

"I'm still not convinced it's workable"

That's odd, seeing as how Mandriva had python 2.4 and python 2.5 parallel installed for multiple releases. Hell, it _still_ has python 2.4 available in Mandriva 2010, alongside 2.6. I'm not aware of it ever having blown up the world in all that time.

Upgrading vs. Fresh Install... and graphics issues

Posted Dec 1, 2009 19:59 UTC (Tue) by dowdle (subscriber, #659) [Link] (3 responses)

I started writing a review of Fedora 12 a while ago but put it on the back burner as things came up... thinking that the longer I wait to finish it, the more time I will have had with it... and the more complete a review I can do.

I don't really recommend upgrading to anyone... except under certain conditions. On servers where the package count is fairly low and the possibility of third-party addon packages is low, upgrading has been painless for me for the last 5 or 6 releases I've been doing them.

On desktops where there is a large number of packages as well as a greater potential for third-party packages to be installed (think RPM Fusion for certain verboten media codecs and apps)... I don't upgrade.

But Scott, aren't you crazy to do a fresh install rather than upgrade? Umm, no. I build my own remix each release... which ends up being a 1.9GB LiveDVD. It has all of the software I want installed, and I update it periodically so it has all of the updates. I keep my /home directory (which is encrypted on my laptop/netbook machines) on a separate partition, and it preserves all of the data I care about between releases... like a backup of /etc and /root... and anything else desired.

Sure, it takes a while to prepare the build environment by downloading everything and then actually building it... but Fedora has made it fairly easy and pain-free, and it has worked well for me the last few releases. And all of that work can be done while my upgrade target machines continue to run. I do the building inside a KVM virtual machine so it doesn't disrupt the host machine much at all.

Once I have the LiveDVD ISO, I turn it into a LiveUSB with Fedora's livecd-iso-to-disk script. Oddly, a fresh install from LiveUSB media takes between 5 and 10 minutes depending on the speed of the hardware. That is a *LOT* faster than an upgrade (even if you use preupgrade) and everything works... and there is (almost) no cruft left behind from one or more previous releases to confuse newer packages. Granted, changes in desktop environment releases and in individual apps that look for settings in your home directory can accumulate some cruft. BUT, given that Fedora does so much updating during its short lifecycle, keeping my target machines updated means they have much the same versions as the new release (sometimes slightly older and sometimes even newer, depending on how long I waited to do the install), so cruft building up in personal settings is less likely. Oh, I said faster, right? :) How much faster? On machines with a ton of packages... I can do 5-8 fresh installs faster than I can upgrade a single machine.

For those who don't have a separate /home, or who have a really good reason for doing an upgrade (not sure there are any, but "just because you can" is always there)... go for it... but how well the upgrade works isn't the only thing to care about in a new release. While Fedora continues to try to improve the upgrade experience, I've just gotten used to doing it this way.

I don't recommend everyone do it my way... especially if you only have one or two machines to upgrade... but if you have 5 or more and you want the challenge and fun... why not try making your own remix with the stuff you want there and upgraded? It makes new machine installs and setups take like 5 minutes post install.

Now having said all of that... I rarely disagree with Mr. Jon Corbet, but his discounting all of the new features of a new release and concentrating completely on the upgrade experience seems silly... unless he was just trying to make his review different from others. Umm, excuse me, new features do count... and in Fedora 12 there are quite a few worth noting. I'll not bother to note them here because this comment has already become a chapter.

Regarding the graphics issues... I'm guessing that while there have been some regressions... those are probably fairly common among all distros based on the same package versions Fedora 12 uses. There was a lot of noise regarding graphics card issues with Ubuntu 9.10. I have no solid data as proof, but my personal experience has been that Fedora 12 provides many more graphics improvements, for a wider range of hardware, than it does regressions. I tested it on about 10 different computer models (7 were Dells) and the graphics worked as well as or better than in previous releases... supporting more features. That includes my Acer Aspire One D150 netbook. I guess I was just lucky, given the number of computers I tried it on. My point here is that the graphics issues and/or regressions are less a Fedora-specific issue and more a general upstream issue that affects many distros.

If someone wants to switch from Fedora to something else, by all means, go for it... have fun... but for me I'm sticking with Fedora... on my personal desktops anyway. I'll just be quiet about servers for now. :)

Upgrading vs. Fresh Install... and graphics issues

Posted Dec 2, 2009 11:04 UTC (Wed) by k3ninho (subscriber, #50375) [Link] (2 responses)

Hey Scott,

That's a really interesting process for robustly upgrading between releases of Fedora -- have you scripted it and could you share the scripts?

(Thought rolls through barren wasteland of head: if you back up /etc and /root, can you also test to see if /home is on a separate partition and, if not, back that up too?)

K3n.

Upgrading vs. Fresh Install... and graphics issues

Posted Dec 2, 2009 15:07 UTC (Wed) by dowdle (subscriber, #659) [Link]

No I haven't scripted it. I'm not a huge fan of automation which is odd since I'm a sysadmin. Repetition is about the only way I can remember multi-step processes these days. :)

Upgrading vs. Fresh Install... and graphics issues

Posted Dec 3, 2009 20:38 UTC (Thu) by nicku (subscriber, #777) [Link]

I have always used an online upgrade; yum upgrade works for me.

I was surprised to find that a new $A75 Radeon HD4550 card works with 3D out of the box with no xorg.conf and dual screens, one CRT, the other LCD.

Between Fedora 12 and 13

Posted Dec 3, 2009 6:46 UTC (Thu) by mezcalero (subscriber, #45103) [Link] (5 responses)

Wow, a piece about the quality/bugginess of a Fedora release, and PulseAudio is mentioned neither in the article itself nor in any of the pages linked?

Seems my powers to piss off people have weakened!

/me thinks about additional adventurous ways to break audio on F13.

Between Fedora 12 and 13

Posted Dec 3, 2009 11:24 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

They call it mature software :) It pretty much works for everyone now.

(btw I installed Fedora 12 and PulseAudio ate my dog and burned my house down. It's all your fault, esound would never have done any of these things.)

Between Fedora 12 and 13

Posted Dec 4, 2009 21:27 UTC (Fri) by Tet (subscriber, #5433) [Link] (1 responses)

it pretty much works for everyone now.

Surely you jest. I often feel that pulseaudio has an undeserved reputation for being problematic. However, at the same time, I can't remember the last time I saw a modern Linux install have working sound right off the bat. If nothing else, the sound is usually muted at boot, and you can no longer sanely get to a mixer to fix that.

Between Fedora 12 and 13

Posted Dec 13, 2009 18:08 UTC (Sun) by AdamW (subscriber, #48457) [Link]

"If nothing else, the sound is usually muted at boot, and you can no longer sanely get to a mixer to fix that. "

Eh? What? There's one on the panel like always. If you right-click it, there's a 'mute' checkbox. Hard to think how that could get much easier. gnome-volume-control is right there in the menus like it's always been. Sure, it's a rather different app since F11, but it still has a mute button. Not quite sure what you're talking about.

But yeah, audio is often muted at boot. I filed a bug for my case and Lennart looked into it a bit, then it seemed to magically stop happening while we were debugging it, but it happens again with F12 final live images. Oh well. It's not hard to unmute it, once.

Between Fedora 12 and 13

Posted Dec 3, 2009 12:01 UTC (Thu) by kragil (guest, #34373) [Link] (1 responses)

PulseAudio has reached a stage comparable to when Duke Nukem Forever got the lifetime achievement award for vaporware. Everybody knows the situation sucks, but everyone is tired of repeating what has already been said a million times.
Judging from the history of Duke, PulseAudio will be in the spotlight again. My guess is that a good working PulseAudio (for everybody) will always be right around the corner, right up until the thing is dropped and something better/simpler/more elegant comes along.

Between Fedora 12 and 13

Posted Dec 3, 2009 20:43 UTC (Thu) by sbergman27 (guest, #10767) [Link]

It would be a mistake to rush such an important component of the Linux desktop. The major philosophical points are well covered here:

http://www.gamespot.com/pc/action/dukenukemforever/news.h...

Between Fedora 12 and 13

Posted Dec 10, 2009 15:03 UTC (Thu) by Xnux (guest, #62436) [Link] (10 responses)

There is only one reason a Fedora update would interest me: if they finally removed all of the binary blobs from the Linux kernel and replaced it with Linux-libre. I have never found a good, actively updated, 100% free distro (gNewSense isn't updated at all). If Fedora is committed to free software, they should make the change.

Between Fedora 12 and 13

Posted Dec 11, 2009 6:27 UTC (Fri) by rahulsundaram (subscriber, #21946) [Link] (9 responses)

Between Fedora 12 and 13

Posted Dec 12, 2009 4:31 UTC (Sat) by Xnux (guest, #62436) [Link] (8 responses)

I am aware of Fedora's licensing policies for binary firmware. Still, I don't believe that Fedora is being stringent enough. Even if a piece of firmware is redistributable, the community is at the mercy of the firmware copyright holder to provide updates, which probably won't happen at the rapid rate of Fedora's open source programs.

I understand that firmware helps make the distro available on more hardware, but I don't want that firmware in the kernel by default. The Freed-ora project (related to Linux-libre) provides such a firmware-less kernel for Fedora, but I don't have nearly enough technical expertise to bundle it with Fedora myself. That's why I hope the Fedora developers work on phasing out binary blobs by default.

Between Fedora 12 and 13

Posted Dec 12, 2009 19:16 UTC (Sat) by dlang (guest, #313) [Link] (3 responses)

firmware needs to be updated at the rate of hardware changes/bugs, not at the rate of OS/application development.

the firmware provides an interface that the OS then uses to manipulate the hardware. If the hardware doesn't change, there isn't much need for the firmware to change.

remember that if a particular card doesn't have a firmware blob in the kernel, that doesn't mean that there isn't firmware, it just means that the firmware is in flash or ROM on the card, which is even slower to update.

so why are you willing to use a device that you can't update over one where you (or your linux distro) can pick which version of released firmware you are going to run on the device?

Between Fedora 12 and 13

Posted Dec 13, 2009 0:55 UTC (Sun) by Xnux (guest, #62436) [Link] (2 responses)

My knowledge of how firmware works isn't that great, but I'm not arguing that we shouldn't have binary firmware. I'm saying that all firmware should be made open source. That way, anybody in the Fedora community could contribute source code, not just the copyright holders. Even if the copyright holders do contribute patches to firmware now, that doesn't mean the code will be high quality. Look at nVidia's nv driver--it is mediocre at best, and doesn't even provide 3D support.

I suppose I was under the assumption that blob = proprietary, but maybe I'm wrong. In any case, Fedora should not be content with redistributable-but-closed-source firmware--we need to work on providing open source firmware for different computer hardware as quickly as possible.

Between Fedora 12 and 13

Posted Dec 13, 2009 3:08 UTC (Sun) by dlang (guest, #313) [Link] (1 responses)

while I would love to see open-sourced firmware, I really don't understand why people try to get firmware blobs removed from the kernel.

non-trivial hardware will not operate without firmware, period.

that firmware may be in ROM on the chip.

it may be in flash on the card, requiring special hardware to modify.

it may be in flash on the card, replaceable through the driver or other software when plugged in normally.

it may be loaded at startup time by the driver.

in all four cases it can be a binary blob that I cannot modify.

in the fourth case I at least have the option of selecting which firmware blob (and therefore which feature/API set the vendor offers) I want to use. It is the most free of the four options.

yes, it would be even better if it were open source with full internal documentation of how the device was put together, but while that is something to strive for, I don't see the sense in arguing that a device whose firmware is installed by the driver is worse than one with the same firmware in flash that requires a Windows-only program to update.

Between Fedora 12 and 13

Posted Dec 13, 2009 19:46 UTC (Sun) by Xnux (guest, #62436) [Link]

I'm not suggesting that we just drop all non-free firmware and ship that to everyone. Obviously, that would cause a huge amount of hardware to fail. I want a version of Fedora with no non-free firmware mostly for my own purposes: I have hardware that can run on free firmware alone, and I do not want to download proprietary firmware onto the CD that will install Fedora only to avoid using said firmware.

Projects like this already exist (e.g., gNewSense, Trisquel, BLAG, Freed-ora, Freed-ebian, etc.), but they are usually woefully behind the current releases of the distributions they are based on (usually Debian/Ubuntu or Fedora). I want a distribution that combines Fedora's recent software packages with gNewSense's 100% free software, without having to settle for an out-of-date distribution (for example, gNewSense is still based on Ubuntu LTS, which is 8.04 Hardy Heron).

Between Fedora 12 and 13

Posted Dec 13, 2009 9:37 UTC (Sun) by rahulsundaram (subscriber, #21946) [Link] (3 responses)

Fedora's licensing policies are far stricter than those of other mainstream distributions, and the firmware policy is called an "Exception" for good reason. The Fedora Project will continue to replace firmware with more free equivalents whenever possible. We were, AFAIK, the first distribution to include the free and open source reverse-engineered Broadcom firmware by default.

I am not sure what you want to do with alternative kernels, but building such an image is fairly easy.

http://fedoraproject.org/wiki/How_to_create_and_use_a_Liv...

If you need further help, you are free to post to the Fedora list or even contact me directly.

Between Fedora 12 and 13

Posted Dec 13, 2009 19:59 UTC (Sun) by Xnux (guest, #62436) [Link] (2 responses)

I suppose what I am interested in doing is creating a version of Fedora that replaces the default Linux kernel with the Freed-ora version of the Linux-libre kernel (which can be found at http://www.fsfla.org/download/linux-libre/freed-ora/F-12/) and removes any software specifically affected by patents (e.g., Mono and its dependencies, MP3 playback, DVD CSS, etc.). That way, when I actually burn this custom distro to a Live CD, it does not contain any non-free or patent-encumbered software.

  • How easy would it be for an intermediate Linux user like myself to do this?
  • When a new version of Linux-libre comes out (there is a new version for each Fedora release), would I have to manually update the kernel each time, or can I configure Software Update to do this?
  • Is there any way I can convince the Fedora team to maintain a 100% free Fedora version à la Gobuntu? I know that is a lot to ask, but it seems like that would be a lot easier than hacking Fedora myself, plus it would make a lot of disgruntled gNewSense/Trisquel/BLAG users happy.

Thank you for your help.

Between Fedora 12 and 13

Posted Dec 14, 2009 0:53 UTC (Mon) by vonbrand (guest, #4458) [Link]

Since there are probably (idiotic) software patents covering everything from linked lists to writing "Hello, world!", there just isn't a viable Linux distribution (or any other operating system, for that matter) that isn't patent encumbered.

Between Fedora 12 and 13

Posted Dec 14, 2009 5:26 UTC (Mon) by rahulsundaram (subscriber, #21946) [Link]

It is fairly easy. As an example of a custom Fedora Remix, feel free to take a look at

http://omega.dgplug.org/11/Live/i686/Omega-11-i686-Live.ks

Since the alternative kernels you are talking about are part of a repository, you can simply point to it within a kickstart file. Third-party repositories usually ship their repository definitions as part of a foo-release package that needs to be added to the kickstart file, and the software updater will then be able to pick up updates easily. Gobuntu doesn't actually exist anymore, by the way, and the Fedora Project is unlikely to be interested in maintaining any kernel variants.
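As a rough illustration of what such a kickstart file could look like, here is a hypothetical fragment; the %include path, repository URL, and package names are assumptions for illustration, not taken from the Omega remix linked above:

```
# Start from the standard Fedora live base configuration
# (path is illustrative; it comes from the spin-kickstarts package).
%include /usr/share/spin-kickstarts/fedora-live-base.ks

# Point at the third-party repository carrying the alternative kernel.
repo --name=freed-ora --baseurl=http://example.org/freed-ora/f12/

%packages
# Swap the stock kernel for the repository's variant
# ("kernel-libre" here is a placeholder name).
-kernel
kernel-libre
%end
```

The image would then be built from that file with the livecd-creator tool, along the lines of `livecd-creator --config=my-remix.ks`.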


Copyright © 2009, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds