RHEL 5.4 released
Posted Sep 2, 2009 23:09 UTC (Wed) by Ed_L. (guest, #24287)
In reply to: RHEL 5.4 released by jgg
Parent article: RHEL 5.4 released
> 2.6.30 runs circles around the RH kernel on certain benchmarks, the RH kernel is no longer a good representation of Linux.

Let's qualify that: 2.6.30 runs circles around the RH 2.6.18 kernel on certain benchmarks, on the machines it runs on; the RH kernel is no longer a good representation of Linux unless your particular workstation won't boot anything newer.
Three and a half years ago I built myself a new Opteron workstation based on the latest lower-power AMD/ULi 1575 chipset in a really nice Abit AT8 motherboard. It has served me very well. But about four months after my purchase, Nvidia bought ULi and put their AMD-compatible southbridges on the spike. So my AT8, while very reliable and cool-running, is a bit of an orphan. It initially ran Fedora 6, then CentOS 5.0 when that became available, that being the distro used at the shop where I then worked. A year ago I moved the machine to my home office and again involved myself with Fedora.
Fedora 10 consistently hosed my md raid.
Fedora 11 (2.6.29), when it boots at all, consistently hoses my md raid. And since 2.6.29's radeon driver doesn't appear to do dual-monitor spanning, there's little point anyway. Neither does it host paravirtual Xen, which is also a requirement.
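After a suspect kernel has had its hands on the array, it's worth scrubbing it before trusting it with anything again. A minimal sketch of one way to do that (the device name /dev/md0 is an assumption, not from my actual setup; adjust for yours):

```shell
# Hedged sketch: sanity-check an md array after a suspect kernel has
# mounted it. /dev/md0 is a hypothetical device name; substitute your own.
check_md() {
    md=${1:-/dev/md0}
    name=$(basename "$md")
    if [ -b "$md" ] && command -v mdadm >/dev/null 2>&1; then
        # Show State:, active/failed member counts, etc.
        mdadm --detail "$md"
        # Kick off a mirror/parity scrub of the whole array.
        echo check > "/sys/block/$name/md/sync_action"
        # After the scrub finishes, a non-zero count here means the
        # mirror halves (or parity) disagree -- i.e. the array was hosed.
        cat "/sys/block/$name/md/mismatch_cnt"
    else
        echo "no md array at $md (or mdadm missing); nothing to check"
    fi
}

check_md /dev/md0
```

A quick `cat /proc/mdstat` also shows at a glance whether any member has been kicked out (`[U_]` instead of `[UU]` for a two-disk raid 1).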
Fedora 12 (Rawhide, 2.6.31), when it boots at all, appears to do dual monitors with radeon just fine, and supposedly will host paravirtual Xen as well, so I'm optimistically in the process of preparing a bug report against its boot process. Once I can get 2.6.31 to boot reliably from an IDE disk, I can start to worry about whether it plays well with the ULi SATA controller. Then onward to Xen.
Lotsa testing. Meantime I'm writing this, and continuing development, from the machine's main CentOS 5.3 (RH 2.6.18-128.7.1.el5xen) md raid 1 partition, which also hosts a CentOS 4.6 Xen client for development/support compilation, all of which works Just Fine, and I'm very glad to have it.
Kudos to those responsible.
