LWN: Comments on "Something going on with Fedora" https://lwn.net/Articles/294188/ This is a special feed containing comments posted to the individual LWN article titled "Something going on with Fedora". en-us Sat, 11 Oct 2025 15:02:29 +0000 Sat, 11 Oct 2025 15:02:29 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net awful idea https://lwn.net/Articles/295099/ https://lwn.net/Articles/295099/ surfingatwork <div class="FormattedComment"><pre> Not sure where to start. You mean like on Windows where user is responsible for installing and finding security bugfixes for each software package individually. Or that the software packages each have their own updater/downloader mechanisms? Maybe that's not what you mean. If it's a one-off then there can be a repository set up for a single software package which is something companies do. Then it integrates with the system tools. </pre></div> Fri, 22 Aug 2008 00:34:26 +0000 It will work somewhat like that... https://lwn.net/Articles/294423/ https://lwn.net/Articles/294423/ NAR <I>Nothing Linus or anyone else working on 2.6 could have done would have made proprietary drivers stop being third rate.</I> <P> Except not changing the internal interfaces every other week... <P> <I>the mount point for detected media is configurable by your distribution or by you</I> <P> Surely. The problem is that I've never found where it could be configured. Also I used to have a /dev/dvd link that was lost somewhere between upgrading from Gutsy to Hardy (which was a bad decision). I have a feeling that distributions tend to make first time installation working fine, but they still have problems with upgrades. I'm pretty sure that upgrading from Windows XP to Windows Vista is also quite painful, but while Windows users need to update only every 3-4 years, Linux users have to update much more often. <P> <I>all your hardware was really well supported in 2.4 then you'll notice less improvement from 2.6</I> <P> Actually 2.4 supported my hardware at that time better than 2.6 supports my hardware now. And it's not just graphics card - xawtv used to be able to control the volume, but not it just doesn't work. It's annoying enough that I'm using Windows more and more at home. Mon, 18 Aug 2008 14:45:18 +0000 It will work somewhat like that... https://lwn.net/Articles/294351/ https://lwn.net/Articles/294351/ tialaramex <div class="FormattedComment"><pre> Three of your complaints seem to be based on running a third party proprietary video driver. • My free software driver detects the supported resolutions of connected displays at runtime without any configuration. It works for my five year old LCD panel, my mother's old CRT, her new widescreen panel, the projector at work, and so on. So X.org gets this right, but obviously your proprietary driver has the option to screw it up • Replacing the only connected display in a single set shouldn't require a reboot either. Detecting the change needs an interrupt, the proprietary driver ought to use this interrupt to initiate the necessary reconfiguration. Alternately you could bind the change to a keypress (my laptop has a button which seems labelled for this exact purpose). • Suspend to disk is most commonly blocked by video drivers that can't actually restore the state of the graphics system after power is restored. This is excusable when the driver has been reverse engineered despite opposition from the hardware maker (e.g. 
Nouveau) but seems pretty incompetent if it happens in a supposedly "supported" proprietary driver from the company that designed the hardware.

Nothing Linus or anyone else working on 2.6 could have done would have made proprietary drivers stop being third rate. If you go look at Microsoft's hardware vendor relationships you'll see they have the same exact problem, and they have to endlessly threaten and bribe vendors to get them to produce code that's even halfway decent.

As to the other comments... the mount point for detected media is configurable by your distribution or by you (the administrator), so if you're sure you'd like CDROMs mounted in /cdrom it's not difficult to arrange for that, and still keep the auto-mounting (it's also not difficult to disable the auto-mounting if you just don't like that). Newer 2.6 kernels also support (but your hardware may well not) auto-detecting inserted or removed CDs/DVDs without needing to poll the drive. Surely even if you want the mount point to be /cdrom, it's convenient that with 2.6 + udev any CD-ROM drive connected to your laptop (whether from the base station, via USB or whatever) gets a symlink called /dev/cdrom?

Of course if all your hardware was really well supported in 2.4 then you'll notice less improvement from 2.6. Infrastructure-wise it seems much nicer to me. Fewer hard-wired assumptions and more exposure of events to userspace.
</pre></div> Sun, 17 Aug 2008 12:48:23 +0000

It will work somewhat like that... https://lwn.net/Articles/294329/ strcmp <p> <em>Well, I haven't noticed better interactivity - the kernel might be better in this field, but it still takes a long time to start applications. What made a better desktop experience is the usage of multicore processors: if an application eats up 100% of CPU time, the rest of the system still works.</em></p><p> Starting applications looks like interactivity from a user's perspective, but for the kernel this counts as throughput: how long does it need to open all the files, read the data from disk (in the case of libraries these tend to be random reads, mainly determined by disk seek speed), parse the configuration data and set up the program. The interactivity drag talked about was scheduling threads when they are needed, i.e. no audio and video skips, fast reaction to mouse clicks. </p> Sat, 16 Aug 2008 23:39:26 +0000

Sorry, but it's just lies at this point https://lwn.net/Articles/294327/ njs <div class="FormattedComment"><pre>
I'm sorry you feel so much frustration.

<font class="QuotedText">&gt; How many of these are packaged by your distribution? Samhain, OSSEC, Integrit, AIDE, Tripwire (OSS version), Tiger</font>

I did take a quick look at this, though, and it looks like for Debian and Ubuntu the answer is: all of them except OSSEC. Additionally, the Tiger package appears to contain extensive enhancements to let it make use of the dpkg database to better validate installed files.

A quick google suggests[0] that the hold-up on integrating OSSEC is a combination of manpower, the fact that the upstream package is garbage (seriously, /var/ossec/etc, /var/ossec/bin?), and the fact that OSSEC is *not legal to redistribute*, because the authors don't understand that the GPL and OpenSSL licenses are incompatible. This is a rather nice example of how expertise in coding does not imply expertise in distribution. They're different skill-sets.

I see two changes you might be arguing for.
The first is that upstream authors should habitually make their own packages. As we see in the case of OSSEC -- and this is pretty much the universal opinion of anyone who's dealt with any sort of vendor-produced packages ever -- this is an AWFUL IDEA, because a huge percentage of upstream will give you garbage. So as a user, I insist on having some technical and legal gatekeeper between upstream and my machine. In fact, the possibility of getting such a gatekeeper is generally considered to be one of the major advantages of Linux over Windows.

The other thing you seem to argue is that okay, if we need a gatekeeper, there should still only be one of them -- systems should be similar enough that once one person has done this work, everyone can make use of it. Roughly, this comes down to saying "there should only be one distribution". Which, well, I guess I can see the argument... but frankly it doesn't matter how good the argument is, because as soon as you successfully got things down to one distribution, some jerk would ignore all your hard work and start another one, and there we go again. But maybe it helps to reflect that having multiple distributions also creates a lot of good to justify the bad -- it creates competition to drive development, it provides space for many different approaches to be explored (look at e.g. all the different init systems) before any single one is picked, etc.

Hope that helps.

[0] <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=361954">http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=361954</a>
</pre></div> Sat, 16 Aug 2008 23:23:55 +0000

Sorry, but it's just lies at this point https://lwn.net/Articles/294323/ drag <div class="FormattedComment"><pre>
<font class="QuotedText">&gt; Have you tried to do the same (compile software and its dependencies from scratch) under Windows? Try it some time. You'll probably need to find a few abandoned packages, buy some tools - and in the end get some non-working piece of software because your system included the wrong set of headers...</font>

The point is _I_Don't_have_to_. Not when I just want to run the software. Every major open source project I've seen that has Windows support supplies Windows users with executables and supplies Linux users with source tarballs, with only a few exceptions on the Linux side.

-------------------------------------------

I think you're misunderstanding me a bit: what I want to see is a video game like Planeshift (it's a decent Linux MMORPG which follows the ID Software approach, where the engine is GPL but the game content itself is open yet non-free) simply supplying two DEB (or whatever) packages, themselves, that outline the dependencies and versions they need. Then the various distributions just pull down that package. If there is something wrong, then they figure it out and send a patch back to upstream.

Right now the package makers working for the various distributions are a huge bottleneck in the distribution of new Linux software. This is because it requires such a huge manual brute-force approach. For some software this is the only way it's going to work, but the vast majority - something like 80%, I expect - don't need that level of dedication from the distributions. Even if upstream supplies it in deb-src format, and distributions use their servers to compile it, it's still worlds better than what we have now. Get it now?

Going back to video games, in Linux there is no way for the average Linux user to compile them.
The makers of them are forced to use all those nasty scripts and install dialogs to package them for Linux users. That's the only way they can get it to work. And like you said, they are full of nastiness like having to use scripts with LD_LIBRARY_PATH and statically compiled binaries and weirdness. These make the games much larger than they need to be and make it impossible for Linux users to use anything but the very latest games on the very latest distribution releases.

And beta testing? _FORGET_IT_. It's impossible. Even for the latest and greatest from Debian Unstable I need to do things like set up a chroot environment, because trying to install the software I need will obliterate any hope of having apt-get not break my system. And with that you're looking at _days_ of effort.

And on top of that, because of the way Linux users are utterly dependent on their distribution-supplied packages, there is no way the average Linux user will ever know about 90% of the games that are available for Linux unless they subscribe religiously to something like happypenguin.org. So what we have in Linux distributions is a handful of fairly decent FPS games, some smaller 'gnome' or 'kde' games, and older stuff that is packaged by the distribution maintainers, but they don't have the time or desire to track down every new release of every game out there.

And that's just games. I could go on and on for any type of software you want. How about host-based intrusion detection systems? I am working on evaluating that stuff for work, and due to various things that are 100% out of the control of myself, my bosses, and my entire company, having to compile things from source is not fun. It's certainly possible, but it adds lots of hoops. And entirely besides that, I don't like to have to install a large number of developer tools on production machines.

How many of these are packaged by your distribution?
Samhain
OSSEC
Integrit
AIDE
Tripwire (OSS version)
Tiger

They all seem to be good, solid, and stable pieces of software. And they don't have very complex dependencies or anything like that. Some are packaged for certain distributions, some are packaged for others, and some are packaged for none. There is no reason why they can't simply supply a couple of binaries in their own packages so I can install them and test them out - except that I can't, because distributions are the gatekeepers to what is installable on my system and they don't have the manpower to manage all of that on their own.

(OOOHHH.. that's right. I am using Linux, so I don't have to care about malware or rootkits, because the packages supplied by my distribution are the perfect way to provide my system with unbreakable security. Well, that solves that problem!)
</pre></div> Sat, 16 Aug 2008 21:10:07 +0000

It will work somewhat like that... https://lwn.net/Articles/294306/ NAR <I>It added a much better level of interactivity, better response.</I> <P> Well, I haven't noticed better interactivity - the kernel might be better in this field, but it still takes a long time to start applications. What made a better desktop experience is the usage of multicore processors: if an application eats up 100% of CPU time, the rest of the system still works. <P> <I>significantly better hardware detection and hotplug capabilities.</I> <P> I've changed my monitor recently from a CRT to an LCD. Windows detected it fine, but under Linux I had to edit xorg.conf manually. Not much of an improvement...
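<P> (For comparison, with a free driver and a RandR 1.2-capable X.org the same switch can usually be done at runtime with xrandr rather than by editing xorg.conf. A minimal sketch - the output name and mode below are made-up examples, so use whatever xrandr --query actually reports on your machine:)
<pre>
# list connected outputs and the modes the new LCD advertises
xrandr --query

# switch to the panel's native mode ("VGA-0" and "1280x1024" are assumptions)
xrandr --output VGA-0 --mode 1280x1024
</pre>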
<P> <I>I also no longer have to drop to root to mount USB drives.</I> <P> Yes. Unfortunately it also means that inserted CDs and DVDs are mounted at various places, usually not at /cdrom where it used to be. Again, I'd consider this a change, not an improvement. <P> <I>I don't have to drop to root to switch networks or join a vpn...</I> <P> Good for you - on my laptop Linux doesn't notice if I take it down from the docking station; I have to issue an 'ifconfig down; ifconfig up' to get the network working again. Of course, usually I have to reboot, because the graphics adapter doesn't switch to the internal LCD either, but let's blame it on the proprietary driver. <P> <I>I now actually have suspend-to-ram that actually works.</I> <P> Interestingly, suspend-to-disk used to work for me in 2.4. Now, of course, it doesn't. Sat, 16 Aug 2008 14:01:00 +0000

Wrong question to ask... https://lwn.net/Articles/294291/ khim <p><i>The question is: is it worth having so many different configurations?</i></p> <p>And the answer is: there is no alternative. The Windows world <b>includes a lot of</b> duplication too: a fresh installation of Windows XP already includes three or four versions of LibC (called msvcr* in the Windows world), for example. But since there is a central authority, it can force its views on the world. In the Linux world the only central authority is Linus - and even he has very little control over the kernel shipped to customers, let alone the rest of the system. So there is no way to avoid different configurations...</p> <p><i>I understand that Linux evolves faster than Windows, so there are a whole lot more versions out there - but do these releases add something for the desktop user?</i></p> <p>Probably not. But then the situation is simple: either you release stuff - and it can be used by new projects - or you don't release stuff at all, and then you'll have a frozen desktop forever. Joint development and early releases only for trusted partners don't work very well in the OSS world... Again: no central authority - no way to synchronize development... Some weak synchronization is possible, but nothing like in the Windows world...</p> Sat, 16 Aug 2008 07:34:25 +0000

Sorry, but it's just lies at this point https://lwn.net/Articles/294290/ khim <p><i>And they'll probably do the same thing that Microsoft does to uninstall malicious software. That is: "Nothing At All".</i></p> <p>Sorry, but <a href="http://www.microsoft.com/security/malwareremove/default.mspx">this</a> does not look like nothing. And programs like <a href="http://www.martau.com/">this</a> or <a href="http://www.crystalidea.com/">this</a> are not like dpkg or rpm at all. The Windows model is <b>failing</b> - and requires a lot of crutches to work. Will it be with us for much longer? Who knows? If people stop installing every dancing sexy screensaver they can find, it'll survive - but then it'll lose a lot of appeal: you'll be forced to use a very limited set of software, not because there is nothing else, but because you are afraid to break the system. A lot of people are in this situation already.</p> <p><i>It's solved for Microsoft's customers. I don't know about you, but whenever I install software on Microsoft Windows it works. It may not work well, but it works.</i></p> <p>If you install it on a freshly installed Windows - yes, sure. But if your system is a few years old... chances are it'll not only not work once installed, it can even kill some programs already installed!
Thus there are <a href="http://support.microsoft.com/kb/222193">Windows File Protection</a>, <a href="http://windowshelp.microsoft.com/windows/en-US/help/9f6d755a-74bb-4a7d-a625-d762dd8e79e51033.mspx">System Restore</a> and other related crap.</p> <p><i>I am talking about compiling software and its dependencies from scratch and trying to get it to work on Debian Testing/Sid.</i></p> <p>Have you tried to do the same (compile software and its dependencies from scratch) under Windows? Try it some time. You'll probably need to find a few abandoned packages, buy some tools - and in the end get some non-working piece of software because your system included the wrong set of headers...</p> <p><i>So there has to be a more elegant way to deal with this stuff. There is no reason on earth why it makes sense to have 8 different groups of people working independently on packaging 8 different versions of the same exact piece of software for the same exact hardware platform on, fundamentally, the same exact software platform.</i></p> <p>They don't do this independently. Patches are flying right and left, and the only truly duplicated thing is testing - in the Windows world it's the same. Situations where a program works fine under XP and fails on Vista (or vice versa) are common...</p> Sat, 16 Aug 2008 07:24:52 +0000

Something going on with Fedora https://lwn.net/Articles/294285/ tzafrir <div class="FormattedComment"><pre>
So now you won't get those announcements from Fedora. And in exchange you will get them from:

* The GNOME project
* The KDE project
* The GNU project
* The Linux Kernel maintainers
* The Apache foundation
* The OpenOffice maintainers
* Sun (for VirtualBox and MySQL)
* The Eclipse foundation
* The Blender foundation
* ...

And those are only the big guys. Many of the packages are maintained by much smaller groups. Some of them could not care less about your distribution (or wouldn't have the manpower to care). You'll have to get the public keys of all of them in a reliable way, verify their announcements, and help them debug the problems that come from applying the latest fix on your special platform. Sounds like fun!
</pre></div> Sat, 16 Aug 2008 06:04:40 +0000

Pipe dreams... https://lwn.net/Articles/294268/ vonbrand <blockquote> Right now we solve problems through a brute-force and highly labor-intensive approach: each Linux distro is responsible for packaging software, debugging those packages, and all that sort of thing, to compensate for relatively small differences between them. All this huge duplication of work for just minor differences. </blockquote> <p> Reasonable distros do track upstream software as closely as possible, and are careful to ship bug reports (even with proposed fixes) upstream where relevant, so this "huge duplication of work" just isn't there. Distributions <em>do</em> share patches and setups (or swizzle them from each other, that is what open source is for), and there are even cases where an upstream developer is the packager for a distribution, or somebody packages for several distributions. <p> Besides, I just don't see a terrible amount of work when installing something from source... unless the package is <em>very</em> badly done software, in which case the installation troubles are probably just the very beginning of an extremely painful experience. A useful rule of thumb is that if the installation is confusing or badly done, the rest of the stuff probably matches, and should be avoided.
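<p> For a well-behaved package the whole exercise usually amounts to three or four commands. A minimal sketch, assuming an autotools-style tarball (the name "foo-1.2" is made up) and installing under /usr/local so the package manager's files are left alone:
<pre>
tar xzf foo-1.2.tar.gz
cd foo-1.2
./configure --prefix=/usr/local   # stays out of the package manager's territory
make
sudo make install                 # or su -c 'make install'
</pre>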
<p> Yes, I lived in the pre-Linux days, when there were lots of different Unixy systems around, with <em>real</em> differences among them and no one packaging "extraofficial" software. <em>That</em> was real pain. The current situation is tame in comparison. Sat, 16 Aug 2008 02:11:22 +0000

Pipe dreams... https://lwn.net/Articles/294257/ cortana <div class="FormattedComment"><pre>
&lt;blockquote&gt;We already have install/uninstall scripts for packages. You invoke them when you do a yum install or an apt-get install to install the software, then apt-get remove and yum remove when you remove the software.&lt;/blockquote&gt;

Currently these scripts are minimal and are written by people who actually know what they are doing WRT distribution integration, etc. You have only to look at the incredibly complex and unreliable scripts shipped by Plesk, VMWare, etc. to see what kind of horrors such a system would unleash on our users.
</pre></div> Fri, 15 Aug 2008 23:02:34 +0000

It will work somewhat like that... https://lwn.net/Articles/294247/ drag <div class="FormattedComment"><pre>
Well, from a Linux desktop perspective the 2.6 kernel was a pretty big improvement.

It added a much better level of interactivity, better response (although I have to recompile my kernel to get it, since Debian ships theirs with preemption disabled by default).

With udev and friends it's added significantly better hardware detection and hotplug capabilities. Improvements in drivers helped also - e.g. my system no longer crashes when I plug in a USB hub chain with 7 different devices attached. (For most of the benefit of being able to autoconfigure input devices and video cards and such, you have to wait for X.org to catch up.) And despite what other people may have wanted to believe, devfs sucked majorly. It may have worked in some cases, but it failed every time I touched it.

Remember back in the day when the first step of any Linux installation was to break out Windows Device Manager and write down all the hardware on your system? It's been a hell of a long time since I've seen anybody suggest that. With 2.4, whenever I tried to install Linux I'd have to go at the computer with a screwdriver in one hand and a cdrom in the other.

I also no longer have to drop to root to mount USB drives. I don't have to drop to root to switch networks or join a vpn... and it doesn't involve any setuid root binaries.. dbus and udev are a much safer, much more secure way to approach general desktop stuff.

I now actually have suspend-to-ram that actually works. Much, much better power management facilities. The biggest differences, for desktop users, are going to be felt on mobile devices. Of course a lot of that is the kernel + userspace stuff, but it wouldn't be possible without many of the kernel changes.
</pre></div> Fri, 15 Aug 2008 21:40:25 +0000

Pipe dreams... https://lwn.net/Articles/294239/ drag <div class="FormattedComment"><pre>
<font class="QuotedText">&gt; Sure. It'll create a thriving support industry: since RedHat will point fingers at LSB, LSB at Adobe and Adobe back at RedHat, we'll have complex scripts to install/uninstall packages, clean up stuff after bad packages and so on. And NOBODY will be able to help you without access to your system - since every system will be broken in its own unique way. Now the only question is: is the creation of such an industry a worthy goal or not?
To me the answer is simple: thnx, but no, thnx.</font>

We already have install/uninstall scripts for packages. You invoke them when you do a yum install or an apt-get install to install the software, then apt-get remove and yum remove when you remove the software.

<font class="QuotedText">&gt; Nope. They'll have yet another sector of work: scripts and subsystems designed to cope with broken installation/uninstallation programs and malware removal tools.</font>

Ya. They already do that. It's called dpkg and rpm. (Maybe you're a Slackware user or something? I'm surprised you've never heard of this stuff! Just kidding.)

And they'll probably do the same thing that Microsoft does to uninstall malicious software. That is: "Nothing At All". Because there is nothing you can do, and nothing you should do. You make your system 'correct' and you make it strong, and if an administrator decides to install a rootkit into their system, however mistaken, there is fundamentally nothing you can do to stop that from happening, besides using some draconian method of locking down the system using TPM or something bizarre like that. Or go with an iPhone model of software delivery. Which is kinda sorta what we have with apt-get.

<font class="QuotedText">&gt; LSB is DOA - it tries to solve a problem even more complex than Microsoft's, and even Microsoft's problem is unsolvable. Simple one-binary programs without external dependencies work just fine without LSB, and it's useless for complex programs. It has all been discussed many, many times already: it does not work IRL and it won't work in Linux either.</font>

It's solved for Microsoft's customers. I don't know about you, but whenever I install software on Microsoft Windows it works. It may not work well, but it works. This is far more than I can say about installing any reasonably complex piece of Linux software that isn't pre-packaged for my distribution by my distribution. It usually takes a lot of effort, requires significant skill (relative to installing software on Windows), and takes upwards of hours to complete, with only about a 70% success rate. I am talking about compiling software and its dependencies from scratch and trying to get it to work on Debian Testing/Sid. And this has _nothing_to_do_ with closed source software. Everything I am personally talking about up to this point is open source software.

--------------------

What I would like to see is uniformity and standardization between Linux distros. Not from the top down like LSB, but from the ground up. They already do things like ship similar versions of GCC, libc, the Linux kernel, Gnome, KDE, etc. Every modern Linux desktop-oriented distribution that I've used even uses the same programs for managing networks.

Right now we solve problems through a brute-force and highly labor-intensive approach: each Linux distro is responsible for packaging software, debugging those packages, and all that sort of thing, to compensate for relatively small differences between them. All this huge duplication of work for just minor differences.

So there has to be a more elegant way to deal with this stuff. There is no reason on earth why it makes sense to have 8 different groups of people working independently on packaging 8 different versions of the same exact piece of software for the same exact hardware platform on, fundamentally, the same exact software platform.

The most obvious way is to just make everything identical. Maybe that would work, maybe it wouldn't. But I doubt it's the only solution.
If that won't work, then there _has_ to be a different way. Maybe something along the lines of the new trend of integrating package making with revision control systems, plus standards on publishing packages. Something has to be possible.
</pre></div> Fri, 15 Aug 2008 21:22:57 +0000

It will work somewhat like that... https://lwn.net/Articles/294238/ NAR <div class="FormattedComment"><pre>
The question is: is it worth having so many different configurations? I understand that Linux evolves faster than Windows, so there are a whole lot more versions out there - but do these releases add something for the desktop user? For example, the whole 2.6 kernel series added exactly one feature that I use: support for WiFi.
</pre></div> Fri, 15 Aug 2008 21:06:42 +0000

It will work somewhat like that... https://lwn.net/Articles/294232/ khim <p>Firefox works with Windows XP and Windows Vista - <b>and</b> it can use features from both. How is it done? It's simple: check the version of Windows and have two copies of the code. Since Linux evolves faster than Windows, you'll need more copies of code: printing with GTK or without GTK, with Cairo or without Cairo, etc. It'll introduce new, interesting bugs and will create more work for support teams.</p> <p>The only thing which saves Windows developers is the long stretches between releases: Windows 2000/XP/2003 are quite similar, and while Windows Vista is quite different, you can finally drop support for Windows 9X! This covers Windows versions produced in NINE years. If you try to do the same with Linux, you'll be forced to support esd/arts/alsa/pulseaudio just for sound, xine/gstreamer 0.8/gstreamer 0.10 for video, and even GCC 2.95/3.x/4.x for libstdc++. <b>Nightmare</b>.</p> Fri, 15 Aug 2008 20:15:51 +0000

Umm, no. https://lwn.net/Articles/294228/ i3839 <div class="FormattedComment"><pre>
Except that it doesn't work like that. Changing it the way you want won't solve the things you mention. For instance, a new Firefox probably depends on newer libraries, so no matter what distro you use, you need to upgrade a lot of software. And that dependency chain dribbles all the way down, so before you know it you need to upgrade half your system. This is a dependency problem, not a distribution problem.
</pre></div> Fri, 15 Aug 2008 20:00:04 +0000

Pipe dreams... https://lwn.net/Articles/294225/ khim <p><i>It would be great if the Linux community could embrace these and encourage ISVs to start packaging their programs themselves.</i></p> <p>Sure. It'll create a thriving support industry: since RedHat will point fingers at LSB, LSB at Adobe and Adobe back at RedHat, we'll have complex scripts to install/uninstall packages, clean up stuff after bad packages and so on. And NOBODY will be able to help you without access to your system - since every system will be broken in its own unique way. Now the only question is: is the creation of such an industry a worthy goal or not? To me the answer is simple: thnx, but no, thnx.</p> <p><i>With such a system the Linux distro maintainers could focus their efforts on making great operating systems.</i></p> <p>Nope. They'll have yet another sector of work: scripts and subsystems designed to cope with broken installation/uninstallation programs and malware removal tools.</p> <p><i>Meanwhile, users would be free to mix and match software and versions without delving into the minutiae of system administration.</i></p> <p>Yup.
If they don't actually care about <b>runnability</b> of said software, that is.</p> <p><i>They'd also be free of the burden of having to upgrade their entire computing environment just to get Firefox 3.</i></p> <p>Yup - the only way to run it will be to format the hard drive and install a new version of the OS once the Mozilla Foundation decides to drop support for Fedora 7...</p> <p>Sorry, but this approach does not make Windows very happy...</p> <p>LSB is DOA - it tries to solve a problem even more complex than Microsoft's, and even Microsoft's problem is unsolvable. Simple one-binary programs without external dependencies work just fine without LSB, and it's useless for complex programs. It has all been discussed many, many times already: it does not work IRL and it won't work in Linux either.</p> Fri, 15 Aug 2008 19:47:41 +0000

Something going on with Fedora https://lwn.net/Articles/294224/ xorbe <div class="FormattedComment"><pre>
<font class="QuotedText">&gt; we recommend you not download or update any</font>
<font class="QuotedText">&gt; additional packages on your Fedora systems.</font>

If it's just server problems (not security), then that was way too ominous a choice of wording...
</pre></div> Fri, 15 Aug 2008 19:35:33 +0000

Something going on with Fedora https://lwn.net/Articles/294222/ MattPerry <div class="FormattedComment"><pre>
This is a great example of the problem of having all of your software come from packages from the distro vendor. I've long thought that ISVs should be responsible for packaging their programs, and then people can install those packages as needed. Instead, we have distros package every program under the sun and you are dependent on them to provide those packages via their own repository.

The Filesystem Hierarchy Standard and Linux Standard Base seem to offer the ability to get away from this model. It would be great if the Linux community could embrace these and encourage ISVs to start packaging their programs themselves. With such a system the Linux distro maintainers could focus their efforts on making great operating systems. Meanwhile, users would be free to mix and match software and versions without delving into the minutiae of system administration. They'd also be free of the burden of having to upgrade their entire computing environment just to get Firefox 3.
</pre></div> Fri, 15 Aug 2008 19:29:02 +0000
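<p> For what it's worth, the Filesystem Hierarchy Standard already sets aside a place for exactly this kind of vendor-packaged, add-on software: /opt/&lt;package&gt; (or /opt/&lt;provider&gt;) for the software itself, /etc/opt for its host-specific configuration, and /var/opt for its variable data. A minimal sketch of what an ISV-built package might lay down - the "examplevendor" and "exampleapp" names are made up for illustration:</p>
<pre>
# Hypothetical layout for a self-packaged ISV application, following the FHS:
/opt/examplevendor/exampleapp/bin/exampleapp    # the program itself
/opt/examplevendor/exampleapp/lib/              # its private libraries
/etc/opt/examplevendor/exampleapp.conf          # host-specific configuration
/var/opt/examplevendor/exampleapp/              # variable data (logs, state)

# Optionally, the administrator can put it on the default PATH:
ln -s /opt/examplevendor/exampleapp/bin/exampleapp /usr/local/bin/exampleapp
</pre>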