Umm, no.
Posted Aug 15, 2008 20:00 UTC (Fri) by i3839 (guest, #31386)
In reply to: Something going on with Fedora by MattPerry
Parent article: Something going on with Fedora
Except that it doesn't work like that. Changing it the way you want won't solve the things you mention. For instance, a new Firefox probably depends on newer libraries, so no matter what distro you use, you need to upgrade a lot of software. And that dependency chain trickles all the way down, so before you know it you need to upgrade half your system. This is a dependency problem, not a distribution problem.
Posted Aug 15, 2008 20:15 UTC (Fri)
by khim (subscriber, #9252)
[Link] (7 responses)
Firefox works with Windows XP and Windows Vista - and it can use features from both. How is it done? It's simple: check the version of Windows and keep two copies of the code. Since Linux evolves faster than Windows, you'll need more copies of code: printing with GTK or without GTK, with Cairo or without Cairo, etc. It'll introduce new, interesting bugs and will create more work for support teams. The only thing which saves Windows developers is the long stretches between releases: Windows 2000/XP/2003 are quite similar, and while Windows Vista is quite different, you can finally drop support for Windows 9x! This covers Windows versions produced over NINE years. If you try to do the same with Linux you'll be forced to support esd/arts/alsa/pulseaudio just for sound, xine/gstreamer 0.8/gstreamer 0.10 for video, and even GCC 2.95/3.x/4.x for libstdc++. Nightmare.
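The "check the version and keep two copies of the code" idea translates on Linux into probing for optional libraries at runtime and branching. A minimal sketch in Python - the library names and fallback order here are illustrative, not Firefox's actual logic:

```python
import ctypes.util


def have_library(name: str) -> bool:
    # True if a shared library of this name is visible to the runtime linker.
    return ctypes.util.find_library(name) is not None


def pick_sound_backend() -> str:
    # Probe the newer libraries first and fall back, rather than hard-coding
    # one system configuration at build time.  Each branch would select a
    # separate copy of the sound code, as described above.
    if have_library("pulse"):       # PulseAudio client library
        return "pulseaudio"
    if have_library("asound"):      # ALSA
        return "alsa"
    return "oss"                    # last-resort fallback


print(pick_sound_backend())
```

Every such probe multiplies the configurations a support team has to care about, which is exactly the combinatorial problem described above.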
Posted Aug 15, 2008 21:06 UTC (Fri)
by NAR (subscriber, #1313)
[Link] (6 responses)
Posted Aug 15, 2008 21:40 UTC (Fri)
by drag (guest, #31333)
[Link] (4 responses)
Posted Aug 16, 2008 14:01 UTC (Sat)
by NAR (subscriber, #1313)
[Link] (3 responses)
Well, I haven't noticed better interactivity - the kernel might be better in this field, but it still takes a long time to start applications. What made for a better desktop experience is the use of multicore processors: if an application eats up 100% of CPU time, the rest of the system still works.
significantly better hardware detection and hotplug capabilities.
I've changed my monitor recently from a CRT to an LCD. Windows detected it fine, but under Linux I had to edit xorg.conf manually. Not much of an improvement...
I also no longer have to drop to root to mount USB drives.
Yes. Unfortunately it also means that inserted CDs and DVDs are mounted in various places, usually not at /cdrom where they used to be. Again, I'd consider this a change, not an improvement.
I don't have to drop to root to switch networks or join a vpn...
Good for you - on my laptop Linux doesn't notice if I take it off the docking station; I have to issue an 'ifconfig down; ifconfig up' to get the network working again. Of course, usually I have to reboot, because the graphics adapter doesn't switch to the internal LCD either, but let's blame that on the proprietary driver.
I now actually have suspend-to-ram that actually works.
Interestingly suspend-to-disk used to work for me in 2.4. Now, of course, it doesn't.
Posted Aug 16, 2008 23:39 UTC (Sat)
by strcmp (subscriber, #46006)
[Link]
Well, I haven't noticed better interactivity - the kernel might be better in this field, but it still takes a long time to start applications. What made for a better desktop experience is the use of multicore processors: if an application eats up 100% of CPU time, the rest of the system still works.
Starting applications looks like interactivity from a user's perspective, but for the kernel this counts as throughput: how long does it take to open all the files, read the data from disk (in the case of libraries these tend to be random reads, mainly determined by disk seek speed), parse the configuration data and set up the program? The interactivity drag talked about was scheduling threads when they are needed, i.e. no audio and video skips, fast reaction to mouse clicks.
Posted Aug 17, 2008 12:48 UTC (Sun)
by tialaramex (subscriber, #21167)
[Link] (1 responses)
Posted Aug 18, 2008 14:45 UTC (Mon)
by NAR (subscriber, #1313)
[Link]
Except not changing the internal interfaces every other week...
the mount point for detected media is configurable by your distribution or by you
Surely. The problem is that I've never found where it can be configured. Also, I used to have a /dev/dvd link that was lost somewhere between upgrading from Gutsy to Hardy (which was a bad decision). I have a feeling that distributions tend to get first-time installation working fine, but they still have problems with upgrades. I'm pretty sure that upgrading from Windows XP to Windows Vista is also quite painful, but while Windows users need to upgrade only every 3-4 years, Linux users have to upgrade much more often.
all your hardware was really well supported in 2.4 then you'll notice less improvement from 2.6
Actually, 2.4 supported my hardware then better than 2.6 supports my hardware now. And it's not just the graphics card - xawtv used to be able to control the volume, but now it just doesn't work. It's annoying enough that I'm using Windows more and more at home.
Posted Aug 16, 2008 7:34 UTC (Sat)
by khim (subscriber, #9252)
[Link]
The question is: is it worth having so many different configurations? And the answer is: there is no alternative. The Windows world includes a lot of duplication too: a fresh installation of Windows XP already includes three or four versions of libc (called msvcr* in the Windows world), for example. But since there is a central authority, it can force its views on the world. In the Linux world the only central authority is Linus - and even he has very little control over the kernel shipped to customers, let alone the rest of the system. So there is no way to avoid different configurations...

I understand that Linux evolves faster than Windows, so there are a whole lot more versions out there - but do these releases add something for the desktop user? Probably not. But then the situation is simple: either you release stuff - and it can be used by new projects - or you don't release stuff at all - and then you'll have a frozen desktop forever. Joint development and early releases only for trusted partners don't work very well in the OSS world... Again: no central authority - no way to synchronize development... Some weak synchronization is possible, but nothing like the Windows world...
It will work somewhat like that...
The question is: is it worth having so many different configurations? I understand that Linux evolves faster than Windows, so there are a whole lot more versions out there - but do these releases add something for the desktop user? For example, the whole 2.6 kernel series added exactly one feature that I use: support for WiFi.
It will work somewhat like that...
Well, from a Linux desktop perspective the 2.6 kernel was a pretty big improvement.

It added much better level of interactivity, better response (although I have to recompile my kernel to get it, since Debian ships theirs with preemption disabled by default).

With udev and friends it added significantly better hardware detection and hotplug capabilities. Improvements in drivers helped also, e.g. my system no longer crashes when I plug in a USB hub chain with 7 different devices attached. (For most of the benefit of being able to autoconfigure input devices and video cards and such you have to wait for X.org to catch up.)

And despite what other people may have wanted to believe, devfs sucked majorly. It may have worked in some cases, but it failed every time I touched it.

Remember back in the day when the first step of any Linux installation was to break out the Windows Device Manager and write down all the hardware in your system? It's been a hell of a long time since I've seen anybody suggest that. With 2.4, whenever I tried to install Linux I'd have to go at the computer with a screwdriver in one hand and a CD-ROM in the other.

I also no longer have to drop to root to mount USB drives. I don't have to drop to root to switch networks or join a vpn... and it doesn't involve any setuid root binaries. dbus and udev are a much safer, much more secure way to approach general desktop stuff. I now actually have suspend-to-ram that actually works. Much, much better power management facilities. The biggest differences, for desktop users, are going to be felt on mobile devices.

Of course a lot of that is kernel + userspace stuff, but it wouldn't be possible without many of the kernel changes.
It will work somewhat like that...
Three of your complaints seem to be based on running a third-party proprietary video driver. My free-software driver detects the supported resolutions of connected displays at runtime without any configuration. It works for my five-year-old LCD panel, my mother's old CRT, her new widescreen panel, the projector at work, and so on. So X.org gets this right, but obviously your proprietary driver has the option to screw it up.
Replacing the only connected display in a single-head setup shouldn't require a reboot either. Detecting the change needs an interrupt, and the proprietary driver ought to use this interrupt to initiate the necessary reconfiguration. Alternatively, you could bind the change to a keypress (my laptop has a button which seems labelled for this exact purpose).
Suspend to disk is most commonly blocked by video drivers that can't actually restore the state of the graphics system after power is restored. This is excusable when the driver has been reverse-engineered despite opposition from the hardware maker (e.g. Nouveau), but it seems pretty incompetent when it happens in a supposedly "supported" proprietary driver from the company that designed the hardware.
Nothing Linus or anyone else working on 2.6 could have done would have made proprietary drivers stop being third rate. If you go look at Microsoft's hardware vendor relationships you'll see they have exactly the same problem, and they have to endlessly threaten and bribe vendors to get them to produce code that's even halfway decent.
As to the other comments... the mount point for detected media is configurable by your distribution or by you (the administrator), so if you're sure you'd like CD-ROMs mounted at /cdrom it's not difficult to arrange for that and still keep the auto-mounting (it's also not difficult to disable the auto-mounting if you just don't like it). Newer 2.6 kernels also support (though your hardware may well not) auto-detecting inserted or removed CDs/DVDs without needing to poll the drive. Surely, even if you want the mount point to be /cdrom, it's convenient that with 2.6 + udev any CD-ROM drive connected to your laptop (whether from the base station, via USB, or whatever) gets a symlink called /dev/cdrom?
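For what it's worth, that stable symlink typically comes from a small udev rules file, and a local override uses the same mechanism. A hypothetical fragment - the file path is illustrative and the exact match keys vary by distribution:

```
# /etc/udev/rules.d/99-local-cdrom.rules  (illustrative path)
# Give any optical drive identified by udev a stable /dev/cdrom symlink,
# whichever sr* device node the kernel happens to assign it.
SUBSYSTEM=="block", ENV{ID_CDROM}=="?*", SYMLINK+="cdrom"
```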
Of course, if all your hardware was really well supported in 2.4 then you'll notice less improvement from 2.6. Infrastructure-wise it seems much nicer to me: fewer hard-wired assumptions and more exposure of events to userspace.
Nothing Linus or anyone else working on 2.6 could have done would have made proprietary drivers stop being third rate.
Wrong question to ask...