
Something going on with Fedora

From:  "Paul W. Frields" <stickster-AT-gmail.com>
To:  fedora-announce-list <fedora-announce-list-AT-redhat.com>
Subject:  Important infrastructure announcement
Date:  Thu, 14 Aug 2008 19:15:13 -0400
Message-ID:  <1218755713.15419.9.camel@victoria>
Cc:  Development discussions related to Fedora Core <fedora-devel-list-AT-redhat.com>, fedora-advisory-board <fedora-advisory-board-AT-redhat.com>

The Fedora Infrastructure team is currently investigating an issue in
the infrastructure systems.  That process may result in service outages,
for which we apologize in advance.  We're still assessing the end-user
impact of the situation, but as a precaution, we recommend you not
download or update any additional packages on your Fedora systems.

We'll share updates as we develop more information.  Those updates will
be published here on the public fedora-announce-list:
https://redhat.com/mailman/listinfo/fedora-announce-list 

Thanks for your patience as we continue working on this.


-- 
Paul W. Frields
  gpg fingerprint: 3DA6 A0AC 6D58 FEC4 0233  5906 ACDB C937 BD11 3717
  http://paul.frields.org/   -  -   http://pfrields.fedorapeople.org/
  irc.freenode.net: stickster @ #fedora-docs, #fedora-devel, #fredlug

-- 
fedora-devel-list mailing list
fedora-devel-list@redhat.com
https://www.redhat.com/mailman/listinfo/fedora-devel-list




Something going on with Fedora

Posted Aug 15, 2008 19:29 UTC (Fri) by MattPerry (guest, #46341) [Link] (18 responses)

This is a great example of the problem of having all of your software come as packages from
the distro vendor.  I've long thought that ISVs should be responsible for packaging their
programs, and then people could install those packages as needed.  Instead, we have distros
packaging every program under the sun, and you are dependent on them to provide those packages
via their own repository.

The Filesystem Hierarchy Standard and Linux Standard Base seem to offer the ability to get
away from this model.  It would be great if the Linux community could embrace these and
encourage ISVs to start packaging their programs themselves.  With such a system the Linux
distro maintainers could focus their efforts on making great operating systems.  Meanwhile,
users would be free to mix and match software and versions without delving into the minutiae of
system administration.  They'd also be free of the burden of having to upgrade their entire
computing environment just to get Firefox 3.

Pipe dreams...

Posted Aug 15, 2008 19:47 UTC (Fri) by khim (subscriber, #9252) [Link] (6 responses)

> It would be great if the Linux community could embrace these and encourage ISVs to start packaging their programs themselves.

Sure. It'll create a thriving support industry: Red Hat will point fingers at the LSB, the LSB at Adobe, and Adobe back at Red Hat, and we'll have complex scripts to install/uninstall packages, clean up after bad packages, and so on. And NOBODY will be able to help you without access to your system - since every system will be broken in its own unique way. The only remaining question is whether creating such an industry is a worthy goal or not. To me the answer is simple: thanks, but no thanks.

> With such a system the Linux distro maintainers could focus their efforts on making great operating systems.

Nope. They'll have yet another area of work: scripts and subsystems designed to cope with broken installation/uninstallation programs, and malware removal tools.

> Meanwhile, users would be free to mix and match software and versions without delving into the minutiae of system administration.

Yup. If they don't actually care about whether said software runs, that is.

> They'd also be free of the burden of having to upgrade their entire computing environment just to get Firefox 3.

Yup - the only way to run it will be to format the hard drive and install a new version of the OS once the Mozilla Foundation decides to drop support for Fedora 7...

Sorry, but this approach does not make Windows users very happy...

LSB is DOA - it tries to solve a problem even more complex than Microsoft's, and even Microsoft's problem is unsolvable. Simple one-binary programs without external dependencies work just fine without the LSB, and it's useless for complex programs. This has all been discussed many times already: it does not work in the real world and it won't work on Linux either.

Pipe dreams...

Posted Aug 15, 2008 21:22 UTC (Fri) by drag (guest, #31333) [Link] (5 responses)

> Sure. It'll create a thriving support industry: Red Hat will point fingers at the LSB, the LSB
> at Adobe, and Adobe back at Red Hat, and we'll have complex scripts to install/uninstall
> packages, clean up after bad packages, and so on. And NOBODY will be able to help you without
> access to your system - since every system will be broken in its own unique way. The only
> remaining question is whether creating such an industry is a worthy goal or not. To me the
> answer is simple: thanks, but no thanks.


We already have install/uninstall scripts for packages. You invoke them when you do a yum
install or an apt-get install to install the software, and apt-get remove or yum remove when
you remove the software.
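For the record, that cycle looks something like this (the package name here is just an example):

    # Debian/Ubuntu side - the package's maintainer scripts run automatically:
    sudo apt-get install aide
    sudo apt-get remove aide

    # Fedora/Red Hat side:
    sudo yum install aide
    sudo yum remove aide

    # The scripts themselves are plainly visible, e.g. on a dpkg-based system:
    ls /var/lib/dpkg/info/aide.*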

> Nope. They'll have yet another area of work: scripts and subsystems designed to cope with
> broken installation/uninstallation programs, and malware removal tools.

Yeah, they already do that. It's called dpkg and rpm. (Maybe you're a Slackware user or something?
I'm surprised you've never heard of this stuff! Just kidding.)

And they'll probably do the same thing that Microsoft does to uninstall malicious software.
That is: "Nothing At All". Because there is nothing you can do, and nothing you should do. You
make your system correct and you make it strong, and if an administrator decides to install a
rootkit on their system, however mistaken, there is fundamentally nothing you can do to stop
that from happening, short of some draconian method of locking down the system using TPM or
something bizarre like that.

Or go with an iPhone model of software delivery. Which is kinda sorta what we have with
apt-get.

> LSB is DOA - it tries to solve a problem even more complex than Microsoft's, and even
> Microsoft's problem is unsolvable. Simple one-binary programs without external dependencies
> work just fine without the LSB, and it's useless for complex programs. This has all been
> discussed many times already: it does not work in the real world and it won't work on Linux
> either.

It's solved for Microsoft's customers. I don't know about you, but whenever I install software
on Microsoft Windows it works. It may not work well, but it works.  

This is far more than I can say about installing any reasonably complex piece of Linux
software that isn't pre-packaged for my distribution by my distribution. It usually takes a
lot of effort, requires significant skill (relative to installing software on Windows), and
takes hours to complete, with only about a 70% success rate.

I am talking about compiling software and its dependencies from scratch and trying to get it
to work on Debian Testing/Sid.

And this has _nothing_to_do_ with closed source software. Everything I am personally talking
about up to this point is open source software.

--------------------

What I would like to see is uniformity and standardization between Linux distros. Not from the
top down like LSB, but from the ground up.

They already ship similar versions of GCC, libc, the Linux kernel, GNOME, KDE, and so on.
Every modern desktop-oriented Linux distribution that I've used even uses the same programs
for managing the network.

Right now we solve problems through a brute-force and highly labor-intensive approach: each
Linux distro is responsible for packaging software, debugging those packages, and all that
sort of thing, to compensate for relatively small differences between them. All this huge
duplication of work for just minor differences.

So there has to be a more elegant way to deal with this stuff. There is no reason on earth why
it makes sense to have 8 different groups of people working independently on packaging 8
different versions of the same exact piece of software for the same exact hardware platform
on, fundamentally, the same exact software platform.

The most obvious way is to just make everything identical. Maybe that would work, maybe it
wouldn't. But I doubt it's the only solution. If that won't work then there _has_ to be a
different way.

Maybe something along the lines of the new trend of integrating package building with
revision control systems, plus standards for publishing packages. Something has to be possible.


Pipe dreams...

Posted Aug 15, 2008 23:02 UTC (Fri) by cortana (subscriber, #24596) [Link]

> We already have install/uninstall scripts for packages. You invoke them when you do a yum
> install or an apt-get install to install the software, and apt-get remove or yum remove when
> you remove the software.

Currently these scripts are minimal and are written by people who actually know what they are
doing WRT distribution integration, etc.

You have only to look at the incredibly complex and unreliable scripts shipped by Plesk,
VMWare, etc. to see what kind of horrors such a system would unleash on our users.

Pipe dreams...

Posted Aug 16, 2008 2:11 UTC (Sat) by vonbrand (subscriber, #4458) [Link]

> Right now we solve problems through a brute-force and highly labor-intensive approach: each Linux distro is responsible for packaging software, debugging those packages, and all that sort of thing, to compensate for relatively small differences between them. All this huge duplication of work for just minor differences.

Reasonable distros do track upstream software as closely as possible, and are careful to send bug reports (even with proposed fixes) upstream where relevant, so this "huge duplication of work" just isn't there. Distributions do share patches and setups (or swizzle them from each other; that is what open source is for); there are even cases where an upstream developer is the packager for a distribution, or somebody packages for several distributions.

Besides, I just don't see a terrible amount of work when installing something from source... unless the package is very badly done, in which case the installation troubles are probably just the very beginning of an extremely painful experience. A useful rule of thumb is that if installation is confusing or badly done, the rest of the stuff probably matches, and should be avoided.

Yes, I lived in the pre-Linux days, when there were lots of different Unixy systems around, with real differences among them and no one packaging "extraofficial" software. That was real pain. The current situation is tame in comparison.

Sorry, but it's just lies at this point

Posted Aug 16, 2008 7:24 UTC (Sat) by khim (subscriber, #9252) [Link] (2 responses)

> And they'll probably do the same thing that Microsoft does to uninstall malicious software. That is: "Nothing At All".

Sorry, but this does not look like nothing. And programs like this or this are not like dpkg or rpm at all. The Windows model is failing - and requires a lot of crutches to work. Will it be with us for much longer? Who knows? If people stop installing every dancing sexy screensaver they can find, it'll survive - but then it'll lose a lot of its appeal: you'll be forced to use a very limited set of software, not because there is nothing else, but because you are afraid of breaking the system. A lot of people are in this situation already.

> It's solved for Microsoft's customers. I don't know about you, but whenever I install software on Microsoft Windows it works. It may not work well, but it works.

If you install it on a freshly installed Windows system - yes, sure. But if your system is a few years old... chances are it will not only fail to work once installed, it can even break some programs that are already installed! Hence Windows File Protection, System Restore and other related crap.

> I am talking about compiling software and its dependencies from scratch and trying to get it to work on Debian Testing/Sid.

Have you tried to do the same (compile software and its dependencies from scratch) under Windows? Try it some time. You'll probably need to track down a few abandoned packages, buy some tools - and in the end get a non-working piece of software because your system included the wrong set of headers...

> So there has to be a more elegant way to deal with this stuff. There is no reason on earth why it makes sense to have 8 different groups of people working independently on packaging 8 different versions of the same exact piece of software for the same exact hardware platform on, fundamentally, the same exact software platform.

They don't do this independently. Patches are flying right and left, and the only truly duplicated thing is testing - and in the Windows world it's the same: situations where a program works fine under XP and fails on Vista (or vice versa) are common...

Sorry, but it's just lies at this point

Posted Aug 16, 2008 21:10 UTC (Sat) by drag (guest, #31333) [Link] (1 responses)

> Have you tried to do the same (compile software and its dependencies from scratch) under
> Windows? Try it some time. You'll probably need to track down a few abandoned packages, buy
> some tools - and in the end get a non-working piece of software because your system included
> the wrong set of headers...

The point is _I_don't_have_to_. Not when I just want to run the software.

Every major open source project I've seen that has Windows support supplies Windows users with
executables and supplies Linux users with source tarballs, with only a few exceptions on the
Linux side.

-------------------------------------------

I think you're misunderstanding me a bit:


What I want to see is a video game like PlaneShift (a decent Linux MMORPG which follows the
id Software approach, where the engine is GPL but the game content itself is open but non-free)
simply supplying two DEB (or whatever) packages, themselves, that declare the dependencies and
versions they need.

Then the various distributions just pull down that package. If there is something wrong then
they figure it out and send a patch back to upstream.
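A rough sketch of what that could look like for a binary package built by upstream, using
nothing beyond dpkg-deb (every name, version and dependency below is invented for
illustration):

    # Layout for a trivial binary .deb:
    mkdir -p planeshift-demo/DEBIAN planeshift-demo/usr/games
    cp psclient planeshift-demo/usr/games/      # hypothetical game binary
    cat > planeshift-demo/DEBIAN/control <<'EOF'
    Package: planeshift-demo
    Version: 0.4.0-1
    Architecture: i386
    Maintainer: Upstream Project <upstream@example.org>
    Depends: libc6 (>= 2.7), libgl1-mesa-glx
    Description: example of an upstream-built game package
     Placeholder long description.
    EOF
    dpkg-deb --build planeshift-demo            # produces planeshift-demo.deb
    dpkg -I planeshift-demo.deb                 # shows the declared Depends line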

Right now the package makers working for the various distributions are a huge bottleneck in
the distribution of new Linux software, because it requires such a huge manual brute-force
approach. For some software this is the only way it's going to work, but the vast majority -
something like 80%, I expect - doesn't need that level of dedication from the distributions.

Even if they supply it in deb-src form and the distributions use their own servers to compile
it, it's still worlds better than what we have now.

Get it now?

Going back to video games: on Linux there is no way for the average Linux user to compile
them. Their makers are forced to use all those nasty scripts and install dialogs to package
them for Linux users. That's the only way they can get it to work.

And like you said, they are full of nastiness like wrapper scripts that set LD_LIBRARY_PATH,
statically compiled binaries, and other weirdness. These make the games much larger than they
need to be and make it impossible for Linux users to run anything but the very latest games on
the very latest distribution releases.
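Those wrapper scripts are typically just a few lines of shell along these lines (the binary
name and the lib/ directory are placeholders):

    #!/bin/sh
    # Point the dynamic loader at the libraries bundled next to the binary,
    # then run the real executable.
    GAME_DIR=$(dirname "$0")
    LD_LIBRARY_PATH="$GAME_DIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH
    exec "$GAME_DIR/game.bin" "$@"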

And beta testing? _FORGET_IT_. It's impossible. Even for the latest and greatest from Debian
Unstable I need to do things like set up a chroot environment, because trying to install the
software I need would obliterate any hope of having apt-get not break my system. And with that
you're looking at _days_ of effort.
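For the curious, the chroot dance is roughly this (the paths and the mirror URL are only
examples):

    # Build a minimal unstable environment and enter it:
    sudo debootstrap sid /srv/sid-chroot http://ftp.debian.org/debian
    sudo chroot /srv/sid-chroot /bin/bash

    # Inside the chroot, install and exercise the packages under test without
    # touching the host system:
    apt-get update
    apt-get install <package-being-tested>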

And on top of that, because Linux users are utterly dependent on their distribution-supplied
packages, there is no way the average Linux user will ever know about 90% of the games that
are available for Linux unless they subscribe religiously to something like happypenguin.org.

So what we have in Linux distributions is a handful of fairly decent FPSes, some smaller
'gnome' or 'kde' games, and older stuff packaged by the distribution maintainers, who don't
have the time or the desire to track down every new release of every game out there.

And that's just games. 

I could go on and on for any sort of type of software you want.

How about host-based intrusion detection systems? I am evaluating that stuff for work, and
due to various things that are 100% out of the control of myself, my bosses, and my entire
company, having to compile things from source is not fun. It's certainly possible, but it adds
lots of hoops. And quite apart from that, I don't like having to install a large number of
developer tools on production machines.

How many of these are packaged by your distribution?
Samhain
OSSEC
Integrit
AIDE
Tripwire (OSS version)
Tiger

They all seem to be good, solid, and stable pieces of software, and they don't have very
complex dependencies or anything like that. Some are packaged for certain distributions, some
are packaged for others, and some are packaged for none. There is no reason why they couldn't
simply supply a couple of binaries in their own packages so I could install them and test them
out - except that, as it stands, I can't, because distributions are the gatekeepers of what is
installable on my system and they don't have the manpower to manage all of that on their own.
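Checking which of those a given distribution actually carries is at least quick (the package
names below follow upstream; a distribution may spell them differently):

    # Debian/Ubuntu:
    apt-cache policy samhain integrit aide tripwire tiger

    # Fedora and friends:
    yum list samhain aide tripwire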

(OOOHHH.. that's right. I am using Linux, so I don't have to care about malware or rootkits,
because the packages supplied by my distribution are the perfect way to provide my system with
unbreakable security. Well, that solves that problem!)

Sorry, but it's just lies at this point

Posted Aug 16, 2008 23:23 UTC (Sat) by njs (subscriber, #40338) [Link]

I'm sorry you feel so much frustration.

> How many of these are packaged by your distribution? Samhain, OSSEC, Integrit, AIDE,
> Tripwire (OSS version), Tiger

I did take a quick look at this, though, and it looks like for Debian and Ubuntu the answer
is: all of them except OSSEC.  Additionally, the Tiger package appears to contain extensive
enhancements to let it make use of the dpkg database to better validate installed files.  A
quick google suggests[0] that the hold-up on integrating OSSEC is a combination of manpower,
the fact that the upstream package is garbage (seriously, /var/ossec/etc, /var/ossec/bin?),
and the fact that OSSEC is *not legal to redistribute*, because the authors don't understand
that the GPL and OpenSSL licenses are incompatible.

This is a rather nice example of how expertise in coding does not imply expertise in
distribution.  They're different skill-sets.

I see two changes you might be arguing for.  The first is that upstream authors should
habitually make their own packages.  As we see in the case of OSSEC -- and this is pretty much
the universal opinion of anyone who's dealt with any sort of vendor-produced packages ever --
this is an AWFUL IDEA because a huge percentage of upstream will give you garbage.  So as a
user, I insist on having some technical and legal gatekeeper between upstream and my machine.
In fact, the possibility of getting such a gatekeeper is generally considered to be one of the
major advantages of Linux over Windows.

The other thing you seem to argue is that okay, if we need a gatekeeper, there should still
only be one of them -- systems should be similar enough that once one person has done this
work, everyone can make use of it.  Roughly, this comes down to saying "there should only be
one distribution".  Which, well, I guess I can see the argument... but frankly it doesn't
matter how good the argument is, because as soon as you successfully got things down to one
distribution, some jerk would ignore all your hard work and start another one, and there we go
again.  But maybe it helps to reflect that having multiple distributions also creates a lot of
good to justify the bad -- it creates competition to drive development, it provides space for
many different approaches to be explored (look at e.g. all the different init systems) before
any single one is picked, etc.

Hope that helps.

[0] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=361954

Umm, no.

Posted Aug 15, 2008 20:00 UTC (Fri) by i3839 (guest, #31386) [Link] (8 responses)

Except that it doesn't work like that. Changing it the way you want won't solve the things you
mention.

For instance, new Firefox probably depends on newer libraries, so no matter what distro you
use, you need to upgrade a lot of software. And that dependency chain dribbles all the way
down, so before you know it you need to upgrade half your system. This is a dependency
problem, not a distribution problem.

It will work somewhat like that...

Posted Aug 15, 2008 20:15 UTC (Fri) by khim (subscriber, #9252) [Link] (7 responses)

Firefox works with Windows XP and Windows Vista - and it can use features from both. How is it done? Simple: check the version of Windows and have two copies of the code. Since Linux evolves faster than Windows, you'll need more copies of the code: printing with GTK or without GTK, with Cairo or without Cairo, etc. It'll introduce new, interesting bugs and create more work for support teams.

The only thing that saves Windows developers is the long stretches between releases: Windows 2000/XP/2003 are quite similar, and while Windows Vista is quite different, you can finally drop support for Windows 9x! This covers Windows versions produced over NINE years. If you try to do the same with Linux you'll be forced to support esd/arts/alsa/pulseaudio just for sound, xine/gstreamer 0.8/gstreamer 0.10 for video, and even GCC 2.95/3.x/4.x for libstdc++. Nightmare.

It will work somewhat like that...

Posted Aug 15, 2008 21:06 UTC (Fri) by NAR (subscriber, #1313) [Link] (6 responses)

The question is: is it worth having so many different configurations? I understand that Linux
evolves faster than Windows, so there are a whole lot more versions out there - but do these
releases add something for the desktop user? For example, the whole 2.6 kernel series added
exactly one feature that I use: support for WiFi.

It will work somewhat like that...

Posted Aug 15, 2008 21:40 UTC (Fri) by drag (guest, #31333) [Link] (4 responses)

Well from a Linux desktop perspective the 2.6 kernel was a pretty big improvement.

It added a much better level of interactivity and better response (although I have to recompile
my kernel to get it, since Debian ships theirs with preemption disabled by default).

With udev and friends it added significantly better hardware detection and hotplug
capabilities. Improvements in drivers helped also, e.g. my system no longer crashes when I plug
in a USB hub chain with 7 different devices attached. (For most of the benefit of being able to
autoconfigure input devices and video cards and such, you have to wait for X.org to catch up.)

And despite what other people may have wanted to believe, devfs sucked majorly. It may have
worked in some cases, but it failed every time I touched it. 

Remember back in the day when the first step of any Linux installation was to break out the
Windows Device Manager and write down all the hardware in your system? It's been a hell of a
long time since I've seen anybody suggest that. With 2.4, whenever I tried to install Linux
I'd have to go at the computer with a screwdriver in one hand and a CD-ROM in the other.

I also no longer have to drop to root to mount USB drives. I don't have to drop to root to
switch networks or join a VPN... and it doesn't involve any setuid-root binaries: D-Bus and
udev are a much safer, much more secure way to approach general desktop stuff. I now have
suspend-to-RAM that actually works, and much, much better power management facilities. The
biggest differences, for desktop users, are going to be felt on mobile devices.

Of course a lot of that is the kernel + userspace stuff, but it wouldn't be possible without
many of the kernel changes.

It will work somewhat like that...

Posted Aug 16, 2008 14:01 UTC (Sat) by NAR (subscriber, #1313) [Link] (3 responses)

> It added a much better level of interactivity and better response.

Well, I haven't noticed better interactivity - the kernel might be better in this field, but it still takes a long time to start applications. What made for a better desktop experience is the use of multicore processors: if an application eats up 100% of CPU time, the rest of the system still works.

> significantly better hardware detection and hotplug capabilities.

I recently changed my monitor from a CRT to an LCD. Windows detected it fine, but under Linux I had to edit xorg.conf manually. Not much of an improvement...

> I also no longer have to drop to root to mount USB drives.

Yes. Unfortunately it also means that inserted CDs and DVDs are mounted in various places, usually not at /cdrom where they used to be. Again, I'd consider this a change, not an improvement.

> I don't have to drop to root to switch networks or join a VPN...

Good for you - on my laptop Linux doesn't notice if I take it off the docking station; I have to issue an 'ifconfig down; ifconfig up' to get the network working again. Of course, usually I have to reboot, because the graphics adapter doesn't switch to the internal LCD either, but let's blame that on the proprietary driver.

> I now have suspend-to-RAM that actually works.

Interestingly suspend-to-disk used to work for me in 2.4. Now, of course, it doesn't.

It will work somewhat like that...

Posted Aug 16, 2008 23:39 UTC (Sat) by strcmp (subscriber, #46006) [Link]

> Well, I haven't noticed better interactivity - the kernel might be better in this field, but it still takes a long time to start applications. What made for a better desktop experience is the use of multicore processors: if an application eats up 100% of CPU time, the rest of the system still works.

Starting applications looks like interactivity from a user's perspective, but for the kernel this counts as throughput: how long it takes to open all the files, read the data from disk (in the case of libraries these tend to be random reads, mainly determined by disk seek speed), parse the configuration data and set up the program. The interactivity drag talked about is scheduling threads when they are needed, i.e. no audio or video skips, and fast reaction to mouse clicks.

It will work somewhat like that...

Posted Aug 17, 2008 12:48 UTC (Sun) by tialaramex (subscriber, #21167) [Link] (1 responses)

Three of your complaints seem to be based on running a third party proprietary video driver.

• My free software driver detects the supported resolutions of connected displays at runtime
without any configuration. It works for my five-year-old LCD panel, my mother's old CRT, her
new widescreen panel, the projector at work, and so on. So X.org gets this right, but
obviously your proprietary driver has the option to screw it up.

• Replacing the only connected display shouldn't require a reboot either. Detecting the change
needs an interrupt, and the proprietary driver ought to use this interrupt to initiate the
necessary reconfiguration. Alternately you could bind the change to a keypress (my laptop has a
button which seems labelled for this exact purpose); see the sketch after this list.

• Suspend to disk is most commonly blocked by video drivers that can't actually restore the
state of the graphics system after power is restored. This is excusable when the driver has
been reverse engineered despite opposition from the hardware maker (e.g. Nouveau) but seems
pretty incompetent if it happens in a supposedly "supported" proprietary driver from the
company that designed the hardware.
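Roughly what that runtime reconfiguration looks like from userspace with a RandR-capable free
driver (the output names below vary by driver and are only examples):

    # List outputs and the modes read from each monitor's EDID:
    xrandr

    # Switch to the newly connected display's preferred mode:
    xrandr --output VGA --auto

    # Or fall back to the laptop's internal panel:
    xrandr --output LVDS --auto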

Nothing Linus or anyone else working on 2.6 could have done would have made proprietary
drivers stop being third rate. If you go look at Microsoft's hardware vendor relationships
you'll see they have the same exact problem, and they have to endlessly threaten and bribe
vendors to get them to produce code that's even halfway decent.

As to the other comments... the mount point for detected media is configurable by your
distribution or by you (the administrator), so if you're sure you'd like CD-ROMs mounted at
/cdrom it's not difficult to arrange for that and still keep the auto-mounting (it's also not
difficult to disable the auto-mounting if you just don't like it). Newer 2.6 kernels also
support (though your hardware may well not) auto-detecting inserted or removed CDs/DVDs without
needing to poll the drive. Surely even if you want the mount point to be /cdrom, it's
convenient that with 2.6 + udev any CD-ROM drive connected to your laptop (whether from the
base station, via USB or whatever) gets a symlink called /dev/cdrom?
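That symlink comes from a udev rule, and adding a local one (for instance to get a /dev/dvd
link back) is only a few lines; the rule file name and the sr0 device name below are
assumptions for a typical single-drive machine:

    # See what the kernel calls the drive first:
    ls -l /dev/sr*

    # Then add a local rule (the file name is arbitrary):
    cat <<'EOF' | sudo tee /etc/udev/rules.d/99-local-cdrom.rules
    KERNEL=="sr0", SYMLINK+="dvd", SYMLINK+="cdrom"
    EOF

    # Reload udev's rules (the exact command depends on the udev version your
    # distribution ships) or simply reboot.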

Of course if all your hardware was really well supported in 2.4 then you'll notice less
improvement from 2.6. Infrastructure-wise it seems much nicer to me: fewer hard-wired
assumptions and more exposure of events to userspace.

It will work somewhat like that...

Posted Aug 18, 2008 14:45 UTC (Mon) by NAR (subscriber, #1313) [Link]

> Nothing Linus or anyone else working on 2.6 could have done would have made proprietary drivers stop being third rate.

Except not changing the internal interfaces every other week...

> the mount point for detected media is configurable by your distribution or by you

Surely. The problem is that I've never found where it can be configured. Also, I used to have a /dev/dvd link that was lost somewhere in the upgrade from Gutsy to Hardy (which was a bad decision). I have a feeling that distributions tend to get first-time installation working fine, but they still have problems with upgrades. I'm pretty sure that upgrading from Windows XP to Windows Vista is also quite painful, but while Windows users need to upgrade only every 3-4 years, Linux users have to upgrade much more often.

> Of course if all your hardware was really well supported in 2.4 then you'll notice less improvement from 2.6

Actually, 2.4 supported my hardware at the time better than 2.6 supports my hardware now. And it's not just the graphics card - xawtv used to be able to control the volume, but now it just doesn't work. It's annoying enough that I'm using Windows more and more at home.

Wrong question to ask...

Posted Aug 16, 2008 7:34 UTC (Sat) by khim (subscriber, #9252) [Link]

> The question is: is it worth having so many different configurations?

And the answer is: there is no alternative. The Windows world includes a lot of duplication too: a fresh installation of Windows XP already includes three or four versions of libc (called msvcr* in the Windows world), for example. But since there is a central authority, it can force its views on the world. In the Linux world the only central authority is Linus - and even he has very little control over the kernel shipped to customers, let alone the rest of the system. So there is no way to avoid different configurations...

> I understand that Linux evolves faster than Windows, so there are a whole lot more versions out there - but do these releases add something for the desktop user?

Probably not. But then the situation is simple: either you release stuff - and it can be used by new projects - or you don't release stuff at all, and then you have a frozen desktop forever. Joint development and early releases only for trusted partners don't work very well in the OSS world... Again: no central authority, no way to synchronize development... Some weak synchronization is possible, but nothing like in the Windows world...

Something going on with Fedora

Posted Aug 16, 2008 6:04 UTC (Sat) by tzafrir (subscriber, #11501) [Link]

So now you won't get those announcements from Fedora.

And in exchange you will get them from:

* The GNOME project
* The KDE project
* The GNU project
* The Linux Kernel maintainers
* The Apache foundation
* The OpenOffice maintainers
* Sun (for VirtualBox and MySQL)
* The Eclipse foundation
* The Blender foundation
* ...

And those are only the big guys. Many of the packages are maintained by much smaller groups.
Some of them couldn't care less about your distribution (or wouldn't have the manpower to care
about it).

You'll have to get the public keys of all of them in a reliable way, verify their
announcements, and help them debug the problems that come from applying the latest fix on your
particular platform.
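In concrete terms, that means doing something like the following for every single upstream you
pull from (the file names here are placeholders):

    # Import the project's signing key, obtained over some channel you trust:
    gpg --import upstream-signing-key.asc

    # Verify a signed announcement, or a detached signature on a release tarball:
    gpg --verify announcement.txt.asc announcement.txt
    gpg --verify foo-1.2.3.tar.gz.asc foo-1.2.3.tar.gz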

Sounds like fun!

awful idea

Posted Aug 22, 2008 0:34 UTC (Fri) by surfingatwork (guest, #50868) [Link]

Not sure where to start. You mean like on Windows, where the user is responsible for finding
and installing security bugfixes for each software package individually? Or where the software
packages each have their own updater/downloader mechanisms?

Maybe that's not what you mean.

If it's a one-off, then a repository can be set up for that single software package, which is
something companies do. Then it integrates with the system tools.
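As a rough sketch (the names and URLs are invented), such a vendor repository on a
Fedora-style system is just a small file dropped into /etc/yum.repos.d/:

    cat <<'EOF' | sudo tee /etc/yum.repos.d/example-vendor.repo
    [example-vendor]
    name=Example Vendor's packages
    baseurl=http://packages.example.com/fedora/$releasever/$basearch/
    enabled=1
    gpgcheck=1
    gpgkey=http://packages.example.com/RPM-GPG-KEY-example
    EOF

    # After that, updates for the vendor's package arrive through the normal tools:
    sudo yum install example-app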


Something going on with Fedora

Posted Aug 15, 2008 19:35 UTC (Fri) by xorbe (guest, #3165) [Link]

> we recommend you not download or update any
> additional packages on your Fedora systems.

If it's just server problems (not security), then
that wording was chosen way too ominously...

