Fedora developers on PackageKit
PackageKit aims to take the pain out of package management on GNU/Linux systems and create a system that can compete with Windows and Mac. Development is proceeding at a rapid pace and it is set to be available in Fedora 9. To find out more, we talked to Richard Hughes, the project's creator, and Robin Norwood, the Fedora feature owner; as always, you can catch some screenshots at the end!
Fedora developers on PackageKit
Posted Jan 18, 2008 17:42 UTC (Fri) by kripkenstein (guest, #43281) [Link]
The article mentions Klik and Smart PM, but what about Autopackage? It seems there is some overlap there. Anyhow, more importantly, having Klik/Autopackage/PackageKit/etc. seems not much better than yum/yast/apt-get/etc. ...
Fedora developers on PackageKit
Posted Jan 18, 2008 19:41 UTC (Fri) by nim-nim (subscriber, #34454) [Link]
Autopackage reinvents the wheel even more than Smart, so you can take Smart's cons and multiply them several times over.
Fedora developers on PackageKit
Posted Jan 18, 2008 20:01 UTC (Fri) by jwb (guest, #15467) [Link]
The depicted UI is one of the worst I've seen. What's supposed to be so great about it? And what's so great about the Windows and Mac package installers? Mac OS X has two completely different ways of installing packages: you can either drag an application out of a disk image, or you can double-click a package to run an installer. Some applications (like Adobe Reader) have custom installers. In Windows there are at least 80 wholly different styles of installer, at least three of which are used by Microsoft itself. Many of the UIs are really hideous. Somehow I don't feel that Linux is lagging either of the other two in the field of package management. It seems to me that the Linux system of editorially-managed repositories is generations ahead of any commercial OS.
Fedora developers on PackageKit
Posted Jan 18, 2008 22:54 UTC (Fri) by alecs1 (guest, #46699) [Link]
That was the first thought that came to my mind: how is the Linux (at least Debian) way of managing installed programs worse than that of Windows and OS X? What is there to compete with? The same goes for the interface.
Too much choice
Posted Jan 18, 2008 23:24 UTC (Fri) by khim (subscriber, #9252) [Link]
Take a look from the ISV's side. Debian packages work great - if you can create separate packages for Debian stable, Debian testing, Debian unstable, Ubuntu 6.06, Ubuntu 7.04, and Ubuntu 7.10. Argh. That's just for Debian! For RPM-based distributions you need 10 more packages. This is the real shame of Linux. I mean: sure, it's great that the guys from Opera produce a huge number of versions tuned for all popular distributions. The fact that they need to do this is a shame.
Without ISV support Linux will never win the battle for the desktop, and ISVs will not support Linux until it's easy to create a simple installable package. And no, "./configure; make; make install" is NOT a good way to install a package not included in your repository. And if you think 10,000 different bank clients would be a nice addition to Red Hat's or Ubuntu's repository, then you are clearly mistaken...
Autopackage is so totally bad it's not even funny, but that does not make the problem any less real...
Too much choice
Posted Jan 19, 2008 0:29 UTC (Sat) by jwb (guest, #15467) [Link]
That's great, but I don't want ISV packages coming anywhere near my system. It is fine if the ISVs want to provide software, but the packages should be created by experts who know what they are doing, like all the other packages on my Ubuntu system. I do not trust software vendors to understand how to package software for Linux. It's that simple.
Too much choice
Posted Jan 20, 2008 10:18 UTC (Sun) by jhs (guest, #12429) [Link]
I think this is a chauvinist comment. You are insisting on having middle-men modify the software that the author wrote before you use it. It's possible that the authors of the software know better how to integrate it into a distribution than a volunteer packager in some cases. The reality is that this is a case-by-case situation.
And as I said in my other reply in this thread, the current package system does not scale. Nobody is satisfied: distributions can only get as much software on their platform as they have resources to make packages for; ISVs are frustrated with the effort it takes just to get the software in front of the user's eyes; and users are frustrated because it's hard to find and install software for their system easily.
In some situations, like servers and secure environments, I fully support packaging and the good tight integration that comes with it. But free software is about choice, and right now, there is no choice for software distribution. Everybody just has to bite the bullet.
Too much choice
Posted Jan 20, 2008 12:12 UTC (Sun) by jmtapio (subscriber, #23124) [Link]
I think this is a chauvinist comment. You are insisting on having middle-men modify the software that the author wrote before you use it. It's possible that the authors of the software know better how to integrate it into a distribution than a volunteer packager in some cases.
While it is possible that some authors are better at integration than some packagers, in my personal experience packagers are usually better at integration than the original authors. I believe this is because the authors are ultimately more interested in their own specific software, its quality and functionality. Packagers, on the other hand, are interested in making the whole system coherent. This includes things like placing documentation where it is supposed to be, integrating startup scripts and wrappers, registering this and that with the system, and so on. All of that is usually secondary for the authors themselves.
Too much choice
Posted Jan 22, 2008 12:58 UTC (Tue) by Tom2 (guest, #43780) [Link]
Perhaps this is a circular argument?
1. Distributions fail to provide tools to let ISVs provide software that integrates easily with their distribution.
2. ISVs therefore produce packages that don't integrate well.
3. Users blame ISVs, and applaud distributions for their centralised packagers, since they clearly work much better.
Too much choice
Posted Jan 24, 2008 21:18 UTC (Thu) by vmole (guest, #111) [Link]
1. Distributions fail to provide tools to let ISVs provide software that integrates easily with their distribution.
For Debian-based distributions, this is simply false. The tools (dpkg, debhelper, etc.) are there, and the documentation is there. The facilities to ask questions are there. And you don't have to build separate packages for each distribution variation -- it takes real effort to build packages that install and run on stable that won't install and run on testing or unstable.
Yes, it takes time to read the docs, and learn the basic packaging tools. But if you're not willing to do that, then why on earth would I trust your FooBarPkg package not to overwrite some critical library file or configuration? And if you don't want to take the time to integrate with the basic distribution (which might be a reasonable decision), then there is a long established, well-worked out scheme: unpack the tar.gz file under /opt/pkgname, and do whatever the hell you want under there. Anything in between is silly and dangerous.
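(To make the parent comment's point concrete, here is roughly what a minimal debhelper-based package looks like. The sketch is abridged and illustrative: "foobar" and the maintainer address are hypothetical, a real package also needs a debian/changelog and debian/compat, and dpkg-buildpackage drives the rules file.)

    # debian/control (abridged)
    Source: foobar
    Section: utils
    Priority: optional
    Maintainer: Example ISV <dev@example.com>
    Build-Depends: debhelper (>= 5)

    Package: foobar
    Architecture: any
    Depends: ${shlibs:Depends}
    Description: example third-party tool
     Longer description here.

    # debian/rules (abridged; recipe lines are tab-indented)
    #!/usr/bin/make -f
    build:
            ./configure --prefix=/usr
            $(MAKE)
    binary: build
            $(MAKE) install DESTDIR=$(CURDIR)/debian/foobar
            dh_installdeb
            dh_shlibdeps
            dh_gencontrol
            dh_md5sums
            dh_builddeb
    clean:
            [ ! -f Makefile ] || $(MAKE) distclean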
Too much choice
Posted Jan 20, 2008 15:57 UTC (Sun) by robilad (guest, #27163) [Link]
Authors cannot possibly know much about most of the platforms their software is distributed on, given the sheer amount of choice in platforms and their limited access to them (or, in the case of ISVs, given which platforms they make money from and are willing to support). A system that puts authors above distributors is going to fail, as it puts people who have next to no idea about the end user's environment in charge over those who do.
Too much choice
Posted Jan 21, 2008 12:14 UTC (Mon) by kleptog (subscriber, #1183) [Link]
I'm not sure this is such a big deal. Things like Java and Acrobat have been distributed as fancy tarballs for ages. All ISVs need to do is put up a tarball with a description of the system requirements, and people will make installers for the various systems for you, free of charge. If you as an ISV stick to the LSB libc and other standard libraries, you really shouldn't have any problems...
Too much choice
Posted Jan 21, 2008 12:59 UTC (Mon) by robilad (guest, #27163) [Link]
Java has been an amusingly painful story, if you look at the trouble the Debian/JPackage/Gentoo folks have been going through to make the binary blobs fit into the distributions: from in-the-middle-of-the-night updates to the tarballs by the vendors, without a version change, breaking md5 sums all over the place, to the inability to legally fetch the tarballs directly without interacting with a licensing mechanism, to breakage due to binary ABI changes, and a whole lot of other fun war stories the bug trackers and change logs can tell.
Too much choice
Posted Jan 19, 2008 0:38 UTC (Sat) by man_ls (guest, #15091) [Link]
ISV: "We have some nice software for you."Debian: "Just show us your packages so we know we can trust you."
ISV: "Packages? We don't need no stinking packages!"
Shooting ensues.
...or too few resources?
Posted Jan 19, 2008 0:55 UTC (Sat) by jd (guest, #26381) [Link]
I agree with the sentiment, but not with the problem. If the binaries are statically linked, or the libraries are provided and installed when not already globally present (using LD_LIBRARY_PATH or one of the other loader tricks to use local libraries where you need to force a version), you can indeed provide one installation that will work on every distro.

Or you could compile to a shim layer and provide a distro-specific abstraction layer or shim. The advantage of this approach is that you would then have one x86 binary for all x86-based operating systems, really just an extension of the iBCS module idea and the WINE approach. A port would then be nothing more than adapting an existing shim and compiling that shim. The program itself would only ever be compiled once per version. Likewise, a change to the application would only require recompiling the application; the shims would not be affected.
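(A minimal sketch of the loader trick from the first option above; the application name and paths are hypothetical.)

    #!/bin/sh
    # Wrapper installed as /opt/foobar/bin/foobar (hypothetical).
    # Prefer the libraries bundled with the application over the
    # system copies, so the same binary runs on any distribution.
    FOOBAR_HOME=/opt/foobar
    LD_LIBRARY_PATH="$FOOBAR_HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    export LD_LIBRARY_PATH
    exec "$FOOBAR_HOME/libexec/foobar.bin" "$@"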
A third option would be for ISVs to more actively adopt the approach of licensing out building for commercial distros, so that they only have to focus on a very narrow range of builds for systems they want supported and which can't (for whatever reason) support themselves.
What do all three of these things have in common? They tie up resources. The first ties up much more disk space and much more bandwidth. The second ties up a considerable amount of additional manpower. The third ties up money. They are all perfectly achievable, but only if someone invests the resources needed. No resources, no transparent multi-platform support.
People provide the multiple installation packages, then, not because they have to but because they have decided that they do not wish to invest the resources in any of the alternative solutions that exist. It is a choice, not a requirement.
As for the package builder packages - we really do have enough of those, I think, and we have far too few abstraction libraries that can reduce the need for vast numbers of differently-configured packages. Again, not because it's impossible, but because that's not what people are developing. Package development kits are the right solution if, in fact, packaging is the right problem. Even a perfect solution is only helpful if the actual, underlying problem is something in the class of problems the solution is designed for. I have always been convinced that this is, in fact, the wrong problem, which is why none of the solutions end up really solving very much.
Of course, this entire line of reasoning only applies for code that can be made portable by correctly designing and implementing it. Some code is inherently non-portable, because it relies on too many highly implementation-specific details of what it is working with. In other cases, a portable solution is possible but so sub-optimal for any system that it is no longer worth using.
Anything that is not in those two categories can be written and compiled for a specific instruction set and word size. (And endianness, in the case of MIPS processors, as they can be switched at boot time.) Everything else can be hidden from the program, making it unnecessary for the program to know anything else.
...or too few resources?
Posted Jan 19, 2008 3:00 UTC (Sat) by raven667 (subscriber, #5198) [Link]
A third option would be for ISVs to more actively adopt the approach of licensing out building for commercial distros, so that they only have to focus on a very narrow range of builds for systems they want supported and which can't (for whatever reason) support themselves.
This is the only kind of idea I see gaining any traction. I know that Novell has a very comprehensive automated build environment available, and as I recall IBM does too. Red Hat has its Red Hat Exchange, which kind of speaks to this issue, assuming they provide some editorial control over the packaging of the software they resell. Distributors have to reach out to ISVs, maybe even at a bit of a loss, and help them make packages that don't suck, because it affects the perception of the distributor's entire ecosystem. RPM hell is not what a distributor wants to be known for.
...or too few resources?
Posted Jan 19, 2008 16:02 UTC (Sat) by vonbrand (guest, #4458) [Link]
This is exactly the same discussion as the one about a stable API in the Linux kernel.
I've messed around with a few different distributions, and the differences between them are large. If you look not only at ABI differences but also at packaging guidelines (what goes into the main package, what goes into a -devel package, what the package naming guidelines are, ...), you get into a mess with dependencies very fast. It was called "RPM hell" in large part because an RPM for one distribution or version just didn't fit into the next one (there was no ".deb hell" because there was just one collection of those around).
deb hell :
Posted Jan 20, 2008 7:34 UTC (Sun) by muwlgr (guest, #35359) [Link]
Now there is one. Debian and Ubuntu have diverged their repositories far enough that it's not easy to install .debs across them (however, it's still not hard to rebuild them from source). Earlier, the same could be said about Debian vs. Stormix. Overall, as the policy becomes less strict and less centralized, the hell ensues.
Assuming you're right
Posted Jan 19, 2008 19:23 UTC (Sat) by lakeland (guest, #1157) [Link]
Wouldn't the solution to that problem be some sort of automatic package builder which takes one input package and produces about a hundred packages for all sorts of distributions and versions? I mean, I'm not sure I agree with the problem definition you've given, but it does seem that most of the commercial software I've tried for Linux works that way rather than working with packages... So the solution appears to be making it easier to produce proper packages for every distribution, rather than producing a sorta meta-package... doesn't it?
Assuming you're right
Posted Jan 24, 2008 20:44 UTC (Thu) by vonbrand (guest, #4458) [Link]
Creating that "automated package builder" (and then keeping it up to snuff WRT a few hundred distributions) is surely much harder than just letting one guy hash it out for each package at each end...
Too much choice
Posted Jan 20, 2008 10:02 UTC (Sun) by jhs (guest, #12429) [Link]
I could not agree more! I did not know anybody else thought this way! Every time I think about distributing software for Linux I want to scream!
I have developed a lot of software for Linux as an ISV, and I can say that packaging is the worst part of making software for Linux. khim, add the VirtualBox download page to your shame list.
I have packaged plenty of software for all Linux distributions and for Solaris. I understand all the technical issues involved, and I understand how rpm/yum and deb/apt are technically far more advanced than the Windows equivalents. But the reality is, it is trivial to install third-party software on Windows. This is one remaining area where Windows just works and Linux is woefully inadequate from the user's perspective.
I think there are several things to think about in order to reach a good mix of the advantages of Linux distribution packaging and Windows "mu-packaging" (i.e. no packaging or minimalist packaging).
- Well-done packages for Linux are advantageous. Setting aside that they are impossible with non-free software, some components (LDAP, most servers, logging) are best handled by experts who understand the big picture of the distro.
- But distributions are also hamstringing themselves, since in practice all software must be packaged for each distro, in a process ISVs consider tedious and time-wasting.
- The current Linux software distribution system is centralized and assumes dumb nodes (ISVs) on the edge of the network, with smart hubs (packagers) in the center. This is wrong. To scale to be a great software platform, we need a decentralized distribution system where smart edge nodes can get software to all their users with minimal effort.
- Windows ISVs treat their software packages (setup.exe) as code; Linux packagers treat their software packages (something.rpm) as data. In reality, both are code and data. The difference is perception.
FWIW, my favorite package system is Solaris's. Conceptually, packages are just tarballs with the standard preinst hooks and basic dependency tracking. Not too smart, not too dumb, and convenient for ISVs.
Too much choice
Posted Jan 20, 2008 10:57 UTC (Sun) by mjthayer (guest, #39183) [Link]
> khim, add the VirtualBox download page to your shame list. Shame for VirtualBox, or for Linux packaging?
Too much choice
Posted Jan 20, 2008 11:15 UTC (Sun) by jhs (guest, #12429) [Link]
Linux packaging. Notice how Windows and OS X require one entry each, and Linux gets twenty-one, plus a walkthrough for installing through apt.
Too much choice
Posted Jan 20, 2008 11:30 UTC (Sun) by mjthayer (guest, #39183) [Link]
That is actually also a kernel API/ABI-related problem. If you try them out, you will find that many of the rpms and debs on the web page will install on different distributions (in fact, you could get by with one of the debs and two or three of the rpms). But since each distribution comes with different kernel versions, the distribution packages also contain pre-built kernel modules so that the people installing do not need to build their own. If someone does install a package which does not contain modules for their kernel though, and if they have gcc, kernel headers and friends installed, the right module will be built during the installation.
Too much choice
Posted Jan 20, 2008 12:03 UTC (Sun) by jhs (guest, #12429) [Link]
That's a good point, and I'm sure a product like VirtualBox is particularly sensitive to kernel API/ABI changes. Kernel developers don't want a stable API and ABI, and I think they make their case well. But there is still a problem: distributing software for Linux is hard, and the whole community suffers because of that. For free software projects (or at least software distributed in source form), the changing APIs and ABIs aren't the only problem: boot scripts, SELinux, firewalls, mountpoints, authentication back-ends, and dependencies (to name but a few) must all be considered when packaging software for Linux.

Again, I don't think I have the answer, I just think it needs broad discussion. But in my opinion, a package-building wizard that works for >= 90% of ISVs is sorely needed. To be successful it will probably require development and maintenance by experts from all the major distributions. We need a way to get all the benefit of the package systems (updates, dependability, maintainability, scalability, dependency tracking, authentication, etc.) but still make it a no-brainer for (most) ISVs to ship their software on "Linux." We need to shed ourselves of this bottleneck of packagers for every distro. Why can't third-party software catch on among the masses and *then* Canonical decide to make *their* own package to make it really sharp and integrated (probably brown in color, of course)?
Too much choice
Posted Jan 20, 2008 14:51 UTC (Sun) by mjthayer (guest, #39183) [Link]
I think that my current packaging ideal would be as follows. The package developer creates a source meta-package using some reasonably easy-to-understand system which can build to (source and binary) rpm, deb, and similar targets. The developer would of course still need to do a build on a few different systems (which could however be virtual machines or chroot jails) to get a set of packages which will install on most distributions. Then the people who would otherwise have packaged the software for the distributions can simply liaise with the developer to make sure that the meta-package complies with any distribution policies. If source is available, distributions could do automatic rebuilds of the source debs and rpms when they add the software to their repositories. I am planning to give something like this a try when I have time.
Too much choice
Posted Jan 24, 2008 21:02 UTC (Thu) by vonbrand (guest, #4458) [Link]
Each distribution has its own "ABI":
- Names of "system" accounts and groups
- Exact versions (and configuration) of base libraries
- Setup for starting/stopping services (see this LWN issue for a discussion of some different alternatives)
- Even if a traditional SysV runlevel-based system is in place, runlevels have very different meanings
- Kernel versions, and their small (and not-so-small) differences in behavior
Too much choice
Posted Jan 25, 2008 8:49 UTC (Fri) by mjthayer (guest, #39183) [Link]
This is true, but it is still possible to produce a package which will work just about everywhere (take a look at the VirtualBox "all distributions" installer for a proof of concept, putting aside the merits and weaknesses of the implementation). It is certainly quite a bit of work, but that is at least in part due to lack of experience in the field - I think that someone who has done it once would be able to do it again with much less effort.
Too much choice
Posted Jan 21, 2008 3:50 UTC (Mon) by salimma (subscriber, #34460) [Link]
That is what DKMS is for, surely.
Too much choice
Posted Jan 21, 2008 6:17 UTC (Mon) by mjthayer (guest, #39183) [Link]
DKMS is something I hear of once every six months. Is it really in use? And if so, how much overhead does it impose on end users?
Too much choice
Posted Jan 22, 2008 0:40 UTC (Tue) by salimma (subscriber, #34460) [Link]
It's used by at least one popular third-party repository for Fedora (FreshRPMS), and since it was developed by Dell, presumably the repositories that Dell hosts to support their Linux machines use DKMS for kernel modules as well. The only overhead is that you need a compiler and the kernel headers installed. (Oh, and on Fedora, right now, it cannot clean up old modules when the corresponding kernel is removed. On Debian/Ubuntu this works, and they're working to port the fix.)
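(For the curious, this is roughly what DKMS asks of a vendor: a dkms.conf dropped next to the module source, plus three commands at install time. The module name and version below are hypothetical; with AUTOINSTALL="yes", the module is rebuilt automatically for newly installed kernels.)

    # /usr/src/foomod-1.0/dkms.conf
    PACKAGE_NAME="foomod"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="foomod"
    DEST_MODULE_LOCATION[0]="/kernel/extra"
    AUTOINSTALL="yes"

    # typically run from the package's post-install script:
    dkms add -m foomod -v 1.0
    dkms build -m foomod -v 1.0
    dkms install -m foomod -v 1.0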
Too much choice
Posted Jan 20, 2008 16:30 UTC (Sun) by robilad (guest, #27163) [Link]
Decentralized systems for shipping interdependent binaries don't work in practice, as sweet as they sound in theory:
* you get metadata drift,
* you get binary-incompatible dependencies due to build-time differences,
* you lose any opportunity to QA the distribution, as you no longer know what your dependency graph looks like,
* you need to ship and support every (broken) version of a dependency someone has ever used, as ISVs don't upgrade their dependencies unless they really, really have to,
* etc.
DLL hell, RPM hell, JAR hell: the phenomenon has many names. The only reason why Debian, Fedora, ports, or even CPAN scale so well is that:
* they are centralized
* they are source-code based
Too much choice
Posted Jan 21, 2008 4:24 UTC (Mon) by jhs (guest, #12429) [Link]
I think these are all great points. Package frameworks are highly evolved, and package developers are talented and invaluable. The current package system is great for core components and for tight integration. Where there is a problem yet to be solved is with end-user applications which are not (yet) core components of the distro.

Plenty of developers write end-user software that is an endpoint in the dependency graph. These developers want to reach as many new users as possible, so it would be great if they could trivially distribute software on the major distros. (Again, dependencies are only one issue: boot scripts, on-line documentation, iptables, inetd, cron, and other components must also be dealt with.) Much free software is built with Autotools to make its *source* portable across many platforms. But at the other end, it's not so easy to turn compiled binaries into packages for each distro and architecture.

Ideally, end-user software could be distributed in "good enough" wizard-built packages that allow people to try the software out. If it looks like something is the new killer app, and Distro X decides to bundle it in their distro, then at that point they can work on their custom packages, ideally in cooperation with the upstream developer. It's all free software, of course!
Too much choice
Posted Jan 21, 2008 12:25 UTC (Mon) by robilad (guest, #27163) [Link]
Usually, if something is the killer app, someone volunteers to package it for their favorite distribution. You can encourage that as a pro-active upstream, and keep your packagers informed of important changes ahead of releases, for example.
Too much choice
Posted Jan 24, 2008 21:07 UTC (Thu) by vonbrand (guest, #4458) [Link]
autotools is almost unneeded now; the staggering differences among Unix systems are very much non-existent among Linux distributions. And, given the right tools, building one's own RPMs is not that hard. Sure, following the packaging guidelines for one's distribution and creating a package acceptable for official inclusion is another matter entirely.
Too much choice
Posted Jan 26, 2008 11:46 UTC (Sat) by talex (guest, #19139) [Link]
All the problems you listed also apply to centralised systems. E.g. if I create a .deb for Ubuntu that depends on other Ubuntu packages, what stops Ubuntu from updating their packages and causing exactly the same problems?
"You need to ship and support every (broken) version of a dependency someone has ever used, as ISVs don't upgrade their dependencies unless they really, really have to."
I'd say this problem only applies to centralised systems. e.g. I might need to ship a fixed libpng with my Ubuntu package, but not with my Fedora package. In a decentralised system, I'd just link against the known-good Fedora version for both.
Too much choice
Posted Jan 24, 2008 20:54 UTC (Thu) by vonbrand (guest, #4458) [Link]
Re: Windows: Sure, just stash everything you might need into the installer, and drop it all onto the unsuspecting victim installee, in the vague hope nothing breaks due to several versions of the same stuff... Plus, there are not that many versions of the base system around.
Re: Solaris: As always, it is very easy when there is only one target, and a tightly controlled one to boot.
Linux, with its literally hundreds of distributions, each with its own objectives and guidelines, is a quite different beast. That is exactly the key point, and the biggest strength of Linux (and, as this thread shows, a source of weakness too).
Too much choice
Posted Jan 20, 2008 17:57 UTC (Sun) by mjthayer (guest, #39183) [Link]
> Autopackage is so totally bad it's not even funny, but it does not make problem less real... But their web page is an excellent resource on binary compatibility issues on Linux, if you look past the ranting.
Too much choice? huh?
Posted Jan 21, 2008 6:32 UTC (Mon) by grouch (guest, #27289) [Link]
Take a look from the ISV's side.
No, thanks anyway.
It has been a very long time since I felt the need to rent software. (That "V" in "ISV" stands for vendor, but they don't sell stuff.) If that "vendor" is unwilling to provide source and a suitable free (libre) software license, I have no interest in their wares, and I certainly do not want them bumbling around in unknowable ways with my operating system.
Too much choice
Posted Jan 21, 2008 9:34 UTC (Mon) by Jel (guest, #22988) [Link]
You don't need "10 more packages". You need one source package, repository servers (i.e., websites), and a few automated build scripts. Then you need to supply yum and apt URIs to users. It's actually much easier than having to write an applet that sits in the Windows systray and monitors for updates, informs the user, downloads them, etc. Even on Windows, you have to worry about x86, x86-64, Intel, AMD, Vista, XP, etc. It's just that on Debian etc., the issue has been solved.
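(Concretely, "supplying yum and apt URIs" comes down to publishing a line or two per package manager; the host names and repository names below are hypothetical.)

    # apt: one line in /etc/apt/sources.list
    deb http://packages.example.com/debian stable main

    # yum: a file dropped in /etc/yum.repos.d/example.repo
    [example]
    name=Example ISV packages
    baseurl=http://packages.example.com/fedora/8/
    enabled=1
    gpgcheck=1
    gpgkey=http://packages.example.com/RPM-GPG-KEY-example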
Fedora developers on PackageKit
Posted Jan 19, 2008 0:54 UTC (Sat) by tzafrir (subscriber, #11501) [Link]
The only thing PackageKit helps with is creating a single user interface to various packaging systems. You would still need different packages (formats, behaviours, and such) for different distributions. Only now you can save on the training costs. You can also add an update manager to your favorite desktop without having to rewrite it for each distribution. Will it actually work well? I figure we have yet to see that.
Fedora developers on PackageKit
Posted Jan 19, 2008 10:02 UTC (Sat) by alecs1 (guest, #46699) [Link]
A curiosity of mine: Opera has always had a package for each OS I had on my computer, and they all worked perfectly (and this applies even to the weekly builds). How much effort does it take them to do that?
Effort is beyond abilities of average ISV
Posted Jan 19, 2008 11:14 UTC (Sat) by khim (subscriber, #9252) [Link]
To make it all possible you need a few distributions on a few computers (KVM is your friend), someone who knows all these distributions and writes correct instructions for package builders, etc. 99% of ISVs are two guys with two computers and Visual Studio .NET (5 years ago it was Visual Basic 6). They DON'T have serious resources to pour into the packaging problem. More: they don't even have TIME to learn how to do packages. In the Microsoft world they use wizards for the whole process, from start to finish: they only need to enter a few names. While the resulting "installer" can be a nightmare to support, at least it exists. Linux does not even come close: there is no easy way to create a quick-and-dirty multi-platform package.
Effort is beyond abilities of average ISV
Posted Jan 19, 2008 14:44 UTC (Sat) by tzafrir (subscriber, #11501) [Link]
And in the Microsoft world you eventually have huge pains with third-party programs. You can't simply "upgrade". Or you can use klik and such, which is equivalent to the naive "Microsoft" approach. There is something equivalent to "multi-platform packages": LSB RPM packages. Those will sort of work. But how do you resolve dependencies? If a security issue comes up with libpng, will the vendors of all third-party packages provide you updates in a timely manner? (Hint: no.) With any Linux distribution, you just update the libpng package and don't need to download 200MB of updates for that.
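(The contrast described here is stark in practice: the shared library is fixed once, centrally, and every application picks it up on its next start. Exact package names vary by distribution; libpng12-0 is Debian's.)

    # Debian/Ubuntu: one small download fixes every user of the library
    apt-get update && apt-get install libpng12-0

    # Fedora: likewise
    yum update libpng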
Effort is beyond abilities of average ISV
Posted Jan 19, 2008 23:04 UTC (Sat) by cortana (guest, #24596) [Link]
I am terrified when using a Windows system, really. How many private copies of vulnerable versions of libpng and zlib infest a typical machine?
Effort is beyond abilities of average ISV
Posted Jan 20, 2008 10:04 UTC (Sun) by NAR (guest, #1313) [Link]
I'm afraid most people are not interested in security updates, especially if they can't install the damn thing in the first place. Currently the "Next->Next->Finish" type of installer usually works better (for installing!) for casual users than installing some 3rd-party Linux package. Anyway, how many applications can really be vulnerable to a libpng bug? The browser, the mailer, some media player? Most of them do get security updates, unless the user turned them off.
Effort is beyond abilities of average ISV
Posted Jan 20, 2008 12:59 UTC (Sun) by tzafrir (subscriber, #11501) [Link]
Again, we have those in Linux (e.g. klik). And they are not popular, for a good reason. Next->next->next does not include the time it takes to:
* locate the software
* verify that it is not a trojan
The mere fact that you have to ask the user questions is a usability bug. In Debian it was fixed long ago with debconf: a standard way to ask questions, with priorities (so you can tell the package being installed to only ask important questions, or to ask all questions), and you can provide answers in advance.
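(A sketch of the debconf mechanism described above: answers can be preseeded and low-priority questions suppressed, so installs need not prompt at all. The package and question names here are hypothetical.)

    # Preseed an answer before the package is installed:
    echo "foobar foobar/listen-port string 8080" | debconf-set-selections

    # Only ask critical questions from now on:
    echo "debconf debconf/priority select critical" | debconf-set-selections
    apt-get install foobar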
Effort is beyond abilities of average ISV
Posted Jan 22, 2008 13:19 UTC (Tue) by petebull (guest, #7857) [Link]
And if they use a standalone installer, they have yet another application they have to look after to stay secure. If they put up with the burden of adding the repository, verifying the repository signing key, and installing it with the distribution's package management system, the updates will come in like any other security patch. ISV packaging will lead to even more code duplication with libraries like libpng etc. IIRC openSUSE's One Click Install provides a way for the casual user to add repositories and install software with one click.
Effort is beyond abilities of average ISV
Posted Jan 21, 2008 12:54 UTC (Mon) by robilad (guest, #27163) [Link]
Money can be used to purchase time from those that want to trade it for money.
Fedora developers on PackageKit
Posted Jan 20, 2008 20:34 UTC (Sun) by lacostej (guest, #2760) [Link]
Is there any substantial research in this area (e.g. a thesis)? It sounds like discussing the merits of centralized vs. decentralized packaging/distribution has several interesting aspects (quality, economics, ...). It looks like there's a topic our beloved editor could undertake one day: ISVs on Linux. It may even be interesting to survey the actors themselves (Opera, Skype, ...) and the distributors. E.g. Ubuntu has a commercial repository. Were they commissioned to package those packages?
What about zero-install?
Posted Jan 22, 2008 8:20 UTC (Tue) by thierryg (guest, #50061) [Link]
Hmm, as a ROX-Filer user, can I advertise Zero Install (http://0install.net) as an example of a distribution-neutral packaging system? No admin rights needed, minimal dependencies on the host, handling of multiple (possibly conflicting) dependency versions, and a distributed repository setup (i.e. an ISV can publish software by himself).
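(For reference, launching a Zero Install application is a single command against the publisher's feed URL, with dependencies fetched into a per-user cache and no root involved; the feed URL below is hypothetical.)

    0launch http://example.com/apps/foobar.xml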
What about zero-install?
Posted Jan 24, 2008 21:12 UTC (Thu) by vonbrand (guest, #4458) [Link]
Doesn't cut it. Each user installing stuff willy-nilly isn't a solution, it is a huge problem.
Sure, if it is a one-user machine, this works fine; but in that case, installing system-wide is not that much harder...
What about zero-install?
Posted Jan 25, 2008 13:03 UTC (Fri) by Tom2 (guest, #43780) [Link]
The page the GP pointed to contains only 147 words of text, including: "Users can share downloads without having to trust each other". It took me 15 seconds to read the page (I timed it). You might also like to read this page: http://0install.net/sharing.html OTOH, if you actually tried it and it didn't work (or caused "a huge problem"), then by all means post the details of what happened.
What about zero-install?
Posted Jan 25, 2008 13:22 UTC (Fri) by vonbrand (guest, #4458) [Link]
If you own the box, "install as a regular user" vs "install as root" is no big deal.
If it is a box with several users, "each one installs their own, untested stuff" is a headache in the best case, and a horrible security risk in any case.
Ever heard that Windows has to be reinstalled periodically because it gets messed up by random installs? It happened to me too, back when managing a Unix/Linux system involved fetching and installing software from source. It's exactly the same situation here: user accounts will have to be rebuilt once entropy has become excessive.
What about zero-install?
Posted Jan 25, 2008 14:00 UTC (Fri) by Tom2 (guest, #43780) [Link]
Well, Windows doesn't even have a version of Zero Install, so it must have been something else that messed it up. I'm not terribly familiar with Windows, but as I understand it, installation works basically the same way as with RPM or dpkg:
- You get a package file, containing executable code (e.g. "setup.exe" or "preinst.sh").
- You run the executable code with admin/root privileges.
- The code makes whatever changes it feels like to your system.
Obviously, that's likely to mess up your system. However, I don't see how that applies to Zero Install. In fact, Zero Install seems to be the exact opposite of the Windows/RPM/Debian approach. Could you clarify what exactly you're worried about?
What about zero-install?
Posted Jan 25, 2008 15:02 UTC (Fri) by vonbrand (guest, #4458) [Link]
How is a system messed up because the sole user installed lots of junk any different from a messed up $HOME because the user installed lots of junk?
If you look around, the latest malware doesn't take over the machine (that has become harder as MSFT has slowly tightened security); it contents itself with using the user's resources. Users installing applications under their own control is exactly what such stuff needs...
What about zero-install?
Posted Jan 25, 2008 17:14 UTC (Fri) by Tom2 (guest, #43780) [Link]
You wrote: "Sure, if it is a one-user machine, [Zero Install] works fine", so let's look at multi-user machines, which is (presumably) what you have a problem with. On a multi-user system, a messed up system is worse than a messed up user account because: 1) All users are affected. 2) Any security policies that might limit the damage (iptables, AppArmor, SE-Linux) are compromised too. On a single-user system (where the user is the admin) and where the user doesn't make use of multiple accounts or other sandboxing or security technologies you're right: Zero Install isn't a significant improvement on Debs. Except, of course, for the benefits mentioned by the OP: "No admin rights needed, minimal dependencies on the host, multiple (eventually conflicting) dependencies handling, distributed depository setup (i.e. an ISV can publish software by himself)." (and let's all agree that on a single-user machine, a user who accepts Zero Install's "Do you trust this GPG key" question is just as likely to enter the sudo password when dpkg prompts for it).
