
GNU Guix launches

From:  ludo-AT-gnu.org (Ludovic =?utf-8?Q?Court=C3=A8s?=)
To:  gnu-system-discuss-AT-gnu.org, bug-guix-AT-gnu.org
Subject:  Introducing GNU Guix
Date:  Fri, 23 Nov 2012 10:04:22 +0100
Message-ID:  <87y5hswuzt.fsf@inria.fr>

I am pleased to announce GNU Guix, an on-going project to build a
functional package manager and associated free software distribution of
the GNU system.

  https://savannah.gnu.org/projects/guix/

In addition to standard package management features, Guix supports
transactional upgrades and roll-backs, unprivileged package management,
per-user profiles, and garbage collection (more details in the manual.)

Guix is approaching its first alpha release.  It comes with a small and
growing user-land software distribution–i.e., it’s not a bootable
distribution yet, but rather one to be installed on top of a running
GNU/Linux system.

The distribution is self-contained: each package is built based solely
on other packages in the distribution; the root of this dependency graph
is a small set of bootstrap binaries.

The ROADMAP file sketches the current plan.  GNU hackers are encouraged
to add their package to the distribution.  A distribution built by GNU
hackers is a great opportunity to improve consistency and cohesion in GNU!
The TODO file details some of the many ways you can help.

Happy hacking, geeks!  :-)

Ludo’.

PS: Please follow-up to gnu-system-discuss@gnu.org or bug-guix@gnu.org.



GNU Guix launches

Posted Nov 25, 2012 17:59 UTC (Sun) by landley (guest, #6789) [Link]

I note that Android is a gnu-free platform. (It's also sugar free, fat free, and cholesterol free.)

GNU Guix launches

Posted Nov 25, 2012 18:25 UTC (Sun) by pheldens (guest, #19366) [Link]

it's also free as in freedom free

GNU Guix launches

Posted Nov 26, 2012 16:09 UTC (Mon) by drag (subscriber, #31333) [Link]

Android is free in the same way that FreeBSD is free. Which is to say more free than Linux. Except for the Linux parts.

GNU Guix launches

Posted Nov 27, 2012 2:04 UTC (Tue) by Trelane (subscriber, #56877) [Link]

Unless your phone is locked down, and/or carrying proprietary forks, in which case it's less free.

GNU Guix launches

Posted Nov 27, 2012 4:13 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

Which, I guess, would include things like what happened with Honeycomb, where Google decided not to release the source because the branch was a dead end. That's a risk any time you get software from a company (or project) that holds all the copyright, since they can arbitrarily relicense it even if it's under a copyleft license, but it can happen to any software that's under a non-copyleft license. It's a classic question about whose freedom you care more about: the developer who already has the source or the downstream user who may want it. Permissive licenses care more about the freedom of developers, and copyleft licenses care more about the freedom of users.

GNU Guix launches

Posted Nov 27, 2012 5:49 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

I read that Honeycomb source was released with Ice Cream Sandwich, just not tagged as such. Provided that source (IIRC, it was on this site I read it) was correct, it's just missing the neon lights, but you have access to it.

Also, a different way of putting the permissive/copyleft distinction is that permissive "cares" more about the code whereas copyleft "cares" more about the project as a whole. The GPL has certainly helped Linux as a project keep an identity and permissive licensing has helped get companies using a solid code as a base (e.g., the PostgreSQL forks) instead of Yet Another Flavor. Obviously, this isn't a complete picture (Android exists and there are how many BSD flavors?).

GNU Guix launches

Posted Nov 30, 2012 6:22 UTC (Fri) by khim (subscriber, #9252) [Link]

> I read that Honeycomb source was released with Ice Cream Sandwich, just not tagged as such. Provided that source (IIRC, it was on this site I read it) was correct, it's just missing the neon lights, but you have access to it.

You have the work which was done on trunk before the cut-off, but not what was actually released. Not that you'd want it: ICS is clearly better, so by now Honeycomb is mostly of historical interest.

The whole story also shows the other side of freedom: the freedom to push the thing out of the door on time. How many times has there been an opportunity to seize some piece of market share that "proper FOSS projects" (community-based ones) failed to realize because of their "it's ready when it's ready" philosophy?

Of course this is mostly a "company-driven" vs "community-driven" thing, not a licensing thing: when Red Hat needed decent C++ support it pushed "gcc 2.96" out of the door even though the "community" loudly protested. But the ability to hide embarrassing gory details helps.

So?

Posted Nov 25, 2012 20:32 UTC (Sun) by man_ls (guest, #15091) [Link]

I fail to see the relevance to the present story.

As to GNU-free, I also fail to see how that is a feature to be advertised. I sorely miss the GNU toolset on my Nexus S.

GNU Guix launches

Posted Nov 25, 2012 18:30 UTC (Sun) by oever (subscriber, #987) [Link]

The package manager is not new. It is Nix, which is also used in NixOS.

Nix is a package manager that allows users to install package versions separate from the main packages' versions. It also allows easy rollbacks of versions.

I believe that it also wants to make building packages from sources deterministic; building a particular version of a package should always result in the exact same binary package, regardless of the system on which it is built. This would allow users to verify that a binary package was actually built from the sources it claims to be built from.
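
As a rough illustration (this is not anything Nix itself ships, and the file names are hypothetical): if builds really were bit-for-bit deterministic, verifying a binary package could be as simple as rebuilding it locally and comparing checksums. A minimal Python sketch:

  import hashlib

  def sha256_of(path):
      """SHA-256 hex digest of a file, read in chunks."""
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  # Hypothetical file names: a package fetched from a binary mirror,
  # and the same package rebuilt locally from the claimed sources.
  downloaded = sha256_of("hello-2.8-from-mirror.tar")
  rebuilt = sha256_of("hello-2.8-rebuilt-locally.tar")

  if downloaded == rebuilt:
      print("binary package matches the local rebuild")
  else:
      print("mismatch: not built from the claimed sources, or the build is not deterministic")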

GNU Guix launches

Posted Nov 25, 2012 18:51 UTC (Sun) by idupree (guest, #71169) [Link]

Yay! I was thinking "why are they duplicating Nix?", but, as you note, they're using it!

Re: "building a particular version of a package should always result in the exact same binary package, regardless of the system on which it is built",

I wish that were the goal. Last time I checked, derivations were specified by a transformation A -> B where A is fixed but B is a large nigh-unknowable set of possible binary packages.

It's a lot of work to reduce "B" to a set with a single member. Build dates are often embedded in programs. Systems like `make`, indeed, depend on file timestamps. Parallel builds can make products appear in unspecified orders. `uname -a` can return different results; filesystem directory order can differ; CPU instruction support can vary. Luckily, we wouldn't have to make all of these unobservable to build systems; it is enough to patch build-systems to produce a deterministic result. If we miss something for a package, we just get two conflicting builds and can fix the build script.
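
To make that concrete, here is a small Python sketch (purely illustrative, not taken from any existing build system) of the kind of patching involved: packing build output into an archive while stripping timestamps, ownership, and directory-order differences, so the same inputs yield the same bytes.

  import os
  import tarfile

  def normalize(info):
      """Strip the non-deterministic metadata that varies between builds."""
      info.mtime = 0           # no build timestamps
      info.uid = info.gid = 0  # no builder-specific ownership
      info.uname = info.gname = "root"
      return info

  def deterministic_tar(source_dir, output):
      # Plain (uncompressed) tar: gzip would embed its own timestamp.
      with tarfile.open(output, "w") as tar:
          for root, dirs, files in os.walk(source_dir):
              dirs.sort()  # fixed directory traversal order
              for name in sorted(files):
                  path = os.path.join(root, name)
                  tar.add(path, arcname=os.path.relpath(path, source_dir),
                          filter=normalize)

  deterministic_tar("build-output", "package.tar")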

How practical is that? How many people are trying to do that work?

GNU Guix launches

Posted Nov 26, 2012 1:45 UTC (Mon) by jreiser (subscriber, #11027) [Link]

The "Domain Software Engineering Environment (DSEE)" of Apollo Computer Inc (mid to late 1980's or so) handled many of the "free variables" that you point out. It was possible to reconstruct old builds and get bit-for-bit identical results. For instance, the entire software development tool chain [the identity and version of every tool that touched the code] was captured in the manifest for a build.

GNU Guix launches

Posted Nov 26, 2012 7:09 UTC (Mon) by oever (subscriber, #987) [Link]

That sounds like bliss! It would be great if we could have a way to do this today. It would really help with security and debugging and could help reduce compile times. If everybody in a development team is using the same tool chain, determined by the checksum for the toolchain, then many of the compiled files could be shared among the team.

It would allow build systems to look at the checksums of the output and input files to see if a file is up to date or needs rebuilding. This is much more reliable than using mtimes. A log would need to be kept of which inputs give which outputs, but that seems worth it.
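
A minimal sketch of that idea in Python (the log file name, build command, and toolchain identifier are made up for illustration; this is not how any particular build tool does it):

  import hashlib, json, os, subprocess

  CACHE_LOG = "build-log.json"  # hypothetical log of input-hash -> output-hash

  def file_hash(path):
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  def inputs_hash(sources, toolchain_id):
      """Combine the checksums of all inputs (sources + toolchain) into one key."""
      h = hashlib.sha256(toolchain_id.encode())
      for src in sorted(sources):
          h.update(file_hash(src).encode())
      return h.hexdigest()

  def build_if_needed(sources, output, command, toolchain_id):
      log = json.load(open(CACHE_LOG)) if os.path.exists(CACHE_LOG) else {}
      key = inputs_hash(sources, toolchain_id)
      if key in log and os.path.exists(output) and file_hash(output) == log[key]:
          return  # inputs unchanged and output matches the recorded result
      subprocess.run(command, check=True)
      log[key] = file_hash(output)
      with open(CACHE_LOG, "w") as f:
          json.dump(log, f)

  # Hypothetical usage:
  # build_if_needed(["main.c"], "main.o", ["cc", "-c", "main.c"], toolchain_id="gcc-4.7.2")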

GNU Guix launches

Posted Nov 26, 2012 10:33 UTC (Mon) by rossburton (subscriber, #7254) [Link]

The Yocto Project does something like this to speed up distribution compiles, by checksumming all variables, dependencies, and sources. If the hash exists in the cache, a pre-compiled package is extracted and used.

GNU Guix launches

Posted Nov 26, 2012 12:55 UTC (Mon) by NAR (subscriber, #1313) [Link]

In a previous project a couple of guys did just that. C++ compiling on ridiculously slow Sparc workstations was not fun, so checksums were made from the sources and object files were reused. If I remember correctly, the speed gain was not as much as we had hoped for, and sometimes strange things happened - but this was 10 years ago and my memories could be wrong here. Faster CPUs and disks were the real solution.

GNU Guix launches

Posted Nov 26, 2012 14:01 UTC (Mon) by vonbrand (guest, #4458) [Link]

ccache to the rescue... almost invisible, works like a charm (only got bitten when crashes left empty .o files lying around, which ccache deemed correct...)

GNU Guix launches

Posted Nov 26, 2012 13:59 UTC (Mon) by vonbrand (guest, #4458) [Link]

I'm sorry, but if you have a "development team" which shares object files, having all of them use the exact same tools should be a cinch: just have each one run the same version of your favorite distribution (modulo "recompile all worlds" each time the GCC/coreutils/... version gets bumped). It's not really needed most of the time; I re-make some projects I follow with updates, and that doesn't require recompiling everything even if gcc changes. Only rarely do I have to clean beforehand.

Even so, compile timestamps get embedded in object (and other) files, perhaps even host names and assorted other trivia. Just take a look at the lengths GCC's build goes to in order to check that the stage 2 and stage 3 builds are the same.

GNU Guix launches

Posted Nov 26, 2012 20:02 UTC (Mon) by zooko (guest, #2589) [Link]

The Vesta configuration management tool accomplishes some of what you want by requiring all of your build tools and, in general, everything that could affect the build output, to be served by an NFS local mount, so that Vesta can keep track of all the files that the build process looked at while building. Vesta then makes a copy of the current version of each of those files in its revision control history so that you can rebuild the exact same build output in future years. :-)

(Caveat: obviously there are always more ways that this can go wrong, like your new kernel has a different set of bugs than your old kernel, that affect your compiler's behavior. But still, Vesta's approach sounded good to me.)

https://en.wikipedia.org/wiki/Vesta_%28Software_configura...

Back to the direct topic: I'm excited about Guix! I love the ideas of determinism, transactional upgrades and rollbacks, and the other related features. Nix is also associated with another Very Good Idea: your autobuild system should also run the unit tests automatically, unifying "Continuous Integration" (like Buildbot, Jenkins, etc.) with the autobuilder.

GNU Guix launches

Posted Nov 26, 2012 7:43 UTC (Mon) by Lev (subscriber, #41433) [Link]

You'd probably be interested in Vesta, which dealt with such issues. See http://www.vestasys.org/ and especially http://www.vestasys.org/why-vesta.html

GNU Guix launches

Posted Nov 26, 2012 8:49 UTC (Mon) by oever (subscriber, #987) [Link]

> Guaranteed repeatability of builds. Builds completely specify everything that affects their outcome, including the exact versions of all source files, as well as compiler versions, library versions, etc. This makes it possible to perform any build you've ever done in the past and be certain that you will get identical results. (This can be a big help with finding and fixing bugs and other QA issues; with Vesta you never have to worry that a bug has been masked rather than fixed by intervening changes, because you can always re-build the exact version that exhibited the problem.)
(source)

In the 'modern' way of working this sounds like magic. I've just tried to build Vesta, but unfortunately it fails on my Fedora machine. So for some reason Vesta did not catch on with a larger audience, even though a lot of effort was put into it:

> Vesta is a mature system. It is the result of over 10 years of research and development at the Compaq/Digital Systems Research Center, and it was in production use by Compaq's Alpha microprocessor group for over two and a half years. The Alpha group had over 150 active developers at two sites thousands of miles apart, on the east and west coasts of the United States. The group used Vesta to manage builds with as much as 130 MB of source data, each producing 1.5 GB of derived data.
(source)

An updated version of Vesta should probably use Git and combine a sha1 for the toolchain with a sha1 for the code version to get identical binaries and identical binary packages with a checkable checksum.
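
Something along those lines could be sketched in a few lines of Python (the tool list, and the way the two hashes are combined, are made up for illustration, not a description of Vesta or Git):

  import hashlib
  import subprocess

  def git_head(repo):
      """SHA-1 of the current code revision, as suggested above."""
      return subprocess.run(["git", "-C", repo, "rev-parse", "HEAD"],
                            capture_output=True, text=True, check=True).stdout.strip()

  def toolchain_id(tools):
      """Hash the toolchain binaries themselves (hypothetical list of paths)."""
      h = hashlib.sha1()
      for tool in sorted(tools):
          with open(tool, "rb") as f:
              h.update(f.read())
      return h.hexdigest()

  def build_id(repo, tools):
      """One identifier naming both the sources and the tools that built them."""
      combined = git_head(repo) + toolchain_id(tools)
      return hashlib.sha1(combined.encode()).hexdigest()

  # Hypothetical usage: anyone with the same repo revision and the same
  # toolchain derives the same build_id, and (if builds are deterministic)
  # should end up with a binary carrying the same checksum.
  # print(build_id(".", ["/usr/bin/gcc", "/usr/bin/ld"]))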

GNU Guix launches

Posted Nov 26, 2012 16:00 UTC (Mon) by welinder (guest, #4699) [Link]

> Guaranteed repeatability of builds

That would require an audit of all packages' build systems to ensure they only depend on what they claim.

Anything using "date" to embed a timestamp anywhere will not be repeatable. Anything using /dev/urandom is unlikely to be repeatable. (I can see collision-hardened hashes doing that, and hash ordering would change. You would get that in the build phase if it runs anything built.)

GNU Guix launches

Posted Nov 26, 2012 18:54 UTC (Mon) by oever (subscriber, #987) [Link]

The only timestamps in the build should be ones that come from the inputs: the build tools and the source code. There should be no use of randomness in a build.

The value of knowing exactly where your code comes from is huge. Currently there is no easy way to check that binary packages correspond to source packages.

GNU Guix launches

Posted Nov 28, 2012 9:46 UTC (Wed) by oak (guest, #2786) [Link]

Packages going to OBS (the openSUSE Build Service), for example, are patched to remove such things, because they mess up its daily test re-builds. For example:
https://build.opensuse.org/package/view_file?file=inkscap...

Noticing date & time usage in package sources is easy in daily automated builds. Other differentiators taken from the environment are harder to find, though, because the build machines are pretty much identical.

GNU Guix launches

Posted Nov 26, 2012 8:59 UTC (Mon) by oever (subscriber, #987) [Link]

I have not read it yet, but a text search shows that the author of Nix, Eelco Dolstra, was aware of Vesta when writing his PhD thesis.

GNU Guix launches

Posted Nov 26, 2012 9:57 UTC (Mon) by renox (subscriber, #23785) [Link]

> Yay! I was thinking "why are they duplicating Nix" but, you note, they're using it!

But it still isn't clear: what are the differences between Guix and NixOS?
And why is it a different distribution instead of adding features to NixOS?

GNU Guix launches

Posted Nov 26, 2012 11:25 UTC (Mon) by cmm (guest, #81305) [Link]

My understanding is that they forked Nix to replace its configuration language with Guile.  Which is, at least in theory, pretty neat, and is probably very much interesting to Ludovic Courtès personally (he is the Guile maintainer, or was last I checked anyway).

The practical benefits of Guix over NixOS are less clear to me, but what do I know.

GNU Guix launches

Posted Nov 26, 2012 13:11 UTC (Mon) by renox (subscriber, #23785) [Link]

> My understanding is that they forked Nix to replace its configuration language with Guile.

Thanks for the clarification.
About Guile, I was thinking: it has been the recommended default extension language for GNU for a long time, yet it is very rarely used.
So I think that they should really define criteria to either really push Guile or to stop pushing it and look for a different extension language: Python or Lua or OCaml would probably be much more successful.

GNU Guix launches

Posted Nov 26, 2012 13:51 UTC (Mon) by cmm (guest, #81305) [Link]

The whole "standardized extension language" thing brings two benefits:

  • Uniform surface syntax across extensible programs.  But surface syntax is really the least of your worries when scripting an application (unless the "little language" it comes with is really atrocious, of course) — pretty much all the cognitive load in such a situation is knowing what the hell to write in the script, not in what particular syntax.
  • Mature and uniform "adding an extension language" API for application developers.

Which is all cool if you need to add an extension language to your program and haven't done so yet.  If you already have one and it's not brain-dead — I don't really see much point in replacing it.  Nix's configuration language does not seem brain-dead.

As for the alternatives: Lua is younger than Guile, I think, and anyway it's basically a Scheme with syntax (and some questionable design choices mixed in, like not requiring you to declare variables.  It's amazing how much of the typo-related pain customarily ascribed to the dynamically-typed nature of "scripting" languages is actually the result of this particular brain damage, which most of them share...).  I don't know how nice Python is as an extension language, but it is not backward-compatible with itself, which in my eyes makes it a joke (yes, I know the whole world is in love with it anyway).  This is not GNU advocacy; I'm just trying to illustrate the fact that their extension language policy (as I understand it) is not arbitrary.

GNU Guix launches

Posted Nov 26, 2012 14:10 UTC (Mon) by renox (subscriber, #23785) [Link]

> Lua is younger than Guile,

Yet Lua is much more widely used than Guile.
The number of users is not all that matters (otherwise this would be 'Windows Weekly News', not 'Linux Weekly News'), but it is still something to keep in mind when judging whether something is a success or a failure.

GNU Guix launches

Posted Nov 26, 2012 14:16 UTC (Mon) by cortana (subscriber, #24596) [Link]

OTOH, Lua development is done behind closed doors.

GNU Guix launches

Posted Nov 26, 2012 14:20 UTC (Mon) by cmm (guest, #81305) [Link]

Yes, Lua is currently more widely used than Guile.  You may want to consider that when Guile had just started, everybody wondered "why not Tcl?" or "why not Perl?".  Later it got more like "why not Python?", and now it's "why not Lua?".

The X in "why not X?" keeps changing, and Guile is still there. :)

Also, as I said, surface syntax is really the least important thing in an extension/configuration language.

GNU Guix launches

Posted Nov 26, 2012 14:33 UTC (Mon) by renox (subscriber, #23785) [Link]

> The X in "why not X?" keeps changing, and Guile is still there. :)

So? Perl and Python are still much, much more widely used than Guile.

> Also, as I said, surface syntax is really the least important thing in an extension/configuration language.

You said it, but it's not necessarily true: it's very likely that the syntax is a big part of the reason why the Lisp family has so few users.

GNU Guix launches

Posted Nov 26, 2012 14:37 UTC (Mon) by cmm (guest, #81305) [Link]

It's not just likely, it's a fact.
Why are we discussing this infinitely tedious topic, anyway?

GNU Guix launches

Posted Nov 26, 2012 16:34 UTC (Mon) by khim (subscriber, #9252) [Link]

Because it's quite relevant. When Guile was starting out, it promised a choice of languages for extending your program. Translators were supposed to make it possible to write in any language of your choice. Kind of like the CLR, but on top of Scheme.

This was an ambitious goal, and its implementation could have made Guile much better than yet another run-of-the-mill extension language.

Yet last time I checked, Guile was yet another version of Scheme (as obscure and unknown to the general public as any other Scheme dialect), which made it kind of useless as an extension language.

GNU Guix launches

Posted Nov 26, 2012 16:42 UTC (Mon) by cmm (guest, #81305) [Link]

Oh, I realize that.

Thing is, the GNU folks realize that quite well too.  And since they haven't changed their policy yet, what makes anyone think that rehashing the same tedious arguments yet again will have any effect?

BTW, I believe Guile comes with a reasonably functional implementation of ECMAScript these days.  I don't think anyone is using that, though.  Perhaps that's selection bias at work (if you've already chosen Guile over, say, Lua, then it's a safe bet you like Scheme), and/or perhaps syntax is just Not Interesting enough anyway.

GNU Guix launches

Posted Nov 26, 2012 17:30 UTC (Mon) by tzafrir (subscriber, #11501) [Link]

I read the name of that language as EMACSscript.

GNU Guix launches

Posted Nov 27, 2012 0:09 UTC (Tue) by nix (subscriber, #2304) [Link]

It comes with that too :) At least, it comes with a (partial, last I checked) implementation of Emacs Lisp.

GNU Guix launches

Posted Nov 26, 2012 21:30 UTC (Mon) by khim (subscriber, #9252) [Link]

> BTW, I believe Guile comes with a reasonably functional implementation of ECMAScript these days.

Really? Since when? The manual says: "ECMAScript was not the first non-Schemey language implemented by Guile, but it was the first implemented for Guile's bytecode compiler. The goal was to support ECMAScript version 3.1, a relatively small language, but the implementor was completely irresponsible and got distracted by other things before finishing the standard library, and even some bits of the syntax. So, ECMAScript does deserve a mention in the manual, but it doesn't deserve an endorsement until its implementation is completed, perhaps by some more responsible hacker."

Not a ringing endorsement, to say the least. The only language which is kinda-sorta-maybe-supported is Emacs Lisp, and it's not clear why anyone would want that if they don't like Scheme.

P.S. It's funny, really. Guile is still pushed really hard by FSF backers and it's used in some fringe projects (like LilyPond, or GnuCash, or, I don't know, Guix), but when serious people need real scripting for their real needs they choose... something else. In fact you can easily tell when a project built around Guile finally tries to reach normal people: it's when Guile finally gets a sane alternative.

GNU Guix launches

Posted Nov 26, 2012 23:09 UTC (Mon) by elanthis (guest, #6227) [Link]

Wow, that manual entry actually insults someone who contributed code for free? Good job on completely pushing away other on-the-fence contributors. Some people do have more to do with their time than try to fully implement a complex feature for nothing in return besides potential contempt and ridicule.

GNU Guix launches

Posted Nov 27, 2012 0:10 UTC (Tue) by nix (subscriber, #2304) [Link]

I suspect you'll find the manual entry was written by the very person who got distracted. :) It feels like self-deprecation to me.

GNU Guix launches

Posted Nov 27, 2012 1:27 UTC (Tue) by elanthis (guest, #6227) [Link]

Ah, alright. That obviously doesn't translate very well with a quick reading. :)

GNU Guix launches

Posted Nov 26, 2012 17:57 UTC (Mon) by vonbrand (guest, #4458) [Link]

"Uniform surface syntax" among extension languages doesn't matter that much, what matters much more is uniform(ish) syntax/programming model with whatever the user does all day long. Scheme might be the nicest extension language around, but if you program C all day it will be all Greek to you. Plus what "power users" really do is to tweak the configuration a bit, or add some piece of code found on the Internet. They do not waste a significant slice of their time futzing around with configurations/extensions of the tools.

GNU Guix launches

Posted Nov 26, 2012 15:18 UTC (Mon) by SEJeff (subscriber, #51588) [Link]

I think this blurb from a mailing list[1] does a better job describing Guix/Nix than you did:

Guix & Nix
~~~~~~~~~~

Nix is really two things: a package build tool, implemented by a library
and daemon, and a special-purpose programming language. Guix relies on
the former, but uses Scheme as a replacement for the latter.

Technically, Guix makes remote procedure calls to the ‘nix-worker’
daemon to perform operations on the store. At the lowest level, Nix
“derivations” represent promises of a build, stored in ‘.drv’ files in
the store. Guix produces such derivations, which are then interpreted
by the daemon to perform the build. Thus, Guix derivations can use
derivations produced by Nix (and vice versa); in Guix, the cheat code is
the ‘nixpkgs-derivation’ procedure. :-)

With Nix and the Nixpkgs distribution, package composition happens at
the Nix language level, but builders are usually written in Bash.
Conversely, Guix encourages the use of Scheme for both package
composition and builders.

[1] http://lists.gnu.org/archive/html/guile-user/2012-07/msg0...

GNU Guix launches

Posted Nov 26, 2012 12:37 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Because Nix has one fatal flaw - they didn't write it!

GNU Guix launches

Posted Nov 25, 2012 19:00 UTC (Sun) by theophrastus (guest, #80847) [Link]

Does this imply that the venerable and reliable Debian dpkg/apt/deb package management suite is somehow not "100% free GNU-oriented"? (...not that there's anything fundamentally wrong with the not-invented-here approach)

GNU Guix launches

Posted Nov 25, 2012 19:50 UTC (Sun) by pboddie (guest, #50784) [Link]

One of the goals appears to be user-installable packages, which is something that can be done with dpkg/apt/deb package management but only with a bunch of hacks and special moves (for which I have tried to maintain one approach).

I hope that one day, the people who probably need persuading to allow and encourage a remedy for this situation will eventually realise that not everyone who needs to install packages on the system they use either "has root" or can get instant access to (and gratification from) those who do have root. As is probably quite common in many workplaces, between the user and the theoretical power of package management is the "support request ticket" paradigm where the answer to any requests may well be negative. Not acknowledging this is like telling everyone that "it works for me", which only infuriates and alienates users.

GNU Guix launches

Posted Nov 25, 2012 21:54 UTC (Sun) by armijn (subscriber, #3653) [Link]

Nix has many more features which are quite nifty, such as atomic upgrades + downgrades, plus being able to install variants of packages. Think GNU Stow, but on steroids.

On the website for Nix there is a link to the PhD thesis of the author of Nix, which is well worth a read (the first few chapters are very accessible and clearly explain the problem that Nix tries to solve).

GNU Guix launches

Posted Nov 26, 2012 5:17 UTC (Mon) by davidescott (guest, #58580) [Link]

> not everyone who needs to install packages on the system they use either "has root"

The key word there is "need." Who defines "need"? I've been on the side you describe many times in the past and desperately "needed" a package that I just couldn't get on the corporate machine. On the other hand I would never suggest that it was inappropriate for them to make it difficult for me to install what I thought I "needed".

When I put myself in their shoes and think about it, I'm not sure that I would choose to use a distro that made it easy for users to install packages without root or some administrative proxy permission.

If I'm setting up a linux box for you and I don't give you root/wheel/sudo/admin access then I'm doing so for a reason. I can think of three reasons:

a) All software must follow corporate policy. If I retain root I can ensure that you don't install anything that violates policy, whether that be "no dancing bikini babes on the desktop" or "we use GCC version X, not Y." Sure, this is heavy-handed, annoying, and (usually) easily circumvented, but it does make clear that when you download something from the web you are responsible for it. If your adult background turns out to be a virus then you get fired, and if your use of a newer GCC means your code doesn't compile on the server and causes a project delay, that's YOUR fault.

b) Disk space (i.e., $HOME shared over NFS). Sure, everyone could install a cutting-edge desktop to their $HOME, but if that is made easy then everyone will do so, and then everyone's $HOME ends up being full, which becomes my problem.

c) To avoid confusing Grandma, who really doesn't know how to administer the system and shouldn't be doing it. I much prefer she call me so I can SSH in or drive over and fix it for her, rather than try to figure out what she did.

For (a) I personally think: "Employers should trust their employees" and not put in ineffective "security restrictions" that only serve to frustrate their employees' attempts to get the tools they need to be most effective. BUT as long as the company pays the check it's their decision to make about the relative risks of giving employees access to software, and the company has the final say on "need."

Just because all the construction workers have easy access to a crowbar to break the lock on the dynamite case, doesn't mean you give them all the key to the dynamite case.

GNU Guix launches

Posted Nov 26, 2012 7:02 UTC (Mon) by oever (subscriber, #987) [Link]

Just like it's possible to disable many features, like a user being able to write to /etc/passwd, it's possible to disallow users from running the package manager in a way that does not suit policy even if it is technically capable of doing so. There is no need to limit the implementation of a feature, just because someone does not want or need it.

GNU Guix launches

Posted Nov 26, 2012 13:03 UTC (Mon) by pboddie (guest, #50784) [Link]

> When I put myself in their shoes and think about it, I'm not sure that I would choose to use a distro that made it easy for users to install packages without root or some administrative proxy permission.

Unless you take away the compilers and, given the availability of binary installers for certain applications, unless you take away the ability to download and run software (you can get pretty far by playing with the mount options, at least for the latter), you have to accept that people will be able to run non-system software.

For some kinds of users, imposing such restrictions is probably feasible and acceptable for all the usual reasons and according to the rules of the workplace, but when you have people who develop software unable to conveniently acquire and run software - setting noexec on the partition where they build their software isn't going to be very productive - then everybody has to ask themselves how much time is being wasted because of "policy".

The perverse outcome of the conflict between policy and productivity is the proliferation of the technological sledgehammer that is the virtual machine, resulting in even more environments that need to be configured and managed properly, when the technological nut was just the installation of a handful of packages, potentially under a non-root user's privileges, that could have been done in a few seconds had the package manager permitted such a thing.

I agree with you about not letting people install anything and everything - I worked in an environment once where people thought it was fun to click through on the "funny game/video/picture" attachment in Microsoft Outlook, and it was obvious that such activities weren't exactly going to end well - and I also agree that where an employer doesn't trust its employees sufficiently, that employer shouldn't expect things to get done in a timely or effective manner, but I also think that distributions should entertain the use-case of non-root package management for places with more enlightened administration policies where such features, if enabled, would be a clearly superior alternative to handing out administrative privileges.

GNU Guix launches

Posted Nov 26, 2012 14:11 UTC (Mon) by davidescott (guest, #58580) [Link]

Two comments.

I'm not saying that policy is always sensible, rather that policymakers have the right to make it what they want. Even if the policy is Swiss cheese to anyone who knows what a compiler is, it might nonetheless meet the policymakers' goals.

I disagree with your description of the Virtual Machine as a technological sledgehammer. VMs make it trivial for "Systems" to hand out sand-boxed units of server space, without requiring any domain specific knowledge about how exactly those servers are going to be used.

I just don't see who really wants "unprivileged package management." In an ideal world where packages never conflict and there are no bugs it sounds great, but when security/functional upgrade X breaks Alice's programs, but is required for the well functioning of Bob's, then the admin has to step in and fix the conflict. If I'm that admin, I'm just going to hand out VMs to everyone who needs one and reduce a space of 2^N potentially conflicting configurations to N independent systems. If the package manager is capable of allowing multiple version installations for every user then it is just implementing containers (AKA VM-lite) and we are talking about the same thing.

GNU Guix launches

Posted Nov 26, 2012 15:05 UTC (Mon) by pboddie (guest, #50784) [Link]

Well, I was just saying that it's frustrating when people are in a situation where they could install an application in a few moments if they were only allowed to, but instead are obliged to retrieve and compile the dependencies as well as the application itself from source, replicating the work already done by the packagers, with the only difference in the outcomes of these two activities currently being that the former will put files in special places whereas the latter will put them in user-writeable places, and with no difference in the outcomes if the package manager could just install stuff in a user-nominated place anyway. (I'm not advocating that users would get to install their own stuff centrally.)

In other words, when the choice between wasting a few minutes of "admin time" and a few hours or days of "user time" once again goes against the user, this usually plays out fairly badly after a while because people tend to find their own ways of working around constraints that they don't regard as acceptable: that's just the way people deal with social or institutional restrictions. So when some project group suddenly has their own server running their own choice of software, and where their manager is more or less able to shout down various people in the hierarchy about needing to get the job done, and where some administrator has to go and negotiate a compromise, no-one should really be particularly surprised that it reached that stage.

That said, I very much have the impression that people are continually surprised at such events because they either fail to anticipate predictable human behaviour or because they believe that the policies that deter such behaviour will act as a sufficient deterrent. Consequently, from such a static viewpoint and contradicting common experience, any need for more flexibility surely cannot exist for those people. (They presumably wonder why that administrator couldn't just go and tell that manager to shut down his server.)

I do acknowledge that some compromises exist and that virtualisation in the broadest sense of the term recurs frequently, so that in the Debian environment an administrator could enable something like schroot, for example, but then again, I get the impression that if users have to ask for something like that, they may well not get it. In any case, I think it is absurd that when the bulk of the work done by a package manager could be delivered for arbitrary filesystem locations and for arbitrary users, one should be forced to reach for the virtualisation sledgehammer. What next? Running "make install" only puts things under /usr because one can always create another virtual machine?

GNU Guix launches

Posted Nov 26, 2012 19:32 UTC (Mon) by dlang (subscriber, #313) [Link]

I'll point you several layers up in this thread

In a construction company, just because everyone is carrying tools that would let them break into the Dynamite Storage locker doesn't mean that you should just give everyone the key (or not bother to lock it in the first place)

Similarly, just because it's possible for a knowledgeable person to bypass the restrictions on installing software on a machine doesn't mean that there are no cases where such restrictions make sense.

As far as I'm concerned, per-user application installs really don't make much sense nowadays. Almost all systems are single-user, so there is only the one user to install the app for. The few cases that remain where there are multiple users on a single system image are all cases where it's probably a bad thing to have users installing software (especially given how easy it is to have separate system images with either virtual machines or containers to isolate the users when you do want them to share hardware).

GNU Guix launches

Posted Nov 26, 2012 19:52 UTC (Mon) by apoelstra (subscriber, #75205) [Link]

> As far as I'm concerned, per-user application installs really don't make much sense nowadays. Almost all systems are single-user, so there is only the one user to install the app for. The few cases that remain where there are multiple users on a single system image are all cases where it's probably a bad thing to have users installing software (especially given how easy it is to have separate system images with either virtual machines or containers to isolate the users when you do want them to share hardware).

Even on a single-user system, there are many "users" like apache, ftp, sshd, dbus, etc. And these programs often have security as a high priority, and would not like libraries being switched out underneath them.

So it's nice to be able to install applications and libraries in my own home directory, even though I'm the only human user of the system, just to avoid upsetting the rest of the system.

GNU Guix launches

Posted Nov 26, 2012 21:07 UTC (Mon) by dlang (subscriber, #313) [Link]

the "users" like Apache, ftp, sshd, dbus, etc had better not be installing software packages for themselves.

If you want to not have these different things depend on a common set of libraries, you still don't need per-user installable packages, you just have a distro do static linking of libraries instead of dynamic linking (or you use chroot sandboxes or containers)

GNU Guix launches

Posted Nov 26, 2012 22:39 UTC (Mon) by sorpigal (subscriber, #36106) [Link]

If you had perfect foresight that would be a fine answer. It's hard to 'go back' and change the way things are set up when you find you need something a little different today.

GNU Guix launches

Posted Nov 27, 2012 13:15 UTC (Tue) by pboddie (guest, #50784) [Link]

Enough with the dynamite analogy: I never suggested that people should be able to "blow up" their systems with full root access. Indeed, it's precisely this attitude - that wanting to do only a little more than currently possible is somehow the same as "wanting it all" - that undermines any discussion on such matters and drives people towards exactly the social and institutional workarounds I already described.

It's interesting to hear the claim that most systems - presumably Unix-related ones - are now single-user systems. Even if that is the case, giving people the ability to customise that environment without having to nag the administrative staff all the time is surely beneficial. Indeed, what has often happened is that people on those traditionally single-user Windows systems have been able to more or less install what they want, with all the accompanying consequences, whereas everyone else has had to make do with what they're given.

If people could just apt-get a package and have it installed in their home directory, it wouldn't necessarily affect the other users at all. Of course they could fill up the disk with a complete installation of GNOME or whatever, but that's what quotas are for. (Although I now expect to be told that quotas are archaic and people virtualise entire systems to get the same effect for this as well. If so, Linus Torvalds and the GNU project maintainers should consider slashing away at the realms of apparently dead code that no-one seriously uses any more.)

GNU Guix launches

Posted Nov 27, 2012 16:00 UTC (Tue) by davidescott (guest, #58580) [Link]

> I never suggested that people should be able to "blow up" their systems with full root access.

We are both coming at this from different perspectives, so your example is installing Inkscape, and mine is installing Apache/Ruby/Haskell/GCC

Inkscape is a reasonably safe application to install because in the end the application either works or doesn't, and what matters is the produced SVG file, which is either compliant or non-compliant. I would have no objection to anyone installing that for one-off interactive use.

Installing a web-server or a full-fledged programming language raises more concerns because it introduces something that has to be maintained into the future.

The problem is that the acceptability of applications is defined by the individual policy of the firm. Somehow the package manager has to classify packages in a manner that is consistent with the policy articulated by the firm, allowing the firm to then blacklist/whitelist individual packages as needed.

The problems are:
a) The semantics of any such configuration are unclear. If I blacklist X, but it is listed as a dependency of package Y which is in an allowed class... does that mean Y cannot be installed? What if it was a recommended dependency of Y and we have recommended packages turned on? What if Y was previously installed and a system upgrade moves X from recommended to required, does Y now need to be removed?

b) How granular is the classification to be? Are packagers really going to want to classify applications at the level of detail required by administrators trying to implement policy? Do they even know what administrators consider important? Do they agree?

c) Can any classification scheme possibly match policy? Policy might have more to do with use in practice than capability. Consider your Inkscape example. I might allow the occasional use of Inkscape to create SVGs, but want to prohibit scripting Inkscape through Python to automate the creation of charts for publishing. If Inkscape is whitelisted and Python is a system dependency, it may be impossible to express policy in a whitelist/blacklist at the package level, and some more conservative firms will then look at that and decide to blacklist everything.

d) Is policy defined or is it fuzzy? The firm may not know what to allow because they cannot anticipate all the crazy ways people might use the software they install. If the firm could actually articulate their policy clearly enough to implement as package management rules then the approval process would be much faster and might eliminate the need to even have this feature.

GNU Guix launches

Posted Nov 28, 2012 13:29 UTC (Wed) by pboddie (guest, #50784) [Link]

First of all, thanks for keeping the discussion productive and informative. I think that one of the fundamental differences in perspective is related to the following remark:

> The problem is that the acceptability of applications is defined by the individual policy of the firm. Somehow the package manager has to classify packages in a manner that is consistent with the policy articulated by the firm, allowing the firm to then blacklist/whitelist individual packages as needed.

Here, we're talking about more than one organisational role. One matter is what people can do on their workplace machines to do their work in the most convenient and productive way possible without disturbing others or doing bad things to the workplace's systems. Another matter is whether the techniques used are sustainable and documented so that other people in the workplace can follow what was done.

In the case of Apache, surely the way to deal with the possibility of someone installing it is to make sure that any instance of it will never be seen from any other computer, which is probably a policy employed in organisations where "high ports" are regarded as completely untrusted. The tools already exist to contain unprivileged users, mostly because that was the motivation for having multi-user/privilege systems in the first place.

So it seems to me that forbidding package installation - recalling that I only advocate installation under non-root privileges - is a very coarse way of controlling what users do, and the extent of that control will rely on the existing measures, anyway. (Forbidding the installation of Ruby doesn't stop people from writing "bad" programs unless you take the shell away from them as well.)

GNU Guix launches

Posted Nov 28, 2012 17:13 UTC (Wed) by davidescott (guest, #58580) [Link]

I don't see how we are ever going to agree, because we see something like "running apache on a high port on an internal desktop for internal business purposes" differently.

You view it as developer Joe just wanting to be able to run the server for his own use, and it cannot possibly harm anyone so long as the corporate firewall works.

Management may see it as (a) something that others may come to rely upon and when Joe leaves for another firm someone else has to take over requiring that the code be brought up to the appropriate standards or (b) an internal application that leaks data across business lines and fails to integrate with the standard security policies managed by the firm.

Similarly with installing Inkscape, it could become something that is integrated into a process without approval, such that nobody knows how to manage it when the installer leaves, or could expose the firm to legal risk down the road (those automated charts Joe creates with inkscape are deemed to be deceptive according to Regulation 142.6(a) subsection (iv) paragraph 3.14 which requires that all bar charts have a width of at least 22px). I personally don't like this attitude, but after working in a regulated industry I recognize that it exists.

You are also implicitly suggesting that every machine on the internal network run a firewall that blocks incoming packets on high ports. How many companies actually do this, vs just having a firewall at the gateway? It could be a lot of work for network admins to customize the firewall rules to the individual's machine.

I'd also be curious to know how a company like Google handles this kind of situation. Certainly their staff is skilled enough to be able to run personal web-servers, but at the same time someone with a misconfigured server could leak data across the google network.

--------------------------------------------------

A lot of this audit stuff is ridiculous, and I think there is a tacit recognition that it is absurd. Saying that an employee went off and did something on his own without approval makes it possible for the corporation to avoid liability, whereas if you allow them to apt-get install something it's much less clear that they were violating policy in doing so.

GNU Guix launches

Posted Nov 28, 2012 19:59 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

I assume that there would be a way for administrators to disable per-user package installation if they wanted. I don't see how the *existence* of the feature is a problem in and of itself. I'd certainly like to be able to install different versions of the Boost packages into a local directory so that I don't have to touch Boost.Jam umpteen times to test against everything from 1.44 to the latest release. Or try out the newest shiny (especially with things like newer Mesa where, if it screws up, I still have a fallback to rely on), without having to upgrade an entire distro (or install an ancient one for older versions).

I would like to have 10 compiler versions installed so that I can test everything with one machine instead of needing 10 VMs of various versions of distros to get those versions and installing things by RPM is a whole lot more reliable than me trying to replicate what is installed in those systems manually.

GNU Guix launches

Posted Nov 28, 2012 21:23 UTC (Wed) by dlang (subscriber, #313) [Link]

The issue is that the main case for why this feature is needed overlaps very closely with the cases where it is very questionable, for the reasons mentioned above.

As far as installing multiple compilers goes, go read Rob Landley's notes on the fun he has doing cross compiles, specifically all the work he has to do to beat gcc into using the right version of things. It sure doesn't look like it would be nearly as trivial as "install multiple copies of gcc in different directories".

If I had to do that, I'd use debootstrap to create chroot sandboxes with different versions installed in each one. You don't even need containers, let alone VMs.

GNU Guix launches

Posted Nov 29, 2012 14:23 UTC (Thu) by pboddie (guest, #50784) [Link]

I do actually use debootstrap with chroot sandboxes precisely to be able to manage multiple software environments. However, this only works reliably for genuine chroots and not fakechroots, at least if you want to sample other distribution versions (mostly because of system library incompatibilities), and it gets to the point where you also need a newer kernel version to run significantly newer distribution versions. At that point, I actually use User Mode Linux, but there are plenty of root privilege obstacles that would prevent me from having such a sandbox if I didn't have root access.

Setting up chroot sandboxes isn't really so lightweight, but I suppose it is at the lighter end of the virtualisation spectrum in the broadest sense of the term.

GNU Guix launches

Posted Nov 29, 2012 17:14 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

I don't want to install old RPMs as-is; that way does indeed lead down into its own dependency hell. I would like to recompile Fedora 14 RPMs for my current machine and still be able to install them somewhere.

GNU Guix launches

Posted Nov 29, 2012 18:00 UTC (Thu) by pboddie (guest, #50784) [Link]

I don't do things with RPMs, but if I wanted to build a Debian package for my current distribution version, I guess I'd go through the usual dpkg-buildpackage route (or use pbuilder if I had root access) after tweaking the package metadata. Since I only ever back-port things (and not that often given that I may choose to run them in a chroot), I don't know how much work would actually be necessary to forward-port a package, but quite possibly not that much.

Installing the package is another matter, though. Without root access, I'd have to hope that my fakechroot sandbox is up to the task, but given that it would be the same distribution version, the chances of that are a bit higher than they otherwise might be. I suppose that febootstrap would be able to deliver the same experience for Fedora, potentially not even needing fakechroot to do an initial bootstrap.

GNU Guix launches

Posted Nov 29, 2012 17:20 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

Hmm…installing a different GCC in macports looks like it just dumped it all in its own subdirectory. I'm not so interested in cross compilers, just different versions of various compilers.

I don't know why GCC would have problems with it unless you used the same $PREFIX for all of them. It's not like GCC would know to look anywhere other than its installation tree and possibly the system. Usually I use $HOME/misc/root/$name-$version as the prefix so that removing one is an rm -rf without having to rely on whatever (usually broken) uninstall mechanisms the project uses. LLVM is certainly happy being installed to $HOME/misc/root/llvm-3.2svn without being confused by the system version.

GNU Guix launches

Posted Nov 29, 2012 17:06 UTC (Thu) by davidescott (guest, #58580) [Link]

We aren't objecting to the existence of the feature. If someone wants to implement it, that is their right. What we are questioning is the marketability of the feature. It was initially described as a killer feature of Nix, but in my mind virtualization looks a lot better for a number of different potential use cases, other than shared hosting (which I don't have a particularly high opinion of to begin with). I'm genuinely curious what use case makes this the killer feature that really sells Nix/Guix.

GNU Guix launches

Posted Nov 27, 2012 15:17 UTC (Tue) by lambda (subscriber, #40735) [Link]

Your objections seem to be based on not understanding the Nix package manager. If a user installs a package under Nix, it's installed in its own environment, and does not affect what packages other users see or the behavior of other packages.

I guess that's what you call "containers (AKA VM-lite)", but it's not really the same thing; in fact, I would say both of those characterizations are wrong. A container isn't really "VM-lite", unless you want to call a modern operating system with separate users, memory protection, and preemptive multitasking "VM-lite" as well. A container is a way in which you can selectively isolate more resources, while sharing resources that still ought to be shared; it's the continuation of the idea of separate processes and memory protection, allowing you to apply isolation to more resources.

And Nix differs from a container: in a container, you either give the user their own root with nothing shared, in which case, from a filesystem perspective, sure, it's VM-lite and everyone has to install their own copies of everything; or you share certain directories like /usr, but then you wind up with updates to the base system breaking the packages installed in your container.

Nix works by building up a tree of package dependencies. Each package specifies the exact packages it depends on (the precise version and dependency tree, similar to the way Git refers to commits by the SHA-1 of the current state and the SHA-1 of its ancestors), and packages are never replaced; new versions are installed in a new location. So a user can install local packages which depend on system packages, only using the extra space for what they specifically need; but if the base system updates, their packages will still depend on the old versions, causing them to stay around. Once the user updates their packages to newer versions, depending on the new system libraries, the old versions will be GCed if no one depends on them.
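
The naming scheme can be sketched roughly like this in Python (this is not Nix's actual hash computation; the package names, recipes, and store layout are made up to show the Merkle-tree idea): each package's identity includes the hashes of its exact dependencies, so changing anything anywhere in the tree yields a new, separately installed entry.

  import hashlib

  def store_hash(name, version, build_recipe, dependency_hashes):
      """Hash a package's complete definition, including the exact hashes
      of everything it depends on, much like a Git commit ID."""
      h = hashlib.sha256()
      h.update(f"{name}-{version}".encode())
      h.update(build_recipe.encode())
      for dep in sorted(dependency_hashes):
          h.update(dep.encode())
      return h.hexdigest()[:32]

  # Hypothetical dependency tree: zlib has no deps, openssl depends on zlib,
  # and a user's tool depends on both. Each gets its own store entry.
  zlib = store_hash("zlib", "1.2.7", "./configure && make", [])
  openssl = store_hash("openssl", "1.0.1", "./config && make", [zlib])
  mytool = store_hash("mytool", "0.1", "make", [zlib, openssl])

  print(f"/store/{zlib}-zlib-1.2.7")
  print(f"/store/{mytool}-mytool-0.1")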

GNU Guix launches

Posted Nov 27, 2012 16:29 UTC (Tue) by davidescott (guest, #58580) [Link]

> Your objections seem to be based on not understanding the Nix package manager.

The objection is NOT that nix will somehow screw up the system by having conflicting packages. I'm objecting to the concept of user-installable packages, under the assumption that the package manager handles conflicts perfectly.

There are various objections (five have been listed); the one closest to what you describe is that someone installs X to build Y. Initially this is fine because Y is a non-critical, internal-only application and a stop-gap measure until the real solution can be released. Of course, Y morphs into something more and becomes a critical, permanent, external application and needs to be brought into compliance (moved to a secure server, audited, etc.), and the approved systems are not capable of running the required X.

See http://us.thedailywtf.com/Articles/Excellent-Design.aspx for an example of this.

GNU Guix launches

Posted Nov 27, 2012 17:17 UTC (Tue) by davidescott (guest, #58580) [Link]

After submitting I went back and reread my comment and realized that you were responding to my response to pboddie's description of VMs as perverse sledgehammers.

You are correct that my previous comment is not correct with respect to how Nix operates. My point (poorly expressed) was that a package manager is either going to introduce a combinatorial explosion of potential conflicts or be VM-lite by supporting independent package installations.

I'm objecting to the suggestion that VMs are a perverse sledgehammer. I think they are a very good sledgehammer to quickly roll out lots of independently configured instances. If Nix+Containers can accomplish that while eliminating some duplication then VM-lite may be preferable to pure VM, but that has nothing to do with user-installed packages.

GNU Guix launches

Posted Nov 28, 2012 13:39 UTC (Wed) by pboddie (guest, #50784) [Link]

To be fair, I use virtualisation in the broadest sense myself, but when you have package management systems like opkg which appear to be able to operate for unprivileged users in their own chosen areas of the filesystem, having to use "virtualisation" even at the level of maintaining separate chroot environments has to be seen as a kludge and not the product of two virtuous technologies coming together to "solve" an apparently unsolvable problem. If anything, it just pushes the complexity avoided by the somewhat overly conservative design decisions of the package management systems concerned onto some other activity instead.

GNU Guix launches

Posted Nov 26, 2012 15:11 UTC (Mon) by pspinler (subscriber, #2922) [Link]

As a professional sysadmin, I can add a couple more cases here:

d) When the user installs their own packages and it breaks, they still call me. I don't have any idea what they're doing and can't debug it. Yet all too often, the user views the brokenness as my fault and my responsibility (and yes, that includes escalating their problems up the management chain). Ergo, I'm not going to let said user cause problems for me in the first place.

e) All too often, the user does something very ill-thought-out and irresponsible with their own installs, such as using an HPC cluster to spam a corporate database not designed to handle that load, or sending confidential data out with insufficient security and controls (we're subject to HIPAA and Sarbanes-Oxley, at a minimum). I find it necessary to insert myself into this process as a basic design check on what the user is doing.

It's the classic trade-off between free innovation and rational controls. Yes, rational controls significantly slow down the process of free innovation, but recall that most new ideas don't actually work; something like 90% of new ideas and/or companies fail in a quite short amount of time, many of them dramatically.

For the "do whatever you want" systems, we give users heavily sandboxed cloud resources with a very explicit contract "you break it, you fix it, and IT security knocks on your door, not mine"; or we just send them out to amazon or similar.

-- Pat

GNU Guix launches

Posted Nov 26, 2012 16:50 UTC (Mon) by pboddie (guest, #50784) [Link]

> d) When the user installs their own packages and it breaks, they still call me.

Admittedly, there's not much you can do about this other than to get them to provide details of exactly what they did, so that you can then tell them that it is a problem of their own making. On various mailing lists for Free Software projects, this is typically handled by asking things like whether the user is using any plugins or extensions, and getting them to actually include real output in their reports instead of paraphrasing the problem. I know this is hard work, but I imagine that you can't avoid this even if you lock down as much as you can because there's almost always some unforeseen variable that has to be identified.

> sending confidential data out with insufficient security and controls

People can still get up to this kind of mischief without installing their own software, though.

> or we just send them out to amazon or similar

For more and more organisations, there are going to be awkward conversations in the years ahead as upper management discover that large parts of their systems don't actually reside under the control of that dedicated department any more, but actually live in some cloud somewhere. You see this happening already as people go outside their institutional systems and use mass-market products instead (just as you have people using their own phones and computers instead of the ones provided by their employer).

At that point, the question "What do we pay those people for?" is going to be even more threatening, especially after all the anecdotes have been heard about how people couldn't do what they needed or were put off until something "official" was ready for a particular task. (You see these symptoms in any large organisation where the resources to enable new services or activities are constrained by the tendency to shoehorn such things into the existing ways of working, thus eliminating the competitive advantage of the large, well-resourced organisation.)

I'm not really arguing against systems administration policies, but it really baffles me that instead of entertaining and encouraging workable compromises and perhaps loosening the leash on captive users, organisations and developers would rather keep the leash as tight as possible even though history shows that the leash will break as a result.

GNU Guix launches

Posted Nov 27, 2012 0:28 UTC (Tue) by davidescott (guest, #58580) [Link]

> I'm not really arguing against systems administration policies, but it really baffles me that instead of entertaining and encouraging workable compromises and perhaps loosening the leash on captive users, organisations and developers would rather keep the leash as tight as possible even though history shows that the leash will break as a result.

I think the relevant question is: what is worse, having the leash break, or just letting the dogs run free?

I don't think anyone (certainly not myself) is suggesting that policy is perfect, just that having policy is better than not having it. If you have no policy in place and anyone can install anything on any system, then you have no guarantees that people are doing things correctly. And if your policy is that the end user gets to make the technical decisions, based on their own assessment of their knowledge and capabilities, in order to keep the business agile, then you get some idiot who read "PHP for Dummies" making a website and keeping passwords in plaintext.

Dealing with the constraints that systems enforced upon our group was my #1 reason for leaving my last job, but I don't think the firm was crazy to have those restrictions in place. It made the firm less agile, but I also believe it prevented more problems than it caused. I had tacit permission from my boss to go off-policy in a number of areas, and the mere fact that I was going off policy made me consider every decision very carefully, and document exactly what I was doing in a way I wouldn't have if there were no rules.

One can do a lot of damage with software without knowing what one is doing. I thought I was extremely careful and had a good idea of what the correct choices were; despite this, I made some bad design and technology choices that somebody is going to have to back out within the next few years. So I'm all for trying new ways of organizing systems groups, and I don't know what the answer is, but I don't think it is giving everyone the ability to install anything in the package archive.

Another analogy would be that policy and controls on what can be installed are like having a curb on the side of the road. Sure, you can jump the curb and drive on the sidewalk if you know what you are doing. If you are James Bond you might even drive up the stairs and onto the roof of a building, but we aren't all James Bond, and having those curbs discourages many from doing things that are very dangerous.

GNU Guix launches

Posted Nov 27, 2012 13:53 UTC (Tue) by pboddie (guest, #50784) [Link]

I'm not advocating that "the dogs run free". The dogs do run free, however, when technical measures to achieve their goals have been exhausted and they adopt social or political measures to achieve them in another way.

It's funny that you mention people deploying Web sites after reading "PHP for Dummies". I once had a discussion with someone who had pointed out that a Web site I had become responsible for - not in PHP nor developed by dummies, mind you - was running on a "high port" which in turn made his systems people uneasy, and he wondered whether it might one day be made available on port 80 instead. After a fairly small amount of work, the site was deployed within the existing port 80 infrastructure and I was able to get back to the guy within a day or so. This apparently made him simultaneously overjoyed at the prompt progress in the matter and frustrated that something similar would take weeks to get done in his organisation.

Having such restrictions is understandable - I have been aware of lots of crazy things going on in large organisations, including some that were perpetrated by systems administrators themselves - but it does no-one any good if those restrictions are consistently implemented at the expense of people doing their work in a responsible fashion. When someone wants to run a program like Inkscape, to take a random desktop application that isn't, in its normal form, going to DDoS various Web sites as part of a botnet, surely the logical "first stop" is for the user to take advantage of the existing package available for the system, and not to have to "manualize" the process by making a human being, whose time is presumably precious, run the install command on that user's behalf. (And virtualising the whole thing as a solution, instead of supporting a non-privileged installation of packaged software, just confirms that the software isn't inherently dangerous anyway: it shouldn't be the case that the host system is more insecure, or that Inkscape could do more evil there than on what will inevitably be a network-connected virtual host, set up just to reduce the level of inconvenience involved.)

Anyway, I think I've made my point, as has everybody else in this discussion, and I just think that we all have different perspectives on the matter.

GNU Guix launches

Posted Nov 27, 2012 16:33 UTC (Tue) by pspinler (subscriber, #2922) [Link]

We're also talking about two separate cases here: managing servers, and managing interactive machines (most probably desktops).

With servers you typically take a more conservative approach. I'm more on the server side of the profession than the desktop side, but I can see the need to be more permissive with desktops.

With desktops I can see a use for a package manager that allows non-root installations in arbitrary paths, for instance to a network home directory that would then be available on any workstation you logged in at.
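As a toy illustration of that kind of unprivileged install (not opkg, Nix, or Guix specifically; the prefix under the home directory and the manifest format here are made up for the example), such a tool only needs write access to the user's own area plus a record of what went where:

  import json, pathlib, tarfile

  PREFIX = pathlib.Path.home() / ".local" / "pkg"      # e.g. on a network home directory
  MANIFEST = PREFIX / "manifest.json"

  def install(tarball: str, name: str, version: str) -> None:
      dest = PREFIX / f"{name}-{version}"
      dest.mkdir(parents=True, exist_ok=True)
      with tarfile.open(tarball) as tf:
          tf.extractall(dest)                           # no root privileges needed
      # Record what was installed where, so an admin (or the user) can later
      # answer "where did this user install this package?"
      records = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
      records[name] = {"version": version, "path": str(dest)}
      MANIFEST.write_text(json.dumps(records, indent=2))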

Even on (most) desktops, I can see not allowing normal office workers full root on the machines. However, there would likely need to be exceptions for certain classes of users -- basically people doing experimental stuff with their desktops. I'd perhaps set up an automated request mechanism for doling that out, so a) I'd have a record of who did it, and b) I'd have at least a chance to talk to the users and see what they're doing, and whether they really need root or could do with something else.

-- Pat

GNU Guix launches

Posted Nov 27, 2012 18:21 UTC (Tue) by dlang (subscriber, #313) [Link]

> Even on (most) desktops, I can see not allowing normal office workers full root on the machines.

what's the practical difference on a desktop machine between giving the user of the machine root (or sudo style package manager access like Ubuntu does) and allowing them to install arbitrary packages as "non-root installations in arbitrary paths"?

It seems to me that the latter is much more complicated (where did this user install this package...)

GNU Guix launches

Posted Nov 28, 2012 1:12 UTC (Wed) by pspinler (subscriber, #2922) [Link]

> what's the practical difference on a desktop machine between giving the user of the machine root (or sudo style package manager access like Ubuntu does) and allowing them to install arbitrary packages as "non-root installations in arbitrary paths"?

Lots. For instance:

  • No root means no messing about with the contents of /etc, with SELinux / AppArmor policies, the firewall, etc.
  • Limiting the filesystems where packages can be installed
  • Making sure the places where packages can be installed are mounted nosuid / nodev
  • Between all the above, it's notably harder to actually damage a system
  • User-specific changes are isolated to a user filesystem, so the rest of the OS can be upgraded / replaced with (hopefully) minimal effect on the user's customizations
  • etc, etc

Anyway, the point is, there are lots and lots of administrative advantages to limiting user customizations to limited areas and to stuff that requires no privs. Heck, I do this on my own workstation, where I do have full privs.

-- Pat

GNU Guix launches

Posted Nov 27, 2012 16:47 UTC (Tue) by pspinler (subscriber, #2922) [Link]

one other thought

> I'm not advocating that "the dogs run free". The dogs do run free, however, when technical measures to achieve their goals have been exhausted and they adopt social or political measures to achieve them in another way.

I think this logic is faulty. To use an analogy: "they're going to break security anyway, so why do XXXXX ...". The point isn't to be perfect; you can't ever be. The point is to put layers in place, each of which adds something toward the final goal.

This applies to procedures and people as much as to systems and security.

So, if, at a corporate level, I want people to comply with certain policies to protect what the company sees as its best interest, then yes, one layer will be technical restrictions of various sorts. Other layers will include policy manuals and websites, required annual training, easy contact points to the sysadmins and policy makers, scanning software, proxies and filtering software, and so on.

Sure, people will still work around that, but with stuff like this in place they have to think about it, and hopefully it brings what they're doing to other people's attention. This is a good thing: it might mean that their solution gets adopted, that procedures get changed, or that an actual stupid thing gets squashed.

To use your example: "Oh, Janet installed Inkscape! Hmmm ... do people need to be creating SVGs? Maybe we need to look at a wider solution for that. Oh, Fred installed a web server, and look, the logs show a bunch of external hits, uh oh, we need to squash that ..."

My personal philosophy: people doing stuff isn't necessarily good or bad, but people doing stuff in isolation is most definitely bad. Corollary: people are lazy, and if they don't have to do something (like, say, tell someone else and document it), they won't. And yes, I'm like this, too. :-)

-- Pat

GNU Guix launches

Posted Nov 27, 2012 1:35 UTC (Tue) by pspinler (subscriber, #2922) [Link]

> I'm not really arguing against systems administration policies, but it really baffles me that instead of entertaining and encouraging workable compromises and perhaps loosening the leash on captive users, organisations and developers would rather keep the leash as tight as possible even though history shows that the leash will break as a result.

The issue is what management is willing to pay for. Sure, I can act as issue catcher and general hand-holder, but that takes a lot more of my time, and the number of machines each sysadmin can cover goes down. Management looks at that and says "what with all this automation, what's the problem?"

Ergo, we took a pretty firm stance on what we'd allow users to do on systems we administered, and management, by policy, required that production apps run on systems that were officially administered.

Our compromise was that we still gave people sandboxes to play in, where they could do anything to the system. We just took a very strict hands-off approach to those sandboxes, and frowned mightily on using free-play sandboxes for serious production work.

-- Pat

GNU Guix launches

Posted Nov 27, 2012 1:55 UTC (Tue) by dlang (subscriber, #313) [Link]

Having worked at a company where things devolved a couple of times into "no restrictions" mode, I can say that production reliability suffers drastically as well.

everybody thinks that they are above average in their ability to decide what to run, and this gets people in trouble.

Preventing them from installing packages doesn't solve all the problems by any means, but it does put people on notice that they aren't supposed to be doing that.

This sort of problem doesn't scale linearly with the number of users either. If you need one admin to help 5 people, 5 admins can't keep up with 25 people. It seems like they should be able to, but in practice they can't.

If you aren't willing to live with this sort of restriction on your work computer, find a job at another company, and be prepared to do so again in a few years as that company either grows and starts to implement restrictions, or goes the other direction.

Or become valuable enough that they make exceptions to their policies for you, but this takes having a track record of doing things right and not causing problems.

GNU Guix launches

Posted Nov 27, 2012 15:35 UTC (Tue) by lambda (subscriber, #40735) [Link]

If you don't allow people to install their own packages, they will just download, compile, and install them into their own directory (or run software written in scripting languages that don't require compilation), and now they have an outdated copy sitting in their home directory that's hard to update, and it's hard to find out that they're even doing this without looking.

Why is it so threatening for users to be able to run their own software? They will do it anyhow; providing a framework for them to do so, one that shares dependencies and keeps a central database the administrator can audit, so people can be told when they need to upgrade because of security issues (or be forcibly upgraded if need be), seems a lot preferable to having random pieces of software in who knows what state scattered around in home directories.

Why is PHP so popular? It's certainly not its technical merits. One reason is that a user can just untar an application in their site directory and it will work; they don't have to ask a sysadmin to install it for them, request that their hosting provider install it and wait 3 months for it to actually happen, or the like. Nowadays environments like Rails are similar; there's a standard interface to the web server, and you can bundle all of your dependencies in with your application (other than the version of Ruby itself), so you can just stick a directory tree in an appropriate place on your server and it will just work.

People install their own packages all the time in this manner. Why should this be restricted to web apps written in dodgy languages like PHP (or somewhat less dodgy environments like Ruby on Rails)?

GNU Guix launches

Posted Nov 27, 2012 17:49 UTC (Tue) by davidescott (guest, #58580) [Link]

> If you don't allow people to install their own packages, they will just download, compile, and install them into their own directory (or run software written in scripting languages that don't require compilation), and now they have an outdated copy sitting in their home directory that's hard to update, and it's hard to find out that they're even doing this without looking.

In an ideal world, yes, that would be better; but you are assuming that all software that might be installed through the package manager is regularly updated. What if the user installs something from a dead or dying project? The package might not be out of date (because no new release is forthcoming), but the sysadmin still needs to know enough about the program to know whether it is a security risk.

Requiring explicit permission from root to install anything ensures that anyone who circumvents root's authority to approve/deny software installs is clearly doing something wrong. If it's too urgent to bring through the normal approval channels and they screw up the install and leave a security hole, then you can fire them. If they aren't confident that the tools they want to install are safe, then they can do it the slow way with approved tools.

> Why is it so threatening for users to be able to run their own software? They will do it anyhow

Part of the problem is that you and I are talking about us. We know how to ./configure --prefix=...; make; make install; so WE can circumvent the policy, but WE are also fairly capable of distinguishing good, safe software from bad, unsafe software; WE try to keep track of what we are doing, WE remove stuff we don't need, WE keep our software up to date, and WE appreciate having a tool to automate that process.

I'm not concerned about us, I'm concerned about THEM. The THEM who don't know a phishing scam from a real email, the THEM who think ftp is secure. I don't want THEM installing software. I want THEM to bring a use case forward, and a candidate application for installation so that people like us can guide them in finding the best supported way of accomplishing their goals.

GNU Guix launches

Posted Nov 28, 2012 1:00 UTC (Wed) by hummassa (subscriber, #307) [Link]

It's a simple fallacy to separate "we" (us?) from "them". We sometimes click on wrong links. We drive to the wrong neighborhood. One who has root can still veto some installed package, or upgrade it and force the upgrade into the users' profiles. The facility here is that, instead of downloading a tarball and doing ./configure; make; make install, the user apt-gets (nixes, guixes) it from a repository where things are better controlled.

GNU Guix launches

Posted Nov 28, 2012 13:47 UTC (Wed) by pboddie (guest, #50784) [Link]

You made my point much more concisely than I managed to do. :-)

Again, it's a matter of whether one can concede a degree of control in order to maintain a degree of supervision, or whether people will eventually feel obliged to break out and go to external entities for the goodies, leading to all sorts of recriminations afterwards (especially if something went wrong).

GNU Guix launches

Posted Nov 27, 2012 23:54 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

> If you don't allow people to install their own packages, they will just download, compile, and install them into their own directory (or run software written in scripting languages that don't require compilation), and now they have an outdated copy sitting in their home directory that's hard to update, and it's hard to find out that they're even doing this without looking.

Assuming you have users who are capable of compiling from source or who have scripting languages installed on their machine. And that kind of user is probably capable enough that they ought to be given some kind of control over the software on their system, if for no other reason than that you can't stop them anyway.

But those users are not the only kind that sysadmins need to be worried about. Put bluntly, some users really shouldn't be allowed to put software on their own machines. Not every user is capable and trustworthy enough to be given full control over their machine. Some systems contain sensitive information that must be protected from disclosure for legal or contractual reasons, and those machines really should be running only authorized, vetted software. Other machines may be provided in specific places for narrowly tailored purposes, like information kiosks, and should be running only software intended for that purpose. Real world admins need to be able to deal with those kinds of users and situations, and there should be tools that allow them to lock down machines to prevent unauthorized software from being run on them.

GNU Guix launches

Posted Nov 26, 2012 17:19 UTC (Mon) by vonbrand (guest, #4458) [Link]

Even more: setting system-wide policy for the machine on which the company/department/project runs is very sensible; setting policy on what I run on my own machine is moot. As the power of hardware increases while prices plummet, this "any old user can install whatever they want" worry makes less and less sense, because she is most probably the only user of the machine anyway.

GNU Guix launches

Posted Nov 27, 2012 14:54 UTC (Tue) by lambda (subscriber, #40735) [Link]

You can't think of situations in which you don't want to give a user root, but want them to be able to install software? How about a corporate server, shared by several users. You don't want one user to be able to read and write another's files without permission, so you can't give them root, but to get their job done, they need to install various packages, and the sysadmins don't have the bandwidth to respond immediately to each and every request for an updated version of package X.

Or how about shared hosting? People install their own software all the time in shared hosting environments; but it's generally tarballs of PHP that they expand into their home directory, and without package management, it's always out of date and insecure. If I want to do my web programming in a language like Go, I need to compile it and its dependencies, carefully configuring them to install into my home directory, and again, I don't get package management so upgrades are a pain.

Just because you think that there might be good reasons not to allow a user to install their own software (though I'm pretty skeptical of those) doesn't mean there aren't situations in which you want a user to be able to install their own software, but don't want to give them root.

GNU Guix launches

Posted Nov 27, 2012 18:55 UTC (Tue) by davidescott (guest, #58580) [Link]

I don't think we were saying it was NEVER a useful feature, just that it was not a particularly useful feature, and certainly not for the workplace environments mentioned in the initial comment.

Shared hosting, yes, because the provider in shared hosting isn't really an admin. The provider can never say "NO" and has no responsibility to ensure that individual applications work. Their only real responsibility is to make sure the lights are on.

Corporate environments, no, because the server admins are required to approve deployment, which is directly connected to their responsibility to keep servers updated and to verify that security updates don't break any deployed applications.

For something that advertises itself as a "free software distribution of the GNU system", user-installed packages seem a less useful feature.

GNU Guix launches

Posted Nov 28, 2012 13:58 UTC (Wed) by pboddie (guest, #50784) [Link]

Well, workplace environments are perhaps the most obvious example, or at least the first one I could think of, where for certain classes of user it would be very useful to be able to install packages as a non-root user. But shared hosting is another good example: my Web host will let me compile stuff, but if I wanted to install a system package in my area, it would be incredibly difficult (if not technically impossible). That this arrangement could potentially lead to me running outdated or insecure software is surely something that the hosting provider would rather avoid. Interestingly, SourceForge has started to discontinue various centrally-maintained hosted applications in favour of people installing their own versions in their own hosting areas: a concrete example of this counterintuitive trend in action.

Of course, I could choose to use a virtual private host instead, or something that provides some lighter form of virtualisation - perhaps OpenVZ or Linux-VServer - but I have to admit that I wouldn't know whether the latter solutions would necessarily give me access to package installation tools or whether I'd still need to bother the central administrators.

GNU Guix launches

Posted Nov 28, 2012 21:41 UTC (Wed) by dlang (subscriber, #313) [Link]

running in a container (OpenVZ or Linux-VServer) should give you all the package management capabilities you would have running your own system, with the one exception that you would not be able to change the kernel.

Running a full VM will give you that capability as well.

GNU Guix launches

Posted Nov 29, 2012 4:01 UTC (Thu) by idupree (guest, #71169) [Link]

> a) The all software must follow corporate policy.

Somehow the GNU su story makes me think GNU isn't interested in that: https://www.gnu.org/software/coreutils/manual/html_node/s...

GNU Guix launches

Posted Nov 26, 2012 14:05 UTC (Mon) by vonbrand (guest, #4458) [Link]

"User installable packages" smells awfully of Plan 9...

GNU Guix launches

Posted Nov 27, 2012 12:21 UTC (Tue) by coriordan (guest, #7544) [Link]

I've no idea how awful Plan 9 smells, but if you think that this sort of idea hasn't been thought of a thousand times then you might have the special mindset necessary to get a job in the patent office :-)

GNU Guix launches

Posted Nov 27, 2012 13:01 UTC (Tue) by vonbrand (guest, #4458) [Link]

No slur to Plan 9 meant: It is just that in Plan 9 every user can tailor exactly what view of the filesystem she gets. So you can keep a bunch of legacy/experimental stuff lying around to mix and match.

Leads to the proverbial combinatorial explosion, for sure; but if they ask for it (and are willing to pay the rather steep price), who's to complain?

GNU Guix launches

Posted Nov 27, 2012 14:05 UTC (Tue) by pboddie (guest, #50784) [Link]

Well, lots of people are "willing to pay the rather steep price" given the scale on which Java libraries and applications are distributed and acquired, and given the proliferation of various language-specific packaging mechanisms.

I find it odd that people don't want to let users install system packages in an unprivileged fashion but will entertain stuff like Apache Maven pulling hundreds of .jar files down from apache.org or some random mirror.

Denied access even to a potentially limited selection of official packages, people will probably go off and download all sorts of random things from the Internet instead. The readership here may be experts at finding the official upstream locations for software, but I suggest that they observe non-experts searching the Internet to see how the typical experience turns out.

GNU Guix launches

Posted Nov 25, 2012 19:58 UTC (Sun) by idupree (guest, #71169) [Link]

ROADMAP and TODO are in http://git.savannah.gnu.org/cgit/guix.git/tree/ (Source Code menu --> Browse Sources Repository; then click on "tree": it took me a little while to find so I'm sharing.)

Investment in resources

Posted Nov 25, 2012 20:48 UTC (Sun) by brianomahoney (guest, #6206) [Link]

There are a number of FOSS areas where developers are forming competing products to do the same thing. Some bring great benefit to the community, e.g. Android, OpenOffice v. LibreOffice, gcc v. llvm ... but some bring very little, such as the never-ending graphical churn in GNOME and also KDE, both of which are now poorly documented and in which bugs take a long time to squash, if they are not declared features.

With smartphones and BYOD, things are moving quickly beyond the desktop, and in the new space Microsoft has no essential competitive advantage and little chance of locking in the mobile space. BUT some pieces are obviously missing in the free userspace stack, and it would be helpful if the community considered what needs to be done. Certainly we need an Outlook/Exchange replacement in the transition.

Investment in resources

Posted Nov 26, 2012 10:01 UTC (Mon) by dgm (subscriber, #49227) [Link]

Is this some sort of automated comment? It is completely unrelated to the article at hand.

Investment in resources

Posted Nov 30, 2012 20:25 UTC (Fri) by speedster1 (guest, #8143) [Link]

The grandparent comment got misplaced; it must have been intended for another discussion (the poster probably had two LWN tabs open and typed the comment into the wrong one).

GNU Guix launches

Posted Nov 26, 2012 2:10 UTC (Mon) by jcm (subscriber, #18262) [Link]

You know, I was thinking to myself earlier that the world would be so much better if only there were one more distribution with one more package manager. Perhaps, one day, we can reach a state wherein there exists one distribution and package manager per potential user, thus obviating any possibility of mainstream acceptance.

Another day, another package manager

Posted Nov 26, 2012 12:17 UTC (Mon) by man_ls (guest, #15091) [Link]

I assume that you think that package managers are already feature-complete and perfect for every need out there. I don't. Debian's APT for instance might gain a few of those new capabilities such as transactional upgrades and roll-backs. APT has improved a lot over the years by adding new features invented elsewhere.

The same is valid for distributions: there are many good ones but they are not perfect. That said, the effort of packaging a lot of software is quite a burden, so it would be nice if Guix explored automatic packaging for GNU software, or something like that.

Another day, another package manager

Posted Nov 26, 2012 14:12 UTC (Mon) by vonbrand (guest, #4458) [Link]

Please explain how "transactional updates" and "rollbacks" are supposed to work. Sure, in the complete absence of bugs and of requirements to tweak changed configuration files, they are easy. But (as always) practice is mighty different from theory. They aren't in mainstream package tools (APT, RPM), and not for lack of imagination by their authors.

Another day, another package manager

Posted Nov 26, 2012 15:08 UTC (Mon) by man_ls (guest, #15091) [Link]

Precisely: as practice is different from theory we need someone to do a real implementation, so they can shake out the bugs and identify the weak spots. Then the required work can be done on mainstream package managers.

It is not lack of imagination, but lack of experience, that usually holds back improvements. I imagine that transactional databases were also seen as impractical at some point, but are now easy to implement.

Another day, another package manager

Posted Nov 26, 2012 17:48 UTC (Mon) by vonbrand (guest, #4458) [Link]

My point being that it has been considered (and tried) many, many times over; the real-world benefits turn out to be very scanty, while the difficulties in getting it really right are humongous. I.e., it just isn't worth doing.

Another day, another package manager

Posted Nov 26, 2012 17:58 UTC (Mon) by man_ls (guest, #15091) [Link]

You asked me to explain how it is supposed to work. Well, the use case is easy to explain:
  1. The system works fine.
  2. User upgrades a set of packages.
  3. The system does not work anymore.
  4. User wants to roll back to state 1.
Given that there is a real need to do this sort of thing, and that people are resorting to filesystem checkpointing to get it done, I think it is worthwhile to try it again. Also, apparently this is a solved problem with Nix, so the GNU project is not experimenting, just giving the solution a real-world framework.
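As a rough sketch of how that use case can be made to work (the directory layout and file names below are invented for illustration; real Nix/Guix profiles record far more than a package list), each upgrade builds a new, immutable generation, and "current" is just a pointer that can be flipped back:

  import os, pathlib

  PROFILE = pathlib.Path("/tmp/demo-profile")

  def switch(generation: int) -> None:
      link = PROFILE / "current"
      if link.is_symlink():
          link.unlink()
      link.symlink_to(PROFILE / f"generation-{generation}")

  def upgrade(packages: list[str]) -> int:
      # Step 2: build a brand-new generation instead of mutating the old one,
      # so the previous state stays intact on disk.
      gens = sorted(int(p.name.split("-")[1]) for p in PROFILE.glob("generation-*"))
      new = (gens[-1] + 1) if gens else 1
      gen_dir = PROFILE / f"generation-{new}"
      gen_dir.mkdir(parents=True)
      (gen_dir / "packages.txt").write_text("\n".join(packages))
      switch(new)
      return new

  def rollback() -> None:
      # Step 4: flipping the "current" symlink back is the entire rollback.
      current = int(os.readlink(PROFILE / "current").rsplit("-", 1)[1])
      switch(current - 1)

  upgrade(["emacs-24.2"])     # generation 1: the system works fine (step 1)
  upgrade(["emacs-24.3"])     # generation 2: the upgrade (steps 2-3)
  rollback()                  # back to generation 1 (step 4)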

Another day, another package manager

Posted Nov 26, 2012 20:40 UTC (Mon) by vonbrand (guest, #4458) [Link]

The real scenario is more like "System works fine (at least it looks that way), user updates something, does random stuff, screws around with some configuration, after boot something is now broken." Roll back what? The update? Which of the 200 package updates since last boot? The configuration change(s)? The changes in other, perhaps (or not) related data? Local configuration in the user's account?

As I said, it is definitely never that simple.

Not the cure for cancer

Posted Nov 26, 2012 21:23 UTC (Mon) by man_ls (guest, #15091) [Link]

Way to be positive, man! At least with a transactional package manager you can roll back the 200 package updates since last boot and discard that factor. Or confirm it; and in that case you can even bisect which package update caused the wreckage.

And yes, before you say anything: a transactional package manager will not protect you if robbers come to your house, or the CIA confiscates your hard drive, or lightning disintegrates the sodding computer. It is a limited tool with limited uses, which savvy administrators can use to their advantage.

Another day, another package manager

Posted Nov 26, 2012 22:16 UTC (Mon) by zlynx (subscriber, #2285) [Link]

I think filesystem checkpointing is probably the only real way to do this.

Doing it in the package manager is only coming up with a weak version of it anyway.

Package systems that attempt it need to do things such as track file changes made in scripts. It is much easier to let the filesystem do this for you.

To make a completely reliable system, you probably need separate filesystems for / and /usr, for /etc, and for /home/$USER/.local. That way, to solve a problem you can roll back the binary executables, the system configuration files, and the user's configuration files individually.

Btrfs subvolumes seem to work really well for this.
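A rough sketch of the checkpoint half of that approach (the subvolume layout, mount points, and the /.snapshots location are assumptions for illustration, and it needs root plus btrfs to actually run) might look like:

  import pathlib
  import subprocess
  from datetime import datetime

  # Assumed subvolume mount points: system, system configuration, user configuration.
  SUBVOLUMES = ["/", "/etc", "/home/alice/.local"]
  SNAPDIR = pathlib.Path("/.snapshots")

  def checkpoint(tag: str) -> None:
      SNAPDIR.mkdir(exist_ok=True)
      for subvol in SUBVOLUMES:
          name = subvol.strip("/").replace("/", "-") or "root"
          # A read-only btrfs snapshot is cheap (copy-on-write) and gives each
          # area its own independent rollback point.
          subprocess.run(["btrfs", "subvolume", "snapshot", "-r",
                          subvol, str(SNAPDIR / f"{tag}-{name}")], check=True)

  # Run just before a package upgrade; rolling back one area then means
  # restoring only that subvolume from its snapshot.
  checkpoint(datetime.now().strftime("pre-upgrade-%Y%m%d-%H%M%S"))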

Another day, another package manager

Posted Nov 27, 2012 11:37 UTC (Tue) by hummassa (subscriber, #307) [Link]

Filesystem checkpointing has the disadvantage that it takes your data away with it (unless you do it in a very contrived way, so that you can separate what is "user configuration" from what is "user data")...

GNU Guix launches

Posted Nov 26, 2012 9:03 UTC (Mon) by akeane (guest, #85436) [Link]

>Happy hacking, geeks! :-)

Die a thousand deaths :-)

I can't work out if this is the most sophisticated and subtle troll I have ever seen or this so-called "Ludo" is just having his "day out in the community"

Ah man, even the GNU people have lost it, from now on I will only deal in ones and zeros written by myself, and I really, really, hat1100100101001101001001001111000000000000000001010101010100100100

zero-me-do

GNU Guix launches

Posted Nov 26, 2012 10:07 UTC (Mon) by oever (subscriber, #987) [Link]

A presentation on Guix which lists the reasons to not use Nix:
so what’s the point of Guix?
  • keeping Nix’s build & deployment model
  • using Scheme as the packaging language
  • adding GNU hackers to the mix
why Guile Scheme instead of the Nix language?
  • because it rocks!
  • because it’s GNU!
  • it has a compiler, Unicode, gettext, libraries, etc.
  • it supports embedded DSLs via macros
  • can be used both for composition and build scripts
I do not know Nix well enough to know if these improvements justify not using plain Nix.

GNU Guix launches

Posted Nov 26, 2012 10:44 UTC (Mon) by macc (guest, #510) [Link]

Scheme?
Because we want to stay in our little niche
and not be bothered by the Danes.

