
Pulling Linux up by its bootstraps

By Daroc Alden
July 31, 2024

A bootstrappable build is one that builds existing software from scratch — for example, building GCC without relying on an existing copy of GCC. In 2023, the Guix project announced that it had reduced the size of the binary bootstrap seed needed to build its operating system to just 357 bytes — not counting the Linux kernel required to run the build process. Now, the live-bootstrap project has gone a step further and removed the need for an existing kernel at all.

The live-bootstrap project was started in 2020 by Samuel Tyler (also known as "fosslinux") as a way to automate a complete bootstrap of a modern Linux system. Since then, Tyler has been joined by Andrius Štikonas and Gábor Stefanik as co-maintainers, along with 17 other contributors. The project's goal is to create a usable system "with only human-auditable, and wherever possible, human-written, source code". The project pulls in a number of other pieces of software, from bootstrapping tools like stage0-posix and GNU Mes, to historical versions of software that are necessary to build their more modern counterparts, such as GCC version 4.0.4 (the most recent version that does not require a C++ compiler to build). The whole process of bootstrapping a system is automated, making it possible to run automatic tests as new steps are added or software is updated.

The process

Running through the bootstrapping process is remarkably straightforward; the first step is to clone the project's Git repository in order to obtain the necessary sources. If Git is not available, it is also possible to download the release tarball for the project and its submodules from GitHub. The project does use a large number of submodules in order to incorporate other bootstrapping software, so using a recursive clone is the most convenient method:

    git clone --recursive https://github.com/fosslinux/live-bootstrap

The repository contains a tool called rootfs.py that can be used to run the entire bootstrapping process. There are a few different configurations to make contributing to, or simply using, the project easier: building in a chroot environment, building in a virtual machine using QEMU, or building on bare metal. For chroot and QEMU builds, rootfs.py oversees the whole process; for a bare-metal build, it assembles a bootable disk image. The simplest way to run everything is ./rootfs.py --qemu, but running a moderately complex script does not seem like the best way to build confidence in a bootstrapped system.

Luckily, using rootfs.py is entirely optional — the repository includes instructions for building an equivalent bare-metal disk image by hand. The first step is to run ./download-distfiles.sh, which downloads release tarballs for the pieces of software used in the bootstrap that are not included as Git submodules. Then, one hand-assembles the Builder-Hex0 minimal kernel, places it at the beginning of a new disk image, and concatenates all of the necessary files and downloaded sources onto the image in a particular format: for each file, a plain-text header of the form "src <number of bytes> <path of file>\n" followed by the content of the file. The result is a disk image consisting mostly of compressed source tarballs that should run on any x86 machine with enough memory. The project recommends having 4GB of memory and, ideally, multiple cores available.

A minimal kernel

The whole process starts with the Builder-Hex0 kernel. Started by Rick Masters in 2022, Builder-Hex0 is a minimal 32-bit kernel. Its sole purpose is to be small enough to be verified by hand, and yet able to run the shell scripts that direct the first phase of the live-bootstrap build. The full kernel runs to 2682 lines of commented machine code, but it can be built and loaded by a bootloader that fits in a single 512-byte disk sector.

The bootloader uses BIOS calls to read the human-readable sources from disk, strip out comments, and convert hexadecimal numbers into raw bytes. It assembles those bytes at a fixed address and then jumps to that address. The kernel itself has a built-in shell that it uses to interpret a shell script that follows it on the disk. The kernel eschews niceties such as a disk-based filesystem or the ability to run multiple programs at once. Instead, the shell script manually creates an in-memory filesystem containing the sources necessary for the next step, assembles them, and runs the result.
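The comment-stripping and hex-decoding step can be modeled compactly. This sketch is only a Python analogue of what the 512-byte bootloader does in machine code; the exact comment syntax shown here (';' and '#') is an assumption for illustration:

```python
def hex0_to_bytes(text):
    """Toy model of a hex0-style source file: ';' and '#' begin comments
    that run to end of line; every remaining pair of hex digits becomes
    one byte of output."""
    digits = []
    for line in text.splitlines():
        # drop comments (assumed markers, for illustration only)
        for marker in (";", "#"):
            if marker in line:
                line = line.split(marker, 1)[0]
        # keep only hexadecimal digits; whitespace is ignored
        digits.extend(c for c in line if c in "0123456789abcdefABCDEF")
    # pack digit pairs into bytes
    return bytes(int(hi + lo, 16) for hi, lo in zip(digits[0::2], digits[1::2]))
```

For example, hex0_to_bytes("90 ; nop\nC3 # ret") yields the two bytes 0x90 and 0xC3.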

Toward a larger kernel

At this point, the computer begins building stage0-posix, a set of increasingly capable assemblers and shells that eventually culminate in being able to build basic filesystem tools such as mkdir and chown. That is enough to build Mes — a Scheme interpreter written in C, and a C compiler written in Scheme. Mes's Scheme interpreter is written using only a handful of C features, so that it can be compiled by a macro assembler from stage0-posix. Mes's C compiler is not a fully conformant C compiler, but it can build Fabrice Bellard's Tiny C Compiler (tcc), which, in turn, can build Fiwix, an OS kernel that aims for compatibility with Linux 2.0 system calls.

Fiwix supports luxuries such as preemptive multitasking, filesystems, and virtual memory. The final steps of the Builder-Hex0 kernel's shell script are to build an ext2 filesystem image for Fiwix to use, place it in memory, and then jump to the Fiwix kernel.

A chain of legacy software

Equipped with a POSIX-compliant kernel and a mostly compliant C99 compiler, the rest of the build process is a matter of building older versions of various open-source projects until GCC can be built. The most complicated part is building musl. The C library that tcc initially links against is the one provided by Mes, which, among other things, cannot handle floating-point numbers. Fixing that is a multi-step process that involves building musl, rebuilding tcc, rebuilding musl, and then rebuilding tcc once more.

Eventually, however, the system can build Perl 5 (which is used by GCC's build system), and then GCC 4.0.4. GCC can build Linux 4.14.341-openela — a long-term support version maintained by the Open Enterprise Linux Association. Then Fiwix can kexec the new Linux kernel. From there, the project builds a series of successively newer versions of Perl, Python, GCC, and their dependencies, culminating in a minimal but usable Linux user space with GCC 13.1.0, Python 3.11.1, and a handful of other necessary tools and libraries.

Overall

In total, the process takes many hours and a good deal more CPU time than seems entirely reasonable. At one particularly frustrating point, the computer I was testing the bootstrapping process on ended up running a lengthy build using kaem, a minimal shell from the stage0-posix project that shows no progress indication. I had to take it on faith that the computer had not, in fact, hung, but it did eventually make progress.

With the Linux kernel removed from the set of bootstrapping requirements, it is finally possible to definitively lay to rest the worries raised by Ken Thompson's "Reflections on Trusting Trust" Turing Award lecture. David Wheeler described a technique, diverse double-compilation, that uses a trustworthy compiler (such as the one produced by the bootstrapping process) to check whether another compiler contains a trusting-trust backdoor. But in reality, the increasing attention paid to reproducible and bootstrappable builds has, over the past several years, made it increasingly unlikely that a trusting-trust attack could persist unnoticed.
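Wheeler's technique can be illustrated with a toy model. Nothing below is real build machinery: compiler binaries are modeled as strings, and a "!" suffix stands in for a self-propagating backdoor that re-inserts itself whenever the compiler recognizes its own source.

```python
COMPILER_SRC = "cc.c"  # the audited compiler source (illustrative name)

def run_compiler(binary, source):
    """Toy model of running a compiler binary on some source. A binary
    tagged with '!' models a trusting-trust backdoor: it silently re-adds
    the tag whenever it compiles the compiler's own source."""
    output = "bin(%s)" % source
    if binary.endswith("!") and source == COMPILER_SRC:
        output += "!"
    return output

def ddc_check(suspect, trusted):
    """Diverse double-compilation: build the compiler from source with both
    the suspect binary and an independent trusted one, then have each
    stage-1 result rebuild the same source. For an honest suspect, the two
    stage-2 binaries must be bit-for-bit identical."""
    stage1_s = run_compiler(suspect, COMPILER_SRC)
    stage1_t = run_compiler(trusted, COMPILER_SRC)
    return run_compiler(stage1_s, COMPILER_SRC) == run_compiler(stage1_t, COMPILER_SRC)
```

Because the backdoor reproduces itself only along the suspect path, the two stage-2 binaries differ and the comparison exposes it; for an honest compiler, both paths converge to identical bits.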

The real benefit of bootstrappable builds comes from a few things. Like reproducible builds, they can make users more confident that the binary packages downloaded from a package mirror really do correspond to the open-source project whose source code they can inspect. Bootstrappable builds have also had positive effects on the complexity of building a Linux distribution from scratch — such as by convincing critical GNU projects to make releases that are compressed with gzip, instead of XZ, to cut XZ out of the complicated web of interdependent software that underlies a modern Linux user space.

But most of all, bootstrappable builds are a boon to the longevity of our software ecosystem. It's easy for old software to become unbuildable. By having a well-known, self-contained chain of software that can build itself from a small seed, in a variety of environments, bootstrappable builds can help ensure that today's software is not lost, no matter where the open-source community goes from here.




Why not Forth?

Posted Jul 31, 2024 19:29 UTC (Wed) by Wol (subscriber, #4433) [Link] (6 responses)

I'm surprised they didn't start with Forth! How big is a minimal Forth engine that can start pulling in and compiling source? A few KB (if that) of assembler, and you can build a big system pretty quick!

I'm not sure how big the ROM in my Jupiter Ace was, but it wasn't much ...

Cheers,
Wol

Why not Forth?

Posted Jul 31, 2024 19:59 UTC (Wed) by daroc (editor, #160859) [Link] (3 responses)

There are quite small Forths available — I've come across sectorforth, which is less than 512 bytes, but I'm sure there are many others. But I admit it didn't occur to me to ask that question when putting the article together. I briefly corresponded with one of the maintainers, so I'll pass on the question and see if they're willing to provide an answer.

Why not Forth?

Posted Aug 1, 2024 10:16 UTC (Thu) by dottedmag (subscriber, #18590) [Link]

Where is the discussion taking place usually?

Some ideas to bounce that _could_ cut down on the amount of bootstrap work:

- Throw away problematic configuration/build systems, especially for old fixed version of software. Their complexity comes from their portability, and here the target is pretty much nailed down. A particular approach that worked well for me to trim down compilation dependencies is to run a configuration script, run the compilation, record all the compilation steps and make a shell file to play them back. This approach has a benefit of having zero logic in the resulting build script, and no maintenance burden for fixed software versions. Another benefit for C and especially C++ software is that a ton of separate compilation commands may be merged into one, and that improves compilation speed.

- Do the "good enough" implementation of various tools in assembly/whatever language is able to issue syscalls to short-circuit their dependencies.

Why not Forth?

Posted Aug 1, 2024 14:15 UTC (Thu) by Wol (subscriber, #4433) [Link]

I just downloaded Going Forth (I used to have a paper copy years ago), and a quick skim gave me the impression a minimal Forth engine fits inside 512 bytes with ease. Everything else is just pulling in text files to build a fully capable Forth environment, which should be able to build a C compiler pretty quickly - even a somewhat complex C compiler.

Cheers,
Wol

Why not Forth?

Posted Aug 1, 2024 16:33 UTC (Thu) by salewski (subscriber, #121521) [Link]

I, too, am interested in hearing whether a tiny Forth -- or more broadly, whatever else -- was considered. E.g. sectorlisp?

https://justine.lol/sectorlisp2/
https://github.com/jart/sectorlisp

Why not Forth?

Posted Aug 1, 2024 15:02 UTC (Thu) by oriansj (guest, #172721) [Link] (1 responses)

We did bootstrap several FORTHS.

Only Virgil Dupras ran with it to create duskOS and collapseOS.

They just don't provide a path to Linux and GCC yet.

But they are quite excellent FORTH bootstraps and for anyone interested in FORTH bootstrapping I do recommend them heavily.

Personally, I found assembly and C simpler and more fun to work in. We only work on what we feel is fun and worth doing.

Why not Forth?

Posted Aug 1, 2024 16:12 UTC (Thu) by Wol (subscriber, #4433) [Link]

> Personally, I found assembly and C simpler and more fun to work in. We only work on what we feel is fun and worth doing.

If you're not paid, why do anything else? :-)

(That said, I think you can put assembly directly into Forth, no problem :-) (It's the getting your head round RPN and the fact everything is back to front is the problem :-)

Cheers,
Wol

Proof of DDC

Posted Jul 31, 2024 22:49 UTC (Wed) by Foxboron (subscriber, #108330) [Link] (3 responses)

I think it's worth pointing out that the Reproducible Builds project / Bootstrappable Builds project did manage to reproduce the Mes C compiler between 3 different Linux distros and arrived at the same checksum.

To our knowledge that is the first non-academic proof of DDC.

https://reproducible-builds.org/news/2019/12/21/reproducible-bootstrap-of-mes-c-compiler/

Proof of DDC

Posted Jul 31, 2024 23:16 UTC (Wed) by ms-tg (subscriber, #89231) [Link] (1 responses)

Apologies, I'm trying to figure out what DDC stands for here, is it this?
* [Diverse Double Compiling](https://dwheeler.com/trusting-trust/dissertation/html/whe...)

Proof of DDC

Posted Jul 31, 2024 23:49 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

>Apologies, I'm trying to figure out what DDC stands for here, is it this?
>* [Diverse Double Compiling](https://dwheeler.com/trusting-trust/dissertation/html/whe...)

Yep, that's linked from the news post referenced.

Proof of DDC

Posted Aug 1, 2024 20:00 UTC (Thu) by crhodes (subscriber, #28923) [Link]

> To our knowledge that is the first non-academic proof of DDC.

The SBCL Common Lisp system is built, in theory, by arbitrary Common Lisp compilers, and is (again in theory) written in portable Common Lisp. In 2014 this was demonstrated by having the built system be bitwise-identical independent of which compiler was used to build it: http://christophe.rhodes.io/notes/blog/posts/2014/reprodu...

Practicalities

Posted Aug 1, 2024 12:01 UTC (Thu) by Phantom_Hoover (subscriber, #167627) [Link] (13 responses)

On the one hand, this is extremely cool, and I love it. But the way it all rests on BIOS, which is a bunch of opaque firmware code and also is an obsolete compatibility mode for vastly more complex modern UEFI firmware does make it feel somewhat built on sand, and it makes me seriously question any claimed practical value of this work. While it’s eminently possible to put a Thompson backdoor in any given toolchain, the ‘hermetically sealed’ version that can infect every practical route to bootstrapping a clean toolchain, *and* every auditing tool that might pick up the extremely pervasive backdoors required to do so, seems very implausible to me. I think practically we can already have a lot more confidence in the integrity of gcc and linux binaries than in firmware and hardware, so tightening up the former even more isn’t winning any more real security.

Practicalities

Posted Aug 1, 2024 12:25 UTC (Thu) by dottedmag (subscriber, #18590) [Link] (1 responses)

Not much prevents this from being ported to arm64 or RISC-V and being run on CPU with no supervisor mode, except the sheer amount of work. Then one has to trust silicon, of course.

Practicalities

Posted Aug 1, 2024 13:39 UTC (Thu) by somlo (subscriber, #92421) [Link]

> Not much prevents this from being ported to arm64 or RISC-V and being run on CPU with no supervisor mode

You might find this interesting: https://archive.fosdem.org/2023/schedule/event/rv_selfhos...
(disclaimer: I'm the author)

It's a (functional, albeit slow) proof of concept for equating the trustability of a running computer to that of a bounded, finite set of software, hardware, and toolchain sources.

It boots Fedora (so no reason not to keep supervisor mode :) ) and can run yosys/nextpnr to build bitstream for its own underlying FPGA.

There's an argument that using FPGAs trades performance in exchange of removing the ASIC foundry's ability to predict where on the die it could insert a silicon-based backdoor...

Practicalities

Posted Aug 1, 2024 12:59 UTC (Thu) by fosslinux (guest, #172718) [Link] (5 responses)

(hey, an author of live-bootstrap here!)

I totally agree that BIOS + UEFI is a gigantic mess that I think most in the bootstrappable community would consider very difficult to trust. The blocker here is that live-bootstrap currently only supports x86, which obviously has a hard BIOS/UEFI dependency. We have people working on riscv/arm support (which is an effective prereq to running this trustable firmware/hardware). This is obviously a long-term ideal but there is a lot of work to get there.

Another thought, on "practical value". We can still do DDC (Diverse Double Compiling) style builds of live-bootstrap on a wide variety of different x86 hardware and hopefully see that they all match checksums at the end (meaning they are either all subverted in the same way or not subverted). Of course, in the long term I hope we can have provable trust.

Side note: I'm not totally convinced that BIOS/UEFI trust is egregiously worse than trusting higher level software supply chains, working on this has been a real eye-opener on how fragile and untrustable those software supply chains are. But it is surely not ideal.

Practicalities

Posted Aug 1, 2024 14:17 UTC (Thu) by Phantom_Hoover (subscriber, #167627) [Link]

I guess my question here is whether there’s any practical value to building this work off ‘bare metal’ (which is really a swamp full of mysterious proprietary code). The DDC method is an abstract property; could you not implement it just as well building from a simple virtual machine spec? Given that I’m sure the vast majority of runs of this code will take place in qemu that’s arguably already the case, so I wonder if you’d have a more robust system for DDC verifications if the initial base execution model was easier to independently implement than x86.

Practicalities

Posted Aug 1, 2024 14:42 UTC (Thu) by anton (subscriber, #25547) [Link] (3 responses)

I am sure that you are aware of coreboot, which, as far as I understand it is a free-software replacement for UEFI/BIOS. Of course that has to be built in some trusted way, too, and AFAIK it only runs on some hardware, but that points to two ways of getting rid of UEFI/BIOS:
  1. Use the same techniques that coreboot uses to let your bootstrapping system run on bare coreboot-capable hardware.
  2. Do the same stuff for coreboot that you did for Linux (may be made easier by coreboot being derived from some Linux kernel AFAIK), but for full trust you will probably want to do that starting with way 1.
Even if few people have coreboot-capable hardware, those who do can check that the Linux kernels agree with those built on a UEFI system (but of course that does not protect against UEFI doing something evil when the kernel image is booted).

Independent of that, a very cool project!

Practicalities

Posted Aug 1, 2024 16:26 UTC (Thu) by mjg59 (subscriber, #23239) [Link] (1 responses)

Coreboot only runs on modern x86 using proprietary blobs (FSP for Intel, AGESA for AMD), you need to go back to roughly a decade old hardware to be blobless. Thankfully that's probably still capable enough for (somewhat more slowly) bootstrapping.

Practicalities

Posted Aug 23, 2024 8:20 UTC (Fri) by TRS-80 (guest, #1804) [Link]

BIOS and Coreboot

Posted Aug 1, 2024 16:49 UTC (Thu) by farnz (subscriber, #17727) [Link]

Note that Coreboot has the SeaBIOS payload, which provides the "traditional" BIOS interface using Coreboot services. Means trusting Coreboot and SeaBIOS, but reduces the amount of closed source in your trusted base.

Practicalities

Posted Aug 1, 2024 13:06 UTC (Thu) by ecashin (subscriber, #12040) [Link]

It's technically correct that no layer of software can be expected to be 100% secure 100% of the time, and that secure hardware and firmware is hard to get right, but that's not a good reason to abandon security goals for any specific layer. There's a lot of software that depends on a secure Linux kernel, and getting that right makes the job of attackers harder.

A cynical way of looking at the situation is this: Attackers going for money will target people that have thrown their hands up and left all the vulnerabilities in place, because it's easier to attack those systems. Attackers with specific political goals will be more persistent, but the more difficult their job, the more costly their operations, and the more likely they'll be discovered or will fail.

Practicalities

Posted Aug 1, 2024 17:29 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

Realistically, the binaries are so small that a non-trivial backdoor that would recognize the compiler source code and infect it is unlikely.

Practicalities

Posted Aug 1, 2024 18:22 UTC (Thu) by Phantom_Hoover (subscriber, #167627) [Link] (2 responses)

Yes, this is why I’ve always found claims that Thompson hacks are a practical possibility to be rather overblown. Software supply chain attacks are absolutely a concern but they’re going to be much more boring than a super-virus with near oracular abilities to detect and compromise every version of every compiler in use. I think the risk of anything persisting through existing build pipelines for long is remote.

Practicalities

Posted Aug 2, 2024 22:47 UTC (Fri) by NYKevin (subscriber, #129325) [Link] (1 responses)

> Software supply chain attacks are absolutely a concern but they’re going to be much more boring than a super-virus with near oracular abilities to detect and compromise every version of every compiler in use. I think the risk of anything persisting through existing build pipelines for long is remote.

It should also be emphasized that, due to Rice's theorem, the original hack as described by Thompson is literally impossible. In the general case, you cannot recognize a C compiler (or logind etc.) by looking at its source. You can probably recognize a lightly-modified GCC or Clang by looking at its source, but there will always be some degree of modification where that no longer works (trivially, because you could delete the whole thing and write a new compiler from scratch).

Real supply chain attacks are going to look like the xz backdoor, not like the Thompson trusting-trust attack. But I would go further than that: Interpreting Thompson's paper literally is sort of missing the point. Thompson was not *just* warning about this highly convoluted method of compromising logind through the compiler, or even about compromising binaries in general through the compiler. Thompson was warning about the broader and more general principle that the source code is not the final authority for what the machine actually does.

xz is a prime example of this principle - if you looked at the OpenSSH source code, you would not see anything suspicious, because the attack wasn't in the OpenSSH source code. For that matter, neither was it present in the xzutils source code (as displayed on GitHub). It was present, in part, in one of the xzutils opaque binary test files, which was named in a way as to suggest that it was a corrupt file for xz to reject as invalid. Another part of the attack was present in a post-configure script. Of course the average developer will not even look at a configure script or makefile unless there is no alternative, but it doesn't matter, because this post-configure script wasn't in Git at all - it was only in the release tarballs. There is a great deal of further complexity after that, but you get the idea. The point is, "read the source code," as an auditing strategy, would never have found this attack, so reading the source code is (by itself) inadequate as a means of auditing.

Practicalities

Posted Aug 7, 2024 21:22 UTC (Wed) by paulj (subscriber, #341) [Link]

You don't have to specifically recognise a compiler or logind. You just have to know how to inject code into some binary that your code handles and transforms. Given that nearly all binaries are in some well-defined executable format with well-defined entry points, this is straightforward.

Thompson's paper is explicit that his point relates to anything that handles code (interprets or transforms).

Very cool

Posted Aug 1, 2024 16:47 UTC (Thu) by paulj (subscriber, #341) [Link] (5 responses)

Very cool, and amazing progress towards the day when we have people doing reproducible builds from hardware all the way up (note: This requires the home fab projects to make progress too).

This doesn't lay Thompson's worries to rest though. The opposite in fact. It _makes his point_. And even then, this is something that most people are not going to be able to do (by skill, or practicalities such as time). Also, I note the reliance on very old software - which is itself a threat, given what we know about the shelf-life of cryptographic hashes, e.g. see observations of Valerie Aurora. I wrote a bit more on this and diverse double compiling here: https://paul.jakma.org/2010/09/20/critique-of-diverse-dou...

Very cool

Posted Aug 1, 2024 19:26 UTC (Thu) by Phantom_Hoover (subscriber, #167627) [Link] (3 responses)

What would make Thompson’s point is a working demonstration of a backdoor that’s durable to even basic countermeasures, or one found in the wild. Yet more science fictions about what an impossibly perfect program could allegedly do aren’t going to cut it.

Very cool

Posted Aug 2, 2024 8:43 UTC (Fri) by chris_se (subscriber, #99706) [Link]

> What would make Thompson’s point is a working demonstration of a backdoor that’s durable to even basic countermeasures, or one found in the wild. Yet more science fictions about what an impossibly perfect program could allegedly do aren’t going to cut it.

I think Thompson's argument is correct in a philosophical sense, but not in a practical sense. I agree with you in that I don't believe that such a super-backdoor exists.

But other supply chain attacks are real (as we've seen with e.g. the XZ backdoor). And I applaud any work that tries to make it harder and harder for such an attack to occur undetected. Methods that can detect vastly more sophisticated (and possibly unrealistic) attacks will also help detect the more realistic ones.

I also think that most developers aren't thinking enough about supply chain attacks in the modern world. So I'm very excited about projects that push these types of ideas more into the current zeitgeist.

Very cool

Posted Aug 6, 2024 3:02 UTC (Tue) by NYKevin (subscriber, #129325) [Link] (1 responses)

As I explained upthread,[1] the original attack is, was, and has always been a fantasy, and so it is logical to conclude that Thompson was not speaking literally. I think it is plausible to read Thompson as anticipating the general(!) category of attack which includes the xz backdoor. I would summarize this interpretation of Thompson as "supply chain attacks don't have to be visible in source code to be effective."

[1]: https://lwn.net/Articles/984430/

Very cool

Posted Aug 6, 2024 8:54 UTC (Tue) by chris_se (subscriber, #99706) [Link]

> I would summarize this interpretation of Thompson as "supply chain attacks don't have to be visible in source code to be effective."

Regardless of whether Thompson himself meant it like that or not, I really like your summary. It's catchy enough that one could make a t-shirt out of it. :-)

Very cool

Posted Aug 7, 2024 20:45 UTC (Wed) by naesten (subscriber, #71199) [Link]

> Very cool, and amazing progress towards the day when we have people doing reproducible builds from hardware all the way up (note: This requires the home fab projects to make progress too).

Even without homebrew hardware, it should still count for something if we get bit-for-bit identical results on a sufficiently wide array of hardware/firmware. Requirements I can think of:
  • Use a mix of CPU vendors. (Hopefully, AMD and Intel aren't colluding.)
  • Use motherboards with firmware of different lineage. To avoid any chance that they're all using edk2 to implement UEFI, include plenty of legacy BIOS boards.
  • Use different brands of disk drive and video adapter.
  • Include systems of quite different age, to rule out short-term conspiracies.
Another thing that makes me a bit nervous is the fixed sequence of old GCC versions; it would be more confidence-inspiring if several different paths through GCC history were verified to produce the same result. (It could be worse: some compilers only support building using a specific earlier version, possibly checked into the source repository.)

Two compilers before MES

Posted Aug 14, 2024 12:53 UTC (Wed) by FransFaase (guest, #172898) [Link]

The article does not mention that stage0 contains two compilers for minimal subsets of C that are built before MES is compiled. The first is written in assembly and is thus CPU-specific. The second, which also includes a driver program with a pre-processor, is written in the C-like language of the first.

I personally feel that MES is the odd fellow in the whole chain of compilers and I have been looking into whether it would be possible to reduce the number of steps. It turned out that the M2 compilers are kind of buggy. So, I have been thinking about a stack based language with which it would be possible to write a C compiler that can be used to compile TCC.

See https://www.iwriteiam.nl/Software.html for what I have been working on, including some attempts to generate a webpage with all the sources that are used to compile stage0.


Copyright © 2024, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds