A kernel.org update
Konstantin gave a tour of the site's architecture, which your editor will
not attempt to reproduce here. In general terms, there is an extensive backend
system with a set of machines providing specific services and a large
storage array; it is protected by a pair of firewall systems. The front
end consists of a pair of servers, each of which runs two virtual machines;
one of them handles git and dynamic content, while the other serves static
content.
The front end systems are currently located in Palo Alto, CA and Portland, OR. One will be added in Seoul sometime around the middle of 2014, and another one in Beijing, which will only serve git trees, "soon." Work is also proceeding on the installation of a front end system in Montreal.
There is an elaborate process for accepting updates from developers and propagating them through the system. This mechanism has been sped up considerably in recent times; code pushed into kernel.org can be generally available in less than a minute. The developers in the session expressed their appreciation of this particular change.
Konstantin was asked about the nearly devastating git repository corruption problem experienced by the KDE project; what was kernel.org doing to avoid a similar issue? It comes down to using the storage array to take frequent snapshots and to keep them for a long period of time. In the end, the git repository is smaller than one might think (about 30GB), so keeping a lot of backups is a reasonable thing to do. There are also frequent git-fsck runs and other tests done to ensure that the repositories are in good shape.
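The snapshot-plus-fsck regime described above might look something like the following illustrative cron fragment. The paths, schedule, and LVM-backed volume are assumptions for the sake of the example, not kernel.org's actual configuration:

```shell
# Illustrative crontab fragment -- paths and schedule are hypothetical.
# Hourly: verify repository integrity and object connectivity.
0 * * * *  git --git-dir=/srv/git/linux.git fsck --full --strict 2>&1 | logger -t git-fsck
# Daily: snapshot the volume holding the repositories (assumes LVM;
# kernel.org's storage array may work quite differently).
30 2 * * * lvcreate --snapshot --size 5G --name git-$(date +\%Y\%m\%d) /dev/vg0/git
```

Note the escaped `\%` characters: cron treats an unescaped `%` as a newline in the command field.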
With regard to account management, everybody who wants an account must appear in the kernel's web of trust. That means having a key signed by Linus, Ted Ts'o, or Peter Anvin, or by somebody who has such a key. Anybody who has an entry in the kernel MAINTAINERS file will automatically be approved for an account; anybody else must be explicitly approved by one of a small set of developers.
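The approval rule just described is essentially a reachability test in the key-signature graph: a key qualifies if it is signed by one of the root keys, or signed by a key that is. A minimal sketch of that idea (the key names, data layout, and two-hop limit are illustrative assumptions, not kernel.org's actual tooling):

```python
from collections import deque

def in_web_of_trust(key, signed_by, roots, max_hops=2):
    """Return True if `key` is within `max_hops` signatures of a root key.

    `signed_by` maps a key to the set of keys that have signed it.
    `roots` would be the keys of Linus, Ted Ts'o, and Peter Anvin in the
    scheme described in the article; here they are just labels.
    """
    if key in roots:
        return True
    seen = {key}
    queue = deque([(key, 0)])
    while queue:
        current, hops = queue.popleft()
        if hops == max_hops:
            continue  # do not walk further than max_hops signatures out
        for signer in signed_by.get(current, ()):
            if signer in roots:
                return True
            if signer not in seen:
                seen.add(signer)
                queue.append((signer, hops + 1))
    return False

# A developer signed by a maintainer whose key Linus signed qualifies;
# someone three signatures removed does not.
sigs = {"dev": {"maintainer"}, "maintainer": {"linus"}}
print(in_web_of_trust("dev", sigs, {"linus"}))
```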
With regard to security, two-factor authentication is required for
administrative access everywhere. All systems are running SELinux in
enforcing mode, an idea which caused some in the audience to shudder.
System logs are stored to a write-only medium. There is also an extensive
alert system that calls out unusual activity; that leads to kernel.org
users getting an occasional email asking about their recent activity on the
system.
Plans for the next year include faster replication through the mirror network and an updated Bugzilla instance. Further out, there are plans for offsite backups, a git mirror in Europe, a new third-party security review, and the phasing out of the bzip2 compression format.
[Next: Security practices].
Index entries for this article:
Kernel: Development tools/Infrastructure
Kernel: Kernel.org
Conference: Kernel Summit/2013
Posted Oct 30, 2013 2:54 UTC (Wed)
by mricon (subscriber, #59252)
Posted Oct 30, 2013 6:48 UTC (Wed)
by dlang (guest, #313)
I hope they mean write-once, not write-only :-)
Posted Oct 30, 2013 12:19 UTC (Wed)
by arekm (guest, #4846)
Posted Nov 7, 2013 15:09 UTC (Thu)
by mricon (subscriber, #59252)
Posted Oct 31, 2013 9:59 UTC (Thu)
by mgedmin (subscriber, #34497)
What's the story behind the phasing out of bz2?
Posted Oct 31, 2013 13:05 UTC (Thu)
by dlang (guest, #313)
If you are asking why:
IIRC, .bz2 is slower and larger than .xz (lzma), so it has no advantage.
If you are asking about when, I can't answer.
Posted Oct 31, 2013 13:58 UTC (Thu)
by Jonno (subscriber, #49613)
While there might be some pathological cases, in the general case:

xz -2 compresses faster, and usually better, than bzip2 -9.
xz -3 compresses better, and usually faster, than bzip2 -9.
xz always decompresses faster than bzip2.

The decompression memory requirements of xz -2 (3 MiB) and -3 (5 MiB) are comparable to bzip2 -9 (4 MiB), but the compression memory requirements are slightly higher (17 MiB / 32 MiB compared to 8 MiB).
xz -6 (the default) compresses even better, but compression is usually about an order of magnitude slower than bzip2 -9, and it requires a lot more memory (94 MiB to compress, 9 MiB to decompress). So xz -6 is not really a direct replacement for bzip2, but it can make sense when distributing files that are compressed once but downloaded and decompressed many times.
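[These tradeoffs are easy to poke at with the bz2 and lzma modules in the Python standard library, where the lzma preset corresponds roughly to the xz command-line level. The sample payload below is arbitrary; exact sizes will vary a great deal with the input. -Ed.]

```python
import bz2
import lzma

# A compressible sample payload; real results depend heavily on the input.
data = b"kernel.org mirrors the mainline git tree.\n" * 4096

bz2_9 = bz2.compress(data, compresslevel=9)
xz_2 = lzma.compress(data, preset=2)   # roughly "xz -2"
xz_6 = lzma.compress(data, preset=6)   # the xz default level

for name, blob in [("bzip2 -9", bz2_9), ("xz -2", xz_2), ("xz -6", xz_6)]:
    print(f"{name}: {len(blob)} bytes")

# Whatever the size comparison says, both formats must round-trip.
assert bz2.decompress(bz2_9) == data
assert lzma.decompress(xz_6) == data
```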
Posted Oct 31, 2013 14:32 UTC (Thu)
by mgedmin (subscriber, #34497)
Posted Nov 7, 2013 15:08 UTC (Thu)
by mricon (subscriber, #59252)
Posted Nov 6, 2013 17:54 UTC (Wed)
by dag- (guest, #30207)
I am interested in such technology myself.
Posted Nov 6, 2013 19:17 UTC (Wed)
by johill (subscriber, #25196)
Posted Nov 7, 2013 15:05 UTC (Thu)
by mricon (subscriber, #59252)