
Leading items

Welcome to the LWN.net Weekly Edition for September 6, 2018

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

Life behind the tinfoil curtain

By Jake Edge
September 5, 2018

LSS NA

Security and convenience rarely go hand-in-hand, but if your job (or life) requires extraordinary care against potentially targeted attacks, the security side of that tradeoff may win out. If so, running a system like Qubes OS on your desktop or CopperheadOS on your phone might make sense, which is just what Konstantin Ryabitsev, Linux Foundation (LF) director of IT security, has done. He reported on the experience in a talk [YouTube video] entitled "Life Behind the Tinfoil Curtain" at the 2018 Linux Security Summit North America.

He described himself, with a chuckle, as a "professional Russian hacker" from before that became popular. He started running Linux on the desktop in 1998 (perhaps on Corel Linux, which he does not think particularly highly of) and has been a member of the LF staff since 2011. He has been running Qubes OS on his main workstation since August 2016; he ran CopperheadOS from September 2017 until June 2018, when he stopped due to the upheaval at the company, though he hopes to go back to it at some point—"maybe".

Ryabitsev is a system administrator, he warned, not a security researcher or a kernel developer. He is "a little bit paranoid", though not nearly as much as some people are. His goal with the talk was to share his experiences in using these Linux-based security-oriented operating systems. Since he has access to various keys and other secrets, he wants to try to ensure that compromising his systems would take a significant amount of effort.

Qubes OS

The guiding principles behind Qubes OS are threefold: compartmentalization using virtualization, hardware isolation, and network isolation. Qubes OS uses Xen as its hypervisor and runs the graphical interface in dom0. All applications run within application virtual machines (AppVMs); windows created by those applications are decorated with different colors to identify which AppVM they belong to.

[Konstantin Ryabitsev]

Hardware isolation is done by requiring that I/O devices are assigned to specific AppVMs using "convenient management tools" before they can be used. For the most part, it works well, Ryabitsev said, but does sometimes fail mysteriously. The USB controllers are typically assigned to their own "sys-usb" VM for isolation. When USB devices are plugged in, they can be assigned to other AppVMs. The network isolation provides full control over each AppVM's networking capabilities—or lack thereof for the "vault" and USB VMs. Each AppVM could have its own VPN or Tor connection and all applications started from there would use that mechanism for their connections.

Qubes OS does require lots of RAM. He typically runs around ten VMs using 1-4GB each; the official requirements are for at least 4GB, "but you will not be a happy camper" with that amount of RAM. He recommends at least 16GB of RAM and fast, large SSD disks, preferably NVMe. Each application will be read cold from the disk, since the VMs will not share any of the OS caching—fast disks are a must.

The system also needs to be multi-processor with multiple cores per processor; Qubes OS "works OK" with two processors, each with two cores, as on his laptop. The CPU must support VT-x and VT-d for Intel processors, or the AMD equivalents (AMD-V and AMD-Vi). While Qubes OS can work with other graphics hardware, it is far better to use Intel graphics. Using other graphics will make things more complicated; "believe me, your life is already going to be complicated".

Qubes OS is built on top of Fedora and uses a modified Fedora installer. Post-installation requires some knowledge of what you plan to do with the system in order to set it up correctly. For the most part, you can probably accept the defaults, but you will need to decide whether to have a sys-usb VM for USB devices, as well as what other AppVMs to create. If you do create a sys-usb, you will need to decide which USB controllers to assign to it; for laptops, it is easy, just assign them all to it. For workstations with a USB keyboard and mouse, though, assigning that controller to sys-usb will render the keyboard and mouse (and thus the system as a whole) unusable.

AppVMs should be treated as logically isolated workspaces, he said; there is no need to have separate VMs for each application. The best way to think about them is as if they are fully separate hardware workstations with convenient ways to do copy-paste and to copy files between them. You should "learn to love and use disposable VMs"; a questionable file or site can be accessed with little risk and no persistence. There are template VMs of various community distributions (Debian, Fedora, Whonix) available; they can be run and then destroyed with only data stored in the /rw filesystem persisting.

In daily use, most things just work as expected. Copying files is more complicated, as you must use the GUI or the command-line tool, but that is easy to get used to. Copy-pasting will, instead, "drive you crazy", Ryabitsev said. It is not broken and works as designed, but you must use Ctrl-Shift-C to copy text to the global clipboard (after selecting and copying it to the local clipboard using, say, Ctrl-C) and Ctrl-Shift-V to be able to paste it elsewhere (then doing a paste operation in the local application). This is meant to ensure that an untrusted AppVM cannot overwrite the global clipboard, but it is hard to adjust to.

Installing software via DNF or APT only affects an AppVM while it is active; those changes disappear once the AppVM is shut down. In order to make the changes persistent, they must be installed in a template. Global configuration files need to be changed in the template as well; he suggested making symbolic links in the templates to items in /rw/config.

There are things that "you will love" about Qubes OS, including the feeling that you have done your best to protect your data. The Qubes OS developers clearly have security foremost in their minds. Opening mail attachments in disposable VMs is another lovable feature. You can also sanitize a PDF file by opening it in a disposable VM that renders it as an image. If a template upgrade goes poorly (or you don't like the result), you can fall back to the previous template. Having different network endpoint egress per-AppVM is great, as are vault VMs for storing sensitive data since they have no network egress at all.

But there are also things that "will drive you mad" with Qubes OS, including copy-paste. There's no way to do screen sharing without setting up a standalone windowed VM. Suspend/resume bugs are quite frustrating; he can only resume correctly 10% of the time, so he doesn't bother. There is a noticeable lag when launching an application in an AppVM that is not running (45-60 seconds); it is worse if it has to launch a network VM first.

There is some "rare but random weirdness" when running Qubes OS. There is lots of magic in a modern Linux workstation; running Xen just amplifies that. Sometimes AppVMs won't start, the microphone stops working, the resolver in a VM doesn't work, and so on. It is also complicated to do backups for a Qubes OS machine; it must be done manually, which is annoying.

He then turned to look toward the future. Qubes OS is sponsored by Invisible Things Lab (ITL) and is under active development; it is partially funded by user donations. It also uses Xen, which is "never gonna be in mainline"; Amazon is moving away from it as well. But Qubes OS has been written so that it can use other virtualization mechanisms if needed; Xen has the best hardware virtualization, according to the ITL web site. Qubes OS does have an active and diverse user base, so the project may be able to continue on, even if ITL were to stop working on it.

Qubes OS is meant for system administrators with access to privileged data, journalists (as long as they have a knowledgeable support staff), and those who expect targeted attacks from well-funded adversaries. It is not meant for anyone who is not familiar with Linux, especially if they lack a technical support organization. It also is not usable by anyone who cannot afford expensive and powerful modern hardware. Beyond that, anyone who is in danger of a physical-duress threat should not use it; even having Qubes OS on a system is a red flag. Using Tails would be better for deniability.

There are some alternatives that can be combined to reach "some degree of feature parity" with Qubes OS. Firejail can be used for browser sandboxing and Flatpak for isolating other applications. Whonix and Tails can be used for privacy and anonymity. But Qubes OS offers all of that plus convenience, though the cost is that there is a lot to learn in order to use it.

CopperheadOS

Ryabitsev then shifted gears to more secure mobile operating systems. He has had an Android phone for some time now; CopperheadOS is a hardened Android distribution, which made it attractive. Because his office is next door to a psychiatrist's office in his village, Android kept asking him about that office; now Google thinks he is a patient. That was one of the things that drove him toward CopperheadOS.

So he switched, until things went awry in May. "CopperheadOS in its previous incarnation is dead". One of the main developers has left the company and destroyed the signing keys, leaving anyone with an installed CopperheadOS device with no way to get updates. Once that happened, he went back to Android, at least for now.

The guiding principles for CopperheadOS are to have a Google-free Android experience with fast security-patching turnaround; that is similar to other pure Android Open Source Project (AOSP) builds. CopperheadOS adds some unique features to that, including a hardened kernel with the kernel self-protection project (KSPP) patches and a hardened build toolchain. There are also stricter policies for SELinux, stricter defaults in general, MAC address randomization, and more. He was sad to see it go away.

CopperheadOS is only available for a small set of devices, including Google Pixel and Pixel 2 phones and a HiKey development board. It can be downloaded and installed locally, though there won't be any over-the-air (OTA) updates if so. CopperheadOS offers pre-loaded Pixel phones at 80% markup or you can send in your own Pixel device, along with quite a bit of money, to get CopperheadOS installed. Those options do come with OTA updates, however.

In daily use, CopperheadOS is pretty much like any other pure AOSP system. Apps are available from F-Droid or can be side-loaded from various sources, though many apps do not work right (or at all). CopperheadOS made a design decision not to support microG as a replacement for some of the Google proprietary libraries and services, so it does not work. Overall, it is excellent for secure communication and browsing; it also has a remote attestation feature to ensure a handset is running what is expected.

When running CopperheadOS, the battery life is one thing that "you will love"; Google's Play services are extremely battery hungry, he said, so not having them is a boon to battery life. You will also not be tracked or, at least, not tracked as much. The telephone companies still do plenty of tracking and certain apps (e.g. browsers) may also be tracking you. Fast security patches are another good feature, as is the knowledge that you are using free software—though some features may only be "source available".

Get used to seeing a popup about Google Play services being missing; it is one of the things you will hate about the system. There is also a huge loss of convenient perks that you have likely become accustomed to; mobile phones are expected to do more than just email and phone calls these days. Side-loaded apps probably won't be able to deliver notifications and the app author will not be interested in hearing about your weird setup.

You will be able to communicate securely, but only with those using the same messaging apps, which is not likely to be many of your friends and colleagues. Owning such a device "comes with all the social perks of being a gluten-intolerant vegan with a peanut allergy", he said. Worse yet, from his perspective, is that you are unable to play Pokémon Go on CopperheadOS (or other pure AOSP devices).

CopperheadOS is not exactly dead at this point, he said. It is building and releasing images and still has a store, but that may not last.

It is targeted similarly to Qubes OS, for those who are worried about data collection by governments and large companies. It is also for those expecting precision attacks by deep-pocketed adversaries; that would include journalists, activists, and even government employees. CopperheadOS is not for anyone who needs to use their phone for more than just secure communication, however, he said.

He has gone back to stock Google Pie since the split within the CopperheadOS company; he reimaged his device that day. He has applied some privacy tweaks and recommended the Hermit app as being helpful in that regard. He does not plan to switch to LineageOS (another pure AOSP system), in part because not getting notifications was impacting his work.

Based on Ryabitsev's presentation, it would seem that there is lots more work to be done on the mobile side, but that there is a viable (if inconvenient at times) alternative for the desktop. Others who are not so reliant on notifications may find that LineageOS does what they need, but they are still likely to find a fairly large loss of functionality. Interested readers may wish to have a look at the slides [PDF] from the talk as well.

[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to attend the Linux Security Summit in Vancouver.]

Comments (18 posted)

Strengthening user-space Spectre v2 protection

By Jonathan Corbet
September 5, 2018
The Spectre variant 2 vulnerability allows the speculative execution of incorrect (in an attacker-controllable way) indirect branch predictions, resulting in the ability to exfiltrate information via side channels. The kernel has been reasonably well protected against this variant since shortly after its disclosure in January. It is, however, possible for user-space processes to use Spectre v2 to attack each other; thus far, the mainline kernel has offered relatively little protection against such attacks. A recent proposal from Jiri Kosina may change that situation, but there are still some disagreements around the details.

On relatively recent processors (or those with suitably patched microcode), the "indirect branch prediction barrier" (IBPB) operation can be used to flush the branch-prediction buffer, removing any poisoning that an attacker might have put there. Doing an IBPB whenever the kernel switches execution from one process to another would defeat most Spectre v2 attacks, but IBPB is seen as being expensive, so this does not happen. Instead, the kernel looks to see whether the incoming process has marked itself as being non-dumpable, which is typically only done by specialized processes that want to prevent secrets from showing up in core dumps. In such cases, the process is deemed to be worth protecting and the IBPB is performed.

Kosina notes that only a "negligible minority" of the code running on Linux systems marks itself as non-dumpable, so user space on Linux systems is essentially unprotected against Spectre v2. The solution he proposes is to use IBPB more often. In particular, the new code checks whether the outgoing process would be able to call ptrace() on the incoming process. If so, the new process can keep no secrets from the old one in any case, so there is no point in executing an IBPB operation. In cases where ptrace() would not succeed, though, the IBPB will happen.
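The difference between the old and proposed policies can be sketched as a pair of pure functions. This is a toy userspace model, not kernel code: the real patch calls ptrace_may_access(), while here "may ptrace" is modeled, purely as an assumption for illustration, as same-owner-or-root.

```c
#include <stdbool.h>

/* Toy model of the IBPB decision made at context-switch time.
 * The struct and the may_ptrace() rule are simplifications made
 * for illustration; the kernel's real check is ptrace_may_access(). */
struct task {
    unsigned int uid;
    bool dumpable;
};

static bool may_ptrace(const struct task *tracer, const struct task *target)
{
    /* Stand-in rule: root, or same owner, may trace. */
    return tracer->uid == 0 || tracer->uid == target->uid;
}

/* Old mainline policy: flush only when the incoming task has marked
 * itself non-dumpable. */
static bool ibpb_needed_old(const struct task *prev, const struct task *next)
{
    (void)prev;
    return !next->dumpable;
}

/* Proposed policy: flush whenever the outgoing task could not have
 * called ptrace() on the incoming one anyway. */
static bool ibpb_needed_new(const struct task *prev, const struct task *next)
{
    return !may_ptrace(prev, next);
}
```

Under this model, a switch between two unrelated unprivileged tasks triggers a flush with the new policy but not with the old one, which is exactly the gap Kosina's patch is meant to close.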

This code, Kosina said, has been shipping in SUSE kernels since the initial disclosure of the Spectre vulnerabilities.

Comments on this change focused on two specific areas. One was the fact that Casey Schaufler has been working on a "sidechannel" Linux security module (LSM) specifically intended to make decisions on when operations like IBPB should be performed. This module leaves it up to the security policy to decide when the possibility of an attack exists; different security mechanisms might make different decisions here. Kosina's patch, instead, wires the policy directly into the kernel. Schaufler argued in this discussion that using ptrace() was the wrong approach: "Even if ptrace_may_access() does exactly what you want it to for side-channel mitigation today it would be incredibly inappropriate to tie the two together for eternity."

Kosina initially agreed to drop his patches in favor of the sidechannel module, but later backtracked for a couple of reasons, including the fact that this module is not ready for production use yet. Beyond that, giving the administrator the flexibility to choose among policies has its advantages, but those advantages come at a cost, he said:

I am a bit afraid that we are offloading to sysadmins decisions that are very hard for them to make, as they require deep understanding of both the technical details of the security issue in the CPU, and the mitigation.

Instead, he said, the only choice provided should be whether protection against Spectre v2 is needed or not.

There are also questions about whether security modules are the right place to make decisions on the use of mitigations for hardware vulnerabilities. Andrea Arcangeli noted that the other defenses against Meltdown and Spectre are not controlled by security modules:

Even if you build with CONFIG_SECURITY=n PTI won't go away, retpoline won't go away, the l1flush in vmenter won't go away, the pte/transhugepmd inversion won't go away, why only the runtime tunability or tweaking of IBPB fits in a LSM module?

Kosina plans to continue to push the patch set using the ptrace() check for now; "we can then later see whether the LSM implementation, once it exists, should be used instead".
For now, Kosina plans to continue to push the patch set using the ptrace() check; "we can then later see whether the LSM implementation, once it exists, should be used instead".

The other question on developers' minds was the performance impact of this change; Kosina did not include any numbers with the patch set. Andi Kleen complained about that omission, saying "It's ridiculous to even discuss this without them". Arcangeli asserted that "IBPB has never been measurable if done only when the prev task cannot ptrace the next task", but he, too, did not offer numbers to back that claim up. It seems certain that some real measurements will be required before this code can go upstream. On the other hand, some developers clearly see the security aspect as being the most important; as Thomas Gleixner put it: "Either we care about that problem and provide a proper mechanism to protect systems or we do not. That's not a performance number problem at all".

One other aspect of the Spectre problem is, inevitably, hyperthreading. Sibling CPUs share resources, including the branch-prediction buffer, so it may be possible for code running on one sibling to attack the process running on the other. The "single thread indirect branch predictors" (STIBP) feature provided on some Intel CPUs disables this sharing, and can thus block those attacks. The kernel does not currently use STIBP, but Kosina's patch set adds it, leaving it enabled whenever hyperthreading is in use. Again, no numbers showing the performance impact of the change were provided.

Once the dust settles, the kernel is likely to end up with increased protection against Spectre v2 for user-space processes. It might seem unfortunate that this fix is not likely to find its way into a released kernel until a full year after the disclosure of this vulnerability; Kosina clearly thinks so. On the other hand, it is not clear that there are a lot of attackers out there trying to use this vulnerability against user-space processes. Dealing with hardware always involves tradeoffs and workarounds for difficult behavior; the best ways to handle such behavior can take some time to work out. The kernel developers will eventually settle on the best ways to deal with Spectre v2. That would be more comforting, of course, if we had any confidence that the flow of speculative-execution vulnerabilities will slow down sometime soon.

Comments (3 posted)

Protecting files with fs-verity

By Jonathan Corbet
August 30, 2018
The developers of the Android system have, among their many goals, the wish to better protect Android devices against persistent compromise. It is bad if a device is taken over by an attacker; it's worse if it remains compromised even after a reboot. Numerous mechanisms for ensuring the integrity of installed system files have been proposed and implemented over the years. But it seems there is always room for one more; to fill that space, the fs-verity mechanism is being proposed as a way to protect individual files from malicious modification.

The core idea behind fs-verity is the generation of a Merkle tree containing hashes of the blocks of a file to be protected. Whenever a page of that file is read from storage, the kernel ensures that the hash of the page in question matches the hash in the tree. Checking hashes this way has a number of advantages. Opening a file is fast, since the entire contents of the file need not be hashed at open time. If only a small portion of the file is read, the kernel never has to bother reading and checking the rest. It is also possible to catch modifications made to the file after it has been opened, which will not be caught if the hash is checked at open time.
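The per-page verification step can be illustrated with a small userspace sketch. This is only a conceptual model: it uses a two-level tree and FNV-1a as a stand-in hash, whereas fs-verity uses a real cryptographic hash (such as SHA-256) and a multi-level tree; the names and sizes below are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy two-level Merkle scheme: one hash per data block, plus a root
 * hash over the leaf hashes.  FNV-1a stands in for a real hash. */
#define BLOCKS  4
#define BLKSIZE 16

static uint64_t fnv1a(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint64_t h = 14695981039346656037ULL;
    while (len--) {
        h ^= *p++;
        h *= 1099511628211ULL;
    }
    return h;
}

static uint64_t leaf_hash(const unsigned char *blk)
{
    return fnv1a(blk, BLKSIZE);
}

static uint64_t root_hash(const uint64_t leaves[BLOCKS])
{
    return fnv1a(leaves, sizeof(uint64_t) * BLOCKS);
}

/* Verify one block as it is read from storage: recompute its leaf
 * hash, splice it into the (untrusted) stored leaf set, and check
 * that the recomputed root still matches the trusted root. */
static int verify_block(const unsigned char *blk, int idx,
                        const uint64_t leaves[BLOCKS],
                        uint64_t trusted_root)
{
    uint64_t tmp[BLOCKS];

    memcpy(tmp, leaves, sizeof(tmp));
    tmp[idx] = leaf_hash(blk);
    return root_hash(tmp) == trusted_root;
}
```

Only the root needs to be trusted (fs-verity stores a signed copy of it); a tampered block changes its leaf hash, the recomputed root no longer matches, and the read fails without ever hashing the rest of the file.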

One could imagine a number of implementations where the kernel would create and maintain the Merkle tree, and protect it from being modified in its own right. The actual implementation in fs-verity is not such a scheme, though; much of the work is pushed out to user space. Developers wanting to work with fs-verity will need to understand how to do that work; for now we are told that "A documentation file in Documentation/filesystems/ is planned but not yet included", so one must reverse-engineer the patch set instead.

The process starts with the creation of a file to be protected in the normal way. User space must then make a pass over the file to generate the Merkle tree so that it can append a series of structures to the file:

  • The contents of the Merkle tree itself.
  • An fsverity_descriptor structure (defined at the end of the first patch in the series) containing information about the fs-verity version in use, the hash algorithm employed, etc. It also includes a count of "extension" structures to follow containing additional information.
  • The series of extension structures. One of these, for example, contains a signed copy of the root hash of the Merkle tree; this can be used to verify that the tree, itself, has not been tampered with.
  • An fsverity_footer structure at the end of the file; it only contains a magic number and the offset of the fsverity_descriptor structure.

Once user space has populated the file with this information, it must reopen the file read-only, then perform an ioctl() call with the FS_IOC_ENABLE_VERITY command (which takes no arguments). A number of interesting things happen at that point, starting with the fact that the file is rendered immutable; it can be deleted, but the contents of the file can no longer be changed. The underlying filesystem tweaks the visible length of the file so that it appears to end before the Merkle tree, thus hiding the fs-verity metadata. And, of course, it will also cause the fs-verity checking to be performed for any subsequent accesses; any attempt to read a page with the wrong hash will fail with an EIO error.

This is the first public posting of the fs-verity patch set, but the idea has been circulating for a while; it was first described in January, and was discussed at the 2018 Linux Storage, Filesystem, and Memory-Management Summit. While the idea of providing integrity guarantees at the file level has some appeal, there are a number of persistent concerns about this particular approach to the problem.

One that was extensively discussed in January is the overlap with the kernel's integrity measurement architecture (IMA) mechanism, which already exists to ensure the integrity of individual files. There are a number of reasons for avoiding IMA in this setting, it seems. IMA itself is seen as a large and complex mechanism that also requires cooperation from the security-module subsystem; the Android developers are looking to avoid this complexity. IMA hashes an entire file at open time, making opens expensive (especially when only a small part of the file is read) and making it unable to catch changes made after the file has been opened. And IMA is intended to protect a system starting at boot time, but Android has its own boot-time mechanisms.

The fact that security modules are not involved in fs-verity decisions at all was an area of disagreement in its own right. IMA developer Mimi Zohar argued that, while fs-verity could check for file integrity, the policy decision of what to do about a verification failure should be implemented in a security module. Ted Ts'o pointed out that not all kernel security mechanisms work with security modules, and maintained that fs-verity should not either.

While it has not been implemented, the fs-verity developers have mentioned the possibility of interoperating with IMA in the future. In particular, the fs-verity hashes could be used by IMA to verify files without having to read their entire contents.

The "hide the Merkle tree after the end of the file" technique has raised some concerns of its own. It exposes the details of the mechanism to user space, requiring that it will be supported forever and making it hard to change. Copying a file (to a backup, for example) will lose the verity information. Having the file be longer than its stated length may confuse code in both the kernel and user space, and may be hard for some filesystems to implement; the current patch set has support for ext4 and f2fs. This mechanism will also work poorly for network filesystems like NFS, which would like to have better integrity support but which cannot access the hidden Merkle tree.

There does not appear to be a viable alternative to this mechanism at the moment, though. One often-suggested idea is to put the Merkle tree in an extended attribute, but the amount of space available for extended attributes is not enough to hold the tree for larger files. It would be possible to simply not store the lower levels of the tree and to regenerate them when the file is opened — Merkle trees support working in that mode — but that would add complexity and make opens more expensive.

The fs-verity developers also defend the current approach by noting the advantages of storing the tree as ordinary file data. The tree, which can indeed become large, can be managed in the page cache in the usual way. If the underlying filesystem is encrypted, the fs-verity metadata will also be encrypted with no additional effort. Thus, fs-verity developer Eric Biggers argued, the best way to avoid the after-end-of-file hack would be to finally support named file streams in the kernel. That idea spawned an extensive thread of its own that makes interesting reading for people interested in the details; the executive summary is that the developers who have looked at the problem are mostly united in thinking that it is not a good idea.

The end result of all this is that there do not seem to be a lot of changes in store for this patch set. Nobody has — yet — put a foot down and expressed outright opposition to it being merged in its current form. Unless that changes, something closely resembling the current patch set seems likely to find its way into the mainline fairly soon.

(See also: slides [PDF] from the recent Linux Security Summit session on fs-verity.)

Comments (7 posted)

IDA: simplifying the complex task of allocating integers

By Jonathan Corbet
September 4, 2018
It is common for kernel code to generate unique integers for identifiers. When one plugs in a flash drive, it will show up as /dev/sdN; that N (a letter derived from a number) must be generated in the kernel, and it should not already be in use for another drive or unpleasant things will happen. One might think that generating such numbers would not be a difficult task, but that turns out not to be the case, especially in situations where many numbers must be tracked. The IDA (for "ID allocator", perhaps) API exists to handle this specialized task. In past kernels, it has managed to make the process of getting an unused number surprisingly complex; the 4.19 kernel has a new IDA API that simplifies things considerably.

Why would the management of unique integer IDs be complex? It comes down to the usual problems of scalability and concurrency. The IDA code must be able to track potentially large numbers of identifiers in an efficient way; in particular, it must be able to find a free identifier within a given range quickly. In practice, that means using a radix tree (or, soon, an XArray) to track allocations. Managing such a data structure requires allocating memory, which may be difficult to do in the context where the ID is required. Concurrency must also be managed, in that two threads allocating or freeing IDs in the same space should not step on each other's toes.

A particular allocation space (usually just called an "IDA") is created with either of the usual kernel idioms. It can be defined and initialized with this macro:

    #include <linux/idr.h>

    DEFINE_IDA(name);

Alternatively, the two steps can be separated with:

    struct ida my_ida;
    /* ... */
    ida_init(&my_ida);

If an IDA is no longer needed, it can be freed by passing it to ida_destroy(); it is not necessary to free any allocated IDs first.

The old way

A longtime unwritten design pattern in the kernel community states that low-level utilities (such as IDA) should not perform their own locking; instead, protection against concurrent calls should be handled by the caller. The reasoning is that, often as not, the higher-level code has its own locking already; in such cases, doing more locking at a lower level would be wasteful. So the old IDA API expected (with an exception, see below) that locking would be handled by its callers.

That raises another issue, though. IDA is supposed to be fast, but the effort past kernel developers have put into squeezing out that last microsecond of execution time would go to waste if IDA calls were surrounded by heavyweight locking. IDA callers thus tend to use spinlocks; that choice may be forced by the context in which ID allocation is performed anyway. The use of spinlocks means that IDA allocation tends to be done in atomic context, where memory allocation can be less reliable. To deal with this problem, pre-4.19 IDA allocation is split into two phases. First, a call is made to:

    int ida_pre_get(struct ida *ida, gfp_t gfp);

This function tries to ensure that memory will be available to satisfy a subsequent allocation request, which is done using:

    int ida_get_new_above(struct ida *ida, int start, int *id);

The ida_pre_get() call can be done outside of the locking that serializes calls to ida_get_new_above(), so it may be more likely to succeed in allocating memory. Of course, there is no guarantee that the memory pre-allocated by one caller will not be snagged by a concurrent caller, so ida_get_new_above() can still return -EAGAIN. Thus, ID allocation is typically performed with a dance like:

    retry:
        ida_pre_get(&some_ida, GFP_KERNEL);
        spin_lock(&some_lock);
        r = ida_get_new_above(&some_ida, 0, &id);
        spin_unlock(&some_lock);
        if (r == -EAGAIN)
            goto retry;

An identifier allocated this way could be freed with ida_remove(), which must also be called with the appropriate lock held.

This kind of pattern makes a simple task start to look a bit less simple, especially if the spinlock must be introduced for the sole purpose of calling IDA functions. It is also the sort of code that can be easy to get wrong in subtle ways. In the hopes of making life easier, a set of higher-level functions was added by Rusty Russell in 2011 for the 3.1 kernel release:

    int ida_simple_get(struct ida *ida, unsigned int start, unsigned int end,
                       gfp_t gfp_mask);
    void ida_simple_remove(struct ida *ida, unsigned int id);

These functions use their own lock and hide the ida_pre_get() dance from callers. As of 4.18, most IDA users use these functions, but there are still numerous sites using the more complex API.

The 4.19 way

One of the last pulls into the mainline before the release of 4.19-rc1 was Matthew Wilcox's IDA series, which makes a number of changes to the API. In particular, ida_pre_get() no longer exists as far as IDA users are concerned; all IDA functions now handle their own internal locking and memory allocation. Allocation and freeing of identifiers is done with:

    int ida_alloc(struct ida *ida, gfp_t gfp);
    void ida_free(struct ida *ida, unsigned int id);

There are a number of variants for callers needing more control over the allocated ID:

    int ida_alloc_min(struct ida *ida, unsigned int min, gfp_t gfp);
    int ida_alloc_max(struct ida *ida, unsigned int max, gfp_t gfp);
    int ida_alloc_range(struct ida *ida, unsigned int min, unsigned int max,
                        gfp_t gfp);

ida_simple_get() and ida_simple_remove() still exist as wrappers around the above functions, avoiding the need to change large numbers of existing callers, but those functions should probably be seen as deprecated at this point. The lower-level functions, by contrast, have been removed entirely, and all callers within the kernel have been changed to the new API.

The IDA interface has clearly been simplified quite a bit. The underlying implementation remains surprisingly complex for such a seemingly simple task, though; interested readers can find it in lib/idr.c. Nothing, it seems, is simple in kernel space.

Comments (8 posted)

An introduction to the Julia language, part 2

September 4, 2018

This article was contributed by Lee Phillips

Part 1 of this series introduced the Julia project's goals and development process, along with the language syntax, including the basics of control flow, data types, and, in more detail, how to work with arrays. This part describes user-defined functions and the central concept of multiple dispatch; it also surveys Julia's module and package system, covers some syntax features, shows how to make plots, and briefly dips into macros and distributed computing.

Multiple dispatch

Many high-level languages come with a built-in opinion about how you should organize your code. You are free to ignore these opinions, but that involves going against the grain and failing to take full advantage of the language's features. For example, Python is a class-based object-oriented language, where code tends to be organized around classes that inherit from other classes; the Lisp family encourages the programmer to use macros and small, composable functions to create a "domain specific language" suitable to the problem at hand; APL programmers express their problems as operations on entire arrays and are loath to write loops. Julia is organized around a principle different from these. To learn what that is, we first need to learn how functions are created.

Define functions using the function keyword:

    julia> function pdiff(a, b)
               if b > a
                   return b - a
               else
                   return 0
               end
           end
    pdiff (generic function with 1 method)

Notice the slightly odd message returned by the read-eval-print loop (REPL) after it digests the function definition. If you keep the REPL open and type in a new function definition, such as:

    function pdiff(a::String, b::String)
        if length(b) > length(a)
            return b[length(a)+1:end]
        else
            return ""
        end
    end

Then you will get a slightly different message back:

    pdiff (generic function with 2 methods)

In our second definition, we've specified the types of the arguments: both of them must be a String. In the original definition, the types are left unspecified. We now have two "methods" of the generic function pdiff(). When we invoke pdiff(), the compiler will use the method most specific to the types of the arguments passed; if both are strings, that will be the second version:

    julia> pdiff(3, 9)
    6

    julia> pdiff("xx", "abcdef")
    "cdef"

The compiler will always examine the types of all the arguments passed to a function and choose the method definition that is most specific to those types. This behavior is so important to Julia's design that its creators consider it the central organizing principle of the language. It's called "multiple dispatch", referring to the fact that the types of all the arguments determine the method called, not merely the first. Other object-oriented languages dispatch based on other attributes, such as the class of an object being referenced.

Multiple dispatch brings function definition in Julia closer to mathematical thinking, where the definition of a function or operator depends on the types of all of its arguments, rather than just some of them. It also gives the programmer a way to organize the ideas in their code; it is the way Julia itself (which is mainly written in Julia) is organized internally. Multiple dispatch allows using the same name for related operations, each with its own implementation under the hood.

For example, you might write a distance() function, that calculates the absolute value of the difference between two numbers. You could use the same function name to operate on two one-dimensional arrays, interpreted as vectors, that returns the Pythagorean distance between them. Later, you could add more methods to calculate the distance between two manifolds, two nodes on a network, or two samples of text, using whatever definitions are useful for your particular problem. These more complex cases would be enabled by Julia's type system, where user-defined types can be elaborate collections of other types and are treated the same as fundamental types by the compiler. By thoughtful combining of your own data types with the multiple dispatch mechanism, you can program using high-level concepts natural to your problem domain, without sacrificing efficiency.
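The first steps of that progression might look like this (the distance() methods here are invented for illustration):

```julia
julia> distance(a::Number, b::Number) = abs(a - b)
distance (generic function with 1 method)

julia> distance(a::Vector, b::Vector) = sqrt(sum((a .- b).^2))
distance (generic function with 2 methods)

julia> distance(1, 4.5)
3.5

julia> distance([0, 0], [3, 4])
5.0
```

Each call is routed to the method matching the argument types; adding a method for a new pair of types requires no changes to existing code.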

As mentioned above, you don't need to know much about Julia's type system to use the language productively. However, you should understand multiple dispatch and how to specify types in function signatures to create different methods. The methods() function, typed into the REPL, will list all the methods defined under a particular name, along with their type signatures. This works for operators as well, since they are defined as functions: try typing methods(*) to see the list of 376 versions of the multiplication operator — this is how "*" can be used to multiply integers and floats, concatenate strings, perform matrix multiplication, and serve any other purpose conceptually related to multiplication.

Modules and packages

In my opinion, Julia's package and module system is a major selling point for the language, so it's worthwhile to go into it in a little detail.

If you've maintained software projects of any complexity that depend on external libraries, you've probably run into some version of "dependency hell". Upgrading the language or any external library on the system, or installing the project on a different machine, may cause the program to stop working and lead to hours of unproductive work tracking down incompatibilities and bugs. Python addresses this problem through virtual environments and package managers, which, until recently, were third-party projects. Another approach is Docker, which isolates each application in a container bundling its own dependencies and network interfaces; that is overkill for most users.

Julia solves the dependency problem through several mechanisms built into the language. A project, which is a directory tree with code and other assets, can include a manifest that details all of the external resources used by the code, including the version numbers. All these requirements can be automatically put in place whenever needed.

Code can load external packages, which are projects designed to export code resources. Functions for export and import are stored in modules, which are another type of named code block within the package. Unlike in many other languages, there is no relationship between module names and filenames: a module can be split among many files and a file can contain many modules. A package can contain any number of modules, but must have at least one if it wants to make functions available for importing.

If a package is not already installed on your system, you can get it by calling Pkg.add("packagename"). This will download the necessary files from the official registry and install them. To use the functions from a module included with the package, type using modulename. This will import the functions marked for export in the module and precompile them. Their bare names will then be available in the local namespace. If, instead, you issue the statement import modulename, you will then use its exported functions under names like modulename.function(). Pkg itself is part of the standard library; before you use it you need to say using Pkg.

The REPL has a special mode for manipulating packages, which you enter by typing "]". In Pkg mode, you can simply type, for example, add packagename.

The Julia community has created over 1900 packages. You can explore the official registry, divided into categories, at the Julia Observer site. These projects run the gamut from mature and polished to unfinished experiments, and cover a wide variety of fields. Unfortunately, many of these packages do not work at the moment and will not work until they are upgraded to conform to the current language version. As the Julia team mentioned in email, there is a wide array of packages that have been developed for Julia, many of which are far outside the mainstream numerical/scientific application domain. These include a web framework, code for scripting Minecraft, Sudoku-as-a-service, a music manipulation library, and more.

Other features for the numericist

Julia is replete with features that make the life of the numerical scientist more convenient. All of the usual math functions are included; special functions (Bessel, Hankel, Airy, etc.) are provided by an external library; statistical functions are part of the standard library; and so on.

Julia can work with complex numbers, using im for the imaginary unit. Note that sqrt() is not defined for negative numbers unless they are complex, as shown here:

    sqrt(Complex(-4)) == 0.0 + 2.0im

Julia allows writing nested loops without explicitly nested blocks, enabling more concise, less deeply indented code that more closely resembles the way summations are written on paper:

    for i = 1:2, j = 3:4
        println((i, j))
    end

This block produces the output:

    (1, 3)
    (1, 4)
    (2, 3)
    (2, 4)

Another piece of syntactic convenience is the pipe operator |>, which allows chaining functions from left to right without a lot of parentheses:

    f(x) |> g |> h == h(g(f(x)))

Technical computing often involves some type of plotting of the results. The Julia community has embraced a plotting "metapackage" called "Plots". The idea behind it is to present a unified, powerful plotting language to the user that is independent of the actual backend. There are multiple backends for the package, including ones for Matplotlib, the Plotly JavaScript plotting library, the LaTeX drawing system PGF/TikZ, and more. There is also a particularly nice backend that draws plots right on the console, called UnicodePlots. Before using Plots for the first time, you may need to add the package with Pkg.add("Plots"); most of the backends are in separate packages as well.

One of the aims of Plots is to be intuitive and "smart", with the ability to figure out the plot that you want. Whether or not it has achieved this lofty goal, it is easy to use, and the ability to switch plotting backends without changing your code is a nice feature. In an interactive session, the backend is selected by calling a function whose name is the backend's name in all lowercase. Before plotting, we must also import the Plots module with a using command:

    julia> using Plots

    julia> plotly()
    Plots.PlotlyBackend()

    julia> plot(sin, 0, 2π)

After entering this in the default REPL prompt, your web browser will open a new tab containing the plot, which has some interactive controls for scaling and panning that appear upon hovering:

[Julia with Plotly]

Here is how to draw the same plot with a different backend:

[Julia Unicode plot]

There are some rough edges at the moment: my attempt at using the Matplotlib backend "PyPlot" led to a segmentation fault with the REPL and failed to produce a plot from the Jupyter console, leaving the console in an unusable state. Also, the first plot that you make in a session using a particular backend takes a good long while to appear, but subsequent plots are fairly quick.

If you want to use a plotting system other than the ones that interface with Plots, there are many options. For example, there is an interface to gnuplot and a pure Julia graphics package called Gadfly.

Macros

Julia has extensive support for metaprogramming, including full, Lisp-like macros. In fact, Julia itself is quite Lisp-like under the hood. Expressions like 1 + 2 + 3 are represented internally in a way closer to s-expressions and the user can use this internal representation. Try typing +(1, 2, 3) at the REPL prompt.

I won't go into macros in detail in this article, but you should know that they give you the power to use the language to rewrite its own syntax and to create new language features. If you want Julia to have, say, a kind of control flow not yet provided in the language, perhaps an until keyword, you can create it yourself with a macro. This is a level of power most commonly seen in the Lisp family of languages, though there are other languages with robust macro support.

In Julia, you define a macro with a code block using the keyword macro and invoke it with the syntax @macroname(). Here is a very simple and practically useless example, to give you a general idea of how the machinery works:

    macro dblefun(f, x)
        return :( $f($f($x)) )
    end

This block defines a macro called dblefun, that takes a function f and composes it once with itself, applying the resulting doubled function to the second argument x. The :( ...) syntax defines an "expression object", which is used to "quote" what's inside the parentheses without evaluating it. Inside an expression object, the "$" is used to interpolate a value, in this case from the arguments supplied to the macro. We can verify that this macro works as intended from the REPL:

    julia> @dblefun(sin, π/2)
    0.8414709848078965

    julia> sin(sin(π/2))
    0.8414709848078965

The power of macros comes from the ability to use the entire language to manipulate expressions inside the macro definition, which allows Julia to rewrite its own syntax.

Distributed computing

As befits a language designed for high-performance numeric programming, Julia has provisions for parallel and distributed computing. You don't need to reach for third-party libraries for this, as the capability is built into the language or uses the standard library. I'll try to provide a bird's-eye overview of the landscape. The official documentation covers this area in considerable detail.

Julia provides keywords and functions that allow the programmer to define coroutines, which are functions that communicate with each other through "channels". You can tell your functions to send data through a channel, or to wait for data to appear on a channel. Functions are automatically suspended and resumed as data on the channel becomes available. The coroutine mechanism is ideal for situations where the execution time of a function is unpredictable; for example, using coroutines your program can do something else while waiting for data from the internet.
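As a toy illustration (the names here are invented), a producer function can be bound to a channel; the producer is suspended at each put!() until a consumer takes the value:

```julia
julia> ch = Channel() do c
           for i in 1:3
               put!(c, i^2)
           end
       end;

julia> take!(ch), take!(ch), take!(ch)
(1, 4, 9)
```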

Julia also makes it simple to perform parallel computation, either on multiple local compute cores or on networked machines. The same code can be used in both situations; whether multiple cores, multiple machines, or a combination of both are used just depends on the arguments used when invoking Julia. If you start the REPL with julia -p N, then it will be started with N worker processes; N is typically set equal to the number of CPU threads on your machine. This argument also automatically loads a module that makes the parallel-processing commands and macros available.

One of these is a parallel map() command, called pmap(). The regular map() command works as it does in other languages: map(f, a) applies the function f() over the array a, returning an array of results of the same shape as a. The parallel version, pmap(f, a), automatically distributes the work over the available workers. There are a handful of other commands and macros, such as @distributed() and @spawn(), which ship work out to available processing resources for parallel execution, and fetch(), which gathers the results.

If you invoke Julia with the argument --machine-file filename rather than -p N, then all of the parallel code will run transparently on the networked machines listed in filename (using all the CPU cores on each machine), assuming that key-based SSH logins have been set up for each machine. The machine file can be as simple as a list of hostnames. Of course, the machines can be nodes on a supercomputing cluster, or a set of heterogeneous servers around the world. Using this mechanism, a program can be run on a wide variety of parallel computing environments without changing any of the code.

Parting words

Julia goes a long way toward solving the "two-language problem", since it is quick to develop in while producing fast, native code. Its drawbacks are that it is not well suited to system scripting because of a somewhat slow startup time; occasional sluggish response at the REPL prompt while the JIT compiler does its thing; an ecosystem that, while growing rapidly, cannot yet compete with, for example, Python's; and an at-times idiosyncratic syntax that is not to everyone's taste.

Considering its youth, and the well-established alternatives available, Julia has seen an impressive degree of adoption by a wide variety of users. As the Julia team pointed out, Julia is already in use at more than 700 universities and has become part of the curriculum at many of them. A few years ago I speculated that Julia would eventually supplant Fortran as the language of choice for large-scale simulations and other demanding numerical applications. With the release of version 1.0 and Julia's rapidly increasing adoption, I'm feeling pretty sanguine about my prediction.

Comments (13 posted)

Learning about Go internals at GopherCon

September 5, 2018

This article was contributed by Josh Berkus

GopherCon is the major conference for the Go language, attended by 1600 dedicated "gophers", as the members of its community like to call themselves. Held for the last five years in Denver, it attracts programmers, open-source contributors, and technical managers from all over North America and the world. GopherCon's highly-technical program is an intense mix of Go internals and programming tutorials, a few of which we will explore in this article.

Internals talks included one on the scheduler and one on memory allocation; programming talks included why not to base your authorization strategy on hash-based message authentication codes (HMACs). But first, here's a little about upcoming changes to Go itself.

Go 2 features

On the morning of the first full conference day, a video from Go language team lead Russ Cox was played. In it, he announced planned changes to the next major iteration of the language, nicknamed "Go 2". Apparently attending the conference only by video is usual for Cox, who did the same in 2017.

The plans he announced included three items: improved error handling, adding generic types, and a new architecture for managing third-party modules and dependencies. They are primarily improving error handling by adding a built-in "check expression" that reduces boilerplate code when testing for failure conditions and printing an error message. The new generic variable types will be based on "contract-based specifications", where the user defines the generic as fulfilling specific conditions.

The modules announcement was more controversial. Modules are based on a Go fork called "vgo". One of the Go project's primary issues for the last couple years has been designing a better way to package third-party libraries. However, the new modules architecture wholesale replaces the "dep" system that a community team had worked on for more than a year. Members of that team are quite unhappy about how that change was managed, as Sam Boyer, one of the team members, outlined in a Twitter thread.

According to Cox, the Go 2 changes won't be one big replacement, but will be gradually added to the language as they are ready. You can follow along with them on the Go 2 Drafts page.

The scheduler saga

Kavya Joshi kicked off the conference with an explanation of how the Go scheduler works. To make it easier to understand, she presented it by talking through creating a new, simple scheduler from scratch and ended up with a simplified version of the real scheduler.

[Kavya Joshi]

As Go is primarily meant for running concurrent applications, each instance of a concurrently running function or module in Go is called a "goroutine", which is considered the basic unit of work. These goroutines run in lightweight, user-space threads which are smaller and cheaper than kernel threads. As an example, the initial goroutine stack is 2KB, compared with 8KB for kernel threads. This design supports fast creation and destruction of goroutines. Several goroutines are run in each kernel thread, managed by the Go scheduler.

In order to keep all cores busy, the default limit on the number of threads running goroutines is equal to the number of cores on the system. Instead of using a shared run queue of pending work, which would become a scaling bottleneck, each thread has its own run queue. The threads independently take goroutines off their own queues, and put goroutines back on the queues whenever they block waiting on system calls (such as file reads).

However, this leads to a problem: how can we optimize work if some threads empty their queues faster than others? The solution that Go chose was to implement "work stealing", where a thread with an empty queue will "steal" half the tasks from another, randomly selected thread's queue.
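The queue-management idea can be shown with a deterministic toy model (this sketch is purely illustrative; the real scheduler uses lock-free per-thread queues and random victim selection):

```go
package main

import "fmt"

// runQueue is one worker thread's local queue of pending goroutine IDs.
type runQueue []int

// stealHalf moves half of the victim's queued work onto the thief's
// queue, mirroring the Go scheduler's work-stealing policy.
func stealHalf(thief, victim *runQueue) {
	n := len(*victim) / 2
	*thief = append(*thief, (*victim)[:n]...)
	*victim = (*victim)[n:]
}

func main() {
	busy := runQueue{1, 2, 3, 4, 5, 6}
	idle := runQueue{} // this worker's queue has emptied

	stealHalf(&idle, &busy)
	fmt.Println(idle) // [1 2 3]
	fmt.Println(busy) // [4 5 6]
}
```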

Goroutines that execute long-running CPU-bound tasks are another problem. Go runs a watchdog called "sysmon" that looks for this, and if a task takes more than 10ms, it gets preempted and placed on a global, shared, run queue. This queue is lower priority than the threads' individual run queues, so they only take goroutines from it when they are done with their own queues.

The Go scheduler has some notable limitations. First, by design, all run queues are first-in-first-out (FIFO), with no support for priorities. Go also doesn't have strong preemption, sometimes causing latency, but this is the target of a current design proposal. Another design proposal addresses the lack of hardware topology (NUMA) support.

As you might expect, the real Go scheduler is more complex than the simple model presented by Joshi. Its core principles are the ones she presented, but there are additional caveats and special circumstances. However, even starting from a simple foundation using published architectures doesn't work out for everyone, as the team at Chain found out.

Don't make macaroons

Tess Rinearson, VP of engineering at Chain, shared how its experiment with "macaroons" for authorization failed, and why. She started by filling in the basic issues around authorization on web and API-driven platforms. When she started at Chain, they were using "basic auth", meaning username/password for authentication. Authorization — that is, checking what a user was allowed to do — was performed by checking a long list of specific permission "grants", on the server, every time the user makes an API call. However, this caused some limitations in the platform that the company was unhappy with.

As Rinearson explained, there are two approaches to authorization: capability-first and identity-first. Capability-first platforms use tokens, which behave like car keys, to authorize actions; if you have the token, you're authorized. The problem with these is that the token can be stolen and, like a stolen car key, can then be used any way the thief wishes. Identity-first platforms rely on checking the user's identity against an access-control list or policy server whenever the user performs an action. Chain's initial approach was identity-first.

[Tess Rinearson]

In addition to the server load created by repeated policy checks, another problem with identity-first systems is an inability to delegate actions to third parties, even when the user wishes to. This is equivalent to being unable to have a valet park your car because it uses "face ID" to unlock. According to a paper by Tyler Close, all systems of identity delegation are vulnerable to hijacking by causing a delegate to take a wrong action; this is known as the "confused deputy" problem.

In 2014, several Google researchers published a solution to the identity/authorization problem, called "macaroons", so named because they are "cookies with layers". Rinearson and the team at Chain decided to deploy macaroons in order to solve their problem with authorization for their developer API.

The idea behind a macaroon is to use cookies authenticated with HMAC to identify users. When the user wants to delegate a capability, such as allowing a spouse to share their photos on the site, the system "layers" the token by computing a further HMAC over the new restriction, using the existing token as the key. The added layer carries limitations on the authorization in the token, such as "may only share if authenticated against this specific Google account". This allows a rich set of policy-based authorizations without any need to check against a central policy server.

Macaroons did not work out well for Chain. First, its implementation required every initial authentication to hit the "dashboard", a Rails application that previously hadn't been mission-critical, in order to obtain the core HMAC token. Second, the long chains of code required to unwrap the successive layers of HMAC-encrypted policies were too cumbersome for developers used to being able to write ad-hoc code for the Chain platform instead of depending completely on the SDK. Third, revoking tokens or even changing user roles was extremely slow since applications weren't checking the policy server regularly. There were also formatting and character encoding issues with the extremely long base64-encoded HMAC tokens.

The final objection came from the French members of Chain's staff, who pointed out that layered cookies are actually spelled "macaron", not "macaroon".

The team decided to remove macaroons from the code base, in favor of an enhanced basic auth implementation that Rinearson calls "pumpkin spice auth". However, macaroons proved much harder to remove than to implement. While they were deployed in two weeks originally, deprecating them took over five months due to outstanding tokens and SDK code.

Memory allocation

Eben Freeman shared what he has learned about the internals of Go memory management over the last year of working on improving performance at Honeycomb.io. Its service involves users creating ad-hoc queries to comb through hundreds of millions of log entries, looking for bugs. Freeman's team managed to improve throughput by about 2.5x via multiple techniques to reduce their application's use of memory.

He started by explaining how the Go memory allocator actually works. Go is a "managed memory" language, which means that it automatically allocates and deallocates memory for the programmer. This memory exists in two places: the stack, which is a small memory area that each goroutine gets for local objects in the current function, and the heap, where everything else is stored. Since the majority of memory usage is in the heap, that's also where most optimization is needed.

The Go allocator tries to satisfy three design goals:

  1. satisfy allocations of any given size while avoiding fragmentation
  2. avoid locking
  3. enable reclaiming memory

For example, to prevent fragmentation, the allocator tries to reserve memory for groups of like-sized objects in blocks. To avoid locking, Go uses per-CPU caches. Reclaiming memory involves running a garbage collector (GC) concurrently with the running Go program.

[Eben Freeman]

Memory is allocated in large blocks called "arenas", usually 64MB in size. These arenas, in turn, are divided up into multiple "spans", each of which is a run of memory pages containing objects of a single size class, such as 64-byte objects. A CPU running a goroutine is given a per-CPU cache with one span for each of the 70 different size classes, which is usually enough to satisfy most allocations.

Every time an object is allocated in a span, Go updates a bitmap in memory to show which slots are in use. These bitmaps are used by the garbage collector, which uses a "tricolor concurrent mark-sweep" algorithm to free memory. During the mark phase, objects start out white; reachable objects are marked grey, then black once all of the objects they reference have been scanned. During the sweep phase, objects left white are unreachable and are freed, generally just meaning that the memory is designated as available for reuse.

While the GC is designed to be highly concurrent, creative use of pointers to objects can mess up the marking system. For this reason, Go adds a write barrier on writes to pointers during the marking phase, which can add significant latency overhead. Another place that GC can cause a lot of overhead is in "mark assist": if the garbage collector is unable to keep up with the concurrently running goroutines, it can require that the goroutines themselves do some of the marking work before they can proceed with execution.

Freeman showed that you can measure the overhead of the allocator and the garbage collector by disabling them, using the environment variables GOGC=off and GODEBUG=sbrk=1. A better way to see memory activity is to run a pprof CPU profile to see time spent in runtime.mallocgc. He suggested using the "flamegraph" visualization built into pprof's web interface. For example, time spent in runtime.(*mcache).refill shows time spent refilling span caches, and time spent in runtime.gcAssistAlloc shows time spent in mark assist.

Possibly the best tool for checking memory efficiency is the Go execution tracer. It collects granular runtime events over a short period of time. This is a lot of data, but you can drill down into what the garbage collector is doing, like mark assist periods. It also has a statistic called "minimum mutator utilization" that shows how much time the program spent doing things other than GC and allocation.

So, how did the Honeycomb staff learn to speed up execution? One key lesson was to limit the use of pointers in Go code. Sometimes the pointer is one you declared yourself, but some standard-library types, such as time.Time, contain internal pointers of their own. That can make it worthwhile to use an int64 instead of a time.Time when you don't need full clock semantics. In particular, avoid allocating behind a pointer inside a loop.
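As an illustration of the int64-for-Time idea (Event and buildEvents are invented names for this sketch, not from the talk), a pointer-free event record might look like:

```go
package main

import (
	"fmt"
	"time"
)

// Event keeps its timestamp as int64 Unix nanoseconds rather than a
// time.Time; time.Time carries an internal location pointer, so large
// slices of it give the garbage collector extra pointers to scan.
type Event struct {
	TimestampNs int64
	Value       float64
}

func buildEvents(n int) []Event {
	events := make([]Event, 0, n) // one pointer-free backing array
	base := time.Now().UnixNano()
	for i := 0; i < n; i++ {
		// Appending by value: no per-iteration pointer allocation.
		events = append(events, Event{TimestampNs: base + int64(i), Value: float64(i)})
	}
	return events
}

func main() {
	fmt.Println(len(buildEvents(1000)))
}
```

Because neither field of Event is a pointer, the garbage collector can skip scanning the slice's contents entirely.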

Another thing Honeycomb did was to approximate "slab allocation": allocating a large block of memory and then subdividing it. While allocating an individual object is fast, allocating many objects at once is faster per object. In Go, you can make large "slices" (resizable arrays) behave like slab allocations. Beware, though, that a single variable still in use can keep the whole allocation from being garbage collected.
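A minimal sketch of the slice-as-slab idea (node and newNodes are illustrative names): one make() call provides backing storage for many objects, at the cost that any surviving pointer pins the entire slab in memory:

```go
package main

import "fmt"

type node struct {
	value int
	next  *node
}

// newNodes allocates n nodes in one backing array (the "slab") and
// returns pointers into it. One large allocation replaces n small
// ones, but holding any one *node keeps the whole slab live.
func newNodes(n int) []*node {
	slab := make([]node, n) // a single allocation
	out := make([]*node, n)
	for i := range slab {
		slab[i].value = i
		out[i] = &slab[i]
	}
	return out
}

func main() {
	nodes := newNodes(4)
	fmt.Println(nodes[3].value)
}
```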

A more sophisticated technique that worked for Honeycomb was to recycle objects to hold new objects of an identical size. Honeycomb had a filter-then-aggregate process for running queries, and Freeman realized that the row-buffer structs could be reused to hold the aggregation data, which considerably cut the amount of work the garbage collector had to do. Go's sync.Pool type provides a more general version of this technique.
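A small sketch of the sync.Pool approach (bufPool and process are hypothetical names; Honeycomb's actual row buffers are more involved):

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool recycles byte buffers of a single size class, in the
// spirit of reusing identically sized row buffers.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 0, 4096) },
}

// process borrows a buffer from the pool, uses it, and returns it,
// instead of letting a fresh allocation become garbage each call.
func process(data string) int {
	buf := bufPool.Get().([]byte)[:0] // keep capacity, reset length
	buf = append(buf, data...)
	n := len(buf)
	bufPool.Put(buf) // hand the buffer to the next caller
	return n
}

func main() {
	fmt.Println(process("hello"))
}
```

Note that sync.Pool may drop cached objects at any GC cycle, so it suits transient scratch buffers rather than state that must persist.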

Overall, these optimizations more than doubled the throughput for user queries while keeping overall memory usage constant.

Summary

As with every good conference, there were many other talks and activities at GopherCon, including a gobots (robots controlled in Go) hackathon, a community contributor day, and many hands-on workshops. There were even sessions that weren't about Go internals, such as ones on creating data charts and graphs in Go or on building text user interfaces (TUIs). GopherCon is also the annual gathering of Women Who Go, an international educational association for women and non-binary people using Go.

At the end of the conference, attendees were surprised to be told that, for the first time, GopherCon would be moving away from Denver. Next year, it will be held in San Diego in July. If you write Go regularly, you should probably consider attending.

[Josh Berkus is an employee of Red Hat.]

Comments (12 posted)

Page editor: Jonathan Corbet


Copyright © 2018, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds