
LWN.net Weekly Edition for July 2, 2015

The Atom code editor turns 1.0

By Nathan Willis
July 1, 2015

GitHub recently announced the release of version 1.0 of Atom, its open-source code editor. Functionally, Atom is similar to Emacs, Vim, and many other extensible text editors aimed at developers. But it differs in other respects, starting with the fact that it is a local application built on web technology. Naturally enough, Git (and GitHub) support is also integrated, and the size of the GitHub community means that the project already has a large fan base.

[Editing in Atom 1.0]

Development on Atom started in 2008 as a side project of GitHub co-founder Chris Wanstrath. In 2011, it became an official GitHub project. Although the early incarnations of Atom were an in-browser web application, the recent builds—including the 1.0 release—are packages that can be installed and run locally. Under the hood, however, they incorporate a modified version of the Chromium browser and Node.js. That makes the downloadable bundle on the hefty side: the Debian and RPM packages are around 70 MB. In addition to Linux, Atom runs on Windows and Mac OS X.

Installing and learning one's way around Atom 1.0 is straightforward. The bundle is self-contained, so there are no dependencies to worry about. The editor features built-in syntax highlighting, automatic indentation, a range of available UI color schemes (which can be further modified through a user stylesheet), and customizable keybindings. Hard and soft line wrapping are available, as are settings to display a variety of "invisibles" like white-space characters, carriage returns, and EOL characters.
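Keybindings, for instance, are customized by editing the user's keymap.cson file, which maps CSS-style selectors to keystroke/command pairs. A minimal sketch (this particular binding is an invented example, not an Atom default):

    # ~/.atom/keymap.cson: user keymap overrides.
    # Bind Ctrl-Alt-W to the built-in soft-wrap toggle
    # whenever focus is inside a text editor.
    'atom-text-editor':
      'ctrl-alt-w': 'editor:toggle-soft-wrap'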

That said, there are a few niceties in the basic editing functionality that go beyond the bare minimum. For instance, one can open up an HTML preview of Markdown-formatted text with a single keystroke, collapse or expand blocks of code, or "zoom" in and out (by changing the font size) with the mouse wheel. One of the most convenient features, though, is the "command palette." The keystroke Ctrl-Shift-P pops up a modal dialog window that can be used to search for commands. It is akin to command completion in Emacs, but with a live-updating list of matches.

[Editing with the minimap package]

The code for Atom itself is under the MIT license, as is the case with the Atom add-on packages developed at GitHub. There are, however, third-party packages which may be published under different terms. Atom is written in CoffeeScript, as are many of the available add-on packages. And it is the availability of add-on packages that makes Atom particularly noteworthy.

For general-purpose code editing, arguably the most direct competitors to Atom are Adobe's Brackets and the Ace editor from Cloud9. All three have roots in web development and are written in JavaScript-like languages.

But both Brackets and Ace make a point of aiming for simplicity; one of the design goals for Atom is to provide the same level of extensibility found in Emacs and Vim. Indeed, there are clear similarities with Emacs to be found in Atom. The majority of the functionality found in the default setup is implemented with a suite of more than 70 individual packages. As is the case with Emacs, this architecture allows external packages to make significant changes to how Atom operates.
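As a sketch of what that extensibility looks like, the main module of a hypothetical minimal package might read as follows (the package name and command are invented for illustration; a command registered this way automatically appears in the command palette):

    # lib/hello-world.coffee: main module of a hypothetical package
    module.exports =
      # Atom calls activate() when the package is loaded
      activate: ->
        # Register a workspace-wide command; it shows up in the
        # command palette as "Hello World: Greet"
        atom.commands.add 'atom-workspace', 'hello-world:greet', ->
          atom.notifications.addInfo 'Hello from a package!'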

At the moment, there do not appear to be packages that bring major new, non-editor functionality to Atom (in the manner of the Gnus news-and-email reader for Emacs, to name one example), although there are a few packages that provide minor new features, such as showing weather information. But there are more than 2,100 packages available, many offering improved support for specific programming languages, better integration with the host OS, and editing conveniences like whole-file overviews, navigation aids, and code-analysis tools. Atom includes a built-in package installer and search tool; each package description includes screenshots (which can be animated), home-page links, and more.

[Atom's built-in package browser]

It should be noted for the record that a significant percentage of the Atom packages available so far are syntax-highlighting schemes. Such themes are supposed to be packaged and promoted separately from the main package archive, a distinction that does not seem to be enforced. This makes it hard to estimate how many genuinely useful packages are available at the moment, but it is clearly a large number.

The category with the most packages, though, is Git support. Atom comes with basic Git support included. The built-in tools let the user check out branches, track the status of files, and stage and commit changes. Interestingly enough, though, the basic Git functionality does not include pushing a commit, making a pull request, or a number of other features. There is also a set of built-in keybindings that implements a form of GitHub integration, but it relies almost entirely on jumping from the editor to the GitHub site in a separate browser.

For example, there is no built-in way to view git blame information in Atom itself, but the "Alt-g b" key combination will open the corresponding blame-annotated file at GitHub—assuming, that is, that the repository in question is hosted at GitHub. These limitations seem a bit odd, although perhaps users with less allegiance to GitHub will be happy to know that the Atom editor is not tightly bound to one particular service provider.

In any case, there are dozens of third-party Atom packages that extend the editor's Git functionality, starting with the popular git-plus package. Other packages provide enhancements to how diffs are displayed, generate graphs of commit activity, and so on. There are also several popular packages for integration with other services, including Travis CI as well as GitHub competitor Bitbucket. The package-development community is clearly active, and it would appear that GitHub is giving that community free rein.

IDE-like functionality is not yet as advanced as version-control support. There are several competing third-party packages for hooking into a compiler, but the vast majority are limited to use with CoffeeScript, LESS, and other web-development languages—in which the "compiler" is more akin to a pre-processor than anything else.

[Installing a package in Atom]

Ultimately, whether Atom will find a place on one's desktop (alongside Emacs or Vim, much less replacing them) is an open question. The main selling point seems to be that Atom offers programmer-friendly extensibility in a language that is familiar to more of the "development community" at present. CoffeeScript, after all, compiles down to JavaScript, and there are far more individuals on the web today who have some level of JavaScript experience than there are individuals who have used Common Lisp.

But that is no guarantor of success. Emacs and Vim have remained as popular as they are today because their add-on communities enhance the editors' built-in functionality. Perhaps Atom will attract a sizable community of package developers, but one that never pushes the boundaries beyond somewhat-better Git integration and extensive theming. Whether or not Atom manages to grow beyond those boundaries will determine if it becomes a first-level programmer's editor or remains more of a user-friendly interface to Git and GitHub. Neither is a bad outcome, but the two would have distinctly different influences on the development community.


News and updates from DockerCon 2015

July 1, 2015

This article was contributed by Josh Berkus



DockerCon on June 22 and 23 was a much bigger affair than CoreOSFest or ContainerCamp. DockerCon rented out the San Francisco Marriott for the event; the keynote ballroom seats 2000. That's a pretty dramatic change from the first DockerCon last year, with roughly 500 attendees; it shows the huge growth of interest in Linux containers. Or maybe, given that it's Silicon Valley, what you're seeing is the magnetic power of $95 million in round-C funding.

The conference was also much more commercial than the first DockerCon or CoreOSFest, with dozens of presentations by sponsoring partners, and a second-day keynote devoted entirely to proprietary products. Most notable among these presentations was the appearance of Mark Russinovich, CTO of Microsoft Azure, there to announce that Azure, ASP.NET, and Microsoft Visual Studio all support Docker containers now. This year's DockerCon was more of a trade show than a technical conference, with little or no distinction made between open-source and proprietary software.

However, there were a few good technical sessions, and the conference as a whole allowed us to catch up with Docker technology and tools. Docker, Inc. staff announced milestones in Swarm, Machine, and Compose, the advent of Docker Network and Plugins, and some new security initiatives. There were also some great hacking sessions by Jessie Frazelle and Bryan Cantrill. But before we explore those, it's time for a bit more container-world politics.

(As with earlier articles, "Docker" refers to the container technology and the open source project, and "Docker, Inc." refers to the company.)

Burying the hatchet

Solomon Hykes, CTO of Docker, Inc., took the stage to announce the creation of a new standard and foundation to govern the Docker container format. According to Hykes, users had told the company that it wasn't good enough for Docker to be a de-facto standard; it needed to be a real standard. Hykes was oddly careful not to mention CoreOS in this. It would not be the last time during the conference that Docker, Inc. responded to pressure from that competitor without mentioning it by name.

[Solomon Hykes]

Docker, Inc. separated the runC code that governs the container format from the rest of the Docker project. The engineers were surprised to find that it was only about 5% of the total code. This is distinct from the Docker Engine, which is the daemon that manages runC containers, and remains in the Docker project under the stewardship of the company.

According to Hykes, Docker, Inc. then asked the Linux Foundation to create a new non-profit, the Open Container Project (OCP), to govern the container standard under development. It chose the Linux Foundation because, in Hykes's words, "The Linux developers are famous for doing what's right. There's no politics involved in developing Linux, as far as I know." OCP will take the de-facto standard of runC and work on developing the Open Container Format, "a universal intermediary format for operating system containers." This specifically does not mean just Linux containers; the new effort aims to incorporate Illumos, FreeBSD, and Windows as well.

Will this end the divisive rivalry between Docker, Inc. and CoreOS, Inc.? Hykes hopes so. "Standards wars are an ugly, terrible thing. They're also boring," he said. He invited CoreOS CEO Alex Polvi onto the stage to shake his hand. Hykes also said that all founding members of the appc specification have been given seats on the board of the OCP. Strangely, that statement contradicts the OCP FAQ, which says that only two appc maintainers are included in OCP. It's unclear whether this is a policy change or a misstatement.

Aside from all of these politics, some useful technology has already come out of separating runC from the Docker Engine. In the closing session of DockerCon, Michael Crosby, chief maintainer of Docker, demonstrated an experimental fork of runC that supports live copying of running containers between machines. He and a partner showed off the feature by playing Quake in a container that they then copied to data centers around the world—while continuing to play.

New From Docker

Hykes announced that Docker is now doing "experimental" releases. The experimental branch is experimental indeed: the majority of the on-stage demonstrations of the new technology crashed and had to be shown via a video. This branch includes a number of orchestration and management features from Docker that replace or supplement those offered by third-party projects.

[Ben Firshman]

Ben Firshman of Docker, Inc. created Fig, a tool for deploying multiple Docker images with links between them. Fig has now been incorporated into mainstream Docker as Compose, which Firshman demonstrated. Compose uses a declarative YAML syntax to define a set of linked containers, which can then be launched with "docker-compose up". Firshman demonstrated using Compose with Docker Machine to perform "auto-scaling". In the demo, he created a web application container backed by a MongoDB container, and then used the "scale" option for Compose to deploy 50 of each container to cloud host Digital Ocean.
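For the curious, a Compose file of the sort demonstrated looks roughly like this (the image names and port mapping here are invented for illustration):

    # docker-compose.yml: a hypothetical two-container application
    web:
      image: example/webapp    # hypothetical application image
      links:
        - db                   # makes the MongoDB container reachable as "db"
      ports:
        - "8000:8000"
    db:
      image: mongo

With that file in place, "docker-compose up -d" starts both containers, and a command like "docker-compose scale web=50" launches additional copies of a service.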

The demo also relied on another new project: Docker Network. Since several containers run on each server or virtual server, and containers can be migrated between servers, container-based infrastructures require some form of software-defined networking or proxies to allow services to connect with each other. Currently, users fill this need with tools like Weave, Project Calico, and Kubernetes.

Thanks to the acquisition of networking startup SocketPlane, Docker, Inc. is now offering its own virtual network overlay. In the process, it completely overhauled Docker Engine's issue-plagued container networking code. The goal is that networking should "just work" regardless of how many containers you have or where they are physically located in your cluster. There are also plans to implement security "micro-segmentation", including rules and firewalls between containers.

While Docker, Inc. has been working hard to replace third-party functionality in container management and orchestration with its own open-source tools, it is also making Docker Engine more open to third-party tools by implementing plugin APIs. Launched at DockerCon Europe in December, the plugins API currently supports four kinds of plugins: networking, volumes, schedulers, and service discovery. The company plans to add more plugin APIs in the future. Docker, Inc. partner ClusterHQ played a large part in designing the API, which allows its Flocker volume plugin to work with the officially supported Docker.

Docker's general plugin approach is intended to be as flexible as possible. The API relies on Unix sockets, permitting plugins to be loaded at runtime without restarting the Docker Engine. The company also claims that multiple plugins of the same type can be loaded, with different plugins being used by different containers on the same system. All applications on Docker are supposed to be able to work with all plugins, but we will see how that actually works out in practice.
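Based on the plugin documentation of the time, the socket handshake looks roughly like the exchange below: Docker discovers a socket such as /run/docker/plugins/example.sock and sends an activation request, and the plugin answers with the subsystems it implements (here, a volume driver). Subsequent calls, such as /VolumeDriver.Create, are JSON requests to the same socket.

    POST /Plugin.Activate HTTP/1.1
    Host: plugin

    HTTP/1.1 200 OK
    Content-Type: application/vnd.docker.plugins.v1+json

    {"Implements": ["VolumeDriver"]}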

Image security and the GSA

Docker, Inc. has come under increasing criticism for the inherent insecurity of Docker Hub. Indeed, one of the chief reasons CoreOS gives for using its Quay.io instead of Docker Hub is Quay.io's security features, including privacy and cryptographic signing for images. While Docker, Inc. is building many security features into its proprietary Docker Trusted Registry product, one security-oriented project is being added to the open-source environment: Docker Notary.

Notary was demonstrated by Docker security lead Diogo Mónica. Interestingly, Notary is not limited to Docker images: it is a generic system designed to validate cryptographic signatures on any type of content, simply by piping that content through it. Notary will be integrated with Docker Hub in order to enforce verification of the origin of images.

A second-day keynote talk from Nirmal Mehta of Booz Allen Hamilton explained why Notary is being implemented now. With assistance from Booz Allen Hamilton, Docker, Inc. has secured a contract to implement Docker-based development systems at the US General Services Administration (GSA), the agency that oversees much of the US government's contractor budget. The GSA has long believed in "security by provenance", so Docker Hub now needs to support it.

The GSA project is intended to consolidate a massive number of inefficient, duplicative development stacks with slow turnaround times and, by using Docker, to streamline and modernize them. Mehta demonstrated the developer infrastructure that the GSA is already testing; he showed committing a change to an application, which then initiated an automated Docker image build, testing under Jenkins, and automatic deployment of the application to a large cloud of machines. This new infrastructure, dubbed Project Jellyfish, will deploy at the GSA in July 2015.

The GSA's adoption of Docker will be interesting to watch. Unlike startups, government agencies are usually slow to adopt new technologies, and even slower to let go of them. Since the GSA has a huge yearly budget, this move could ensure that there are significant funds and jobs in the Docker ecosystem for years to come, even if Docker never catches on anywhere else. The agency's goals are also different from those of web startups: its main reason for wanting to speed up development is to have more time for security review, Mehta said.

Hackery: Docker desktops and debugging

Aside from the keynotes and product announcements, there were some fun hacking presentations at DockerCon. Two of the best were back-to-back, showing off Docker desktop hacks and large-scale application debugging using Docker.

[Jessie Frazelle]

Jessie Frazelle of Docker, Inc. demonstrated how to put every single application on your Linux desktop into containers. After making some jokes about closet organization and the Container Store, she launched her presentation, which was itself running in LibreOffice inside a Docker container. Her base desktop consists only of what she called a "text user interface"; all graphical applications (and just about everything else) run in containers, usually one or more containers per application.

Frazelle demonstrated running Spotify in a container. Other applications, like Skype and VLC, were shown running in their own containers and connecting to another container running PulseAudio for sound. More usefully, she showed Chrome running in a container that routed all internet traffic through another container running Tor, permitting secure, anonymous browsing, something Chrome doesn't normally support. The most difficult application she put in a container was Microsoft's Visual Studio Code editor: "It didn't have any instructions. I had to strace it to figure out why it was failing."

[Bryan Cantrill]

All of this requires a lot of configuration and delving into how desktop programs interact. She has to make many Unix socket files, which are used for internal communication by these desktop programs, accessible from within the Docker containers. Frazelle also has extensive, heavily edited user and application configuration files (i.e. "dotfiles") to make this all work.
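To give a flavor of what is involved, launching a containerized graphical application means handing the container the host's X11 socket (and, for media applications, the sound device), roughly like this; the jess/chrome image name is an assumption based on Frazelle's published images, and the exact flags vary per application:

    # A sketch of running a GUI application in a container,
    # sharing the host's X server socket and sound device.
    docker run -d \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -e DISPLAY=unix$DISPLAY \
      --device /dev/snd \
      --name chrome \
      jess/chrome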

On the other end of the scale, Bryan Cantrill of Joyent explained how running applications in containers makes it possible to debug failures at scale. He made a strong appeal for developers to try to debug crashes, saying: "Don't just restart the container. That's like the modern version of 'just reboot the PC'. You're an educated person, right? You need to understand the root cause of the failure."

Joyent's main tool for failures that cause crashes is the core dump. A core dump from a containerized application is easier to analyze than one from a regular server or virtual machine, since the container runs only that application and terminates when the application crashes. Cantrill showed how Triton, Joyent's cloud container environment, automatically sends core dumps from crashed containers to Manta, its large-scale object store. He used mdb, the illumos modular debugger, to troubleshoot one such core dump from Joyent's live environment, tracing the crash to some bad Node.js code.

Conclusion

Of course, there were many other things covered at DockerCon, including announcements and product demonstrations by IBM, EMC, Amazon AWS, Microsoft Azure, and others. Several companies explained their Docker-based development pipelines, including Disney, PayPal, and Capital One. There were also hands-on tutorials on some of the new Docker tools, such as Docker, Inc.'s beta orchestration platform built with Swarm and Machine.

One topic that was almost absent from the agenda was discussion of how to handle persistent data in containers. Aside from the Flocker project and some proprietary products from EMC, nobody was presenting on how to handle database data or other aspects of the "persistent data problem". Nor was it mentioned as part of Docker, Inc.'s grand vision of where the Docker platform is going.

In any case, DockerCon made it clear that Docker and containers are going to be a substantial part of the application infrastructures of the future. Not only is the accelerated development of projects and tools in this space continuing, usage is spreading across the technology industry and around the world. At the end of DockerCon, the company announced the next DockerCon Europe in November in Barcelona, for which registration and proposals are now open.


Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Security: A look at Rspamd; New vulnerabilities in cacti, chromium, pam, roundcubemail, ...
  • Kernel: 4.2 Merge window part 2; RCU-walk; Processor Trace.
  • Distributions: Previewing OpenWrt 15.05; DragonFly BSD, OpenMandriva, SteamOS, Ubuntu, ...
  • Development: Magit 2.1; Amazon's new TLS implementation; GnuPG 2.1.6; The R Consortium; ...
  • Announcements: AdaCamp cancellation, Oracle v. Google, events.

Copyright © 2015, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds