The darktable project has unveiled the first release-candidate (RC) packages for its upcoming version 2.0 milestone. Darktable retains its focus as a high-end photo editor in the forthcoming release, with new features that target professional workflows and experienced users. But there are also improvements that will be appreciated by casual shutterbugs.
As a refresher, we previewed darktable's 1.6 release in late 2014. The 1.6 series has seen a set of incremental updates over the past year, generally to update the application's support for new camera models (as well as the occasional bug fix).
The 2.0 branch has seen substantial new work, however—including changes to the way edits are stored on disk. As always, darktable's edits are non-destructive to the original raw or JPEG file, but the updates to the on-disk file format break backward compatibility with the older releases. Users can open an image previously edited with darktable 1.6 in the 2.0 release, but after saving any changes, the edit file will not be readable in 1.6.
The new release is available for download in a source archive for Linux and as an installable OS X application. Compiling from source is straightforward, and the RC release is configured to install itself in /opt/darktable/ (as opposed to /usr/bin/ for stable releases), so it is possible to test out 2.0 while keeping 1.6 around for daily use.
The changes in 2.0 begin with a port from GTK+2 to GTK+3. As is often the case with specialty applications (especially those in the graphics domain), darktable uses its own custom widgets for most of the controls and on-canvas tools, so most users may not notice that a different version of the GUI toolkit is in use—except for those using high-DPI screens, on which darktable will correctly pick up GTK+3's high-DPI support. But it is an important update for the project, and rigorous testing is advisable.
The major new features available in 2.0 include a printing module. This is the first time that darktable has supported print output, a feature that photographers are notoriously picky about. Darktable's print support is CUPS-based; it includes a bevy of color-adjustment tools and detailed previewing, and it is fully color-managed. In conjunction with that feature, darktable has also added support for soft proofing, or simulating device-specific printer output on the screen. Assuming that the printer and display are properly set up, users can spot color shifts or out-of-gamut problem areas before they send an image out to the printer.
A related feature is the ability to export images directly to PDF. PDF output and printing were certainly possible through external applications before; integrating them into the application itself is helpful for the sake of speed. All image-export operations now allow the user to upscale an image (which is primarily useful for print, where one might need a large physical size) and to change the image mode (e.g., to grayscale).
Version 2.0 also allows users to place a watermark on their images before exporting or printing them. This feature is generally useful only to professional photographers, who may want to add a copyright statement or a warning of some kind (e.g., PROOF). For now, the feature supports only text watermarks, not images.
Another feature likely to be of keen interest to frequent darktable users is support for keyboard control. Notably, users can now use the keyboard to navigate through image collections, and keyboard accelerators have been added for several of the image adjustment tools. For instance, one can increase or decrease the size or opacity of the brush tool on the fly, without moving the cursor out of the image and over to the tool's control panel. Navigating large image collections is faster through the keyboard, and resizing on-canvas tools as one works is, similarly, a convenience for those who expect to spend a lot of time in the application.
Frequent darktable users may also appreciate some changes to the application's Lua scripting interface. The big update is that Lua scripts can now alter user-interface elements, a change that opens the door to better interactive scripts from outside developers. Personally, I have long been critical of darktable's UI style (in particular the indistinct, often abstract set of symbols used for filters). It remains to be seen whether the darktable community will push the project forward in this respect, but one can hope. The other Lua-related change is that the project has launched a separate GitHub repository for Lua scripts, including a section for contributed scripts.
Darktable's editing tools get a refresh in the 2.0 RC. Perhaps the flashiest addition is a new color-reconstruction module, which uses some clever trickery to regenerate image data in highlights that have been completely washed out. The gist is that a washed-out highlight often maxes out only one of the red, green, and blue channels on the sensor. With the sensor overloaded, the color data in the RGB tuple is no longer valid, but there may still be some detail that can be recovered from the channels that did not clip.
In the past, darktable would use this data to recreate a bit of texture detail in an area that would otherwise be rendered as a featureless white blob. Now, the color-reconstruction module can artificially fill in color information for the washed-out area by sampling the neighboring, non-white pixels. Highlight reconstruction is improved in the "shadows & highlights" module as well; as noted above, once any channel on the image sensor is spiked at 100%, the color information for that pixel is effectively lost. Highlight-reconstruction techniques, therefore, are prone to producing results that are out of balance with the normally exposed pixels in the rest of the image. The "shadows & highlights" module now features a separate slider for adjusting the balance of reconstructed highlights.
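The neighbor-sampling idea can be illustrated with a toy sketch. This is not darktable's actual algorithm (which is far more sophisticated and works in a different color space); the helper names, the 8-connected neighborhood, and the brightness-rescaling step are all assumptions for illustration:

```python
# Toy neighbor-based color reconstruction for clipped highlights.
# An image here is a list of rows of (R, G, B) tuples in 0..255.
CLIP = 255

def is_clipped(px):
    """A pixel is 'washed out' if any sensor channel hit the maximum."""
    return max(px) >= CLIP

def reconstruct_color(image):
    """For each clipped pixel, borrow the average color of non-clipped
    neighbors, keeping the pixel's own brightness."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not is_clipped(image[y][x]):
                continue
            # Gather the valid (non-clipped) 8-connected neighbors.
            neigh = [image[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))
                     if (ny, nx) != (y, x) and not is_clipped(image[ny][nx])]
            if not neigh:
                continue  # nothing to sample; leave the pixel white
            # Average the neighbors' color, then rescale it to the clipped
            # pixel's brightness, so only the hue is borrowed.
            avg = [sum(c[i] for c in neigh) / len(neigh) for i in range(3)]
            own_lum = sum(image[y][x]) / 3
            nb_lum = sum(avg) / 3 or 1
            out[y][x] = tuple(min(CLIP, round(v * own_lum / nb_lum))
                              for v in avg)
    return out
```

A fully white pixel surrounded by warm-toned neighbors comes back with the neighbors' hue rather than staying pure white, which is the effect the module aims for.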
Some of the existing modules have seen usability improvements for the new release. The "Crop and Rotate" tool supports setting the desired aspect ratio, but in the new release, the list of ratios gains some helpful aliases—such as "4x5, 8x10" next to the "5:4" ratio entry. The list has also been re-sorted to put commonly used ratios at the top and less common ones (such as Cinemascope 2.35:1) at the bottom.
A more subtle change is that the curve tools (such as "base curve" and "tone mapping") now allow the user to add control nodes by Control-clicking. These nodes are the points that the user drags around to change the shape of the curve; precision is the name of the game when it comes to adjusting a curve for optimal output. Previously, it was only possible to add a node by clicking and dragging the curve; that meant that adding a node required messing with the shape of the curve. Now users can add nodes to, in effect, nail down areas of the curve that they do not want to see changed.
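The pinning idea can be shown with a piecewise-linear stand-in for the curve; darktable's real curves are splines, so there the effect of an added node is approximate rather than exact, and the function names here are invented for the sketch:

```python
from bisect import bisect_left, insort

def curve_eval(nodes, x):
    """Evaluate a curve, given as sorted (x, y) nodes, at x by linear
    interpolation between the two surrounding nodes."""
    xs = [n[0] for n in nodes]
    i = bisect_left(xs, x)
    if i == 0:
        return nodes[0][1]
    if i == len(xs):
        return nodes[-1][1]
    (x0, y0), (x1, y1) = nodes[i - 1], nodes[i]
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

def add_node(nodes, x):
    """Insert a node exactly on the current curve, leaving its shape
    unchanged; the new node then pins that point while others move."""
    insort(nodes, (x, curve_eval(nodes, x)))
```

Adding a node this way does not disturb the curve at all; dragging some other node afterward leaves the pinned point in place, which is exactly why Control-clicking to add nodes is useful.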
There are, of course, many other small changes to be found in the new darktable RC release. Image geotags now properly support the "altitude" field from GPS devices, the application's noise-profile data is now stored in a JSON file that can (hopefully) prove useful to external scripts and applications, and so on. On the whole, 2.0 looks to be a solid release from an increasingly predictable—in the good sense—project. If things follow their usual pace, a 2.0 final release should be available before the end of the year. It will be particularly interesting to watch future releases to see whether darktable successfully manages to cultivate a scripting community; pro users often enjoy customizing their tools, but they also demand a lot of out-of-the-box functionality. Thus far, darktable has served this demographic well.
Praveen Yalagandula gave a presentation at the Tokyo OpenStack Summit about some of the work that his company, Avi Networks, did in creating a network service to run atop OpenStack. His talk also focused on the OpenStack APIs that the company used. In the grand tradition of such talks, it was billed as "the good, the bad, and the ugly" of those APIs. In implementing "load balancing as a service" (LBaaS) using OpenStack, the company found APIs of each type, which Yalagandula described in his talk.
The application that the company is building is an "enterprise-grade scalable network service" for load balancing. It needs to be able to scale out and scale in as demand changes, be highly available, and perform well. It also provides tenant isolation, so that multiple customers can use the service at the same time and can self-allocate resources from it without affecting each other.
One of the main goals of the project was to build on top of OpenStack, rather than in or alongside the cloud service. That means using only the APIs provided by OpenStack components; it is a layered design somewhat akin to running a program in user space. Running in OpenStack would mean adding another component to the cloud-management layer that accessed the OpenStack core message queue directly, or perhaps adding the functionality directly into the Neutron network component. Running LBaaS alongside OpenStack would mean creating a component that ran outside of the framework but still understood, and could use, the virtualized networking set up by an OpenStack cloud.
Running the service on top of OpenStack was chosen because it provides flexible deployment models for the service, Yalagandula said. There are multiple ways to deploy the various components of the service, which is well-supported by OpenStack. In addition, OpenStack provides easy management for the compute nodes and underlying network virtualization that the service can use.
He then did a quick introduction to load balancing. Essentially, the idea is to balance the traffic from users to multiple web servers so that each server doesn't get overloaded. In addition, load balancers monitor the servers and stop sending traffic to those that have failed. For many web applications, though, there are several tiers (web servers, application servers, database servers), each of which has multiple servers that are fronted by its own load balancer.
So, Yalagandula said, that sounds like something that could be done with a simple "packet sprayer" implemented in the switch, the router, or Neutron. But more is expected of a load balancer in these kinds of enterprise deployments—they act more as an "application delivery controller". For example, the web is not really stateless, so all of the traffic for a user's session should be sent to the same backend server, which requires more than just routing at the packet level.
In addition, there is an element of intelligence that may be required by load balancers. If a user is simply browsing the inventory at a site, then a set of regular servers could be used. But if they have something in their shopping cart, perhaps premium servers should be used, which requires the load balancer to make a decision based on the URL in the request.
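Both behaviors—session affinity and URL-based pool selection—can be sketched in a few lines. The pool names and the `/cart` rule are invented for illustration; a real application delivery controller would also handle health checks, weights, and pool resizing:

```python
import hashlib

# Hypothetical backend pools for a two-tier shopping site.
REGULAR = ["web1", "web2", "web3"]
PREMIUM = ["prem1", "prem2"]

def pick_backend(session_id, path):
    """Choose a pool based on the request URL, then hash the session id
    so a given user's requests always land on the same backend."""
    pool = PREMIUM if path.startswith("/cart") else REGULAR
    digest = hashlib.sha256(session_id.encode()).digest()
    return pool[int.from_bytes(digest[:4], "big") % len(pool)]
```

Note that simple modulo hashing reshuffles most sessions whenever the pool size changes; production load balancers use consistent hashing or explicit session tables to avoid that.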
Another feature for load balancers is SSL termination. That allows putting all of the SSL handshake, encryption, and decryption handling on a smaller set of servers that can be more tightly controlled in terms of policies for acceptable ciphers, protocol versions, key lengths, and so on. SSL also uses lots of CPU and memory, so moving it to dedicated servers and allowing the backend servers to handle regular unencrypted traffic makes sense.
The legacy architecture for LBaaS is one that runs alongside OpenStack and must be managed separately from the OpenStack cloud. The approach Avi Networks has taken is to run "service engines" that handle the load balancing in OpenStack virtual machines (VMs). The "Avi controllers" then allocate resources using the Nova compute component and Neutron networking component to run those service engines. It is similar to software-defined networking in some ways, Yalagandula said. One of the strengths of the architecture is that the controllers can easily add more service engines as demand increases and destroy them when they are no longer needed.
Having provided that background, Yalagandula shifted to the kinds of APIs required by the service. It needs to access the elasticity features to create and delete VMs as well as to attach and move them to the right networks. The high-availability features are required to detect VM failures or network connectivity disruptions and quickly switch over to replacements. The multi-tenancy features are needed to support multiple users of the service. It also needs high performance, especially from the networking, so that it can support high packet rates.
He started by mentioning the OpenStack APIs that worked well for the LBaaS application. The Nova VM creation and deletion APIs were solid; it was easy to create VMs and plug them into the networks as needed. That allows scaling in and scaling out as needed. But the options for placing VMs on certain hosts could be better, he said. For non-admin users (which is how the Avi controller sometimes runs), there is limited support for VM placement.
The multi-tenancy support from the Keystone identity service and the integration of that with Nova and Neutron is "very good and very solid", he said. When compared to other infrastructure-as-a-service offerings, OpenStack stands out here. At the basic level, Neutron's CRUD API for handling networks, ports, subnets, and so on is "pretty solid". It makes it easy to create multiple application tiers with the proper isolation between them.
He then moved on to some of the problem areas, but he noted that he wasn't trying to "bash" OpenStack—these were areas for improvement. The fact that a service like LBaaS can be written on top of OpenStack is a testament to the quality of the APIs overall.
Notifications are one such area; they are missing from the core OpenStack services. If a VM dies, there is no way to be notified; the same is true if a port gets deleted in Neutron, and the same goes for Keystone, he said. He would like a way to subscribe to alerts for a particular resource. Since there isn't one, the system has to periodically check on the status of each resource, which generates lots of traffic. Using alerts from the Ceilometer telemetry service might be possible, but Ceilometer appears to be unpopular with customers, so they do not enable it.
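The polling workaround the talk describes amounts to repeatedly listing resources and diffing the results to notice failures. A minimal sketch, assuming a caller-supplied `list_resources` function (the generator name and parameters are hypothetical):

```python
import time

def poll_for_losses(list_resources, interval=5.0, rounds=None):
    """Yield, after each polling round, the set of resource IDs that
    disappeared since the previous poll.  With no subscription API,
    this loop must run for every resource type being watched."""
    known = set(list_resources())
    n = 0
    while rounds is None or n < rounds:
        time.sleep(interval)
        current = set(list_resources())
        lost = known - current
        if lost:
            yield lost
        known = current
        n += 1
```

Every watcher like this issues a full list call per interval, which is the traffic cost Yalagandula was complaining about; a subscribe/notify API would replace all of it with one message per actual event.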
There is a mismatch between Nova "interfaces" (virtual NICs) and Neutron "ports" (virtual switch ports) that stems from an improper separation when Neutron was split out from Nova. The result is that there is no way to move an existing interface from one network to another. Instead, the VM's interface must be destroyed and a new one attached on the other network, which is a pretty heavyweight operation. In the physical world, the equivalent is moving a wire from one switch port to another, but it can't be done that simply in OpenStack. A better separation between Nova and Neutron would have allowed that, Yalagandula said.
There is also inconsistency in the semantics of the security group APIs. Depending on which component implements those APIs, the behavior can be different. Both Nova and Neutron implement the APIs but, for Nova, policies (e.g. firewall rules) apply across all interfaces in a VM, while in Neutron they apply per-port. That means users of the API need to know which component is implementing it to get the behavior they need, but there is no way to query for which is doing so in a given installation.
The OpenStack APIs do not allow for customization, in general. For example, source IP address spoofing is not allowed even on the local network. But that is an essential primitive for building high-availability servers. There are some ad hoc Neutron extension APIs that alleviate the problem, but they are not core APIs so there is no guarantee they will be present on any given installation.
The last issue he described was not really an API issue, but was more of a problem with the reference implementation of OpenStack. Network performance can be fairly poor, though it has been getting better over time. There can be many layers between the VM and the physical network and that number is dependent on the plugins and installation configuration. Each of those layers imposes a cost. Those are fixable issues and things are getting better, but it is a problem Avi Networks ran into when trying to build LBaaS on OpenStack.
To summarize, Yalagandula said that the basic APIs are really good, but the advanced APIs for building highly available, scalable network services still need some work. As a result, it took the team only about a month to get something up and running, but another year and a half before the service was running on customers' OpenStack deployments.
[I would like to thank the OpenStack Foundation for travel assistance to Tokyo for the summit.]
Think back, a moment, to the dim and distant past — April 1999, to be specific. An analyst company named Mindcraft issued a report showing that Windows NT greatly outperformed Red Hat Linux 5.2 and Apache for web-server workloads. The outcry from the Linux community, including from a very young LWN, was swift and strong. The report was a piece of Microsoft-funded FUD trying to cut off an emerging threat to its world-domination plans. The Linux system had been deliberately configured for poor performance. The hardware chosen was not well supported by Linux at the time. And so on.
Once people calmed down a bit, though, one other fact became clear: the Mindcraft folks, whatever their motivations, had a point. Linux did, indeed, have performance problems that were reasonably well understood even at the time. The community then did what it does best: we sat down and fixed the problems. The scheduler got exclusive wakeups, for example, to put an end to the thundering-herd problem in the acceptance of connection requests. Numerous other little problems were fixed. Within a year or so, the kernel's performance on this kind of workload had improved considerably.
The Mindcraft report, in other words, was a much-needed kick in the rear that got the community to deal with issues that had been neglected until then.
The Washington Post article seems clearly slanted toward a negative view of the Linux kernel and its contributors. It freely mixes kernel problems with other issues (the AshleyMadison.com break-in, for example) that were not kernel vulnerabilities at all. The fact that vendors seem to have little interest in getting security fixes to their customers is danced around like a huge elephant in the room. There are rumors of dark forces that drove the article in the hopes of taking Linux down a notch. All of this could well be true, but it should not be allowed to overshadow the simple fact that the article has a valid point.
We do a reasonable job of finding and fixing bugs. Problems, whether they are security-related or not, are patched quickly, and the stable-update mechanism makes those patches available to kernel users. Compared to a lot of programs out there (free and proprietary alike), the kernel is quite well supported. But pointing at our ability to fix bugs is missing a crucial point: fixing security bugs is, in the end, a game of whack-a-mole. There will always be more moles, some of which we will not know about (and will thus be unable to whack) for a long time after they are discovered and exploited by attackers. These bugs leave our users vulnerable, even if the commercial side of Linux did a perfect job of getting fixes to users — which it decidedly does not.
The point that developers concerned about security have been trying to make for a while is that fixing bugs is not enough. We must instead realize that we will never fix them all and focus on making bugs harder to exploit. That means restricting access to information about the kernel, making it impossible for the kernel to execute code in user-space memory, instrumenting the kernel to detect integer overflows, and all the other things laid out in Kees Cook's Kernel Summit talk at the end of October. Many of these techniques are well understood and have been adopted by other operating systems; others will require innovation on our part. But, if we want to adequately defend our users from attackers, these changes need to be made.
Why hasn't the kernel adopted these technologies already? The Washington Post article puts the blame firmly on the development community, and on Linus Torvalds in particular. The culture of the kernel community prioritizes performance and functionality over security and is unwilling to make compromises if they are needed to improve the security of the kernel. There is some truth to this claim; the good news is that attitudes appear to be shifting as the scope of the problem becomes clear. Kees's talk was well received, and it clearly got developers thinking and talking about the issues.
The point that has been missed is that we do not just have a case of Linus fending off useful security patches. There simply are not many such patches circulating in the kernel community. In particular, the few developers who are working in this area have never made a serious attempt to get that work integrated upstream. Getting any large, intrusive patch set merged requires working with the kernel community, making the case for the changes, splitting the changes into reviewable pieces, dealing with review comments, and so on. It can be tiresome and frustrating, but it's how the kernel works, and it clearly results in a more generally useful, more maintainable kernel in the long run.
Almost nobody is doing that work to get new security technologies into the kernel. One might cite a "chilling effect" from the hostile reaction such patches can receive, but that is an inadequate answer: developers have managed to merge many changes over the years despite a difficult initial reaction. Few security developers are even trying.
Why aren't they trying? One fairly obvious answer is that almost nobody is being paid to try. Almost all of the work going into the kernel is done by paid developers and has been for many years. The areas that companies see fit to support get a lot of work and are well advanced in the kernel. The areas that companies think are not their problem are rather less so. The difficulties in getting support for realtime development are a clear case in point. Other areas, such as documentation, tend to languish as well. Security is clearly one of those areas. There are a lot of reasons why Linux lags behind in defensive security technologies, but one of the key ones is that the companies making money on Linux have not prioritized the development and integration of those technologies.
There are signs that things might be changing a bit. More developers are showing interest in security-related issues, though commercial support for their work is still less than it should be. The reaction against security-related changes might be less knee-jerk negative than it used to be. Efforts like the Kernel Self Protection Project are starting to work on integrating existing security technologies into the kernel.
We have a long way to go, but, with some support and the right mindset, a lot of progress can be made in a short time. The kernel community can do amazing things when it sets its mind to it. With luck, the Washington Post article will help to provide the needed impetus for that sort of setting of mind. History suggests that we will eventually see this moment as a turning point, when we were finally embarrassed into doing work that has clearly needed doing for a while. Linux should not have a substandard security story for much longer.
Page editor: Jonathan Corbet
Copyright © 2015, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds