
LWN.net Weekly Edition for October 10, 2013

DRI3 and Present

By Jake Edge
October 9, 2013
X.Org Developers Conference

The direct rendering infrastructure (DRI) allows applications to access the system's accelerated graphics hardware without going through the display server (e.g. X or Wayland). That interface has seen two revisions over the years (DRI1, DRI2) and is poised for another. Keith Packard described DRI3 (or DRI3000 as he likes to call it) at this year's X.Org Developers Conference (XDC), along with the status and plans for the related "Present" extension. Present is a way to get new window content from a pixmap to the screen in a VBLANK-synchronized way, as Packard put it in August.

Packard noted that he had rewritten his presentation three times over the past week or so due to conversations he had at the Linux Plumbers Conference, LinuxCon North America, and XDC. There will likely be more revisions to DRI3 and Present based on the answers to some questions that he had for those in the audience, he said. But the Present extension, which comes from the verb "present", will be keeping that name, though he had looked in vain for something better.

Requirements

DRI3 is the "simplest extension I have ever written", he said, and it makes all of the direct rendering pain that came with DRI1 and DRI2 "just go away". It is designed to make the connection between dma-bufs and pixmaps. A dma-buf is a kernel DMA buffer, while a pixmap is an X rendering target. The extension "just links those two together" so that an application can draw into a dma-buf, hand it to the X server, and it becomes a pixmap with the same contents. Or the reverse.

DRI3 will also allow an application and display server to share synchronization objects. That way, the server can know when the application has finished writing to a buffer before it starts putting it on the screen. In current Intel graphics hardware, this is trivial because the kernel will serialize all access to dma-bufs, but that is not true of other graphics hardware. So there is a requirement to be able to share sync objects between applications and the server.

The third requirement for DRI3 is that it needs to tell the client which rendering device it should be talking to. With DRI2, a filename was passed to the client over the X protocol. The client opened the device and the "lovely DRI2 authentication dance" followed. DRI3 will do things differently, Packard said.

Status

"I think the DRI3 extension is done", he said, noting that there is nothing significant that he knows of left to be done in the extension itself. It has an X protocol in the server along with C bindings (XCB). There are no plans for an Xlib binding as it should all be hidden by the direct rendering libraries (e.g. Mesa). There is a Mesa DRI3 loader and a GLX API, so one can run Mesa with DRI3 today without any trouble. In fact, performance is a lot better because the Present extension is more efficient than DRI2 in swapping buffers. In addition, the Intel Mesa and 2D drivers are "all hooked up".

There are a few things still to be done, including an EGL API on the application side. He lamented the fact that the choice of window system API necessitates a different API for GL, so that he has to write another implementation in Mesa to bind the EGL API to DRI3/Present. "Thanks GL." It turns out that due to Adam Jackson's GLX rewrite, Packard won't have to write a GL loader for DRI3 for the X server. The server has yet another, separate binding to the window system APIs, Packard said, that really should switch to DRI3, but Jackson is removing all that code with his rewrite.

There is a need for more test cases as well. DRI3 passes all the tests he has now, but there are a number of OpenGL extensions that DRI3 will support (e.g. OML sync, OML swap), which have not been supported before. There are few tests in Piglit for those. There are three for OML sync, which pass for DRI3 (and fail for DRI2), he said. That is a "100% improvement", but "as usual, more testing is needed".

DRI3 extension details

In an "eye-chart slide" (slide 5 in his deck [PDF]), Packard listed the requests provided by the DRI3 extension. There are only four: Open, PixmapFromBuffer, BufferFromPixmap, and FenceFromFD. The Open request opens a direct rendering manager (DRM) device and prepares it for rendering. One can select a particular rendering provider if there is more than one available. The X server will pass back a file descriptor to the opened device. Currently that means the X server can do the authentication dance, but once DRM render nodes are available in the kernel, it can transparently switch to using those. The file descriptor is the only information the client receives; there is a function in libdrm that takes a file descriptor and figures out the proper Mesa driver to load, so nothing more is needed.

PixmapFromBuffer and BufferFromPixmap are symmetrical operations. For PixmapFromBuffer, the client creates a dma-buf and passes a file descriptor of the buffer to the X server. The server then creates a pixmap referencing that dma-buf. BufferFromPixmap is the reverse operation, where the server maps a pixmap DRM object to a dma-buf and sends the file descriptor of the dma-buf to the client.

The FenceFromFD request creates an XSyncFence object on the server based on the file descriptor sent by the client. That descriptor refers to a page of shared memory that contains a futex. The synchronization is only in one direction: clients can wait for the server to signal using the futex. It would be nice to be able to go both ways, but he doesn't know how to make the single-threaded X server wait for the futex without blocking everything else that it is doing. In addition, Intel graphics hardware doesn't need this synchronization mechanism, so Packard needs other vendors to "tell me how to make it actually useful".

Present details

The Present extension provides three requests (PresentPixmap, PresentNotifyMSC, and PresentSelectInput) and three events (PresentIdleNotify, PresentConfigureNotify, and PresentCompleteNotify). The PresentPixmap request takes a pixmap and makes it the contents of a window. PresentNotifyMSC allows requesting a notification when the current "media stream counter" (MSC)—a frame counter essentially—reaches a particular value, while PresentSelectInput chooses which Present events the client will receive.

Those events include a notification when a pixmap is free for reuse (PresentIdleNotify), when the window size changes (PresentConfigureNotify), or when the PresentPixmap operation has completed (PresentCompleteNotify). One would think PresentConfigureNotify would be redundant because the core protocol ConfigureNotify event gives the same indication. But there is no way for the Present extension to know whether the client has requested ConfigureNotify events itself, so that it can figure out whether to pass the core event along or not. Thus a new event.

There is a long list of parameters for PresentPixmap that appear on Packard's slide #7. It includes a serial number that is returned in a matching PresentCompleteNotify event, the valid and update areas of the pixmap (which are the regions with correct and changed pixels respectively), x and y offsets, an XSyncFence object, options, and so on. The interface supports VBLANK-synchronized updates for sub-window regions and it will allow page flips even for small updates. In addition, it separates the presentation completion event from the buffer idle event, which allows for more flexibility.

Discussion

After that, much of Packard's presentation took the form of a discussion of various features and how they might be implemented. For example, the media stream counters were something of an "adventure". The MSC is a monotonically increasing counter of the number of frames since power on, but that model doesn't necessarily work well for applications that can move between different monitors, for example. Suspend/resume, virtual terminal (VT) switches, and display power management signaling (DPMS) also add to the complexity.

In the current design, there is a counter for each window and those counters still run at 60Hz when the monitor is turned off. That's not ideal, he said, as the counters should slow down, but not stop because applications are not prepared to deal with stopped counters. He wondered how fast the fake counters should run. Jackson suggested deriving something from the kernel timer slack value, which was met with widespread approval.

Another discussion topic was DRM SyncFences, which are currently implemented using futexes. That may not be ideal as futexes are not select()/poll()-friendly. As mentioned earlier, the X server cannot wake up when a fence is signaled, so there may need to be a kernel API added to get a signal when the futex is poked, he said. Someone from the audience suggested using socketpair() instead, which he plans to investigate.

Currently, protocol version 1 of Present has been implemented (which is different than what he presented). There are XCB and Xlib bindings and it is in the Intel Mesa and 2D drivers. Packard said that he would love to see Present added to a non-Intel driver to prove (or disprove) the synchronization mechanism it provides.

To close, there was also some discussion of the Present Redirection feature, which would pass pixmaps for redirected windows directly to a compositor. The "simple plan" of forwarding PresentPixmap requests directly to the compositor is what he has been working on, but something more complicated is probably required. Redirection seems targeted at improving XWayland performance in particular.

The overall plan is to get DRI3 and Present into the X server 1.15 release, which is currently scheduled for late December.

[I would like to thank the X.Org Foundation for travel assistance to Portland for XDC.]


The Thing System

By Nathan Willis
October 9, 2013

"The Internet of Things" is a tech buzzword that provokes a lot of eye-rolling (right up there with "infotainment"), but the concept that it describes is still an important one: the pervasive connectivity of simple devices like sensors and switches. That connectivity makes it possible to automate the monitoring and control of physical systems like houses, offices, factories, and the like—from any computing device that has access to the internet. Getting to that blissful automated future remains a challenge, however; one that a new project called The Thing System is addressing with free software.

Sensors and devices with built-in connectivity are widespread and affordable: wireless weather stations, remote-controllable light bulbs and switches, web-accessible thermostats, and so on. The hurdle in deploying these products in a "smart home" scenario is that rarely do any two speak the same protocol—much less provide any APIs that would allow them to interact with each other. In many cases, they do not even use the same bits of spectrum for signaling; it is only in recent years that WiFi and Bluetooth Low Energy have become the norm, replacing proprietary protocols over RF.

Due to the difficulty of merging products from a host of different vendors, most whole-home automation packages rely on a central server process that monitors the state of all of the devices, queues and processes events, and provides a way to create macros or scripts (activating one device based on the status of another, for example). But it is often at the scripting interface that such projects begin to inflict pain on the user. MisterHouse and Linux MCE, for example, both support a wide variety of hardware, but they are often criticized for the difficulty of defining home automation "scenes" (which is home automation slang for a collection of lighting or device settings to be switched on as a group).

Control all the things

The complexity problem is an area where the developers of The Thing System believe they can make significant strides. The project's mantra is that "things should work like magic," a notion that it defines in a fairly specific way. Server-side logic should recognize patterns and trigger events to solve a problem without requiring intervention from the user.

The example provided on the wiki is that an environmental sensor would log carbon dioxide levels, and the master process would run the ventilation system's fans until the levels return to normal—without any user intervention whatsoever. Clearly, such logic requires a detailed knowledge of the sensors and devices that are available in the building as well as a semantic understanding of what they can do—e.g., not just that there is a Nest thermostat, but that the thermostat activating the fan can lower the carbon dioxide level. The project defines a taxonomy of ten device types: climate, lighting, switch, media, presence, generic sensor, motive (that is, devices that implement some sort of physical motion, like raising or lowering blinds), wearable, indicator, and gateway.

Several of these categories are self-explanatory, although it is important to note that they may include both monitoring and control. For instance, the climate category includes weather sensors and thermostat controllers. Others are a bit more abstract: gateways are pseudo-devices that implement the Thing System APIs but connect to a remote Internet service in the background, and indicators are defined to be simple status lights.

Still, the device categories are detailed where it is appropriate; lighting devices can implement simple properties like brightness percentage (from zero to 100%), less-common properties like RGB color (a feature found in the Philips Hue smart-bulb product line), and even minutiae like the number of milliseconds it takes to transition from one state to another.

The Thing System's master process is called the steward; it runs on Node.js and is purported to be lightweight enough to function comfortably on a small ARM system like the Raspberry Pi or BeagleBone Black. When it starts up, the steward reads in a JavaScript module for each type of device that is present in the building. There is a sizable list of supported devices at present, although not all implement every API. After loading in the necessary modules, the steward then attempts to discover each device actually present on the network. It does so via Simple Service Discovery Protocol (SSDP), scanning for Bluetooth LE devices, looking at attached USB devices, and scanning for devices offering services over known TCP ports.

The steward's basic interface is a web-based client; it presents a grid of buttons showing each discovered device, and for the well-supported devices there is usually a simple web control panel that the user can bring up by clicking on the device's button. The project's documentation does consistently refer to these interfaces as "simple clients," however—meaning, for example, that they let the user switch lamps on and off or adjust the light's color, but that they do not implement magic.

The magic house

Sadly, The Thing System's magic seems to still be a ways from making its public debut. However, the project has developed several interesting pieces of the puzzle along the way. First, there is the Simple Thing Protocol, which defines a WebSockets interface for the steward to talk to programmable devices that (out of the box) offer only a vendor-specific API. That makes supporting new hardware a bit easier, and it establishes a generic interface with which it is significantly easier to define scenes and events. In contrast, scripting in other home automation systems like MisterHouse typically requires knowledge of low-level configuration details for each device.

The project also defines the Thing Sensor Report Protocol, which is a simpler, unidirectional protocol for read-only sensors to send data to the steward. Here again, most of the other well-known home-automation projects add direct support for reading data for each sensor type. This is difficult to support in the long run, particularly when one considers that many simple sensors (say, thermometers or ambient light sensors) change configuration every time the manufacturer releases an update. Many home automation enthusiasts simply build such sensors themselves from off-the-shelf electronic components or Arduino-like hardware. Either way, if the scene descriptions and scripts assume details about the sensors, the result is a hard-to-maintain configuration that eventually falls into disuse.

Finally, The Thing System's scripting syntax is abstract enough to be understandable to novices, without being too vague. The key notion is activities, which bind a status reading from one device to an action performed on another device. For example, when a thermometer's temperature property exceeds a pre-set value, the air conditioning device is sent the on command. This temperature event might be defined as:

    { path      : '/api/v1/event/create/0123456789abcdef'
    , requestID : '1'
    , name      : 'gettin too hot'
    , actor     : 'climate/1'
    , observe   : 'comfortthreshold'
    , parameter : 'temperature'
    }

It is not quite as simple as If This Then That, but by comparison to the competition, it is straightforward.

The Thing System will probably be more interesting to revisit when more advanced client applications (with lots of magic) become available, but the overall design is good. Being web-based may be a turn-off to some, but it is clearly the most future-proof way to offer support for commercial hardware. Like it or not, most of the devices that manufacturers release in the next few years will be accessible over HTTP or (hopefully) HTTPS. Similarly, whether one loves or hates JavaScript, the fact remains that it is far easier to find JavaScript programmers who might contribute device drivers or clients to the project than it is to find Perl programmers willing to do the same (as MisterHouse requires).

Ultimately, what makes a home automation system viable is its ease of use and its ability to support the hardware that is actually available in stores today. The Thing System looks to be making progress on both of those fronts.


Shumway lands in Firefox

By Nathan Willis
October 7, 2013

Mozilla has merged the code for Shumway, its JavaScript-based Flash runtime, into the latest builds of Firefox. The feature must be switched on manually, but it still marks a milestone for a project that Mozilla initially described as an R&D venture. Shumway is still a work in progress, but it brings Firefox users one step closer to eliminating a plugin that few will miss.

We first looked at Shumway in November 2012, shortly after it first went public. The goal of the project has always been to implement a "web-native" runtime for Flash—that is, a virtual machine to run Shockwave/Flash (SWF) files not through a browser plugin, but by translating SWF content into standard HTML, CSS, and JavaScript. In 2012, Shumway was an effort branded with the "Mozilla Research" moniker (though whether that designation makes it an official project or a personal investigation is not clear), and the initial announcement came with the caveat that Shumway was "very experimental." The project's GitHub README still says that "integration with Firefox is a possibility if the experiment proves successful," although it now seems that Mozilla is moving forward from the research stage to testing in the wild.

Mozilla certainly has valid reasons for pursuing a project like Shumway: no matter how much people claim that they hate Flash, there remains a lot of Flash content on the web, and several of Mozilla's target platforms will soon have no official Flash implementation. For starters, Adobe announced its intention to drop Linux support from its traditional Netscape Plugin API (NPAPI) plugin. Google, however, is pressing forward with its own Flash support for Chrome on Linux, which would leave Firefox with a noticeable feature gap by comparison.

Then, too, there is the mobile device sector to consider: Adobe dropped Flash for mobile browsers in 2011, and so far Google's Chrome team says it has no plans to implement support for the format on Android. If Mozilla were able to deliver reasonable performance in a plugin-less Flash VM for the Android builds of Firefox, then there would certainly be an interested audience. There are already third-party browsers for Android that do support Flash, and some users are evidently willing to jump through hoops to install the old, 2011-era plugin on today's Android releases.

But regardless of whether or not Flash support is vital to any person's web browsing experience, Shumway being made available in Firefox itself is news in its own right. Previously, the Shumway VM was installable as a Firefox extension [XPI], although interested users could also see an online demo of the code running in a test page. The Shumway code was merged on October 1. Builds are available from the Firefox Nightly channel; if all goes according to schedule, the feature will arrive for the general public in Firefox 27 in January 2014.

In order to test the nightly build's Shumway implementation, users must open about:config and change the shumway.disabled preference to false. With Shumway activated, users can test drive Flash-enabled sites or open SWF files. There are five demos linked to at the project's "Are we Flash yet?" page, although the links actually attempt to open the files in the "Shumway Inspector" debugging tool—which, in my tests, reports all of the files as corrupt and renders none of them.

[The box demo in Shumway]

It is straightforward enough to download the files from the examples directory of the Shumway GitHub repository and open them directly, however. The "pac" demos are a simple Pac-Man character that can be navigated with the keyboard arrow keys; pac.swf is implemented in ActionScript 2 (Flash's scripting language) and pac3.swf in ActionScript 3. The "racing" demos are an animated car-racing video game (again available for both ActionScript 2 and 3); there is also an MP3 player demo and a 2D falling-boxes demo that shows off a simple physics engine. Together, these and the other demos in the Shumway GitHub repository demonstrate basic interactivity, media playback, and vector graphics.

It can also be instructive to open up some of the SWF demos and tests from the other free software Flash players Gnash and Lightspark. Shumway is a more ambitious project in some respects than either Lightspark or Gnash, in that it targets support for both ActionScript 2 and ActionScript 3. Adobe introduced ActionScript 3 with Flash 9, and with it incorporated a new virtual machine model. Neither of the other free software projects has had the time or the resources to tackle both; Gnash implements the older standard and Lightspark the newer. Shumway's ActionScript support is implemented in JavaScript, running on SpiderMonkey, Firefox's JavaScript engine. The non-ActionScript contents of a Flash file are parsed by Shumway and translated to standard HTML5 elements.

That said, Shumway is currently considerably slower than the other two projects. In the Firefox nightly, it runs its own demos, but not at anything approaching a usable speed (for instance, there is significant lag between key presses in the racing game and actual movement on screen).

Performance aside, the other important factor in replacing Adobe's binary Flash plugin is correctness—whether or not Shumway can render arbitrary Flash content found in the wild in a usable fashion. This is a tricky question, the answer to which varies considerably based on each individual experiment. I tested Shumway against a random assortment of free Flash games, several business web sites, and an interactive house-painting visualizer from a home-improvement store (don't ask...). To my surprise, Shumway had no trouble with the home-improvement app and played most of the games, albeit slowly enough to prevent me from becoming competitive, and with occasional visual artifacts. YouTube, however, did not work—which might also be attributable to other issues, such as the video codecs used. The canonical response to that is that YouTube is best used in HTML5 <video> mode, of course, putting it outside of Shumway's purview entirely.

[The racing demo in Shumway]

The project has made considerable progress since we last looked at it in 2012, but there is also a lot of work remaining. For starters, despite Adobe's periodic insistence that the Flash specification is open, there are evidently a significant number of undocumented APIs. Flash is also, to put it mildly, a large and complex format that incorporates a lot of (literally) moving parts; the testing process would be long under even the best circumstances. But the same argument could be made for PDF, which Firefox has been interpreting with a built-in viewer since Firefox 19. That project, too, started off as an in-house experiment at Mozilla Labs (which does not appear to be synonymous with Mozilla Research).

Perhaps the success of PDF.js bodes well for the future of Shumway—after all, a full year after Shumway's initial debut, the world does not seem to be any closer to reaching the much-ballyhooed end of the Flash era. Then again, if Flash files can be rendered into normal web content without a buggy, proprietary plugin, perhaps its continued existence is not that big of a problem.


Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Security: Two LSS talks; New vulnerabilities in aircrack-ng, kfreebsd-9, nginx, torque, ...
  • Kernel: Kernel address space layout randomization; CPU hotplug locking; Android graphics.
  • Distributions: Fedora's working groups; OpenBSD, Oracle, PC-BSD, ...
  • Development: The Concord outliner; Make 4.0; tar 1.27; Firefox developer tools; ...
  • Announcements: GSoC 2014, nominations open for Free Software Awards, Intel powers Arduino, LTSI, ...

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds