February 13, 2013
This article was contributed by Martin Michlmayr
In July 2012, Richard Fontana started the GPL.next project to
experiment with modifications to version 3 of the GNU General Public
License (GPLv3). The name was quickly changed to the more neutral
"copyleft-next", and the license has since evolved into a "radically
different text" from the GPLv3.
Fontana gave a talk in the FOSDEM legal devroom on February 3 that
presented the current status of the project and his reasons for exploring
new ideas about copyleft licensing.
Fontana explained that he initially described the project as a fork of the
GPLv3 but admitted that it "sounded more negative than I intended". He
actually co-authored the GPLv3, LGPLv3, and AGPLv3 licenses together with
Richard
Stallman and Eben Moglen during his time at the Software Freedom Law
Center. Fontana, who is now Red Hat's open-source licensing counsel,
stressed that copyleft-next is his personal project and not related to his
work for SFLC, FSF, or Red Hat, although "these experiences had a personal
influence".
The complexity of the GPLv3
Allison Randal's 2007 essay "GPLv3,
Clarity and Simplicity" is a powerful critique of the GPLv3 and was
deeply influential on his thinking, Fontana said. The essay argued that everyone
"should be enabled to comprehend the terms of the license".
Based on the (then) near-finished draft of the GPLv3, Randal observed
that clarity and simplicity were unlikely to have been priorities
during the drafting process.
Fontana feels that the complexity of the GPL had a side effect of creating
an "atmosphere of unnecessary inscrutability and hyper-legalism"
surrounding the GPL. Additionally, he perceives that legal
interpretation of the license is lacking. Richard Stallman has withdrawn
from active
license interpretation and Brett Smith, for a long time FSF's "greatest
legal authority" according to Fontana, left his position as FSF's License
Compliance Engineer in May 2012. He wonders whether the complexity of
the GPL, together with FSF's withdrawal from an active interpretive role,
has contributed to a shift to non-copyleft licenses. He also believes that
developer preference for licensing minimalism is rising.
Another reason for the creation of copyleft-next is Fontana's desire to
experiment with new ideas and forms of licensing. He pointed out that every
license (proprietary or free) is imperfect and could benefit from
improvements. He feels strongly that license reform should not be
monopolized. Due to concerns about license proliferation, the OSI has
discouraged the creation of new licenses, effectively creating a monopoly
for the stewards of existing OSI-approved licenses. Fontana downplayed
concerns of license proliferation, partly because GPL-compatible licenses
should also be compatible with copyleft-next and because copyleft-next
offers one-way compatibility with the GPL. Finally, he views copyleft-next
as a "gradual, painless successor to GPLv2/GPLv3".
Fontana also expressed his disappointment in the way open source licenses
have historically been developed. While the drafting process for GPLv3 was
very advanced and transparent compared to other efforts, it seems
insufficiently transparent to him by present-day standards. He
pointed to the Project Harmony contributor
agreements as another example of a
non-transparent process since it employed the Chatham House Rule
during parts of the drafting process.
Contribution norms
Unsurprisingly, copyleft-next's development process is very different and
follows the "contemporary methodology of community projects". The license
is hosted on Gitorious,
and there is a public mailing list
and IRC channel—a bug tracker will be added in the near future.
Fontana acts as the SABD(NNFL)—the self-appointed benevolent
dictator (not necessarily for life).
The project has participation guidelines (informally known as the Harvey
Birdman Rule, after a US cartoon
series featuring lawyers). The norms reflect Fontana's intention to
involve developers and other community members in the development process.
They encourage transparency in license drafting and aim to
"prevent the undue influence of interest groups far removed from
individual software developers" (in other words, lawyers).
The guidelines disallow closed mailing lists as well
as substantive private
conversations about the development of the project. The latter can
be remedied by posting a summary to the public mailing list. Fontana is
true to his word and posted summaries
of discussions he had at FOSDEM.
Finally, the Harvey Birdman Rule forbids contributions in the form of
word-processing documents and dictates that mailing list replies using
top-posting shall be ignored.
The copyleft-next license
The copyleft-next license is a strong copyleft license. The word "strong"
refers to the scope of the license. The Mozilla Public License (MPL), for
example, is a weak copyleft license in this sense since its copyleft only
applies to individual files. While modifications to a file are covered by
MPL's copyleft provisions, code under the MPL may be distributed as part of
a larger proprietary piece of software. The GPL and copyleft-next, on the
other hand, have a much broader scope and make it difficult to make
proprietary enhancements of free software.
Copyleft-next was initially developed by taking the GPLv3 text and removing
parts from it. For each provision, Fontana asked whether the incremental
complexity it added was necessary and worthwhile. For many provisions,
he concluded that it was not—the removals include provisions
in the GPLv3 that no other open source license has needed, obscure clauses,
and text that should be moved to a FAQ. The GPL has a lot of historical
baggage, and Fontana believes that the reduction in complexity of copyleft-next
has led to a license that developers and lawyers alike can read and
understand. Those readers interested in verifying this claim can find the
current draft on Gitorious.
In order to show the drastic reduction in complexity, Fontana compared the
word and line counts of several popular open source licenses. The word
counts were as follows:
| License             | Words |
|---------------------|-------|
| copyleft-next 0.1.0 | 1423  |
| Apache License 2.0  | 1581  |
| GPLv1               | 2063  |
| MPL 2.0             | 2435  |
| GPLv2               | 2968  |
| GPLv3               | 5644  |
For comparison, the MIT license consists of 162 words and the BSD 3-clause
license has 212 words.
Copyleft-next has a number of interesting features. It offers outbound
compatibility with the GPLv2 (or higher) and AGPLv3 (or higher), meaning
that code
covered by copyleft-next can be distributed under these licenses. This
allows for experimentation in copyleft-next, Fontana explained.
The license also simplifies compliance: when the source code is not
shipped with a physical product, distributors do not have to give
a written offer to supply the source code on CD or a similar medium.
They can simply point to a URL where the source code can be found for
two years.
Like the GPLv3, copyleft-next allows
license violations to be remedied within a certain time period (although
compared to GPLv3 the provision has been simplified). In contrast to
GPLv3, the current draft of copyleft-next doesn't contain an
anti-Tivoization clause.
The copyleft-next license also takes a stance against certain practices
detested by many community members. The license includes a
proprietary-relicensing "poison pill": if the copyright holders offer
proprietary relicensing, the copyleft requirements evaporate—the
project effectively becomes a permissively licensed one, meaning that no
single entity has a monopoly on offering proprietary versions. This
provision was inspired by the Qt/KDE
treaty, which says that the KDE Free Qt Foundation can release Qt under
a BSD-style license if Qt is no longer offered under the LGPL 2.1.
Furthermore, copyleft-next has an anti-badgeware provision: it explicitly
excludes logos from the requirement to preserve author attributions.
While copyleft-next started as an exercise to simplify the GPLv3, it has
incorporated ideas and concepts from other licenses in the meantime. For
example, several provisions, such as the one explicitly excluding trademark
grants, were inspired by or directly borrowed from MPL 2.0.
Fontana made the first release of copyleft-next, 0.1.0, just before FOSDEM
and released version 0.1.1 in the interim. He mentioned during the talk
that he is thinking of
creating an Affero flavor of copyleft-next as well. He would like to see more
participation from community members. The mailing list provides a good way
to get started and the commit logs explain the rationale of changes in
great detail.
By Nathan Willis
February 11, 2013
Linux.conf.au 2013 in Canberra
provided an interesting window into the world of
display server development with a pair of talks about the X Window
System and one about
its planned successor Wayland (a talk which will be the subject of its own
article shortly). First, Keith Packard discussed coming
improvements to compositing and rendering. He was followed by David
Airlie, who talked about recent changes and upcoming new features for
the Resize, Rotate and Reflect Extension (RandR), particularly to cope
with multiple-GPU laptops. Each talk was entertaining enough in
its own right, but they worked even better together as the speakers
interjected their own comments into one another's Q&A period (or, from
time to time, during the talks themselves).
Capacitance: sworn enemy of the X server
Packard kicked things off by framing recent work on the X server as
a battle against capacitance—more specifically, the excess power
consumption that adds up every time there is an extra copy operation
that could be avoided. Compositing application window contents and
window manager decorations together is the initial capacitance sink,
he said, since historically it required either copying an
application's content from one scanout buffer to another, or
repainting an entirely new buffer then doing a page-flip between the
back (off-screen) buffer and the front (on-screen) buffer. Either
option requires significant memory manipulation, which has steered the
direction of subsequent development, including DRI2, the rendering
infrastructure currently used by the X server.
But DRI2 has its share of other problems needing attention, he
said. For example, the Graphics Execution Manager (GEM) assigns its
own internal names called global GEM handles to the graphics memory it
allocates. These handles are simply integers, not references to
objects (such as file descriptors) that the kernel can
manage. Consequently, the kernel does not know which applications are
using any particular handle; it instead relies on every application to
"remember to forget the name" of each handle when it is
finished with it. But if one application discards the handle while
another application still thinks it is in use, the second application
will suddenly get whatever random data happens to get placed in the
graphics memory next—presumably by some unrelated application.
GEM handles have other drawbacks, including the fact that they bypass
the normal kernel security mechanisms (in fact, since the handles are
simple integers, they are hypothetically guessable). They are also
specific to GEM, rather than using general kernel infrastructure like
DMA-BUFs.
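This is where DMA-BUFs come in: a buffer exported as a DMA-BUF is just a
file descriptor, so the kernel can track who holds it and clean up when
the last reference goes away. As a rough sketch (assumed usage, not code
from the talk), libdrm's PRIME API converts between the two worlds:

```c
/* A sketch (assumed usage): exporting a GEM buffer as a DMA-BUF file
 * descriptor with libdrm's PRIME API. Unlike a bare GEM handle, the
 * resulting fd is reference-counted by the kernel like any other
 * file descriptor. Error handling is trimmed for brevity. */
#include <fcntl.h>      /* O_CLOEXEC, used by DRM_CLOEXEC */
#include <stdio.h>
#include <stdint.h>
#include <xf86drm.h>

int export_buffer(int drm_fd, uint32_t gem_handle)
{
    int prime_fd;

    /* Convert the guessable integer handle into a kernel-managed fd. */
    if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &prime_fd) < 0) {
        perror("drmPrimeHandleToFD");
        return -1;
    }

    /* prime_fd can be passed to another process over a Unix socket
     * (SCM_RIGHTS); the receiver turns it back into a local handle
     * with drmPrimeFDToHandle(). When the last fd is closed, the
     * kernel knows the buffer is no longer in use. */
    return prime_fd;
}
```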
DRI2 also relies on the X server to allocate all buffers, so
applications must first request an allocation, then wait for the X
server to return one. The extra round trip is a problem on its own,
but server allocation of buffers also breaks window resizing, since
the X server immediately allocates a new, empty back buffer. The
application does not find out about the new allocation until it
receives and processes the (asynchronous) event message from the
server, however, so whatever frame the application was
drawing can simply get lost.
The plan is to fix these problems in DRI2's successor, which
Packard referred to in slides as "DRI3000" because, he said, it
sounded futuristic. This DRI framework will allow clients, not the X
server, to allocate buffers, will use DMA-BUF objects instead of
global GEM handles, and will incorporate several strategies to reduce
the number of copy operations. For example, as long as the client
application is allocating its own buffer, it can allocate a little
excess space around the edges so that the window manager can draw
window decorations around the outside. Since most of the time the
window decorations are not animated, they can be reused from
one frame to the next. Compositing the window and decoration will thus
be faster than in the current model, which copies the application
content on every frame just to draw the window decorations around it.
Under the new scheme, if the client knows that the application state has not
changed, it does not need to trigger a buffer swap.
Moving buffer management out of the X server and into the client
has other benefits as well. Since the clients allocate the buffers
they use, they can also assign stable names to the buffers (rather than the
global GEM handles currently assigned by the server), and they can be
smarter about reusing those buffers—such as by tracking the
freshness (age) of each buffer via the EGL_buffer_age extension. If the X
server has just performed a swap, it can report back that the previous
front buffer is now idle and available. But if the server has just
performed a blit (copying only a small region of updated pixels), it
could report back that the just-used back buffer is idle
instead.
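Buffer age is exposed to clients as the EGL_EXT_buffer_age extension. A
minimal sketch of how a client might act on it (the drawing helpers here
are hypothetical placeholders, not EGL API):

```c
/* A sketch (assumed usage) of the EGL_EXT_buffer_age extension: query
 * how stale the current back buffer is and redraw only what changed.
 * draw_everything() and draw_damaged_regions_since() are hypothetical
 * application helpers. */
#include <EGL/egl.h>
#include <EGL/eglext.h>

void draw_everything(void);
void draw_damaged_regions_since(EGLint age);

void redraw(EGLDisplay dpy, EGLSurface surface)
{
    EGLint age = 0;

    /* Age 0: contents undefined, repaint everything.
     * Age N: the buffer holds the frame from N swaps ago, so only
     * regions damaged in the last N frames need repainting. */
    if (!eglQuerySurface(dpy, surface, EGL_BUFFER_AGE_EXT, &age))
        age = 0;

    if (age == 0)
        draw_everything();
    else
        draw_damaged_regions_since(age);
}
```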
There are also copy operations to be trimmed out in other ways,
such as by aligning windows with GPU memory page boundaries. This
trick is currently only doable on Intel graphics hardware, Packard
said, but closes roughly half of the gap between the status quo
and the hypothetical upper limit. He already has much of the
DRI-replacement work functioning ("at least on my
machine") and is targeting X server 1.15 for its release. The
page-swapping tricks are not as close to completion; a new kernel
ioctl() has been written to allow exchanging chunks of GPU
pages, but the page-alignment code is not yet implemented.
New tricks for new hardware
Airlie's talk focused more on supporting multiple displays and
multiple graphics cards. This was not an issue in the early days, of
course, when the typical system had one graphics card tied to one
display; a single Screen (as defined by the X Protocol) was
sufficient. The next step up was simply to run a separate Screen for
the second graphics card and the second display—although, on the
down side, running two separate screens meant it was not possible to
move windows from one display to the other. A similar arrangement was "Zaphod"
mode, a configuration in which one graphics card was used to drive two
displays on two separate Screens. The trick was that Zaphod mode used
two copies of the GPU driver, with one attached to each screen. Here
again, two Screens meant that it was not possible to move windows
between displays.
Things started getting more interesting with Xinerama, however.
Xinerama mode introduced a "fake" Screen wrapped around the
two real Screens. Although this approach allowed users to
move windows between their displays, it did this at the high cost of
keeping two copies of every window and pixmap, one for each real
Screen. The fake Screen approach had other weaknesses, such as the
fact that it maintained a strict mapping to objects on the real,
internal Screens—which made hot-plugging (in which the
real objects might appear and disappear at any moment) impossible.
Thankfully, he said, RandR 1.2 changed this, giving us for the
first time the ability to drive two displays with one graphics card,
using one Screen. "It was like ... sanity,"
he concluded, giving people what they had long wanted for
multiple-monitor setups (including temporarily connecting an external
projector for presentations). But the sanity did not last, he continued,
because vendors started manufacturing new hardware that made his life
difficult. First, multi-seat/multi-head systems came out of the
woodwork, such as USB-to-HDMI dongles and laptop docking stations.
Second, laptops began appearing with multiple GPUs, which "come
in every possible way to mess up having two GPUs." He has one
laptop, for example, which has the display-detection lines connected
to both GPUs ... even though only one of the GPUs could actually
output video to the connected display.
RandR 1.4 solves hot-plugging of displays, work which required
adding support for udev, USB, and other standard kernel interfaces to
the X server, which until then had used its own methods for
bus probing and other key features. RandR 1.4's approach to USB
hotplugging worked by having the main GPU render everything, then
having the USB GPU simply copy the buffers out, performing its own
compression or other tricks to display the content on the USB-attached
display. RandR 1.4 also allows the X server to offload the rendering of
part of the screen (such as a game running in one window) to a
secondary GPU while displaying the result on the main display.
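For those who want to experiment, RandR 1.4 exposes these GPU
relationships as "providers" that can be wired together through
libXrandr. The following is a minimal sketch under assumed
conditions—real code would inspect each provider's capabilities rather
than guessing at their order:

```c
/* A sketch (assumed setup): pointing one RandR 1.4 provider at another
 * as a render-offload sink, roughly what the xrandr tool's
 * --setprovideroffloadsink option does. The provider ordering below is
 * a guess for illustration only. */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (dpy == NULL)
        return 1;

    XRRProviderResources *res =
        XRRGetProviderResources(dpy, DefaultRootWindow(dpy));

    if (res->nproviders >= 2) {
        /* Assumption: provider 1 is the secondary (rendering) GPU and
         * provider 0 drives the panel; real code would check each
         * provider's capability flags instead of guessing. */
        XRRSetProviderOffloadSink(dpy, res->providers[1], res->providers[0]);
    }

    XRRFreeProviderResources(res);
    XCloseDisplay(dpy);
    return 0;
}
```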
The future of RandR includes several new functions, such as
"simple" GPU switching. This is the relatively
straightforward-sounding action of switching display rendering from
running on one GPU to the other. Some laptops have a hardware switch
for this function, he said, while others do it in software.
Another new feature is what Airlie calls "Shatter," which splits up
rendering of a single screen between multiple GPUs.
Airlie said he has considered several approaches to getting to
this future, but at the moment Shatter seems to require adding a
layer of abstraction he called an "impedance" layer between X server
objects and GPU objects. The impedance layer tracks protocol objects
and damage events and converts them into GPU objects. "It's
quite messy," he said, describing the impedance layer as a
combination of the X server's old Composite Wrapper layer and the
Xinerama layer "munged" together. Nevertheless, he said, it is
preferable to the other approach he explored, which would rely on
pushing X protocol objects down to the GPU layer. At the moment, he
said, he has gotten the impedance layer to work, but there are some
practical problems, including the fact that so few people do X
development that there are only one or two people who would be
qualified to review the work. He is likely to take some time off to
try to write a test suite to aid further development.
Marking the spot
It is sometimes tempting to think of X as a crusty old
relic—and, indeed, both Packard and Airlie poked fun at the
display server system and its quirks more than once. But what both
talks made clear was that even if the core protocol is to be replaced, that
does not reduce window management, compositing, or rendering to a
trivial problem. The constantly changing landscape of graphics
hardware and the ever-increasing expectations of users will certainly
see to that.
By Nathan Willis
February 13, 2013
Collabora's Daniel Stone presented the final piece of the
linux.conf.au 2013 display server triptych, which started with a pair of talks from Keith Packard and
David Airlie. Stone explained the concepts behind Wayland and how it
relates to X11—because, as he put it, "everything you read on
the Internet about it will be wrong."
The Dark Ages
Stone, who said that he was "tricked into" working on X
about ten years ago, reviewed X11's history, starting with the
initial assumption of single-keyboard, single-mouse systems
with graphics hardware focused on drawing rectangles, blitting images,
and basic window management. But then, he continued, hardware got
complicated (from multiple input devices to multiple GPUs), rendering
got complicated (with OpenGL and hardware-accelerated video decoding),
and window management got awful (with multiple desktop environments,
new window types, and non-rectangular windows). As time passed, things
slowly got out of hand for X; what was originally a well-defined
mechanism swelled to incorporate dozens of protocol extensions and
thousands of pages of specifications—although on the latter
point, Packard chimed in to joke that the X developers never wrote
anything that could be called specifications.
The root of the trouble, Stone said, was that—thanks to
politics and an excessive commitment to maintaining
backward compatibility even with ancient toolkits—no one was
allowed to touch the core protocol or the X
server core, even as the needs of the window system evolved and
diverged. For one thing, the XFree86 project, where much of the
development took place, was not itself the X Consortium. For another,
"no one was the X Consortium; they weren't doing
anything." As a result, more and more layers got wrapped
around the X server, working around deficiencies rather than fixing
them. Eventually, the X server evolved into an operating system: it
could run video BIOSes, manage system power, perform I/O port and PCI
device management, and load multiple binary formats. But in spite of
all these features, he continued, it was "the dumbest OS you've
ever seen." For example, it could generate a
configuration file for you, but it was not smart enough to just
use the correct configuration.
Light at the end of the tunnel
Things did improve, he said. When the X.Org Foundation was formed,
the project gained a cool domain name, but it also undertook some
overdue development tasks, such as modularizing the X server. The
initial effort may have been too modular, he noted, splitting into
345 git modules, but for the most part it was a positive change. With
the move to autotools, the X server was actually buildable. Modularization
allowed X developers to excise old and unused code; Stone said the
pre-refactoring xserver 1.0.2 release contained 879,403 lines,
compared to 562,678 lines today.
But soon they began adding new features again, repeating the
pile-of-extensions model. According to his calculations, today X
includes a new drawing model (XRender), four input stacks (core X11,
XInput 1.0, 2.0, and 2.2), five display management extensions (core
X11, Xinerama, and the three generations of RandR that Airlie spoke
about), and four buffer management models (core X11, DRI, MIT-SHM, and
DRI2). At that point, the developers had fundamentally changed how X
did everything, and as users wanted more and more features, those
features got pushed out of X and into the client side (theming, fonts,
subwindows, etc.), or to the window manager (e.g., special effects).
That situation leaves the X server itself with very little to do.
Client applications draw everything locally, and the X server hands
the drawing to the window manager to render it. The window manager
hands back the rendered screen, and the X server "does what it's
told" and puts it on the display. Essentially, he said, the X
server is nothing but a "terrible, terrible, terrible"
inter-process communication (IPC) bus. It is not introspectable, and
it adds considerable (and variable) overhead.
Wayland, he said, simply cuts out all of the middleman steps
that the X server currently consumes CPU cycles performing. Client
applications draw locally, they tell the display server what they have
drawn, and the server decides what to put onto the display and where.
Commenters in the "Internet peanut gallery" sometimes argue that X is
"the Unix way," he said. But Wayland fits the "do one thing, do it
well" paradigm far better. "What one thing is X doing, and what is it
doing well?"
The Wayland forward
Stone then turned his attention to providing a more in-depth
description of how Wayland works. The first important idea is that in
Wayland, every frame is regarded as "perfect." That is, the client
application draws it in a completed form, as opposed to X, where
different rectangles, pixmaps, and text can all be sent separately by
the client, which can result in inconsistent on-screen behavior. DRI2
almost—but not quite—fixed this, but it had limitations
(chiefly that it had to adhere to the core X11 protocol).
Wayland is also "descriptive" and not "prescriptive," he said. For
example, in X, auxiliary features like pop-up windows and screensavers
are treated exactly like application windows: they grab keyboard and
pointer input and must be positioned precisely on screen. Unpleasant
side effects result, such as being unable to use the volume keys when
a screensaver is active, and being unable to trigger the screensaver
when a menu is open on the screen. With Wayland, in contrast, the
application tells the server that a frame is a pop-up and lets the
compositor decide how to handle it. Yes, he said, it is possible that
someone would write a bad compositor that would mishandle such a
pop-up—but that is true today as well. Window managers are also
complex today; the solution is to not run the bad ones.
Wayland also uses an event-driven model, which simplifies (among
other things) listening for input devices. Rather than asking the
server for a list of initial input devices which must be parsed (and
is treated separately from subsequent device notifications), clients
simply register for device notifications, and the Wayland server sends
the same type of message for existing devices as it does for any
subsequent hot-plugging events. Wayland also provides "proper object
lifetimes," which eliminates X11's fatal-by-default and
hard-to-work-around BadDevice errors. Finally, it side-steps
the problem that can occur when a toolkit (such as GTK+ or Clutter)
and an application support different versions of the XInput
extension. In X, the server gets only one report from the client
about which version is supported; whether that report reflects the
toolkit's version or the application's is essentially random. In Wayland, each component
registers and listens for events separately.
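A minimal sketch of what this looks like from the client side (assumed
example code, not taken from the talk): the single registry listener
below receives the same "global" event for objects that already exist
and for ones that appear later.

```c
/* A sketch (assumed client code): one registry listener receives the
 * same "global" announcement for objects present at connect time and
 * for objects (such as input seats) hot-plugged later. */
#include <stdio.h>
#include <wayland-client.h>

static void handle_global(void *data, struct wl_registry *registry,
                          uint32_t name, const char *interface,
                          uint32_t version)
{
    /* An existing input seat and a hot-plugged one both show up here. */
    printf("global %u: %s (version %u)\n", name, interface, version);
}

static void handle_global_remove(void *data, struct wl_registry *registry,
                                 uint32_t name)
{
    /* Clean object teardown instead of X11's fatal-by-default errors. */
    printf("global %u removed\n", name);
}

static const struct wl_registry_listener registry_listener = {
    .global = handle_global,
    .global_remove = handle_global_remove,
};

int main(void)
{
    struct wl_display *display = wl_display_connect(NULL);
    if (display == NULL)
        return 1;

    struct wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registry_listener, NULL);

    /* Block until the server has announced all existing globals. */
    wl_display_roundtrip(display);

    wl_display_disconnect(display);
    return 0;
}
```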
Go Weston
Stone capped off the session with a discussion about Weston, the
reference implementation of a Wayland server, its state of readiness,
and some further work still in the pipeline. Weston is reference
code, he explained. Thus it has plugin-based "shells" for common
desktop features like docks and panels, and it supports existing X
application clients. It offers a variety of output and rendering
choices, including fbdev and Pixman, which he pointed out to refute
the misconception that Wayland requires OpenGL. It also supports
hardware video overlays, which he said will be of higher quality than
the X implementation.
The GNOME compositor Mutter has an out-of-date port to Wayland, he
continued, making it, like Weston, in essence a hybrid X/Wayland
compositor. GNOME Shell used to run on Mutter's Wayland
implementation, he said, or at least "someone demoed it once in
July ... so it's ready for the enterprise." In fact, Stone is
supposed to bring the GNOME Shell code up to date, but he has not yet
had time. There are implementations for GTK+, Clutter, and Qt all in
upstream git, and there is a GStreamer waylandvideosink
element, although it needs further work. In reply to a question from
the audience, Stone also commented that Weston's touchpad driver is
still incomplete, lacking support for acceleration and scrolling.
Last but clearly not least, Stone addressed the state of Wayland
support for remoting. X11's lousy implementation of IPC, he said, in
which it acts as a middleman between the client and compositor, hits
its worst-case performance when being run over the Internet.
Furthermore, the two rendering modes every application uses (SHM and
DRI2) do not work over the network anyway. The hypothetical "best"
way to implement remoting support, he explained, would be for the
client application to talk to the local compositor only, and have that
compositor speak to the remote compositor, employing image compression
to save bandwidth. That, he said, is precisely what VNC does, and it
is indeed better than X11's remote support. Consequently, Wayland
developer Kristian Høgsberg has been experimenting with implementing
this VNC-like remoting support in Weston, which has its own branch
interested parties can test. "We think it's going to be better
at remoting than X," Stone said, or at least it cannot be worse
than X.
For end users, it will still be a while before Wayland is usable on
Linux desktops outside of experimental circumstances. The protocol
was declared 1.0 in October 2012, as
was Weston, but Weston is still a reference implementation (lacking
features, as Stone described in his talk). It may be a very long time
before applications are ported from X11 to Wayland, but by providing a
feature-by-feature comparison of Wayland's benefits over X, Stone has
crafted a good sales pitch for both application developers and end
users.