On the first day of Akademy 2013,
Marco Martin gave a status report on the Plasma 2 project.
Plasma is the umbrella term for KDE's user experience layer, which
encompasses the window manager (KWin) and desktop shell.
In his talk, Martin looked at where things stand today and where they are headed.
Martin began by noting that much of the recent planning for the next few
years of
Plasma development was done at a meeting in Nuremberg earlier this year. His
talk was focused on reporting on those plans, but also explaining which
parts had been implemented and what still remains to be done.
Plasma today
The existing Plasma is a library and five different shells that are
targeted at specific kinds of devices (netbook, tablet, media center,
desktop, and KPart—which is used by KDevelop for its dashboard, but is
not exactly a "device"). Plasma is not meant to be a "one size fits all"
model, but to be customized for different devices as well as for different
types of users.
It is "very easy to build very different-looking desktop interfaces" with
Plasma, by assembling various plugins (called "plasmoids") into the
interface. He counted 71 plasmoids available in the latest KDE Software
Compilation (SC) and there are many more in other places.
As far as features go, "we are pretty happy right now" with Plasma. After
the 4.11 KDE SC release, feature
development for Plasma 1 will cease and only bug fixes will be made
for the next several years. That will be a good opportunity to improve the
quality of Plasma 1, he said.
Plasma tomorrow
Though the team is happy with the current feature set, that doesn't mean
that it is time to "go home" as there are many ways to improve Plasma for
the future, Martin said. More flexibility to make it easier for
third parties to create their own plasmoids and user experiences is one
area for improvement. Doing more of what has been done right—while
fixing things that haven't been done right—is the overall idea. But there
is a "big elephant in the room"—in fact, there are four of them.
The elephants are big changes to the underlying technology that need
to be addressed by Plasma 2: Qt 5, QML 2, KDE
Frameworks 5, and Wayland.
All of the elephants are technical, "which means fun", he said. Of the
four, the switch to
QML 2 will require the most work. Wayland requires quite a bit of
work in KWin to adapt to the new display server, but the QML switch is the
largest piece. QML is the JavaScript-based language that can be used to
develop Qt-based user interface elements.
Given that everything runs in QML 1 just fine, he said, why switch to
QML 2? To start with, QML 2 has support for more modern
hardware. In addition, it has a better JavaScript engine and can use C++
code without requiring plugins. Beyond that, though, QML 1 is "on
life support" and all of the development effort is going into QML 2. There is
also a "promising ecosystem" of third-party plugins that can be imported
into QML 2 code,
which means a bigger toolbox is available.
Another change will be to slim down the libplasma library by moving all
of the user-interface-related features to other components. That is how it
should have been
from the beginning, Martin said. What's left is a logical description of
where the graphics are on the screen, the asynchronous data engines,
runners, and services, and the logic for loading the shell. All of the
QML-related code ends up in the shell. That results in a libplasma
that went from roughly 3M in size to around 700K.
One shell to rule them all
Currently, there are separate executables for each shell, but that won't be
the case for Plasma 2. The shell executable will instead just have
code to load
the user interface from QML files. So none of the shell will be in C++; it will be a pure runtime environment loaded from two new kinds of packages: "shell" and "look and feel". The shell package will describe the
activity switcher, the "chrome" in the view (backgrounds, animations,
etc.), and the configuration interface for the desktop and panels.
The look and feel package defines most of the functionality the user
regularly interacts with including the login manager, lock and logout
screens, user switching, desktop switching, Alt+Tab, window decorations,
and so on. Most of those are not managed by the shell directly, but that
doesn't really matter to the user as it is all "workspace" to them. All of
those user interface elements should have a consistent look and feel that
can be changed through themes.
Different devices or distributions will have their own customized shell and
look and feel packages to provide different user experiences. All
of that will be possible without changing any of the C++ code. In
addition, those packages can be changed on the fly to switch to a different user
experience. For example, when a tablet is plugged into a docking station,
the tablet interface could shut down and start a desktop that is geared toward
mouse and keyboard use. What that means for the applications and plasmoids
running at the time of the switch is up in the air, Martin said in response
to a question from the audience.
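To make the idea concrete, here is a minimal sketch of how such a form-factor-driven switch might work. It is purely illustrative Python (Plasma itself is C++ and QML), and the package names and functions are hypothetical, not Plasma's actual API:

    # Illustrative sketch only: these package names and functions are
    # hypothetical, not Plasma's actual API.

    def select_shell_package(docked):
        """Pick a user-experience package for the current form factor."""
        if docked:
            return "org.example.shell.desktop"  # mouse-and-keyboard interface
        return "org.example.shell.tablet"       # touch-oriented interface

    def on_dock_changed(shell, docked):
        # Tear down the running interface and load the one matching the
        # new form factor; what happens to running plasmoids is, as
        # Martin said, still an open question.
        shell.unload_current_package()
        shell.load_package(select_shell_package(docked))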
Current status
So far, the team has gotten a basic shell running that uses Qt 5,
QML 2, and Frameworks 5. The libplasma restructuring is nearly
done, so the library is smaller and more manageable. Some QML 2
plasmoids, containments, and shell packages have been started, but the
existing Plasma 1 code will need to be ported. For pieces written in
QML, the port will not require much work, but those written in C++ will
need some work to port them to Plasma 2. Martin summed it up by
saying that the "ground work is done", but there is still plenty of work to
do.
[Thanks to KDE e.V. for travel assistance to Bilbao for Akademy.]
By Nathan Willis
July 17, 2013
Releasing early and often has its drawbacks, even for those who
dearly love free software. One of those drawbacks is the tiresome
and often thankless duty of packaging up the releases and pushing them
out to users. The more frequently one does this, the greater the
temptation can be to gloss over some of the tedium, such as entering
detailed or informative descriptions of what has changed. Recently,
Fedora discussed that very topic, looking for a way to improve the
information content of RPM package updates.
Michael Catanzaro raised the subject on the fedora-devel
list in late June, asking that package maintainers make an effort to
write more meaningful descriptions of changes when they roll out
updates. Too many updates, he said, arrive with no description beyond
"update to version x.y.z" or, worse, the placeholder text "Here is
where you give an explanation of your update." Since the update
descriptions in RPM packages are written for the benefit of end users
(as opposed to the upstream changelog, which may be read only by
developers), the goal is for the description to explain the purpose of
the update, if not to actually go into detail. Instances such as the
ones Catanzaro cited are not the norm, of course, and presumably no
packager intends to be unhelpful. The trick is figuring out how to
drive the community of volunteers who publish updates in the right direction.
Copy-on-package
Not everyone perceives there to be a problem, of course. Till Maas
disagreed that terse update descriptions are harmful, suggesting, for example, that updates
that fix bugs are already informative enough if the bug fixed is
clearly indicated in the "bugs" field. But Adam Williamson responded that even in such simple cases,
the update description ought to point the end user in the right
direction:
"This update simply fixes the bugs listed" is an okay description - it
tells the reader what they need to know and re-assures them that the
update doesn't do anything *else*. Of course, if it does, you need to
explain that: "This update includes a new upstream release which fixes
the bugs listed. You can find other changes in the upstream
description at http://www.blahblah.foo".
Richard Jones argued that the current
tool support is inadequate, which forces people to duplicate change
messages in multiple
places, from the source repository to RPM package files to the
update description field in Bodhi, Fedora's
update-publishing tool. "In short my point is: don't moan
about bad update messages when the problem is our software
sucks," Jones said. When asked what the software should do,
Jones proposed that RPM could be
pointed toward the upstream changelog and release notes:
%changelog -f <changelog_file>
%changelog -g <git_repo>
%release_notes -f <release_notes_file>
The subsequent tools in the update-release process could simply
extract the information from RPM. Björn Persson challenged that proposal as unworkable,
however, saying that attempting to extract changelog information
automatically would require adding flags for Subversion, CVS,
Monotone, Mercurial, Arch, Bazaar, and every other revision control
system. Furthermore, automatically parsing the
release_notes_file is hardly possible either, given that it
can be in any format and any language.
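The git case that Jones's proposed %changelog -g flag points at is, at least, easy to sketch. The following Python is an illustration of the idea, not proposed RPM code; the function name is made up, and, as Persson notes, every other revision control system would need its own equivalent:

    import subprocess

    def changelog_from_git(repo, old_tag, new_tag):
        """Return one-line commit summaries between two release tags."""
        result = subprocess.run(
            ["git", "-C", repo, "log", "--oneline",
             "{}..{}".format(old_tag, new_tag)],
            capture_output=True, text=True, check=True)
        return result.stdout.splitlines()

    # For example: changelog_from_git("/src/foo", "v1.2.3", "v1.2.4")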
Later, Sandro Mani proposed a
somewhat more complex method for automatically filling the description
field: pulling in the upstream changelog URL for updates that are
derived from upstream releases, and pre-populating the description
with bug numbers if the "bugs" field is non-empty. That suggestion was met with no discussion, perhaps because it would often result in a slightly longer (although hopefully more descriptive) placeholder.
Details, details, details
But Williamson and others also took issue with Jones's original
premise, that changelog information makes for suitable update descriptions in the
first place. After all, the argument goes, the description is in
addition to the "bugs" field and other more technical metadata; its
purpose is to be displayed to the user in the software update tool.
Catanzaro asked for "some
minimal level of quality to what we present to users." That
statement might suggest a set of guidelines, but the discussion
quickly turned to how Bodhi could be modified to catch unhelpful
update descriptions and discourage them.
As T.C. Hollingsworth noted, Bodhi
has two interfaces: web-based and command line. But while the
command-line interface will complain if the update description is left
blank, the web front end automatically inserts the placeholder text,
so Bodhi does not see a blank field, and thus does not complain.
Williamson commented that Bodhi
should reject the placeholder text, too. But either way, Bodhi cannot
fully make up for the human factor. Michael Schwendt pointed out that
no matter what rules are in place, a packager who wants to "cheat"
will cheat. He then cited a long list
of (hopefully intentionally) humorous update descriptions, such as
"This is one of the strong, silent updates" and
"Seriously, if I tell you what this update does, where is the
surprise?"
Williamson had also suggested that other Fedora project members
could vote down an update with an empty or meaningless description
field, using Bodhi's "karma" feature. But the tricky part of that
idea is that karma is currently used as a catch-all for all problems,
including far more serious issues like an update not actually
fixing the bug it claims to. Simply subtracting karma points does not
communicate the specific issue. On top of that, the way karma is
implemented, an update can still get pushed out if it has a sufficient
positive karma score—which it presumably would if enough people
vote for it without considering an unhelpful update description to be
problematic.
The only real solution, then, might be one that works (at least in
part) by changing the community's expected behavior. That is often
the nature of solutions in open source community projects, but it is
usually a slow course to pursue. Catanzaro originally asked if a set
of guidelines should be written, before the conversation shifted to
implementing changes in the packaging software itself. On the plus side, as Panu Matilainen observed, there are other projects that
have achieved an admirable measure of success. The Mageia and
Mandriva distributions, for example, have guidelines
in place for update descriptions, in addition to pulling in some
information from changelogs.
Then again, since the ultimate goal
of update descriptions is to communicate important information to the
end user, it may be better to ask someone other than packagers to look
at the description fields. Ryan Lerch suggested granting write access to the
update descriptions to others—namely the documentation team.
In a sense, update descriptions are akin to release notes in
miniature, and release notes are a perpetual challenge for many
software projects. They come at the end of long periods of
development, merging, and testing, so they can feel like extra work
that provides minimal added value. But as Catanzaro said in his
original email, poor update descriptions can blemish a project's
otherwise professional-looking image. More so, perhaps, if they
continue to arrive with every additional update.
By Nathan Willis
July 17, 2013
In the never-ending drive to increase the perceived speed
of the Internet, improving protocol efficiency is
considerably easier than rolling out faster cabling. Google
is indeed setting up fiber-optic networks in a handful of cities,
but most users are likely to see gains from the company's protocol
experimentation, such as the recently-announced QUIC. QUIC stands for "Quick UDP Internet
Connection." Like SPDY before
it, it is a Google-developed extension of an existing protocol designed
to reduce latency. But while SPDY worked at the application layer
(modifying HTTP by multiplexing multiple requests over one
connection), QUIC works at the transport layer. As the name
suggests, it is built on top of UDP, but that does not tell
the whole story. In fact, it is more accurate to think of QUIC as a
replacement for TCP. It is intended to optimize connection-oriented
Internet applications, such as those that currently use TCP, but in
order to do so it needs to sidestep the existing TCP stack.
A June post on the Chromium development blog outlines the
design goals for QUIC, starting with a reduction in the number of round
trips required to establish a connection. The speed of light being
constant, the blog author notes, round trip times (RTTs) are
essentially fixed; the only way to decrease the impact of round trips
on connection latency is to make fewer of them. However, that turns
out to be difficult to do within TCP itself, and TCP implementations
are generally provided by the operating system, which makes
experimenting with them on real users' machines difficult anyway.
In addition to side-stepping the problems of physics, QUIC is designed to
address a number of pain
points uncovered in the implementation of SPDY (which ran over TCP).
A detailed design document goes into the specifics.
First, the delay of a single TCP packet introduces "head of line"
blocking in TCP, which undercuts the benefits of SPDY's
application-level multiplexing by holding up all of the
multiplexed streams. Second, TCP's congestion-handling throttles
back the entire TCP connection when there is a lost packet—again,
punishing multiple streams in the application layer
above.
There are also two issues that stem from running SSL/TLS over TCP:
resuming a disconnected session introduces an extra handshake due
solely to the protocol design (i.e., not for security reasons, such as
issuing new credentials), and the decryption
of packets historically needed to be performed in order (which can
magnify the effects of a delayed packet). The design document notes
that the in-order decryption problem has been largely solved in
subsequent revisions, but at the cost of additional bytes per packet.
QUIC is designed to implement TLS-like encryption in the same protocol
as the transport, thus reducing the overhead of layering TLS over TCP.
Some of these specific issues have been addressed
before—including by Google engineers. For example, TCP Fast
Open (TFO) reduces round trips when
re-connecting to a previously visited server, as does TLS Snap
Start. In that sense, QUIC aggregates these approaches and rolls
in several new ones, although one reason for doing so is the project's
emphasis on a specific use case: TLS-encrypted connections carrying multiple streams to and from a single server, as is typical when using a web application service.
The QUIC team's approach has been to build connection-oriented
features on top of UDP, testing the result between QUIC-enabled
Chromium builds and a set of (unnamed) Google servers, plus some
publicly available server test tools. The specifics
of the protocol are still subject to change, but Google promises to
publish its results if it finds techniques that result in clear
performance improvements.
QUIC trip
Like SPDY, QUIC multiplexes several streams between the
same client-server pair over a single connection—thus reducing
the connection setup costs, transmission of redundant information, and
overhead of maintaining separate sockets and ports. But much of the
work on QUIC is focused on reducing the round trips required when
establishing a new connection, including the handshake step,
encryption setup, and initial requests for data.
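The multiplexing itself is conceptually simple. The sketch below shows the general idea of routing frames to logical streams by a stream ID; it is an illustration only, not QUIC's actual framing:

    from collections import defaultdict

    class Demultiplexer:
        """Route frames arriving on one connection to logical streams.

        Conceptual illustration only; QUIC's real framing differs.
        """
        def __init__(self):
            self.streams = defaultdict(bytearray)

        def on_frame(self, stream_id, payload):
            # A lost datagram stalls only the stream(s) whose frames it
            # carried, not every stream sharing the connection.
            self.streams[stream_id].extend(payload)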
QUIC cuts into the round-trip count in several ways. First,
when a client initiates a connection, it includes session negotiation
information in the initial packet. Servers can publish a static
configuration file to host some of this information (such as
encryption algorithms supported) for access by all clients, while
individual clients provide some of it on their own (such as an initial
public encryption key). Since the lifetime of the server's static
configuration ought to be very long, fetching it costs only a single round trip over many weeks or months of browsing. Second, when servers respond to an initial connection
request, they send back a server certificate, hashes of a
certificate chain for the client to verify, and a synchronization
cookie. In the best-case scenario, the client can check the validity
of the server certificate and start sending data
immediately—with only one round-trip expended.
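In rough outline, the exchange looks something like the sketch below. The field names are illustrative, not QUIC's wire format:

    from dataclasses import dataclass

    @dataclass
    class ClientHello:
        # Much of this can be known in advance from the server's published
        # static configuration, letting the client send useful data in its
        # very first packet.
        supported_algorithms: list
        client_public_key: bytes

    @dataclass
    class ServerHello:
        certificate: bytes   # checked by the client before sending more data
        chain_hashes: list   # hashes of the certificate chain to verify
        sync_cookie: bytes   # presented later to prove the client's address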
Where the savings really come into play, however, are on subsequent
connections to the same server. For repeat connections within a
reasonable time frame, the client can assume that the same server
certificate will still be valid. The server, however, needs a bit
more proof that the computer attempting to reconnect is indeed the
same client as before, not an attacker attempting a replay. The
client proves its identity by returning the synchronization cookie
that the server sent during the initial setup. Again, in the
best-case scenario, the client can begin sending data immediately
without waiting a round trip (or three) to re-establish the connection.
As of now, the exact makeup of this cookie is not set in stone. It
functions much like the cookie in TFO, which was also designed at
Google. The cookie's contents are opaque to the client, but the
documentation suggests that it should at least include proof
that the cookie-holder came from a particular IP address and port at a
given time. The server-side logic for cookie lifetimes, and for when to reject or revoke a connection, is not mandated. The goal is that by
including the cookie in subsequent messages, the client demonstrates
its identity to
the server without additional authentication steps. In the event that
the authentication fails, the system can always fall back to the
initial-connection steps. An explicit goal of the protocol design is
to better support mobile clients, whose IP addresses may change
frequently; even if the zero-round-trip repeat connection does not
succeed every time, it still beats initiating both a new TCP and a new
TLS connection on each reconnect.
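The design document leaves the cookie's construction to the server, but one plausible construction (an assumption here, not QUIC's specified format) is an authenticated hash binding the client's address, port, and a timestamp:

    import hashlib, hmac, struct, time

    SECRET = b"server-side secret, rotated periodically"

    def mint_cookie(ip, port):
        """Bind the client's address, port, and current time into a token."""
        payload = ip.encode() + struct.pack("!HQ", port, int(time.time()))
        tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
        return payload + tag

    def check_cookie(cookie, ip, port, max_age=3600):
        """Accept only a recent, untampered token minted for this address."""
        payload, tag = cookie[:-32], cookie[-32:]
        expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        sent_port, minted = struct.unpack("!HQ", payload[-10:])
        return (payload[:-10] == ip.encode() and sent_port == port
                and time.time() - minted < max_age)

A mobile client whose address has changed would simply fail such a check and fall back to the full initial handshake, as described above.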
Packets and loss
In addition to its rapid-connection-establishment goals, QUIC
implements some mechanisms to cut down on retransmissions. First, the
protocol adds packet-level forward-error-correcting (FEC) codes to the
unused bytes at the end of streams. Lost data retransmission is the
fallback, but the redundant data in the FEC should make it possible to
reconstruct lost packets at least a portion of the time. The design
document discusses using the bitwise sum of a block of packets as the
FEC; the assumption is that a single-packet loss is the most common,
and this FEC would allow not only the detection but also the reconstruction of such a lost packet.
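That scheme is easy to demonstrate. In the sketch below (illustrative Python, assuming equal-length packets), the FEC packet is the XOR of the group, so XORing it with the survivors rebuilds a single missing packet:

    def xor_packets(packets):
        """Bitwise XOR of equal-length packets."""
        out = bytearray(len(packets[0]))
        for pkt in packets:
            for i, byte in enumerate(pkt):
                out[i] ^= byte
        return bytes(out)

    p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"
    fec = xor_packets([p1, p2, p3])   # sent alongside the data packets

    # Suppose p2 is lost in transit; survivors plus the FEC recover it.
    assert xor_packets([p1, p3, fec]) == p2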
Second, QUIC has a set of techniques under review to avoid
congestion. By comparison, TCP employs a single technique, congestion
windows, which (as mentioned previously) are unforgiving to
multiplexed connections. Among the techniques being tested are packet
pacing and proactive speculative retransmission.
Packet pacing, quite
simply, is scheduling packets to be sent at regular intervals.
Efficient pacing requires an ongoing bandwidth estimation, but when it
is done right, the QUIC team believes that pacing improves resistance
to packet loss caused by intermediate congestion points (such as
routers). Proactive speculative retransmission amounts to sending
duplicate copies of the most important packets, such as the initial
encryption negotiation packets and the FEC packets. Losing either of
these packet types triggers a snowball effect, so selectively
duplicating them can serve as insurance.
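Pacing, in its simplest form, is just a send loop with deliberate gaps sized from a bandwidth estimate. A simplified sketch follows; the estimator itself is not shown, and real pacing would adapt it continuously:

    import time

    def paced_send(sock, addr, packets, est_bandwidth):
        """Send UDP packets at a steady cadence instead of in one burst.

        est_bandwidth is an estimated rate in bytes per second, supplied
        by some ongoing measurement not shown here.
        """
        for pkt in packets:
            sock.sendto(pkt, addr)
            time.sleep(len(pkt) / est_bandwidth)  # gap proportional to size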
But QUIC is designed to be flexible when it comes to congestion
control. In part, the team appears to be testing out several
good-sounding ideas to see how well they fare in real-world
conditions. It is also helpful for the protocol to be able to adapt
in the future, when new techniques or combinations of techniques prove
themselves.
QUIC is still very much a work in progress. Then again, it can
afford to be. SPDY eventually evolved into HTTP 2.0; QUIC's team, in contrast, is up front about the fact that the ideas it implements, if proven successful, would ultimately be destined for
inclusion in some future revision of TCP. Building the system on UDP
is a purely practical compromise: it allows QUIC's
connection-management concepts to be tested on a protocol that is
already understood and accepted by the Internet's routing
infrastructure. Building an entirely new connection-layer protocol
would be almost impossible to test, but piggybacking on UDP at least
provides a start.
The project addresses several salient questions in its FAQ,
including the speculation that QUIC's goals might have been easily met
by running SCTP (Stream Control Transmission Protocol) over DTLS
(Datagram Transport Layer Security). SCTP provides the desired
multiplexing, while DTLS provides the encryption and authentication.
The official answer is that SCTP and DTLS both utilize the old,
round-trip–heavy semantics that QUIC is interested in dispensing
with. It is possible that other results from the QUIC experiment will
make it into later revisions, but without this key feature, the team
evidently felt it would not learn what it wanted to. However, as the
design document notes: "The eventual protocol may likely
strongly resemble SCTP, using encryption strongly resembling DTLS,
running atop UDP."
The "experimental" nature of QUIC makes it difficult to predict
what outcome will eventually result. For a core Internet protocol, it
is a bit unusual for a single company to guide development in house
and deploy it in the wild,
but then again, Google is in a unique position to do so with
real-world testing as part of the equation: the company both runs
web servers and produces a web browser client. So long as the testing
and the eventual result are open, that approach certainly has its
advantages over years of committee-driven debate.