The GTK+ application toolkit is most closely associated with the
GNOME desktop, but it is used by a variety of non-GNOME environments
and applications as well. It even runs on non-Linux operating systems.
That level of diversity has at times fostered an unease about the
nature and direction of GTK+: is it a GNOME-only technology, or is it
a system-neutral tool with GNOME as its largest consumer? The subject
came up in several talks at GUADEC 2013, mingled in with other
discussions of the toolkit's immediate and long-term direction.
GTK+ 3.10
Matthias Clasen delivered a talk on the new features that are set
to debut in GTK+ 3.10 later this year, and he did so with an unusual
approach. Rather than build a deck of slides outlining the new
widgets and properties of the 3.10 release—slides which would
live in isolation from the canonical developer's
documentation—he wrote a tutorial in the GNOME
documentation and provided code samples in the GTK+
source repository.
The tutorial walked through the process of creating a "GNOME 3–style"
application using several of the newer classes, widgets, and support
frameworks. Clasen's example application was a straightforward
text-file viewer (essentially, it could open files and count the number
of lines in each), but it integrated with GNOME 3's session bus, newer
APIs (such as the global "App Menu"), the GSettings configuration
framework, animated transition effects, and more.
Many of the widgets shown in the tutorial are new for 3.10, and up until now have
only been seen by users in previews of the new "core" GNOME
applications, like Clocks,
Maps,
or Web.
These new applications tend to be design-driven utilities, focusing on
presenting information (usually just one type of information) simply,
without much use for the hierarchical menus and grids of buttons one
might see in a complex editor.
But the stripped-down design approach has given rise to several new
user interface widgets, such as GtkHeaderBar,
the tall title-bar strip with centered content that is visible in all of
the new core applications. Also new is the strip of toggle-buttons
that lets
the user switch between documents and views. These toggle-buttons are
a GtkStackSwitcher,
which is akin to the document tabs common in older applications. The
switcher is bound to a GtkStack,
which is a container widget that can hold multiple child widgets, but
shows only one at a time. In his example, Clasen showed how to open
multiple files, each as a child of the GtkStack. The
GtkStack–GtkStackSwitcher pair is not all that different from the
tabbed GtkNotebook of earlier GTK+ releases, but it has far fewer
properties to manage, and it can take advantage of new animated
transitions when switching between children. Clasen showed sliding
and crossfading transitions, and commented that they were made
possible by Owen Taylor's work on frame
synchronization.
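To give a flavor of how these pieces fit together, here is a minimal sketch (not code from Clasen's tutorial; the titles and variable names are invented for the example) of a GTK+ 3.10 window that places a GtkStackSwitcher in a GtkHeaderBar and binds it to a GtkStack with an animated transition:

    #include <gtk/gtk.h>

    int
    main (int argc, char *argv[])
    {
      gtk_init (&argc, &argv);

      GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);

      /* The header bar replaces the traditional title bar. */
      GtkWidget *header = gtk_header_bar_new ();
      gtk_header_bar_set_show_close_button (GTK_HEADER_BAR (header), TRUE);
      gtk_window_set_titlebar (GTK_WINDOW (window), header);

      /* A stack holds several children but shows only one at a time,
         with an animated transition when switching between them. */
      GtkWidget *stack = gtk_stack_new ();
      gtk_stack_set_transition_type (GTK_STACK (stack),
                                     GTK_STACK_TRANSITION_TYPE_SLIDE_LEFT_RIGHT);
      gtk_stack_add_titled (GTK_STACK (stack),
                            gtk_label_new ("First page"), "one", "One");
      gtk_stack_add_titled (GTK_STACK (stack),
                            gtk_label_new ("Second page"), "two", "Two");

      /* The switcher shows one toggle button per stack child. */
      GtkWidget *switcher = gtk_stack_switcher_new ();
      gtk_stack_switcher_set_stack (GTK_STACK_SWITCHER (switcher),
                                    GTK_STACK (stack));
      gtk_header_bar_set_custom_title (GTK_HEADER_BAR (header), switcher);

      gtk_container_add (GTK_CONTAINER (window), stack);
      gtk_widget_show_all (window);
      gtk_main ();
      return 0;
    }

Switching pages with the toggle buttons then animates automatically; no per-page bookkeeping is needed beyond adding each child with a name and a title.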
Clasen also showed the new GtkSearchBar
widget, which implements a pre-fabricated search tool that drops down
from the GtkHeaderBar, making it simpler to add search functionality to
an application (and more uniform across the spectrum of GTK+
applications). He then added a sidebar to his example, essentially just
to show off the other two new widgets, GtkRevealer and GtkListBox.
GtkRevealer is the animated container that slides (or fades) in to
show the sidebar, while GtkListBox is the sortable list container
widget.
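GtkRevealer works in the same spirit: it wraps a single child and animates it into or out of view whenever its "reveal-child" property is toggled. A small sketch, again with invented names rather than Clasen's code, might look like this:

    #include <gtk/gtk.h>

    /* Wrap a sidebar (for instance, a GtkListBox) in a GtkRevealer so it
       can slide into view; the caller flips "reveal-child" to show or
       hide it, typically from a toggle-button handler. */
    static GtkWidget *
    wrap_sidebar (GtkWidget *sidebar_contents)
    {
      GtkWidget *revealer = gtk_revealer_new ();
      gtk_revealer_set_transition_type (GTK_REVEALER (revealer),
                                        GTK_REVEALER_TRANSITION_TYPE_SLIDE_RIGHT);
      gtk_container_add (GTK_CONTAINER (revealer), sidebar_contents);
      return revealer;
    }

A call to gtk_revealer_set_reveal_child() with TRUE or FALSE then triggers the slide (or fade) animation.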
The talk was not all widgets, though; Clasen also demonstrated how
the GSettings preferences system works, setting font parameters for
his example application, then illustrating how they could be changed
from within the UI of the application itself, or with the
gsettings command-line tool. He also showed how
glib-compile-resources can be used to bundle application
resources (such as icons and auxiliary files) into the binary, and how
GtkBuilder templates can simplify the creation of user interfaces.
All in all, the application he created from scratch was a simple one,
but it was well-integrated with GNOME's latest features and, he said,
only about 500 lines of C in total, with an additional 200 lines (of
XML) describing the user interface.
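The GSettings portion of that integration is mostly a matter of binding keys to object properties. As a rough sketch (the schema ID and key below are hypothetical, not the ones from the tutorial), an application might bind a "font" key to a font chooser button and let GSettings keep the two in sync:

    #include <gtk/gtk.h>

    /* Bind a hypothetical "font" key to a GtkFontButton's "font-name"
       property; changes flow in both directions automatically. */
    static void
    bind_font_setting (GtkWidget *font_button)
    {
      GSettings *settings = g_settings_new ("org.example.TextViewer");

      g_settings_bind (settings, "font",
                       font_button, "font-name",
                       G_SETTINGS_BIND_DEFAULT);
    }

With that in place, running something like "gsettings set org.example.TextViewer font 'Monospace 12'" from a terminal changes the font in the running application, and changes made in the application's own UI are written back to the same key.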
What about Bob?
Clasen's talk brought application developers up to speed on the
latest additions to GTK+ itself, while two other sessions looked
further out, to the 3.12 development cycle and beyond. Emmanuele
Bassi is the maintainer of the Clutter toolkit, which is
used in conjunction with GTK+ by a few key projects, most notably
GNOME Shell and the Totem video player. His session dealt with the
recurring suggestions he hears from users and developers: either what
"Clutter 2.0" should do, or that Clutter should be merged into GTK+.
"This talk is less of a presentation, and more of an intervention" he
said.
Clutter uses OpenGL or OpenGL ES to render a scene graph;
interface elements are "actors" on the application's
ClutterStage. Actors can be easily (even implicitly)
animated, with full hardware acceleration. Clutter has been able to
embed GTK+ widgets, and GTK+ applications have been able to embed
Clutter stages, for several years. Nevertheless, as Bassi explained,
Clutter was never meant to be a generic toolkit, and he is not even
sure there should ever be a Clutter 2.0.
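For those who have not used Clutter, a minimal example (with invented sizes and colors, not drawn from any particular application) shows the flavor of the actor model and its implicitly animated properties:

    #include <clutter/clutter.h>

    int
    main (int argc, char *argv[])
    {
      if (clutter_init (&argc, &argv) != CLUTTER_INIT_SUCCESS)
        return 1;

      /* The stage is the top-level actor that everything else lives in. */
      ClutterActor *stage = clutter_stage_new ();
      clutter_actor_set_size (stage, 400, 300);
      g_signal_connect (stage, "destroy", G_CALLBACK (clutter_main_quit), NULL);

      /* A plain rectangular actor with a solid background color. */
      ClutterColor red = { 0xff, 0x00, 0x00, 0xff };
      ClutterActor *box = clutter_actor_new ();
      clutter_actor_set_size (box, 50, 50);
      clutter_actor_set_background_color (box, &red);
      clutter_actor_add_child (stage, box);

      /* Property changes made inside an easing state are animated
         implicitly, with hardware acceleration. */
      clutter_actor_save_easing_state (box);
      clutter_actor_set_easing_duration (box, 1000);
      clutter_actor_set_position (box, 300, 200);
      clutter_actor_restore_easing_state (box);

      clutter_actor_show (stage);
      clutter_main ();
      return 0;
    }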
Originally, he said, Clutter was designed as the toolkit for a
full-screen media center application; it got adapted for other
purposes over its seven-year history, but most people who have used it
have ended up writing their own widget toolkit on top of Clutter
itself. That should tell you that it isn't done right, he said.
Today Clutter is really only used by GNOME Shell, he said. But
being "the Shell toolkit" is not a great position to be in, since
GNOME Shell moves so quickly and "plays fast and loose with the APIs."
There are two other Clutter users in GNOME, he added, but they use it
for specific reasons. Totem uses Clutter to display GStreamer videos
only "because putting a video on screen with GStreamer is such a
pain"—but that really just means that GStreamer needs to get its
act together. The virtual machine manager Boxes also uses Clutter, to
animate widgets.
So when it comes to Clutter's future, Bassi is not too interested
in creating a Clutter 2.0, because the current series already
implements all of the scene graph and animation features he wants it to (and the things it doesn't do yet would require
breaking the API). But the most common alternative
proposal—merging Clutter into GTK+—is not all that
appealing to him either. As he pointed out, other applications have implemented
their own widget toolkits on top of Clutter with little in the way of widespread success, using
libraries to "paper over" Clutter's problems. If you want to do
another, he said, "be my guest." At the same time, compositors like
GNOME Shell's Mutter have to "strip out a bunch of stuff" like the
layout engine. In addition, GTK+ already has its own layout, event
handling, and several other pieces that are duplicated in Clutter.
Offering both systems to developers would send a decidedly mixed
message.
Ultimately, though, Bassi does think that GTK+ needs to start
adding a scene graph library, which is the piece of Clutter that
everyone seems to want. But, he said, there is no reason he needs to
call it Clutter. "We can call it Bob," he suggested. But Bob needs
design work before it can be implemented, and he had several
suggestions to make. It should have some constraints, such as being
confined to GDK ("which sucks, but is still better than Qt," he
commented) as the backend. It should avoid input and event
handling, which do not belong in the scene graph. It should consist of 2D
offscreen surfaces blended in 3D space—using OpenGL "since
that's all we've got." It should not have a top-level actor (the
ClutterStage), since that was an implementation decision made
for purely historical reasons. And it must not introduce API breaks.
Considering those constraints separately, Bassi said, the scene
graph in Clutter is actually okay. Porting it over would require some
changes, but is possible. He has already started laying the
groundwork, he said, since the April 2013 GTK+ hackfest. He implemented a
GtkSceneGraph-3.0 library in GTK+, which passed its initial tests but
was not really doing anything. He also implemented the first steps of
adding OpenGL support to GDK: creating a GLContext and passing it down
to Cairo. There is much more to come, of course; several other core
GNOME developers had questions about Bassi's proposal, including how it
would impact scrolling, input events, custom widgets, and GTK+ support
on older GPU hardware. Bassi explains a bit more
on the GNOME wiki, but the project is certain to remain a hot topic
for some time to come.
Whose toolkit is it anyway?
Last but definitely not least, Benjamin Otte presented a session on
the long-term direction of GTK+, in particular the technical features
it needs to add and the all-important question of how it defines its
scope. That is, what kind of toolkit is GTK+ going to be? How will
it differ from Qt, HTML5, and all of the other toolkits?
On the technical front, he cited two features which are
repeatedly requested by developers: the scene graph built on Bassi's
work mentioned above, and gesture support. The scene
graph is important because GTK+'s current drawing functions make it
difficult to tell what element the cursor is over at any moment.
Making each GTK+ widget a Clutter-based actor would make that
determination trivial, and provide other features like making widgets
CSS-themable. Gesture support involves touch detection and gesture
recognition itself (i.e., defining a directional "swipe" that can be
bound to an action); Otte noted that GTK+'s existing input support is
essentially just XInput.
The bigger part of the talk was spent examining what Otte called
the "practical" questions: defining what GTK+ is meant to be and what
it is not. His points, he stated at the outset, do not
represent what he personally likes, but are the result of many
conversations with others. They already form the de facto
guidance for GTK+ development, he said; he was simply putting them out
there.
The first point is OS and backend support: which OSes will GTK+
support, and how well? The answer is that GTK+ is primarily intended
to be used on the GNOME desktop, using X11 as the backend. Obviously
it is transitioning to Wayland while supporting X11, which has forced
developers to work in a more abstract manner than they might have
otherwise. That makes this a good time for any
interested parties to write their own backends (say, for Android or
for something experimental). But the fact remains that in the absence of new developers, the
project will make sure that features work right on X11 and Wayland,
and will do its best to support them on other platforms. For
example, Taylor's frame synchronization was written natively for X11, and
the timer mechanism can only be approximated on Mac OS X, but it
should work well enough.
Similarly, he continued, GTK+ is targeting laptops as the device
form factor, with other form factors (such as phones, or development
boards without FPUs) often requiring some level of compromise.
Desktops are "laptop-like," he said, particularly when it comes to CPU
power and screen size.
Laptops also dictate that "keyboard and mouse" are the target input
devices. Touchscreen support will hopefully arrive in the future, he
said, but only as touchscreens become more common on laptops.
These decisions lead into the bigger question of whether GTK+ seeks
to be its own platform or to be a neutral, "integrated" toolkit. For
example, he said, should a GTK+ app running on KDE be expected to look
like a native KDE app? His answer was that GTK+ must focus on being
the toolkit of the GNOME platform first, and tackle integration
second. The project has tried to keep cross-platform compatibility,
he said. For example, the same menus will work in GNOME, Unity, and
KDE, but the primary target platform is GNOME.
Finally, he said, people ask whether GTK+ is focused on creating
"small apps" or "large applications," and his answer is "small apps."
In other words, GTK+ widgets are designed to make it easy and fast to
write small apps for GNOME: apps like Clocks, rather than GIMP
or Inkscape. The reality of it is, he said, that large applications
like GIMP, Inkscape, Firefox, and LibreOffice typically write large
numbers of custom widgets to suit their particular needs. If GTK+
tried to write a "docking toolbar" widget, the odds are that GIMP
developers would complain that it did not meet their needs, Inkscape
developers would complain that it did not meet their needs either, and
no one else would use it at all.
An audience member asked what Otte's definitions of "small" and
"large" are, to which he replied that it is obviously a spectrum and
not a bright line. As a general rule, he said, if the most
time-consuming part of porting an application to a different platform
is porting all of the dialog boxes, then it probably qualifies as
"large." Then again, he added, this is primarily a matter of
developer time: if a bunch of new volunteers showed up this year
wanting to extend GTK+ to improve the PiTiVi video editor, then a year
from now GTK+ would probably have all sorts of timeline widgets.
People often ask why they should port an application from GTK2 to
GTK3, Otte said. His answer historically was that GTK3 is awesome and
everyone should port, but he said he has begun to doubt that. The
truth is that GTK2 is stable and unchanging, even boring—but
that is what some projects need. He cited one project that targets
RHEL5 as its platform, which ships a very old version of GTK2.
Creating a GTK3 port would just cost them time, he said. The real
reason someone should port to GTK3 today, he concluded, is to take
advantage of the new features that integrate the application with
GNOME 3—but doing so means committing to keeping up with GNOME
3's pace of change, which is intentionally bold in introducing new features.
Eventually, he said, he hopes that GTK+ will reach a point where
the bold experiments are done. This will be after the scene graph and
gesture support, but it is hard to say when it will be. Afterward,
however, Otte hopes to make a GTK4 major release, removing all of the
deprecated APIs, and settling on a GTK2-like stable and unchanging
platform. The project is not there yet, he said, and notably it will
keep trying to be bold and add new things until application developers
"throw enough rocks" to convince them to stop. The rapidly-changing
nature of GTK3 is a headache for many developers, he said, but it has
to be balanced with those same developers' requests for new features
like gesture recognition and Clutter integration.
Otte's statements that GTK+ was a "GNOME first" project were a
frequent topic of debate during the rest of GUADEC. One audience
member even asked him during his talk whether this stance left out
other major GTK+-based projects like LXDE and Xfce. Otte replied that
he was not trying to keep those projects out; rather, since GNOME
developers do the majority of the GTK+ coding, their decisions push
GTK+ in their direction. If other projects want to influence GTK+, he
said, they need to "participate in GTK+ somehow," at the very least by
engaging with the development team to communicate what the projects want.
"What is GTK+" is an ongoing question, which is true of most free
software projects (particularly of libraries). There is no simple
answer, of course, but the frank discussion has benefits of its own,
for the project and for GTK+ developers. As the 3.10 releases of GTK+
and GNOME approach, it is at least clear that both projects are still
assessing how their work can prove useful to other application developers.
[The author wishes to thank the GNOME Foundation for assistance
with travel to GUADEC 2013.]
The seven deadly sins of software deployment
August 8, 2013
This article was contributed by Josh Berkus
Through me pass into the site of downtime,
Through me pass into eternal overtime
Through me pass and moan ye in fear
All updates abandon, ye who enter here.
A decade ago, software deployments were something you did fairly
infrequently; at most monthly, more commonly quarterly, or even
annually. As such, pushing new and updated software was not something
developers, operations (ops) staff, or database administrators (DBAs) got much practice with. Generally, a deployment was a major downtime event, requiring the kind of planning and personnel NASA takes to land a robot on Mars ... and with about as many missed attempts.
Not anymore. Now we deploy software weekly, daily, even
continuously. And that means that a software push needs to become a
non-event, notable only for the exceptional disaster. This means that
everyone on the development staff needs to become familiar with the deployment drill and their part in it. However, many developers and ops staff — including, on occasion, me — have been slow to adjust from the old way of deploying to the new.
That's why I presented "The Seven Deadly Sins of
Software Deployment [YouTube]" at OSCON Ignite on July 22. Each of the "sins" below is a chronic bad habit I've seen in practice, one that turns what should be a routine exercise into a periodic catastrophe. While a couple of the sins aren't an exact match to their medieval counterparts, they're still a good checklist for "am I doing this wrong?".
Sloth
Why do you need deployment scripts?
That's too much work to get done.
I'll just run the steps by hand,
I know I won't forget one.
And the same for change docs;
wherefore do you task me.
For info on how each step works,
when you need it you just ask me.
Scripting and documenting every step of a software deployment process
are, let's face it, a lot of work. It's extremely tempting to simply
"improvise" it, or just go from a small set of notes on a desktop
sticky. This works fine — until it doesn't.
Many people find out the hard way that nobody can remember a 13-step process in their head. Nor can they remember whether or not it's critical to the deployment that step four succeed, or whether step nine is supposed to return anything on success or not. If your code push needs to happen at 2:00AM in order to avoid customer traffic, it can be hard to even remember a three-step procedure.
There is no more common time for your home internet to fail, the VPN server to lose your key, or your pet to need an emergency veterinary visit than ten minutes before a nighttime software update. If the steps for the next deployment are well-scripted, well-documented, and checked into a common repository, one of your coworkers can just take it and run it. If not, well, you'll be up late two nights in a row after a very uncomfortable staff meeting.
Requiring full scripting and documentation has another benefit; it makes developers and staff think more about what they're doing during the deployment than they would otherwise. Has this been tested? Do we know how long the database update actually takes? Should the ActiveRecord update come before or after we patch Apache?
Greed
Buy cheap staging servers, no one will know:
they're not production, they can be slow.
They need not RAM, nor disks nor updates.
Ignore your QA; those greedy ingrates.
There are a surprising number of "agile" software shops out there that either lack staging servers entirely, or that use the old production servers from two or three generations ago. Sometimes these staging servers will have known, recurring hardware issues. Other times they will be so old, or so unmaintained, that they can't run the same OS version and libraries that are run in production.
In cases where "staging" means "developer laptops", there is no way to check for performance issues or for how long a change will take. Modifying a database column on an 8MB test database is a fundamentally different proposition from doing it on the 4 terabyte production database. Changes which cause new blocking actions between threads or processes also tend not to show up in developer tests.
Even when issues do show up during testing, nobody can tell for certain
if the issues are caused by the inadequate staging setup or by new
bugs. Eventually, QA staff start to habitually ignore certain kinds of
errors, especially performance problems, which makes doing QA at all an
exercise of dubious utility. Why bother to run response time tests if
you're going to ignore the results because the staging database is known to
be 20 times slower than production?
The ideal staging system is, of course, a full replica of your production setup. This isn't necessarily feasible for companies whose production includes dozens or hundreds of servers (or devices), but a scaled-down staging environment should be scaled down in an intelligent way that keeps performance at a known ratio to production. And definitely keep those staging servers running the exact same versions of your platform that production is running.
Yes, having a good staging setup is expensive; you're looking at spending at least ¼ as much as you spent on production, maybe as much. On the other hand, how expensive is unexpected downtime?
Gluttony
Install it! Update it! Do it ASAP!
I'll have Kernel upgrades,
a new shared lib or three,
a fat Python update
and four new applications!
And then for dessert:
Sixteen DB migrations.
If you work at the kind of organization where deployments happen relatively infrequently, or at least scheduled downtimes are once-in-a-blue-moon, there is an enormous temptation to "pile on" updates which have been waiting for weeks or months into one enormous deployment. The logic behind this usually is, "as long as the service is down for version 10.5, let's apply those kernel patches." This is inevitably a mistake.
As you add changes to a particular deployment, each one increases the chance that the deployment will fail somehow, both because each change has its own chance of failure and because layered application and system changes can break each other (for example, a Python update can cause an update to your Django application to fail due to API changes). Additional changes also make the deployment procedure itself more complicated, increasing the chance of an administrator or scripting error, and make it harder and more time-consuming to test all of the changes, in isolation and together. To make this into a rule:
The odds of deployment failure approach 100% as the number of distinct change sets approaches seven.
Obviously, the count of seven is somewhat dependent on your
infrastructure, nature of the application, and testing setup. However, even
if you have an extremely well-trained crew and an unmatched staging
platform, you're really not going to be able to tolerate many more distinct
changes to your production system before making failure all but certain.
Worse, if you have many separate "things" in your deployment, you've also made rollback longer and more difficult — and more likely to fail. This means, potentially, a serious catch-22, where you can't proceed because deployment is failing, and you can't roll back because rollback is failing. That's the start of a really long night.
The solution to this is to make deployments as small and as frequent as possible. The ideal change is only one item. While that goal is often unachievable, doing three separate deployments which change three things each is actually much easier than trying to change nine things in one. If the size of your update list is becoming unmanageable, you should think in terms of doing more deployments instead of larger ones.
Pride
Need I no tests, nor verification.
Behold my code! Kneel in adulation.
Rollback scripts are meant for lesser men;
my deployments perfect, as ever, again.
Possibly the most common critical deployment failure is when developers and administrators don't create a rollback procedure at all, let alone rollback scripts. A variety of excuses are given for this, including: "I don't have time", "it's such a small change", or "all tests passed and it looks good on staging". Writing rollback procedures and scripts is also a bald admission that your code might be faulty or that you might not have thought of everything, which is hard for anyone to admit to themselves.
Software deployments fail for all sorts of random reasons, up to and including sunspots and cosmic rays. One cannot plan for the unanticipated, by definition. So you should be ready for it to fail; you should plan for it to fail. Because when you're ready for something to fail, most of the time, it succeeds. Besides, the alternative is improvising a solution or calling an emergency staff meeting at midnight.
The rollback plan doesn't need to be complicated or comprehensive. If the
deployment is simple, the rollback may be as simple as a numbered list of
steps on a shared wiki page. There are two stages to planning to roll back properly:
- write a rollback procedure and/or scripts
- test that the rollback succeeds on staging
Many people forget to test their rollback procedure just like they test the original deployment. In fact, it's more important to test the rollback, because if it fails, you're out of other options.
Lust
On production servers,
These wretches had deployed
all of the most updated
platforms and tools they enjoyed:
new releases, alpha versions,
compiled from source.
No packages, no documentation,
and untested, of course.
The essence of successful software deployments is repeatability. When
you can run the exact same steps several times in a row on both development
and staging systems, you're in good shape for the actual deployment, and if it fails, you can roll back and try again. The cutting edge is the opposite of repeatability. If your deployment procedure includes "check out latest commit from git HEAD for library_dependency", then something has already gone wrong, and the chances of a successful deployment are very, very low.
This is why system administrators prefer known, mainstream packages and
are correct to do so, even though this often leads to battles with
developers. "But I need feature new_new_xyz, which is only in the
current beta!" is a whine which often precipitates a tumultuous staff
meeting. The developer only needs to make their stack work once (on their
laptop) and can take several days to make it work; the system administrator
or devops staff needs to make it work within minutes — several times.
In most cases, the developers don't really need the
latest-source-version of the platform software being updated, and this can be settled in the staff meeting or scrum. If they really do need it, then the best answer is usually to create your own packages and documentation internally for the exact version to be deployed in production. This seems like a lot of extra work, but if your organization isn't able to put in the time for it, it's probably not as important to get that most recent version as people thought.
Envy
I cannot stand meetings,
I will not do chat
my scripts all are perfect,
you can count on that.
I care only to keep clean my name
if my teammates fail,
then they'll take the blame.
In every enterprise, some staff members got into computers so that they wouldn't have to deal with other people. These antisocial folks will be a constant trial to your team management, especially around deployment time. They want to do their piece of the large job without helping, or even interacting with, anyone else on the team.
For a notable failed deployment at one company, we needed a network administrator to change some network settings as the first step of the deployment. The administrator did so: he logged in, changed the settings, logged back out, and told nobody what he'd done. He then went home. When it came time for step two, the devops staff could not contact the administrator, and nobody still online had the permissions to check whether the network settings had been changed. Accordingly, the whole deployment had to be rolled back and tried again the following week.
Many software deployment failures can be put down to poor communication between team members. The QA people don't know what things they're supposed to test. The DBA doesn't know to disable replication. The developers don't know that both features are being rolled out. Nobody knows how to check if things are working. This can cause a disastrously bad deployment even when every single step would have succeeded.
The answer to this is lots of communication. Make doubly sure that everyone knows what's going to happen during the deployment, who's going to do it, when they're going to do it, and how they'll know when they're done. Go over this in a meeting, follow it up with an email, and have everyone on chat or a VoIP conference during the deployment itself. You can work around your antisocial staff by giving them other ways to keep team members updated, such as wikis and status boards, but ultimately you need to impress on them how important coordination is. Or encourage them to switch to a job that doesn't require teamwork.
Wrath
When failed the deployment,
again and again and again they would try,
frantically debugging
each failing step on the fly.
They would not roll back,
but ground on all night,
"the very next time we run it
the upgrade will be all right."
I've seen (and been part of) teams which did everything else right. They scripted and documented, communicated and packaged, and had valid and working rollback scripts. Then, something unexpected went wrong in the middle of the deployment. The team had to make a decision whether to try to fix it, or to roll back; in the heat of the moment, they chose to press on. The next dawn found the devops staff still at work, trying to fix error after error, now so deep into ad-hoc patches that the rollback procedure wouldn't work if they tried to follow it. Generally, this is followed by several days of cleaning up the mess.
It's very easy to get sucked into the trap of "if I fix one more thing, I can go to bed and I don't have to do this over again tomorrow." As you get more and more into overtime, your ability to judge when you need to turn back gets worse and worse. Nobody can make a rational decision at two in the morning after a 15-hour day.
To fight this, Laura Thompson at Mozilla introduced the "three strikes" rule. This rule says: "If three or more things have gone wrong, roll back." While I was working with Mozilla, this saved us from bad decisions about fixing deployments on the fly at least twice; it was a clear rule which could be easily applied even by very tired staff. I recommend it.
Conclusion
To escape DevOps hell
avoid sin; keep to heart
these seven virtues
of an agile software art.
Just as the medieval seven deadly sins have seven virtues to counterbalance them, here are seven rules for successful software deployments:
- Diligence: write change scripts and documentation
- Benevolence: get a good staging environment
- Temperance: make small deployments
- Humility: write rollback procedures
- Purity: use stable platforms
- Compassion: communicate often
- Patience: know when to roll back
You can do daily, or even "continuous", deployments if you
develop good practices and stick to them. While not the totality of what
you need to do for more rapid, reliable, and trouble-free updates and
pushes, following the seven rules of good practice will help you avoid some of the common pitfalls which turn routine deployments into hellish nights.
For more information, see the video of my "The Seven Deadly Sins of
Software Deployment" talk, the slides
[PDF],
and verses. See
also the slides
[PDF] from Laura Thompson's excellent talk "Practicing Deployment", and
Selena Deckelmann's related talk: Mistakes Were Made [YouTube].