One of the more frustrating things to try and figure out on Linux systems
is how much memory is actually being used by a process. The ps
command offers something of a view into memory usage, but adding up the
numbers for various types of memory never yields a sensible result. It is
against this backdrop that Matt Mackall presented his smem tool at this
year's Embedded Linux Conference.
There is an "accounting problem" when users try to look at the
memory usage in their systems, according to Mackall. The kernel saves lots
of memory by sharing various pages between processes, but then when it
reports the memory usage, it counts these shared pages multiple times. The
kernel will also allocate more memory than is actually available, "in the belief
that it won't be used". This means that users and developers can't
get a good sense of how the memory is used, which leads them to "just
throw more memory at the problem".
In 2007, Mackall attacked the
problem from the kernel side by creating a set of patches that
implemented the pagemap file for each process in /proc.
This binary file "exposes the mapping from virtual to
physical" memory, which can be used to get a better look at memory usage.
He also created some user space tools to read the pagemap files
(along with the related /proc/kpagemap for the kernel). As part
of that, he "developed a pair of concepts to give meaningful
measures" to memory usage.
One of those measures is proportional set size (PSS) which
represents a process's "fair share" of shared pages. If a
page is shared by five processes, each gets one-fifth of a page added to
its PSS. The other measure is the unique set size (USS) which is the
memory devoted exclusively to the process—how much would be
returned to the system if that process were killed.
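The two measures can be illustrated with a small sketch (the page-ownership data here is hypothetical, and this is not smem's actual implementation):

```python
# Toy illustration of USS and PSS (hypothetical data, not smem's code).
# Each physical page maps to the set of processes sharing it.
pages = {
    "pg1": {"A"},                      # private to A
    "pg2": {"A", "B"},                 # shared by two processes
    "pg3": {"A", "B", "C", "D", "E"},  # shared five ways
}

def uss(proc):
    # USS: pages used by this process alone -- what killing it would free
    return sum(1 for owners in pages.values() if owners == {proc})

def pss(proc):
    # PSS: each shared page contributes 1/N of a page to the process's share
    return sum(1 / len(owners) for owners in pages.values() if proc in owners)

print(uss("A"))  # 1: only pg1 would be returned if A were killed
print(pss("A"))  # 1.7: 1 + 1/2 + 1/5 pages
```

Note that summing PSS across all processes accounts for every page exactly once, which is what makes it a sensible system-wide measure.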
He then submitted the pagemap code for inclusion into the mainline. As part of
that process, he got "lots of help" from various folks, added
a direct PSS calculation, and redesigned the code and its interface. Linus
Torvalds was not very impressed, and called the code "crap",
but Mackall was able to convince him to include it by listing all of
the people that had assisted as proof that it was a desired feature.
Unfortunately, the changes that were made to pagemap on its way into the
mainline broke all of the
user-space tools he had written, and no one else released any tools based on
it. So, now, in "take 2", Mackall is trying to "write a useful
tool and hope it catches on". The idea behind smem is to
integrate information from multiple sources to provide useful memory usage
information for developers, administrators, and users. In addition to the
expected textual output, Mackall included visualization aids in the form of
pie and bar charts.
With that introduction and history out of the way, Mackall went on to
demonstrate the smem program. At its simplest, without any
arguments, it produces a list of processes running on the system showing the
process id, user, and
command, along with four measures of memory used for each. Those measures
are the amount of swap, USS, PSS, and resident set size (RSS), with the list
being sorted by PSS. But, as Mackall showed, that output can be
rearranged, sorted, and filtered by a variety of parameters.
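For example (these flags follow smem's documented options, but exact spellings may differ between versions, so treat the invocations as illustrative; the guard keeps the snippet harmless on systems without smem installed):

```shell
# Illustrative smem invocations; skipped entirely if smem is absent.
if command -v smem >/dev/null 2>&1; then
    smem -s pss -r      # per-process view, sorted by PSS, descending
    smem -P firefox     # only processes whose command matches a regex
    smem -u             # aggregate usage per user
    smem -m             # aggregate usage per mapping
fi
```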
In addition to looking at memory from the perspective of processes,
smem can look at memory usage by mapping or user, and all three
can be used in regular expression filters. As he was showing various
options, Mackall commented on a few
programs running on his laptop, noting that gweather used 5M for "32
square pixels on the screen", and that tomboy is "useful, but
I'm not sure it's 6.9M of useful".
Since the target audience was embedded developers—and conference sponsor
CE Linux Forum funded the work—Mackall turned to describing ways to
use smem in embedded environments. The program itself is a Python
application, which is "not that huge, but not small", so
"[you] don't want to run it on your phone". What is needed is
a way to capture the data, so that it can be pulled over to another machine
to "slice and dice it" there.
To that end, smem will read a tar file that has been collected
from the /proc filesystem on the target machine. Mackall has
created a simple script to grab the relevant pieces from /proc and
create a .tgz file.
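Mackall's actual script was not shown, but a capture along these lines could work; the function name and the file list are assumptions (a real capture would also need each process's /proc/&lt;pid&gt;/smaps):

```shell
# Sketch of a /proc capture for offline smem analysis.
capture_memdata() {
    src=${1:-/proc}         # filesystem root to capture from
    out=${2:-memdata.tgz}   # resulting archive
    # A complete capture would add the per-process smaps files as well.
    tar czf "$out" -C "$src" meminfo version
}
```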
Mackall also demonstrated a system-wide view of memory that would be useful
for embedded developers who are trying to size the memory requirements for
their device. By passing arguments that give the amount of installed
memory, along with the path to an uncompressed, unstripped kernel image,
smem can produce output like:
$ ./smem -R 2G -K ~/linux-2.6/arch/x86/boot/compressed/vmlinux -k -w -t
Area                       Used      Cache   Noncache
firmware/hardware         35.2M          0      35.2M
kernel image               6.1M          0       6.1M
kernel dynamic memory      1.5G       1.3G     189.6M
userspace memory         283.5M      85.8M     197.7M
free memory              188.7M     188.7M          0
5                          2.0G       1.6G     428.6M
This shows that with the current workload on this machine, 428M of memory
is required. If this workload is known to be fixed, 512M of RAM could
reliably be specified for the system.
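The 428M figure is just the sum of the Noncache column; figures below are taken from the output above:

```python
# Noncache figures from the smem output above, in MB
noncache = {
    "firmware/hardware": 35.2,
    "kernel image": 6.1,
    "kernel dynamic memory": 189.6,
    "userspace memory": 197.7,
    "free memory": 0.0,
}
total = round(sum(noncache.values()), 1)
print(total)  # 428.6 -- matches the totals line
```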
All of the smem output can be converted to rudimentary pie and bar
charts, which can be saved in a variety of formats (PNG, SVG, JPG, EPS, and
more). As Mackall explained, there are still lots of tweaks to be made to the
output, but it is basically functional and allows some interaction (zooming
in for example).
A better GUI is one of the things on the wish list for further
smem development. First off, Mackall would like to get some users
for the tool that are reporting bugs and hopefully providing patches as
well—interested folks are directed at the download page or the project page for additional info.
In addition, better capture tools (capturing via TCP for example), adding
more sources of data (CPU usage, dirty memory, ...), adding support for
better data from the kernel, and improved visualization are all things he
would like to see added. It is functional and useful now, but could become
something far better down the road.
The announcement a few weeks ago of the preliminary plans for GNOME 3.0 catapulted the GNOME Shell and GNOME Zeitgeist into the
spotlight. Previously little-known, these programs are now identified as
the basis of a new user experience in GNOME 3.0. Meanwhile, both are in
their early stages, and few have tried them, with the result that they are
surrounded by question marks.
What exactly are these programs? What vision do they share in common?
Most importantly of all, are they capable of bearing the expectations
placed upon them? Any answers to these questions must be tentative, because
both projects are in rapid development, and certain to change dramatically
by the time GNOME 3.0 is released. All the same, those in search of
preliminary answers can find them with a bit of quick compiling.
The GNOME Shell
The GNOME Shell is now intended as the replacement for the current
panel, window manager, and desktop. The project site gives detailed instructions
for building the latest version of the application. These are relatively
straightforward, although you might need to add ~/bin to your path
to complete the compile. You should also know that the instructions
apparently assume that you are using Metacity, the current version of
GNOME's default window manager, since they do not work with any other.
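For the PATH tweak mentioned above, a typical addition to ~/.bashrc would be (assuming the build's helper scripts land in ~/bin, which you should verify on your own system):

```shell
# Make helper scripts installed in ~/bin visible to the build
export PATH="$HOME/bin:$PATH"
```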
After compiling, you can install Xephyr, a nested X server, to run the
GNOME Shell in a window on your current desktop. Alternatively, you can
temporarily replace Metacity with the GNOME Shell, following the
instructions provided by the project. In my experience, using Xephyr is
more likely to be successful.
However you start GNOME Shell, its differences from the GNOME 2 series
of releases are immediately obvious. Not only the layout but the logic with
which you use it is radically different from any GNOME desktop you have seen.
Across the top is a simplified panel, with the time and user on the
right and a button marked "Activities" on the left. It contains no applets,
menu, or system notification, and the taskbar is on a separate panel at the
bottom of the screen.
The Activities button is the key to the GNOME Shell. As in KDE 4, in
the GNOME Shell, "activities" refers to virtual workspaces, and that term
was selected to indicate how to use them. In fact, when you start the
GNOME Shell, you are looking at a full-screen workspace with the
applications xeyes, xlogo, and xterm on it. Click the Activities button,
and the workspace shrinks to reveal the complete desktop.
That desktop is as simple as the panel. On the left is a list of
recently used applications that can be expanded by clicking the link marked
"More". Recent documents have a similar arrangement below. Each expands into
a complete list in a second column of menu items if necessary.
To the right are large thumbnails of available workspaces. These
thumbnails change size as their number increases or decreases, or a menu
expands into a second column. When you select an application or document,
it opens full-screen. Click the Activities button, and it repositions
itself as a thumbnail on the current activity, sized and arranged so as not
to overlap with anything else on the activity. If you want to use a
thumbnailed application, you either click on it or on its taskbar listing to
run it full-sized. In effect, workspaces are launchpads for applications,
rather than places where you actually work.
As a desktop, the GNOME Shell is extremely economical with space, and
well-suited for giving the currently active application a maximum amount of
space. However, if monitor space is not your concern, then the GNOME Shell
can quickly become irritating. You are continually clicking to expose one
item and hide another. Nor is the user experience helped by the fact that
you currently have to make frequent wide sweeps with the mouse up to the
Activities button, although no doubt keybindings will eventually remove
this necessity.
Nor is there any easy way to work with two items side by side (although
you can do so from the taskbar), nor to track the activity that an
application is performing without making it active, nor to jump to a
particular activity in a single click. These limitations may be reduced or
eliminated later, but, for now, they give the GNOME Shell the appearance of
an interface intended for mobile devices, where such features are less
important.
The GNOME Shell may put the desktop into a strong position for the
future by providing a common interface for all the platforms it might be
installed upon. Given the rapid growth of mobile devices, having them as
the main basis for interface design may be an inevitable
evolution. However, it risks short-changing workstation users, whose
computing can be more demanding than that of mobile users.
GNOME Zeitgeist
GNOME Zeitgeist is reminiscent of Nemo, in that both replace standard
file managers based on the directory tree and the desktop with ones based
upon a calendar and other criteria. Both seem to assume that users do not
want to know where their files are, or to hunt for them visually — they
just want their files when they need them. What you think of GNOME
Zeitgeist will probably depend on how much you agree with that assumption.
Unlike the case with the GNOME Shell, the Zeitgeist project offers
little assistance to downloaders. Fortunately, all you need to do is install
the Bazaar version control system and, with an Internet connection, run
bzr branch lp:gnome-zeitgeist to download the code.
Once downloaded, there is no need to compile. Instead, just go
to the download directory and enter sh ./zeitgeist-daemon.sh to
start the service (probably in a separate window
or in the background),
followed by sh ./zeitgeist-journal.sh to run
the main graphical interface.
GNOME Zeitgeist opens on a three day calendar, showing yesterday,
today, and tomorrow, and a list of files accessed on each day. This is the
view offered when you click the "Recent" icon in the toolbar. You can also
click the "Older" or "Newer" icons to change the dates in the three-pane
display, or the "Calendar" icon to change the view to one appropriate for a
longer period.
Other ways of viewing files include Bookmarks, Tags, and Filters for
file types, all of which are available in at least one existing file
manager, although not with the same ease of use as in GNOME Zeitgeist.
If you return to the download directory, you will also find two
additional pieces of GNOME Zeitgeist that have yet to be integrated into
the main interface: zeitgeist-timeline.sh, which looks as though it
presents a longer, alternative view of files created each day, and
zeitgeist-project.sh, which presumably groups related files together. Other
criteria for finding files, such as by location, are due to be added later.
As a collection of features in a traditional file manager, Zeitgeist
would be a welcome enhancement. However, having Zeitgeist as a default file
manager raises numerous questions. Is its assumption of the average users'
preferences correct? Or will it create another barrier between desktop
users and the command line by promoting a different concept of how files
are accessed? Would users be better off if they were encouraged to organize
their files, instead of just dumping them in their home directories?
From one perspective, GNOME Zeitgeist might be seen as the equivalent
of a word processor that favors manual formatting over the creation of
styles — as an application that encourages sloppy computer habits. Others,
however, might argue that such programs are simply being realistic about
users' work habits.
Pain or paradise?
Neither the GNOME Shell nor GNOME Zeitgeist should be judged on speed
or looks yet. Both projects are still at the stage of adding
functionality. However, enough functionality exists in both that a few
preliminary comments are possible.
First, even together, the GNOME Shell and GNOME Zeitgeist seem a slight
foundation on which to build an entire new desktop. Although each is an interesting
innovation, are the two enough to "revamp" the user experience, as the
announcement of GNOME 3.0 promises? So far, it is uncertain that they are.
Moreover, each is primarily a change at the interface level. To what extent
either will require other GNOME applications to be rewritten, and to what
extent GNOME's back end libraries will need to be overhauled is still being
determined. So far, the news about GNOME 3.0 plans suggests that the
rewriting of the backend may be fairly minimal.
Just as importantly, whether the two will create a common experience is
still up in the air. So far, the two applications seem to be proceeding
along different lines of thought about usability. In particular, while the
GNOME Shell is all about economical use of desktop space, GNOME Zeitgeist
works best in a large window.
And while the GNOME Shell radically changes how users interact with the
desktop, GNOME Zeitgeist's interface is much more like the applications to
which they are accustomed.
At some point, there will probably have to be an agreement on
standard designs if the two are going to integrate well.
Finally, while few would claim that the user experience on any computer
desktop is perfected, will users accept such radical rethinking? Both
projects are attempting to make the user experience easier, but both depart
strongly from everything that users have become accustomed to over the last
two decades. Considering that KDE 4.0 got a rough reception, despite the
fact that it was an evolution of the existing desktop, not a complete
departure, GNOME 3.0 may run the risk of provoking its own user revolt.
Of course, these are early days, and the validity or absurdity of such
concerns will become clearer as both projects progress. How GNOME 3.0 is
marketed and documented will also affect its reception. But, so far, the
GNOME Shell and GNOME Zeitgeist arouse as much apprehension for GNOME 3.0
as hope. We'll have to wait to see which was more justified.
While Ubuntu has been able to attract an impressive "market share" as a
GNU/Linux distribution in just a couple of years, this success has been
limited mainly to the desktop. Canonical has made it clear that it has
ambitions for the server market, but at the moment Ubuntu
Server Edition does not stand apart from enterprise distributions like
Red Hat Enterprise Linux and SUSE Linux Enterprise. With the newest
edition, Ubuntu 9.04 ("Jaunty Jackalope"), Canonical has tried to outsmart
its competitors by focusing on cloud computing.
"Cloud computing" comes down to providing computing resources as a
dynamically scalable service. In practice this means that customers rent
computers to run their own software. The best known cloud computing system
is Amazon Elastic Compute Cloud
(EC2). Customers can create virtual machines (which EC2 calls server
instances) and run them on Amazon's servers. They are charged for each hour
the virtual machine runs, and for the bandwidth used. Amazon distributes a
set of (proprietary) EC2 tools to manage a cloud. With these command line
programs, users can create, launch, or terminate virtual servers, as well as
perform just about any other management task.
The most innovative part of Ubuntu 9.04 is the Eucalyptus project, which brings an
Amazon EC2-style private cloud within the reach of every Ubuntu 9.04
user. At the moment it's still a technology preview, which will not be
considered production-ready until Ubuntu 9.10 later this year. Eucalyptus
makes it possible to investigate cloud possibilities inside a company,
without the need to deploy the applications on external servers at
Amazon. Because Eucalyptus is interface-compatible with the EC2 APIs, the
same EC2 tools can be used. This means that working with virtual machines
on Eucalyptus is almost identical to working with virtual machines on
Amazon EC2, and a company wanting to use cloud
computing on Ubuntu has the choice between Ubuntu
Server on Amazon EC2 and Ubuntu Server on Eucalyptus, what Canonical is
calling the "Ubuntu Enterprise Cloud".
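Because the APIs match, pointing Amazon's EC2 command-line tools at a private cloud is mostly a matter of environment variables. A sketch follows; the endpoint hostname and credential paths are assumptions, though 8773 is Eucalyptus's default service port:

```shell
# Hypothetical setup for using the EC2 tools against a private Eucalyptus cloud.
export EC2_URL="http://cloud.example.com:8773/services/Eucalyptus"
export EC2_PRIVATE_KEY="$HOME/.euca/pk.pem"   # credentials downloaded from the
export EC2_CERT="$HOME/.euca/cert.pem"        # Eucalyptus web interface
# With that in place, the usual tools talk to the private cloud instead of Amazon:
if command -v ec2-describe-availability-zones >/dev/null 2>&1; then
    ec2-describe-availability-zones
fi
```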
The Eucalyptus project is important because it 'frees' cloud
computing. Traditionally, cloud computing systems have been the playground
of large companies from Google, Amazon and IBM to Microsoft. Vendor lock-in
is a serious issue in this emerging market. However, with Eucalyptus there
is a technology which allows anyone to set up their own cloud system on
their own hardware. The framework essentially implements what is commonly
referred to as "Infrastructure as a Service": a system with the
ability to run and control collections of virtual machine instances
deployed across a variety of physical servers.
Development of Eucalyptus
The name Eucalyptus is an acronym for "Elastic Utility Computing
Architecture for Linking Your Programs To Useful Systems". The software was
originally a research project in the Computer Science Department at the
University of California, Santa Barbara (UCSB). The research question that
was investigated concerned the combined use of U.S. National Science
Foundation-funded university research machines and public clouds for large-scale science
applications. Through that project, called VGrADS, the researchers designed and
coded what was released as Eucalyptus 1.0 in May 2008. This gave them
an environment to host a cloud for themselves.
At that time, the only commercial cloud was Amazon's EC2. The
researchers ported their grid system to Amazon EC2, and at the same time
built Eucalyptus as an EC2-compatible cloud system. Because the researchers
didn't have the resources to support more than one cloud API, Eucalyptus
has been written as a drop-in replacement for EC2, such that a system
running on EC2 could also run on Eucalyptus in the same way. The developers
worked only from Amazon's freely-published API specifications. Thus,
internally, Eucalyptus works completely differently from EC2, but it
faithfully reproduces EC2's functionality.
While Eucalyptus was designed as a tool to support research, the
developers clearly saw that it had potential in the non-academic world
too. Therefore, they released the code as open source under a BSD
license. However, for the moment, the developers are restricting external
contributions to bug fixes, because they want to keep the code base stable
in this early phase of development. According to project lead Rich Wolski,
this policy will change during this year:
Once we finalize the implementation of the Amazon Web
Services API (there remain a few missing API features that we wish to
include for completeness), we'll begin to recruit contributors. As of now,
we think that will be some time in the mid July time frame, but it will
depend on our release schedule.
There was a close collaboration between the Eucalyptus and Ubuntu
developers to get Eucalyptus in Ubuntu 9.04. For example, originally
Eucalyptus used Xen as its virtualization platform, but because Ubuntu
favors KVM it has been integrated with KVM in Ubuntu. However,
Eucalyptus isn't tied to Ubuntu or KVM. As Rich Wolski says:
The biggest challenge at the moment is to extend the
generality of Eucalyptus while maintaining the integrity of the system. As
we expand the number of platforms on which it runs, the hypervisors it
supports, and the APIs it supports, the complexity associated with
maintaining its core abstractions is increasing. Fortunately, we did the
original design with this generality in mind, choosing to focus initially
only on AWS, Xen and RPM packaging, but with a modularity of design that
would allow us to expand the infrastructure in an orderly way.
Creating your own private cloud with Eucalyptus
So how does Eucalyptus work? It's essentially a set of web services: the
user makes a request to a front-end web service, the cloud controller. If
the request is for storage, it is forwarded to Walrus,
the storage front-end that is compatible with Amazon S3. The storage request is then
forwarded to storage controllers running at the cluster level. If the
request to the cloud controller is not about storage, it is forwarded to
web services at the cluster level and then on to the individual compute
nodes.
Eucalyptus consists of three parts, which come in Ubuntu as three packages:
- eucalyptus-cloud: the cloud controller, implementing the EC2
and S3 APIs. A Eucalyptus system needs only one cloud controller.
- eucalyptus-cc: the cluster controller, which is the master
server and implements the virtual network. A Eucalyptus system normally
needs only one cluster controller.
- eucalyptus-nc: the node controller, which controls the KVM
hypervisor and manages the virtual machines on a node. Each physical
server in the cloud needs a node controller.
The three components can also be installed on one computer. This can be
done for example if one wants to evaluate Eucalyptus on an Ubuntu 9.04
system for the first time.
Installing and deploying Eucalyptus on Ubuntu 9.04 is still somewhat
complicated, but the Ubuntu community
documentation is an excellent guide for the installation. The user
trying to install Eucalyptus will definitely meet some rough edges. For
example, he can add a cluster to the cloud in the web interface, but adding
nodes has to be done on the command line. And the EC2 tools bundled with
Ubuntu 9.04 are not compatible with Eucalyptus, so users have to
download another version of the EC2 tools manually. Moreover, while trying
to set up a Eucalyptus system on a fresh Ubuntu 9.04 install, your author
discovered that Eucalyptus is extremely sensitive to the virtual or
physical network setup. If something is wrong with the network, the error
messages of Eucalyptus and the EC2 tools are not helpful.
And so on; one assumes these difficulties will be ironed out over time.
Ubuntu in the cloud
Ubuntu's cloud computing possibilities don't stop with Eucalyptus. For
the last few months, Canonical has had Ubuntu machine images for Amazon EC2 in beta,
and, last week, the Ubuntu EC2 team announced the availability of public
Ubuntu EC2 images for the 8.10 and 8.04 (LTS) releases. This provides a
stable Ubuntu platform that allows users to run their applications in an
EC2 environment. Meanwhile, the Ubuntu EC2 team is working on EC2 images
for 9.04 as well.
It's interesting to note that an Amazon machine image can be converted
to a Eucalyptus machine image, even if Amazon is using Xen and Eucalyptus
on Ubuntu is using KVM. The key difference is that Xen accepts an ext3
filesystem for use as a root filesystem, while KVM expects a disk
image. The Eucalyptus developers have some internal tools for making this
conversion and use them frequently during development and QA for each
release. According to Wolski, they are planning to add the tools to a
future Eucalyptus release. For now, converting images requires a little bit
of an understanding of the different requirements each hypervisor has.
Canonical and Eucalyptus have worked together to make it as easy as
possible to set up and manage a private cloud. Therefore, the Eucalyptus
web interface has a button to register for a RightScale account, which is available
in a free
developer edition to try it out, and some pay editions with extra
features. By following the link on the configuration web
page of the Eucalyptus web interface, the user is ready to manage a cloud from
within a RightScale dashboard. Users can see their virtual machine
instances on their private cloud or on EC2 in one dashboard.
User management is also included; for example, a user can be allowed to launch
his own cloud server on someone else's cloud. RightScale is now working
with Canonical to ensure that the official Ubuntu 9.04 Amazon Machine
Images will work out-of-the-box with RightScale. According to the official
RightScale blog, this will work as follows:
This means that if you launch one of the 9.04 AMIs
from the RightScale dashboard then all the RightScale goodness will work:
server templates, monitoring, automation, etc. If you launch the same AMI
using the API or from a different console, then they'll work as if
RightScale didn't exist.
An interesting note: RightScale first focused on CentOS, but switched to
Ubuntu as its primary supported distribution because of Canonical's cloud
strategy.
When Ubuntu Server first appeared, a lot of people didn't believe it
could be a real competitor in the enterprise Linux market. However, with
Ubuntu 9.04 a clear focus is emerging. Canonical wants to do for cloud
computing the same thing it has done for its desktop operating
system: make it work out-of-the-box and make it easy to deploy. The
collaboration between Canonical, Amazon, Eucalyptus and RightScale is an
important step in this direction. While working with Eucalyptus in Ubuntu
9.04 still has its rough edges, it's interesting to preview this flexible
technology that will hopefully be mature in Ubuntu 9.10 at the end of the
year. The name "Karmic Koala" for the 9.10 release at least gives a nice
hint of the core role Eucalyptus will play in Ubuntu Server.
We recently found time for a bit of site-code hacking, resulting in a
couple of new features. First: our long, dark period as the only site on
the net without a Twitter feed has now come to an end. Interested parties
can follow article posts on Twitter. We are just beginning to
experiment with these channels; please let us know if you have any ideas
for how we can use them better.
Meanwhile, as the comment volume increases, keeping up with new comments
has gotten harder. We're pondering a number of changes to help in that
regard. But one thing which has been implemented is the (subscriber-only)
page at http://lwn.net/Comments/unread.
This page will display all comments posted on LWN since the last time you
visited it (it shows comments for 24 hours on the first visit), organized
for readability. The page still has a couple of rough
edges, but it's useful now. Again, comments are welcome.
Page editor: Jonathan Corbet