One of the more frustrating things to try and figure out on Linux systems
is how much memory is actually being used by a process. The ps
command offers something of a view into memory usage, but adding up the
numbers for various types of memory never yields a sensible result. It is
against this backdrop that Matt Mackall presented his smem tool at this year's Embedded Linux Conference.
There is an "accounting problem" when users try to look at the
memory usage in their systems, according to Mackall. The kernel saves lots
of memory by sharing various pages between processes, but then when it
reports the memory usage, it counts these shared pages multiple times. The
kernel will also allocate more memory than is actually available, "in the belief
that it won't be used". This means that users and developers can't
get a good sense of how the memory is used, which leads them to "just
throw more memory at the problem".
In 2007, Mackall attacked the
problem from the kernel side by creating a set of patches that
implemented the pagemap file for each process in /proc.
This binary file "exposes the mapping from virtual to
physical" memory, which can be used to get a better look at memory
He also created some user space tools to read the pagemap files
(along with the related /proc/kpagemap for the kernel). As part
of that, he "developed a pair of concepts to give meaningful
measures" to memory usage.
One of those measures is proportional set size (PSS), which
represents a process's "fair share" of shared pages. If a
page is shared by five processes, each gets one-fifth of a page added to
its PSS. The other measure is the unique set size (USS), which is the
memory devoted exclusively to the process—how much would be
returned to the system if that process were killed.
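To make that concrete: a process with 2MB of private pages and 6MB of pages shared equally with two other processes would have a USS of 2MB, a PSS of 4MB (2MB plus a one-third share of 6MB), and an RSS of 8MB.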
He then submitted the pagemap code for inclusion into the mainline. As part of
that process, he got "lots of help" from various folks, added
a direct PSS calculation, and redesigned the code and its interface. Linus
Torvalds was not very impressed, and called the code "crap",
but Mackall was able to convince him to include it by listing all of
the people who had assisted as proof that it was a desired feature.
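For the curious, the merged interface can be read with a few lines of Python. The sketch below reflects the pagemap format as it exists in the mainline (one little-endian 64-bit entry per virtual page, with the page frame number in the low bits and a "present" flag in bit 63); the function name is made up for illustration, and reading the file requires root:

import struct

# Translate one virtual address of a process to a physical frame number
# by indexing into /proc/<pid>/pagemap: 8 bytes per virtual page.
def vaddr_to_pfn(pid, vaddr, pagesize=4096):
    with open("/proc/%d/pagemap" % pid, "rb") as f:
        f.seek((vaddr // pagesize) * 8)     # one 64-bit entry per page
        entry, = struct.unpack("<Q", f.read(8))
    if entry & (1 << 63):                   # bit 63: page present in RAM
        return entry & ((1 << 55) - 1)      # bits 0-54: page frame number
    return None                             # swapped out or not mapped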
Unfortunately, the changes that were made to pagemap on its way into the
mainline broke all of the
user-space tools he had written, and no one else released any tools based on it.
So, now, in "take 2", Mackall is trying to "write a useful
tool and hope it catches on". The idea behind smem is to
integrate information from multiple sources to provide useful memory usage
information for developers, administrators, and users. In addition to the
expected textual output, Mackall included visualization aids in the form of
pie and bar charts.
With that introduction and history out of the way, Mackall went on to
demonstrate the smem program. At its simplest, without any
arguments, it produces a list of processes running on the system showing the
process id, user, and
command, along with four measures of memory used for each. Those measures
are the amount of swap, USS, PSS, and resident set size (RSS), with the list
being sorted by PSS. But, as Mackall showed, that output can be
rearranged, sorted, and filtered by a variety of parameters.
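A plain run looks something like the following (the processes and numbers here are purely illustrative, not from Mackall's demonstration; sizes are in kilobytes by default):

$ smem
  PID User     Command                       Swap     USS     PSS     RSS
  642 user     /bin/bash                        0     480     830    1880
  980 user     /usr/bin/python ./smem           0    4380    4570    5560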
In addition to looking at memory from the perspective of processes,
smem can look at memory usage by mapping or by user, and all three
views can be filtered with regular expressions. As he was showing various
options, Mackall commented on a few
programs running on his laptop, noting that gweather used 5M for "32
square pixels on the screen", and that tomboy is "useful, but
I'm not sure it's 6.9M of useful".
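Those views and filters hang off command-line switches. The invocations below match current smem releases, though the exact option names should be checked against the tool itself:

$ smem -u              # roll usage up by user
$ smem -m              # roll usage up by mapping
$ smem -P gweather     # only processes whose command matches the regex
$ smem -s pss -r       # sort by PSS, largest first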
Since the target audience was embedded developers—and conference sponsor
CE Linux Forum funded the work—Mackall turned to describing ways to
use smem in embedded environments. The program itself is a Python
application, which is "not that huge, but not small", so
"[you] don't want to run it on your phone". What is needed is
a way to capture the data, so that it can be pulled over to another machine
to "slice and dice it" there.
To that end, smem will read a tar file that has been collected
from the /proc filesystem on the target machine. Mackall has
created a simple script to grab the relevant pieces from /proc and
create a .tgz file.
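The script itself was not shown in detail; a minimal Python equivalent, under the assumption that smem needs at least /proc/meminfo, /proc/version, and each process's smaps, cmdline, and stat files, might look like this (note that /proc files report a size of zero, so they must be read fully rather than simply archived):

import glob, io, tarfile

# Grab the /proc pieces smem wants into a .tgz for offline analysis.
def capture(out="capture.tgz"):
    paths = ["/proc/meminfo", "/proc/version"]
    for d in glob.glob("/proc/[0-9]*"):
        paths += [d + "/smaps", d + "/cmdline", d + "/stat"]
    with tarfile.open(out, "w:gz") as tf:
        for path in paths:
            try:
                data = open(path, "rb").read()   # /proc sizes lie; read it all
            except IOError:                      # process exited mid-capture
                continue
            info = tarfile.TarInfo(path.lstrip("/"))
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))

Back on the development machine, the tarball is handed to smem in place of the live /proc, via an option along the lines of:

$ smem --source capture.tgz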
Mackall also demonstrated a system-wide view of memory that would be useful
for embedded developers who are trying to size the memory requirements for
their device. By passing arguments that give the amount of installed
memory, along with the path to an uncompressed, unstripped kernel image,
smem can produce output like:
$ ./smem -R 2G -K ~/linux-2.6/arch/x86/boot/compressed/vmlinux -k -w -t
Area                       Used     Cache  Noncache
firmware/hardware         35.2M         0     35.2M
kernel image               6.1M         0      6.1M
kernel dynamic memory      1.5G      1.3G    189.6M
userspace memory         283.5M     85.8M    197.7M
free memory              188.7M    188.7M         0
                           2.0G      1.6G    428.6M
This shows that with the current workload on this machine, 428M of memory
is required. If this workload is known to be fixed, 512M of RAM could
reliably be specified for the system.
All of the smem output can be converted to rudimentary pie and bar
charts, which can be saved in a variety of formats (PNG, SVG, JPG, EPS, and
more). As Mackall explained, there are still lots of tweaks to be made to the
output, but it is basically functional and allows some interaction (zooming
in, for example).
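In released versions of smem, the charts are produced with matplotlib and requested with command-line options, along these lines (treat the exact option names as illustrative):

$ smem --pie name -s pss        # pie chart of PSS, one slice per command
$ smem --bar name -c "pss uss"  # bar chart comparing PSS and USS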
A better GUI is one of the things on the wish list for further
smem development. First off, Mackall would like to get some users
for the tool who will report bugs and, hopefully, provide patches as
well—interested folks are directed at the download page or the project page for additional info.
In addition, better capture tools (capturing via TCP, for example), more
sources of data (CPU usage, dirty memory, ...), support for
better data from the kernel, and improved visualization are all things he
would like to see added. It is functional and useful now, but could become
something far better down the road.