Brief items
The current development kernel is 3.4-rc2, released on April 7. It includes a lot of fixes; Linus also decided to pull the DMA mapping rework preparatory patches. That should be the end of the significant merges for this development cycle, though: "I'm going to be stricter about pulls from here on out, there was a lot of 'noise', not just pure fixes." The short-form changelog can be found in the announcement.
Stable updates: there have been no stable updates released in the last week. The 3.0.28, 3.2.15, and 3.3.2 updates are in the review process as of this writing. Each contains a long list of fixes; the final releases can be expected on or after April 13.
More than eight years after the 2.6.0 release, Willy Tarreau has announced that he will no longer be releasing updates to the 2.4 series. For those who really are unable to move on, he may maintain a git tree with an occasional fix, "but with no guarantees."
The kernel security developers have stopped waiting for the kernel.org wiki
system to return to service. So kernel-related security development
information can now be found on
kernsec.org. Restoring the kernel.org wiki content there
may be an ongoing process for a little while.
By Jonathan Corbet
April 11, 2012
Deadline scheduling has been discussed on these pages a number of times.
In short: rather than using priorities, a deadline scheduler characterizes
each process with a maximum CPU time required and the deadline by which it
must receive that CPU time. A properly-written deadline scheduler can
ensure that every process meets its deadlines while refusing to take work
that would cause any deadlines to be missed. Patches adding a deadline
scheduler to Linux have existed for a few years, but their progress toward the mainline has been slow; recently that progress has been very slow, since the principal developer involved has moved on to other projects.
The deadline scheduler patches now have a new developer in the form of Juri
Lelli; Juri has posted a new version of the
patches to restart the discussion. The changes from the last time
around are mostly minor: review comments addressed and an improved
migration mechanism added. The plan is to continue to fill in the gaps
required to make the deadline scheduler production-worthy and, eventually,
to get it into the mainline. To that end, there is a new git repository,
an application designed to test deadline scheduling, and a new mailing list.
One assumes Juri would be most appreciative of testing and patches.
Kernel development news
By Jonathan Corbet
April 10, 2012
"Containers" can be thought of as a lightweight form of virtualization.
Virtualized guests appear to be running on their own dedicated hardware;
containers, instead, run on the host's kernel, but in an environment where
they appear to have that kernel to themselves. The result is more
efficient; it is typically possible to run quite a few more containerized
guests than virtualized guests on a given system. The cost, from the
user's point of view, is flexibility; since virtualized guests can run
their own kernel, they can run any operating system while containerized
guests are stuck with what the host is running.
There is another cost to containers that is not necessarily visible to
users: their implementation in the kernel is, in many ways, far more
complex. In a
system supporting containers, any globally-visible resource must be wrapped
in a namespace layer that provides each container with its own view. There
are many such resources on a Linux system: process IDs, filesystems,
and network interfaces, for example. Even the system name and time can
differ from one container to the next. As a result, despite years of
effort, Linux still lacks proper container support, while virtualization
has been a solid feature for a long time.
Work continues on the Linux containers implementation; the latest piece is
a new set of user namespace patches from
Eric Biederman. The "user namespace" can be thought of as the
encapsulation of user/group IDs and associated privilege; it allows the
owner of a container to run as the root user within that container while
isolating the rest of the system from the in-container users. The
implementation of a proper user namespace has always been a hard problem
for a number of reasons. Kernel code has been written with the
understanding that a given process has one, globally-recognized user ID;
what happens when processes can have multiple user IDs in different
contexts? It is not hard to imagine developers making mistakes with user
IDs - mistakes leading to serious security holes in the host system. Fears
of this kind of mistake, along with the general difficulty of the problem,
have kept a full user namespace out of the kernel so far.
Eric's patch takes a bit of a different approach to the problem in the hope
of making user namespaces invisible to most kernel developers while
catching errors when they are made. To that end, the patch set creates a
new type to represent "kernel UIDs" - kuid_t; there is also a
kgid_t type. The kernel UID is meant to describe a process's
identity on the base system, regardless of any user IDs it may adopt within
a container; it is the value used for most privilege checks. A process's
kernel IDs may (or may not) be equal to the IDs it sees in user space;
there is no way for that process to know. The kernel IDs exist solely to
identify a process's credentials within the kernel itself.
In the patch, kuid_t is a typedef for a single-field structure;
its main reason for existence is to be type-incompatible with the simple
integer user and group IDs used throughout the kernel. Container-specific
IDs, instead, retain the old integer (uid_t and gid_t)
types. As a result, any attempt to mix
kernel IDs with per-container IDs should yield an error from the compiler.
That should eliminate a whole class of potential errors from kernel code
that deals with user and group ID values.
The kuid_t type, being an opaque cookie to in-kernel users, needs
a set of helper functions. Comparisons can be done with functions like
uid_eq(), for example; similar functions exist for most arithmetic
tests. For many purposes, that's all that is needed. Regardless of its
namespace, a process's credentials are stored in the global kuid_t
space, so most ID tests just do the right thing.
There are times, though, where it is necessary to convert between a kernel
ID and a user or group ID as seen in a specific namespace. Perhaps the
simplest example is a system call like getuid(); it should return
the namespace-specific ID, not the kernel ID. There is a set of functions
provided to perform these conversions:
kuid_t make_kuid(struct user_namespace *from, uid_t uid);
uid_t from_kuid(struct user_namespace *to, kuid_t kuid);
uid_t from_kuid_munged(struct user_namespace *to, kuid_t kuid);
bool kuid_has_mapping(struct user_namespace *ns, kuid_t uid);
(A similar set of functions exists for group IDs, of course.) Conversion
between a kernel ID and the namespace-specific equivalent requires the use
of a mapping stored within the namespace, so the namespace of interest must
be supplied when calling these functions. For code that is executing in
process context, a call to current_user_ns() is sufficient to get
that namespace pointer. make_kuid() will turn a
namespace-specific UID into a kernel ID, while from_kuid() maps a
kernel ID into a specific namespace. If no mapping exists for the given
kernel ID, from_kuid() will return -1; for cases where a
valid ID must be returned, a call to from_kuid_munged() will
return a special "unknown user" value instead. To simply test whether it is
possible to map a specific kernel ID into a namespace,
kuid_has_mapping() is available.
Actually setting up the mapping is a task that must be performed by the
administrator, who will likely set aside a range of IDs for use within a
specific container. The patch series adds a couple of files under
/proc/pid/ for this purpose; setting up the mapping is just
a matter of writing one or more tuples of three numbers to
/proc/pid/uid_map:
first-ns-id first-target-id count
The first-ns-id is the first valid user ID in the given process's
namespace. It is likely to be zero - the root ID is valid and harmless
within user namespaces. That first ID will be mapped to
first-target-id as it is defined in the parent namespace, and
count is the number of consecutive IDs that exist in the mapping.
If multiple levels of namespaces are involved, there may be multiple layers
of mapping, but those layers are flattened by the mapping code, which only
remembers the mapping directly to and from kernel IDs.
Establishing mappings clearly needs to be a privileged operation,
so the process performing this operation must have the CAP_SETUID
capability available. A process running as root within its own namespace
may well have that capability, even though it has no access outside of its
namespace. So processes in a user namespace can set up their own
sub-namespaces with arbitrary mappings - but those mappings can only access
user and group IDs that exist in the parent namespace. As an additional
restriction, the namespace ID mapping can only be set once; after it has
been established for a given namespace, it is immutable thereafter.
These mapping functions find their uses in a number of places in the core
kernel. Any system call that deals in user or group ID numbers must
include the conversions to and from the kernel ID space. The ext* family
of filesystems allows the specification of a UID that can use reserved
blocks, so the filesystem code must make sure it's working with kernel IDs
when it does its comparisons. So the patch is, like much of the namespace
work, mildly intrusive, but Eric has clearly worked to minimize its
impact. In particular, he has taken care to ensure that the runtime
overhead of the ID-mapping code is zero if user namespaces are not
configured into the kernel, and as close as possible to zero when the
feature is used.
The user namespace feature, he says, now
has a number of nice features to offer:
With my version of user namespaces you no longer have to worry
about the container root writing to files in /proc or /sys and
changing the behavior of the system. Nor do you have to worry
about messages passed across unix domain sockets to d-bus having a
trusted uid and being allowed to do something nasty.
It allows for applications with no capabilities to use multiple
uids and to implement privilege separation. I certainly see user
namespaces like this as having the potential to make linux systems
more secure.
As of this writing, there have been few comments from reviewers, so it is
not yet clear whether other developers agree with this assessment or not.
The nature of the namespace work means that it can be difficult to get it
accepted into the mainline, where developers tend to be concerned about the
overhead and relatively uninterested in the benefits. But, with years of
persistence, much of this work has gotten in anyway; chances are that user
namespaces, in some form, will eventually find their way in as well.
April 11, 2012
This article was contributed by Neil Brown
It has been observed that "Linux is evolution, not intelligent design". Evolution may seem to be an effective way to allow many developers to work in parallel to produce a coherent whole, but it does not necessarily produce a well-structured, elegant, orthogonal design. While the developers do try to limit the more overt duplication of functionality, there are often cases where functionality overlaps, as the ongoing reflections on RAID unification bear witness.
A different perspective on the challenges associated with this
tendency to concurrent evolution can be seen if we examine the "timed
gpio" piece of the various attempts to
bring Android closer to mainline.
The Android developers appear to have made a deliberate decision to
add functionality as completely separate drivers rather than modifying
existing drivers. This undoubtedly makes it easier for them to
forward-port to newer versions of the kernel, and also provides the
setting for a simple case study when the attempt is made to merge the
functionality upstream.
The "timed gpio" driver is really more than just a
driver. Firstly it includes a new "device class" named "timed output"
which can drive an arbitrary output for a set period of time (in
milliseconds). Secondly it includes a driver for this class which
drives a given GPIO (General Purpose Input/Output) line according to
the rules for the class. So we should really be thinking of "timed
output" as the required functionality.
The primary purpose for this functionality is apparently to control a
vibrator, as
used in mobile phones to alert the owner to conditions such as an
incoming call. On the hardware side, the vibrator is connected to some
GPIO line, and can be turned on or off by asserting or de-asserting
that line.
The first query that would be raised about such a possible submission (after
checking for white-space errors of course) is whether the Linux kernel
really needs another device class, or whether some existing device
class can be enhanced to meet the need. This is notably a different
attitude to the traditional Unix "tools" approach where the preferred
first step is to combine existing tools to achieve a goal, and only
merge the functionality into those tools when it has been proved.
With Linux driver classes there is no general mechanism for combining existing
drivers - there is no equivalent of the "pipe" or the standard data
format of "new-line separated lines of text" to aid in combining
things. So the only way to avoid continually reinventing from scratch
is to incrementally enhance existing functionality.
When casting around for existing device classes the first possibility
is clearly the "gpio" class. This allows direct control of GPIO lines
and can (when the device is suitably configured) drive lines as
outputs, sample them as inputs, and even detect level changes using
poll(). A simple solution for a vibrator would be to use "gpio" as it
is, and leave the timing up to user space. That is, some daemon could
provide a service to start the vibrator and then stop it shortly
afterward.
One problem with this approach is that it is not fail-safe.
User-space programs are generally seen as less reliable than the
kernel. The daemon could be killed while the vibrator is left on and
it would then stay on, wasting quite a lot of battery capacity and
irritating the user.
How serious an issue this is remains unclear, but it does seem to have been important enough to the Android engineers to justify writing a new driver, so it should not be quickly dismissed.
Adding a timer function to the "gpio" class might be possible, but it is probably a bad choice. Timing is not intrinsic to the concept of
GPIOs and, if it were allowed into the class, it would be difficult to
justify not letting all sorts of other arbitrary features in. So it
seems best to keep it out, and in any case there is another class which
already supplies a very similar concept.
The "leds" class already performs a variety of timed output functions.
It is intended for driving LEDs and can set them to "on" or "off" or, where supported, to a range of brightness values in between. "leds" has a
rich "trigger" mechanism so that an LED can be made to flash depending
on the state of various network ports, the activity of different
storage devices, or the charge status of different power supplies. They
can also be triggered using a timer. This class already can drive a
GPIO on the assumption that it applies power to an LED, and could
easily be used to apply power to a vibrator as well (maybe we would
have to acknowledge that it was a Lively Energetic Device to get past
the naming police).
There is a precedent for this, as the original Openmoko phones - the Neo
and the Freerunner - use a "leds" driver to control the vibrator, as
does the Nokia N900.
Unfortunately the "leds" class doesn't actually meet the need, as it is
not possible to start the timer without passing through a state where
a user-space crash would leave the vibrator running endlessly. When
the "timer" trigger is enabled it starts with default values of 500ms
for "on" and 500ms for "off" which can then be adjusted.
If the application fails before resetting the "off" time, the vibrator will
come back on shortly. So for the
purposes of failing safe it is no better than the "gpio" class.
In the hope of addressing this - which could be seen as a design bug in the "leds" class - Shuah Khan recently posted a patch to add a "timer-no-default" trigger and also to allow the "off" time to be set to "forever". This would enable using the "leds" timer mechanism to drive a vibrator with no risk of it staying on indefinitely.
Out of the discussion on the linux-kernel list arose the observation - not
mentioned
before in the discussions on mainlining Android - that there is another
class that can be and is being used to drive vibrators. This is,
somewhat surprisingly, the "input" subsystem.
Choosing a name for a subsystem that will not go out-of-date is a recurring
problem, which
can be seen in, for example, the "scsi" subsystem of Linux;
that subsystem now also
drives SATA disks and USB attached storage. Similarly the "input"
subsystem is also used for some outputs such as the LEDs on a keyboard
(those that light "caps lock" or "num lock"), the speaker in the PC
platform that is used for "beeping", and, more recently, for the
force-feedback functionality of some joysticks and other game
controllers. As Dmitry Torokhov (current maintainer of the "input"
class) suggests,
it is better to think of it as an "hid" (Human Interface Device)
class which happens to be named "input".
The force feedback framework in the input class provides for a range of
physical or "haptic" signals to be sent back to the user of the
device, one of which is the "rumble" which is really another name for
a vibration. This effect can be triggered in various ways and
importantly can be set to be activated for a fixed period of time.
That is, it can operate in a fail-safe mode.
So it seems that a device class suitable for vibrators already exists. It isn't able to drive simple GPIO lines yet, but that is unlikely to be far away. Dmitry has already posted a patch to create a rumble-capable input device from PWM (pulse width modulation) hardware, and doing the same for a GPIO is a very small step.
It is interesting that, though this question has been raised at various
times in various forums over the last year or so, this seems to be the
first time that using an input device with a rumble effect has been
suggested in the same context. It highlights the fact that there is
so much functionality in Linux that nobody really knows about all of
it, and finding the bit that meets a particular need is not always
straightforward. It also highlights the observation, which has been
made many times before, that sometimes the best way to get a useful
response is to post a credible patch. People seem to be more keen to
take a patch seriously than to enter into a less-focused discussion.
Whether the Android team will come on board with the rumble effect and
drop their timed gpio patch is not yet known. What is known is that
finding the right niche for new functionality does require
persistence.
April 11, 2012
This article was contributed by Mathieu Desnoyers, Julien Desfossez, and David Goulet
The recently released Linux Trace Toolkit next generation (LTTng) 2.0
tracer is the result of a two-year
development cycle
involving a team of dedicated developers. Unlike its predecessor, LTTng
0.x, it can be installed on a vanilla or distribution kernel without any
patches. It also performs combined tracing of both the kernel and
user space, and has many more features that will be detailed in this
article and its successor.
Why LTTng 2.0?
The main motivation behind LTTng 2.0 is that we identified a strong
demand for user-space tracing, and we noticed that targeting a user base
broader than developers required a lot of work focusing on usability.
Moving forward toward these goals led us to the unification of the
tracer control and user interface (UI) for both kernel and user-space tracing.
Some might wonder why we did not go down the Perf or Ftrace path for
our development. Perf does not meet our performance needs, and is
in many ways designed specifically for the Linux kernel (e.g. the trace
format is kernel-specific). As for Ftrace, its performance is similar to
that of LTTng,
but its main focus is on simplicity for kernel debugging use-cases,
which means a single user, single tracing session, and single set of
buffers. It also has a trace format specifically tuned for the Linux
kernel, which does not meet our requirements.
LTTng 2.0 features
LTTng 2.0 is pretty much a rewrite of the older 0.x LTTng versions,
but now focusing on usability and end-user experience. It builds on the
solid foundations of LTTng 0.x for data transport, uses the existing
Linux kernel instrumentation mechanisms, and makes the flexible
CTF (Common Trace Format)
available to end users. For example, that allows
prepending arbitrary performance monitoring unit (PMU) counter values to each
traced event.
LTTng provides an integrated interface for both kernel and user-space
tracing. A "tracing" group allows non-root users to control tracing and
read the generated traces. It is multi-user aware, and allows multiple
concurrent tracing sessions.
LTTng allows access to tracepoints, function tracing, CPU
PMU counters, kprobes, and kretprobes. It provides the
ability to attach "context" information to events in the trace (e.g.
any PMU counter, process and thread ID, container-aware virtual PIDs and
TIDs, process name, etc). All the extra information fields to be
collected with events are optional, specified on a per-tracing-session
basis (except for timestamp and event id, which are mandatory). It works
on mainline kernels (2.6.38 or higher) without any patches.
The Common Trace Format specification
has been designed in collaboration with various industry participants to
suit the tracing tool interoperability needs of the embedded,
telecommunications, high-performance, and Linux kernel communities. It is
designed to allow traces to be natively generated by the Linux kernel,
Linux user-space applications written in C/C++, and hardware components.
One major element of CTF is the Trace Stream Description Language (TSDL)
which flexibly enables description of various binary trace stream
layouts. It supports reading and writing traces on architectures with
different endian-ness and type sizes, including bare metal generating
trace data that is addressed in a bitwise manner. The trace data is
represented in a
native binary format for increased tracing speed and compactness, but can
be read and analyzed
on any other machine that supports the Babeltrace data converter.
In addition
to specifying the disk data storage format, CTF is designed to be streamed
over the
network through TCP or UDP, or sent through a serial interface. The
storage format allows discarding the oldest pieces of information when
keeping flight-recorder buffers in memory to support snapshot use cases.
The flexible layout also enables features such as attaching additional
context information to each event, and the layout acts as an index that
allows fast seeking through very large traces (many GB).
LTTng is a high-performance tracer that produces very compact trace
data, with low overhead on the traced system.
A somewhat dense overview of the LTTng 2.0 architecture can be seen at right; click through for a larger view.
Tracer strengths overview
The diversity of tracing tools available in Linux today can baffle users
trying to pick which one to use. Each has been developed with
different use cases in mind, which makes the motto "use the right tool
for the job" very appropriate. In this section, we present the
strengths of the major Linux kernel tracing tools.
Please keep in mind that the authors have tried to keep an objective perspective when writing this section, which is based on information gathered at conferences over the past few years.
LTTng 2.0
The targeted LTTng audience includes anyone responsible for production
systems, system administrators, and application developers. LTTng focuses on
providing a system-wide view (across software stack layers) with detailed
combined application and system-level execution traces, without adding
too much overhead to the traced system.
One downside of LTTng 2.0 is that it is not
provided by the upstream kernel: it requires that either the distribution, or
the end user, install separate packages. LTTng 2.0 is also not geared
toward kernel development: it currently does not support integration
with kernel crash dump tools, nor does it support kernel early boot tracing.
LTTng is best suited to finding performance slowdowns or latency issues
(e.g. long delays) in
production or while doing development when the cause is either unknown or comes
from the interaction between various software/hardware components.
It can also be used to monitor production systems and desktops (in flight recorder mode) and
trigger an execution trace snapshot when an error occurs, which provides
detailed reports to system administrators and developers.
(Note: flight recorder support was available in LTTng 0.x, but is not
fully supported by the LTTng 2.0 tracing session daemon. Stay tuned for
the 2.1 release currently in preparation.)
Perf
Perf is very good at sampling hardware and software counters. The key
feature around which it has been designed is per-process performance
counter sampling for the kernel and user space. It targets both user space and
kernel developer audiences. Perf is multi-user aware, although it
allows tracing from non-root users for per-process information only.
Perf's
event header is fixed, with extensible payload definitions. Therefore,
although new events can be added, the content of its event header is
fixed by its ABI. Its tracing infrastructure resides at kernel-level
only. That means tracing user space requires round-trips to the kernel,
which causes a performance hit. Tracing features have been added
using the same infrastructure developed for sampling.
In development environments, Perf is useful as a hardware performance
counter and
kernel-level software event sampler for a process (or group of
processes). It can give insight into the major bottlenecks slowing down
process execution, when the cause of the slowdown can be pinpointed to a
particular set of processes.
Ftrace
Ftrace has been designed with function tracing as its primary purpose; it also supports tracepoint instrumentation and dynamic probing. It
has efficient mechanisms to quickly collect data and to filter out
information that is not interesting to the user.
Ftrace targets a kernel
developer audience, including console output integration that allows
dumping tracing buffers upon a kernel crash. Its event headers are fixed by
the ABI, with extensible event payload definitions. Its ring buffer is
heavily optimized for performance, but it allows only a single user to trace
at any given time. Its tracing infrastructure resides at kernel-level
only. Therefore, similar to Perf, tracing user space requires a
round-trip to the kernel, which causes a performance hit.
In development environments, Ftrace is suited for kernel developers who want to track down bugs and latency issues at the kernel level. One of the major
Ftrace strengths over Perf is its low overhead. It is therefore
well-suited for tracing high-throughput data coming from frequently hit
tracepoints or from function entry/exit instrumentation on busy systems.
LTTng 2.0 usage examples
LTTng 2.0 can be installed on a
recent Linux distribution without
requiring any kernel changes.
The individual package README files contain the installation
instructions and dependencies.
When using lttng for kernel tracing from the tracing group, the lttng-sessiond daemon needs to be started as root beforehand; this is usually handled by the distribution's init scripts.
The user interface entry point to LTTng 2.0 is the lttng command. The
same actions can also be performed programmatically through the lttng.h
API and the liblttng library.
1. Kernel activity overview with tracepoints
Tracing the kernel activity can be done by enabling all tracepoints and
then starting a trace session. This sequence of commands will show a
human-readable text log of the system activity (run as root or a user in
the tracing group):
# lttng create
# lttng enable-event --kernel --all
# lttng start
# sleep 10 # let the system generate some activity
# lttng stop
# lttng view
# lttng destroy
Here is an example of the event text output (with annotation), generated by
using:
# lttng view -e "babeltrace -n all"
to show all field names:
timestamp = 18:27:42.301503743, # Event timestamp
delta = +0.000001871, # Timestamp delta from previous event
name = sys_recvfrom, # Event name
stream.packet.context = { # Stream-specific context information
cpu_id = 3 # CPU which has written this event
},
event.fields = { # Event payload
fd = 4,
ubuf = 0x7F9C100AD074,
size = 4096,
flags = 0,
addr = 0x0,
addr_len = 0x0
}
To get the list of available tracepoints:
# lttng list --kernel
Specific tracepoints can be traced with:
# lttng enable-event --kernel irq_handler_entry,irq_handler_exit
# this can be followed by more "lttng enable-event" commands.
2. Dynamic Probes
This second example shows how to plant a dynamic probe (kprobe) in the
kernel and gather information from the probe:
# lttng create
# lttng enable-event --kernel sys_open --probe sys_open+0x0
# lttng enable-event --kernel sys_close --probe sys_close+0x0
# run "lttng enable-event" for more details
# lttng start
# sleep 10 # let the system generate some activity
# lttng stop
# lttng view
# lttng destroy
Example event generated:
timestamp = 18:32:53.198603728, # Event timestamp
delta = +0.000013485, # Timestamp delta from previous event
name = sys_open, # Event name
stream.packet.context = { # Stream-specific context information
cpu_id = 1 # CPU which has written this event
},
event.fields = {
ip = 0xFFFFFFFF810F2185 # Instruction pointer where probe was planted
}
All instrumentation sources can be combined and collected within a
trace session.
To be continued ...
LTTng 2.0 (code named "Annedd'ale") was released on
March 20, 2012. It will be available as a package in Ubuntu 12.04 LTS,
and should be available shortly for other distributions.
Only text output is currently available by using the
Babeltrace converter. LTTngTop (which will be covered in part 2) is usable,
although it is still under
development. The graphical viewer LTTV and the Eclipse LTTng plugin are
currently being migrated to LTTng 2.0. Both LTTngTop and LTTV will
re-use the Babeltrace trace reading library, while the Eclipse LTTng
plugin implements a CTF reading library in Java.
Part 2 of this article will feature more usage examples, including combined user-space and kernel tracing, adding PMU counter contexts along with kernel tracing, a presentation of the new LTTngTop tool, and a discussion of the upstream plans for the project.
[ Mathieu Desnoyers is the CEO of EfficiOS Inc.,
which also employs Julien Desfossez and David Goulet.
LTTng was created under the
supervision of Professor
Michel R. Dagenais at Ecole Polytechnique de Montréal, where all of the
authors have done (or are doing) post-graduate studies. ]
Page editor: Jonathan Corbet