Brief items
The current development kernel is 3.3-rc5,
released on February 25. "
Hey, no delays this week. And nothing really odd going on
either. Maybe things are finally calming down." 164 non-merge
changesets went into this release; the short-form changelog can be found in
the announcement.
Stable updates: 3.2.8 was released
on February 27. "This is an "interesting" update, in that
"only" one bug is being resolved here, handling a floating point issue with
new x86 processors running in 32bit mode. But, this also fixes x86-64
issues as well in this area, so all users are encouraged to upgrade. If
anyone has any problems, please let us know as soon as possible."
The 3.0.23 and
3.2.9 updates were released on
February 29; they each contain just over 70 important fixes.
In the Olden Days, we could easily spot code which needed auditing
simply by looking at the (lack of) coding style. But then someone
decided that the most important attribute of good code was the
nature of the whitespace within it. And they wrote a tool.
For a while, this made it really hard for us maintainers to tell
which incoming patches we should actually read! Fortunately, the
Anti-whitespace crusaders have come full circle: we can now tell
newbie coders by the fact that they obey checkpatch.pl.
--
Rusty Russell
The Linux kernel project has some serious and ongoing structural
problems and I think it's very important that as a project we
[Debian] keep our options open.
--
Ian Jackson
UIO provides a framework that actually works (unlike all of the
previous research papers were trying to do), and is used in real
systems (laser welding robots!) every day, manufacturing things
that you use and rely on.
You remove UIO at the risk of pissing off those robots, the choice
is yours, I know I'm not going to do it...
--
Greg Kroah-Hartman
80 column wraps are almost always not a sign of lack of screen real
estate, but a symptom of lack of thinking.
--
Ingo Molnar
By Jonathan Corbet
February 29, 2012
The facility formerly known as
jump labels
is designed to minimize the cost of rarely-used features in the kernel.
The classic example is tracepoints, which are turned off almost all of the
time. Ideally, the overhead of a disabled tracepoint would be zero, or
something very close to it. Jump labels (now "static keys") use runtime
code modification to patch unused features out of the code path entirely
until somebody turns them on.
With Ingo Molnar's encouragement, Jason Baron recently posted a patch set changing the jump label mechanism
to look like this:
if (very_unlikely(&jump_label_key)) {
/* Rarely-used code appears here */
}
A number of reviewers complained, saying that very_unlikely()
looks like just a stronger form of the unlikely() macro, which
works in a different way. Ingo responded
that the resemblance was not coincidental and made sense, but he made few
converts. After arguing the point for a while, Ingo gave in to the inevitable and stopped pushing
for the very_unlikely() name.
Instead, we have yet another format for jump labels, as described in this document. A
static key is defined with one of:
struct static_key key = STATIC_KEY_INIT_FALSE;
struct static_key key = STATIC_KEY_INIT_TRUE;
The initial value should match the normal operational setting - the one
that should be fast. Testing of static keys is done with:
if (static_key_false(&key)) {
/* Unlikely code here */
}
Note that static_key_false() can only be used with a key
initialized with STATIC_KEY_INIT_FALSE; for default-true keys,
static_key_true() must be used instead. To make a key true, pass
it to static_key_slow_inc(); removing a reference to a key (and
possibly making it false) is done with static_key_slow_dec().
This version of the change was rather less controversial and, presumably,
will find its way in through the 3.4 merge window. After that, one
assumes, this mechanism will not be reworked again for at least a
development cycle or two.
Kernel development news
By Jonathan Corbet
February 28, 2012
Control groups are one of those features that kernel developers love to
hate. It is not that hard to find developers complaining about control
groups and even threatening to rip them out of the kernel some dark night
when nobody is paying attention. But it is much less common to see
explicit discussion around the aspects of control groups that these
developers find offensive or what might be done to improve the situation.
A recent linux-kernel discussion may have made some progress in that
direction, though.
The control group mechanism is really just a way for a system administrator
to partition processes into one or more hierarchies of groups. There can
be multiple hierarchies, and a given process can be placed into more than
one of them at any given time. Associated with control groups is the
concept of "controllers," which apply some sort of policy to a specific
control group hierarchy. The group scheduling controller allows control groups to
contend against each other for CPU time, limiting the extent to which one
group can deprive another of time in the processor. The memory controller
places limits on how much memory and swap can be consumed by any given
group. The network priority controller,
merged for 3.3, allows an administrator to give different groups better or
worse access to network interfaces. And so on.
Tejun Heo started the discussion with a lengthy
message describing his complaints with the control group mechanism and
some thoughts on how things could be fixed. According to Tejun, allowing
the existence of multiple process hierarchies was a mistake. The idea
behind multiple hierarchies is that they allow different policies to be
applied using different criteria. The documentation added with control
groups at the beginning gives an example with two distinct hierarchies that
could be implemented on a university system:
- One hierarchy would categorize processes based on the role of their
owner; there could be a group for students, one for faculty, and one
for system staff. Available CPU time could then be divided between
the different types of users depending on the system policy;
professors would be isolated somewhat from student activity, but,
naturally, the system staff would get the lion's share.
- A second hierarchy would be based on the program being executed by any
given process. Web browsers would go into one group while, say,
bittorrent clients could be put into another. The available network
bandwidth could then be split according to the administrator's view of
each class of application.
On their face, multiple hierarchies provide a useful level of flexibility
for administrators to define all kinds of policies. In practice, though,
they complicate the code and create some interesting issues. As Tejun
points out, controllers can only be assigned to one hierarchy. For
controllers implementing resource allocation policies, this restriction
makes some sense; otherwise, processes would likely be subjected to
conflicting policies when placed in multiple hierarchies. But there are
controllers that exist for other purposes; the "freezer" controller simply
freezes the processes found in a control group, allowing them to be
checkpointed or otherwise operated upon. There is no reason why this kind
of feature could not be available in any hierarchy, but making that
possible would complicate the control group implementation significantly.
The real complaint with multiple hierarchies, though, is that few
developers seem to see the feature as being useful in actual, deployed
systems. It is not clear that it is being used. Tejun suggests that this
feature could be phased out, perhaps with a painfully long deprecation
period. In the end, Tejun said, the control group hierarchy could
disappear as an independent structure, and control groups could just be
overlaid onto the
existing process hierarchy. Some others disagree, though; Peter Zijlstra
said "I rather like being
able to assign tasks to cgroups of my making without having to mirror
that in the process hierarchy." So the ability to have a control
group hierarchy that differs from the process hierarchy may not go away,
even if the multiple-hierarchy feature does eventually become deprecated.
A related problem that Tejun raised is that different controllers treat the
control group hierarchy differently. In particular, a number of
controllers seem to have gone through an evolutionary path where the
initial implementation does not recognize nested control groups but,
instead, simply flattens the hierarchy. Later updates may add full
hierarchical support. The block I/O controller, for example, only finished the job with hierarchical support
last year; others still have not done it. Making the system work properly,
Tejun said, requires getting all of the controllers to treat the hierarchy
in the same way.
In general, the controllers have been the source of a lot of grumbling over
the years. They tend to be implemented in a way that minimizes their
intrusiveness on systems where they are not used - for good reason - but
that leads to poor integration overall. The memory controller, for
example, created its own shadow page tracking system, leading to a resource-intensive mess that was only
cleaned up for the 3.3 release. The hugetlb controller is not integrated
with the memory controller, and, as of 3.3, we have two independent network
controllers. As the number of small controllers continues to grow (there
is, for example, a proposed timer slack
controller out there), things can only get more chaotic.
Fixing the controllers requires, probably more than anything else, a person
to take on the role of overall control group maintainer. Tejun and Li
Zefan are credited with that role in the MAINTAINERS file, but it is still
true that, for the most part, control groups have nobody watching over
the whole system, so
changes tend to be made by way of a number of different subsystems. It is
an administrative problem in the end; it should be amenable to a solution.
Fixing control groups overall could be harder, especially if the
elimination of the multiple-hierarchy feature is to be contemplated. That,
of course, is a user-space ABI change; making it happen would take years,
if it is possible at all. Tejun suggests "herding people to use a
unified hierarchy," along with a determined effort to make all of
the controllers handle nesting properly. At some point, the kernel could
start to emit a warning when multiple hierarchies are used. Eventually, if
nobody complains, the feature could go away.
Of course, if nobody is actually using multiple hierarchies, things could
happen a lot more quickly. Nobody entered the discussion to say that they
needed multiple hierarchies, but, then, it was a discussion among kernel
developers and not end users. If there are people out there using the
multiple hierarchy feature, it might not be a bad idea to make their use
case known. Any such use cases could shape the future evolution of the
control group mechanism; a perceived lack of such use cases could have a
very different effect.
February 28, 2012
This article was contributed by John Stultz
Overview and Introduction
A face-to-face meeting of the Android mainlining
interest group, which is a community interested in upstreaming the Android patch
set into the Linux kernel, was
organized by Tim Bird (Sony/CEWG) and hosted by Linaro Connect on
February 10th. Also in attendance were representatives from
Linaro, TI, ST-Ericsson, NVIDIA, and Google's Android and ChromeOS teams,
among others.
The meeting was held to give better visibility into the various
upstreaming efforts going on and to make sure that any potential
collaboration between interested parties could happen. It covered a lot of
ground, mostly discussing Android features that were recently re-added to
the staging directory and how those features might be reworked so that they
can be moved into the mainline officially. But a number of features that
are still out of tree were also covered.
Kicking off the meeting, participants introduced themselves and their
interests in the group. Some basic goals were also covered, just to make
sure everyone was on the same page. Tim summarized his goal for the project
as letting Android run on a vanilla mainline kernel; everyone agreed that
this was a desired outcome.
Tim also articulated an important concern in achieving this
goal: the need to avoid regressions. As Android is actively shipping on a
huge number of devices, getting things upstream at the cost of short-term
stability for users is not an acceptable
trade-off. To this point, Brian Swetland (Google) clarified that the Google
Android developers are quite open to changes in interfaces. Since
Android releases both userland and kernel bundled together, there is
less of a strict need for backward API compatibility. Thus it's not the specific
interfaces they rely on, but the expected behavior of the functionality
provided.
This is a key point, as there is little value in upstreaming the
Android patches if the resulting code isn't actually
usable by Android. A major source of difficulty here is that the expected
behavior of features added in the Android patch set isn't always obvious,
and there is little in the way of design documentation. Zach Pfeffer
(Linaro) suggested that one way both to improve community understanding
of the expected behavior and to help avoid regressions would be
to provide unit tests. Currently Android is, for the most part, tested at
the system level, and there are very few unit tests for the added kernel
features. Creating a collection of unit tests would thus be beneficial, and
Zach volunteered to start that effort.
Run-time power management and wakelocks
At this point, the discussion was handed
over to Magnus Damm to talk about power domain support as well as the wakelock
implementation that the power management maintainer Rafael Wysocki had recently posted
to the Linux kernel mailing list. The Android team said they were reviewing
Rafael's patches to establish if they were sufficient and would be
commenting on them soon.
Magnus asked if full suspend was still necessary
now that run-time power management has improved; Brian clarified that
this isn't an either/or sort of situation. Android has long used dynamic
power management techniques at run time, and its developers are very
excited about using the run-time power-management framework to save yet
more power, but they also want to use suspend to reduce power usage
further. They would really like to see deep idle periods lasting minutes
to hours but, even with CONFIG_NOHZ, there are still a number of periodic
timers that keep them from reaching that goal. Using the wakelock
infrastructure to suspend allows them to achieve those goals.
Magnus asked if Android developers could provide some
data on how effective run-time power-management was compared to suspend
using wakelocks, and to let him know if there are any particular issues that
can be addressed. The Android developers said they would look into it, and
the discussion continued on to Tim and his work with the logger.
Logger
Back in December, Tim posted the
logger patches to the linux-kernel list and got feedback for changes
that he wanted to
run by the Android team to make sure there were no objections. The first of
these, which the Android developers supported, is to convert static buffer
allocation to dynamic allocation at run-time.
Tim also shared some
thoughts on changing the log dispatching mechanism. Currently logger
provides a separation between application logs and system logs, preventing
a noisy application from causing system messages to be lost. In addition,
one potential security issue the Android team has been thinking about is a
poorly-written (or deliberately malicious) application dumping private
information into the log; that information could
then be read by any other application on the system. A proposal to work
around this problem is per-application log buffers; however, with users able to
add hundreds of applications, per-application or per-group partitioning
would consume a large amount of memory. The alternative - pushing the
logs out to the filesystem - concerns the Android team as it complicates
post-crash recovery and also creates more I/O contention, which can cause
very poor performance on flash devices.
Additionally, the benefits of
doing the logging in the kernel rather than in user space are low overhead,
accurate tagging, and no scheduler thrashing. The Android developers are
also very hesitant to add more long-lived daemons to the system; while
phones have gained quite a lot of memory and CPU capacity, there is still
pressure to use Android on less-capable devices.
Another concern with some
of the existing community logging efforts is that the Android
developers don't really want strongly structured or binary logging,
because it further complicates accessing the data when the system
has crashed. With plain-text logging, there is no need for special tools,
and no versioning issues should the structure change between newer and
older devices. In difficult situations, a raw memory dump of the device
can still yield useful plain-text log data. Tim
discussed a proposed filesystem interface to the logger, to which the
Android team reiterated its interface agnosticism, as long as access
control is present and writing to the log is cheap. Tim promised to
benchmark his prototypes to ensure that performance doesn't degrade.
ION
From here, the discussion moved to ION with Hiroshi Doyu (NVIDIA)
leading the discussion. There had been an earlier meeting at Linaro Connect
focusing on graphics that some of the ION developers attended, so Jesse
Barker (Linaro) summarized that discussion. The basic goal there is to make
sure the Contiguous Memory
Allocator (CMA), dma-buf,
and other buffer sharing approaches are compatible with both general Linux
and Android Linux.
Dima Zavin (Google) clarified that ION is focused on
standardizing the user-space interface, allowing different approaches
required by different hardware to work, while avoiding the need to
re-implement the same logic over and over for each device. The really hard
part of buffer sharing is sorting out how to allocate buffers that satisfy
all the constraints of the different hardware that will need to access
those buffers. For Android,
the constraints are constant per-device, so they can be statically decided,
but for more generic Linux systems there may need to be some run-time
detection of what devices a buffer might be shared with, possibly
converting buffer types when an incompatible buffer interacts with a
constrained bit of hardware for the first time.
There are some conflicting
needs here as Android is very focused on constant performance, and
per-device optimizations allow that, whereas the more general proposed
solution might have occasional performance costs, requiring buffers to be
occasionally copied.
The ION work is still under development and hasn't
yet been submitted to the kernel mailing list, but Hiroshi pointed out that a
number of folks are actively working on integrating the dma-buf and CMA
work with ION, so there is a need for a central point for discussion and
collaboration. As it seems the ION work won't move upstream or
into staging immediately, it's likely that the linaro-mm-sig
mailing list will be that point.
Ashmem & the Alarm driver
The next topic was ashmem, which John Stultz (Linaro/IBM) has been
working on. He helped get the ashmem patches reworked and merged into
staging, and is now working on pulling the pinning/unpinning feature from
ashmem out of the driver and pushing it a little more deeply into the VM
subsystem via the fadvise volatile
interface. The effort is still under development, but there have been a
number of positive comments on the general idea of the feature.
The current
discussion around the code on linux-kernel concerns how best to store the
volatile ranges efficiently, along with some of the behavior around
coalescing neighboring volatile ranges. In addition, there have been
requests by Andrew Morton to
find a way to reuse the range storage structure with the POSIX file locking
code, which has similar requirements. This effort wouldn't totally replace
the ashmem driver, as the ashmem driver also provides a way to get unlinked
tmpfs file descriptors without having tmpfs mounted, but it would greatly
shrink the
ashmem code, making it more of a thin "Android compatibility driver", which
would hopefully be more palatable upstream.
John then covered the Android Alarm Driver feature, which he had worked
on partially upstreaming with the virtualized RTC and POSIX
alarm_timers work last year. John has refactored the Android Alarm
driver and has submitted it for staging in 3.4; he also has a patch
queue ready which carves out chunks of the Android Alarm driver and
converts it to using the upstreamed alarm_timers. To be sure regressions
are not introduced, this conversion will likely be done slowly, so that any
minor behavioral differences between the Android alarm infrastructure and
the upstreamed infrastructure can be detected and corrected as
needed. Further, the upstream POSIX alarm user interfaces are not
completely sufficient to replace the Android alarm driver, due to how it
interacts with wakelocks, but this patch set will greatly reduce the alarm
driver; from there we can see whether it would be appropriate to
preserve this smaller Android-specific interface or to try to integrate the
functionality into the timerfd code.
Low memory killer & interactive cpufreq governor
As Anton
Vorontsov (Linaro) could not attend, John covered Anton's work on improving
the low memory killer in the staging tree as well as his plans to
re-implement the low memory killer in user space using a low-memory notifier
interface to signal the Android Activity Manager. There is wider interest
in the community in a low-memory notifier interface, and a number of
different approaches have been proposed, using either control groups (with
the memory controller) or
simpler character device drivers.
The Android developers expressed concern that if the killing logic is
pushed out to the Activity Manager, it would have to be locked into memory
and the
killing path would have to be very careful not to cause any memory
allocations, which is difficult with Dalvik applications. Without this
care, if memory is
very tight, the killing logic itself could be blocked waiting for memory,
causing the situation only to get worse. The killing logic would need to be
pushed into its own memory-locked task, carefully written not to trigger
memory allocations, and the Android developers weren't excited
about adding another long-lived daemon to the system. John acknowledged
those concerns, but, as Anton is fairly passionate about this
approach, he will likely continue his prototyping in order to get hard
numbers to show any extra costs or benefits to the user-space approach.
Anton had also recently pushed out the Android "interactive cpufreq
governor" to linux-kernel for discussion. Due to a general desire in the
community not to add any more cpufreq code, it seems the patches won't be
going in as-is. Instead it was suggested that the P-state policy should be
in the scheduler itself, using the soon-to-land average load tracking code,
so Anton will be doing further research there. A question was also
raised as to whether the ondemand cpufreq governor might now be sufficient,
as a number of bugs relating to the sampling period had been resolved in
the last year. The Android developers did not have any recent data on
ondemand, but suggested that Anton get in touch with Todd Poynor at Google,
as he is the current owner of the code on their side.
Items of difficulty
Having covered most of the active work, folks discussed which items
might be too hard to ever get upstream.
The paranoid
network interface, which hard-fixes device permissions to
specific group IDs, is clearly not generic enough to be able to go
upstream. Arnd Bergmann (Linaro/IBM) however commented that network
namespaces could be used to provide very similar functionality. The
Android developers were very interested in this, but Arnd warned that the
existing documentation is narrowly scoped and will likely not be enough to
help them do exactly what they want. The Android developers said they would
look into it.
The binder driver also was brought up as a difficult bit of code to
get included into mainline officially. The Android developers expressed
their sympathy as it is a large chunk of code with a long history, and they
themselves have tried unsuccessfully to implement alternatives. However,
the specific behavior and performance characteristics of binder are very
useful for their systems. Arnd made the good point that there is something
of a parallel to SysV IPC. The Linux community would never design something
like SysV IPC, and generally has a poor opinion of it, but it does support
SysV IPC to allow applications that need it to function. Something like
this rationale might be persuasive enough to get binder merged officially,
once the code has been further cleaned up.
At this point, after four hours of discussion, the focus started to
fray. John showed a demo of the benefit of the Android drivers landing in
staging, with an AOSP build of Android 4.0 running on a vanilla 3.3-rc3
kernel with only a few lines of change. Dima then showed off a very cool
debugging tool for their latest devices. With a headphone plug on one end
and USB on the other, he showed off how the headphone jack on the phone
can do double duty as a serial port allowing for debug access. With this,
excited side conversations erupted. Tim finally thanked everyone for
attending and suggested we do this again sometime soon, personal and
development schedules allowing.
Thanks
Many thanks to the meeting participants for an interesting discussion,
Tim Bird for organizing, and to Paul McKenney, Deepak Saxena, and Dave
Hansen for their edits and thoughts on this summary.
Back in October 2011, the Long-term Support Initiative (LTSI) was
announced at the LinuxCon Europe event by
Linux Foundation board member (and NEC manager) Tsugikazu Shibata. Since
that initial announcement, a lot has gone on in the planning stages of
putting the framework for this type of project together, and at the
Embedded Linux Conference a few weeks ago, the project was publicly
announced as active.
Shibata-san again presented LTSI at ELC 2012, giving more details
about how the project would be run and how to get involved. The full
slides for the presentation [PDF] are available online.
The LTSI project has grown out of the needs of a large number of consumer
electronics companies that use Linux in their products. They need a new
kernel every year to support newer devices, but they end up cobbling
together a wide range of patches and backports to meet their needs. At
LinuxCon Europe, a survey of a large number of kernel
releases in different products showed that there is a set of common
features that different companies need, yet those features were implemented
in different ways: pulled from different upstream kernel or project
versions, combined with different kernel versions and different bug fixes,
some done in opposite ways. The results of this
survey are now available for the curious.
This is usually done because of the short development cycle that consumer
electronics companies are under in order to get a product released. Once a
product is released, the development cycle starts up again, and old work is
usually forgotten, except to the extent that it is copied into the new
project. This cycle is hard to break out of, but that needs to happen in
order to
be able to successfully take advantage of the Linux kernel community's
releases.
The LTSI project's goals were summarized as:
- Create a longterm community-based Linux kernel to cover the
embedded life cycle.
- Create an industry-managed kernel as a common ground for the
embedded industry.
- Set up a mechanism to support upstream activities for embedded
engineers.
Let's look at these goals individually, to find out how they are going
to be addressed.
Longterm community kernel
A few months ago, after discussing this project and the needs of the
embedded industry, I announced that I would be maintaining a longterm
Linux kernel, with bugfixes and security updates, for two years after it
was originally released by Linus. This kernel version will be picked
every year, enabling two different longterm kernels to be supported at
the same time. The rules of these kernels are the same as the normal
stable kernel releases, and can be found in the in-kernel file
Documentation/stable_kernel_rules.txt.
The 3.0 kernel was selected last August as the first of these longterm
kernels to be supported in this manner. The previous longterm kernel,
2.6.32, which was selected with the different enterprise Linux distributions'
needs in mind, will still be maintained for a while longer as they rely
on this kernel version for their users.
As the 3.0 kernel was selected last August, it looks like
the next longterm kernel will be selected sometime around August, 2012.
I have been discussing with a number of different companies which version
would work well for them; if anyone has any input into this, please let me
know.
Longterm industry kernel
As shown at LinuxCon Europe, basing a product kernel on just a
community-released kernel does not usually work. Almost all consumer
electronics releases depend on a range of out-of-tree patches
for some of their features. These patches include the Android kernel patch
set,
LTTng, some real-time patches, different architecture and
system-on-a-chip support, and various small bugfixes. These patches,
even if they are accepted upstream into the kernel.org releases, usually
do not meet the requirements of the community stable kernel
releases. Because of that, the LTSI kernel has been created.
This kernel release will be based on the current longterm community-based
kernel, with a number of these common patchsets applied. Applying
them in a single place enables developers from different companies to be
assured that the patches are correct, they have the latest versions,
and, if any fixes are needed, they can be done in a common way for all
users of the code.
This LTSI kernel tree is now public, and can be browsed at
http://git.linuxfoundation.org/?p=ltsi-kernel.git.
[Editor's note: this site, like the LTSI page linked below, fails badly if
HTTPS is used; "HTTPS Everywhere" users will have difficulty accessing
these pages.]
It currently contains a backport of the Android kernel patches, as well
as the latest LTTng codebase. Other patches are pending review to be
accepted into this tree in the next few weeks.
This tree is set up much like most distribution kernel trees are: a set
of quilt-based patches that apply on top of an existing kernel.org
release. This organization allows the patches to be easily reworked, and even
removed if needed; patches can also be forward-ported to new kernel
releases without
the problems of rebasing a whole expanded kernel git tree. There are
scripts in the tree to generate the patches in quilt, git, or just
tarball formats, meeting the needs of all device manufacturers.
During the hallway track at ELC 2012, I discussed the LTSI kernel with a
number of different embedded companies, embedded distributions, and
community distributions. All of these groups were eager to start using
the LTSI kernel as a base for their releases, as no one likes to do the
same work all the time. By working off of a common base, they can then
focus on the real value they offer their different communities.
Upstream support for embedded engineers
Despite the continued growth of the number of developers and companies
that are contributing to the Linux kernel, some companies still have a
difficult time figuring out how to get involved. Because of this, a
specific effort to get embedded engineers working upstream is part of
the LTSI project.
The LTSI kernel will accept patches from companies even if they contain
code that, for technical reasons, does not meet the normal acceptance
criteria set by the Linux kernel developers. These patches will be cleaned
up in the LTSI kernel tree and helped toward a mainline merge, either
through submission to the various kernel subsystem groups, or through the
staging subsystem if the code needs more work than a simple cleanup.
This feedback loop into the community is intended to help these embedded
engineers learn how to work with the community and, in the end, become
part of it directly, so that, for their next product, they will not need
to use the LTSI kernel as a "launching pad".
Help make the project obsolete
The end goal of LTSI is to put itself out of business. With companies
learning how to work directly with mainstream, their patches will not
need the help of the LTSI developers to be accepted. And if those
patches are accepted, their authors will not have to maintain them on their own
for their product's lifetime; instead, they can just rely on the community
longterm kernel support schedule for the needed bugfixes and security
updates. However, until those far-reaching goals are met, there will be
a need for the LTSI project and developers for some time.
If you work for a company interested in working with the LTSI kernel as
either a base for your devices, or to help get your code upstream,
please see the LTSI web site
for information on how to get involved.
Page editor: Jonathan Corbet