LPC: Coping with hardware diversity
David Rusling started with a brief note to the effect that he dislikes the "embedded" term. If a system is connected to the Internet, he said, it is no longer embedded. Now that everything is so connected, it is time to stop using that term, and time to stop having separate conferences for embedded developers. It's all just Linux now.
ARM brings diversity
ARM is a relative newcomer to the industry, having been born in 1990 as
part of a joint venture between Acorn, VLSI, and Apple. The innovative
aspect to ARM was its licensing model; rather than being a processor
produced by a single manufacturer, ARM is a processor design that is
licensed to many manufacturers. The overall architecture for systems built
around
ARM is not constrained by that license, so each vendor creates its own
platform to meet its particular needs. The result has been a lot of
creativity and variety in the hardware marketplace, and a great deal of
commercial success. David estimated that each attendee in the room was
carrying about ten ARM processors; they show up in phones (several of them,
not just "the" processor), in disk controllers, in network interfaces,
etc.
Since each vendor can create a new platform (or more than one), there is no single view of what makes an ARM processor. Developers working with ARM usually work with a single vendor's platform and tend not to look beyond it. They are also working under incredibly tight deadlines; four months from product conception to availability on the shelves is not uncommon. There is a lot of naivety about open source software, its processes, and its licensing. In this setting, David said, fragmentation was inevitable. Linaro was formed in response, in an attempt to help the ARM community work better with the kernel development community; its prime mission is to bring about some consolidation in the ARM code base. Beyond that, he said, Linaro seeks to promote collaboration; without it, the community will be able to achieve very little. Companies working in the ARM space recognize the need to collaborate, but they are sometimes less clear on just which problems they should be trying to solve.
Once upon a time, Microsoft was the dominant empire and Linux was the upstart rebel child. Needless to say, Linux has been successful in many areas; it is now settling, he said, into a comfortable middle age. But this has all happened in the context of the PC architecture, which is not particularly diverse, so Linux, too, is not hugely diverse. It's also worth noting that, in this environment, hardware does not ship until Windows runs on it; making Linux work is often something that comes afterward.
The mobile world is different; Android, he said, has become the de facto standard mobile Linux distribution. It has become known for its "fork, rebase, repeat" development cycle. Android runs on systems with highly integrated graphics and media processors, and it is developed with an obsession about battery lifetime. In this world, things have turned around: now the hardware will not ship until Linux runs on it. Given the time pressures involved, it is no wonder, he said, that forking happens.
In the near future we are going to see the arrival of ARM-based server systems; that is going to stir things up again. They will be very different from existing servers - and from each other; the diversity of the ARM world will be seen again. There will be a significant long-term impact on the kernel as a result. For example, scheduling will have to become much more aware of power management and thermal management issues. Low power use will always be a concern, even in the server environment.
Problems to solve
Making all of this work is going to require greater collaboration between the ARM and kernel communities. ARM developers are developing the habits needed to work with upstream; the situation is much better than it was a few years ago. But we are going to need a lot more kernel developers with an ARM background, and they are going to have to get together and talk to each other more often. Some of that is beginning to happen; Linaro is trying to help with this process.
A big problem to deal with, he said, was boot architecture: what happens on the system before the kernel runs. Regardless of architecture, the boot systems are all broken and all secret; developers hate them. In the end we have to communicate system information to the kernel; now we are using features like ACPI or techniques like flattened device trees. We are seeing new standards (like UEFI) emerging, but, he asked, are we influencing those standards enough?
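By way of illustration (not from the talk), the sketch below shows what consuming a flattened device tree looks like from a driver's point of view: board details are read from data the boot loader passes in, rather than being hard-coded per platform. The "acme,uart" compatible string and "clock-frequency" property are hypothetical; the of_*() calls are the kernel's long-standing OF interfaces.

    #include <linux/kernel.h>
    #include <linux/of.h>

    static int __init uart_clock_from_dt(void)
    {
    	struct device_node *np;
    	const __be32 *prop;

    	/* Find the node the boot loader described, instead of
    	 * assuming a device at a fixed address on this platform. */
    	np = of_find_compatible_node(NULL, NULL, "acme,uart");
    	if (!np)
    		return -ENODEV;

    	/* Properties are stored big-endian in the tree. */
    	prop = of_get_property(np, "clock-frequency", NULL);
    	if (prop)
    		pr_info("uart clock: %u Hz\n", be32_to_cpup(prop));

    	of_node_put(np);
    	return 0;
    }

The point of the exercise is that the same kernel binary can then boot on any board whose tree describes its hardware, which is exactly the consolidation being argued for.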
Taking things further: will there be a single ARM platform such that one kernel can run on any system? The answer was "maybe," but, if so, it is going to take some time. We're currently in a world where we have many such platforms - OMAP, iMX, etc. - and pulling them together will be hard. We need to teach ARM developers that not all code they develop belongs in their platform tree - or in arch/arm at all. The process of looking for patterns and turning them into generic code must continue. The ARM community is working toward the goal of creating a generic kernel; there are lots of interesting challenges to face, but other architectures have faced them before.
One step in the right direction is the recent creation of the arm-soc tree, managed by Arnd Bergmann. The goal of this tree is to support Russell King (the top-level ARM maintainer) and the platform maintainers and to increase the efficiency of the whole process. The arm-soc tree has become the path for much of the ARM consolidation work to get into the mainline kernel.
Returning briefly to power management, David noted that ARM-based systems
usually have no fans. The kernel needs a better thermal management
framework to keep the whole thing from melting. And that framework will
have to reach throughout the kernel; the scheduler may, for example, need
to move processes away from an overheating core to allow it to cool down.
Everywhere we look, he said, we need better instrumentation so we have a
better idea of what is happening with the hardware.
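To make the idea concrete, here is a rough user-space sketch (again, not from the talk) of the kind of policy being described: read a zone's temperature from the standard /sys/class/thermal interface and, if the zone is running hot, steer work elsewhere with sched_setaffinity(). The zone number, 70°C threshold, and target CPU are arbitrary examples; a real in-kernel framework would integrate with the scheduler directly.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
    	/* sysfs reports temperatures in millidegrees Celsius */
    	FILE *f = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
    	long mc;
    	cpu_set_t set;

    	if (!f)
    		return 1;
    	if (fscanf(f, "%ld", &mc) != 1) {
    		fclose(f);
    		return 1;
    	}
    	fclose(f);

    	printf("zone0: %ld.%03ld C\n", mc / 1000, labs(mc % 1000));

    	if (mc > 70000) {		/* hypothetical 70 C threshold */
    		CPU_ZERO(&set);
    		CPU_SET(1, &set);	/* migrate this task to CPU 1 */
    		if (sched_setaffinity(0, sizeof(set), &set) != 0)
    			perror("sched_setaffinity");
    	}
    	return 0;
    }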
More efficient buffer management is a high priority for ARM devices; copying data uses power and generates heat, so copying needs to be avoided whenever possible. But existing kernel mechanisms are not always a good match to the ARM world, where one can encounter a plethora of memory management units, weakly-ordered memory, and more. There are a lot of solutions in the works, including CMA, a reworked DMA mapping framework, and more, but they are not all yet upstream.
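As a sketch of what "avoiding copies" means at the driver level, the fragment below allocates a single buffer through the kernel's DMA mapping API so that the CPU and the device share the same memory, with no bounce buffers or memcpy() on the data path. The acme_dev structure and function names are made up for illustration; dma_alloc_coherent() is the real interface, and hiding ARM's cache and memory-attribute quirks behind it is precisely what the reworked DMA mapping code aims to do.

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    struct acme_dev {
    	struct device *dev;	/* hypothetical device */
    	void *vaddr;		/* CPU's view of the buffer */
    	dma_addr_t dma;		/* device's view of the same memory */
    	size_t size;
    };

    static int acme_setup_buffer(struct acme_dev *ad, size_t size)
    {
    	/* One allocation, visible to both CPU and device. */
    	ad->vaddr = dma_alloc_coherent(ad->dev, size, &ad->dma,
    				       GFP_KERNEL);
    	if (!ad->vaddr)
    		return -ENOMEM;
    	ad->size = size;
    	return 0;
    }

    static void acme_free_buffer(struct acme_dev *ad)
    {
    	dma_free_coherent(ad->dev, ad->size, ad->vaddr, ad->dma);
    }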
In summary, we have some problems to solve. There is an inevitable tension between product release plans and kernel engineering. Product release cycles have no space for the "argument time" required to get features into the mainline kernel. It is, he said, a social engineering problem that we have to solve. It will certainly involve forking the kernel at times; the important part is joining back with the mainline afterward. And, he asked, do we really need to have everything in the kernel? Perhaps, in the case of "throwaway devices" with short product lives, we don't really need to have all that code upstream.
If we are going to scale the kernel across the diversity of contemporary
hardware, he said, we will have to maintain a strong focus on making our
code work on all systems. We'll have to continue to address the tensions
between mobile and server Linux, and we'll have to make efforts to cross
the kernel/user-space border and solve problems on both sides. This is a
discussion we will be having for some time, he said; events like the
Linux Plumbers Conference are the ideal place for that discussion.
Index entries for this article:
Kernel: Architectures
Conference: Linux Plumbers Conference/2011
Gosh
Posted Sep 14, 2011 16:23 UTC (Wed) by khim (subscriber, #9252)
Is it really good to start the keynote with a bogus statement and then spend the rest of the keynote explaining why exactly said statement is bogus? An embedded OS (Linux or any other) is an OS which is tightly tied to hardware and where OS replacement is not supposed to happen without vendor involvement. That pretty much describes all ARM devices on the market - connected or not. Sure, there are a few devices (such as the Nexuses from Google) where the end user is given the ability to install a new OS, but even there this OS should be fine-tuned for one particular device. ARM devices are embedded systems - and that's exactly the problem which we are discussing here. This is what sets them apart from the PC or Mac. To say that the fact that devices are connected somehow makes the OS non-embedded... gosh.
Posted Sep 14, 2011 17:34 UTC (Wed) by k8to (guest, #15413)
I submit that the word barely means anything anymore.
Posted Sep 14, 2011 18:09 UTC (Wed) by khim (subscriber, #9252)
To embed: to set solidly in or as if in surrounding matter ("the nails were solidly embedded in those old plaster walls").
This may be so, but the difference is still meaningful... Perhaps, but then perhaps not. From the software developer POV, the difference between an Xbox 360, an Android phone, and, for example, a TV is pretty minimal. In all cases you can install your own custom firmware, and in all cases this is not something the developers had in mind at all. Now, you can argue that computers were initially developed in such a way (back then, software was just a free add-on to the hardware), but what we now understand under this term is some box where hardware is developed for the software, not the other way around. ARM systems are slowly moving in this direction, too, but they are still pretty much at the stage where development is driven by hardware, not the other way around.
Posted Sep 14, 2011 19:22 UTC (Wed) by nhippi (subscriber, #34640)
Better to use clearer terms:
General purpose / single purpose system
High performance / limited performance system
Wall powered / battery powered (power management critical)
Qwerty and mouse / limited input options
Big screen / small or no screen
Generic hardware / tailored hardware
Traditionally, everything on the "right side" has been lumped together as "embedded" systems, but in reality most systems are a mix of features from both sides.
Consider Juniper routers, which have x86 and FreeBSD, but where the really important stuff is the custom routing hardware attached to the system: a high-performance, single-purpose machine with a couple of LEDs and buttons.
Alternatively, an Android tablet built on the Tegra 2 reference design is a general-purpose system (you can install apps for almost any purpose and browse the web for more), high performance (well, depending on what you compare it to), battery powered, with complex multitouch input support and a screen resolution (1280x800) that our PCs had not so long ago.
Lumping both under "embedded" does not really help anyone.
Posted Sep 15, 2011 7:34 UTC (Thu) by alison (subscriber, #63752)
Thanks as always to Corbet for a fascinating post about what sounds like a fascinating presentation. My respect for Rusling and Linaro continues to grow. When I heard that Linaro was mostly Canonical-funded, I was suspicious, but I was completely wrong.
Posted Sep 15, 2011 14:29 UTC (Thu) by james_w (guest, #51167)
Not every ARM == embedded
Posted Sep 15, 2011 17:52 UTC (Thu) by wookey (guest, #5501)
On the 'embedded' point, I have to disagree with khim. ARM is not all systems you never change the OS on, and even to the extent that that is true (a lot of random and fairly closed consumer kit), it's still the wrong way to think about it. ARM is just another architecture, like Intel x86 and MIPS, and you can make whatever sort of computer you like out of it. Early ARM machines (when I got started in the early 1990s) were full desktop machines, driving monitors, with hard drives and plug-in keyboards. And we are about to see a lot more of that sort of thing with ARM servers, ARM laptops, ARM netbooks, home servers, etc. Thinking of it as a 'mobile phone/embedded' architecture is already behind the times.
There is already loads of ARM kit out there which is 'a real computer' and there is no reason why you shouldn't change the OS if you want to (although you may not have a very wide choice of non-linux OSes in practice).
To me 'embedded' was when you had 8K of RAM and 4 IO wires to play with - these days microcontrollers are much bigger than that and anything that can run linux has enormous resources in comparison.
[Disclosure: I've been working on arm kit since 1993 and am currently working for Linaro at ARM].
Posted Sep 15, 2011 11:39 UTC (Thu) by jone (guest, #62596)
I think your first item is more accurate, as systems should really be looked at as: General Purpose / {Single, Limited} Purpose.
As we've got devices going both ways... Like I used to say of cameras, the most useful computing devices are the ones you have on you. So, as I see it, much of the small mobile market is attempting to become general purpose these days, while in the high-end space we've really got the reverse in many places, as people attempt to do more single/limited things closer to the hardware with general-purpose servers and workstations.
Posted Sep 15, 2011 13:23 UTC (Thu) by hrw (subscriber, #44826)
I also have other ARM systems which I would not call embedded, as I can run Debian on them.
But I also have (small) experience with embedded x86 systems whose OS was done in a fire-and-forget way, which is common to the embedded market regardless of CPU architecture.
Too many Linux conferences
Posted Oct 5, 2011 16:01 UTC (Wed) by davidarusling (guest, #80637)
--- Conferences ---
As we're consolidating ARM contributions, we're also pushing at various bits of kernel infrastructure, the memory management stuff, for example. To do the right thing (i.e., end up with the right code in the kernel) we need agreement from a wide range of open source communities (kernel, multimedia, video, etc.). Going to many, many kernel conferences is not terribly efficient.
I think that this is a scaling problem for the Linux kernel engineers; what forums make sense to attend in order to agree on designs, code, and directions? Whilst presentations are good for bootstrapping knowledge, I'm more interested in technical decisions being made. Linux Plumbers is one such conference, but it only happens once a year. Whilst Vancouver was good, it could have been better.
--- Embedded ---
For me, traditional embedded is low memory, tight timings. A disk controller fits pretty well. Thinking of ARM as embedded is limiting, as it is no longer just embedded. Thinking of ARM as just another architecture is also misleading, because the drivers and business models are fundamentally different.
Dave
Posted Sep 14, 2011 16:59 UTC (Wed) by willnewton (guest, #68395)
"four months from product conception to availability on the shelves is not uncommon"
What kinds of product does that refer to? An SoC could not be produced that quickly, and I can't see CE products being developed that fast either...
Posted Sep 15, 2011 6:18 UTC (Thu) by svkelley (guest, #37299)
He is not talking about spinning a new SoC. He is talking about a new board/device/platform based on a given SoC or variations of an SoC from a company. This is quite normal in embedded/consumer space. In fact, to save money, component/layout/CPU changes are done to reduce BOM cost on product spins. Fact. Of. Life. in the consumer electronics space.
Posted Sep 15, 2011 12:22 UTC (Thu) by willnewton (guest, #68395)
A spin of a board to reduce the BOM does not sound like "from product conception" to me. I wasn't there to hear the talk but this just sounded like a slightly surprising statement to me.
"throwaway" devices
Posted Sep 15, 2011 11:19 UTC (Thu) by ndye (guest, #9947)
Declining the "cult of the new" leaves one searching for solutions to stretch a device's useful life past the sales department's intent. FLOSS promises hope, and success amortizes other costs, including the environmental costs of manufacturing.
I don't want Linux surrendering those goals, especially where the vendors intentionally profited by FLOSS shortening their development cycle.
Posted Sep 15, 2011 12:45 UTC (Thu) by linusw (subscriber, #40300)
But it's worth noting that systems like the Amiga and Atari ST which are considered landfill items in many places have excellent support in Linux 3.0, because people still love to hack them.
Whereas a comparatively new architecture like arch/cris isn't even compiling anymore if the linux-next autobuilds give a true impression.
So there is some tension between a hacker's view of that 1980's piece of hardware as a nice thing to hack on when they retire, and the general silicon industry's concept of hardware as something that has a planned support/life cycle, after which they pull the plug.
Posted Sep 18, 2011 9:12 UTC (Sun) by giraffedata (guest, #1954)
I have the same innate feeling of waste when a piece of hardware I paid for goes to the landfill. But then I apply logic, and I realize that our most valuable resources are ones that don't occupy landfills when they are spent, and those are the resources we're trying to save by abandoning hardware to the landfill.
In other words, the device that is amortized over two years and then goes to the landfill really is less wasteful than the one that is amortized over 10 years.
The simplest application I find of this is the old equipment that still works, but I send to the landfill and replace because the cost of disposal and replacement is less than the cost of the extra electricity it takes to run it. But it also works for more complex resources like human labor and discomfort.
Posted Sep 20, 2011 14:53 UTC (Tue) by njwhite (guest, #51848)
One of the things I really love about FLOSS is how it can keep on supporting hardware well after its manufacturers have moved on. My main system was designed to be a short-lived netbook for casual users to enjoy for a year or two; I've used it for four (having had to replace various cheaply made components along the way), and expect to have it for a good while to come.
These 'throwaway' devices can remain useful or be repurposed for long after their manufacturers have moved on with free software; if they rely on lots of crappy non-upstream code, that's a lot tougher. A focus on upstream is important, even if not so directly for initial quality of the shipped product.
Posted Sep 20, 2011 16:09 UTC (Tue) by mathstuf (subscriber, #69389)
Posted Sep 22, 2011 14:19 UTC (Thu) by fuhchee (guest, #40059)
Do you have a sense of whether the recipients have been using the thing, or whether it was thrown away / regifted?
Posted Sep 22, 2011 15:29 UTC (Thu) by mathstuf (subscriber, #69389)
Posted Sep 15, 2011 14:45 UTC (Thu) by btraynor (guest, #26672)
Why does Mr. Rusling believe that it all wasn't just Linux before? Even though development triggers vary - bringing the next great cellphone to market, building a great netbook, or ensuring the antilock brakes function properly in your car - all of the development effort eventually filters upstream when the OSS principles are adhered to. In the end, a kernel fix or a UI enhancement comes from the same place.
But why bother to "stop having separate conferences for embedded developers"? Should the desktop conferences stop too? Should there be one super conference again (LinuxWorld)? Classification or specialization in an ecosystem as big as Linux seems to me to benefit the broader community, in that it allows for smaller, lower-cost conferences to be hosted in various locales that would otherwise not be able to accommodate a giant conference. Besides, if there were one super conference, it would result in a million and one Birds of a Feather (BoF) sessions consisting of developers with specific interests, e.g. embedded Linux.
But what's in a name anyway? Call it what you will; communities form around commonalities regardless, such as hardware (e.g., BeagleBoard) or software (e.g., Debian).
Posted Sep 15, 2011 17:51 UTC (Thu) by PaulMcKenney (✭ supporter ✭, #9624)
One motivation for getting embedded developers to come to mainstream Linux events is to get them comfortable with contributing changes/fixes.
So one difference between embedded and desktop is that from what I can see, desktop developers are comfortable contributing upstream. This is becoming increasingly true for embedded developers, but more progress is needed.
Does it all make sense now?
Posted Sep 15, 2011 18:56 UTC (Thu) by btraynor (guest, #26672)
Star wars analogy
Posted Sep 16, 2011 21:33 UTC (Fri) by error27 (subscriber, #8346)
Rusling made some kind of Star Wars analogy here that I found disturbing. You had Microsoft as the evil Empire, and you had Linux as the Rebel Alliance. But then he talked about Linus logging on to the Internet and asking for help. To me, the idea of Linus Torvalds as Princess Leia was an image I didn't need in my head.
Posted Sep 22, 2011 12:37 UTC (Thu) by man_ls (guest, #15091)
Don't worry, there are no parallels. When Linus made his call for help, Windows was not even close to desktop dominance. Now Mark Shuttleworth, on the other hand...