
Linux and the Internet of Things


By Jake Edge
April 30, 2014
Embedded Linux Conference

Tim Bird is certainly no stranger to the stage at the Embedded Linux Conference—as a longtime organizer of the conference, he has introduced many keynotes over the years—but he hasn't given a keynote talk himself since 2005. That changed in 2014, as Bird gave a thought-provoking talk with a conclusion that will probably surprise some: he thinks a "good" fork of the kernel is the right approach to meld Linux and the "Internet of Things". The talk preceding that finish was both wide-ranging and entertaining.

[Tim Bird]

The industry has changed just a bit since 2005, he said. There are now approximately 1.5–2 billion ("with a B") Linux devices worldwide. He has been thinking about how to get Linux into the next 9 billion devices. Using the movie Inception as a bit of inspiration, he said that he wanted to try to inject some ideas into the minds of those in attendance.

He started with open source, which, at its core, is about software that can be freely used and shared by anyone. Those freedoms are guaranteed by the GPL, but there are other licenses that strike different balances between the rights of users and developers. The core idea behind the GPL is that developers publish the derivative software they create.

But just publishing the code is not enough, he said. It is important to build a community around the code, and that can't come from just releasing a tarball. A community has mailing lists, IRC channels, and conferences where it shares ideas. That community will then build up a bunch of technologies that get shared by all.

Network effects

That is an example of a "network effect", he said. We have an intuitive feel for how powerful network effects are, but some examples will help make that more clear. The first example that is always cited for explaining network effects is the phone network, Bird said, so he would start there as well. Each new phone added to the network will infinitesimally increase the value of all the phones already in the network. Essentially, the value of the system increases as you add to it. That is a network effect in action.

Another example is the battle over the desktop. Microsoft won that battle because it had the most users, which meant it had the most application developers, which brought in even more users. It was a "virtuous cycle for them", he said. We have seen the exact same thing play out in the Android vs. iOS battle. The number of apps in the app store was the focus of much of the coverage of that battle. That's because the number of apps available really affects the perceived value of the platform to users.

Format wars are another place where network effects come into play. The VHS vs. Betamax format war or the more recent HD DVD vs. Blu-ray war are good examples. The technical features of the formats were "almost inconsequential" to the outcome of the battle. In the latter case, he was convinced that HD DVD and Blu-ray would continue fighting it out "forever", but after Warner Bros. announced it would release its titles on Blu-ray, the battle was over in a matter of weeks. That announcement was enough to tip the network effects toward Blu-ray, and HD DVD proponents capitulated quickly after that.

Network effects are everywhere, he said, and "all large companies are trying to leverage network effects". He recalled how Google became such a dominant player. Originally, it was battling with Yahoo, which had a different approach toward indexing the content of the internet. Yahoo created a hierarchical set of bookmarks, while Google just had "pure search". Google foresaw that as the internet grew larger, Yahoo's approach would eventually fail. Everyone who added something to the internet in those days was infinitesimally affecting Yahoo in a negative way. Each third-party site added was effectively helping Google.

[Tim Bird]

Companies will spend billions to win a format war, he said. He works for Sony, so he was interested to watch what happened with the PlayStation 3. It shipped with a Blu-ray player as part of the game console, which made it more expensive than competitors as well as later to market. Sony almost lost the console wars because of that, but adding Blu-ray helped tip the scales toward that format. It turned out that the PlayStation 3 was a "great Blu-ray player", so when that format won, it "pulled" the console back into the market.

Network effects have great "explanatory powers", he said. They explain format wars, but they also explain the subsidies that companies are willing to pour into their products. Those subsidies allow us to get "so much free stuff", but companies do it for the network effects. Adobe is a perfect example of that. It has both viewing tools and authoring tools for formats like Flash and PDF; eventually it figured out that giving away the viewing tools helped sell the authoring tools by way of network effects. In addition, network effects partly explain "fanboy" behavior ("though nothing can completely explain fanboys", he said with a chuckle). People act irrationally about their platform of choice because it is important to get more people on that platform—doing so makes the platform more valuable to the fanboys.

Open source and embedded

Open-source software is yet another example of network effects. Other developers write software that you use, which makes more value for you and them, which makes it more likely that more gets written. It also creates an ecosystem with books, training, tools, jobs, conferences, and so on around the software, which reinforces those effects.

But the "community" is not really a single community. For example, the kernel community is composed of many different sub-communities, for networking, scheduling, filesystems, etc. One day he will have his "USB hat" on, but on another he will be talking about the scheduler. One outcome of the network effects created by projects is that they push efforts in the direction of more generalized software, which doesn't work quite as well as software that is perfectly customized for the job at hand. But the generalization brings in more users and developers to increase the network effects.

Embedded devices are those that have a dedicated function. Mobile phones are no longer embedded devices; they are, instead, platforms. Most of the embedded work these days is using Linux, which is a general-purpose operating system, and most of those devices are running on general-purpose hardware. Silicon vendors are tossing everything they can think of onto systems-on-chip (SoCs).

He used to work on the linux-tiny project, which tried to keep the footprint of Linux small. Today, though, the smallest DRAM you can buy is 32MB, so he doesn't really worry about linux-tiny any more. He also noted that he had heard of an SoC that sold for the same price whether it had three cores or nine—"we just throw away silicon now".

In his work on cameras at Sony, there was a requirement to boot the kernel in one second. To do that, he had to take out a bunch of Linux functionality. For example, there is a cost for loading modules at boot time, so he would statically link the needed modules to remove that runtime cost. But that was removing a more general feature to "respecialize" Linux for the device.
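The kind of change involved might look like the following kernel .config fragment (an illustrative sketch with an assumed driver set, not Bird's actual camera configuration): module support is dropped entirely, and the drivers the device needs are built into the image instead.

    # Illustrative .config fragment; the driver choices are assumptions.
    # Drop loadable-module support so nothing is loaded on the boot path:
    # CONFIG_MODULES is not set
    # Build the drivers this hypothetical camera needs into the image:
    CONFIG_VIDEO_DEV=y
    CONFIG_MMC=y
    # And favor a smaller image overall:
    CONFIG_CC_OPTIMIZE_FOR_SIZE=y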

No keynote would be complete without a rant about device tree, he said with a laugh. His chief complaint about device tree is that it makes it hard to specialize the kernel for a particular device. The whole idea is to support a single kernel image for multiple SoCs, so there are "gobs and gobs of code" to parse the tree at runtime. That also leaves lots of dead code that doesn't get used by a particular SoC "hanging around" in the image. When he tried to do some link-time optimization (LTO) of the kernel image, he couldn't make any real gains because of device tree.

But device tree builds network effects. It has made code that used to live deep inside an SoC tree visible to more people. It has also exposed the IP blocks that are often shared between SoCs so that a single driver can be used to access that hardware. That makes for a better driver because more people are working with the code.

Subtractive engineering

But the "Internet of Things" (IoT) changes the whole equation. We want computers in our cars, light switches, clothes, and maybe even food, he said. To do that, we won't be putting $50 processors into all of those things, instead we will want ten-cent processors that will run Linux. He showed a hypothetical cereal box with a display that showed it booting Linux. When cereal companies put a toy into the box, they spend around $1 on that toy, could we get to a display and processor running Linux for $1?

If we want to get there, he asked, how would we go about it? Linux is too big and too power-hungry to run on that kind of system today. But earlier versions of Linux, 0.11 say, could run in 2MB. Is Linux modular enough to cut it down for applications like the cereal box? He showed a picture of a Lego crane that a friend of his had built. It had custom parts and gearboxes, and could operate like a real crane. But if we wanted to build a small car, it probably wouldn't make sense to strip down the crane into a car—instead we would start with a small Lego kit.

If you want a "Linux" that is essentially just the scheduler and WiFi stack, that is quite difficult to do today. All sorts of "extras" come with those components, including, for example, the crypto module, when all that's really needed are some routines to calculate packet checksums.

When thinking back on his eleven years at Sony, Bird was "shocked" to realize how much of that time he had spent on "subtractive engineering". His work on linux-tiny and on boot-time reduction was all subtractive. In fact, "my job was to take Linux out of the cameras", he said.

It is more difficult to remove things from a system like Linux than it is to build something up from scratch. This subtractive method is not the way to get Linux into these IoT devices, he said. In addition, if you slim Linux down too far, "you don't have Linux any more". No one else will be running your "Linux", and no one will be developing software for it. You will have lost the network effects.

Fork it

So there is a seeming paradox between the needs of a low-end system and the need to maintain the network effects that make Linux so powerful. His suggestion is to "fork the kernel". That might make folks scratch their heads, but there are "good" forks and "bad" forks, he said.

One of the big concerns with forks is fragmentation. Many will remember the "Unix wars" (and generally not fondly). Each of the Unix vendors went out to build up its user base by adding features specific to one version of Unix. But all they managed to accomplish was to split the community multiple times, so that eventually it was so tiny that Windows was able to "swoop in" and capture most of the market, Bird said.

We are still living with the effects of that fragmentation today. The Unix vendors eventually realized the problems caused by the fragmentation and so efforts like POSIX and autotools came about to try to combat them. "Every time you run a configure script, you are a victim of the Unix wars", Bird said to audience laughter.

But there is "good fragmentation" too. It has happened in the history of Linux. For example, we can run Linux today on Cortex-M3 processors because the uClinux project forked the kernel to make it run on systems without a memory management unit (MMU). The hard part of a fork is reabsorbing it back into the kernel, but that's what happened with no-MMU. It took a lot of years of hard work, but the no-MMU kernel was eventually folded back into the mainline.

The uClinux folks didn't fork the community, they just forked a bit of the technology. The same thing has happened with Android in recent times. That project went off and did its own thing with the kernel, but much of that work is being pulled back into the mainline today. Because of what Android did, we have a bigger network today that includes both traditional embedded Linux and Android.

But don't just take his word for it, Bird said. He quoted from a May 2000 "Ask Linus" column wherein Linus Torvalds said that Linux forks targeting a new market where Linux does not have a presence actually make a lot of sense.

The IoT is just such a new market, and we need a "new base camp" from which to attack it. As he said at the outset, he was just trying to implant ideas into the heads of the assembled embedded developers. He did not have specific suggestions on how to go about forking the kernel or what the next steps should be: "I leave it up to you". The key will be to figure out a way to fork Linux but to keep the network effects so that "forking can equal growth". In Bird's opinion, that is how we should "attack" getting Linux onto those next 9 billion devices.



Linux and the Internet of Things

Posted May 1, 2014 12:52 UTC (Thu) by pedrocr (guest, #57415) [Link]

>We want computers in our cars, light switches, clothes, and maybe even food, he said.

We already have Linux in cars, so that's done. We already have plenty of home automation solutions for the light switch case; I don't see why we'd want a full system in each switch, but OK. But clothes and food? I have yet to see a compelling use case for the "Internet of Things"; has anyone? So far it's a lot of "we'll build it first and then the applications will show up". They may, but I worry it's much ado about nothing.

Linux and the Internet of Things

Posted May 1, 2014 18:12 UTC (Thu) by mtaht (guest, #11087) [Link]

I disagree with Tim subtly on one point.

"It is more difficult to remove things from a system like Linux than it is to build something up from scratch. "

No, it is far, far, far harder to build something up from scratch, and it is getting more difficult every day. The subtractive approach to Linux is what allowed it to (for example) sweep the home router market - it was far easier at the time to cut Linux down to fit a small router than it was to build up any of the proprietary OSes of the time (VxWorks, etc.) to handle the new feature requirements.

We teeter perpetually on the brink of a complexity collapse, even with solid abstractions like virtual memory and stack hardening. Continued work to make embedded Linux systems more manageable and more updatable is needed - and work, like that OpenWrt has been doing, to simplify userspace is continually needed.

The 80/20 rule applies to software stack needs for embedded devices (actually it's probably closer to 99/1), but it's a different subset for every device.

However I agree with him that:

"This subtractive method is not the way to get Linux into these IoT devices, he said. In addition, if you slim Linux down too far, "you don't have Linux any more". No one else will be running your "Linux", and no one will be developing software for it. You will have lost the network effects."

The simplest devices (say, sensors) have power requirements and other problems that do seem to make alternative OSes like TinyOS more desirable, and with substantial investment into that sort of thing, I could see a new OS arise for many devices in the IoT category.

What I don't see in either scenario is a commitment to making sure these devices with tiny margins are kept updated and secure beyond the life of the product; the cost model is bad enough on devices in the 40-dollar range...

We face (and IMHO are already in) a world full of insecure, buggy devices, a toxic waste dump that requires Superfund-level cleanups. Unless business models or government/licensing/certification structures are found to keep our embedded devices updated and safe to use, the Internet of Things may as well become a world of grey goo.

Linux and the Internet of Things

Posted May 2, 2014 7:42 UTC (Fri) by aleXXX (subscriber, #2742) [Link]

Personally, I'm sure I don't want computers in my light switches.
At work we have that: some kind of clever light switches. They are also wireless, and I think they get their energy from the actual button-push movement.
Usually they work. Sometimes they don't...
I mean, it's a light switch. As far as I can remember, these are the first light switches I have ever used that sometimes do not work. Mechanically connecting two cables is such a simple operation that I don't want a computer and an operating system involved in it.

Alex

Linux and the Internet of Things

Posted May 2, 2014 13:38 UTC (Fri) by smitty_one_each (subscriber, #28989) [Link]

IPv6 addresses are all fun and games, until that moment you find yourself "living your life like a candle in the wind, never knowing who to cling to, until the reboot's in".

Linux and the Internet of Things

Posted May 2, 2014 16:19 UTC (Fri) by Baylink (guest, #755) [Link]

Alex is making here the smaller version of an argument whose larger version I've been making to lots of people I know in the broadcast world, who are convinced that the end game will be over-the-Net broadcasting replacing over-the-air. That argument is this:

Complexity Will Kill You.

The complexity of a system is dependent, in large part, on the amount of state that is maintained in each node in the graph.

Broadcasting has a fair amount of it, but it's all in one place: the radio station, where a trained engineer is paid way too little money to know how to keep it all working.

And because that's true, there's a fighting chance that system will remain functional during... Katrina. Or Sandy. Or Andrew.

If you abandon point-to-multipoint RF broadcast technology completely, in favor of leveraging the Net...well, not only does your radio station now need a technical support department, but there's no guarantee at any given point *who* the call should go to.

And if there's been 8 inches of rain in an hour, and I'm trapped on top of my car (this just happened, today, in Tampa FL), I ain't got time to call tech support, y'know?

This complexity diaspora will be the thing that kills either ideas like wholly-Internet-based broadcasting... or *us*, after it's too late.

It's the dirty little secret of David Isenberg's Stupid Network...

Linux and the Internet of Things

Posted May 2, 2014 21:27 UTC (Fri) by khim (subscriber, #9252) [Link]

Complexity could be bane or salvation. When you mentioned Katrina I immediately recalled a funny fact: after Katrina, New Orleans had no electricity, it had no running water (except in the streets), and all broadcast channels were broken or unavailable, but it had an Internet connection—although, obviously, not everywhere.

It's about points of failure, not about complexity. As for a switch which sometimes does not work… you don't need electronics for that. Good old wear and rust work just fine. We don't even notice when that happens. We just replace the thing (an eraser can help for a short time if you cannot replace the defective switch or light bulb right away), but if the same thing happens with “clever” light switches… we just have no idea what to do. It's not that they are more complex or less reliable; it's that they are unfamiliar.

That said, the Internet has lately become both more complex and dangerously centralized. That is a problematic combination, it's true.

Linux and the Internet of Things

Posted May 2, 2014 15:37 UTC (Fri) by Baylink (guest, #755) [Link]

> Each new phone added to the network will infinitesimally increase the value of all the phones already in the network. Essentially, the value of the system increases as you add to it. That is a network effect in action.

It's generally known as Metcalfe's Law, after Bob Metcalfe, inventor of Ethernet at Xerox PARC... and it's generally quoted as "the value of a network is proportional to the square of the number of nodes".

That's a *much* bigger number than "infinitesimally"; is there a point at which it switches?

Linux and the Internet of Things

Posted May 2, 2014 15:49 UTC (Fri) by Jonno (subscriber, #49613) [Link]

> Each new phone added to the network will infinitesimally increase the value of all the phones already in the network.

> "the value of a network is proportional to the square of the number of nodes".

> That's a *much* bigger number than "infinitesimally"; is there a point at which it switches?

There are about 6 billion phones in the network. The square of six billion and one is about 0.000000033% higher than the square of six billion. I would certainly describe that as "infinitesimally".
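A quick sanity check of that figure, assuming the Metcalfe-style n-squared valuation from the parent comments (a toy calculation, nothing more):

    # Relative value gain from adding one phone to a network of n phones,
    # under the value-proportional-to-n-squared model.
    n = 6_000_000_000
    gain = ((n + 1) ** 2 - n ** 2) / n ** 2   # exact integer arithmetic up top
    print(f"{gain * 100:.9f}%")               # prints 0.000000033%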

Linux and the Internet of Things

Posted May 2, 2014 16:37 UTC (Fri) by Baylink (guest, #755) [Link]

Oh, right...

I was thinking of the factorial, a completely different thing.

Sorry.

Linux and the Internet of Things

Posted May 2, 2014 16:08 UTC (Fri) by tbird20d (subscriber, #1901) [Link]

The presentation material is available online here:

This is in Prezi format, so it requires Flash (sorry about that). Audio of the talk should be available from the Linux Foundation shortly. Unfortunately, there were technical difficulties and no video was made of the talk.

Linux and the Internet of Things

Posted May 9, 2014 22:45 UTC (Fri) by cas (subscriber, #52554) [Link]

Why the hell would I want a display on a cereal box?

What possible use is that to me, or to anyone except advertising scumbags and other such vermin?

Linux and the Internet of Things

Posted May 9, 2014 22:51 UTC (Fri) by dlang (subscriber, #313) [Link]

Get off your high horse and think about the possibility that displays could be that cheap.

Yes, there's a lot of hype about the IoT, just like there is/was about 'cloud', 'virtualization', 'object-oriented programming', 'agile programming', etc.

Just because something can be misused and has hype around it doesn't mean that there is nothing there when you look deeper.

In many ways, we are already well into the IoT with smart meters and similar things; as it gets cheaper to add processing to things, you will see processors show up in all sorts of places.

How about streetlights that call home when they have problems (not just when they are out, but when they get dim)?

Now extend this to lights in an office building, and you can have the system schedule someone to come out and change all the problem lights in an area.

Linux and the Internet of Things

Posted May 10, 2014 3:07 UTC (Sat) by cas (subscriber, #52554) [Link]

my comment was about the specific example of displays on cereal boxes, not about the generic possibility of cheap displays.

cheap displays are good.

displays on cereal boxes are worthless except to advertising scumbags, who would love to put annoying animated shit in your face every morning... and, worse, want yet another opportunity to infect the minds of children.

Linux and the Internet of Things

Posted May 10, 2014 3:12 UTC (Sat) by dlang (subscriber, #313) [Link]

Well, you don't need the IoT to put displays on cereal boxes (why do they need network connections?), so it's a bad example all around.

Internet on cereal boxes

Posted May 11, 2014 17:10 UTC (Sun) by robbe (subscriber, #16131) [Link]

> (why do they need network connections?)

Well, of course to upload the pictures from the embedded camera (you didn't think it only contains a display, did you?) to Skyn^WThe Cloud so it can serve you the /right/ kind of ads.

"So I see you like Flurble milk? Did you know that Galuxian is 2¢ cheaper and contains 8% more unsaturated fats?"

Linux and the Internet of Things

Posted May 20, 2014 11:45 UTC (Tue) by tbird20d (subscriber, #1901) [Link]

You need a computer and display to play video games, of course. Not every display is there just to run advertising. Cereal boxes already come with games, trivia, and reading material on the back. Granted, some of it is just advertising drivel. But there's no reason - at the price point of one dollar - that a game or some other form of interactive media couldn't be used solely to increase the sales of the product, just like toys are today.

It's not like this is even very new territory. A magazine has already shipped a special edition with Linux embedded in it, and some people will remember the Chex Quest video game from the nineties, which shipped with certain Chex products.

Linux and the Internet of Things

Posted May 20, 2014 14:19 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

> some people will remember the Chex Quest video game from the nineties, that shipped with certain Chex products.

Still the best DOOM mod I've played :) .

Little Linux

Posted May 23, 2014 0:17 UTC (Fri) by vomlehn (subscriber, #45588) [Link]

From my point of view, what really matters is supporting the developer who is writing the application-specific software on top of all of this. I'm not one of those developers, but I'm pretty sure their point of view is this:

I want everything I want to use to be there, but I don't want to pay for anything I don't want to use. Oh, and I don't want to have to worry about how this happens.

The thing is, I'm not sure this is as hard as it seems. Imagine, if you will (cue Twilight Zone music), a kernel where large blocks of code are loadable modules: realtime signals, swap, splice, System V IPC, etc.[1] Picture also a glibc that is similarly segmented and dynamically loadable. Make all this stuff dynamically loadable (which it almost already is). Given an application that will not be downloading new code, you can now create a tool that statically analyses the application and omits anything from the kernel and libraries that will not be needed. If you really do need to download new code, statically analyse *that* and send the bits you need when you do the download. Voila! You now have a small, fast-loading Linux.
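As a toy illustration of that "statically analyse it" step (my sketch, with a made-up symbol-to-feature table, not vomlehn's actual analysis), one could start by listing the libc entry points a binary imports and mapping them to the optional kernel features they would pull in:

    # Toy sketch of a static usage analysis: list the dynamic symbols a
    # binary imports and map them to optional kernel features. The
    # FEATURE_MAP below is a made-up illustration, not a real table.
    import subprocess

    def imported_symbols(path):
        """Return the undefined (imported) dynamic symbols of an ELF binary."""
        out = subprocess.run(["nm", "--dynamic", "--undefined-only", path],
                             capture_output=True, text=True, check=True).stdout
        # nm prints lines like "                 U shmget@GLIBC_2.2.5"
        return {line.split()[-1].split("@")[0]
                for line in out.splitlines() if line.strip()}

    FEATURE_MAP = {
        "shmget": "System V IPC",
        "splice": "splice()",
        "timer_create": "POSIX timers",
        "swapon": "swap",
    }

    if __name__ == "__main__":
        syms = imported_symbols("/bin/ls")
        needed = sorted(FEATURE_MAP[s] for s in syms if s in FEATURE_MAP)
        print("optional kernel features this binary appears to need:")
        print(needed or "(none of the mapped ones)")

A real tool would also have to follow library-to-library dependencies and handle applications that make system calls directly, but the shape of the problem is the same.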

Of course, this glosses over lots of details:

  • You may want simpler versions of things, like the SLOB allocator instead of SLUB (maybe not the strongest example, but you get the idea).
  • Lots of things get inlined, so modularizing will take a performance bite. But these are not performance-critical applications, so you can disable inlining except for always_inline.
  • Lots of initialization code runs at start up and may not be suitable to be run at a later time. This one feels real, but doable.
  • You just can't break the kernel up this way; it's too tightly coupled. This, I just don't buy. It's a good practical argument against using microkernels, but if you look at the number of places where we have struct xxx_ops defined to handle what is logically the creation of subclasses, you'll get a hint that there are many places where this kind of partitioning can be done with only a small amount of suffering and lamentation.
  • Applications just use too much of the kernel; if you eliminated everything that wasn't being used, you would still have to include the bulk of what's there. If this is the case, we already have a minimal kernel, and neither a kernel fork nor a kernel developed from scratch will help. I don't believe this is the case. Preliminary results from looking at system call usage suggest that we can drop lots of system calls, some of which are quite complex, and still support quite complex applications.

So, Tim, yeah, I agree. How much is done as a fork and how much is done directly in the mainline seems an open question. But the alternative is to try to rebuild Linux from scratch. And I am so tired of duplicating effort that this sounds awful to me.

[1] This is based on some system call usage analysis that I've been doing, so these are actually pretty low-usage in the real world.

Foam or Swiss cheese

Posted May 23, 2014 17:57 UTC (Fri) by vomlehn (subscriber, #45588) [Link]

It seems to me that the amount of effort required for a stripped-down kernel depends on whether the kernel looks more like foam or Swiss cheese. Given an application and its code coverage information, it may be that the kernel usage has large adjacent areas that aren't executed. Timer file descriptor system calls are probably like this--either you use just about all of them or you use none. This is like Swiss cheese. By cutting out a few large pieces of the kernel, you can get big reductions in its heft.

More difficult is the case where kernel usage looks like foam, with a zillion separate areas where the code is unused. In this case, an application is using most of the system calls, but only a few of the options on each one. This yields a huge number of places to cut out. Not only is it hard to cut out a lot of places, but managing kernel configuration after a hypothetical fork of the kernel becomes a nightmare, with a zillion options.

The magnitude of the problem can be previewed by collecting kernel code coverage data, or we can just jump into this and see how much pain we feel. As far as managing kernel configuration in the "foam" case, I think it can still be done, but it probably requires the automated tools I postulated in my previous comment to be practical.
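A rough sketch of such a preview (my illustration, assuming gcov-format coverage output from a kernel run; the thresholds are arbitrary):

    # Classify each covered source file by how much of it never ran,
    # as a crude foam-versus-Swiss-cheese signal. Assumes *.gcov files
    # produced by a kernel code-coverage run.
    import glob

    def unexecuted_fraction(gcov_file):
        """Fraction of executable lines that never ran ("#####" in gcov output)."""
        executed = missed = 0
        with open(gcov_file) as f:
            for line in f:
                if ":" not in line:
                    continue                      # branch/call summary lines
                count = line.split(":", 1)[0].strip()
                if count == "-":
                    continue                      # line is not executable code
                if count in ("#####", "====="):
                    missed += 1                   # never executed
                else:
                    executed += 1
        total = executed + missed
        return missed / total if total else 0.0

    for path in sorted(glob.glob("*.gcov")):
        frac = unexecuted_fraction(path)
        # A mostly-dead file is a Swiss-cheese hole that can be cut out whole;
        # scattered dead lines in a mostly-live file are foam.
        label = "cheese hole" if frac > 0.9 else "foam" if frac > 0.1 else "hot"
        print(f"{path}: {frac:.0%} unexecuted ({label})")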

Foam or Swiss cheese

Posted May 23, 2014 18:28 UTC (Fri) by jimparis (subscriber, #38647) [Link]

> The magnitude of the problem can be previewed by collecting kernel code coverage data, or we can just jump into this and see how much pain we feel. As far as managing kernel configuration in the "foam" case, I think it can still be done, but it probably requires the automated tools I postulated in my previous comment to be practical.

Here's a probably-useless but interesting tale of using a similar automated tool to reduce the size of one of Farbrausch's 96k demos: http://fgiesen.wordpress.com/2012/04/08/metaprogramming-f...

