
Networking on tiny machines


By Jonathan Corbet
May 7, 2014
Last week's article on "Linux and the Internet of Things" discussed the challenge of shrinking the kernel to fit on to computers that, by contemporary standards, are laughably underprovisioned. Shortly thereafter, the posting of a kernel-shrinking patch set sparked a related discussion: what needs to be done to get the kernel to fit into tiny systems and, more importantly, is that something that the kernel development community wants to even attempt?

Shrinking the network stack

The patch set in question was a 24-part series from Andi Kleen adding an option to build a minimally sized networking subsystem. Andi is looking at running Linux on systems with as little as 2MB of memory installed; on such systems, the Linux kernel's networking stack, which weighs in at about 400KB for basic IPv4 support, is just too big to shoehorn in comfortably. By removing a lot of features, changing some data structures, and relying on the link-time optimization feature to remove the (now) unneeded code, Andi was able to trim things down to about 170KB. That seems like a useful reduction, but, as we will see, these changes have a rough road indeed ahead of them before any potential merge into the mainline.

Some of the changes in Andi's patch set include:

  • Removal of the "ping socket" feature that allows a non-setuid ping utility to send ICMP echo packets. It's a useful feature in a general-purpose distribution, but it's possibly less useful in a single-purpose tiny machine that may not even have a ping binary. Nonetheless the change was rejected: "We want to move away from raw sockets, and making this optional is not going to help us move forward down that path."

  • Removal of raw sockets, saving about 5KB of space. Rejected: "Sorry, you can't have half a functioning ipv4 stack."

  • Removal of the TCP fast open feature. That feature takes about 3KB to implement, but it also requires the kernel to have the crypto subsystem and AES code built in. Rejected: "It's for the sake of the remote service not the local client, sorry I'm not applying this, it's a facility we want to be ubiquitous and in widespread use on as many systems as possible."

  • Removal of the BPF packet filtering subsystem. Rejected: "I think you highly underestimate how much 'small systems' use packet capturing and thus BPF."

  • Removal of the MIB statistics collection code (normally accessed via /proc) when /proc is configured out of the kernel. Rejected: "Congratulations, you just broke ipv6 device address netlink dumps amongst other things."
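For illustration, the ping-socket feature that the first rejected patch would have removed lets an unprivileged process send ICMP echo requests through a SOCK_DGRAM/IPPROTO_ICMP socket (gated by the net.ipv4.ping_group_range sysctl) rather than a raw socket. Below is a minimal Python sketch of the echo-request packet such a tool sends, with the RFC 1071 Internet checksum; note that with a real ping socket the kernel manages the identifier field for you, so this is closer to what a raw-socket ping builds by hand:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)  # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP echo request: type 8, code 0, checksum over the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload
```

To actually send it one would open `socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)` and `sendto()` the packet, which succeeds only where the sysctl permits unprivileged ICMP sockets.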

The above list could be made much longer, but the point should be apparent by now: this patch set was not welcomed by the networking community with open arms. This community has been working with a strong focus on performance and features on contemporary hardware; networking developers (some of them, at least) do not want to be bothered with the challenges of trying to accommodate users of tiny systems. As Eric Dumazet put it:

I have started using linux on 386/486 pcs which had more than 2MB of memory, it makes me sad we want linux-3.16 to run on this kind of hardware, and consuming time to save few KB here and here.

The networking developers also do not want to start getting bug reports from users of a highly pared-down networking stack wondering why things don't work anymore. Some of that would certainly happen if a patch set like this one were to be merged. One can try to imagine which features are absolutely necessary and which are optional on tiny systems, but other users solving different problems will come to different conclusions. A single "make it tiny" option has a significant chance of providing a network stack with 99% of what most tiny-system users need — but the missing 1% will be different for each of those users.

Should we even try?

Still, pointing out some difficulties inherent in this task is different from saying that the kernel should not try to support small systems at all, but that appears to be the message coming from the networking community. At one point in the discussion, Andi posed a direct question to networking maintainer David Miller: "What parts would you remove to get the foot print down for a 2MB single purpose machine?" David's answer was simple: "I wouldn't use Linux, end of story. Maybe two decades ago, but not now, those days are over." In other words, from his point of view, Linux should not even try to run on machines of that class; instead, some sort of specialty operating system should be used.

That position may come as a bit of a surprise to many longtime observers of the Linux development community. As a general rule, kernel developers have tried to make the system work on just about any kind of hardware available. The "go away and run something else" answer has, on rare occasion, been heard with regard to severely proprietary and locked-down hardware, but, even in those cases, somebody usually makes it work with Linux. In this case, though, there is a class of hardware that could run Linux, with users who would like to run Linux, but some kernel developers are telling them that there is no interest in adding support for them. This is not a message that is likely to be welcomed in those quarters.

Once upon a time, vendors of mainframes laughed at minicomputers — until many of their customers jumped over to the minicomputer market. Minicomputer manufacturers treated workstations, personal computers, and Unix as toys; few of those companies are with us now. Many of us remember how the proprietary Unix world treated Linux in the early days: they dismissed it as an underpowered toy, not to be taken seriously. Suffice to say that we don't hear much from proprietary Unix now. It's a classic Innovator's Dilemma story of disruptive technologies sneaking up on incumbents and eating their lunch.

It is not entirely clear that microscopic systems represent this type of disruptive technology; the "wait for the hardware to grow up a bit" approach has often worked well for Linux in the past. It is usually safe to bet on computing hardware increasing in capability over time, so effort put into supporting underpowered systems is often not worth it. But we may be dealing with a different class of hardware here, one where "smaller and cheaper" is more important than "more powerful." If these systems can be manufactured in vast numbers and spread like "smart dust," they may well become a significant part of the computing substrate of the future.

So the possibility that tiny systems could be a threat to Linux should certainly be considered. If Linux is not running on those devices, something else will be. Perhaps it will be a Linux kernel with the networking stack replaced entirely by a user-space stack like lwIP, or perhaps it will be some other free operating system whose community is more interested in supporting this hardware. Or, possibly, it could be something proprietary and unpleasant. However things go, it would be sad to look back someday and realize that the developers of Linux could have made the kernel run on an important class of machines, but they chose not to.



Networking on tiny machines

Posted May 7, 2014 14:30 UTC (Wed) by rossburton (subscriber, #7254) [Link]

Regarding lwIP, Andi covered that in his post:

"There were proposals to instead use LWIP in user space. LWIP with its socket interface comes in at a bit over 100k overhead per application."

Andi's current series is 170K for kernel-based networking, and can possibly go further.
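The trade-off here is easy to quantify from the numbers quoted: the in-kernel stack is a shared cost, while the lwIP overhead is paid once per networked application, so a per-process user-space stack loses as soon as more than one process needs the network. A quick sketch (the figures are the ones quoted above, not measurements):

```python
KERNEL_STACK_KB = 170   # Andi's trimmed in-kernel stack, shared by all processes
LWIP_PER_APP_KB = 100   # quoted lwIP-with-sockets overhead, paid per application

def userspace_total_kb(apps: int, per_app: int = LWIP_PER_APP_KB) -> int:
    """Total footprint of a per-process user-space stack for `apps` processes."""
    return apps * per_app

# One networked process favors lwIP; two already favor the shared kernel stack.
```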

Networking on tiny machines

Posted May 7, 2014 15:09 UTC (Wed) by stefanha (subscriber, #55072) [Link]

For something really small look at iPXE's TCP/IP stack that is under 64 KB. But you lose all of the "operating system" niceties like threads, memory management, userspace, POSIX, etc.

The stack also doesn't match the performance of Linux simply because it doesn't use as many buffers and doesn't implement all the advanced TCP/IP features.

https://git.ipxe.org/ipxe.git/tree/HEAD:/src/net

Networking on tiny machines

Posted May 7, 2014 16:07 UTC (Wed) by epa (subscriber, #39769) [Link]

Didn't the old ka9q implement a full TCP/IP stack (as well as SLIP)? How much memory did that need?

Networking on tiny machines

Posted May 7, 2014 16:21 UTC (Wed) by raven667 (subscriber, #5198) [Link]

I'm sure what's considered a "full" TCP/IP stack changes over time, for example I would consider IPv6 and cryptographically secure sequence numbers to be mandatory now.

Networking on tiny machines

Posted May 7, 2014 17:01 UTC (Wed) by epa (subscriber, #39769) [Link]

I'm not sure I would; much as we would wish the world to use IPv6 everywhere by now, it does not; and if you assume that anything important will be encrypted higher up the stack, then TCP sequence number spoofing is only a DoS attack, and there are plenty of those to pick from anyway.

Networking on tiny machines

Posted May 8, 2014 4:05 UTC (Thu) by drag (subscriber, #31333) [Link]

If you want to be able to do the 'smart dust' type thing then IPv6 is going to be needed, or something other than IPv4 at the very least. In fact it may be more useful in the long run to look at stripping out IPv4 support altogether. And it's not a 'long long run'... more like 3-5 years.

"Cryptographically secure sequence" addresses is a bit suspect. Does anybody actually use that stuff? Randomized addresses are much more accommodating to the 'small system' meme, though. No need to figure out any address: just find the network address and pick a number at random. Couldn't be any simpler.
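The "pick a number at random" scheme sketched above is straightforward: within a /64 there are 2^64 interface identifiers, so a uniformly random choice collides with an existing address only with negligible probability. A hedged Python illustration, using the 2001:db8:: documentation prefix; real stacks do something similar but more careful (RFC 4941 privacy addresses, RFC 7217):

```python
import ipaddress
import secrets

def random_host(prefix: str) -> ipaddress.IPv6Address:
    """Pick a random interface identifier inside the given prefix:
    'find the network address and pick a number at random'."""
    net = ipaddress.IPv6Network(prefix)
    offset = secrets.randbelow(net.num_addresses)  # random host portion
    return net.network_address + offset

addr = random_host("2001:db8:0:1::/64")
```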

Networking on tiny machines

Posted May 8, 2014 7:22 UTC (Thu) by kleptog (subscriber, #1183) [Link]

> In fact it may be more useful in the long run to look at stripping out IPv4 support altogether. And it's not a 'long long run'... more like 3-5 years.

I don't know. Internal corporate networks are moving even slower than I thought possible. RFC1918 addresses are ubiquitous and plentiful.

For consumer connections IPv6 is going to be necessary just due to the number of devices, but if you can hide an entire business behind a handful of IPs and use RFC1918 internally... I think the transition is going to take much longer, if ever in that context.

At home, I have a handful of devices using DHCP, switching to IPv6 is simple. At work I have dozens of machines, all talking to each other on RFC1918 addresses, which don't need to talk to the outside world, why would I ever switch? And if you do need something from the internet, HTTP proxies satisfy almost every need.

Networking on tiny machines

Posted May 8, 2014 13:43 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

>RFC1918 addresses are ubiquitous and plentiful.
Plentiful they are not.

There's really just one usable /8, which might seem like a lot, but once you start allocating addresses from it for a company with multiple sites and try to do VPN for remote access, it's almost inevitable that you'll have collisions with many CPEs.
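A back-of-the-envelope birthday-problem calculation supports the "almost inevitable" claim. Suppose, as an idealized assumption (real defaults cluster far more, which only makes things worse), that each site and CPE picks a /24 out of 10.0.0.0/8 (65,536 choices) independently and uniformly at random; a few hundred parties already make a collision more likely than not:

```python
import math

def collision_probability(parties: int, subnets: int = 2**16) -> float:
    """Birthday-problem estimate of the chance that at least two parties
    independently pick the same /24 out of 10.0.0.0/8 (2^16 /24s)."""
    n = parties
    return 1 - math.exp(-n * (n - 1) / (2 * subnets))

p = collision_probability(300)  # roughly even odds at ~300 parties
```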

Networking on tiny machines

Posted May 8, 2014 19:04 UTC (Thu) by drag (subscriber, #31333) [Link]

I know that Comcast, and I am guessing other operators of very large networks, has had to come up with schemes for internal NAT'ing and tunnels that involved tunneling IPv4 over IPv4, because they need multiple 10.0.0.0/8 networks to be able to address all their subnets and equipment.

Once you run out of IPv4 private addresses things start to get really ugly really quick.

Networking on tiny machines

Posted May 8, 2014 19:25 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Nope, Comcast simply got additional public IPs for their internal control plane: http://www.ipv4depletion.com/?p=493

Networking on tiny machines

Posted May 9, 2014 1:23 UTC (Fri) by drag (subscriber, #31333) [Link]

I don't think that lasted very long, then. 2010 is a very long time ago when it comes to internet addresses.

Nowadays I think it's most likely that Comcast, and probably others, have gone dual stack with IPv4 tunneling over IPv6. That's the most sane solution, and it opens up those big blocks of IPv4 space to be leased to customers.

For people that don't want to go that route there is always 'NAT444'.

http://www.networkworld.com/community/node/45776

Public IPv4 being NAT'd to private IPv4 networks using other private IPv4 networks. Given my experiences with my phone and various protocols on 'WAN' networks I can guess which direction many of the phone carriers decided to go.

Networking on tiny machines

Posted May 9, 2014 2:50 UTC (Fri) by rahvin (subscriber, #16953) [Link]

Carrier grade NAT is a reality in Europe and Asia, and I have no doubt we'll be seeing it implemented in the US in due course.

Someone like a government, or Google/Facebook with a lot of IP space and basically a key component of the internet, needs to step up and move completely to IPV6 and force everyone's hand. If major groups moved their services to IPv6 and refused to provide ipv4 services it could start the landslide that shifted the entire internet. If it's ever going to happen it needs to start soon, because there is still a lot of equipment out there that's not ipv6 compatible.

Networking on tiny machines

Posted May 9, 2014 7:56 UTC (Fri) by dlang (subscriber, #313) [Link]

so you are saying that some critical part of the internet should make itself inaccessible to 90%+ [1] of the internet for some unknown timeframe (but probably years) to force everyone to upgrade.

somehow I think that the competition would just step in and the company would go out of business in the meantime.

[1] last I heard, IPv6 traffic is somewhere in the 3-6% range, and every network stack I've heard of uses IPv6 in preference to IPv4 if it works

Networking on tiny machines

Posted May 9, 2014 8:24 UTC (Fri) by khim (subscriber, #9252) [Link]

last I heard, IPv6 traffic is somewhere in the 3-6% range

3.01% by Google's latest estimate. But it's growing quite strongly: at the same time last year it had barely crossed the 1% mark.

It's a well-known fact that the Internet only just works. This time (as every time previously) all attempts to postpone the switch were used in the very same way they were used in the past: to push the switch back a few years and do nothing in the meanwhile.

Only when screams “Aaargh. I need, really need XX IPv4 addresses or else my whole company will go down in flames” started getting calm “Oh, I'm so sorry that your company is going down in flames. Nice weather, isn't it?” response people started switching en-masse to IPv6.

Networking on tiny machines

Posted May 9, 2014 8:31 UTC (Fri) by dlang (subscriber, #313) [Link]

3% is hardly "people started switching en-masse". That's still the "early adopters" combined with a little bit of "people don't realize they're using it", where some network people set up IPv6 because "it's the right thing to do" (as opposed to any push from the users).

now, if your last paragraph was written in future mode rather than in past tense, then I could possibly agree with you. But I think that there is a LOT more room for 'temporary' fixes (including sales of IPv4 addresses) in the meantime.

Networking on tiny machines

Posted May 9, 2014 10:47 UTC (Fri) by jem (subscriber, #24231) [Link]

Check out the numbers on https://www.google.com/intl/en/ipv6/

The growth is steady, and there is a chance for the global percentage to jump to 6-7% before the end of the year. The 3% figure is for the whole Internet; the percentages for some countries are much bigger, e.g. USA 7.14%, Germany 8.38%, France 5.23%, Belgium 16.93%.

Networking on tiny machines

Posted May 10, 2014 14:12 UTC (Sat) by marcH (subscriber, #57642) [Link]

> 3% is hardly "people started switching en-masse"

It's not en-masse but it's millions: enough to prove it works on a massive scale.

> That's still in the "early adopters" combined with a little bit of "people don't realize they're using it"

I think the vast majority of people start using IPv6 when their ISP (and new Android version...) starts, which means they indeed don't even realize it.

Networking on tiny machines

Posted May 27, 2014 15:32 UTC (Tue) by krakensden (subscriber, #72039) [Link]

Networking on tiny machines

Posted May 13, 2014 0:33 UTC (Tue) by rahvin (subscriber, #16953) [Link]

You are correct, abandoning ipv4 isn't the right course; in retrospect, what I should have said was more along the lines of popping up a big scary warning that tells the user to call their ISP.

I'm sure half their users calling support because Google told them there is something wrong with their internet would do two things: the first is make the ISP hate Google with a passion, and the second is cause the ISP to ensure ipv6 is implemented and being used in preference to ipv4.

It's frustrating for me because I'm on Comcast business. Just a month ago I finally got a free modem upgrade to support DOCSIS 3 and ipv6 (I had to specifically request this upgrade). When I inquired about ipv6 support, which their own tools say is fully deployed on my CMTS, I was told it's in beta on the business side and the beta is closed. That beta was open to users last year. In other words, the only way I could use ipv6 is if I had had a modem that supported it and had requested to be part of a "beta" a year ago, when I didn't have a modem that supported it. I'd take sweet relish in Google or Facebook doing that to Comcast.

Networking on tiny machines

Posted May 9, 2014 15:43 UTC (Fri) by raven667 (subscriber, #5198) [Link]

> moved their services to IPv6 and refused to provide ipv4 services

That's not going to happen, not now, not ever. That's just not how the world works.

> Google/Facebook with a lot of IP space and basically a key component of the internet needs to step up and move completely to IPV6

This has pretty much happened, both Google and Facebook are dual-stack for their public facing properties, as are some of the major CDNs, Netflix and YouTube as well. There is a long tail of IPv4-only services that will exist for the next decade or two but all the highest traffic services are ready and waiting for clients to convert over.

> there is still a lot of equipment out there that's not ipv6 compatible

Not true of end-user devices like computers and phones, but it is true for many consumer routers. Even though the cable DOCSIS 3 standard mandates IPv6 support for the modem, a lot of routers will have to be replaced (a good business opportunity for router makers, really; hopefully we can shoe-horn CoDel into the new deployment as well).

> Carrier grade NAT is a reality

Even my organization is moving forward with a large NAT system but we are tying the deployment of NAT with the deployment of IPv6 because everything which routes directly (all the most popular web properties mentioned above) doesn't have to go through the NAT which greatly reduces the expense of it.

I expect these two factors to drive IPv6 for home users, it should be cheaper for ISPs to provision than expanding the NAT and it should be lower latency for customers where that matters like VoIP and gaming.

Networking on tiny machines

Posted May 10, 2014 14:17 UTC (Sat) by marcH (subscriber, #57642) [Link]

> I expect these two factors to drive IPv6 for home users, it should be cheaper for ISPs to provision than expanding the NAT and it should be lower latency for customers where that matters like VoIP and gaming.

Peer to peer.

The one thing that crazy/triple NATs break is peer to peer.

It would only take a couple of successful peer-to-peer applications (think Napster, Skype, some decentralized game,...) to force ISPs to implement IPv6.

So what is very effectively delaying IPv6 (forever?) is... "cloud computing".

Networking on tiny machines

Posted May 27, 2014 15:34 UTC (Tue) by krakensden (subscriber, #72039) [Link]

> some decentralized game

Many multiplayer console games, like Call of Duty, are; it saves on hosting costs. It's mostly invisible to the players though.

Networking on tiny machines

Posted May 27, 2014 15:54 UTC (Tue) by marcH (subscriber, #57642) [Link]

So, "carrier-grade NAT" (really need to find a more appropriate name for this kludge, preferably a funny one) is doomed?

Networking on tiny machines

Posted May 27, 2014 16:41 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

Maybe "[aircraft] carrier-sized gnat" problems?

Networking on tiny machines

Posted May 27, 2014 16:49 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Hmm... And guided high-velocity ammunition seems to be a great solution for both of them!

Networking on tiny machines

Posted May 8, 2014 14:39 UTC (Thu) by raven667 (subscriber, #5198) [Link]

I would think that any devices which need to communicate with the new "smart dust" would have IPv6 as an available feature, there are very few devices which aren't IPv6 capable, just many which don't currently have IPv6 enabled. It makes more sense to build toward the future than the past.

sequence numbers

Posted May 8, 2014 17:13 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

The cryptographically secure sequence numbers aren't for addressing; they're to avert an attack: http://en.wikipedia.org/wiki/TCP_sequence_prediction_attack

So yes, people use that. You would presumably be able to do without it on a closed network, but then again, strictly speaking you could choose to do without TCP/IP altogether on such a network. Diverging too far from normal risks losing most of the benefits of choosing TCP/IP in the first place.
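The standard mitigation for this attack (RFC 6528) derives the initial sequence number from a clock component plus a keyed hash of the connection 4-tuple, so an off-path attacker cannot predict it. A rough Python sketch of the idea; the hash function, tick rate, and key handling here are illustrative assumptions, not what Linux actually implements:

```python
import hashlib
import os
import time

SECRET = os.urandom(16)  # per-boot secret key (assumed kept private)

def initial_sequence_number(saddr: str, daddr: str,
                            sport: int, dport: int) -> int:
    """RFC 6528-style ISN: monotonic clock plus a keyed hash of the
    4-tuple, truncated to the 32-bit TCP sequence space."""
    clock = int(time.monotonic() * 250_000) & 0xFFFFFFFF  # ~4 us per tick
    conn = f"{saddr}|{daddr}|{sport}|{dport}".encode()
    digest = hashlib.sha256(SECRET + conn).digest()
    return (clock + int.from_bytes(digest[:4], "big")) & 0xFFFFFFFF
```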

sequence numbers

Posted May 8, 2014 19:00 UTC (Thu) by drag (subscriber, #31333) [Link]

Ok. Thanks for the correction. I was thinking you were referring to one of the more esoteric ipv6 addressing schemes.

Networking on tiny machines

Posted May 7, 2014 15:23 UTC (Wed) by johill (subscriber, #25196) [Link]

Not sure I understand davem's logic here - if some other OS with another IP stack is used, it seems it would also not implement those features that Andi was removing, so the "ubiquitous and in widespread use" thing won't happen anyway ...

Networking on tiny machines

Posted May 7, 2014 16:09 UTC (Wed) by proski (subscriber, #104) [Link]

I assume the logic is that if Linux won't budge, the manufacturers would have to increase the amount of memory to accommodate Linux. I doubt they would be happy to use VxWorks in watches and wearables.

Networking on tiny machines

Posted May 7, 2014 16:15 UTC (Wed) by javispedro (subscriber, #83660) [Link]

Maybe it is time to start considering Minix :)

Networking on tiny machines

Posted May 7, 2014 16:24 UTC (Wed) by epa (subscriber, #39769) [Link]

He went on to say "I wouldn't use Linux, end of story". So the other poster is quite right to point out that this contradicts the desire to have TCP fast open available on as many hosts as possible.

I wonder whether a minimal AES implementation written just for TCP fast open could be used instead of the full crypto subsystem.

Networking on tiny machines

Posted May 7, 2014 19:18 UTC (Wed) by fandingo (subscriber, #67019) [Link]

I think we've all learned recently that cryptosystems are seriously under-maintained and under-reviewed. The last thing we need is a special-purpose one that will only be used by a niche industry (embedded computing) that has never had acceptable attention to security.

Networking on tiny machines

Posted May 7, 2014 20:39 UTC (Wed) by epa (subscriber, #39769) [Link]

I agree; there are plenty of independently maintained and reviewed AES implementations which could be used.

Networking on tiny machines

Posted May 8, 2014 10:57 UTC (Thu) by intgr (subscriber, #39733) [Link]

This isn't really comparable. Implementing a symmetric cipher is quite straightforward and easy to test. Basically it's just doing arithmetic operations and table lookups in a loop.

But I'm sure they could just invoke the existing AES code directly without going through the kernel's crypto API, for similar savings in code size.

All the complexities and vulnerabilities in crypto libraries tend to come from protocol logic and data structure parsing, not the ciphers/primitives themselves.

> cryptosystems

Just a nitpick, "cryptosystem" refers to a set of algorithms for a single purpose (such as the RSA encryption cryptosystem, comprised of key generation, encryption and decryption).

Networking on tiny machines

Posted May 9, 2014 4:41 UTC (Fri) by jeff_marshall (subscriber, #49255) [Link]

Agreed; implementing a block cipher is stupid simple unless you are trying to mitigate the relevant side channel attacks, and for this use case side channels aren't a meaningful threat.

I've implemented AES myself in software for several different platforms on bare metal, and helped others to implement it in hardware. In all cases, it was pretty straightforward.

Networking on tiny machines

Posted May 7, 2014 18:54 UTC (Wed) by josh (subscriber, #17465) [Link]

That would be drastically overestimating the influence of a single objecting Linux kernel maintainer versus the maintenance of an out-of-tree branch that incorporates the patches. Note that almost every other kernel maintainer has been more than willing to merge patches to shrink the kernel.

The designers of devices and bills of material (BOMs) will fight for every last expense, and will not change their planned amounts of memory or storage when they can change software to be more efficient instead.

Networking on tiny machines

Posted May 9, 2014 16:28 UTC (Fri) by lacos (subscriber, #70616) [Link]

> The designers of devices and bills of material (BOMs) will fight for
> every last expense, and will not change their planned amounts of memory
> or storage when they can change software to be more efficient instead.

Indeed, there's a reason why they're called *hard*ware and *soft*ware.

Networking on tiny machines

Posted May 10, 2014 14:42 UTC (Sat) by marcH (subscriber, #57642) [Link]

> The designers of devices and bills of material (BOMs) will fight for every last expense, and will not change their planned amounts of memory or storage when they can change software to be more efficient instead.

For small software changes you are right. Good luck explaining the concept of "upstreaming code" to some hardware engineers...

If on the other hand you talk about switching to a different operating system, which is effectively a major product change, then of course everyone from top to bottom listens.

Networking on tiny machines

Posted May 7, 2014 19:13 UTC (Wed) by daniels (subscriber, #16193) [Link]

Which won't happen: compare the surface area required by the ARM Cortex-M (microcontroller subset of the usual ARMv7 architecture), to that required by even as little as 16MB of memory. Total non-starter.

Networking on tiny machines

Posted May 7, 2014 19:54 UTC (Wed) by gioele (subscriber, #61675) [Link]

> compare the surface area required by the ARM Cortex-M (microcontroller subset of the usual ARMv7 architecture), to that required by even as little as 16MB of memory. Total non-starter.

Could somebody provide these numbers for us non-experts?

What is the surface area of an ARM Cortex-M? And of 16MB of RAM (or other kinds of memory)?

Networking on tiny machines

Posted May 7, 2014 20:05 UTC (Wed) by daniels (subscriber, #16193) [Link]

I don't have the 16MB area off the top of my head, but ARM lists (for the core only) between 0.04-0.56mm² for the Cortex-M4, or 0.03-0.43mm² for the Cortex-M3.

TI doesn't ship their Cortex-Ms with anything more than 256kB of SRAM, which you have to say isn't likely to be due to the cost of that much memory ...

Networking on tiny machines

Posted May 8, 2014 5:11 UTC (Thu) by ncm (subscriber, #165) [Link]

I think I must be misreading those numbers: between 1/25 and 1/2 of a square mm, really? Depending on what? Process geometry? Amount of RAM? Cache size?

Networking on tiny machines

Posted May 8, 2014 9:52 UTC (Thu) by daniels (subscriber, #16193) [Link]

That's the floorplan area for just the core processor, so does not include any of the peripherals such as RAM, cache, display controller, etc.

Networking on tiny machines

Posted May 8, 2014 14:04 UTC (Thu) by jonnor (guest, #76768) [Link]

Depending on the process.
256kB SRAM is roughly the same die size as a Cortex M3 CPU core.
http://zeptobars.ru/en/read/MDR32F9Q2I-1986VE91T-whats-in...

Networking on tiny machines

Posted May 8, 2014 17:12 UTC (Thu) by yaap (subscriber, #71398) [Link]

There's a pretty big range of micro-controllers, from a stripped-down core without even an integer hardware multiplier up to uCs with an MMU, FPU, and L1 cache controllers. The core logic (no memory) goes from roughly 8 to 80 kgates.

A stripped-down core at ~8 kgates could be less than 0.05 mm² in 90nm (when synthesized for reduced size, not speed). Typically those uCs contain mixed logic and do not use the finer processes; it's more in the 65 to 180 nm range.

Then the ratio of SRAM density to logic density is very roughly 1.2: you can store 1.2 bits, on average, in the area used by a logic gate. Be careful, there are a lot of variations here depending on the chosen trade-off between area, power (dynamic and leakage), and speed, for both logic and memories. So only take this as a very rough order-of-magnitude number.

From that, you can see that even high-end uC logic at ~80 kgates is only as big as ~12 kB of SRAM (bytes, not bits, now). And such a high-end uC would be over-dimensioned for a basic connected sensor.
It may be surprising to some, but core logic is quite negligible compared to the attached memories. The logic size still does matter, but indirectly, as it's somewhat related to power efficiency.
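The rule of thumb above can be checked with quick arithmetic: at ~1.2 bits of SRAM per gate's worth of area, 80 kgates of logic occupies about the same die area as 12 kB of SRAM (the figures are the poster's rough numbers, not process data):

```python
BITS_PER_GATE_AREA = 1.2  # rough SRAM-density-to-logic ratio quoted above

def equivalent_sram_bytes(gates: int) -> float:
    """SRAM capacity (bytes) fitting in the die area of `gates` logic gates."""
    return gates * BITS_PER_GATE_AREA / 8

small_uc = equivalent_sram_bytes(8_000)    # ~1.2 kB for an ~8 kgate core
big_uc = equivalent_sram_bytes(80_000)     # ~12 kB for an ~80 kgate core
```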

There are several levels of M2M / IoT systems, and their implementation can be very different. There are also threshold effects on the memory sizes.

At the low end, Linux is overkill. All the memory is embedded in the die, and it must be very small (a few tens to hundreds of kB). There are many free OSes for this area (RTOSes, actually), and one of the most popular is FreeRTOS.

At the other end, for high-performance / fancy devices, Linux usually makes perfect sense and its size is not an issue. There's a big threshold effect: if one needs an external SDRAM, the smallest/cheapest long-term-supported size nowadays would be 128 MB LPDDR2. Linux fits without problem, and there is absolutely no point in optimizing the IP stack size.
This memory could cost ~$0.85 in the standard temperature range, and ~$1 in the industrial range.

Then in between there is an area where optimizing Linux may be useful, but it's not a given. Here too there are threshold effects. For medium systems one could use, for example, an external chip containing some Flash and pSRAM (SDRAM made to look like SRAM to the uC, so the uC doesn't need an SDRAM controller). That typically goes up to 8 MB. It's cheaper, but you can't pick just any size: if you need a bit over 2 MB, then it's 4 MB, say. It's not practical to embed such a big SRAM in the uC die, as those dies don't use small nodes (see above).
Maybe optimizing can get one down to the smaller size, but then one has to compare the effort to the saving; you would need a high volume to justify it.
And then, one can question whether Linux is the best choice there. If footprint is this critical one could use a rich RTOS. There are also free options; see eCos (http://ecos.sourceware.org, bought by Red Hat if my memory serves well). It's leaner than Linux, due to being simpler too of course, but enough for a simple connected device.

So I think DaveM's reaction makes sense. Pushing upstream the burden of maintaining a special configuration of Linux, for what looks like a corner case where using a different OS may even be better from a BoM-cost point of view, doesn't seem justified at this stage. Letting someone prove the case first, by selling real products based on such optimization in volume, seems a sensible approach to me. I'm not holding my breath, to be frank.

Networking on tiny machines

Posted May 10, 2014 11:38 UTC (Sat) by mgedmin (subscriber, #34497) [Link]

Thank you, this was very interesting.

Networking on tiny machines

Posted May 10, 2014 14:38 UTC (Sat) by marcH (subscriber, #57642) [Link]

> I think I must be misreading those numbers: between 1/25 and 1/2 of a square mm, really? Depending on what? Process geometry? Amount of RAM? Cache size?

For pure comparisons between features/blocks I think it's easier to look at the number of gates, which is independent of the process.

Networking on tiny machines

Posted May 8, 2014 14:34 UTC (Thu) by jonnor (guest, #76768) [Link]

SRAM die sizes are roughly 0.5 mm^2/Mbit on a sub-40nm process, or around 64 mm^2 for 16 MB.
http://techon.nikkeibp.co.jp/english/NEWS_EN/20130226/268...
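That figure checks out arithmetically; a quick sketch of the conversion, using the 0.5 mm^2/Mbit density estimate from the comment above:

```python
# Back-of-envelope SRAM die area at ~0.5 mm^2 per Mbit (sub-40nm estimate).
MM2_PER_MBIT = 0.5

def sram_area_mm2(megabytes: float) -> float:
    """Approximate die area in mm^2 for a given amount of SRAM."""
    megabits = megabytes * 8          # 16 MB = 128 Mbit
    return megabits * MM2_PER_MBIT

print(sram_area_mm2(16))   # 16 MB of SRAM -> 64.0 mm^2
print(sram_area_mm2(2))    # the 2 MB target discussed in the article -> 8.0 mm^2
```

The same math shows why even a 2 MB on-die SRAM is a significant chunk of silicon on the larger process nodes microcontrollers actually use.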

Networking on tiny machines

Posted May 7, 2014 16:11 UTC (Wed) by epa (subscriber, #39769) [Link]

I didn't understand that: they say "we want to move away from raw sockets", but at the same time a patch to allow turning off raw sockets was rejected.
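For context, the "ping socket" feature mentioned in the article is the unprivileged ICMP socket type (SOCK_DGRAM with IPPROTO_ICMP), which lets a ping utility send echo requests without being setuid and without opening a raw socket. A minimal probe for it, assuming a Linux kernel where the net.ipv4.ping_group_range sysctl includes the caller's group (many distributions ship with it disabled):

```python
import socket

def ping_socket_available() -> bool:
    """Probe for the unprivileged ICMP 'ping socket' (no root, no raw socket)."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)
    except OSError:
        # EACCES when net.ipv4.ping_group_range excludes this process's group;
        # other errors if the kernel lacks the feature entirely.
        return False
    s.close()
    return True

print(ping_socket_available())
```

When the open fails, a ping utility has to fall back to a raw socket, which requires root or CAP_NET_RAW; that is the trade-off Andi's patch to make the feature optional touched on.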

Networking on tiny machines

Posted May 7, 2014 18:56 UTC (Wed) by josh (subscriber, #17465) [Link]

And in particular, the various rejections claiming that embedded systems will want every last feature of the existing networking stack do not match up with the real-world embedded systems that explicitly don't.

Networking on tiny machines

Posted May 7, 2014 21:48 UTC (Wed) by smurf (subscriber, #17840) [Link]

Not to mention that many of those systems will not be connected to the Big Bad Internet. They'll be on a tightly-constrained firewalled-off subnet where near-realtime responsiveness on embedded hardware is much more relevant than having crypto available. Or a packet filter. Heck, even IP fragmentation+reassembly makes no sense whatsoever on a restricted LAN with no outside traffic.

Networking on tiny machines

Posted May 7, 2014 22:14 UTC (Wed) by PaulWay (subscriber, #45600) [Link]

Please don't say that :-) Because as we've seen from SCADA systems, things that people say "no-one would ever connect this directly to the internet with no firewall" will... be connected directly to the internet with no firewall. People are lazy. Security is everyone's concern.

Have fun,

Paul

Networking on tiny machines

Posted May 7, 2014 22:21 UTC (Wed) by josh (subscriber, #17465) [Link]

Even if these systems are connected to the Internet, if they don't have a packet filter configured, they might as well not have packet filtering compiled in.

Networking on tiny machines

Posted May 8, 2014 13:51 UTC (Thu) by k3ninho (subscriber, #50375) [Link]

Of course I was going to put my fridge-freezer on the public internet*! How else would it tweet that I left the door open and all the contents were ruined? It might be a tough ask to get a Twitter client inside 2MB of RAM, TBH. :-)

*: However well you idiot-proof the world, nature breeds a better class of idiot.

K3n.

Networking on tiny machines

Posted May 8, 2014 19:17 UTC (Thu) by drag (subscriber, #31333) [Link]

People need to stop considering private networks as secure. Unless there is an air-wall between it and the outside networks, there are just too many holes in and out to automatically assume that nobody is able to get in and attack vulnerable systems or insecure protocols.

Networking on tiny machines

Posted May 11, 2014 0:28 UTC (Sun) by eean (guest, #50420) [Link]

Talk to Iranian nuclear researchers about air wall security. :p

Mostly while the Linux kernel shouldn't be insecure itself, its role in providing security is for sure not always needed.

Networking on tiny machines

Posted May 22, 2014 14:16 UTC (Thu) by quanstro (guest, #77996) [Link]

it might be a mistake to over-generalize from a targeted attack on a high-value asset.

Networking on tiny machines

Posted May 7, 2014 17:43 UTC (Wed) by ibukanov (subscriber, #3942) [Link]

It is interesting that the idea of a user-space IP stack is useful not only for tiny systems, but also for those that want to handle 1e6 connections with minimal latency. Perhaps indeed the extremes cannot be handled by a common stack in the kernel.

Networking on tiny machines

Posted May 10, 2014 21:23 UTC (Sat) by marcH (subscriber, #57642) [Link]

> It is interesting that an idea of a user-space IP stack is useful not only for tiny systems, but for those that wants to handle 1e6 connections with minimal latency.

Done at least here: http://www.shenick.com/products/diversifeye/ ; possibly also elsewhere.

Networking on tiny machines

Posted May 16, 2014 18:36 UTC (Fri) by piotrjurkiewicz (guest, #96438) [Link]

Luigi Rizzo's netmap goes that way: http://info.iet.unipi.it/~luigi/netmap/

I think such an approach could be a solution to the problems mentioned in the article too. I mean implementing some kind of lightweight kernel interface between the NIC and userspace. Then the user would be able to disable kernel networking and use his own userspace networking stack (either a tiny one or a performance-oriented one) in extreme cases.

Networking on tiny machines

Posted May 16, 2014 18:41 UTC (Fri) by dlang (subscriber, #313) [Link]

that's already available; that's what LWIP is, which Andi mentioned in his original post. It's not that small, and it only takes a few apps before it becomes larger than the kernel version (and only a couple before it's larger than Andi's stripped-down version).

Networking on tiny machines

Posted May 7, 2014 18:22 UTC (Wed) by boog (subscriber, #30882) [Link]

I completely agree with the many suggestions in this article that the rejections risk being short-sighted. One only needs to look at what has happened to Windows. They made the bet that computers would only become more powerful and have struggled for years with netbooks and tablets, etc.

Networking on tiny machines

Posted May 7, 2014 20:41 UTC (Wed) by epa (subscriber, #39769) [Link]

Computers become more powerful but software also becomes more bloated over time at nearly the same rate, so if your operating system runs like a dog on low-end hardware today that will likely still be the case in five years.

Networking on tiny machines

Posted May 10, 2014 14:36 UTC (Sat) by marcH (subscriber, #57642) [Link]

> One only needs to look at what has happened to Windows. They made the bet that computers would only become more powerful and have struggled for years with netbooks and tablets, etc.

Computers have obviously become much more powerful. The problem is that Microsoft made a much bigger bet than this: they made the bet that the power of computers would grow much FASTER than Windows bloat.

With Windows Phone it looks like they finally learned how to trim the bloat. But too little, too late. Also, a very poor choice of name. They were probably too proud of it, which shows how they've lost it.

Networking on tiny machines

Posted May 10, 2014 20:44 UTC (Sat) by giraffedata (subscriber, #1954) [Link]

> One only needs to look at what has happened to Windows. They made the bet that computers would only become more powerful and have struggled for years with netbooks and tablets, etc.

> Computers have obviously become much more powerful.

You seem to have missed the point, because computers have not become universally more powerful. Several entire classes of computers less powerful than previously common ones have been introduced and taken over much of the workload of those previous ones. So even if Microsoft hadn't added any bloat at all, it still would have lost market share. To stay even, it would have had to do what this article reports is proposed for Linux: take stuff out.

Networking on tiny machines

Posted May 10, 2014 20:56 UTC (Sat) by marcH (subscriber, #57642) [Link]

Even some micro-controllers today are more powerful than the first systems Windows ran on...

Networking on tiny machines

Posted May 10, 2014 21:23 UTC (Sat) by giraffedata (subscriber, #1954) [Link]

> Even some micro-controllers today are more powerful than the first systems Windows ran on...

True, but I don't know how that's relevant here, because the claim was that Microsoft bet that computers would continually get more powerful, not that they would always be more powerful than the first ones Windows ran on.

Actually, thinking about it some more, I realize that one of these backward steps happened well before the netbooks and tablets. It happened with laptop computers. The first laptops were far less powerful in every dimension than previously typical computers they replaced. They didn't even have color displays.

But I don't know how Windows fared in that transition.

Networking on tiny machines

Posted May 10, 2014 22:30 UTC (Sat) by raven667 (subscriber, #5198) [Link]

> But I don't know how Windows fared in that transition.

This is a good point I hadn't seen mentioned before. What happened was that people ran Windows on both laptops and desktops, so Windows didn't suffer here; but people were highly aware that laptops were inferior in almost every way, so it was laptop sales that suffered: only people who valued portability above all else used them. Laptops only became generally popular in the mid-2000s, when their specs finally caught up to desktops, or at least when single-core speeds leveled off.

Since we are on the subject, it should be mentioned that the whole netbook category of computers was designed for Linux first, and in the competitive marketplace traditional Linux desktops lost fair and square: people returned them in droves.

Networking on tiny machines

Posted May 15, 2014 22:08 UTC (Thu) by Wol (guest, #4433) [Link]

People returned them in droves? I think you've been taken in by propaganda!

There was the highly publicised example of one manufacturer. I think they claimed "the majority of our Linux netbooks have been returned", only for somebody else to point out that they didn't make any!!!

And when I asked (at ASDA, our Walmart subsidiary) I was told that they'd had pretty much NO returns at all. That was where I bought my netbook, which unfortunately I broke: thanks to wifely pressure I ended up trying to install XP on it, and somehow that trashed the BIOS :-(

No - there was a lot of propaganda to try and make linux netbooks look bad, but as far as I can make out, every time anybody dug into the figures they discovered somebody was lying with statistics ...

Cheers,
Wol

Networking on tiny machines

Posted May 7, 2014 21:20 UTC (Wed) by dlang (subscriber, #313) [Link]

what was even more bothersome to me was his statement saying to just use a 2.4 kernel and, when questioned on this, his response of (paraphrased) "so you are saying that the 2.4 maintainer is incompetent".

Linux CAN be made to run small

Posted May 7, 2014 21:42 UTC (Wed) by tbird20d (subscriber, #1901) [Link]

It's a shame to see these patches rejected for such specious reasons. Reports from ELC indicate that the kernel can be made to run in small-footprint devices with useful functionality.

The micro-Yocto project has shown a recent kernel running in under 2 MB of RAM, with networking (and presumably with these patches). Vitaly Wool presented some information about running Linux on a system with 2 MB of flash and 256 KB of RAM (but the kernel was a bit old: 2.6.33).

Here are links to their presentations:

It would be nice to be able to share this work widely, by getting it upstream.

Networking on tiny machines

Posted May 7, 2014 23:24 UTC (Wed) by liam (subscriber, #84133) [Link]

L4-based kernels, such as Fiasco.OC, seem a better choice for such hardware.
They require more development to bring up, but they can be customized to an incredible degree.

Networking on tiny machines

Posted May 8, 2014 1:38 UTC (Thu) by ras (subscriber, #33059) [Link]

I wish I could find the patches.

I'm guessing they put the feature under the control of a CONFIG_XXX setting in the kernel build system. Hell, he might even have put it under "Minimal Linux ABI" to warn the user that, if these features are enabled, they would break the conventional userspace ABI.

If so, I'm struggling to understand DaveM's NACKs, particularly his reasons. I could have understood "sorry, supporting this in the future will be a nightmare". But "I don't want a 1/2-working networking stack"? Maybe he doesn't, but when the choice is between not using Linux at all and a 1/2-working networking stack, some people are going to take the latter. His response of "no you don't" sounds a tad condescending.

Networking on tiny machines

Posted May 8, 2014 1:43 UTC (Thu) by ras (subscriber, #33059) [Link]

Apologies to DaveM. He did make that argument: http://lwn.net/Articles/597626/

Networking on tiny machines

Posted May 8, 2014 13:57 UTC (Thu) by simlo (guest, #10866) [Link]

But isn't testing of a special configuration left to the users of that configuration? That way there is no testing burden on the kernel developers.

But there is a solution: Maintain a branch out of the main tree. If the patches are of no burden to the network developers, then it ought to be easy to merge to and maintain the branch.

Networking on tiny machines

Posted May 8, 2014 15:29 UTC (Thu) by PaulMcKenney (subscriber, #9624) [Link]

Good point, similar to the -rt tree.

Networking on tiny machines

Posted May 8, 2014 19:01 UTC (Thu) by simlo (guest, #10866) [Link]

Let us hope there will be no conflicts when merging -tinynet and -rt then.

Feature management is best done with config options, not branches. On the other hand, maintaining special cases on branches lets the core developers concentrate on core functionality, not the special cases.

Networking on tiny machines

Posted May 8, 2014 19:25 UTC (Thu) by PaulMcKenney (subscriber, #9624) [Link]

I am not all that worried about conflicts between -rt and -tinynet. After all, we manage frequent conflicts between well over 100 developer trees as it is. And separate trees allow large changes to be tested, refined, and otherwise improved without mainline breakage.

Networking on tiny machines

Posted May 10, 2014 21:26 UTC (Sat) by marcH (subscriber, #57642) [Link]

> Feature management is done best by config options not branches.

It's one of the most classic configuration-management trade-offs. Only simple problems have simple answers here.

Networking on tiny machines

Posted May 13, 2014 19:40 UTC (Tue) by mwsealey (subscriber, #71282) [Link]

Question: 2MB of writable memory or 2MB of XIP flash?

Can we even think of an application that justifies a 2MB kernel (or even a 1MB kernel!?) that wouldn't also take up more than 2MB?

Networking on tiny machines

Posted May 13, 2014 19:47 UTC (Tue) by dlang (subscriber, #313) [Link]

well, if turning on networking by itself costs 400 KB, that's 40% of your 1 MB kernel before you do anything else. So it's going to be pretty easy to hit that limit.

Networking on tiny machines

Posted May 27, 2014 18:17 UTC (Tue) by garyamort (guest, #93419) [Link]

I think it's been pretty clear for quite a while that the gatekeepers for Linux have, for quite a number of years, been focused primarily on servers and the bright shiny 16-CPU machines their employers buy them.

BFS was declared a failure because it didn't handle 16+ CPUs well... which, while that may have been common for Linux kernel coders back in 2009, is not common for most people.

Android devs realized quite early that enhancements to make Linux function well on embedded devices would be difficult to get accepted, so they made the sensible choice of writing complete replacements for Linux functions rather than trying to maintain a patch set against libraries which kernel devs could change, breaking the patches. This seems to have upset some kernel devs: instead of playing catch-up and stroking their egos, the Android devs just side-stepped them.

It's really not that big of a deal though: in the end, all that really happens is that Linux gets forked into a specialized version, and if the distro becomes popular enough, kernel devs eat crow and grudgingly accept the code they previously rejected.

Sure, in the end a lot of kernel devs end up looking like incompetent twits when their "reasoned arguments" turn out to be a bunch of narcissistic ego-stroking, but Linux itself continually improves by absorbing the code that works well, while at the same time avoiding having to handle all the fiddly debugging.

Networking on tiny machines

Posted May 27, 2014 19:08 UTC (Tue) by dlang (subscriber, #313) [Link]

the huge changes the Android devs have been making aren't in the kernel, they are in userspace.

There have been a handful of features that they've added to the kernel, and after a lot of discussion those are getting integrated (or replaced with something the upstream devs and the android devs can agree on)

So at best you are mixing issues.

Networking on tiny machines

Posted May 27, 2014 22:09 UTC (Tue) by renox (subscriber, #23785) [Link]

> the huge changes the Android devs have been making aren't in the kernel, they are in userspace.

Uh? Those changes are not relevant here, as the GP was talking about Linux.

> There have been a handful of features that they've added to the kernel, and after a lot of discussion those are getting integrated (or replaced with something the upstream devs and the android devs can agree on). So at best you are mixing issues.

Given the time and the difficulty of getting these "handful of features" integrated into the mainline, I'd say that he is right.

Networking on tiny machines

Posted May 28, 2014 0:01 UTC (Wed) by dlang (subscriber, #313) [Link]

well, the OP talks about replacing functions instead of maintaining patches against libraries.

what libraries does the kernel maintain?

It's far from obvious that this is talking about kernel functions.

Networking on tiny machines

Posted May 28, 2014 6:03 UTC (Wed) by smurf (subscriber, #17840) [Link]

There are a couple of reasons why wakelocks or the Binder haven't been accepted as they are, and those have nothing to do with mobile phones' small footprints, memory or otherwise.

Don't twist history.

Networking on tiny machines

Posted May 28, 2014 14:43 UTC (Wed) by raven667 (subscriber, #5198) [Link]

You are right that the reason these features were rejected as-is is not the small resources of mobile devices. It has a lot to do with the fact that the kernel has many, many constituencies, a large number of which are big iron, and any new feature has to be workable across the entire gamut of places where Linux is used. Features have to integrate well with the overall scheme, layout, and style of the kernel, can't cause too many regressions on other, completely unrelated systems, and should be generally useful to those other systems as well. That is why the specific code wasn't pulled in: it was too bolted-on and single-use. But the fact that these features were written provides an example of what is possible and what is needed, so they were re-implemented in ways that had broader consensus across the Linux kernel ecosystem.

Networking on tiny machines

Posted May 29, 2014 7:43 UTC (Thu) by smurf (subscriber, #17840) [Link]

That being said, IMHO the demand for shrinking the kernel back down to something that works well on single-digit MByte machines is a reasonable goal.

It'll probably live in its own tree before getting into mainline, but then the linux-tiny patchset of earlier times did the same thing. So did/do the RT patches, for that matter.

Networking on tiny machines

Posted May 31, 2014 4:22 UTC (Sat) by garyamort (guest, #93419) [Link]

"any new features have to be workable across the entire gamut": no, they don't. In the case of things like various schedulers and power-control management, as long as the userspace API is identical they don't need to work across all systems.

The i/o scheduler for example has many different patterns, some of which work better under different circumstances.

Multiple processor schedulers could be included in the kernel as well, rather than requiring constantly updated patch sets for every new kernel.

For an area I am familiar with, just browse through the ARM directories:
https://github.com/torvalds/linux/tree/master/arch/arm

Lots of kernel bits there for custom chips which are only found in specific hardware devices.

Also, since there seems to be some confusion: I'm not saying a lot of kernel devs are hostile to mobile phones or whatnot. I'm saying that, for the most part, they're personally concerned with large, multiprocessor servers and high-end hardware. If a new feature doesn't have some effect on what they're interested in, it is quite often dismissed. Consider the comments quoted in this article.

Changes which would provide real benefits today, but "don't further" some future goal (which historically can take years to be implemented), and which don't benefit the personal use cases of the maintainer: rejected.

Changes which provide performance benefits in real use cases for the submitter are rejected because the maintainer guesses that they won't apply to others, while at the same time admitting that he wouldn't use Linux for any of those use cases, so he is not in a good position to judge.

Networking on tiny machines

Posted May 31, 2014 7:56 UTC (Sat) by dlang (subscriber, #313) [Link]

The question of schedulers has been discussed many times. This is one of those things that Linus has very strong feelings on.

He's seen other systems start to have vastly different schedulers for different purposes, and the end result is a lot of confusion: each one is tailored for a specific use case, but that use case doesn't match what users actually need to do (and/or users end up needing to do a little bit of multiple different types of work, for each of which the scheduler developer says "well, if you want to do that, don't use my scheduler").

and I'll say that the current disk schedulers seem to have this problem, although there is a little bit of a difference in that sometimes the disk controller does some of the work for the kernel, so there is more of a case for the noop scheduler; there isn't the equivalent for the task scheduler.


Copyright © 2014, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds