Networking on tiny machines
Shrinking the network stack
The patch set in question was a 24-part series from Andi Kleen adding an option to build a minimally sized networking subsystem. Andi is looking at running Linux on systems with as little as 2MB of memory installed; on such systems, the Linux kernel's networking stack, which weighs in at about 400KB for basic IPv4 support, is just too big to shoehorn in comfortably. By removing a lot of features, changing some data structures, and relying on the link-time optimization feature to remove the (now) unneeded code, Andi was able to trim things down to about 170KB. That seems like a useful reduction, but, as we will see, these changes have a rough road indeed ahead of them before any potential merge into the mainline.
Some of the changes in Andi's patch set include:
- Removal of the "ping socket" feature that allows a non-setuid ping utility to send ICMP echo packets. It's a useful feature in a general-purpose distribution, but it's possibly less useful in a single-purpose tiny machine that may not even have a ping binary. Nonetheless the change was rejected: "We want to move away from raw sockets, and making this optional is not going to help us move forward down that path."
- Removal of raw sockets, saving about 5KB of space. Rejected: "Sorry, you can't have half a functioning ipv4 stack."
- Removal of the TCP fast open feature. That feature takes about 3KB to implement, but it also requires the kernel to have the crypto subsystem and AES code built in. Rejected: "It's for the sake of the remote service not the local client, sorry I'm not applying this, it's a facility we want to be ubiquitous and in widespread use on as many systems as possible."
- Removal of the BPF packet filtering subsystem. Rejected: "I think you highly underestimate how much 'small systems' use packet capturing and thus BPF."
- Removal of the MIB statistics collection code (normally accessed via /proc) when /proc is configured out of the kernel. Rejected: "Congratulations, you just broke ipv6 device address netlink dumps amongst other things."
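Changes like these would presumably sit behind Kconfig options so that general-purpose builds are unaffected. A hypothetical sketch of what such a configuration could look like (the symbol names here are illustrative, not taken from Andi's actual series):

```kconfig
config NET_TINY
	bool "Build a minimally sized network stack"
	depends on EXPERT
	help
	  Trade optional networking features for a smaller kernel
	  image on memory-constrained systems.

config IP_PING_SOCKETS
	bool "Ping sockets (unprivileged ICMP echo)" if NET_TINY
	default y
	help
	  Say N only on single-purpose systems that will never run
	  a ping utility.
```

With the "if NET_TINY" guard, the option is invisible (and forced on) in normal configurations, so only tiny-system builders ever see the question.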
The above list could be made much longer, but the point should be apparent by now: this patch set was not welcomed by the networking community with open arms. This community has been working with a strong focus on performance and features on contemporary hardware; networking developers (some of them, at least), as Eric Dumazet made clear in the discussion, do not want to be bothered with the challenges of trying to accommodate users of tiny systems.
The networking developers also do not want to start getting bug reports from users of a highly pared-down networking stack wondering why things don't work anymore. Some of that would certainly happen if a patch set like this one were to be merged. One can try to imagine which features are absolutely necessary and which are optional on tiny systems, but other users solving different problems will come to different conclusions. A single "make it tiny" option has a significant chance of providing a network stack with 99% of what most tiny-system users need — but the missing 1% will be different for each of those users.
Should we even try?
Still, pointing out some difficulties inherent in this task is different
from saying that the kernel should not try to support small systems at all,
but that appears to be the message coming from the networking community.
At one point in the discussion, Andi posed a
direct question to networking maintainer David Miller: "What
parts would you remove to get the foot print down for a 2MB single purpose
machine?
" David's answer was simple:
"I wouldn't use Linux, end of story. Maybe two decades ago, but not
now, those days are over.
" In other words, from his point of view,
Linux should not even try to run on machines of that class; instead, some
sort of specialty operating system should be used.
That position may come as a bit of a surprise to many longtime observers of the Linux development community. As a general rule, kernel developers have tried to make the system work on just about any kind of hardware available. The "go away and run something else" answer has, on rare occasion, been heard with regard to severely proprietary and locked-down hardware, but, even in those cases, somebody usually makes it work with Linux. In this case, though, there is a class of hardware that could run Linux, with users who would like to run Linux, but some kernel developers are telling them that there is no interest in adding support for them. This is not a message that is likely to be welcomed in those quarters.
Once upon a time, vendors of mainframes laughed at minicomputers — until many of their customers jumped over to the minicomputer market. Minicomputer manufacturers treated workstations, personal computers, and Unix as toys; few of those companies are with us now. Many of us remember how the proprietary Unix world treated Linux in the early days: they dismissed it as an underpowered toy, not to be taken seriously. Suffice to say that we don't hear much from proprietary Unix now. It's a classic Innovator's Dilemma story of disruptive technologies sneaking up on incumbents and eating their lunch.
It is not entirely clear that microscopic systems represent this type of disruptive technology; the "wait for the hardware to grow up a bit" approach has often worked well for Linux in the past. It is usually safe to bet on computing hardware increasing in capability over time, so effort put into supporting underpowered systems is often not worth it. But we may be dealing with a different class of hardware here, one where "smaller and cheaper" is more important than "more powerful." If these systems can be manufactured in vast numbers and spread like "smart dust," they may well become a significant part of the computing substrate of the future.
So the possibility that tiny systems could be a threat to Linux should
certainly be considered. If Linux is not running on
those devices, something else will be. Perhaps it will be a Linux kernel
with the networking stack replaced entirely by a user-space stack like lwIP, or perhaps it
will be some other free operating system whose community is more interested
in supporting this hardware. Or, possibly, it could be something
proprietary and unpleasant. However things go, it would be sad to look
back someday and realize that the developers of Linux could have made the
kernel run on an important class of machines, but they chose not to.
| Index entries for this article | |
|---|---|
| Kernel | Embedded systems |
| Kernel | Networking |
Posted May 7, 2014 14:30 UTC (Wed)
by rossburton (subscriber, #7254)
[Link]
"There were proposals to instead use LWIP in user space. LWIP with its socket interface comes in at a bit over 100k overhead per application."
Andi's current series is 170K for kernel-based networking, and can possibly go further.
Posted May 7, 2014 15:09 UTC (Wed)
by stefanha (subscriber, #55072)
[Link] (26 responses)
The stack also doesn't match the performance of Linux simply because it doesn't use as many buffers and doesn't implement all the advanced TCP/IP features.
Posted May 8, 2014 4:05 UTC (Thu)
by drag (guest, #31333)
[Link] (22 responses)
The "cryptographically secure sequence" for addresses is a bit suspect. Does anybody actually use that stuff? Randomized addresses are much more accommodating to the 'small system' meme, though. No need to figure out any address; just find the network address and pick a number at random. Couldn't be any simpler.
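The "pick a number at random" scheme described above is roughly what IPv6 privacy addresses (RFC 4941) do anyway. A minimal sketch of the idea, using only the Python standard library (the prefix is from the documentation range, purely illustrative):

```python
import ipaddress
import secrets

def random_host_address(prefix: str) -> ipaddress.IPv6Address:
    """Pick a random host address inside the given IPv6 prefix."""
    net = ipaddress.IPv6Network(prefix)
    host_bits = 128 - net.prefixlen
    # Draw the interface identifier at random; no per-host state or
    # coordination is needed.
    return net.network_address + secrets.randbits(host_bits)

addr = random_host_address("2001:db8:1234:5678::/64")
```

With 64 random host bits the chance of two nodes colliding is negligible, and duplicate address detection catches the rare exception.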
Posted May 8, 2014 7:22 UTC (Thu)
by kleptog (subscriber, #1183)
[Link] (19 responses)
I don't know. Internal corporate networks are moving even slower than I thought possible. RFC1918 addresses are ubiquitous and plentiful.
For consumer connections IPv6 is going to be necessary just due to the number of devices, but if you can hide an entire business behind a handful of IPs and use RFC1918 internally... I think the transition is going to take much longer, if ever in that context.
At home, I have a handful of devices using DHCP, switching to IPv6 is simple. At work I have dozens of machines, all talking to each other on RFC1918 addresses, which don't need to talk to the outside world, why would I ever switch? And if you do need something from the internet, HTTP proxies satisfy almost every need.
Posted May 8, 2014 13:43 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (17 responses)
There's really just one usable /8, which might look like a lot, but once you start allocating addresses from it for a company with multiple sites and try to do VPN for remote access, it's almost inevitable that you'll have collisions with many CPEs.
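The collision problem is easy to demonstrate. The subnets below are hypothetical but typical (many home routers default to a subnet carved from the same RFC 1918 space a company uses):

```python
import ipaddress

# A corporate allocation and a typical home-router (CPE) default subnet,
# both carved out of the 10.0.0.0/8 private range.
corp = ipaddress.IPv4Network("10.0.0.0/16")
cpe = ipaddress.IPv4Network("10.0.1.0/24")

# When these overlap, a VPN client on the CPE's LAN cannot unambiguously
# route to the corporate 10.0.1.x hosts.
print(corp.overlaps(cpe))  # True
```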
Posted May 8, 2014 19:04 UTC (Thu)
by drag (guest, #31333)
[Link] (16 responses)
Once you run out of IPv4 private addresses things start to get really ugly really quick.
Posted May 8, 2014 19:25 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (15 responses)
Posted May 9, 2014 1:23 UTC (Fri)
by drag (guest, #31333)
[Link] (14 responses)
Nowadays I think that it's most likely Comcast, and probably others, have gone dual stack with IPv4 tunneling over IPv6. That's the most sane solution and it opens up those big blocks IPv4 space to be leased to customers.
For people that don't want to go that route there is always 'NAT444'.
http://www.networkworld.com/community/node/45776
Public IPv4 being NAT'd to private IPv4 networks using other private IPv4 networks. Given my experiences with my phone and various protocols on 'WAN' networks I can guess which direction many of the phone carriers decided to go.
Posted May 9, 2014 2:50 UTC (Fri)
by rahvin (guest, #16953)
[Link] (13 responses)
Someone like government or Google/Facebook with a lot of IP space and basically a key component of the internet needs to step up and move completely to IPV6 and force everyone's hand. If major groups moved their services to IPv6 and refused to provide ipv4 services it could start the landslide that shifted the entire internet. If it's going to ever happen it needs to start soon because there is still a lot of equipment out there that's not ipv6 compatible.
Posted May 9, 2014 7:56 UTC (Fri)
by dlang (guest, #313)
[Link] (6 responses)
somehow I think that the competition would just step in and the company would go out of business in the meantime.
[1] last I heard, IPv6 traffic is somewhere in the 3-6% range, and every network stack I've heard of uses IPv6 in preference to IPv4 if it works
Posted May 9, 2014 8:24 UTC (Fri)
by khim (subscriber, #9252)
[Link] (4 responses)
3.01% by Google's latest estimate. But it's growing quite strongly: at the same time last year it had barely crossed the 1% mark. It's a well-known fact that the Internet only just works. This time (as every time previously) all attempts to postpone the switch were used in the very same way they were used in the past: to push the switch back a few years and do nothing in the meanwhile. Only when the screams of "Aaargh. I need, really need XX IPv4 addresses or else my whole company will go down in flames" started getting the calm "Oh, I'm so sorry that your company is going down in flames. Nice weather, isn't it?" response did people start switching en masse to IPv6.
Posted May 9, 2014 8:31 UTC (Fri)
by dlang (guest, #313)
[Link] (3 responses)
now, if your last paragraph was written in future mode rather than in past tense, then I could possibly agree with you. But I think that there is a LOT more room for 'temporary' fixes (including sales of IPv4 addresses) in the meantime.
Posted May 9, 2014 10:47 UTC (Fri)
by jem (subscriber, #24231)
[Link]
Check out the numbers on https://www.google.com/intl/en/ipv6/
The growth is steady, and there is a chance for the global percentage to jump to 6-7% before the end of the year. The 3% figure is for the whole Internet; the percentages for some countries are much bigger, e.g. USA 7.14%, Germany 8.38%, France 5.23%, Belgium 16.93%.
Posted May 10, 2014 14:12 UTC (Sat)
by marcH (subscriber, #57642)
[Link] (1 responses)
It's not en-masse but it's millions: enough to prove it works on a massive scale.
> That's still in the "early adopters" combined with a little bit of "people don't realize they're using it"
I think the vast majority of people start using IPv6 when their ISP (and new Android version...) starts, which means they indeed don't even realize it.
Posted May 27, 2014 15:32 UTC (Tue)
by krakensden (subscriber, #72039)
[Link]
http://www.comcast6.net/index.php/8-ipv6-trial-news-and-i...
Posted May 13, 2014 0:33 UTC (Tue)
by rahvin (guest, #16953)
[Link]
I'm sure half their users calling support because Google told them there is something wrong with their internet would do two things: the first is make the ISP hate Google with a passion, and the second is cause the ISP to ensure IPv6 is implemented and being used in preference to IPv4.
It's frustrating for me because I'm on Comcast business. Just a month ago I finally got a free modem upgrade to support DOCSIS 3 and IPv6 (I had to specifically request this upgrade). When I inquired about IPv6 support, which their own tools say is fully deployed on my CMTS, I was told it's in beta on the business side and the beta is closed. That beta was open to users last year. In other words, the only way I could use IPv6 is if I had had a modem that supported it and had requested to be part of a "beta" a year ago, when I didn't have a modem that supported it. I'd take sweet relish in Google or Facebook doing that to Comcast.
Posted May 9, 2014 15:43 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (5 responses)
That's not going to happen, not now not ever. That's just not how the world works.
> Google/Facebook with a lot of IP space and basically a key component of the internet needs to step up and move completely to IPV6
This has pretty much happened, both Google and Facebook are dual-stack for their public facing properties, as are some of the major CDNs, Netflix and YouTube as well. There is a long tail of IPv4-only services that will exist for the next decade or two but all the highest traffic services are ready and waiting for clients to convert over.
> there is still a lot of equipment out there that's not ipv6 compatible
Not true of end-user devices like computers and phones but is true for many consumer routers, even though the cable DOCSIS 3 standard mandates IPv6 support for the modem, a lot of routers will have to be replaced (good business opportunity for router makers really, hopefully we can shoe-horn CoDel in the new deployment as well)
> Carrier grade NAT is a reality
Even my organization is moving forward with a large NAT system but we are tying the deployment of NAT with the deployment of IPv6 because everything which routes directly (all the most popular web properties mentioned above) doesn't have to go through the NAT which greatly reduces the expense of it.
I expect these two factors to drive IPv6 for home users, it should be cheaper for ISPs to provision than expanding the NAT and it should be lower latency for customers where that matters like VoIP and gaming.
Posted May 10, 2014 14:17 UTC (Sat)
by marcH (subscriber, #57642)
[Link] (4 responses)
Peer to peer.
The one thing that crazy/triple NATs break is peer to peer.
I would only take a couple of successful peer-to-peer applications (think Napster, Skype, some decentralized game,...) to force ISPs to implement IPv6.
So what is very effectively delaying IPv6 (forever?) is... "cloud computing".
Posted May 27, 2014 15:34 UTC (Tue)
by krakensden (subscriber, #72039)
[Link] (3 responses)
Many multiplayer console games- like Call of Duty- are, it saves on hosting costs. It's mostly invisible to the players though.
Posted May 8, 2014 17:13 UTC (Thu)
by tialaramex (subscriber, #21167)
[Link] (1 responses)
So yes, people use that. You would presumably be able to do without it on a closed network, but then again, strictly speaking you could choose to do without TCP/IP altogether on such a network. Diverging too far from normal risks losing most of the benefits of choosing TCP/IP in the first place.
Posted May 7, 2014 16:24 UTC (Wed)
by epa (subscriber, #39769)
[Link] (4 responses)
I wonder whether a minimal AES implementation written just for TCP fast open could be used instead of the full crypto subsystem.
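For scale, the fast open cookie is essentially the client's IP address encrypted under a server-held secret. A toy sketch of the shape of that computation, using the standard library's HMAC in place of the AES block cipher the kernel actually uses (the names and the 8-byte size here are illustrative, not the kernel's implementation):

```python
import hashlib
import hmac
import ipaddress

# Server-side secret; a real implementation rotates this periodically.
SERVER_SECRET = b"example-secret-not-for-real-use"

def tfo_style_cookie(client_ip: str) -> bytes:
    """Derive a short per-client cookie from a secret and the client address."""
    packed = ipaddress.ip_address(client_ip).packed
    # Keyed hash of the address, truncated to a small cookie.
    return hmac.new(SERVER_SECRET, packed, hashlib.sha256).digest()[:8]

cookie = tfo_style_cookie("192.0.2.1")
```

The point is that the primitive itself is tiny; the kernel's cost comes from pulling in the whole crypto subsystem to get at it.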
Posted May 8, 2014 10:57 UTC (Thu)
by intgr (subscriber, #39733)
[Link] (1 responses)
But I'm sure they could just invoke the existing AES code directly without going through the kernel's crypto API, for similar savings in code size.
All the complexities and vulnerabilities in crypto libraries tend to come from protocol logic and data structure parsing, not the ciphers/primitives themselves.
> cryptosystems
Just a nitpick, "cryptosystem" refers to a set of algorithms for a single purpose (such as the RSA encryption cryptosystem, comprised of key generation, encryption and decryption).
Posted May 9, 2014 4:41 UTC (Fri)
by jeff_marshall (subscriber, #49255)
[Link]
I've implemented AES myself in software for several different platforms on bare metal, and helped others to implement it in hardware. In all cases, it was pretty straightforward.
Posted May 7, 2014 18:54 UTC (Wed)
by josh (subscriber, #17465)
[Link] (2 responses)
The designers of devices and bills of materials (BOMs) will fight for every last expense, and will not change their planned amounts of memory or storage when they can change software to be more efficient instead.
Posted May 9, 2014 16:28 UTC (Fri)
by lacos (guest, #70616)
[Link]
Indeed, there's a reason why they're called *hard*ware and *soft*ware.
Posted May 10, 2014 14:42 UTC (Sat)
by marcH (subscriber, #57642)
[Link]
For small software changes you are right. Good luck explaining the concept of "upstreaming code" to some hardware engineers...
If on the other hand you talk about switching to a different operating system, which is effectively a major product change, then of course everyone from top to bottom listens.
Posted May 7, 2014 19:54 UTC (Wed)
by gioele (subscriber, #61675)
[Link] (8 responses)
Could somebody provide these numbers for us non-experts?
What is the surface area of an ARM Cortex-M? And of 16MB of RAM (or other kinds of memory)?
Posted May 7, 2014 20:05 UTC (Wed)
by daniels (subscriber, #16193)
[Link] (7 responses)
TI doesn't ship their Cortex-Ms with anything more than 256kB of SRAM, which you have to say isn't likely to be due to the cost of that much memory ...
Posted May 8, 2014 17:12 UTC (Thu)
by yaap (subscriber, #71398)
[Link] (1 responses)
A stripped down core at ~8 kgates could be less than 0.05 mm2 in 90nm (when synthesized for reduced size, not speed). Typically those uC contain mixed logic and do not use the finer processes, it's more in the 65 to 180 nm range.
Then the ratio of SRAM to logic density is very roughly 1.2: you can store 1.2 bits on average in the area used by a logic gate. Be careful, there are a lot of variations here depending on the chosen trade-off between area, power (dynamic and leakage) and speed, for both logic and memories. So only take this as a very rough order-of-magnitude number.
From that, you can see that even high-end uC logic at ~80 kgates is only as big as ~12 kB of SRAM (bytes, not bits, now). And such a high-end uC would be over-dimensioned for a basic connected sensor.
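The 80 kgates to ~12 kB equivalence follows directly from the 1.2 bits-per-gate-area ratio given above:

```python
gates = 80_000              # high-end microcontroller logic
bits_per_gate_area = 1.2    # rough SRAM density relative to logic

# Area equivalent expressed as bytes of SRAM.
equivalent_sram_bytes = gates * bits_per_gate_area / 8
print(equivalent_sram_bytes)  # 12000.0, i.e. about 12 kB
```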
There are several levels of M2M / IoT systems, and their implementation can be very different. There are also threshold effects on the memory sizes.
At the low end, Linux is overkill. All the memory is embedded in the die, and it must be very small (a few tens of kB to hundreds of kB). There are many free OSes for this area (RTOSes, actually), and one of the most popular is FreeRTOS.
At the other end, for high-performance / fancy devices, Linux usually makes perfect sense and its size is not an issue. There's a big threshold effect: if one needs an external SDRAM, the smallest/cheapest long-term-supported size nowadays would be 128 MB LPDDR2. Linux fits without problem, and there is absolutely no point in optimizing the IP stack size.
Then in between there is an area where optimizing Linux may be useful, but it's not a given. Here too there are threshold effects. For medium systems one could, for example, use an external chip containing some flash and pSRAM (it's SDRAM, but made to look like SRAM to the uC, so the uC doesn't need an SDRAM controller). That can go up to 8 MB typically. It's cheaper, but you can't pick any size: if you need a bit over 2 MB, then it's 4 MB, let's say. It's not practical to embed such big SRAM in the uC die, as they don't use small nodes (see above).
So I think DaveM's reaction makes sense. Pushing upstream the burden of maintaining a special configuration of Linux, for what looks like a corner case where using a different OS may even be better from a BoM cost point of view, doesn't seem justified at this stage. Letting someone prove the case first, by selling real products based on such optimization in volume, seems a sensible approach to me. I'm not holding my breath, to be frank.
Posted May 10, 2014 14:38 UTC (Sat)
by marcH (subscriber, #57642)
[Link]
For pure comparisons between features/blocks I think it's easier to look at number of gates - independent from the process.
Posted May 7, 2014 22:14 UTC (Wed)
by PaulWay (guest, #45600)
[Link]
Have fun,
Paul
Posted May 8, 2014 13:51 UTC (Thu)
by k3ninho (subscriber, #50375)
[Link]
*: However well you idiot-proof the world, nature breeds a better class of idiot.
K3n.
Posted May 11, 2014 0:28 UTC (Sun)
by eean (subscriber, #50420)
[Link] (1 responses)
Mostly while the Linux kernel shouldn't be insecure itself, its role in providing security is for sure not always needed.
Posted May 10, 2014 21:23 UTC (Sat)
by marcH (subscriber, #57642)
[Link]
Done at least here: http://www.shenick.com/products/diversifeye/ ; possibly also elsewhere.
Posted May 16, 2014 18:36 UTC (Fri)
by piotrjurkiewicz (guest, #96438)
[Link] (1 responses)
I think such an approach could be a solution for the problems mentioned in the article too. I mean, implement some kind of lightweight kernel interface between the NIC and userspace. Then users would be able to disable kernel networking and use their own userspace networking stack (either a tiny one or a performance-oriented one) in extreme cases.
Posted May 10, 2014 14:36 UTC (Sat)
by marcH (subscriber, #57642)
[Link] (5 responses)
Computers have obviously become much more powerful. The problem is that Microsoft made a much bigger bet than this: they made the bet that the power of computers would grow much FASTER than Windows bloat.
With Windows Phone it looks like they finally learned how to trim bloat. But too little, too late. Also, a very poor choice of name; they were probably too proud of it, which shows how they've lost it.
Posted May 10, 2014 20:44 UTC (Sat)
by giraffedata (guest, #1954)
[Link] (4 responses)
You seem to have missed the point, because computers have not become universally more powerful. Several entire classes of computers less powerful than previously common ones have been introduced and taken over much of the workload of those previous ones. So even if Microsoft hadn't added any bloat at all, it still would have lost market share. To stay even, it would have had to do what this article reports is proposed for Linux: take stuff out.
Posted May 10, 2014 21:23 UTC (Sat)
by giraffedata (guest, #1954)
[Link] (2 responses)
True, but I don't know how that's relevant here, because the claim was that Microsoft bet that computers would continually get more powerful, not that they would always be more powerful than the first ones Windows ran on.
Actually, thinking about it some more, I realize that one of these backward steps happened well before the netbooks and tablets. It happened with laptop computers. The first laptops were far less powerful in every dimension than previously typical computers they replaced. They didn't even have color displays.
But I don't know how Windows fared in that transition.
Posted May 10, 2014 22:30 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (1 responses)
This is a good point I hadn't seen mentioned before. What happened was that people ran Windows on both laptops and desktops, so Windows didn't suffer here, but people were highly aware that laptops were inferior in almost every way, so it is laptop sales which suffered; only people who valued portability above all else used them. Laptops only became generally popular in the mid-2000s when the specs finally caught up to desktops, or at least when single-core speeds leveled off.
Since we are on the subject, it should be mentioned that the whole netbook category of computers was designed for Linux first, and in the competitive marketplace traditional Linux desktops lost fair and square; people returned them in droves.
Posted May 15, 2014 22:08 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
There was the highly publicised example of one manufacturer. I think they claimed "the majority of our linux netbooks have been returned", only for somebody else to point out they didn't make any!!!
And when I asked (at ASDA, our Walmart subsidiary) I was told that they'd had pretty much NO returns at all. That was where I bought my netbook, which unfortunately I broke: thanks to wifely pressure I ended up trying to install XP on it, and somehow that trashed the BIOS :-(
No - there was a lot of propaganda to try and make linux netbooks look bad, but as far as I can make out, every time anybody dug into the figures they discovered somebody was lying with statistics ...
Cheers,
Posted May 7, 2014 21:42 UTC (Wed)
by tbird20d (subscriber, #1901)
[Link]
The micro-Yocto project has shown a recent kernel running in under 2M RAM, with networking (and presumably with these patches). Vitaly Wool presented some information about running Linux on a system with 2M flash and 256K ram (but the kernel was a bit old - 2.6.33).
Here are links to their presentations:
It would be nice to be able to share this work widely, by getting it upstream.
Posted May 8, 2014 1:38 UTC (Thu)
by ras (subscriber, #33059)
[Link] (6 responses)
I'm guessing they put the feature under the control of a CONFIG_XXX setting in the kernel build system. Hell, he might even have put it under a "Minimal Linux ABI" option to warn the user that, if these features are enabled, they will break the conventional userspace ABI.
If so, I'm struggling to understand DaveM's NAKs, particularly his reasons. I could have understood "sorry, supporting this in the future will be a nightmare", but "I don't want a 1/2 working networking stack"? Maybe he doesn't, but when the choice is between not using Linux at all and a 1/2 working networking stack, some people are going to take the latter. His response of "no you don't" sounds a tad condescending.
Posted May 8, 2014 13:57 UTC (Thu)
by simlo (guest, #10866)
[Link] (4 responses)
But there is a solution: Maintain a branch out of the main tree. If the patches are of no burden to the network developers, then it ought to be easy to merge to and maintain the branch.
Posted May 8, 2014 19:01 UTC (Thu)
by simlo (guest, #10866)
[Link] (2 responses)
Feature management is done best by config options not branches. On the other hand maintaining special cases on branches let the core developers concentrate on core functionality not the special cases.
Posted May 10, 2014 21:26 UTC (Sat)
by marcH (subscriber, #57642)
[Link]
It's one of the most classic configuration management trade-off. Only simple problems have simple answers here.
Posted May 13, 2014 19:40 UTC (Tue)
by mwsealey (subscriber, #71282)
[Link] (1 responses)
Can we even think of an application that justifies a 2MB kernel (or even a 1MB kernel!?) that wouldn't also take up more than 2MB?
Posted May 27, 2014 18:17 UTC (Tue)
by garyamort (guest, #93419)
[Link] (8 responses)
BFS was declared a failure because it didn't handle 16+ CPUs well, which, while that may have been common for Linux kernel coders back in 2009, is not common for most people.
Android devs realized quite early that enhancements to make Linux function well on embedded devices would be difficult to get accepted, so they made the sensible choice of writing complete replacements for Linux functions rather than trying to maintain a patch set against libraries which kernel devs could change, breaking the patches. This seems to have upset some kernel devs: instead of playing catch-up and stroking their egos, the Android devs just sidestepped them.
It's really not that big of a deal though; in the end all that really happens is that Linux gets forked into a specialized version, and if the distro becomes popular enough, kernel devs eat crow and grudgingly accept the code they previously rejected.
Sure, in the end a lot of kernel devs end up looking like incompetent twits when their 'reasoned arguments' turn out to be a bunch of narcissistic ego stroking, but Linux itself continually improves by absorbing the code that works well, while at the same time avoiding having to handle all the fiddly debugging.
Posted May 27, 2014 19:08 UTC (Tue)
by dlang (guest, #313)
[Link] (7 responses)
There have been a handful of features that they've added to the kernel, and after a lot of discussion those are getting integrated (or replaced with something the upstream devs and the android devs can agree on)
So at best you are mixing issues.
Posted May 27, 2014 22:09 UTC (Tue)
by renox (guest, #23785)
[Link] (6 responses)
Uh? These changes are not relevant here, as the GP was talking about Linux.
> There have been a handful of features that they've added to the kernel, and after a lot of discussion those are getting integrated (or replaced with something the upstream devs and the android devs can agree on). So at best you are mixing issues.
Given the time and the difficulty of getting these 'handful of features' integrated into the mainline, I'd say that he is right.
Posted May 28, 2014 0:01 UTC (Wed)
by dlang (guest, #313)
[Link]
what libraries does the kernel maintain?
It's far from obvious that this is talking about kernel functions.
Posted May 28, 2014 6:03 UTC (Wed)
by smurf (subscriber, #17840)
[Link] (4 responses)
Don't twist history.
Posted May 28, 2014 14:43 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (3 responses)
Posted May 29, 2014 7:43 UTC (Thu)
by smurf (subscriber, #17840)
[Link]
It'll probably live in its own tree before getting into mainline, but then the linux-tiny patchset of earlier times did the same thing. So did/do the RT patches, for that matter.
Posted May 31, 2014 4:22 UTC (Sat)
by garyamort (guest, #93419)
[Link] (1 responses)
The I/O scheduler, for example, has many different implementations, some of which work better under different circumstances.
Multiple processor schedulers could be included in the kernel as well, rather than requiring constantly updated patch sets for every new kernel.
For an area I am familiar with, just browse through the ARM directories:
Lots of kernel bits there for custom chips which are only found in specific hardware devices.
Also, since there seems to be some confusion, I'm not saying a lot of kernel devs are hostile to mobile phones or whatnot. I'm saying that for the most part, they're personally concerned with large, multiprocessor servers and high-end hardware. If a new feature doesn't have some effect on what they're interested in, it is quite often dismissed. Consider the comments quoted in this article.
Changes which would provide real benefits today but "don't further" some future goal, which historically can take years to be implemented, and which don't benefit the personal use cases of the maintainer - rejected.
Changes which provide performance benefits in the submitter's real use cases are rejected because the maintainer guesses that they won't apply to others, while at the same time admitting that he wouldn't use Linux for any of those use cases, so he is not in a good position to judge.
Posted May 31, 2014 7:56 UTC (Sat)
by dlang (guest, #313)
[Link]
He's seen other systems start to have vastly different schedulers for different purposes, and the end result is a lot of confusion. Each scheduler is tailored for a specific use case, but that use case doesn't match what the users actually need to do, and/or users end up needing to do a little bit of multiple different types of work, where each scheduler developer says "well, if you want to do that, don't use my scheduler".
And I'll say that the current disk schedulers seem to have this problem, although there is a bit of a difference in that sometimes the disk controller does some of the work for the kernel, so there is more of a case for the noop scheduler; there isn't an equivalent for the task scheduler.
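To make the disk-scheduler point concrete: unlike the task scheduler, the I/O scheduler is a per-device, runtime-switchable choice, which can be inspected through sysfs. A sketch (the device name `sda` and the set of available schedulers vary by kernel build and system):

```shell
# List the I/O schedulers compiled in for a block device; the active
# one is shown in brackets, e.g. "noop deadline [cfq]".
cat /sys/block/sda/queue/scheduler

# Switch that device to the noop scheduler (requires root); this is
# the case mentioned above, where a smart disk controller already
# does its own request ordering and the kernel should stay out of
# the way.
echo noop > /sys/block/sda/queue/scheduler
```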
Networking on tiny machines
Plentiful they are not.
last I heard, IPv6 traffic is somewhere in the 3-6% range
> every last expense, and will not change their planned amounts of memory
> or storage when they can change software to be more efficient instead.
256kB SRAM is roughly the same die size as a Cortex M3 CPU core.
http://zeptobars.ru/en/read/MDR32F9Q2I-1986VE91T-whats-in...
It may be surprising to some, but core logic is quite negligible in size compared to the attached memories. Logic size still matters, but indirectly, as it's somewhat related to power efficiency.
This memory could cost ~$0.85 in standard temperature range, and ~$1 in industrial range.
Maybe optimizing can get one to the lower size, but then one has to compare the effort to the savings. You would need high volume to justify it.
And then, one can question whether Linux is the best choice there. If footprint is this critical, one could use a rich RTOS. There are also free options; see eCos (http://ecos.sourceware.org, bought by Red Hat if my memory serves well). It's leaner than Linux, due to being simpler too of course, but enough for a simple connected device.
http://techon.nikkeibp.co.jp/english/NEWS_EN/20130226/268...
One only needs to look at what has happened to Windows: Microsoft made the bet that computers would only become more powerful, and has struggled for years with netbooks, tablets, and the like.
Computers have obviously become much more powerful.
Even some micro-controllers today are more powerful than the first systems Windows ran on...
Wol
It's a shame to see these patches rejected for such specious reasons. Reports from ELC indicate that the kernel can be made to run in small-footprint devices with useful functionality.
Linux CAN be made to run small
They require more development to bring up but they can be incredibly customized.
https://github.com/torvalds/linux/tree/master/arch/arm
