Super long-term kernel support
CIP, Kobayashi said, is one of the most conservative projects out there, but also one of the most important. It is working to create a stable base layer for civil-infrastructure systems; it is not trying to create a new distribution. Civilization runs on Linux: the infrastructure we all count on, including that dealing with transportation, power generation, and more, is Linux-based. If those systems fail, we will have serious problems. But this kind of infrastructure runs on a different time scale than a typical Linux distribution. The development time required just to place such a system in service can approach two decades, and the system itself can then stay in service for 25-60 years.
The computing systems that support this infrastructure must thus continue
to work for a long time. They must be based on "industrial-grade" software
that is able to provide the required level of reliability, robustness, and
security. But the systems supporting civil infrastructure must also be
brought up to current technology levels.
Until now, the long-term support needed to keep these systems running has
been provided by individual companies, with little in the way of shared
effort. That has kept the systems functional, but it is an expensive
approach, and one that tends to lag behind the current state of the
technology.
The way to do a better job, Kobayashi said, is to put together a collaborative framework that supports industrial-grade software while working with the upstream development communities as much as possible. That is the role that the CIP was created to fill. There are currently seven member companies supporting CIP, with Moxa being the latest addition. They are supporting the project by contributing directly to upstream projects and funding work that advances CIP's objectives.
CIP is currently focused on the creation of an open-source base layer consisting of a small number of components, including the kernel, the GNU C library, and BusyBox. Distributors will be able to build on this base as needed, but CIP itself is starting small. The primary project at the moment is the creation of the super-long-term support (SLTS) kernel which, it is hoped, can be supported for at least ten years; as experience with extra-long-term support grows, future kernels will have longer periods of support. The first SLTS kernel will be based on the 4.4 LTS release and will be maintained by Ben Hutchings; the 4.4.120-cip20 release came out on March 9.
For the most part, the CIP SLTS kernel will be based on vanilla 4.4, but there are some additions being made. The latest Meltdown and Spectre fixes are being backported to this kernel, for example, as are some of the hardening patches from the Kernel Self-Protection Project. Support for some Siemens industrial-control boards is being added. Perhaps the most interesting enhancement, however, is the realtime preemption patch set, which is of interest for a number of the use cases targeted by the CIP project. CIP has joined the realtime preemption project as a member and is planning to take over the maintenance of the 4.4-rt kernel. The first SLTS kernel with realtime support was released in January.
In general, the project's policy will be to follow the upstream stable releases for as long as they are supported. Backports from newer kernels are explicitly allowed, but they must be in the mainline before being considered for addition to an SLTS kernel. New kernel versions will be released every four to six weeks. There is an explicit policy of non-support for out-of-tree drivers; distributors and users can add them, of course, but any bugs must be demonstrated in a pristine SLTS kernel before the CIP project will act on them.
A new major kernel release will be chosen for super-long-term support every two or three years. The project is currently thinking about which release will be the base for the next SLTS kernel; for obvious reasons, alignment with upstream LTS choices is important. There will be a meeting at the Japan Open Source Summit to make this decision.
There is some initial work on testing infrastructure based on the "board at desk" model; the testing framework is based on the kernelci.org infrastructure. Future work includes collaboration with other testing efforts, more frequent test coverage, and support for container deployment on SLTS-based systems. Debian has been chosen as the primary reference distribution for CIP systems, and all of the CIP core packages have been taken from Debian. As part of this effort, CIP is supporting the Debian-LTS effort at the platinum level.
The CIP core effort is working on the creation of installable images consisting of a small subset of Debian packages and the CIP SLTS kernel. This work can be found on GitLab. CIP is working with Debian to provide longer-term support of a subset of packages, to improve cross-compilation support, and to improve the sharing of DEP-5 license information.
In the longer term, CIP is looking toward IEC 62443 security certification. That is an ambitious goal, and CIP can't get there by itself, but the project is working on documentation, test cases, and tools that will hopefully help with an eventual certification effort. Another issue that must be on the radar of any project like this is the year-2038 problem, which currently puts a hard limit on how long a Linux system can be supported. CIP is working with kernel and libc developers to push solutions forward in this area.
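The year-2038 limit mentioned above is concrete enough to demonstrate: a signed 32-bit time_t counts seconds since the Unix epoch and runs out of range early on January 19, 2038, after which it wraps to a negative value. A minimal sketch of the arithmetic:

```python
from datetime import datetime, timedelta, timezone

# A signed 32-bit time_t tops out at 2**31 - 1 seconds after the
# Unix epoch; one second later it overflows to a negative value
# (which naive code interprets as a date in 1901).
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
rollover = epoch + timedelta(seconds=2**31 - 1)
print(rollover)  # 2038-01-19 03:14:07+00:00
```

This is why a kernel and libc placed in service today, with a 25-60 year service life ahead of it, cannot keep a 32-bit time_t in its ABI.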
Someday CIP hopes to work more on functional safety issues and to come up with better solutions for long-term software updates. The project has just joined the EdgeX Foundry to explore what common ground may be found with that group. Clearly, the CIP project has a lot of issues on its plate; it seems likely that we will be hearing about this project for a long time.
[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting your
editor's travel to ELC.]
Index entries for this article:
Kernel: Long-term support initiative
Conference: Embedded Linux Conference/2018
Posted Mar 19, 2018 17:35 UTC (Mon)
by eru (subscriber, #2753)
[Link] (3 responses)
Quote of the Year!
Posted Mar 19, 2018 21:26 UTC (Mon)
by flussence (guest, #85566)
[Link] (2 responses)
Posted Mar 19, 2018 22:14 UTC (Mon)
by smoogen (subscriber, #97)
[Link] (1 responses)
The really sad part is that a lot of those Windows XP boxes may not have been approved for use until after Windows XP was end-of-lifed. By the time Windows 7 goes EOL, the various industries will be about ready to use it. This comes from everything from thorough testing and back-fixes, to the fact that the hardware running Windows 3.11 finally could no longer be replaced, so they decided to move to the next approved system.
A lot of these industrial/infrastructure units are going to be bought over 10-20 year periods, with most of them sitting on shelves for half their lives. The reason they are still using XP is as much that they still have replacement systems as it is that the software was written for NT 4.0 and only worked on XP by accident.
Posted Mar 20, 2018 19:41 UTC (Tue)
by Tov (subscriber, #61080)
[Link]
Posted Mar 19, 2018 17:44 UTC (Mon)
by arjan (subscriber, #36785)
[Link] (57 responses)
Backporting security things way back has been shown not to work well after some amount of time, just because the tester base evaporates; the people left on the old kernel aren't the broad test base after all... they're more conservative.
The whole Spectre/Meltdown thing has shown me that the embedded/enterprise model of supporting really old kernels for a really long time has come to an end in terms of viability. The last few months there have been significant structural changes because of the security issues, and in the next 6 to 24 months we'll see a LOT more of that. Those kinds of things just don't lend themselves well to stable backports. It's not just the kernel... it's also the compiler and other tools that go with it.
But then again maybe security does not matter for infrastructure ;-)
Posted Mar 19, 2018 18:34 UTC (Mon)
by tau (subscriber, #79651)
[Link] (2 responses)
I wish that civilization did run on Linux, but unfortunately far too much industrial hardware runs on Windows XP, still. Security problems are an economic externality, so they persist, in much the same way that pollution is an economic externality, so it too persists. There simply isn't any economic incentive to care about security or computer system modernization for its own sake. Look at the Equifax breach for the clearest example.
People will, unfortunately, continue to be unreasonable. You can try to change the incentives that cause this to happen, or you can do your best to mitigate the damage.
Posted Mar 19, 2018 22:35 UTC (Mon)
by FLHerne (guest, #105373)
[Link] (1 responses)
Posted Mar 21, 2018 11:20 UTC (Wed)
by jezuch (subscriber, #52988)
[Link]
Posted Mar 19, 2018 20:04 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (50 responses)
It was not uncommon for utilities to switch from green-screen VT100 terminals running over X.25 straight to LCD displays and fiber-optic TCP connectivity in their control centers.
This is reasonable, considering that risks of downtime are extremely high. You can orchestrate one migration every 20 years, which usually includes two sets of control centers running at the same time and doing the handover. But doing this on a constant basis is not sustainable.
Posted Mar 19, 2018 21:17 UTC (Mon)
by atelszewski (guest, #111673)
[Link] (17 responses)
There goes this saying: "Never touch a running system".
--
Posted Mar 19, 2018 22:04 UTC (Mon)
by arjan (subscriber, #36785)
[Link] (8 responses)
and maybe a feature or two
and .. and ..
and then your backports are high risk, since the code now runs in a context it was never tested in before
Posted Mar 19, 2018 22:48 UTC (Mon)
by tlamp (subscriber, #108540)
[Link] (5 responses)
The same could be said about a new kernel; it contains thousands of lines not tested in the environment needed...
It's easier to ensure that one really needed security feature gets backported right, when the need arises maybe once in ten years, than to have the whole kernel underlying a control system swapped out every few weeks...
And no, your ordinary civil-infrastructure project doesn't need the newest fancy syscall, I/O scheduler, or whatever feature at the moment it gets released.
As others said, *never* touch a running system; this is not about a small daemon or web app of yours, this can affect millions of people and whole economies in a meaningful way!
Posted Mar 20, 2018 16:32 UTC (Tue)
by mjthayer (guest, #39183)
[Link] (4 responses)
Posted Mar 20, 2018 19:48 UTC (Tue)
by smoogen (subscriber, #97)
[Link] (3 responses)
For a long-term security kernel, they would have to wait the 6-9 months it takes for feedback from one set of changes to be run through.
These devices are going to sit on a shelf for years at a time until put into service as part of some forklift upgrade. They will then get looked at years later. Most of the devices may only be hooked up to some sort of serial network, so updates are done by hand; the bandwidth for updating is faster that way.
Posted Mar 21, 2018 12:01 UTC (Wed)
by mjthayer (guest, #39183)
[Link] (2 responses)
Posted Mar 21, 2018 14:10 UTC (Wed)
by mjthayer (guest, #39183)
[Link] (1 responses)
Posted Mar 21, 2018 14:19 UTC (Wed)
by mjthayer (guest, #39183)
[Link]
https://wiki.linuxfoundation.org/civilinfrastructureplatf...
Posted Mar 20, 2018 7:56 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
Security provided by network separation and features are usually provided by new systems working in parallel with old systems.
Some utilities even stockpile hardware, so that they can replace failing components with hardware from the same batch.
Posted Mar 20, 2018 9:13 UTC (Tue)
by Mog (subscriber, #29529)
[Link]
Posted Mar 20, 2018 13:02 UTC (Tue)
by arjan (subscriber, #36785)
[Link]
Posted Mar 20, 2018 21:26 UTC (Tue)
by JFlorian (guest, #49650)
[Link] (6 responses)
Posted Mar 21, 2018 12:04 UTC (Wed)
by mjthayer (guest, #39183)
[Link]
Posted Mar 21, 2018 17:44 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
We are very sorry for the last time we connected a 50kV line instead of a 5kV, we're pretty sure it doesn't happen with this patch.
Thanks!
Posted Mar 21, 2018 17:53 UTC (Wed)
by JFlorian (guest, #49650)
[Link] (1 responses)
Posted Mar 21, 2018 18:04 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
They usually involve months (if not years) of testing both software and hardware under varying conditions. You wouldn't want your thermal-management system to freeze because an update introduced a subtle memory leak or a race condition that becomes apparent only after a couple of months of runtime.
CIP will provide a better foundation for it, but it most definitely won't solve the issue of long deployment cycles.
Posted Mar 29, 2018 12:07 UTC (Thu)
by federico3 (guest, #101963)
[Link]
Source: I work on CI/CD/orchestration systems that are used in those fields.
Posted Mar 29, 2018 19:21 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
"If it ain't broke, fix it till it is" :-)
If you're talking hardware that's expensive, you're not going to replace it. Why UPGRADE the kernel, when the majority of changes are adding new hardware drivers, if your system has no new hardware that needs them? And if it's not new drivers, the rest of the new code is pretty much equally useless FOR YOU.
Think about the story of when they dropped serial ports from hardware. Apparently Bill Gates's response to one customer's complaints was "well, buy new peripherals, then". Twenty industrial machines at $250K apiece??? All for the sake of a $10 board in a computer?
Cheers,
Posted Mar 19, 2018 22:28 UTC (Mon)
by rahvin (guest, #16953)
[Link] (31 responses)
As time passes, the number of exploits found and patched in those kernels should go down dramatically, but fixes for critical stuff like Meltdown or Heartbleed (not kernel-based, I know) should be ported back and these systems updated. Keep in mind that within a decade you may be driving around in a car with Linux controlling all your safety systems, if you aren't already; that alone should scare you if security exploits aren't being patched.
Posted Mar 20, 2018 10:52 UTC (Tue)
by eru (subscriber, #2753)
[Link] (30 responses)
Old Volvo sedans also seem immortal. But those did not contain any CPUs or software; probably even the ignition control is electromechanical. They also were more repairable than modern cars. I seriously doubt that any "digitalized" automobile manufactured today will be seen on the road 30 years from now.
Posted Mar 20, 2018 12:39 UTC (Tue)
by musicinmybrain (subscriber, #42780)
[Link] (29 responses)
Posted Mar 20, 2018 13:00 UTC (Tue)
by arjan (subscriber, #36785)
[Link] (27 responses)
Posted Mar 20, 2018 13:19 UTC (Tue)
by felixfix (subscriber, #242)
[Link] (26 responses)
3D printers will make parts easier to get than ever.
Posted Mar 20, 2018 13:36 UTC (Tue)
by tao (subscriber, #17563)
[Link]
Posted Mar 20, 2018 14:59 UTC (Tue)
by arjan (subscriber, #36785)
[Link] (10 responses)
Posted Mar 21, 2018 11:28 UTC (Wed)
by jezuch (subscriber, #52988)
[Link] (9 responses)
Posted Mar 24, 2018 21:29 UTC (Sat)
by giraffedata (guest, #1954)
[Link] (8 responses)
How does a corner charging station work? Doesn't it take hours to charge a car?
I have a colleague who worked on battery technology and told me that a standard gas station nozzle delivers 30 megawatts, and that there was nothing on the horizon that could match that with electric battery storage.
Except that I read once about an idea for swapping out the entire battery.
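The "30 megawatts" claim is easy to sanity-check with a back-of-the-envelope calculation. Assuming gasoline at roughly 34 MJ per litre and a US pump capped at 10 gallons per minute (both figures are my assumptions, not from the comment):

```python
# Back-of-the-envelope check of the "gas nozzle delivers 30 MW" claim.
# Assumed figures: gasoline carries ~34 MJ of chemical energy per litre;
# US pumps are limited to about 10 gallons/minute (~0.63 L/s).
energy_per_litre_j = 34e6            # J/L, approximate
flow_l_per_s = 10 * 3.785 / 60       # 10 gal/min expressed in litres/second
power_mw = energy_per_litre_j * flow_l_per_s / 1e6
print(round(power_mw))               # roughly 21 MW of chemical energy
```

That lands in the same tens-of-megawatts range as the quoted figure (the 30 MW number presumably assumed a higher flow rate), though only a fraction of that chemical energy ever reaches the wheels of a combustion car.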
Posted Mar 25, 2018 4:55 UTC (Sun)
by songmaster (subscriber, #1748)
[Link]
I guess the energy capacity of a gas pump nozzle is somewhat analogous to the bandwidth of a truck full of hard drives driving down a highway — wires aren’t always the fastest way to transport energy/data.
Posted Mar 25, 2018 15:52 UTC (Sun)
by excors (subscriber, #95769)
[Link] (3 responses)
The high-power charging stations are for rare long trips. Apparently the Tesla Model S can get 170 miles of charge in 30 minutes (a full charge takes disproportionately longer, so it's quicker to do multiple partial charges). So it's not hours, but long enough that I guess you'd typically want facilities (shops, food, etc.) for people to use (and spend money in) while waiting. I guess with that, plus the reduced demand if most charging is done at home, the market is not going to be able to support anywhere near as many charging stations as there are gas stations today.
There's also the Formula E approach where the drivers get a fully-charged battery in about ten seconds, by simply swapping their entire car. Not sure how well that would work with consumer vehicles though.
Posted Mar 29, 2018 12:09 UTC (Thu)
by NAR (subscriber, #1313)
[Link] (1 responses)
I'm afraid many (maybe most) people don't have a home where they can plugin a cable from the car. Think about a place like this: https://goo.gl/maps/bgH6CUD7BBS2.
"The high-power charging stations are for rare long trips."
I guess cars currently spend about 3-5 minutes at the fuel pump: fill the car, go to the shop, pay, leave (maybe just to a parking slot). If cars need to spend about ten times as much time at the plug, the motorway rest station will need ten times more space - instead of 16 pumps, 160 parking places with plugs. Not sure all of them would have the space.
Posted Mar 29, 2018 12:43 UTC (Thu)
by jem (subscriber, #24231)
[Link]
What's the problem? As demand grows, chargers will pop up in the car parks. I don't see why you can't eventually have enough chargers at the sides of the parking spaces to serve a 100% electrified car fleet.
Posted Mar 31, 2018 8:23 UTC (Sat)
by daenzer (subscriber, #7050)
[Link]
Posted Mar 25, 2018 17:38 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
For fast charging, Tesla superchargers can deliver 150kW of power right now and 400kW chargers have been demonstrated. And you don't really need full 30MW for charging if you can drive 500 miles on a single charge.
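A quick worked example shows what those charger powers mean in practice (the 100 kWh pack size here is an illustrative assumption, not a figure from the comment, and charge taper and losses are ignored):

```python
# Charge time for a hypothetical 100 kWh battery pack at the two
# charger powers quoted in the comment, ignoring taper and losses.
pack_kwh = 100.0
minutes = {kw: round(pack_kwh / kw * 60) for kw in (150, 400)}
print(minutes)  # {150: 40, 400: 15}
```

So even at the demonstrated 400 kW, a full charge takes on the order of a quarter of an hour rather than the minutes a fuel nozzle needs, which is why long range per charge matters as much as charger power.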
Posted Mar 28, 2018 3:08 UTC (Wed)
by giraffedata (guest, #1954)
[Link]
Posted Mar 29, 2018 18:30 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Full charge, or useful charge?
These figures are from when I was looking at possibly getting a Nissan Leaf about 3 years ago ...
Time to full charge - several hours.
So, bearing in mind our daughter lives over 200 miles away, with a range of about 200 miles we could get there *easily* with one short stop at a service station to recharge. Both the car, and ourselves :-)
And while the UK plans to ban the sale of non-electric cars by 2040, I suspect we will still have small liquid-fuel engines that can provide top-up power. I'm planning for my next car to be a mixed-mode car - primarily electric with backup petrol engine for when the range is insufficient.
Cheers,
Posted Mar 20, 2018 16:12 UTC (Tue)
by nix (subscriber, #2304)
[Link] (4 responses)
I find it hard to believe that anyone would be so foolish as to consider that this would never happen. The Earth does not have an infinite volume, after all, and nearly all of it is nickel-iron and/or peridotite, not oil. Trivial back-of-the-envelope calculations show that we are using oil at roughly a million times its production rate, so eventually, if we keep extracting it, we will run out, unless we reduce the extraction rate more than a millionfold (which is obviously impossible without finding complete replacements for absolutely all oil's industrial uses, even the minor ones).
Posted Mar 20, 2018 16:33 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (3 responses)
Being extremely pedantic, peak oil means that the amount that you can extract and make a profit on has started to fall.
If a reduction in consumption means that oil prices stay below (say) $50 per barrel, then us knowing that there are 1,000 years worth of oil that costs $60 per barrel to extract is irrelevant to the peak oil concept; it's only the oil that you can extract and sell at a profit that counts.
So, any sane advocate of "peak oil" expects that there will be a significant amount of known oil around after the peak - it's just that it won't be worth extracting, because you'll have to sell it for less than the cost of that extraction.
Or, put another way, we can reach "peak oil" because the replacement of oil with substitutes like solar energy and plant-derived plastics has been so successful that oil is worthless, not just because we've run out of oil.
Posted Mar 22, 2018 5:33 UTC (Thu)
by khim (subscriber, #9252)
[Link] (2 responses)
What governments can't do, however, is change the price of oil measured in terms of oil! A hundred years ago you needed to spend one barrel of oil to extract a hundred barrels of oil. Today what you get back is closer to 6 to 8 (it's not easy to calculate the exact "price" since so many components are involved). When you finally hit the bottom (needing one barrel of oil to extract one barrel of oil from the Earth), it won't matter how much oil is left there - it's pointless to try to extract it.
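The argument can be made concrete: if extracting each barrel costs 1/EROEI barrels (EROEI being the energy returned on energy invested), the net-energy fraction falls off like this (a sketch of the arithmetic, using the ratios from the comment):

```python
# Net energy fraction as a function of EROEI: you spend 1/EROEI
# barrels of oil to get each barrel out of the ground, so the
# fraction of each barrel that is net gain is 1 - 1/EROEI.
net_pct = {eroei: round((1 - 1 / eroei) * 100, 1)
           for eroei in (100, 8, 6, 1)}
print(net_pct)  # {100: 99.0, 8: 87.5, 6: 83.3, 1: 0.0}
```

At an EROEI of 100 the extraction cost is negligible; at 6-8 a noticeable slice of every barrel is burned just to get the next one; at 1 the exercise yields nothing at all, regardless of how much oil remains.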
Posted Mar 27, 2018 15:04 UTC (Tue)
by nix (subscriber, #2304)
[Link] (1 responses)
Posted Mar 27, 2018 15:32 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
But even using cheaper energy, it's not worth it if the cost of extraction is ¢100/barrel, and the value of oil is ¢50/barrel. It doesn't matter what the value of a ¢ is, or indeed whether governments print more money - if the raw value of the oil is lower than the cost of extracting it, then it will get left alone.
Posted Mar 20, 2018 17:39 UTC (Tue)
by felixfix (subscriber, #242)
[Link] (8 responses)
Posted Mar 20, 2018 18:34 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
You just need to input a LOT of energy.
Posted Mar 20, 2018 21:54 UTC (Tue)
by rahvin (guest, #16953)
[Link] (6 responses)
Renewables are getting so cheap that we're probably going to see falling electricity prices in the US over the next 20 years, and in doing so they're going to price oil right out of the market. The saving grace for petroleum has always been transportation, where there was no replacement for gas/diesel, but with the electrification of transportation nearly 80% of oil use goes out the window, ending the age of oil very rapidly.
Posted Mar 21, 2018 15:28 UTC (Wed)
by anselm (subscriber, #2796)
[Link] (5 responses)
The “age of oil” will stay with us until we figure out how to run long-haul commercial aircraft on something other than hydrocarbon-based aviation fuel (not obvious). Or figure out how to beam stuff (including people) from A to B, whichever happens first.
Posted Mar 21, 2018 16:14 UTC (Wed)
by excors (subscriber, #95769)
[Link] (1 responses)
I've read Glasshouse so I wouldn't trust teleporters not to infect me with malware or accidentally clone me. Especially if the teleporters are running a 20-year-old kernel.
Posted Mar 21, 2018 16:22 UTC (Wed)
by sfeam (subscriber, #2841)
[Link]
Posted Mar 21, 2018 16:19 UTC (Wed)
by eru (subscriber, #2753)
[Link] (2 responses)
Some carriers are already experimenting with flying on biofuel. Currently it is blended with lots of fossil fuel, but I expect experience and research will eventually allow switching to 100% biofuel.
Posted Mar 21, 2018 18:29 UTC (Wed)
by rahvin (guest, #16953)
[Link] (1 responses)
Overall, aircraft and ship travel combined use about 20% of petroleum, but ships can easily be converted to electric drive; the trick is aircraft, and it's going to be a hard one to fix, no doubt. But if electricity gets as cheap as the projections are showing, someone is going to have a LOT of incentive to produce a plane with electric engines with the same performance and speed as jet engines.
Posted Mar 29, 2018 18:39 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
And converting one form of oil to another (especially making petrol and diesel) is pretty easy using zeolite catalysts, so I think making liquid hydrocarbon fuels from bio-matter isn't a chemistry problem.
It's an economic problem - can we do it efficiently enough to make it worthwhile?
Cheers,
Posted Mar 20, 2018 23:14 UTC (Tue)
by roc (subscriber, #30627)
[Link]
Posted Mar 20, 2018 1:27 UTC (Tue)
by gdt (subscriber, #6284)
[Link]
We know how to do hitless in-service upgrade of UNIX-like embedded systems. The Juniper M40 router, released in 1998, shows a hardware design. But we've yet to see the techniques in these once-expensive embedded systems make their way into more traditional embedded equipment.
The other big difference between networking and more traditional embedded systems is the mindset of the clients. The idea that "software is a process, not a product" would be instinctively familiar to anyone who has managed a network, but foreign to most owners of SCADA equipment.
There's also some buck-passing. If a manufacturer's QA is limited to "it's working in the field" then any software update is high risk. Rather than accept the cost of developing adequate QA, the manufacturer seeks "stability" in all the software components. That leads to ridiculous situations where known remote exploits are allowed to remain on production systems because the manufacturer can't deliver an update.
We know that software development can substantially decrease risk when it incorporates continuous integration and testing. But few embedded software systems do this. A surprising number of systems can't even build from source: the executable kernel came with the SoC and the terms of the GPL weren't insisted upon. Many SoCs are shipped with binary kernel modules and other practices which are antagonistic to continuous integration and testing.
Posted Mar 20, 2018 6:17 UTC (Tue)
by eru (subscriber, #2753)
[Link]
I expect the 20-year LTS kernels are meant to be running on hardware that is not going to be upgraded significantly (if at all) during its lifetime. This will make running the latest kernel (and busybox, etc) increasingly harder after some years have passed, because of the seemingly inevitable growth of memory and CPU requirements in each new version. The maintainers could try to counteract this by configuring out unneeded functionality (like in the recent LWN series about slimming the kernel), but then their configuration diverges more and more from the better tested default configurations.
Another issue is that the devices in the installation get older and fall out of common use. So those devices get less and less testing in the latest kernels, and they "bit-rot". After 20 years, the poor maintainer of the long-term supported hardware might wind up being the only developer still supporting some old device driver or CPU variant - work that would not have been needed if the kernel base version had been "frozen".
Posted Mar 20, 2018 15:39 UTC (Tue)
by BenHutchings (subscriber, #37955)
[Link]
Posted Mar 19, 2018 18:48 UTC (Mon)
by kees (subscriber, #27264)
[Link] (3 responses)
I'm curious which defensive security features they're interested in backporting, too. Much easier if they started with 4.14. :)
Posted Mar 19, 2018 20:06 UTC (Mon)
by sashal (✭ supporter ✭, #81842)
[Link]
Posted Mar 19, 2018 22:20 UTC (Mon)
by smoogen (subscriber, #97)
[Link]
For a lot of industrial/infrastructure software, the regulations require testing/certification/etc., which can take up to 10 years before it is 'done'. All the tools that are used to build it, and any tools shipped with the OS, have to be 'locked down' for this and remain that way until the hardware is finally turned off in 40 years. [I expect teams in Unisys and HP are still working on versions of OSes that were written before most of the team was born.] If the project started two years ago, they may have already started the paperwork process of 'locking all the nuts and bolts' for these certs. Moving to 4.14 or some other kernel would require it all to start over again.
Posted Mar 20, 2018 15:41 UTC (Tue)
by BenHutchings (subscriber, #37955)
[Link]
Posted Mar 19, 2018 21:20 UTC (Mon)
by atelszewski (guest, #111673)
[Link]
I hope this is good news for the RT preempt patches.
--
Posted Mar 20, 2018 13:53 UTC (Tue)
by tdz (subscriber, #58733)
[Link]
They'd better. On the one hand, having a single distribution that "gets it right" is preferable to a base layer which then gets messed up by whoever builds a system on top of it. A company or organization that cannot build a working and upgradeable distribution on its own probably cannot build one with CIP's base layer either.
On the other hand, building today's software 20 years from now will require today's development environments and toolchains. A single SLTS distribution would provide that.
Besides all this, it's sad that we (as in "software engineers") maintain old systems forever instead of building systems that can be replaced and updated reliably.
Posted Mar 21, 2018 17:11 UTC (Wed)
by magfr (subscriber, #16052)
[Link] (1 responses)
Posted Mar 22, 2018 13:40 UTC (Thu)
by BenHutchings (subscriber, #37955)
[Link]
Posted Mar 29, 2018 12:36 UTC (Thu)
by jechevarriar (guest, #122370)
[Link]
Posted Apr 10, 2018 15:57 UTC (Tue)
by helmut (guest, #104440)
[Link] (2 responses)
This is wrong. It is an utter misrepresentation of what is actually going on. I've sent around 1000 patches improving cross compilation in Debian and seen lots of packages. I've not seen a single cross compilation patch or even bug report that I could attribute to CIP in any way. I've not seen a single mail to the debian-cross@lists.debian.org list that I could connect to CIP in any way. That would be the canonical point of contact for cross compilation issues.
If CIP is working on cross-compilation support, then they are doing it in some Debian-derivative or they are doing it in some kind of "throw over the wall" approach, but they are certainly not working with Debian on that matter.
Posted Apr 20, 2018 13:09 UTC (Fri)
by toscalix (guest, #95313)
[Link] (1 responses)
"CIP is DISCUSSING with Debian to provide longer-term support of a subset of packages, to improve cross-compilation support, and to improve the sharing of DEP-5 license information."
Knowing how accurate Jonathan C. usually is, probably Yoshi, the speaker from CIP (Toshiba), used the wrong verb. If that was the case, I am sure it was far from his intention to offend anybody or to attribute to CIP any work that has not taken place yet.
In any case, CIP and some Debian developers from LTS are just talking for now about a variety of topics. CIP first needs to learn how Debian LTS works, then which sources CIP is interested in creating and maintaining for a very long time as an industrial-grade base system, and finally how this Linux Foundation initiative can actively participate in maintenance tasks together with Debian, as we are currently doing with the Linux 4.4-stable kernel.
To demonstrate its interest, besides talking, CIP sponsored DebConf in 2017 and will again in 2018, as far as I know. CIP will increase its presence at DebConf this year as part of our learning process, and a meeting related to this topic is expected at the event in Taiwan.
Posted Apr 20, 2018 17:57 UTC (Fri)
by helmut (guest, #104440)
[Link]
My complaint only targeted CIP's involvement with cross-compiling Debian. If you want to discuss cross-compiling Debian, I invite you to mail debian-cross@lists.debian.org and/or join the IRC channel #debian-bootstrap on the OFTC network. Even describing your use case and priorities is valuable. For instance, I presently use popcon to prioritize packages. Please don't follow up with LWN comments, though; move the matter to the proper channels.
With an LTS kernel, the "S" stands for support, which implies maintenance (such as security fixes); you will get more changes than your specific use of it would strictly need.
Everything gets upgraded all the time but, obviously, there are layers of safeguards.
Wol
I still see pickups from the 70's on the road.
Super long-term kernel support - fuel availability for 20-year-old cars
Some of them are already switching partly or fully to charging stations. This is huge infrastructure already in place; it would be foolish not to reuse it.
"vs plugging in a charger at home every day"
So, summing up: there is no practical way to convert a corner gas station to a charging station, so it probably is not happening today, and such conversions are not a reason to doubt that gas will be easy to find 20 years from now.
Time to 80% charge - 30 minutes? A decent motorway services rest break.
Wol
If a reduction in consumption means that oil prices stay below (say) $50 per barrel, then us knowing that there are 1,000 years worth of oil that costs $60 per barrel to extract is irrelevant to the peak oil concept; it's only the oil that you can extract and sell at a profit that counts.
Government could always just print more money.
Never mind "accidentally clone me", what about copyright infringement?
> how to run long-haul commercial aircraft on something other than hydrocarbon-based aviation fuel
Wol
This stuff might become a really cool thing.
Best regards,
Andrzej Telszewski
> It is not trying to create a new distribution.
A bit like what happened by accident with Red Hat ES 4 - there were many things that claimed to support Linux but meant 2.4 on x86-32.