
Virtualizing the locomotive

September 16, 2015

This article was contributed by Paolo Bonzini



Among the more curious talks at KVM Forum 2015 was Mark Kraeling's "Virtualizing the Locomotive" (YouTube video and slides [PDF]) that was submitted in the "end-user presentation" category. The topic is exactly what the title says: virtualization for train software. The presenter works for GE Transportation.

There is a lot of electronics and software in a train, but virtualization does not really apply to all of it; GE is not planning to apply virtualization directly affecting the safety of the train, for example. Those safety-critical systems use techniques such as lockstep execution and voting, and they have to go through special certification; the hypervisor just gets in the way too much.
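
To give a flavor of the voting technique mentioned above (this is purely illustrative, not GE's implementation; the triple-redundant setup and the values are invented for the example), several independently computed results are compared and an action is taken only when a majority of them agree:

    from collections import Counter

    def majority_vote(results):
        """Return the value produced by a majority of redundant channels,
        or raise so the system can fail safe when there is no majority."""
        value, count = Counter(results).most_common(1)[0]
        if count * 2 <= len(results):
            raise RuntimeError("no majority -- fail safe")
        return value

    # Three redundant channels; one of them has faulted.
    print(majority_vote(["apply", "apply", "release"]))   # prints "apply"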

However, a train contains a lot more software, written by multiple suppliers, each of which used to provide completely separate hardware too. For example, a lot of the functionality is related to remote control. It is common for an engineer at the front of the train to drive locomotives at the back of the train, possibly two miles away and on the other side of a hill. Radio-based communication is also used to drive locomotives from the ground at stations or maintenance facilities ("a giant train set", as Kraeling called them).

It is very common therefore to have not just a separate processor per application, but even to duplicate components such as cell phone modems. A hardware platform that supports virtualization can go against this trend by encouraging consolidation. Deploying all these systems onto a single multi-core processor saves money and enables more reuse of hardware. And if functionality can be added to an existing system just by dropping a new virtual machine (VM) into it, there is no problem if some applications are developed for a specific Linux version, for a legacy OS, or even for Windows CE. This flexibility is another advantage of virtualization.
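
As a sketch of what "dropping a new VM into it" can look like on the KVM side (the domain name, memory size, and disk image below are placeholders, not details from the talk; the Xen case is similar through libvirt), the libvirt Python bindings can define and boot a guest from an XML description:

    import libvirt

    # Minimal guest description; every value here is illustrative.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>new-app</name>
      <memory unit='MiB'>256</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/images/new-app.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")   # connect to the local KVM host
    dom = conn.defineXML(DOMAIN_XML)        # make the guest persistent
    dom.create()                            # boot it
    conn.close()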

However, consolidation requires some level of sandboxing between the different VMs. This is not unlike a data center, and Kraeling in fact used the analogy of a "data center on wheels" several times; but some of the specific use cases for isolation are interesting. It can be hard to patch locomotive software because, unlike commodity servers, locomotives cost millions of dollars and customers really do not want them to sit idle while updates are tested. The requirements this imposes on software quality are obvious, but you also need to make the most of whatever testing time you can get in the field. For example, an effective way to validate new software is to place it on production locomotives purely to collect logs until the software crashes, and to collect data about the crash itself. Because that code is not yet stable, it is not really in use, and it must not interfere with the other VMs or with the control systems. Virtualization helps a lot with this kind of sandboxing, of course.
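
Collecting that crash data is something the host side can automate. A minimal sketch using the libvirt Python bindings (again an assumption, not something described in the talk) registers a lifecycle-event callback and logs whenever a guest under test stops or crashes:

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Log stops and crashes of the guests under test; a real system
        # would also save a core dump or the guest's console output here.
        if event in (libvirt.VIR_DOMAIN_EVENT_STOPPED,
                     libvirt.VIR_DOMAIN_EVENT_CRASHED):
            print("guest %s: lifecycle event %d, detail %d"
                  % (dom.name(), event, detail))

    libvirt.virEventRegisterDefaultImpl()        # built-in event loop
    conn = libvirt.open("qemu:///system")
    conn.domainEventRegisterAny(None,            # None = watch all guests
                                libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)

    while True:                                  # pump libvirt events forever
        libvirt.virEventRunDefaultImpl()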

Hardware and software

After explaining the use cases, Kraeling presented the system's hardware and software. The processors selected are x86 and ARM. x86 processors are used for compute-heavy applications. ARM is mostly used for networking services, though 64-bit ARM has the potential to replace x86 as well. All processors are quite low-end, and they often run a 32-bit hypervisor and operating system in order to save memory. Xen was faster than KVM on 32-bit x86, so Xen was used there; on the other hand, KVM was faster than Xen for ARM hardware. GE wants to use KVM with 64-bit ARM as well when the processors are ready.

The focus on low-end x86 and low-power ARM parts is driven by power consumption, which can be a real issue for the locomotive's computer systems. This may be surprising, because on a diesel locomotive you basically have a power plant at hand. However, power consumption translates directly into heat, and the inside of a locomotive can easily reach 70°C. If the surrounding environment is as hot as the processor, you cannot easily solve overheating problems by adding fans. And even though there is room for fans at the bottom of the chassis, customers do not like maintaining and cleaning them. The low-end, power-conscious x86 processors GE uses (the Bay Trail E3845 and the Broadwell 5500U) can run at those temperatures without fans.

For the management layer, there's no standardized tool yet, but GE is looking into OpenStack. A member of the audience pointed out that the libvirt project was started to bridge the differences between Xen and KVM, so it may help GE as well if it does not need the complexity of OpenStack.
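
One attraction of libvirt in a mixed setup like this is that the same management code can drive either hypervisor just by switching the connection URI. A minimal sketch (the URIs are libvirt's standard ones; the rest is illustrative):

    import libvirt

    def running_guests(uri):
        """Return the names of the running guests behind the given URI."""
        conn = libvirt.openReadOnly(uri)
        names = [dom.name() for dom in
                 conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)]
        conn.close()
        return names

    # The same code path works for the Xen-based x86 systems and the
    # KVM-based ARM systems.
    print(running_guests("xen:///system"))
    print(running_guests("qemu:///system"))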

Kraeling has tested containers as well. The isolation and flexibility are, of course, not as good as what you get from virtual machines, but containers were faster on ARM, so GE is looking into CoreOS and Docker. He hasn't yet looked at why x86 didn't benefit from containers; my guess would be that different applications run on the two architectures, and CPU-bound applications are already quite efficient in a virtual machine.
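
For comparison, the container path amounts to starting a constrained process on the host. A hedged sketch with the Docker Python SDK (the image name and the resource limits are made up; nothing in the talk specifies them):

    import docker

    client = docker.from_env()
    # Run one application image with modest CPU and memory limits,
    # roughly playing the role of a small guest.
    container = client.containers.run(
        "logging-app:latest",     # hypothetical application image
        detach=True,
        name="logging-app",
        mem_limit="256m",
        cpuset_cpus="0",          # pin to one core
    )
    print(container.status)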

While high-level, the talk gave an interesting glimpse into a field that most developers are not familiar with; such "embedded" usage of a data center hypervisor like Xen or KVM is not a well-known topic. But, in fact, a presentation [PDF] from KVM Forum 2010 has some striking similarities with Kraeling's. Embedded virtualization is probably more widespread than one would think, and will likely become even more common in combination with realtime virtualization.

Index entries for this article
GuestArticles: Bonzini, Paolo
Conference: KVM Forum/2015



Virtualizing the locomotive

Posted Sep 17, 2015 14:57 UTC (Thu) by pj (subscriber, #4506)

Has anyone built some kind of 'virtualization concurrence' layer? The idea would be to present the same VM to two sets of software, one live and one that's supposed to 'concur' (i.e. produce the same (or better?) outputs to some VM facility). This way you run production software as 'live' and your new version as 'supposed to concur' and can see when/if it doesn't. A slight modification might allow the 'multiple masters must concur before action is taken' kind of redundancy that things like space vehicles used to implement.

Virtualizing the locomotive

Posted Sep 18, 2015 17:38 UTC (Fri) by jki (subscriber, #68176)

COLO (coarse-grained lock-stepping for KVM) is aimed at a different use case (hot standby), but it basically comes with the host-side pieces to enable this.

But this alone may not help to improve the overall confidence in the produced results. Depending on the target application, regulatory requirements, etc., you may also need diversity in the infrastructure, i.e. the virtualization platform AND the hardware. Or you need to verify that both behave correctly in the absence of hardware errors and that they detect all errors reliably enough. That will be tough to achieve with normal virtualization stacks, and that's why we started the Jailhouse hypervisor project.

Virtualizing the locomotive

Posted Sep 24, 2015 14:36 UTC (Thu) by geek (guest, #45074)

"GE is not planning to apply virtualization directly affecting the safety of the train, for example."

Doesn't anyone else see a problem here? A two-mile-long train loaded with bitumen and only "indirect???" safety issues introduced by software errors??? OMG and WTF!

Virtualizing the locomotive

Posted Sep 24, 2015 18:15 UTC (Thu) by bronson (subscriber, #4806)

You're saying you actually do want them to use virtualization in areas affecting the safety of the train?

Virtualizing the locomotive

Posted Sep 24, 2015 18:38 UTC (Thu) by raven667 (subscriber, #5198)

I couldn't actually parse the OP's statement, so I don't think it meant anything substantial; all I got out of it was inchoate, pretentious outrage.


Copyright © 2015, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds