
The perennial "Nuclear Power Plant" example

Posted Oct 12, 2004 19:37 UTC (Tue) by arget (guest, #5929)
In reply to: The perennial "Nuclear Power Plant" example by sbergman27
Parent article: Approaches to realtime Linux

A nuclear power plant operates with that wonderful oxymoron, a controlled fission chain reaction. A highly energetic neutron hits a Uranium (or Plutonium) atom and splits it into two smaller atoms, with some heat energy and a neutron or two left over that can in turn go on to split more Uranium atoms.

It's a balancing act: with too many neutrons, the reaction goes "super-critical" and releases exponentially more energy, potentially doubling in sub-second time frames (periods). A bomb is designed to go super-critical very, very quickly. A normally functioning reactor operates at "critical" with a period of infinity, right on the razor's edge between super-critical and sub-critical (where there are not enough neutrons to sustain a chain reaction). Because of some inherent randomness, the reactor is generally a hair to one side or the other of critical.

Modern reactors are designed with a geometry such that things don't get too "hot" (or too "cold") too quickly, so you have some time to adjust as your period drops from infinity into positive or negative numbers. The razor's edge is more like a broad ridge. Even so, you want to be able to respond quickly; you can't wait for a computer to reboot. Is the required response ever on the order of micro- or even milliseconds in a (modern, Western) reactor? Nah, but it could get within minutes, or tens of seconds. Really, space travel is probably a better example of something that needs to be controlled within microseconds.



The perennial "Nuclear Power Plant" example

Posted Oct 12, 2004 19:58 UTC (Tue) by euvitudo (guest, #98) [Link]

I like your description. There has been a bit of discussion about real-time Linux in my workplace. A group is writing software that receives a stream of bits from a set of CCDs used for astronomical observations. They chose Linux as the platform, but found that they were losing a row of data every so often (during each readout) due to the kernel going out to make sure its shirt was properly tucked in.

The obvious need here is not to lose track of the stream (in this case, flood) of bits coming from the hardware. I can imagine (though this may not actually be the case) that if a nuclear reactor has been streaming bits to its warning systems, you certainly do not want to find out that the kernel was taking a short bathroom break. For my needs, I do not require a real-time system; if the kernel pauses for a brief moment to do some catch-up work, I don't care.
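[ For what it's worth, a common first step for this kind of acquisition loop on stock Linux is to lock the process's memory and run the readout under SCHED_FIFO, so routine housekeeping is less likely to preempt it mid-row. The following is a minimal sketch of that idea only, not the group's actual code; "/dev/ccd0" and the 4096-byte row size are made-up placeholders. ]

/* Sketch: memory-locked, SCHED_FIFO readout loop (illustrative only). */
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    struct sched_param sp = { 0 };
    static char row[4096];              /* static buffer: no page fault mid-loop */
    int fd;

    /* Lock current and future pages so a page fault cannot stall the loop. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return EXIT_FAILURE;
    }

    /* SCHED_FIFO runs ahead of normal time-sharing tasks (needs privilege). */
    sp.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");
        return EXIT_FAILURE;
    }

    fd = open("/dev/ccd0", O_RDONLY);   /* hypothetical capture device */
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Drain one row at a time; a lower-priority thread would consume them. */
    while (read(fd, row, sizeof(row)) == (ssize_t)sizeof(row)) {
        /* ... queue the row for the non-real-time part of the pipeline ... */
    }
    close(fd);
    return EXIT_SUCCESS;
}

This reduces, but does not eliminate, dropouts on a non-real-time kernel; it is exactly the gap the realtime approaches discussed in the article are meant to close.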

OT: safer nuclear reactors

Posted Oct 13, 2004 12:41 UTC (Wed) by jvotaw (subscriber, #3678) [Link] (5 responses)

[ Note: this is definitely not my field; apologies if I get this wrong. ]

For what it's worth, there are some designs of nuclear reactors that are fairly safe. Yes, they're operating in "critical", but it's unlikely that they will go super-critical quickly.

The two broadest relevant questions about a reactor design are: how stable is the speed of the nuclear reaction? And if it becomes unstable, does the speed tend to increase or decrease?

Chernobyl uses a fairly unstable design that tends to get hotter if it gets out of control. A counter-example is the CANDU reactor, which is pretty stable and safe.

There are even better designs which have not yet been implemented, such as CAESAR. As I understand it, this design uses depleted, non-radioactive Uranium as fuel. Steam moderates the neutrons to precisely the speed at which they will cause depleted Uranium to split. If the reactor overheats or underheats, the density of the steam changes, the neutrons no longer move at the speed necessary to sustain the reaction, and the reaction stops. The advantages of using depleted Uranium as fuel include rods that are 100% fuel, instead of around 5% in traditional reactors, which means roughly 40 years of power without replacing them. Also, the fuel rods are not usable for nuclear weapons either before or after they are used, so we'd have the option of building these reactors in unstable countries without increasing nuclear proliferation.

Again, this is definitely not my field, so please forgive me (and correct me) if I'm wrong.

-Joel

OT: safer nuclear reactors

Posted Oct 14, 2004 9:58 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

`non-radioactive Uranium'? An interesting substance: a shame it doesn't exist.

OT: safer nuclear reactors

Posted Oct 14, 2004 13:34 UTC (Thu) by jvotaw (subscriber, #3678) [Link]

I stand corrected. Even pure U-238 is (minimally) radioactive, it seems.

The larger point remains: this is a substance that is widely considered safe enough to be used in ceramic glazing, sailboat keels, race cars, oil drills, etc. (Although, admittedly, not safe enough that you'd want to turn it into a powder and disperse it into the air or water.)

Thanks, Wikipedia.

-Joel

OT: safer nuclear reactors

Posted Oct 15, 2004 20:07 UTC (Fri) by Baylink (guest, #755) [Link] (2 responses)

I believe the substance in question is "depleted uranium", as used in weapons systems, among other things.

A better analogy, IMHO, for when hard realtime response is necessary, would be industrial robotics: if a 400lb swingarm is about to crush a human, guaranteed millisecond response is in fact essential.

But Linus and I had an exchange about this, a few years back, carboned to this very venue, and he convinced me that if what you need is that hard realtime, then you should probably not be doing anything else with that computer.

http://lwn.net/2000/0713/backpage.php3

OT: safer nuclear reactors

Posted Oct 21, 2004 14:15 UTC (Thu) by alext (guest, #7589) [Link] (1 responses)

Generally true with respect to ordinary OS tasks. Often, though, you want to respond to specific events within a fixed time limit, or always do X at interval Y. Neither of those uses all of the CPU, which leaves gaps to fill. What you do the rest of the time is low-priority work for which it doesn't matter that it isn't happening bang on interval Y to within nanoseconds (a rough sketch of the periodic pattern follows this comment).

That is my experience from automotive engine controllers, on which we do lots of low-priority things. The issue that comes into play is testing and validation: if you are running other tasks on a controller alongside safety-critical tasks, you generally want to test everything to the higher standard when mixing on a shared host.

Relatedly, running something like Linux as a low-priority task under a hard real-time system gives you the argued (I have my doubts) ability to sandbox the non-safety-critical tasks so that they can't interfere with the safety-critical portion.
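[ To make the "always do X at interval Y" pattern concrete, here is a minimal sketch of a periodic SCHED_FIFO loop on Linux that wakes on absolute deadlines, leaving the gaps for lower-priority work. It is an illustration under assumptions, not the poster's engine-controller code; control_step(), the 10 ms period, and priority 50 are placeholders. ]

/* Sketch: fixed-interval work at real-time priority (illustrative only). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PERIOD_NS 10000000L             /* interval Y: 10 ms */

static void control_step(void)
{
    /* ... read sensors, update outputs ... (placeholder) */
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };
    struct timespec next;

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* needs root or CAP_SYS_NICE */
        return EXIT_FAILURE;
    }

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        /* Sleep until an absolute deadline so jitter does not accumulate. */
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        control_step();                 /* the hard-deadline work */
    }
    return EXIT_SUCCESS;                /* not reached */
}

Everything at lower priority simply runs in whatever time is left between wakeups, which is the "gaps to fill" point made above.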

OT: safer nuclear reactors

Posted Oct 21, 2004 17:07 UTC (Thu) by Baylink (guest, #755) [Link]

This is, as always, a tradeoff.

Response latency can usefully be characterized as "M% of the time, the system will successfully respond within N ms." The more important it is to you, the closer to 100 M must be. (A rough way to measure M for a given N is sketched below.)

But the underlying point is that for values of M less than 100.0, it's often possible to combine soft-real-time techniques with throw-hardware-at-it and get a useful result. And Linus' assertion, with which I now agree, is that if you really need 100.0% (because people may be hurt or killed, or the value of what may be destroyed is sufficiently high), then at *best* you should indeed be running Linux as a task under a small, tight, HRT kernel.

LinuxRT and RTAI may be good enough; they may not.
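[ As a rough illustration of the "M% of the time within N ms" characterization above (an editorial sketch, not from the thread): the loop below counts how many periodic wakeups land within a chosen bound, which estimates M for a given N. The 1 ms period and 100 us bound are arbitrary assumptions; running it with and without real-time priority (e.g. via chrt) shows how much scheduling policy moves M. Tools like cyclictest do this far more thoroughly. ]

/* Sketch: estimate M (fraction of wakeups within bound N) -- illustrative only. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PERIOD_NS   1000000L            /* wake every 1 ms */
#define BOUND_NS     100000L            /* "N": 100 us allowed overshoot */
#define SAMPLES       10000

static long ts_diff_ns(struct timespec a, struct timespec b)
{
    return (a.tv_sec - b.tv_sec) * 1000000000L + (a.tv_nsec - b.tv_nsec);
}

int main(void)
{
    struct timespec next, now;
    long within = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < SAMPLES; i++) {
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (ts_diff_ns(now, next) <= BOUND_NS)      /* woke within the bound? */
            within++;
    }
    /* "M": the fraction of wakeups that met the deadline. */
    printf("M = %.3f%% of wakeups within %ld us\n",
           100.0 * within / SAMPLES, BOUND_NS / 1000);
    return EXIT_SUCCESS;
}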

