LCA: Andrew Tanenbaum on creating reliable systems
Posted Jan 18, 2007 4:20 UTC (Thu) by jwb
Parent article: LCA: Andrew Tanenbaum on creating reliable systems
It sounds like the guy doesn't do that much with his computer. People who play games or edit broadcast video are using their computers right up to the very edge of their capabilities. Consider a GPU. Most GPUs cannot survive a stream of bad commands: if you send the wrong command you will deadlock the part, and the computer will need to be reset. You could redesign the GPU to analyze the incoming command stream, reject bad commands, and return to a known-good state afterwards. Basically you want the GPU to be a device on a network with its own operating system. That would be neither cheap nor easy, and the resulting system would be considerably slower.
High-performance software relies on tricks which are, for the most part, quite unsafe. Your 3D game only works because the GPU is allowed unchecked access to main memory. If you were to start being careful about that access, performance would suffer. The same is true of cluster supercomputing with remote DMA.
Perhaps Tanenbaum envisions two classes of computer users: those willing to absorb the performance hit (because they only run PINE), and those who demand all the capability the technology can offer.