Big endian vs little endian
Posted Mar 2, 2007 19:34 UTC (Fri) by giraffedata
Parent article: How an Accident of Hardware Design Encouraged Open Source (O'ReillyNet)
The article makes some crucial mistakes in describing why there are two systems.
It says the DEC engineers, in contrast to the IBM engineers, wanted to number the bytes the same as the bits. But IBM engineers also numbered the bytes the same as the bits. In all 360-related documentation, IBM numbers the most significant bit 0, just as it does the most significant byte.
It also talks about numbering bytes left to right, but there is no left and right in computer memory.
The crux of this conflict is that we write numbers so that the most significant digit is read earliest. Earlier positions in reading order correspond to lower addresses, but higher digit significances naturally correspond to higher numbers. IBM engineers decided to be compatible with the writing convention, while DEC engineers decided to be compatible with arithmetic. The DEC way makes for cleaner programs, but the IBM way is much easier for humans to visualize and talk about.
Besides, the byte ordering decision wasn't a decision about byte numbering; it was the other way around: the bytes were already numbered (they have addresses), and engineers had to decide in which byte to put the most significant bits -- the high-numbered one or the low-numbered one.
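To make that concrete, here's a quick sketch (using Python's struct module and an arbitrary example value) of where the most significant bits land under each convention:

```python
import struct

value = 0x0A0B0C0D  # arbitrary 32-bit example value; 0x0A is the most significant byte

# Pack the same value both ways; index in the result = byte address offset.
big = struct.pack(">I", value)     # IBM convention: most significant bits in the low-numbered byte
little = struct.pack("<I", value)  # DEC convention: least significant bits in the low-numbered byte

print(["%02x" % b for b in big])     # ['0a', '0b', '0c', '0d']
print(["%02x" % b for b in little])  # ['0d', '0c', '0b', '0a']
```

Read in ascending address order, the big-endian layout matches how we'd write the number on paper, while the little-endian layout puts the bytes in ascending order of significance.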