
# Big endian vs little endian


Posted Mar 8, 2007 22:31 UTC (Thu) by giraffedata (subscriber, #1954)
In reply to: Big endian vs little endian by netizen
Parent article: How an Accident of Hardware Design Encouraged Open Source (O'ReillyNet)

> while its native mode was Extended Binary Coded Decimal Interchange Code -- EBCDIC

I've heard this said before, but I've done a great deal of programming in later implementations of that architecture and I can't recall the CPU ever being cognizant of what character code I was using, except in those instructions that convert between EBCDIC and ASCII. And ISTR there were some with which you could take advantage of the fact that the lower 4 bits of the code for a digit are also the binary representation of that digit. (That's true for both EBCDIC and ASCII.) How is EBCDIC native, and what did the ASCII bit do?
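The low-nibble property mentioned above is easy to check. A minimal sketch (the `ascii_zero`/`ebcdic_zero` names are just for illustration; 0x30 and 0xF0 are the actual codes for '0' in ASCII and EBCDIC):

```python
ascii_zero = 0x30   # '0' in ASCII
ebcdic_zero = 0xF0  # '0' in EBCDIC

# In both encodings, masking off the high nibble of a digit's
# character code yields the digit's binary value.
for d in range(10):
    assert (ascii_zero + d) & 0x0F == d
    assert (ebcdic_zero + d) & 0x0F == d
```

This is why a program could strip the zone nibble with a single AND and get a usable binary digit, regardless of which of the two character codes it was handed.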

> A decimal number, e.g. 750124, was converted by the hardware into a series of hexadecimal units.

What is a hexadecimal unit? It sounds a lot like you're talking about what IBM calls packed decimal and everyone else calls BCD: a number code in which each decimal digit is represented in binary in 4 bits and those nybbles are lined up big-endian. But that has nothing to do with hexadecimal.

Also, that was one of two number codings used on S/360. The other was the big-endian pure binary code we've been talking about. Pure binary is easiest to do arithmetic on, but packed decimal is easiest to do input and output on. Many early applications were much more input and output than arithmetic.
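For concreteness, here is a sketch of S/360-style packed decimal encoding: two digits per byte, big-endian nybble order, with a sign nibble (0xC positive, 0xD negative) in the low half of the last byte. The helper name `pack_decimal` is mine, not an IBM API:

```python
def pack_decimal(n: int) -> bytes:
    """Encode an integer as S/360-style packed decimal."""
    sign = 0xD if n < 0 else 0xC
    nibbles = [int(d) for d in str(abs(n))] + [sign]
    if len(nibbles) % 2:
        nibbles.insert(0, 0)  # pad with a leading zero digit
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))

print(pack_decimal(750124).hex())  # 0750124c
```

Note that each decimal digit lands in its own 4-bit field; no base-16 arithmetic is involved, which is why calling this "hexadecimal" is misleading.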

Incidentally, the Intel 8080 also had an instruction (DAA, decimal adjust accumulator) for packed decimal/BCD arithmetic.

> the hex-bits laid out the same way, left-to-right, in HEX, BCD, EBCDIC,
But there is no left or right in computer memory, so this doesn't figure into the choice of big-endian or little-endian. Big-endian is not "left to right." Big-endian means the most significant byte goes in the location with the lowest address.
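The address-order definition can be demonstrated directly with Python's `struct` module, which exposes both byte orders:

```python
import struct

n = 0x12345678
# '>' = big-endian: most significant byte at the lowest address
print(struct.pack('>I', n).hex())  # 12345678
# '<' = little-endian: least significant byte at the lowest address
print(struct.pack('<I', n).hex())  # 78563412
```

Both byte strings are written to memory in increasing address order; only the assignment of significance to addresses differs.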
> Little-endian programming seemed appropriate for such (no insult intended) nickel-and-dime processing using a minimal instruction set.
What is the connection between little-endian and minimal instruction set?
> [Intel counted adding to A-reg and adding to B-reg as two different instructions; IBM counted an instruction which could add to any register, 0 through 15, as one instruction.]
I think you're really pointing out that IBM had expensive general-purpose registers, while Intel had special-purpose registers. In fact, the only Intel CPU of that era that I programmed (the 8080) had only one register you could add to: A (the accumulator). But I must be missing your point anyway; why do we care how people classify the instructions?