
Couldn't it be faster?

Posted Mar 30, 2012 3:13 UTC (Fri) by felixfix (subscriber, #242)
In reply to: Couldn't it be faster? by bshotts
Parent article: Grinberg: Linux on an 8-bit micro?

The earliest 8-bit computer I used, a Datapoint 5500 in 1976, had its fastest instruction (LAA, a single-byte nop) at 1.2 usec, roughly equivalent to an 8080; the 8008 was slower. I've forgotten the memory load/store times, but 2.0 usec comes to mind. RAM was 56K (ROM took up the other 8K, more or less). Disk was 2.5MB fixed plus 2.5MB removable, one 14" platter each. No memory of seek times, etc.

The earliest Unix computers were a few years earlier but had more core and bigger disks. I'd guess roughly the same cycle time, but 36-bit words (PDP-6, -10), and I would guess 32K or 64K of that. The CDC 7600 supercomputer from 1968 had a basic instruction time of 27.5 nsec, with memory read/write at ten times that, but with pipelining; if memory serves, some small multiple of 64K 60-bit words, i.e. maybe 1-2MB of core. A full floating-point divide was only a few cycles, 2-3? 5? Seymour Cray was a frickin genius.


Couldn't it be faster?

Posted Mar 30, 2012 3:53 UTC (Fri) by donbarry (guest, #10485)

I think you're misremembering. The 7600, like its brethren, had a scoreboard reservation system for its functional units, and a dedicated divide unit. Thus it could pipeline-issue a divide and immediately continue until a dependency arose (like modern processors, but decades earlier). But the slowest instructions on the CDC processors, by a significant margin, were floating divide and (initially) the population count instruction (how many 1s in the 60-bit word), which rumor has it was added at the request of a nameless intelligence agency.

According to the CDC 7600 hardware instant manual, floating divide (rounded or unrounded) on the processor took 20 minor cycles. Floating product took 5. Add/subtract were 4. Population count (58 cycles on the 6400) was down to 2 cycles on the 7600.

I cut my teeth on a Cyber 74. Ahh, those were the days.

Couldn't it be faster?

Posted Mar 30, 2012 4:59 UTC (Fri) by eru (subscriber, #2753)

I'd guess roughly the same cycle time, but 36 bit words (PDP-6, -10)

Unix development was done mostly on PDP-11 series machines, which were 16-bit with 8-bit bytes. Force-fitting C to work on those 36-bit architectures that were not byte-addressable came much later, I believe. The C language would probably have been very different if Kernighan & Ritchie had been using the word-addressable CPUs that were common in those days.

Couldn't it be faster?

Posted Apr 21, 2012 1:48 UTC (Sat) by ssavitzky (subscriber, #2855)

If you look at the PDP-11's instruction set and compare it to C's set of operators, you suddenly realize that C is really just a "structured assembler" for the PDP-11.

LISP and FORTRAN were similarly based on the IBM 704.

Couldn't it be faster?

Posted Apr 6, 2012 8:01 UTC (Fri) by Cato (guest, #7643)

Unix started on a PDP-7 (18-bit) and was then ported to the PDP-11 (16-bit) - http://en.wikipedia.org/wiki/Unix#History


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds