IBM details Blue Gene supercomputer (News.com)
"IBM has begun building the chips that will be used in the first Blue Gene, a machine dubbed Blue Gene/L that will run Linux and have more than 65,000 computing nodes, said Bill Pulleyblank, director of IBM's Deep Computing Institute and the executive overseeing the project. Each node has a small chip with an unusually large number of functions crammed onto the single slice of silicon: two processors, four accompanying mathematical engines, 4MB of memory and communication systems for five separate networks."
Posted May 9, 2003 15:49 UTC (Fri)
by leandro (guest, #1460)
[Link] (3 responses)
Posted May 9, 2003 18:07 UTC (Fri)
by aguila (guest, #11100)
[Link] (2 responses)
Let's avoid emotions and approach this issue from an equivalent comparison which a physics professor would feel comfortable using. Much has been made of the MHz or GHz issue as the "speed" of the processor, so very well, let's maintain that misunderstanding (remember: 1 Hz = 1 cycle per second; it is not analogous to anything a car does as regards travel, unless we are talking about radio or other scanning systems) and assume Hz is analogous to miles per hour. Let's also consider bits as highway lanes into and out of the processor. Now ask: what "work" is being done? Will more work be done with 16, 32, 64 or 128 bits? When you answer that, you'll understand why, in serious mission critical work, when you need every possible accurate digit to tell you a real answer for a real problem, people invariably go RISC or something even "above" that.
History: When the IBM PCjr was 8 bits, the Mac was 32 bits. Initially MS Word and Excel could not run on PCs; they were Mac products first. On the PC you had the system beep; on the Mac you had digitized, recognizable sounds, even music and human speech in real time. And photographs as part of your documents with MacWrite; even TextEdit could do that. And within a very short period of that you had PostScript. Even today a sharp eye can distinguish between a PC-produced article or document and the same document produced from a Mac, even though the PC can control the same printer. The Mac's output remains superior. Many count themselves fortunate today that their jobs do not depend upon the quality of their output on paper.
Current: Most PCs aren't at 64 bits yet; the Mac is already at 128 bits, now, in its G4 processor incarnations. There are systems much more powerful (Sun, IRIX, etc.), but they are also more expensive. For quality and reliable computing available to the masses, the Mac is it, and the only thing better than that is Yellow Dog Linux running on a Mac. Now that's Power to the People. I don't care about games.
For games I play Mah Jong in Linux (YDL 3.0); I have, however, played Mah Jong within Mac OS X on a friend's system, and it is a real effort for me to walk away. It is an absolutely beautiful rendition of Mah Jong, and Apple didn't write it; an Open Source programmer did. You fellows who sweat at Open Source, you have my admiration, but your work would be absolutely beautiful if it ran on OS X! Completeness in science is always recognized by the astute as Beauty, and this is exactly where many will grieve: PCs are many things, but none are beautiful in form or function or production. Luminaries such as Bertrand Russell and Frank Lloyd Wright have made statements regarding form, function, simplicity and Beauty; they may be more correct than we know.
Posted May 9, 2003 23:05 UTC (Fri)
by Peter (guest, #1127)
[Link]
What about people who are "a bit ... uninformed", confusing performance (the amount of work you can get done per second) with bus width or register width (32-bit versus 64-bit, etc.)? Heh, that depends on what definition of "bits" you use. The PCjr memory bus was 8 bits wide, but the CPU registers were 16-bit, and memory addressing was 20-bit. The Mac had (I think) 24-bit memory addressing, 32-bit registers, and either a 16- or 32-bit memory bus; I can't remember. But historical perspective is mostly useless when evaluating a computing platform today. Amiga fans are fond of saying "yeah, but the Amiga did that back in 1987", but it is no longer 1987, so it doesn't really matter. Once again, by what definition of "bits"?
Red herring. What "serious mission critical work" are you talking about? In fields like CFD and FEA, it's much more important to have 64-bit IEEE-compliant floating point operations than to have "all the accuracy you can possibly get". You want results that are repeatable on different runs, and on different platforms, which is what the IEEE spec gives you. For business logic apps (databases etc.), you want a 64-bit address space; thus the Opteron, Power4, IA64, or traditional Unix platforms like SPARCv9 or MIPS-4. Applications that actually want 128-bit quantities (either int or fp) seem to be very few and far between.
Yet, on the other hand, there is the curious engineering phenomenon of something that looks horrible but happens to work very well. One is reminded of the well-known quote about the Ethernet architecture: "It works much better in practice than in theory." You read about CSMA/CD, and it all sounds like a horrible hack, but it works so well that Ethernet stomped out all of its major competition long before the technology advanced to the point where CSMA/CD was no longer needed. Just remember: sometimes Beauty means the design was well thought out and of high quality; other times, it just means they spent too much money on case design and not enough on real R&D.
Posted May 16, 2003 16:43 UTC (Fri)
by giraffedata (guest, #1954)
[Link]
Nice thing they are using the PowerPC... let's hope this will help make GNU on RISC more on a par with CISC.
"...more on a par with CISC"? For God's sake, man, why? This is nitpicking regarding processor development, but this is why all you fellows who skipped classes on Digital Design or Processor Design are a bit ... uninformed, confusing hype (the amount of software) with performance (the amount of bytes processed per cycle). As most know, currently acceptable "good" Intel-based or similar chips are 32 or even 16 bits. The current "top" line, by some reports still at the research-only stage, or in the game market, is 64 bit, again depending on who the source is.
One may consider the processor itself as a large office building with elevators stopping at particular floors (or registers) at a certain "speed". How "wide" are the registers? That IS the question to ask, but very, very few do. For simplicity, and for this argument, we will consider them the same size as the bits. In some designs this is what varies wildly, especially in PCs.
In serious mission critical work, when you need every possible accurate digit to tell you a real answer for a real problem, people invariably go RISC or something even "above" that.
You seem to be confusing width of the registers with RISC vs CISC. You have a detailed argument that more bits are better, but the original comment had nothing to do with that.
