
Raspberry Pi interview: Eben Upton reveals all (Linux User)

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 6, 2012 13:51 UTC (Tue) by dufkaf (guest, #10358)
In reply to: Raspberry Pi interview: Eben Upton reveals all (Linux User) by ssvb
Parent article: Raspberry Pi interview: Eben Upton reveals all (Linux User)

Well, it may be a bit worse here. The OMAP ROM boots your code and you have full access to the whole hardware. Here it is all inside out: the ARM part is just a GPU coprocessor, so the GPU first boots its own OS from the card and only then gives some RAM to the ARM core and enables it. The only part that is free for you to play with is the ARM sandbox, with the subset of accessible hardware described in http://www.raspberrypi.org/wp-content/uploads/2012/02/BCM2835-ARM-Peripherals.pdf One notable missing part is the video-out hardware: there appears to be no access to it from the ARM core at all; you just get a memory-mapped framebuffer already preconfigured by the GPU. The SD/MMC slot is missing from the datasheet too. To me it looks similar to running Linux/OtherOS under the hypervisor on the PS3. Is this worse than having non-upgradable OMAP ROM code? :-)
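(For readers unfamiliar with how a "memory-mapped framebuffer already preconfigured by the GPU" is used in practice: the ARM side just computes a byte offset into a linear buffer from the pixel coordinates and the scanline pitch. A minimal sketch, with hypothetical example parameters — the pitch and depth would really come from whatever the GPU set up:)

```python
def fb_offset(x, y, pitch, bytes_per_pixel):
    """Byte offset of pixel (x, y) in a linear framebuffer.

    pitch is the length of one scanline in bytes; it may exceed
    width * bytes_per_pixel because of alignment padding added by
    whoever configured the framebuffer (here, the GPU).
    """
    return y * pitch + x * bytes_per_pixel

# Hypothetical 640x480, 16-bpp framebuffer with a 1280-byte pitch:
assert fb_offset(0, 0, 1280, 2) == 0       # top-left pixel
assert fb_offset(1, 0, 1280, 2) == 2       # one pixel to the right
assert fb_offset(0, 1, 1280, 2) == 1280    # start of the second scanline
```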



Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 6, 2012 13:59 UTC (Tue) by dufkaf (guest, #10358) [Link]

Oh, sorry, the SD/MMC is there, chapter 5 External Mass Media Controller.

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 6, 2012 14:56 UTC (Tue) by ssvb (guest, #60637) [Link]

> The OMAP ROM boots your code and you have full access to whole hardware.

Not really full access. Any hardware registers which are only readable/writable in the secure state are locked out and need some support in the ROM API to be accessed from the Linux kernel. The comments from Russell King are quite interesting here:
http://comments.gmane.org/gmane.linux.linaro.announce.boo...

Beagleboard is an amazing project and was a real breakthrough for its time. But even more freedom would definitely be nicer, and I'm ready to move to less restricted hardware any time without any regrets, assuming it is also competitive in other respects :)

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 6, 2012 20:09 UTC (Tue) by epa (subscriber, #39769) [Link] (7 responses)

Hmm, why not drop the ARM chip altogether and run the general-purpose OS on the GPU?

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 6, 2012 20:58 UTC (Tue) by khim (subscriber, #9252) [Link] (6 responses)

Contemporary GPUs are not well suited for this: single-thread performance is abysmal. They really need thousands of threads to stay busy, which is a problem if your code has many conditional jumps (and a “general-purpose” OS has a huge number of them).

But yes, eventually it should become possible.
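(The conditional-jump problem khim mentions comes from the SIMT execution model: lanes in a warp share one instruction stream, so when they disagree at a branch, the warp executes both paths serially. A toy model of that cost, with made-up cycle counts purely for illustration:)

```python
def warp_cycles(predicates, then_cost, else_cost):
    """Cycles for one warp executing an if/else under a simplified SIMT
    model: if any lane takes a path, the whole warp pays for that path."""
    cycles = 0
    if any(predicates):          # at least one lane takes the then-branch
        cycles += then_cost
    if not all(predicates):      # at least one lane takes the else-branch
        cycles += else_cost
    return cycles

# A uniform warp pays for one path; a divergent warp pays for both.
assert warp_cycles([True] * 32, 10, 10) == 10
assert warp_cycles([False] * 32, 10, 10) == 10
assert warp_cycles([True] * 16 + [False] * 16, 10, 10) == 20
```

Branch-heavy general-purpose code diverges constantly, so the doubled (or worse, with nesting) cost is the common case rather than the exception.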

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 7, 2012 2:14 UTC (Wed) by bronson (guest, #4806) [Link]

Exactly right. Be learned this the hard way when it built an entire computer around an AT&T Hobbit DSP. The microbenchmarks were mind-blowing, so people were surprised when the OS as a whole felt really sluggish and no amount of optimization helped. You're dead in the water if you can't keep the pipelines full, and DSPs and GPUs have absurdly large pipes.

I don't know if others have learned this lesson the hard way but, since the 80s, who hasn't thought at one time or another, "holy crap, look at the throughput on that sucker! No need for a CPU!"

The convergence is happening but it's taking longer than most people would have guessed.
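(The "keep the pipelines full" point can be put in rough numbers with the standard cycles-per-instruction model: every mispredicted branch flushes the pipeline, and the deeper the pipe, the bigger the penalty. The figures below are illustrative, not measurements of any real chip:)

```python
def effective_cpi(base_cpi, branch_fraction, mispredict_rate, flush_penalty):
    """Average cycles per instruction once branch mispredictions are
    charged: each mispredict costs a full pipeline flush."""
    return base_cpi + branch_fraction * mispredict_rate * flush_penalty

# Branchy code (20% branches, 10% mispredicted) on a shallow vs. deep pipe:
shallow = effective_cpi(1.0, 0.2, 0.1, 5)    # 5-cycle flush
deep = effective_cpi(1.0, 0.2, 0.1, 40)      # 40-cycle flush
assert abs(shallow - 1.1) < 1e-9
assert abs(deep - 1.8) < 1e-9                # ~64% slower on the same code
```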

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 7, 2012 11:02 UTC (Wed) by epa (subscriber, #39769) [Link] (4 responses)

Abysmal as in 'about as good as a PC of 15 years ago', or abysmal as in 'even worse than that'?

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 7, 2012 13:04 UTC (Wed) by khim (subscriber, #9252) [Link] (3 responses)

With a DSP it's probably the first; with a GPU it's the second. A single IF may require hundreds of ticks to handle on contemporary GPUs, and their frequency is usually much lower than the frequency of contemporary CPUs.

The same processing unit can be used to process other execution threads (that's why you need hundreds of thousands of threads to fill the pipeline on a GPU with ~1000 execution units), but if it's a single-threaded OS… well, 99.99% of the GPU's power will be spent waiting.
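(The 99.99% figure follows directly from the arithmetic: each execution unit needs several resident threads to hide latency, so the number of threads required for full utilization is units × hiding factor. A sketch, with the ~1000 units from above and an assumed hiding factor of 10:)

```python
def gpu_utilization(runnable_threads, execution_units, latency_hiding_factor):
    """Fraction of peak throughput achieved: each execution unit needs
    several resident threads to hide memory and pipeline latency."""
    threads_needed = execution_units * latency_hiding_factor
    return min(runnable_threads / threads_needed, 1.0)

# ~1000 units, each wanting ~10 resident threads to stay busy:
assert gpu_utilization(1, 1000, 10) == 0.0001        # single thread: 99.99% wasted
assert gpu_utilization(10000, 1000, 10) == 1.0       # tens of thousands: full
```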

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 7, 2012 14:04 UTC (Wed) by epa (subscriber, #39769) [Link] (2 responses)

I wonder... typically your PC is idle, waiting for a keystroke. When it does have to compute something, it is either pretty simple or something computationally intensive like encryption or video decoding, which runs well on a GPU. The most CPU-intensive, branch-heavy thing in day-to-day use is probably rendering complex HTML pages.

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 7, 2012 16:01 UTC (Wed) by khim (subscriber, #9252) [Link] (1 responses)

> The most CPU-intensive, branch-heavy thing in day-to-day use is probably rendering complex HTML pages.

This is kind of obvious: since there are whole OSes which do everything as “complex HTML pages” (webOS, ChromeOS, B2G, etc.), literally any task can be covered by that definition. But even “simple”, “easy” things are in reality quite computationally heavy. Think TrueType rendering: basically unavoidable in a contemporary OS and truly ubiquitous, yet very branch-heavy and power-hungry. Sure, you can employ some caching and make it more or less bearable, but from a power-supply POV all such tricks only go so far: a significantly more energy-efficient way is to add some kind of traditional CPU core.

Raspberry Pi interview: Eben Upton reveals all (Linux User)

Posted Mar 7, 2012 17:25 UTC (Wed) by epa (subscriber, #39769) [Link]

TrueType rendering is CPU-intensive but not unmanageable. I remember using outline fonts on an Archimedes with an 8MHz ARM processor. They were rendered into bitmaps (with sub-pixel anti-aliasing and hinting) as needed; it took a fraction of a second per glyph, after which the bitmap was cached for future use.

You are right, though, that it is much more power-efficient to have a traditional CPU core do these things than to force a big lump of GPU silicon to do tasks it's not well suited for. I was thinking only of making the cheapest possible hardware for something like the Raspberry Pi, which doesn't run from batteries.

(By HTML I meant rendering only, not JavaScript execution; these days, with reasonably quick JavaScript engines, even most web applications spend most of their time idling the CPU, waiting for the next keystroke.)
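(The render-once-then-cache scheme described for the Archimedes fonts can be sketched in a few lines; the string-returning rasterizer below is a stand-in for the real, expensive outline-to-bitmap step:)

```python
class GlyphCache:
    """Rasterize each glyph once, then reuse the cached bitmap, in the
    spirit of the Archimedes outline-font system described above."""

    def __init__(self, rasterize):
        self.rasterize = rasterize   # expensive outline -> bitmap function
        self.cache = {}
        self.renders = 0             # how many times we paid the full cost

    def get(self, char, size):
        key = (char, size)
        if key not in self.cache:
            self.renders += 1        # fraction-of-a-second hit, once per glyph
            self.cache[key] = self.rasterize(char, size)
        return self.cache[key]

# Stand-in rasterizer; a real one would do hinting and anti-aliasing.
cache = GlyphCache(lambda c, s: f"bitmap:{c}@{s}")
for _ in range(1000):                # 1000 uses of the same glyph...
    cache.get("A", 12)
assert cache.renders == 1            # ...but only one expensive render
```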


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds