
LFCS: A $99 supercomputer

Posted May 9, 2013 8:17 UTC (Thu) by Riba78 (guest, #84615)
Parent article: LFCS: A $99 supercomputer

I think what is left unsaid in the hype about Adapteva's Epiphany products is that they are not general-purpose CPUs. You can't, for example, take the Apache source code, compile it for Epiphany, and get a performance boost.

The Epiphany architecture is more like a GPU: an array of simple cores with small local memories and a shared memory bus. To use Epiphany, you need to port your software specifically to that platform (as with the PS3's Cell) or use OpenCL (as with GPUs). This makes it usable only for the special-purpose tasks that are already designed for DSPs and GPUs: signal processing, rendering, scientific calculations, etc.
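To make the "port your software" point concrete, here is a pure-Python analogy (not real OpenCL, and `launch` is a made-up stand-in for `clEnqueueNDRangeKernel`) of the data-parallel kernel model that this style of hardware demands: the same kernel function runs once per work-item, indexed by a global ID, with no shared control flow between items.

```python
def vec_add_kernel(gid, a, b, out):
    """One work-item: each invocation handles a single element."""
    out[gid] = a[gid] + b[gid]

def launch(kernel, global_size, *args):
    """Hypothetical stand-in for an NDRange launch: run the kernel
    once for every global ID."""
    for gid in range(global_size):
        kernel(gid, *args)

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(vec_add_kernel, 4, a, b, out)
print(out)  # [11.0, 22.0, 33.0, 44.0]
```

On a real device the work-items would execute concurrently across the cores; Apache's request-handling loop simply doesn't decompose into this shape without a rewrite.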

For PCs, you already have hundreds of GPU cores available on any modern graphics card, programmable for computational tasks via OpenCL. Even the mobile sector is getting bigger GPUs; the Tegra4 seems to have a 72-core GPU and claims 92 GFLOPS of performance. It would be interesting to see the Parallella benchmarked against the Tegra4.
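As a back-of-envelope check on the Tegra4 figures quoted above (72 cores, a claimed 92 GFLOPS), each core would need to sustain a bit under 1.3 GFLOPS:

```python
# Numbers taken from the comment above; the claim is NVIDIA's, not measured.
claimed_gflops = 92.0
cores = 72
per_core = claimed_gflops / cores  # GFLOPS each core must sustain
print(round(per_core, 2))  # ~1.28 GFLOPS per core
```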



LFCS: A $99 supercomputer

Posted May 9, 2013 13:43 UTC (Thu) by kpfleming (subscriber, #23250) [Link] (2 responses)

That wasn't 'missing' at all; it was the bulk of the post. He said multiple times that software needs to be rewritten to take advantage of wide parallelism, and even that programmers will need to learn new skills (be retrained). Having a $99 board on which to learn such techniques is a huge step in the right direction.

LFCS: A $99 supercomputer

Posted May 9, 2013 14:00 UTC (Thu) by Funcan (guest, #44209) [Link] (1 responses)

What kpfleming said. Think of this like the Raspberry Pi: you're probably not going to replace all your existing systems with it, but it is fairly cheap, hopefully great for teaching, and people will find novel uses for it.

LFCS: A $99 supercomputer

Posted May 11, 2013 3:29 UTC (Sat) by Trelane (subscriber, #56877) [Link]

For teaching supercomputing, the LittleFe cluster design is hard to beat: about $1-2k, fits in checked luggage, and supports CUDA+OpenMP+MPI.

Of course, the Parallella could perhaps serve in a similar role.
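For readers unfamiliar with the MPI side of that teaching stack, here is a rough pure-Python sketch (not the real MPI API; the "ranks" run sequentially here) of the scatter/compute/reduce pattern such clusters are used to teach. The data flow mirrors MPI_Scatter followed by MPI_Reduce:

```python
def scatter(data, nranks):
    """Split the input so each rank gets its own slice."""
    return [data[i::nranks] for i in range(nranks)]

def local_sum_of_squares(chunk):
    """Work done independently on each rank's local data."""
    return sum(x * x for x in chunk)

def reduce_sum(partials):
    """Combine the per-rank partial results back at the root."""
    return sum(partials)

data = list(range(1000))
partials = [local_sum_of_squares(c) for c in scatter(data, 4)]
total = reduce_sum(partials)
print(total)  # 332833500, the sum of squares 0..999
```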

LFCS: A $99 supercomputer

Posted May 9, 2013 15:17 UTC (Thu) by pboddie (guest, #50784) [Link] (2 responses)

The Epiphany architecture is more like a GPU; an array of simple cores with small local memories and shared memory bus.

There's no denying that having lots of cores in a single package is going to lead to a certain style of architecture, but could you clarify what you mean by "like a GPU"? Looking at the instruction set for, say, the AMD R700 (PDF), it's quite a different animal from that of the Epiphany. Certainly, the latter has memory instructions that look a lot more general purpose than those of the R700.

LFCS: A $99 supercomputer

Posted May 9, 2013 22:47 UTC (Thu) by Riba78 (guest, #84615) [Link] (1 responses)

The point I was trying to make is that "Epiphany is not a general-purpose CPU". The reference manual reads more like that of a DSP than a CPU. Based on a few web searches, I noted that Epiphany has no MMU, has a memory system that demands consideration when programming, and is not self-hosting, so it needs to be programmed from a host platform (via OpenCL or a cross-GCC toolchain).

Epiphany was touted for its GFLOPS performance, so it's natural to compare it to GPUs, which are the other big GFLOPS muscle. Both Epiphany and most GPUs are programmable via OpenCL and thus can be compared in capabilities and performance through that API. If you can't run comparable loads, you might as well compare BogoMIPS. If Epiphany could run Linux, we could use the informal benchmark of timing a Linux kernel compile.

Sorry for my dismissive attitude. Parallella is a nice project, but IMHO it's hyped up more than it warrants.

LFCS: A $99 supercomputer

Posted May 11, 2013 2:39 UTC (Sat) by jmorris42 (guest, #2203) [Link]

Yup, it is truly the Pi of the parallel world: a product with hype far exceeding its actual usefulness.

I'd really like to see a benchmark showing the 64-cell version to be faster than a cell phone GPU at some practical real-world task before getting excited. 32 KiB of RAM per node, subdivided into four 8 KiB banks, is going to limit it to very specialized tasks. The total available RAM is only 2 MiB, so forget physics modeling. A PC's GPU, on the other hand, has hundreds to thousands of times that much RAM available these days.
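The memory figures quoted above do add up, which is worth checking since the per-core and chip-wide numbers come from different places in the comment: 64 cores at 32 KiB each (four 8 KiB banks) gives exactly 2 MiB on the whole chip.

```python
KIB = 1024
cores = 64
banks_per_core = 4
bank_size = 8 * KIB
per_core = banks_per_core * bank_size  # 32 KiB of local RAM per core
total = cores * per_core               # chip-wide total
print(per_core // KIB, "KiB/core,", total // (KIB * KIB), "MiB total")
```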

That leaves the power argument. Again, a cell phone can't draw 5 W: the battery wouldn't last long enough to be useful, since most phones are lucky to have 5 Wh of capacity, and that label rating is only valid if discharging over an hour anyway.


Copyright © 2026, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds