
Airlie: raspberry pi drivers are NOT useful

Kernel graphics maintainer Dave Airlie is rather unimpressed with the Raspberry Pi driver release; it is not something that will ever be merged. "Why is this bad? You cannot make any improvements to their GLES implementation, you cannot add any new extensions, you can't fix any bugs, you can't do anything with it. You can't write a Mesa/Gallium driver for it. In other words you just can't."


Airlie: raspberry pi drivers are NOT useful

Posted Oct 25, 2012 22:26 UTC (Thu) by asbradbury (guest, #47625) [Link] (12 responses)

It's also worth reading Alan Cox's view (second page of the comments): http://www.raspberrypi.org/archives/2221#comment-35253

Airlie: raspberry pi drivers are NOT useful

Posted Oct 25, 2012 22:40 UTC (Thu) by Zizzle (guest, #67739) [Link] (1 response)

> I can only assume Dave Airlie is also upset that he can’t step the disk head by himself on his hard disk.

Hmmm... aren't the kernel devs frequently clamoring for direct access to flash devices instead of working around firmware and FTL bugs?

Aren't the kernel devs all about common infrastructure for drivers? Clearly this driver won't be using DRM2, KMS, DMA-BUF, etc.

As David points out, there will be no optimization opportunities with this firmware-blob-abstraction-layer type of driver.

Also, what is the state of X acceleration on the Pi?
Does this firmware blob provide 2D primitives?
The 2D-on-OpenGL (glamor) benchmarks I have seen have been less than impressive.

I think it is far from wise for senior kernel guys to endorse this model of abstracting everything away in a closed firmware blob and providing some open-source RPC stubs.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 9:41 UTC (Fri) by renox (guest, #23785) [Link]

>> I can only assume Dave Airlie is also upset that he can’t step the disk head by himself on his hard disk.
>Hmmm... aren't the kernel devs frequently clamoring for direct access to flash devices instead of working around firmware and FTL bugs?

Yes they do (*), but this doesn't prevent them from including drivers in the kernel for these devices.

*: disks which lie about the state of the data are hugely annoying as well.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 25, 2012 23:21 UTC (Thu) by luto (guest, #39314) [Link] (1 response)

I tend to agree with that. ISTM that maybe the design of the device is suboptimal, but I don't see what's wrong with the driver given how the hardware is set up.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 3:50 UTC (Fri) by airlied (subscriber, #9104) [Link]

I'm going to get to a follow-up post at some point, once I've digested (and not thrown up) more of the code.

But essentially, providing any sort of DRI infrastructure on this, where user processes directly access the RPC interface, could be a security problem.

The GPU can access anything in RAM it wants, afaik, and we have no idea what complete command space the RPC interface covers. If there is any possibility that you can get the GPU to do something bad via the RPC interface, then you would need to audit all commands sent from userspace against a known list of safe ones in the kernel driver.

Their code afaik doesn't do any of that.
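
A minimal sketch of that kind of kernel-side audit, for illustration only - the command format and the "safe" IDs here are invented, not Broadcom's actual interface:

    #include <linux/kernel.h>
    #include <linux/errno.h>
    #include <linux/types.h>

    /* Hypothetical mailbox command as submitted by userspace. */
    struct rpc_cmd {
            u32 id;        /* command identifier */
            u32 len;       /* payload length in bytes */
            u8  payload[];
    };

    /* Command IDs believed safe for unprivileged userspace (invented). */
    static const u32 safe_cmds[] = { 0x0001, 0x0002, 0x0010 };

    static int rpc_cmd_validate(const struct rpc_cmd *cmd, size_t buf_len)
    {
            size_t i;

            if (buf_len < sizeof(*cmd) || cmd->len > buf_len - sizeof(*cmd))
                    return -EINVAL;  /* payload overruns the buffer */

            for (i = 0; i < ARRAY_SIZE(safe_cmds); i++)
                    if (cmd->id == safe_cmds[i])
                            return 0;  /* known-safe: forward to the GPU */

            return -EPERM;  /* unknown command from userspace: reject */
    }

Anything not on the list never reaches the GPU; that is the whole audit.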

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 3:53 UTC (Fri) by bjacob (guest, #58566) [Link] (7 responses)

I'm not a free-as-in-freedom die-hard, but I know a bit about GL stuff, and Alan's comment doesn't make sense to me. The premise in Alan's comment seems to be that GLES2 is a low-level enough, universal enough interface to graphics hardware that you wouldn't in your right mind want any lower-level interface to it. Whence his "move the HDD heads" analogy.

That premise is simply false, and the HDD analogy just wrong. GLES2 is plenty high-level enough that you could reasonably want a lower-level interface to graphics hardware. Cases in point:
1) All GL implementations/drivers that I know of have serious bugs that affect the stability, security, performance, correct rendering, and memory usage of applications using them, and force serious applications to actively work around them. (I work on the Mozilla Gfx team and we have work-arounds in place for just about every GL implementation we've encountered.)
2) There exist competing APIs at the same level, especially Direct3D. They have the same issues as in 1).
3) There is ongoing innovation in establishing new interfaces to graphics hardware, either at the same level as GLES2 (e.g. GLES3) or at a lower level than GL's (e.g. Gallium3D).

I hope it's clear that these 3 points are valid reasons why anyone interested in graphics programming, let alone a driver developer, would be disappointed that a hardware vendor presents a given flavor of GL as the only possible interface to their hardware.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 8:09 UTC (Fri) by khim (subscriber, #9252) [Link] (6 responses)

I can understand disappointment, but I cannot see what the fuss is all about.

Bugs? CD drives and, especially, chipsets (especially early IDE chipsets) have tons of them - and the kernel contains quite a set of workarounds. Some devices are not usable because of bugs.

Different APIs? Well, there were the Creative/MKE/Panasonic, Mitsumi, and Sony interfaces - which needed different code in any supporting program. That was before ATAPI won, of course.

Support for different APIs? There were tons of them. Besides the three CD-specific ones there were ATAPI and SCSI - and these evolved over time.

IOW: I see absolutely nothing unique in this driver. Well, except for the fact that graphics driver developers feel that they deserve a lower-level interface than what Broadcom is offering.

Basically everything "bad" people are now saying about these drivers could have been said about CD interfaces when they were "hot".

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 12:34 UTC (Fri) by bjacob (guest, #58566) [Link] (4 responses)

I disagree again with the disk analogy here: while you were able to list some comparable issues with disks, the issues with graphics APIs are of a far greater magnitude.

Disk drivers don't typically cause your applications to crash. Disk driver bugs may have been problematic in the past, but today, either because they have been fixed or worked around, they're not an issue anymore for the user. Graphics driver bugs are a major user-visible issue every day. We have detailed crash statistics for Firefox, and driver issues are a significant cause of crashes. We had to add blacklists for graphics drivers to bring crashiness down to acceptable levels. I don't think any application has disk driver blacklists.

The correct disk analogy would be if a disk could only be used via the stdio and POSIX file API --- fopen, fwrite, fclose, chmod, unlink, etc. You wouldn't be able to access the disk as a block device at all, you couldn't even know or change what filesystem it's formatted as, and for reformatting/partitioning you could only use an opaque vendor-provided tool.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 15:28 UTC (Fri) by renox (guest, #23785) [Link] (2 responses)

> I disagree again with the disk analogy here: while you were able to list some comparable issues with disks, the issues with graphics APIs are of a far greater magnitude.

Very funny: quite a few disk firmwares lie (lied?) about the state of the data, causing data loss in case of a crash, because the firmware said that the data was on the disk even though it wasn't.
So closed-source disk firmware is a very big issue too, but at least you can try to select vendors with good firmware.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 17:00 UTC (Fri) by magila (guest, #49627) [Link] (1 response)

Speaking of lies...

Disks only "lie" about write data having been written if they are configured with write cache enabled. In this case they aren't lying at all, they are doing exactly what they were told to do. Turning off write caching will ensure no write completes without having been written to non-volatile storage. There may be isolated incidents where there are legitimate bugs in this functionality, but the overwhelming majority of drives have no issues here.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 27, 2012 7:48 UTC (Sat) by farnz (subscriber, #17727) [Link]

The historical problem people are referring to is drives lying. There is a command in the ATA specification, FLUSH CACHE, documented as not completing until all data stored in drive caches has been written to the disk. Back before ATX power supplies, many desktop drives completed the FLUSH CACHE command as soon as it came in, and then wrote the data from the cache to the platters as if no FLUSH CACHE command had been received, to improve benchmark results. It was a sufficiently common problem that Microsoft issued a hotfix for Windows 98SE and Windows ME that simply put in a two-second delay before powering off the computer.
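
For what it's worth, here is the guarantee applications build on - a minimal sketch of a durable write on POSIX, which is only as strong as the drive's handling of the flush that fsync() ultimately triggers:

    #include <fcntl.h>
    #include <unistd.h>

    /* Write a buffer and force it to stable storage. fsync() makes the
     * kernel flush its own caches and issue a cache flush (e.g. ATA
     * FLUSH CACHE) to the drive; if the drive completes that command
     * before the data is on the platters, nothing userspace does can
     * restore the durability guarantee. */
    int write_durably(const char *path, const void *buf, size_t len)
    {
            int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd < 0)
                    return -1;
            if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
                    close(fd);
                    return -1;
            }
            return close(fd);
    }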

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 17:05 UTC (Fri) by khim (subscriber, #9252) [Link]

> Disk driver bugs may have been problematic in the past, but today, either because they have been fixed or worked around, they're not an issue anymore for the user.

It's quite an argument: because disks have good firmware in 2012, it's OK to include CD drivers in Linux in 1993. So you are saying that GPU firmware will still be problematic in 2031, and that's the problem with Broadcom's driver? I wish I had such a precise crystal ball.

> We had to add blacklists for graphics drivers to bring crashiness down to acceptable levels. I don't think any application has disk driver blacklists.

I was talking about CD drivers, not about HDD drivers. CD-burning software usually has an extensive list of workarounds for different devices.

> The correct disk analogy would be if a disk could only be used via the stdio and POSIX file API --- fopen, fwrite, fclose, chmod, unlink, etc. You wouldn't be able to access the disk as a block device at all, you couldn't even know or change what filesystem it's formatted as, and for reformatting/partitioning you could only use an opaque vendor-provided tool.

Ah, you mean MTP? Yeah, it's supported by Linux, too. Both as host and as target.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 15:03 UTC (Fri) by Zizzle (guest, #67739) [Link]

> I can understand disappointment, but I cannot see what the fuss is all about.

Following the disk analogy, this is more like your disk only exposing a filesystem interface.

And then the manufacturer claiming it's the most open device ever.

Want to improve performance? Fix bugs? Use a FS more suited to your needs?

Better start begging Broadcom.

But it gets better: Alan Cox is endorsing this model. Maybe all Linux drivers should just be RPC shims in front of closed firmware blobs. Yay, freedom.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 25, 2012 22:51 UTC (Thu) by ssam (guest, #46587) [Link] (4 responses)

Kind of like a printer that accepts PostScript. Sounds OK to me.

(Maybe it's different if the GPU can read and write system RAM. Can it?)

Airlie: raspberry pi drivers are NOT useful

Posted Oct 25, 2012 23:04 UTC (Thu) by mansr (guest, #85328) [Link]

The GPU on that chip is in full control of the system. The ARM core can only do what the GPU lets it - not the other way around, which is what people are probably used to.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 8:11 UTC (Fri) by khim (subscriber, #9252) [Link] (2 responses)

Of course it can! But so can your IDE chipset, your sound card, and your Ethernet card. What's so unique about the GPU and GLES?

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 8:26 UTC (Fri) by Otus (subscriber, #67685) [Link] (1 response)

Not if they are behind an IOMMU, like on most of the recent x86 platforms. Right?

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 17:14 UTC (Fri) by khim (subscriber, #9252) [Link]

Right, but IDE and Ethernet drivers were in Linux 20 years ago; the IOMMU is a relatively recent creation.

If GPU development shifts to something like what Broadcom is pushing, I'm pretty sure the GPU interface will get something like an IOMMU in 20 years' time.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 4:16 UTC (Fri) by cyanit (guest, #86671) [Link] (3 responses)

How are shaders passed to this implementation?

If it works by passing OpenCL/GLSL text source or tokenized OpenCL/GLSL to the firmware, that's a ridiculous joke, since it's going to be full of bugs and limitations, and performance will be shit, because a good CPU is needed to run a real optimizing shader compiler.

Plus, how does this thing do memory allocation, considering that an OpenGL implementation obviously needs to be able to allocate an unlimited amount of memory? (And unless the Raspberry Pi has gigabytes of VRAM, which is highly doubtful, that's going to have to be system memory normally allocated by the Linux kernel.)
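
For reference, a GLES2 client really does hand over raw GLSL text, so on a stack like the Pi's the source presumably crosses the RPC interface to the firmware compiler. A minimal sketch using only the standard GLES2 API, nothing Pi-specific:

    #include <GLES2/gl2.h>
    #include <stdio.h>

    /* The client API takes raw GLSL source; whatever compiler sits
     * behind glCompileShader() - in the Pi's case, apparently one in
     * the firmware - is completely opaque to the caller. */
    GLuint compile_fragment_shader(const char *glsl_source)
    {
            GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
            GLint ok = GL_FALSE;

            glShaderSource(shader, 1, &glsl_source, NULL); /* text in */
            glCompileShader(shader);
            glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
            if (!ok) {
                    char log[512];

                    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
                    fprintf(stderr, "shader compile failed: %s\n", log);
                    glDeleteShader(shader);
                    return 0;
            }
            return shader;
    }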

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 4:33 UTC (Fri) by airlied (subscriber, #9104) [Link] (2 responses)

It passes the GLSL text straight through, from what I can see. Hey, I wrote a C-equivalent compiler in firmware; trust me, I know what I'm doing.

The GPU processor actually controls most things on the board; at boot I think you can set something to pick how much RAM the ARM CPU gets and how much the GPU does.

What would be a real laugh is if their secondary CPU were running Linux, but I think it's some proprietary embedded OS.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 5:19 UTC (Fri) by gdt (subscriber, #6284) [Link] (1 response)

> ...I think it's some proprietary embedded OS.

ThreadX from Express Logic.

I don't understand the objections based on performance and value for money. The kernel has supported some real shockers over the years. Sure, tell people it is a bad buy if that's what you think, although for A$38 no one is expecting much. But I read your blog posting as excluding any RPi graphics drivers based on some sort of Linux community equivalent of a Consumer Reports review.

The whole chip design does have "free as in freedom" issues, because of the predominant position of the VideoCore core and its software. What we are really talking about is a closed platform with an ARM co-processor, and there's no doubt that the RPi Foundation has sold their product as being more than that. But I can't see why that should preclude Linux support for the ARM co-processor.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 5:43 UTC (Fri) by airlied (subscriber, #9104) [Link]

I've lots of reasons to preclude it based just on crappy code.

I doubt anyone will rewrite said crappy code to a level that any kernel hacker would find acceptable; just look at the USB code.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 7:41 UTC (Fri) by ledow (guest, #11753) [Link] (7 responses)

I've been saying something similar since they released it.

The killer question to me is: what's the difference between this and the nVidia binary blob? Because, to me, the issue is the same - we have the source to some shim, but how the driver actually does anything is in the blob, and unchangeable.

And yet, the nVidia blob is considered the worst-of-the-worst when it comes to proprietariness, and this is being hailed as some wonderful example of open-source openness that's never been seen before.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 8:50 UTC (Fri) by arnd (subscriber, #8866) [Link] (4 responses)

The two are exactly the same in the sense that it's impossible for anybody besides the owner of that blob to debug and fix problems, or to extend it in ways that might be useful.

They are completely different in the sense that the nvidia approach puts the blob (or at least one of them) into kernel space, which is tied closely to a particular OS version, while the Broadcom approach means there is a well-defined interface between the two processors, and the ARM host can run any OS and version you want and still use those interfaces by linking in their free shim layer.

For people playing with FreeBSD or Wayland on this particular machine, the release is great news because this is the missing piece they were waiting for. For someone who wants to make use of the GPU for something the blob didn't do (or didn't do right), it changes nothing.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 12:04 UTC (Fri) by edt (guest, #842) [Link]

At the very least they let us upgrade kernels without worrying about an 'old' module...

I would be VERY curious to see how the vendor reacts to bug reports. Why not run every test we have against this driver and submit (prioritized) bugs for each and every failure we find?

If they fix nothing, we understand the limitations of the firmware; we win with every fix provided.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 14:57 UTC (Fri) by Zizzle (guest, #67739) [Link] (2 responses)

Well, another similarity to NVIDIA, I expect, is that this Broadcom blob can also scribble over whatever memory it likes.

So should you trust kernel bug reports from RPi systems?

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 20:29 UTC (Fri) by intgr (subscriber, #39733) [Link] (1 response)

> this Broadcom blob can also scribble over whatever memory it likes

As I understand it, this blob is in user space. It can only scribble over its own memory.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 20:47 UTC (Fri) by gnb (subscriber, #5132) [Link]

The user-space blob on the ARM side is what BCM have open-sourced. The remaining closed component is the GPU firmware, which can indeed scribble over anything it likes, since the GPU is in many ways the main processor.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 13:29 UTC (Fri) by ewan (guest, #5533) [Link] (1 response)

I think this is a step forward, but it's being claimed by the RPi folks as some sort of giant leap, and it's not. It's not entirely the substance that's upsetting people, it's the misleading billing.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 29, 2012 15:46 UTC (Mon) by mirabilos (subscriber, #84359) [Link]

The substance is bad, too. The RPi is losing the last bits of credibility it had.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 16:38 UTC (Fri) by iabervon (subscriber, #722) [Link] (1 response)

It actually kind of feels to me like a CPU that only accepts x86 machine code. You've got a complex system with firmware that implements a portable API by generating programs for a non-portable custom microarchitecture.

It's not like we haven't seen CPU microcode bugs, and it's not like the CPU doesn't have access to the whole system. The real issue I see with the Raspberry Pi drivers is that the GPU is a bit like the 386, where the OS can't effectively constrain what the firmware is willing to do for userspace applications.

And there's the different history: it's like if we'd historically never had an x86 chip, and had been emulating it on the Z80 using a secret proprietary optimizing emulator. And then someone comes out with a microcoded x86 chip, and this release is obviously not informative at all about anything that we didn't know before.

Airlie: raspberry pi drivers are NOT useful

Posted Oct 26, 2012 21:58 UTC (Fri) by cyanit (guest, #86671) [Link]

Not really, it's more like a CPU that accepts only C source code.

What peculiar semantics

Posted Oct 26, 2012 17:13 UTC (Fri) by gowen (guest, #23914) [Link] (46 responses)

I've seen the English language tortured in some pretty interesting ways in my life, but this bizarre redefinition of "useful" may be the most perverse. It's as if usefulness is to be conflated with some nebulous concept of "freely modifiable".

Like the nvidia drivers, they may lack some desirable properties, but "usefulness" (in the previously meaningful sense) is not amongst them.

What peculiar semantics

Posted Oct 26, 2012 20:50 UTC (Fri) by sjj (guest, #2020) [Link]

Well, "useful" to a kernel level hacker presumably does mean being able to work with it at the lowest level. I can even understand how frustrating it may be that there is something you *can't* see. But really, unfortunately, that's how modern hardware is - I have 627 firmware blobs on this machine from the distribution.

For the rest of the planet, and with the RasPi in particular, it means "can I do something cool with it?"

And I also bet that this isn't the last thing the RasPi people manage to get out of Broadcom. I assume getting even this far has taken a huge amount of work - this *is* Broadcom, after all. Creating a hugely popular board on their hardware and showing them that they too can profit from the community is a win-win. Some people just want everything right now. We'll see. Glass half full!

Now if I could just get around to doing something *useful* with my two RasPis...

What peculiar semantics

Posted Oct 28, 2012 2:25 UTC (Sun) by Arker (guest, #14205) [Link] (44 responses)

Well, if, as was said, you can't make improvements or extensions, or even fix bugs when they are inevitably found, it does sound like a pretty useless software release to me. How would you define useful?

What peculiar semantics

Posted Oct 28, 2012 3:38 UTC (Sun) by dlang (guest, #313) [Link] (43 responses)

You can fix bugs in the userspace code that was released. You can write replacement code for other systems (e.g. Wayland, *BSD, etc.).

You just can't fix bugs in code running on the other processor (the GPU), but nobody ever claimed that you would be able to do that.

Nobody ever claimed that this release included the code that runs on the GPU; they said very clearly that it finishes opening up all the code that runs on the ARM processor.

Is this what everyone would like in an ideal world? No.

Is this better than what is supplied by any other ARM system? Yes.

What peculiar semantics

Posted Oct 28, 2012 4:45 UTC (Sun) by Arker (guest, #14205) [Link] (35 responses)

Well, the announcement sure seemed to be oversold, at the very least. Freeing up a shim while keeping the driver in a blob is a very old trick at this point. It seems they have come up with a slight variation on that old tune; not impressed.

What peculiar semantics

Posted Oct 28, 2012 4:53 UTC (Sun) by dlang (guest, #313) [Link] (34 responses)

They had been saying for quite some time that there was not really any interesting stuff in the userspace blob (the part that was just opened), but people kept pestering them, making a big deal of the fact that this userspace blob was running on the ARM processor.

It doesn't seem unreasonable for them to loudly announce that this part has been opened after so many people have been loudly critical of them for having this part closed.

What peculiar semantics

Posted Oct 28, 2012 22:33 UTC (Sun) by pboddie (guest, #50784) [Link] (33 responses)

It does make one wonder why it was so difficult to release this code as Free Software in the first place, however.

What peculiar semantics

Posted Oct 28, 2012 22:57 UTC (Sun) by dlang (guest, #313) [Link] (32 responses)

That's easy: for many companies, the default is "no, it won't be released", no matter how trivial. It takes real effort from someone to get things released, as releasing something requires that it go through reviews and sign-offs. And before you can even get to that stage, you need to go through the "why should we go to the effort of releasing this" justification review (and for something like this, I'll bet that the justification review took more, and more expensive, man-hours than the actual review and release did).

This is also why they make such a big deal when they do release something.

It's also why many of us are so upset at all this negativity around the release. I'm not saying that you need to be ecstatic about it, but a lot of negativity results in the people doing the review of the next batch of code saying "why should we release this? We'll only get more bad publicity from people who want more".

It's fine to point out how little this is, it's fine to not praise them, but calling this a sham, or fraud, or accusing them of hiding the API really hurts future releases.

What peculiar semantics

Posted Oct 28, 2012 23:47 UTC (Sun) by Arker (guest, #14205) [Link] (31 responses)

> That's easy: for many companies, the default is "no, it won't be released", no matter how trivial. It takes real effort from someone to get things released, as releasing something requires that it go through reviews and sign-offs. And before you can even get to that stage, you need to go through the "why should we go to the effort of releasing this" justification review (and for something like this, I'll bet that the justification review took more, and more expensive, man-hours than the actual review and release did).

What a horribly inefficient system you describe. I wish we had a market economy in this country, so that dinosaurs like that could die and be replaced by more agile competition.

> It's also why many of us are so upset at all this negativity around the release. I'm not saying that you need to be ecstatic about it, but a lot of negativity results in the people doing the review of the next batch of code saying "why should we release this? We'll only get more bad publicity from people who want more".

In this case, the device is being actively marketed as "an ARM GNU/Linux Box for $25." I think you can make a good case that the negativity now is actually in their best interests as a company, because if I had seen only the press releases and marketing materials, without any negativity, I would very likely have just ordered a pile of them. And after I had spent my money, received the product, and figured out what they had actually sold me, I would be extremely upset, and with good reason. Unpleasant words like "fraud" would be just the beginning in that case.

> It's fine to point out how little this is, it's fine to not praise them, but calling this a sham, or fraud, or accusing them of hiding the API really hurts future releases.

Sorry, but I read the language in the press release, and I read the language further down the blog where they have been pressed, and they are not compatible at all. It's inconceivable that you would see such a disconnect between the headline and the reality in a device that is consciously marketed to the community and not see people replying with some negativity.

What peculiar semantics

Posted Oct 29, 2012 8:10 UTC (Mon) by alex (subscriber, #1355) [Link] (29 responses)

"In this case, the device is being actively marketed as "an ARM GNU/Linux Box for $25." I think you can make a good case that the negativity now is actually in their best interests as a company, because if I had seen only the press releases and marketing materials without any negativity, I would very likely have just ordered a pile of them."

But they are not actively marketing it as some sort of freedom-loving ARM board. Their interest is in making a low-cost platform for use in education. The fact that the price point and relative hackability have made it popular with a large number of maker/tinkerer types is purely secondary to their primary mission. While I'm sure we could see some educational benefits to being able to code for the GPU directly, I doubt it's something that would be widely useful.

For me, the limit of hackability being the boundary between the ARM and the GPU is fine for everything I'll want to play with. I do, after all, have an Intel GPU in my desktop if I really want to play with low-level 3D stuff.

What peculiar semantics

Posted Oct 29, 2012 16:34 UTC (Mon) by Arker (guest, #14205) [Link] (28 responses)

Freedom to hack is the freedom most obviously applicable to educational applications, and you clearly aren't getting that here. They ARE marketing it as a Free Software board; I don't know how I can possibly spell that out for you any more clearly. Read their marketing releases. They were bragging that this was the real deal, the most open driver, blah blah blah, and it's just not. It's worse than Nvidia. The entire driver has been moved to the GPU, and the GPU, NOT the so-called "CPU," is actually in control of the board! In effect it's a proprietary system which will run Linux on a coprocessor so you can pretend, without even having the advantages that sort of system could provide if it were done right. The more I look at it, the more unbelievable it seems that anyone would actually buy it understanding what they are getting.

I hadn't paid all that much attention to this project prior to this story, and the more I look into it, the more I have to ++ my earlier comment about the "negativity" people are decrying being a good thing on balance, because if I HAD ordered a stack of these things based on their marketing and found out the reality later, I WOULD have to find a way to make sure they spent more dealing with me than they made on the deal.

What peculiar semantics

Posted Oct 29, 2012 17:35 UTC (Mon) by anselm (subscriber, #2796) [Link] (1 response)

> The more I look at it, the more unbelievable it seems that anyone would actually buy it understanding what they are getting.

I understand that even so one can still do many interesting and challenging things with a Raspberry Pi as it is. (I don't have one, myself.)

As a matter of fact, it is quite probable that the Intel CPU which is »actually in control of the board« of your PC contains loadable non-free code that you don't get to look at or change. In that respect it isn't all that different from the Raspberry Pi SoC. It would be very nice if the Raspberry Pi GPU software were free, but that it isn't doesn't detract from the Raspberry Pi's utility for many purposes. In fact, if the Raspberry Pi's GPU code were in ROM, then even the FSF would presumably be happy with it; you could still do what you want with the Linux on the machine.

What peculiar semantics

Posted Oct 29, 2012 18:07 UTC (Mon) by alonz (subscriber, #815) [Link]

Actually I can tell you for certain that some (most?) Intel SoCs contain a second processor that is completely closed-source and has full control of the system, overriding even the x86.

What peculiar semantics

Posted Oct 29, 2012 18:19 UTC (Mon) by dlang (guest, #313) [Link]

> Freedom to hack is the freedom most obviously applicable to educational applications, and you clearly aren't getting that here.

OK, what SoC should they be using? As far as I can tell, no currently manufactured product would meet your requirements.

There is a HUGE amount of room to hack on a system before you get to hacking the video internals.

You don't have to use Linux, you can write your own OS from scratch, you can program to the bare metal if you don't want an OS (making the appropriate API calls to the video chip).

Would it be better if the GPU firmware were open as well? ABSOLUTELY.

But to dismiss it as 'worthless' and 'fraudulent' because the GPU firmware isn't open is going too far.

Nothing has been moved

Posted Oct 29, 2012 23:01 UTC (Mon) by alex (subscriber, #1355) [Link] (24 responses)

"The entire driver has been moved to the GPU, and the GPU, NOT the so-called "CPU," is actually in control of the board!"

Nothing has been moved. The architecture was always set up like this (see the statements about how uninteresting the ARM-side code was). Sure, it's disappointing, but it doesn't limit the educational aims of the project, as most kids won't be learning basic programming on a highly specialised vectorised graphics processor.

The architecture is a little weird because the SoC they are using was originally developed as a graphics co-processor for mobile phones. The ARM core was originally intended purely for debugging the GPU. However, the architecture is what it is, and the choice of SoC is what allows the foundation to hit their price point.

I'm sorry if this education foundation's product doesn't meet your requirements, but there are plenty of other, more expensive boards you are welcome to use if you want to get into the GPU side of things.

Nothing has been moved

Posted Oct 31, 2012 7:54 UTC (Wed) by Arker (guest, #14205) [Link] (23 responses)

> Nothing has been moved.

In this particular project, no, but versus a "conventional" architecture, what would normally be driver code is in fact this opaque blob running on the "GPU", right?

> The architecture is a little weird because the SoC they are using was originally developed as a graphics co-processor for mobile phones. The ARM core was originally intended purely for debugging the GPU.

That actually makes sense. Unfortunately, it means that, as I said, it boils down to getting a completely opaque proprietary system that has a coprocessor you can run Linux on. Not quite what I would have expected had I read only their marketing materials and not the "negativity."

I've spent a good part of my career in education and I have certainly seen much worse (we all know how often computer education classes are actually Microsoft Word classes), and I am not trying to be negative. But there is a very old and tiresome pattern here. Companies see that there is a market for open hardware, and then they do absolutely everything they can to make some kind of pseudo-not-really-open hardware designed to take our money without actually providing what we seek. Which is actually pretty insulting and annoying. Blatantly proprietary systems are at least relatively easy to spot, but a product like this seems almost like a land-mine, deliberately designed to trick me into buying it.

Nothing has been moved

Posted Oct 31, 2012 10:09 UTC (Wed) by dlang (guest, #313) [Link] (22 responses)

Phoronix has an article up in the last day or so comparing the free Radeon drivers to the closed ones; the tests show a massive difference in speed.

It seems to me that if the card firmware had a high-level API (like the one we are talking about here), I would not have to choose between using the latest kernel (self-compiled, with various 'odd' config options) and getting good performance.

I would actually prefer a card like that to what I can currently buy for my systems.

Nothing has been moved

Posted Oct 31, 2012 23:36 UTC (Wed) by Arker (guest, #14205) [Link] (21 responses)

Yeah I read that, and it's not surprising. What's also not surprising, however, is that the Free drivers, while slower and having fewer features, are more stable and reliable.

It seems to me that if the card were architected as you say you would like, you might indeed get the performance of the proprietary drivers while still using a Free shim. But you would also get the instability, and there would be absolutely no way you could fix it except to get new hardware. So it doesn't seem like an improvement to me; quite the opposite. With my ATI hardware, I at least have a choice.

I would love to have Free drivers that were stable and reliable, supported the full feature set, and ran as fast as or faster than the blobs - that's what we should expect, frankly, but I know we aren't getting it right now. Free drivers that are stable, predictable, and reliable at least give me the opportunity to use the ATI hardware in my system without the bugginess of the proprietary driver, at some cost. If it were architected like the Pi, it sounds like I would no longer have that option; whatever binary bugginess it has will be found in the GPU, which you have no access to, but which by contrast can corrupt or deliberately overwrite anything you do with the ARM chip.

No?

Nothing has been moved

Posted Nov 1, 2012 1:20 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (15 responses)

Problem is, ARM systems don't really have much computing power to spare. On a PC it's OK to lose 80% of GPU performance - the remaining GPU power is still enough to drive a composited desktop and/or a simple game. On ARM systems - not so much.

Nothing has been moved

Posted Nov 1, 2012 22:56 UTC (Thu) by Arker (guest, #14205) [Link] (14 responses)

Certainly they don't have as much computing power as one would like, but please don't pretend it's not enough to run 'a simple game' without help; that's ridiculous. The Raspberry Pi, without the main processor aka GPU, still ships with a 700 (1000) MHz 32-bit ARM11 CPU that is more than powerful enough to do all kinds of very interesting things with, particularly with half a gigabyte of high-speed RAM included. It's more powerful than a top-of-the-line desktop CPU from only a few years back; don't be ridiculous.

Free that up from the control of this GPU and associated binary blob and it could be quite useful. If only it were as simple as tearing that chip off the die and soldering on a serial port...

Nothing has been moved

Posted Nov 1, 2012 23:49 UTC (Thu) by dlang (guest, #313) [Link] (13 responses)

You can run a simple game, but not a more complex game.

There is nowhere near enough processing power in the ARM chip to simply scale video from 320x200 to full screen without help from the GPU. I accidentally triggered this with MPlayer a few weeks ago, and it takes 10-20 seconds to play ONE second's worth of video when you are doing the scaling on the ARM chip.

Nothing has been moved

Posted Nov 2, 2012 3:20 UTC (Fri) by Arker (guest, #14205) [Link] (11 responses)

> You can run a simple game, but not a more complex game.

I wrote and played very complex games on an 8-bit processor at around 3 MHz with 2 *kilobytes* of RAM. If you can't do the same with many thousands of times the resources, you are doing something wrong.

> There is nowhere near enough processing power in the ARM chip to simply scale video from 320x200 to full screen without help from the GPU. I accidentally triggered this with MPlayer a few weeks ago, and it takes 10-20 seconds to play ONE second's worth of video when you are doing the scaling on the ARM chip.

Then you need to look at your software stack, because the hardware is MORE than capable of it. I could do that smoothly, with no problems, well over 10 years ago on a 386SX with 1 megabyte of RAM and a simple SVGA card. Presumably you are using an encoding with significantly higher overhead than MPEG-1, but you are also looking at a system with many *hundreds* of times the horsepower. If it were programmed specifically for the task, it could probably drive several different videos of that size to several different monitors at once without dropping a frame.

I hear all the time that things which were done routinely in earlier decades with far less power are now 'impossible', and it makes me laugh. These things aren't impossible. Programmers have just been trained that their time is too valuable to spend optimising anything, and that the proper solution is to throw more hardware at it instead. It sells hardware, I guess.

Nothing has been moved

Posted Nov 2, 2012 3:28 UTC (Fri) by dlang (guest, #313) [Link] (5 responses)

Did your 386 have a 1920x1080 32-bit screen? Or was it a 640x480 8-bit screen (i.e. VGA)?

The added screen data does make a significant difference.

I agree that lots of stuff is very bloated today, but your 386 was not expected to do full-motion HD video output without GPU assistance, and it would not have been able to do so.

Nothing has been moved

Posted Nov 2, 2012 3:43 UTC (Fri) by Arker (guest, #14205) [Link] (4 responses)

It would actually go up to 1024x768, but I didn't think the monitor looked as good like that, so I usually ran it in 800x600 instead. Of course it didn't do "HD" - that buzzword hadn't been invented yet - but full-screen, full-motion video without dropping a frame it could definitely do, and did many times; 'accelerated graphics' was also something yet to come, at least in my price range.

Now, I had a very special software setup to do this, of course. A 'stock' configuration on the same machine would crap itself trying to play much smaller videos; in fact I started that project as a dare, because the buddy who owned that machine was complaining it was obsolete because it was performing just like you described with your Pi - taking a minute to play a second or two of video - even with tiny low-res files. But the hardware was still perfectly capable of doing the job.

Nothing has been moved

Posted Nov 2, 2012 16:19 UTC (Fri) by bronson (subscriber, #4806) [Link] (3 responses)

The job being talked about is pushing 1920x1080 pixels x 4 bytes x 24 frames/s (24 at a minimum) = ~190MB/s to the screen. You're talking about pushing 800x600 x 2(?) bytes x 24 = ~20MB/s.

Time moves on, ya know?
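
Spelling that arithmetic out (a back-of-the-envelope sketch, assuming 32-bit pixels now, 16-bit pixels then, both at 24 frames per second):

    #include <stdio.h>

    int main(void)
    {
            /* 1920x1080, 4 bytes/pixel, 24 frames/s */
            double now  = 1920.0 * 1080 * 4 * 24;
            /* 800x600, assumed 2 bytes/pixel, 24 frames/s */
            double then = 800.0 * 600 * 2 * 24;

            printf("now:   %.0f MiB/s\n", now / (1 << 20));  /* ~190 */
            printf("then:  %.0f MiB/s\n", then / (1 << 20)); /* ~22  */
            printf("ratio: %.1fx\n", now / then);            /* ~8.6 */
            return 0;
    }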

Nothing has been moved

Posted Nov 2, 2012 23:44 UTC (Fri) by Arker (guest, #14205) [Link] (2 responses)

I don't want to beat it to death, but please. Using your figures, the video has scaled up 9.5 times (190/20). Comparing the clock rates of the processors, the hardware has increased in the same amount of time by a factor of 83 1/3 (1000/16).

And this blunt comparison is a *severe* underestimate of the real difference, because an ARM11 can do a lot more with a clock cycle. That 386 chip didn't even have a floating-point unit, let alone tricks like SIMD, branch prediction, out-of-order completion... clock for clock, the ARM chip would still be far more powerful. And that's before you even consider the cache architecture, the system bus... over 500 times the main memory.

I have no doubt at all that if you could get a few thousand of those ARM chips into the hands of promising young programmers WITHOUT the fancy GPU to fall back on, one of them would shock you all by making it do things you think are impossible. But if he's told instead that he has to use the high-level interface and pass OpenGL to a blob he cannot inspect or modify, he'll probably just pass messages until he gets bored, or finds a bug he can't fix, and then move on to something less frustrating than proprietary computing, like playing football with a bunch of guys twice his size or having molars extracted for fun.

Nothing has been moved

Posted Nov 3, 2012 0:19 UTC (Sat) by dlang (guest, #313) [Link]

> I have no doubt at all that if you could get a few thousand of those ARM chips into the hands of promising young programmers WITHOUT the fancy GPU to fall back on, one of them would shock you all by making it do things you think are impossible. But if he's told instead that he has to use the high-level interface and pass OpenGL to a blob he cannot inspect or modify, he'll probably just pass messages until he gets bored, or finds a bug he can't fix, and then move on to something less frustrating than proprietary computing, like playing football with a bunch of guys twice his size or having molars extracted for fun.

Nobody is disputing that more access would be better, but you are making the assumption that doing new and interesting things with the video is the primary purpose of all users of the device.

It may surprise you that most people who use computers aren't going to try and debug video drivers or firmware, even where they do have that capability. They will usually just download the latest version to see if it's fixed, live with the problem, or revert to a prior version.

We saw this with the Intel video drivers a few years ago: fully open-source drivers, but when there were problems with the drivers in an Ubuntu release, 99.999+% of the people just stuck with an older version.

For those people, the difference between a high-level API and a low-level API is meaningless. To be fair, probably 90% of them wouldn't care if the entire driver were a binary blob, but that still leaves a very large group of people who benefit from having all the kernel and userspace stuff open, even while the firmware is closed and has a high-level API.

Nothing has been moved

Posted Nov 3, 2012 15:28 UTC (Sat) by bronson (subscriber, #4806) [Link]

Comparing processor clocks is just silly. The framebuffer is not stored in L1 cache.

On the Pi, the FB is in shared RAM clocked at 400MHz. The RAM probably has a bandwidth of around 250MB/sec (wild-ass guess based on the parts). If you're driving 1080p, that doesn't leave much bandwidth for anything else. Plus ça change, eh?

Nothing has been moved

Posted Nov 2, 2012 7:09 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

Nonsense. A 386SX couldn't even do anything useful alongside fluid full-screen animation in mode 13h (that's 320x200 with 256 colors) without palette tricks or hacks like Wolf3D's runtime code generation.

A 386DX was a little bit better - it could run simple games like Doom, though it had to use page flipping with non-standard display modes, because blitting 64KB of framebuffer data was too taxing for these systems.

Nothing has been moved

Posted Nov 2, 2012 11:03 UTC (Fri) by Arker (guest, #14205) [Link] (3 responses)

> Nonsense.

This is exactly what I was talking about. You are so secure in your knowledge. Yet have you actually sat down and done the math?

I know it's possible because I did it, so I know that if you actually did the math, it would have to be possible. There is a huge difference between what it takes to decode and display a video on top of a multi-user, general-purpose software stack mostly written in ultra-high-level languages and essentially unoptimised, and what is actually possible given a highly optimised decoder running without interference on the bare hardware.

Even given the significant increases in resolution, and the modern codecs which require quite a bit more processor time, the increase in demand on the hardware is orders of magnitude off in comparison to the actual increase we have seen in hardware capability over time. That 386 ran at 16 MHz, and it was overclocked to do it; clock for clock it was vastly inferior to your ARM11, which is running at over 60 times the clock frequency. Not to mention having 512 times the RAM on a much faster system bus...

Nothing has been moved

Posted Nov 2, 2012 15:55 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

I'm old enough to have actually used a 386SX-based computer. And to write simple demos on it - it was possible to write fluid (30fps) animation on it, but simply filling the screen with a solid color was already taxing its RAM bandwidth.

Anyway, the Pi's bandwidth is barely enough for full HD video as it is. If you throw in non-trivial rendering, it's simply not enough, again.

Unless, of course, you're ready to limit yourself to "Tetris" or maybe "Digger".

Nothing has been moved

Posted Nov 2, 2012 17:24 UTC (Fri) by nix (subscriber, #2304) [Link] (1 responses)

Er, *none* of the video playback and decoding engines are written in 'ultra-high-level languages': the inner loops of most of them are handcrafted assembler. None of them are 'essentially unoptimized'. The pixel display routines in X are also, these days, mostly done by pixman, and to a considerable degree in handcrafted assembler, taking advantage of SSE and the like.

I note that your 386 almost certainly did not have to decompress compressed video and blit it at the same time as everything else.

Nothing has been moved

Posted Nov 2, 2012 23:50 UTC (Fri) by Arker (guest, #14205) [Link]

I didnt say the codecs are written in ultra-high level languages, although I wouldnt be shocked if you found an instance of it particularly on a less common architecture like ARM. But what I did say was that the rest of the system is often written so. And regardless of how good your codec is, it is still running inside of a much larger, looser system which has very significant performance costs.

Nothing has been moved

Posted Nov 2, 2012 16:56 UTC (Fri) by intgr (subscriber, #39733) [Link]

> There is nowhere near enough processing power in the ARM chip to simply scale video from 320x200 to full screen without help from the GPU. I accidentally triggered this with MPlayer a few weeks ago

The fact that MPlayer cannot do it isn't evidence that the CPU is incapable. Are you sure that MPlayer is actually utilizing everything that the ARM core has to offer? SIMD instructions etc?

It might just be that MPlayer on ARM is using some generic C scaling routine that nobody has bothered to optimize, because common x86 desktops are all running another assembly-optimized implementation.
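
For illustration, a generic C fallback of the kind being described might look like this nearest-neighbour loop - a hypothetical sketch, not MPlayer's actual scaling code:

    #include <stddef.h>
    #include <stdint.h>

    /* Nearest-neighbour upscale of a 32-bit RGBA frame: a couple of
     * integer divides per pixel and no SIMD. An optimized (e.g. NEON)
     * version would process many pixels per instruction instead. */
    void scale_nearest(const uint32_t *src, int sw, int sh,
                       uint32_t *dst, int dw, int dh)
    {
            for (int y = 0; y < dh; y++) {
                    const uint32_t *row = src + (size_t)(y * sh / dh) * sw;

                    for (int x = 0; x < dw; x++)
                            dst[(size_t)y * dw + x] = row[x * sw / dw];
            }
    }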

Nothing has been moved

Posted Nov 1, 2012 6:29 UTC (Thu) by dlang (guest, #313) [Link] (4 responses)

The question is whether the stability issues are due to the driver, or to the driver's interaction with the rest of the kernel.

Personally, I suspect that most of the problems are in the latter category.

The closed driver is having to interact in a multi-threaded environment with other processes manipulating memory, with allocating memory in the same space as the rest of the kernel, and with all the locking that the rest of the kernel expects (and in some cases requires). And the closed driver is trying to do this without being modified from kernel version to kernel version, even though the rules for the kernel are changing (the locking rules in particular, although memory management changes somewhat as well).

If stuff running on the GPU limits itself to reading and writing buffers that are explicitly allocated for it, almost all of the problems mentioned go away, and the remaining 'shim' driver can evolve along with the rest of the kernel.

In this case, they talked about how part of the difference was the closed drivers supporting a newer version of OpenGL; with the high-level interface this would not vary from driver to driver (unless specific drivers required different versions of the firmware), so I would expect things to be a lot closer to feature parity between the two modes.

In any case, even with the ATI way of doing things, if there is bugginess in the firmware, it can cause problems for the overall system.

Nothing has been moved

Posted Nov 1, 2012 23:04 UTC (Thu) by Arker (guest, #14205) [Link] (3 responses)

> The question is whether the stability issues are due to the driver, or to the driver's interaction with the rest of the kernel.

A distinction without a difference. The driver has no purpose and no function other than inside the kernel.

> If stuff running on the GPU limits itself to reading and writing buffers that are explicitly allocated for it, almost all of the problems mentioned go away, and the remaining 'shim' driver can evolve along with the rest of the kernel.

But as long as the stuff running on the GPU is an opaque blob we cannot audit or replace, there is absolutely no way we can ever have any confidence that it is limited like that.

Nothing has been moved

Posted Nov 1, 2012 23:47 UTC (Thu) by dlang (guest, #313) [Link] (2 responses)

>> The question is whether the stability issues are due to the driver, or to the driver's interaction with the rest of the kernel.

> A distinction without a difference. The driver has no purpose and no function other than inside the kernel.

Actually, in this case it is a very important distinction.

Let's put it another way.

Are the bugs in the graphics logic, or in the interaction with the rest of the kernel?

If the bugs are in the graphics logic, then they would remain even if they were separated out the way the Pi's Broadcom driver is.

If the bugs are in the interaction with the rest of the kernel, then an API like the Pi's would give us the best of both worlds: good graphics performance and clean interaction with the kernel.

The driver vendors keep wanting to have a stable API for their interaction with the kernel, and the kernel devs (for good reason) refuse to freeze the kernel internal APIs. But if the API to the device is defined and frozen by the firmware interface, everybody wins (except those people who want to make the graphics hardware do different things)

Yes, the graphics hardware could start scribbling to any part of memory that it wants, but technically, so could any bus-mastering controller card, and there have been very few cases where bus-mastering network or drive interface cards have caused problems from this.

Nothing has been moved

Posted Nov 2, 2012 3:32 UTC (Fri) by Arker (guest, #14205) [Link] (1 responses)

> But if the API to the device is defined and frozen by the firmware interface, everybody wins

No, I don't agree. There is no win there for me at all (other than simply not buying it). The current situation with my ATI card is far preferable. You may call it a win if you get what you want out of it, but you do not get to define it as a win for me. What I want is a system where there is nothing running that I did not put there, nothing that I cannot edit, no code that I cannot audit - that is the whole point of free software. The hardware I pay for should respond to my commands, not anyone else's. Your 'solution' gives me exactly zero of what I want; it's not a compromise, it's a total loss.

Nothing has been moved

Posted Nov 2, 2012 7:25 UTC (Fri) by dlang (guest, #313) [Link]

> No, I don't agree. There is no win there for me at all

You conveniently left off the caveat that covered you:

>> (except those people who want to make the graphics hardware do different things)

What peculiar semantics

Posted Oct 29, 2012 11:19 UTC (Mon) by union (guest, #36393) [Link]

> What a horribly inefficient system you describe. I wish we had a market economy in this country, so that dinosaurs like that could die and be replaced by more agile competition.

Why is it horrible/inefficient?

If this company's major target market were free-software people, then sure. But this is likely a one-time deal, or, if we are lucky, one product line. Why optimize something you rarely do and will likely never repeat?

Of course, if this becomes a popular product line, and free software support becomes important, then they will likely streamline the effort.

Like it or not, free software support is not high on anyone's priority list.

And besides, it needs legal review and sign-offs because of the legal climate (and not just in the US). Anyone not doing this is just asking for trouble down the line.

What peculiar semantics

Posted Oct 28, 2012 16:02 UTC (Sun) by wookey (guest, #5501) [Link] (6 responses)

> Is this better than what is supplied by any other ARM system? Yes.

Just to be clear that should be qualified as: "Any other recent ARM system with a GPU". There have been plenty of ARM systems in the past which were entirely free of proprietary firmware blobs, and there may well be current GPU-less devices which are blob-free (it's fair to say that I can't keep up any more with so much hardware coming out, so don't know for sure).

And, just to keep the nitpicks complete: obviously if an otherwise blob-free device has USB, then you can plug in things which themselves need firmware blobs, but I think it's reasonable for that not to count against the core ARM board.

ARM hardware didn't use to have this terrible 'proprietary blob' problem - it has arrived relatively recently, due largely to graphics hardware vendors' behaviour (they think they are special, and have all patented themselves into corners), but also to increasing device complexity and the way third-party SoC sections (I refuse to describe them as 'IP') get integrated, sold, and engineered, which also tends to engender blobs.

What peculiar semantics

Posted Oct 28, 2012 20:31 UTC (Sun) by dlang (guest, #313) [Link] (3 responses)

> Just to be clear that should be qualified as: "Any other recent ARM system with a GPU"

Yes, this is a good point. The problem is with the GPU vendors. In the ARM space, they are where NVIDIA and ATI were about 8 years ago, but in this case, it's all vendors, including Intel.

What peculiar semantics

Posted Oct 29, 2012 9:13 UTC (Mon) by khim (subscriber, #9252) [Link] (2 responses)

What Intel chips are you talking about? Intel stopped producing ARM chips more than five years ago...

What peculiar semantics

Posted Oct 29, 2012 18:04 UTC (Mon) by dlang (guest, #313) [Link] (1 responses)

Intel's mobile chipsets are just like everyone else's (except Broadcom's, now): closed binary blobs running either in the kernel or in userspace.

What peculiar semantics

Posted Oct 30, 2012 0:27 UTC (Tue) by cyanit (guest, #86671) [Link]

It seems that Intel might eventually replace the Atom with scaled-down versions of their laptop CPUs, and those come with documented GPUs and open-source drivers.

What peculiar semantics

Posted Oct 29, 2012 11:27 UTC (Mon) by njwhite (guest, #51848) [Link] (1 response)

> they think they are special, and have all patented themselves into corners

It's ironic that patents have forced them to be even more secretive about the workings of their products, given that the entire point is supposed to be reducing secrecy. Not that anybody here needs convincing that patents in our space are harmful...

What peculiar semantics

Posted Oct 31, 2012 0:36 UTC (Wed) by JanC_ (guest, #34940) [Link]

All of them are afraid that they are probably unknowingly using some patented technology (they can't check a zillion patents), and opening up would make a lawsuit about that more likely...

So yes, patents are having the opposite effect to what was intended.

