
The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Over at the Light Blue Touchpaper blog, there is a summary of a paper [PDF] presented in late June at the 2014 International Symposium on Computer Architecture about Capability Hardware Enhanced RISC Instructions (CHERI). "CHERI is an instruction-set extension, prototyped via an FPGA-based soft processor core named BERI, that integrates a capability-system model with a conventional memory-management unit (MMU)-based pipeline. Unlike conventional OS-facing MMU-based protection, the CHERI protection and security models are aimed at compilers and applications. CHERI provides efficient, robust, compiler-driven, hardware-supported, and fine-grained memory protection and software compartmentalisation (sandboxing) within, rather than between, address spaces. We run a version of FreeBSD that has been adapted to support the hardware capability model (CheriBSD) compiled with a CHERI-aware Clang/LLVM that supports C pointer integrity, bounds checking, and capability-based protection and delegation. CheriBSD also supports a higher-level hardware-software security model permitting sandboxing of application components within an address space based on capabilities and a Call/Return mechanism supporting mutual distrust."
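
For readers who want to see concretely what "fine-grained memory protection within an address space" buys, here is a minimal, purely illustrative C sketch. The cap_ptr structure and cap_load() helper are hypothetical stand-ins invented for this example, not CHERI's capability types or compiler intrinsics; on real CHERI hardware the bounds travel with the pointer in a capability register and the check is performed by the CPU on every dereference, so no explicit software test is needed.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical "fat pointer": a base, a length, and an offset.  On CHERI the
 * bounds and permissions live in a hardware capability and are checked by
 * the CPU on every dereference; here the check is done in software purely
 * to illustrate what the compiler and hardware are enforcing. */
struct cap_ptr {
    uint8_t *base;
    size_t   length;
    size_t   offset;
};

static uint8_t cap_load(struct cap_ptr c)
{
    if (c.offset >= c.length) {
        /* On CHERI this would be a hardware exception, not a library call. */
        fprintf(stderr, "bounds violation at offset %zu\n", c.offset);
        abort();
    }
    return c.base[c.offset];
}

int main(void)
{
    uint8_t buf[16] = { 0 };
    struct cap_ptr p = { buf, sizeof(buf), 0 };

    p.offset = 8;
    printf("in bounds: %d\n", cap_load(p));  /* prints 0 */

    p.offset = 32;
    cap_load(p);                             /* out of bounds: aborts */
    return 0;
}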

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 3, 2014 23:38 UTC (Thu) by BrucePerens (guest, #2510) [Link]

Remember the IAPX-432?

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 0:10 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Yup, it was a nice experiment. It could have worked if Intel had tried harder to fix the obvious flaws in the compilers.

Not just the compilers

Posted Jul 6, 2014 13:10 UTC (Sun) by CChittleborough (subscriber, #60775) [Link]

The iAPX-432 was more than a little overdesigned. It is not a good sign when a CPU takes 20 ms to make a procedure call. (Yes, 20 milliseconds: a sequence of calls to an empty procedure would run at 50 calls per second.)

Things like bit-aligned instructions did not help, either ...

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 0:11 UTC (Fri) by pjdc (subscriber, #6906) [Link]

Yes! The project was not a total loss, as it yielded one of my favourite Wikipedia sentences ever (now sadly edited into oblivion):
The resulting CPU was very slow and expensive, and so Intel's plans to replace the x86 architecture with the iAPX 432 ended miserably.
which I enjoy quoting in the hope that the reader will assume, before they reach the end, that it is referring to IA-64.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 8:04 UTC (Fri) by khim (subscriber, #9252) [Link]

Well, if you think about it, this actually shows that IA-64 was quite successful. Intel tried to obsolete the x86 architecture three times: once with the iAPX 432, once with the i860, and the last time, of course, with the Itanic. But note that the first one failed miserably; the second one was actually used (although not as much as the earlier, simpler i960 variant), but mostly in very niche applications; the third one actually got a port of Windows (at a time when Microsoft was closing one port of Windows NT after another!), and we still recall it even today, more than a decade after its introduction. Perhaps the next attempt will actually be successful.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 12:03 UTC (Fri) by roblucid (subscriber, #48964) [Link]

And the much-despised AMD actually did supersede IA32, by providing a good-enough 64-bit architecture which runs IA32 well. Most of us use x86-64, which Intel didn't want.

It's far more likely that we'll end up on an ARM variant than that Intel will supplant the current 64-bit arch.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 16:39 UTC (Fri) by khim (subscriber, #9252) [Link]

AMD didn't supersede IA32. It just extended it. Originally Intel even used IA32e as the name for that architecture—it was quite a fitting name, except that it created mass confusion.

As for ARM… yes, it could happen. But it will depend to a huge degree on Intel's level of success on the mobile front. Intel is throwing a huge amount of money at that problem, so anything could happen.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 5, 2014 9:47 UTC (Sat) by roblucid (subscriber, #48964) [Link]

Oh, but it did supersede it. An evolution, not a revolution, but it is still different.

It changed the ABI and moved the base assumptions up: more registers, fewer quirks, and easy for optimising compilers to support, which was what was needed in practice rather than being stuck using -march=i386, whilst providing good binary compatibility, which was a key requirement.

AMD's hack was a lot neater than Intel's Itanic vision of 64-bit, and as the x86 decode problem seems to be less and less significant on modern-day chips, there's less reason for a more regular, less quirky, clean design which fails to run current software without emulation.

Intel still has quirks; just the other day I read about the Dolphin emulator needing performance hacks even on Core 2 chips, due to a slowdown when filling cache lines across boundaries. Afraid I haven't found the link to the page.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 5, 2014 11:33 UTC (Sat) by khim (subscriber, #9252) [Link]

You could not realistically say that x86-64 superseded x86, period. The difference between the 8086 architecture and the 80386 architecture is much bigger than the difference between the 80386 and IA32e.

You can either claim that Intel successfully replaced its 8086 architecture with its 80286 architecture and then again with its 80386 architecture, which AMD then replaced with its x86-64 architecture, or you must accept that x86-64 is just a version of x86, nothing more. One of the most important changes that came with the switch from -march=i386 to -march=x86-64 was introduced not by AMD in the Athlon64, but by Intel in the Pentium 4, after all.

Indeed, Wikipedia includes x86-64 in the following list of x86 extensions: x87 FPU, MMX, 3DNow!, SSE, PAE, x86 virtualization. And this is a correct and proper characterization of x86-64, IMNSHO.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 5, 2014 19:23 UTC (Sat) by roblucid (subscriber, #48964) [Link]

I can understand this. Yes, it's similar for compatibility reasons to i386... but it is different; IA32 was 32 bits. Now we are on 64-bit AMD, with a whole load more registers. It's NOT the Itanic IA-64, end of. And your comments about the Pentium 4 seem to involve re-writing history, as it was the Opteron which introduced x86-64. :)

Now read this... enjoy. If you don't believe this one, just do some more digging until you're convinced: http://www.theinquirer.net/inquirer/news/1010366/linus-to...

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 5, 2014 23:38 UTC (Sat) by khim (subscriber, #9252) [Link]

Yes, it's similar for compatibility reasons to i386... but it is different; IA32 was 32 bits.

Sure. But the 80286 and 8086 were 16-bit while the 80386 was 32-bit, so why is the 80386 considered x86 while the Opteron is suddenly not?

If the switch from a 16-bit CPU to a 32-bit CPU is not considered a new architecture, then the switch from a 32-bit CPU to a 64-bit CPU should not be treated as such either. Or else we'll be forced to treat the Pentium III, with its new SSE, as a new architecture, which makes no sense whatsoever since it's basically a minor alteration of the Pentium II. Heck, AVX-512 doubled the number of SIMD registers and made them larger as well—yet it's still considered x86, not something new!

Now we are on 64-bit AMD, with a whole load more registers.

Wow. What an achievement. Sorry, but it's very similar to the introduction of the SIB byte in the 80386, which, instead of two base registers (BX and BP) and two index registers (SI and DI), gave us eight (with some limitations for ESP). This was quite a nice change, but of course it was not a new architecture.
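
To make the addressing-mode comparison concrete, here is a trivial, purely illustrative C fragment; the instruction forms mentioned in the comments are typical compiler output, not something any particular toolchain guarantees.

#include <stddef.h>
#include <stdint.h>

/* In 16-bit code an indexed load is restricted to the [BX|BP + SI|DI + disp]
 * combinations and there is no scale factor, so the index must be shifted
 * into SI or DI first.  With the 80386's SIB byte the compiler can emit a
 * single scaled-index load such as  mov eax, [ebx + esi*4 + 12],  using
 * (almost) any general-purpose register as base or index. */
uint32_t pick(const uint32_t *table, size_t i)
{
    return table[i];  /* typically one scaled-index load on i386 and later */
}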

And your comments about the Pentium 4 seem to involve re-writing history, as it was the Opteron which introduced x86-64.

Sure, the Opteron (not the Athlon64, my bad) introduced the x86-64 architecture; I never said otherwise. But a significant difference between -march=i386 and -march=x86-64 is the use of the x87 FPU in -march=i386 mode and SSE2 in -march=x86-64 mode. The SSE2 registers and the appropriate instructions were introduced in the Pentium 4, not in the Opteron.
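
As a small illustration of that -march difference (a sketch only; the exact instructions emitted depend on the compiler and the options used):

/* Built as 32-bit code for a plain i386 (e.g. gcc -m32 -march=i386 -O2),
 * this multiplication is typically done on the x87 stack (fld/fmul/fstp)
 * with the result returned in st(0); built for x86-64 (gcc -O2 on a 64-bit
 * target), the default is SSE2 scalar code (mulsd) with the arguments and
 * result in XMM registers.  The SSE2 instructions themselves arrived with
 * the Pentium 4; what the 64-bit ABI changed is that compilers may rely on
 * them unconditionally. */
double scale(double x, double factor)
{
    return x * factor;
}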

Thus yes, AMD introduced a valuable extension to the x86 architecture; obviously, no one says that's not true. But to say that it introduced a whole new architecture… nope, didn't happen. It's still good old x86—both internally and externally.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 6, 2014 6:31 UTC (Sun) by roblucid (subscriber, #48964) [Link]

If you look at the examples you give from the point of view of what style of assembly instructions a compiler generates, you'll see similarities.

If you look at things from the point of view of the memory model (what C source code compiles and runs), you'll see differences.

The modern x86-64, 64-bit ABI is down to AMD; it's NOT Intel's idea, and no amount of hair-splitting changes that.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 6, 2014 9:50 UTC (Sun) by khim (subscriber, #9252) [Link]

The modern x86-64 is a minor tweak on top of the existing Intel-produced architecture. No amount of handwaving will change that fact.

The claim that AMD “invented” a “new” x86-64 architecture is as ridiculous as the claim that Benigno Castiglione built the Duomo di Milano at the end of the last century. Sure, he did some work on it, but his additions were relatively minor and not really all that significant; all the important decisions were made centuries before. Even more ridiculous is the claim that x86-64 superseded x86: it's a continuation of the very same thing! How could something supersede itself? The fact that for once AMD invented something which Intel also adopted (it tried many times: 3DNow!, XOP and so on, but was ignored by both Intel and programmers for the most part) is commendable, but it does not suddenly make x86-64 a new architecture which could supersede x86. It's still the same old Duomo di Milano x86, just with some modern tweaks.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 6, 2014 10:26 UTC (Sun) by PaXTeam (guest, #24616) [Link]

> The modern x86-64 is a minor tweak on top of the existing Intel-produced architecture. No amount of handwaving will change that fact.

heh, 'minor tweak' (do you have any clue about amd64?) and handwaving. amd64 is a new arch, no question about it, even according to intel, just read the titles of their own manuals: http://www.intel.com/content/www/us/en/processors/archite....

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 6, 2014 12:35 UTC (Sun) by khim (subscriber, #9252) [Link]

Even according to intel, just read the titles of their own manuals: http://www.intel.com/content/www/us/en/processors/archite....

So now we treat marketing BS as the gospel truth? Come on. That's just stupid.

Do you have any clue about amd64?

Do you? I know that people who actually work with said architecture treat it as a single one (with significant tweaks here and there, sure). Both GCC and Linux kernel developers put files for x86 and x86-64 into a single subdirectory of their arch directory for a reason. Note that Linux developers initially treated them as two different architectures, but it made no sense and they were eventually merged. Yes, it's quite a significant extension, but so are Sparc64 and the 32-bit and 64-bit versions of the POWER architecture.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 6, 2014 15:20 UTC (Sun) by PaXTeam (guest, #24616) [Link]

it's called a 'developer manual', not 'marketing BS', but then it's not the first time you've had reading comprehension problems, is it ;).

the directory structure of a source code tree has no bearing on what is a separate architecture (e.g., the BSDs have kept i386/amd64 separate). if anything, it reflects some wise engineering design decisions of AMD that kept the two archs close enough that some linux devs managed to drive through the x86 'unification'. as for how successful the x86 merge is:

$ find linux-3.15.3/arch/x86/ | wc -l
1255
$ grep -rwn CONFIG_X86_32 linux-3.15.3/arch/x86/ | wc -l
414
$ grep -rwn CONFIG_X86_64 linux-3.15.3/arch/x86/ | wc -l
472
$ grep -rn 'def.*CONFIG_X86_32' linux-3.15.3/arch/x86/ | wc -l
341
$ grep -rn 'def.*CONFIG_X86_64' linux-3.15.3/arch/x86/ | wc -l
403
$ find linux-3.15.3/arch/x86/ -name '*_32.[ch]' | wc -l
59
$ find linux-3.15.3/arch/x86/ -name '*_64.[ch]' | wc -l
61
$ find linux-3.15.3/arch/x86/ -name '*_32.[chS]' | wc -l
81
$ find linux-3.15.3/arch/x86/ -name '*_64.[chS]' | wc -l
101

so after several years(!) of effort there are almost as many exceptions as 'unified' files; in other words, it wasn't done because i386/amd64 are the same arch (which is, ironically, something you'd know yourself if you *actually* worked with these archs). so you tell me who's treating marketing BS as the gospel truth.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 6, 2014 12:46 UTC (Sun) by roblucid (subscriber, #48964) [Link]

khim: just stick to your 16-bit 8088 and 80286 programs, OK :)

Everyone else will use a -march value that is x86-64.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 7, 2014 14:20 UTC (Mon) by rriggs (subscriber, #11598) [Link]

This goes down as the stupidest argument I have ever witnessed on LWN. Congratulations.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 7, 2014 15:40 UTC (Mon) by farnz (subscriber, #17727) [Link]

I would claim that there have been the following architectures in Intel's 8086-compatible CPU range:

  1. 8086, as implemented by the 8086 and 8088, and extended by the 80186.
  2. 80286, as implemented by the 80286.
  3. 80386, as implemented by the 80386 and all later CPUs that didn't implement AMD64.
  4. AMD64.

The 80186 was an extension; nothing in the programming model changed, but there were some instructions to make common tasks faster; the same goes for MMX, SSE, AVX, AES-NI, TSX-NI, etc. OTOH, the 80286 has its own programming model, as does the 80386 - you can switch back to the 8086 programming model, but that's backwards-compatibility cruft, needed to make the chip sell.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 11:37 UTC (Fri) by dps (subscriber, #5725) [Link]

The last time I heard about it, IA-64 was a new design that could be very fast if you could exploit the large number of registers and the explicitly parallel instructions. Unfortunately, the wished-for compilers that could do this proved impossible to write.

Compilers for Itanium

Posted Jul 6, 2014 14:45 UTC (Sun) by CChittleborough (subscriber, #60775) [Link]

Yep. The distinctive feature of the Itanium architecture is Explicitly Parallel Instruction Computing (EPIC), which expects the compiler to generate and annotate code for predicated execution and other fancy things. In theory, EPIC allows faster execution than CPUs which have to recognise and exploit implicit parallelism to get the best use of their out-of-order and/or multiple-issue execution. In practice, even Intel could not produce compilers that made good enough use of EPIC.

On the other hand, the Itanium architecture does have excellent, compiler-friendly support for loop unrolling. Since a lot of demand for supercomputer time is for Fortran programs with deeply nested DO loops and run times measured in days, Itanium-based supercomputers were quite popular for a while.
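
For readers who have not met the idea, here is a small C sketch of the kind of transformation an EPIC-oriented compiler is expected to perform: unroll the loop so that several independent operations are visible at once and can be scheduled into the same instruction bundles. This is ordinary C, hand-unrolled only to make the idea concrete; it is not Itanium-specific code.

#include <stddef.h>

/* y[i] += a * x[i], unrolled by four.  The four updates in the main loop are
 * independent of each other, which is exactly what a wide in-order design
 * like Itanium needs the compiler to expose. */
void daxpy(double *restrict y, const double *restrict x, double a, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        y[i + 0] += a * x[i + 0];
        y[i + 1] += a * x[i + 1];
        y[i + 2] += a * x[i + 2];
        y[i + 3] += a * x[i + 3];
    }
    for (; i < n; i++)        /* remainder iterations */
        y[i] += a * x[i];
}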

(BTW, am I being excessively cynical when I wonder if an unwritten design goal for the Itanium architecture was to give Intel a natural monopoly by being too complex for anyone to clone?)

Compilers for Itanium

Posted Jul 8, 2014 12:46 UTC (Tue) by james (subscriber, #1325) [Link]

I think your cynicism is misplaced. The preferred route was allegedly patents held by a joint Intel-HP company, which would presumably not have been covered by any cross-licensing deals Intel had.

Making a processor too complex for anyone else to clone means that it's too complex for you to make a compatible processor, when the time comes to replace the initial implementation.

(Also, Itanium was notably better at floating-point performance than at the integer workloads it mostly ended up running. The Fortran programs you mention sound like the sort of programs where compilers might be able to extract parallelism, either into the EPIC or SSE models. But I'm sure other subscribers know more about that than I do!)

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 5, 2014 0:44 UTC (Sat) by gmaxwell (guest, #30048) [Link]

Wikipedia cites the iAPX 432 as being 5-10x slower than contemporary parts. I'd jump at the ability to get strong hardware-enforced security for only a 5x slowdown compared to commercial CPUs on a similar process... Not for every application, but there are plenty of places where you're already I/O-bound (especially with hardware crypto support) and security is much more important.

Perhaps a nice project to couple with seL4: http://ssrg.nicta.com.au/projects/seL4/

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 5, 2014 11:20 UTC (Sat) by khim (subscriber, #9252) [Link]

Wikipedia also includes a section on the project's failures, which explains that the 5-10x slowdown was not an intrinsic limitation but basically a consequence of immature compiler technology and some bad design choices. It's perfectly possible to make it much faster. It'll probably never be as fast as a system without all these checks, but a 20-30% slowdown is much easier to swallow than a 5-10x one.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 13:52 UTC (Fri) by ortalo (subscriber, #4654) [Link]

Have you noticed that Cambridge & SRI intend to release their work on these processors as open source hardware?
What an interesting (and timely) echo of the recent article on open source hardware (http://lwn.net/Articles/604049/).

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 15:25 UTC (Fri) by roblucid (subscriber, #48964) [Link]

It would be ironic if open source hardware let vendors lock a box down and eliminate rooting, unless you're willing to re-program the FPGA!

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 4, 2014 23:34 UTC (Fri) by ajb44 (guest, #12133) [Link]

They do, and they seem quite serious about it since they've gone to the trouble of making their own non-profit to receive contributor agreements. Note however that it's implemented in bluespec, which AFAICT is still a proprietary language; although maybe the situation has changed since I last checked, which was a while ago. Although the practical difference may be minimal, since last I heard you basically need to use proprietary tools to synthesise the standardised HDLs anyway.

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 6, 2014 13:28 UTC (Sun) by alvieboy (subscriber, #51617) [Link]

since last I heard you basically need to use proprietary tools to synthesise the standardised HDLs anyway.

This is true, and probably always will be, because there is a very close relationship between the synthesizer and the FPGA model/family/vendor.
There have been some attempts to get at least some synthesizer working for some of the Xilinx Spartan-3 parts, including the bitfile generation, but AFAIK they don't work properly.

However, if you used VHDL/Verilog or any other standardized language, you'd at least be able to simulate it using Icarus Verilog or GHDL. With bluespec (whatever that is - I see many high-level HDL languages these days - is it SystemVerilog?) you cannot do that. It does seem to export to Verilog, though (and that could indeed work for simulation, but you still need bluespec to re-export any changes).

Alvie

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 7, 2014 21:09 UTC (Mon) by brouhaha (subscriber, #1698) [Link]

Note however that it's implemented in bluespec, which AFAICT is still a proprietary language;

It's Bluespec SystemVerilog (BSV), which their reference manual states to be a synthesizable subset of SystemVerilog. SystemVerilog is a standard language (IEEE 1800), not proprietary.

Although the practical difference may be minimal, since last I heard you basically need to use proprietary tools to synthesise the standardised HDLs anyway.

Even with an open-source synthesizer (e.g., Icarus Verilog), one still needs proprietary back-end tools for specific FPGAs. However, while those tools are proprietary, at least some of them are available at no charge with some limitations (e.g., Xilinx WebPack, Altera Quartus II Web Edition).

The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)

Posted Jul 5, 2014 18:04 UTC (Sat) by walex (subscriber, #69836) [Link]

That is the same approach used by Honeywell for the SCOMP variant of their Level 6 minicomputers.

Only 30-35 years ago :-).

Makes the Novena Open Laptop more than a curio.

Posted Jul 8, 2014 6:21 UTC (Tue) by k3ninho (subscriber, #50375) [Link]

The open-hardware Novena Open Laptop[1] from Bunnie Huang has a Xilinx FPGA, which I'd thought was an interesting add-on. Having software tooling to create your own robust software stack -- converting processes which show up as subverted into honeypots, perhaps -- turns the Novena product from interesting-looking hardware into a genuinely useful tool. I suspect that you'd need to write a re-bootstrapping setup if you were going to banish all the paranoia about trusting software repositories, source code, and trusting trust. These projects together make that feasible.

K3n.

1: https://www.crowdsupply.com/kosagi/novena-open-laptop


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds