The CHERI capability model: Revisiting RISC in an age of risk (Light Blue Touchpaper)
CHERI is an instruction-set extension, prototyped via an FPGA-based soft processor core named BERI, that integrates a capability-system model with a conventional memory-management unit (MMU)-based pipeline. Unlike conventional OS-facing MMU-based protection, the CHERI protection and security models are aimed at compilers and applications. CHERI provides efficient, robust, compiler-driven, hardware-supported, and fine-grained memory protection and software compartmentalisation (sandboxing) within, rather than between, address spaces. We run a version of FreeBSD that has been adapted to support the hardware capability model (CheriBSD) compiled with a CHERI-aware Clang/LLVM that supports C pointer integrity, bounds checking, and capability-based protection and delegation. CheriBSD also supports a higher-level hardware-software security model permitting sandboxing of application components within an address space based on capabilities and a Call/Return mechanism supporting mutual distrust.
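To make the excerpt's "fine-grained memory protection" concrete, here is a minimal C sketch (an illustration of the class of bug being targeted, not code from the CHERI project):

#include <stdio.h>
#include <string.h>

/* Illustrative only: a classic adjacent-field overflow. On an MMU-only
 * system both fields live in the same page, so the oversized copy
 * silently corrupts len. Under the capability model described in the
 * excerpt, the pointer derived from r.buf would carry the bounds of
 * buf alone and the out-of-bounds store would trap. */
struct record {
    char buf[8];
    int  len;                 /* sits right after buf in memory */
};

int main(void)
{
    struct record r = { "", 0 };
    const char *input = "twelve chars";        /* 12 characters: too long for buf */

    memcpy(r.buf, input, strlen(input));       /* overflows buf into len */
    printf("len is now %d\n", r.len);          /* typically corrupted */
    return 0;
}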
Posted Jul 3, 2014 23:38 UTC (Thu)
by BrucePerens (guest, #2510)
[Link] (23 responses)
Remember the IAPX-432?
Posted Jul 4, 2014 0:10 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
The iAPX-432 was more than a little overdesigned. It is not a good sign when a CPU takes 20ms to make a procedure call. (Yes, 20 milliseconds. A sequence of calls to an empty procedure would run at 50 calls per second.)
Posted Jul 6, 2014 13:10 UTC (Sun)
by CChittleborough (subscriber, #60775)
[Link]
Not just the compilers: things like bit-aligned instructions did not help, either ...
Posted Jul 4, 2014 0:11 UTC (Fri)
by pjdc (guest, #6906)
[Link] (18 responses)
Yes! The project was not a total loss, as it yielded one of my favourite Wikipedia sentences ever (now sadly edited into oblivion):
"The resulting CPU was very slow and expensive, and so Intel's plans to replace the x86 architecture with the iAPX 432 ended miserably."
which I enjoy quoting in the hope that the reader will assume, before they reach the end, that it is referring to IA-64.
Posted Jul 4, 2014 8:04 UTC (Fri)
by khim (subscriber, #9252)
[Link] (14 responses)
Well, if you think about it, this actually shows that IA-64 was quite successful. Intel tried to obsolete the x86 architecture three times: once with the iAPX 432, once with the i860, and the last time with, of course, the Itanic. But note that while the first one failed miserably, the second one was actually used (although not as much as the earlier, simpler i960 variant), mostly in very niche applications, and the third one actually got a port of Windows (at a time when Microsoft was closing one port of Windows NT after another!) and we still recall it even today, more than a decade after its introduction. Perhaps the next attempt will actually be successful.
Posted Jul 4, 2014 12:03 UTC (Fri)
by roblucid (guest, #48964)
[Link] (13 responses)
It's far more likely we'll end up on an ARM variant than Intel supplanting current 64bit arch.
Posted Jul 4, 2014 16:39 UTC (Fri)
by khim (subscriber, #9252)
[Link] (12 responses)
AMD didn't supersede IA32. It just extended it. Originally Intel even used IA32e to name that architecture, which was quite a fitting name except that it created mass confusion. As for ARM… yes, it could happen. But it will depend to a huge degree on the level of success Intel has on the mobile front. Intel is throwing a huge amount of money at that problem, so anything could happen.
Posted Jul 5, 2014 9:47 UTC (Sat)
by roblucid (guest, #48964)
[Link] (11 responses)
It changed the ABI and moved the base assumptions up: more registers, fewer quirks, and easy for optimising compilers to support, which was what was needed in practice rather than being stuck using -march=i386, whilst providing good binary compatibility, which was a key requirement.
AMD's hack was a lot neater than Intel's Itanic vision of 64bit, and as the x86 decode problem seems to be less and less significant on modern-day chips, there's less reason for a more regular, less quirky, clean design that fails to run current software without emulation.
Intel still had quirks; just the other day I read about the Dolphin emulator needing performance hacks even on a Core 2 chip, due to a slowdown when filling cache lines across boundaries. Afraid I haven't found the link to the page.
Posted Jul 5, 2014 11:33 UTC (Sat)
by khim (subscriber, #9252)
[Link] (10 responses)
You could not realistically say that x86-64 superseded x86, period. The difference between the 8086 architecture and the 80386 architecture is much bigger than the difference between the 80386 and IA32e. You can either claim that Intel successfully replaced its 8086 architecture with its 80286 architecture and then again with its 80386 architecture, which AMD then replaced with its x86-64 architecture, or you must accept that x86-64 is just a version of x86, nothing more. One of the most important changes which happened with the switch from -march=i386 to -march=x86-64 was introduced not by AMD in the Athlon64, but by Intel in the Pentium4, after all. Indeed, Wikipedia includes x86-64 in the following list of x86 extensions: x87 FPU, MMX, 3DNow!, SSE, PAE, x86 virtualization. And that is the correct and proper characterization of x86-64, IMNSHO.
Posted Jul 5, 2014 19:23 UTC (Sat)
by roblucid (guest, #48964)
[Link] (8 responses)
Yes, it's similar for compatibility reasons to i386.. but it is different: IA32 was 32bits. Now we are on 64bit AMD, with a whole load more registers, and your comments about Pentium4 seem to involve re-writing history, as it was the Opteron which introduced x86-64.
Now read this.. enjoy, if you don't believe this one, just do some more digging until you're convinced - http://www.theinquirer.net/inquirer/news/1010366/linus-to...
Posted Jul 5, 2014 23:38 UTC (Sat)
by khim (subscriber, #9252)
[Link] (7 responses)
Sure, Opteron (not Athlon64, my bad) introduced the x86-64 architecture, I never said otherwise. But the significant difference between -march=i386 and -march=x86-64 is the use of the x87 FPU in -march=i386 mode and SSE2 in -march=x86-64 mode, and the SSE2 registers and the appropriate instructions were introduced in the Pentium4, not in the Opteron. Sure, IA32 was 32bit. But the 80286 and 8086 were 16bit while the 80386 was 32bit, so why is the 80386 considered an x86 and the Opteron suddenly not an x86? If the switch from a 16bit CPU to a 32bit CPU is not considered a new architecture, then the switch from a 32bit CPU to a 64bit CPU should not be treated as such. Or else we'll be forced to treat the Pentium III with its new SSE as a new architecture, which makes no sense whatsoever since it's basically a minor alteration of the Pentium II. Heck, AVX-512 doubled the number of SIMD registers and made them larger as well, yet it's still considered x86, not something new! As for "a whole load more registers": wow, what an achievement. Sorry, but it's very similar to the introduction of the SIB byte in the 80386, which instead of two base registers (BX and BP) and two index registers (SI and DI) gave us eight (with some limitations for ESP). That was quite a nice change, but of course it was not a new architecture. Thus yes, AMD introduced a valuable extension to the x86 architecture, obviously, no one says that's not true. But to say that it introduced a whole new architecture… nope, didn't happen. It's still good old x86, both internally and externally.
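To make the x87-versus-SSE2 point concrete, here is a small sketch (my own illustration, not from the thread; the file name and compiler flags are examples and assume a multilib-capable gcc):

/* fma1.c - the same source, two quite different instruction streams:
 *
 *   gcc -m32 -march=i386 -O2 -S fma1.c   # 32-bit: x87 stack code (flds/fmuls/fadds)
 *   gcc -O2 -S fma1.c                    # x86-64: scalar SSE2 (mulss/addss on %xmm registers)
 *
 * The SSE2 registers and instructions that the 64-bit ABI leans on were
 * introduced with the Pentium 4, before the Opteron shipped. */
float fma1(float a, float x, float y)
{
    return a * x + y;
}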
Posted Jul 6, 2014 6:31 UTC (Sun)
by roblucid (guest, #48964)
[Link] (6 responses)
If you look at things from the point of view of the memory model, of what C source code compiles to and runs as, you'll see differences.
The modern x86-64 64bit ABI is down to AMD; it's NOT Intel's idea, and no amount of hair splitting changes that.
Posted Jul 6, 2014 9:50 UTC (Sun)
by khim (subscriber, #9252)
[Link] (5 responses)
The modern x86-64 is a minor tweak on top of the existing Intel-produced architecture. No amount of handwaving will change that fact. The claim that AMD “invented” a “new” x86-64 architecture is as ridiculous as the claim that Benigno Castiglione built the Duomo di Milano at the end of the last century. Sure, he did some work on it, but his additions were relatively minor and not really all that significant. All the important decisions were made centuries before. Even more ridiculous is the claim that x86-64 superseded x86: it's a continuation of the very same thing! How could one supersede itself? The fact that for once AMD invented something which Intel also adopted (it tried many times: 3DNow!, XOP and so on, but was ignored both by Intel and programmers for the most part) is commendable, but it does not suddenly make x86-64 a new architecture which could supersede x86. It's still the same old x86, just with some modern tweaks.
Posted Jul 6, 2014 10:26 UTC (Sun)
by PaXTeam (guest, #24616)
[Link] (2 responses)
heh, 'minor tweak' (do you have any clue about amd64?) and handwaving. amd64 is a new arch, no question about it, even according to intel, just read the titles of their own manuals: http://www.intel.com/content/www/us/en/processors/archite....
Posted Jul 6, 2014 12:35 UTC (Sun)
by khim (subscriber, #9252)
[Link] (1 responses)
So now we treat marketing BS as a gospel of truth? Come on. That's just stupid. Do you? I know that people who actually work with said architecture treat it as a single one (with significant tweaks here and there, sure). Both GCC and Linux kernel developers put files for x86 and x86-64 into a single subdirectory of their arch directory for a reason. Note that Linux developers initially treated them as two different architectures, but it made no sense and they were eventually merged. Yes, it's quite a significant extension, but so is Sparc64, or the 32bit and 64bit versions of the POWER architecture.
Posted Jul 6, 2014 15:20 UTC (Sun)
by PaXTeam (guest, #24616)
[Link]
the directory structure of a source code tree has no bearing on what is a separate architecture (e.g., the BSDs have kept i386/amd64 separate). if anything, it reflects some wise engineering design decisions of AMD that kept the two archs close enough that some linux devs managed to drive through the x86 'unification'. as for how successful the x86 merge is:
$ find linux-3.15.3/arch/x86/ | wc -l
1255
$ grep -rwn CONFIG_X86_32 linux-3.15.3/arch/x86/ | wc -l
414
$ grep -rwn CONFIG_X86_64 linux-3.15.3/arch/x86/ | wc -l
472
$ grep -rn 'def.*CONFIG_X86_32' linux-3.15.3/arch/x86/ | wc -l
341
$ grep -rn 'def.*CONFIG_X86_64' linux-3.15.3/arch/x86/ | wc -l
403
$ find linux-3.15.3/arch/x86/ -name '*_32.[ch]' | wc -l
59
$ find linux-3.15.3/arch/x86/ -name '*_64.[ch]' | wc -l
61
$ find linux-3.15.3/arch/x86/ -name '*_32.[chS]' | wc -l
81
$ find linux-3.15.3/arch/x86/ -name '*_64.[chS]' | wc -l
101
so after several years(!) of effort there're almost as many exceptions as 'unified' files, or in other words, it wasn't done because i386/amd64 are the same arch (which is, ironically, something you'd know yourself if you *actually* worked with these archs). so you tell me who's treating marketing BS as a gospel of truth.
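For anyone who hasn't browsed arch/x86, the 'exceptions' those greps count look roughly like this; a made-up sketch rather than actual kernel source, with hypothetical names:

/* Hypothetical example of a "unified" arch/x86 header that still has to
 * branch on the sub-architecture; each such #ifdef is one of the
 * CONFIG_X86_32 / CONFIG_X86_64 hits counted above. */
#ifdef CONFIG_X86_64
typedef unsigned long long example_phys_addr_t;   /* 64-bit physical addresses */
# define EXAMPLE_USER_SPAN  (1ULL << 47)          /* 128 TiB canonical user half */
#else /* CONFIG_X86_32 */
typedef unsigned long example_phys_addr_t;        /* 32-bit (ignoring PAE here) */
# define EXAMPLE_USER_SPAN  (3ULL << 30)          /* 3 GiB user space under a 3G/1G split */
#endif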
Posted Jul 6, 2014 12:46 UTC (Sun)
by roblucid (guest, #48964)
[Link]
Everyone else will use a -march value that is x86-64
Posted Jul 7, 2014 14:20 UTC (Mon)
by rriggs (guest, #11598)
[Link]
Posted Jul 7, 2014 15:40 UTC (Mon)
by farnz (subscriber, #17727)
[Link]
I would claim that there have been the following architectures in Intel's 8086-compatible CPU range:
80186 was an extension; nothing in the programming model changes, but there's some instructions to make common tasks faster; same for MMX, SSE, AVX, AES-NI, TSX-NI etc. OTOH, 80286 has its own programming model, as does 80386 - you can switch back to the 8086 programming model, but that's backwards compatibility cruft as needed to make the chip sell.
Posted Jul 4, 2014 11:37 UTC (Fri)
by dps (guest, #5725)
[Link] (2 responses)
Posted Jul 6, 2014 14:45 UTC (Sun)
by CChittleborough (subscriber, #60775)
[Link] (1 responses)
Yep. The distinctive feature of the Itanium architecture is Explicitly Parallel Instruction Computing (EPIC), which expects the compiler to generate and annotate code for predicated execution and other fancy things. In theory, EPIC allows faster execution than CPUs which have to recognise and exploit implicit parallelism to get the best use of their out-of-order and/or multiple-issue execution. In practice, even Intel could not produce compilers that made good enough use of EPIC.
On the other hand, the Itanium architecture does have excellent, compiler-friendly support for loop unrolling. Since a lot of demand for supercomputer time is for Fortran programs with deeply nested DO loops and run times measured in days, Itanium-based supercomputers were quite popular for a while.
(BTW, am I being excessively cynical when I wonder if an unwritten design goal for the Itanium architecture was to give Intel a natural monopoly by being too complex for anyone to clone?)
Posted Jul 8, 2014 12:46 UTC (Tue)
by james (subscriber, #1325)
[Link]
I think your cynicism is misplaced. The preferred route was allegedly patents held by a joint Intel-HP company, which would presumably not have been covered by any cross-licensing deals Intel had.
Making a processor too complex for anyone else to clone means that it's too complex for you to make a compatible processor, when the time comes to replace the initial implementation.
(Also, Itanium was notably better at floating point performance than the integer workloads it mostly ended up running. The Fortran programs you mention sound like the sort of programs where compilers might be able to extract parallelism, either into EPIC or SSE models. But I'm very sure other subscribers know more about that than I do!)
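As a concrete picture of the sort of loop both comments have in mind (my own sketch, in C rather than Fortran): with restrict promising that the arrays don't overlap, the compiler has the independence it needs to unroll and software-pipeline the loop for EPIC, or to vectorise it for SSE.

/* daxpy-style kernel: the inner loop of a great deal of scientific
 * Fortran, written here in C99. The restrict qualifiers assert that x
 * and y don't alias, which is exactly what lets a compiler unroll,
 * software-pipeline (the EPIC approach) or vectorise (the SSE approach)
 * the loop without run-time checks. */
void daxpy(int n, double a, const double *restrict x, double *restrict y)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}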
Posted Jul 5, 2014 0:44 UTC (Sat)
by gmaxwell (guest, #30048)
[Link] (1 responses)
Perhaps a nice project to couple with seL4: http://ssrg.nicta.com.au/projects/seL4/
Posted Jul 5, 2014 11:20 UTC (Sat)
by khim (subscriber, #9252)
[Link]
Posted Jul 4, 2014 13:52 UTC (Fri)
by ortalo (guest, #4654)
[Link] (4 responses)
What an interesting (and timely) echo to the recent article on open source hardware (http://lwn.net/Articles/604049/).
Posted Jul 4, 2014 15:25 UTC (Fri)
by roblucid (guest, #48964)
[Link]
Posted Jul 4, 2014 23:34 UTC (Fri)
by ajb44 (guest, #12133)
[Link] (2 responses)
"Note however that it's implemented in bluespec, which AFAICT is still a proprietary language;"
It's Bluespec SystemVerilog (BSV), which their reference manual states to be a synthesizeable subset of SystemVerilog. SystemVerilog is a standard language (IEEE 1800), not proprietary.
Although the practical difference may be minimal, since last I heard you basically need to use proprietary tools to synthesise the standardised HDLs anyway.
Posted Jul 6, 2014 13:28 UTC (Sun)
by alvieboy (guest, #51617)
[Link]
"since last I heard you basically need to use proprietary tools to synthesise the standardised HDLs anyway."
This is true, and probably always will be, because there is a very close relationship between the synthesizer and the FPGA model/family/vendor.
There have been some attempts to get at least some synthesizer working for some of the Xilinx Spartan3 parts, including the bitfile generation, but AFAIK they don't work properly.
However, if you used VHDL/Verilog or any other standardized language, you'd at least be able to simulate it using Icarus Verilog or GHDL. With bluespec (whatever that is, I see many high-level HDL languages these days - is it SystemVerilog?) you cannot do that. Although it seems to export to Verilog (and that could indeed work for simulation), you still need bluespec to re-export any changes.
Alvie
Posted Jul 7, 2014 21:09 UTC (Mon)
by brouhaha (subscriber, #1698)
[Link]
Even with an open source synthesizer (e.g. Icarus Verilog), one needs proprietary back end tools for specific FPGAs. However, while those tools are proprietary, at least some of them are available at no charge with some limitations (e.g., Xilinx WebPack, Altera Quartus II Web Edition).
Posted Jul 5, 2014 18:04 UTC (Sat)
by walex (guest, #69836)
[Link]
Only 30-35 years ago :-).
Posted Jul 8, 2014 6:21 UTC (Tue)
by k3ninho (subscriber, #50375)
[Link]
K3n.
Wikipedia also includes the project's failures part, which explains that the 5-10x slowdown was not an intrinsic limitation, but basically a consequence of immature compiler technology and some bad design choices. It's perfectly possible to make it much faster. It'll probably never be as fast as a system without all these checks, but a 20-30% slowdown is much easier to swallow than a 5-10x slowdown.
Makes the Novena Open Laptop more than a curio.
