
MINIX 3.2.1 released

Posted Feb 23, 2013 10:12 UTC (Sat) by mabshoff (guest, #86444)
In reply to: MINIX 3.2.1 released by hadrons123
Parent article: MINIX 3.2.1 released

> Another pointless software with hurd and BSDs.

Really? At lwn.net things are usually a little more substantiated than this rather thin attempt at trolling. You might want to read this sentence on the comment page:

> ''Please try to be polite, respectful, and informative, and to provide a useful subject line.''

While I would agree that the usefulness of Minix is limited given its 20+ years of development, and that the same applies to the Hurd IMHO, the BSDs, especially FreeBSD, certainly have their uses. None of them have been able to compete with Linux as a general-purpose system, but if you want file storage or routing, FreeBSD is a pretty decent option, for example. There are just fewer development resources for the BSDs, so I plainly and simply do not see them catching up to Linux anytime soon without some major miracle.

And the whole 'microkernels are better' idea seems to have met the cold hard reality outside the ivory tower. Symbian's kernel got taken behind the barn and shot, QNX is struggling to grow, and L4Sec has shipped a billion devices or so, but that is due to Qualcomm's cellular radios only. So I hardly see a microkernel in any user-visible system. [RT] Embedded Linux seems to have certainly curtailed the growth of RTOSes and other embedded OSes.

Cheers,

Michael



MINIX 3.2.1 released

Posted Feb 23, 2013 12:26 UTC (Sat) by hadrons123 (guest, #72126) [Link]

Well, it is possible one might take my post as trolling. But when you look deeper, things are not so bright for this OS, even going by your own post.

But what has this Minix really achieved?

I dusted off my old P4 to see if this really works, but then I got a kernel panic and the screen went blank. I didn't want to try anything beyond that. The same thing happened last year with the previous release.
So it was a practically pointless release, with two CDs wasted.

MINIX 3.2.1 released

Posted Feb 23, 2013 12:57 UTC (Sat) by Zack (guest, #37335) [Link]

>But what has this Minix really achieved?

Well, presumably, AST is managing to scratch his itch with it, and the rest, I imagine, are having fun hacking on it and using it, which is all the justification a free software project ever needs when it comes to achievement.

MINIX 3.2.1 released

Posted Feb 23, 2013 19:12 UTC (Sat) by patrick_g (subscriber, #44470) [Link]

Except AST received a 2.5 million euro grant from the European Union (see this interview). What has been achieved with this money?

MINIX 3.2.1 released

Posted Feb 23, 2013 19:32 UTC (Sat) by Zack (guest, #37335) [Link]

>What has been achieved with this money?

Ostensibly, at least the release 3.2.1 of MINIX.

MINIX 3.2.1 released

Posted Mar 12, 2013 22:08 UTC (Tue) by Baylink (guest, #755) [Link]

Well, I thought we got a lot of great political commentary on the last election out of it. :-)

MINIX 3.2.1 released

Posted Feb 23, 2013 14:15 UTC (Sat) by mabshoff (guest, #86444) [Link]

> Well it is possible one might take my post as trolling. When you look deeper, things are not so bright for this OS as per your post.

I hardly see how anyone could interpret my reply that way. I wrote about FreeBSD specifically, and I thought I made it clear that there is a difference. The Hurd as well as Minix are research systems IMHO. That is usually a nice way of saying that they tend not to go anywhere.

> But what has this Minix really achieved?

Not much, TBH. You have, for example, actual implementations of network drivers that can be restarted upon crash with their state restored, but I somehow have my doubts that this is really useful in the real world. It is neat, but unless you show that working in a storage driver I am less than impressed.

AST is 68, so I would be surprised if he keeps working on Minix much longer. He has gotten EU funding for resilient-OS research, but it does not look like it will really help the commercialization of Minix, since I see no reason to pick it over Linux or a BSD even if that project bears fruit beyond nice demos: its bad points (hardware support, performance, available talent) outweigh its potential plus points. The same applies to its ARM port, which has been talked about for years; most people around here will know that ''it boots'' is quite different from ''it is stable and works well for real workloads''. And it does not even boot on any ARM platform yet, much less on a wide variety of ARM platforms.

> I dusted off my old P4 to see if this really works, but then I got a kernel panic and the screen went blank. Didn't want to try anything beyond that. The same thing happened last year with the previous release.

Yeah, I am not surprised. Today's main problem for toy and research OSes is hardware support, so I see a potential tendency for them to run on hypervisors instead, just like on the IBM 390 decades ago. But once you do that you kind of miss the point of running an alternative OS in the first place; e.g. if you do RT, most of the potential goes away if you run on top of, say, Xen.

> So practically pointless release with two CDs wasted.

Maybe for you and most other people, but I am always surprised to meet people hacking on ''pointless'' code. It is really not my place to come to that conclusion, since it is not my time being spent.

In the end I agree with you that Minix is pointless, but the brevity of your comment and the lumping in of the BSDs just made it look trollish. Had you written the second comment as your first I would have pretty much agreed with you.

Cheers,

Michael

MINIX 3.2.1 released

Posted Feb 23, 2013 15:47 UTC (Sat) by ibukanov (subscriber, #3942) [Link]

> Today's main problem for toy and research OSes is hardware support, so I see a potential tendency for them to run well on hypervisors just like on the IBM 390 decades ago. But once you do that you kind of miss the point of running an alternative OS in the first place,

With IOMMU virtualization like Intel's VT-d, one can run a toy OS against a single piece of real hardware, like a network card, while the rest is provided by a hypervisor. That can bear very useful results, like isolating increasingly complex network drivers and protocols behind a hardened special-purpose OS. This reduces the attack surface against other software running under the hypervisor.

MINIX 3.2.1 released

Posted Feb 23, 2013 22:56 UTC (Sat) by mabshoff (guest, #86444) [Link]

> With IOMMU virtualization like Intel's VT-d one can run a toy OS against a single piece of real hardware like a network card while the rest is provided by a hypervisor.

Absolutely, but given that the Hurd does not currently have any USB support (at least it did not toward the end of 2012, even though I think a USB DDEKit is also being worked on by the Minix folks), IOMMU support for something like the Hurd or Minix seems unlikely.

> That can bear very useful results like isolation of increasingly complex network drivers and protocols behind a hardened special-purpose OS. This reduces the attack surface against other software running in the hypervisor.

Yeah, I still think that if they had taken some of the ideas/goals from the Hurd and tried to implement them on top of the Linux kernel they would have gotten much further along, but then they would have had to compromise. These days there are plenty of userspace driver infrastructure bits in the Linux kernel. I cannot imagine that the theoretical advantage of the Hurd's microkernel design will ever pay off, because most of the interesting bits can likely be done with the Linux kernel, and no one should care about the boring driver bits, only the cool stuff.

Cheers,

Michael

MINIX 3.2.1 released

Posted Feb 23, 2013 23:17 UTC (Sat) by ibukanov (subscriber, #3942) [Link]

> IOMMU support for something like Hurd or Minix seems unlikely.

I meant running Minix or another toy/research OS under a hypervisor like Xen or KVM that supports the IOMMU, so Minix could manage a piece of the real hardware like a network card. Such an OS can implement complex network protocols or WiFi drivers, isolating the rest of the system from bugs there.

I hope such setups become more widespread, once again allowing small teams or even a single person to try new OS ideas against the latest hardware.

MINIX 3.2.1 released

Posted Feb 23, 2013 23:41 UTC (Sat) by mabshoff (guest, #86444) [Link]

> I meant running Minix or another toy/research OS under a hypervisor like Xen or KVM that supports the IOMMU so Minix could manage a piece of the real hardware like a network card. Such an OS can implement complex network protocols or WiFi drivers isolating the rest of the system from bugs there.

Ok, got your point. That certainly makes sense. If you think about VFIO coming from the Cisco folks, for example, it does not take much imagination to see why they were motivated to do that work: instead of porting their various routing OSes to various hardware platforms, just take Linux with KVM and hand control of the networking hardware to the routing OS. That sidesteps the whole GPL issue and isolates the routing OS from the boring hardware bits.

> I hope such setups would be more widespread allowing once again small teams or even a single person to try new OS ideas against latest hardware.

I think it is already happening. I would be hard pressed to name an OS that does not run on VMware; Haiku, the Hurd, and Minix all run on top of it, for example. Even OS/2 Warp and later is a supported configuration, though I might be thinking of some earlier OS/2 releases which IIRC did some strange things in ring 2; I am too tired to research it at this time.

I am not sure about the quality of those OSes running on top of, say, VMware, since I recall strange stability issues with FreeBSD 8.3 on some ESXi targets for example, but that is a different problem. Jump five years ahead and I cannot imagine anything but the various hypervisors being a mandatory target platform for any research OS out there. IIRC last year's linux.conf.au had a session about using Linux as the L4sec boot loader for some ARM target. That just sounds like an insane thing to do, unless you think about what it would take to write all those drivers for L4sec, I assume :p.

Cheers,

Michael

MINIX 3.2.1 released

Posted Feb 23, 2013 17:45 UTC (Sat) by andreasb (subscriber, #80258) [Link]

> The Hurd as well as Minix are research systems IMHO. That is usually a nice way of saying that they tend not to go anywhere.

Minix used to be a simple OS for teaching OS concepts (not research, AFAIK). It tries to be usable as a general-purpose (embedded) OS now, so that's not really an excuse anymore.

The Hurd is GNU's replacement for Unix, not a research project.

Research OSs may not go anywhere in most cases, however that does not make OSs that are not going anywhere research OSs.

MINIX 3.2.1 released

Posted Feb 23, 2013 23:00 UTC (Sat) by mabshoff (guest, #86444) [Link]

> Minix used to be a simple OS for teaching OS concepts (not researching, AFAIK).

True IMHO for Minix before the 3.0 release.

> It tries to be usable as a general purpose (embedded) OS now, so that's not really an excuse anymore.

Yeah, but I would argue that the resilience work done via the EU grant mentioned in a comment above puts it into the research-OS space. It certainly tries to be embedded, but I think AST is kidding himself if he believes that he can outcompete the BSDs, much less Linux or commercial options like QNX, if one desires a pure RTOS.

Cheers,

Michael

MINIX 3.2.1 released

Posted Feb 23, 2013 23:10 UTC (Sat) by mabshoff (guest, #86444) [Link]

Oops, forgot about this one:

> The Hurd is GNU's replacement for Unix, not a research project.

Well, it certainly started out as an intended Unix replacement to complete the GNU ecosystem, since a GPLed kernel was the last missing piece. But looking at its history and its detours with the attempted replacement of Mach with L4 and Coyotos, I think they definitely strayed into the research space. It was probably never intended that way, but things tend to change a bit over 20 years :).

> Research OSs may not go anywhere in most cases, however that does not make OSs that are not going anywhere research OSs.

Yep. I still think it applies to both the current Hurd and Minix 3.0 to some extent, but the discussion about what is and is not a research OS is about as divisive as talking about hybrid kernels (see [1]); the NT and OS X kernels are prime examples where nebulous claims just cloud up the whole discussion.

Cheers,

Michael

[1] http://en.wikipedia.org/wiki/Hybrid_kernel

MINIX 3.2.1 released

Posted Feb 24, 2013 16:33 UTC (Sun) by keesj (guest, #55221) [Link]

Hi,

I started working for the MINIX 3 team about a year ago. I have mostly been busy with the ARM port. Having done a lot of Linux work in the past, I think I might be able to answer some questions.

The design of MINIX 3 allows drivers to crash just like other userland programs under Linux. If you do nothing special, at best your driver will be restarted. To make transparent restartability a fact you need to do some additional work, like structuring your drivers to split the program state from the driver logic itself. This splitting has been done for at least network *and* block drivers. There is more work going on to allow hot replacement of components (the Linux analog would be something like ksplice, but better).

Next week, at Embedded World, we will be giving a restartability demo (running on ARM) of the crash and recovery of a graphics driver. This is a quite unique feature, unseen in the Linux world. I think there is a market for MINIX 3 on ARM. The system is small and simple enough for people to tweak and modify to their own needs.

Hope this helps.

As a last tip: if you have problems running MINIX 3, try interacting with the community. The only reason I still have CDs is probably because of MINIX :p

MINIX 3.2.1 released

Posted Feb 24, 2013 17:22 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> Next week, at embedded world, we will be giving a restartability demo (running on ARM) of the crash and recovery of a graphics driver. This is quite unique and unseen feature in the Linux world.
Uhm...

Linux GPU drivers have had support for hang detection and reset for _years_.

MINIX 3.2.1 released

Posted Feb 24, 2013 21:02 UTC (Sun) by drag (subscriber, #31333) [Link]

hehe.

It seems that the biggest problem with all of this is that when the driver crashes it puts the hardware into a bad state where recovery is just not going to happen.

That is why I suppose people don't notice that hang detection and reset exists.

MINIX 3.2.1 released

Posted Feb 25, 2013 10:09 UTC (Mon) by adobriyan (guest, #30858) [Link]

> Linux GPU drivers have had support for hang detection and reset for _years_.

An oops inside a kernel driver was always an either-or event.
If you're lucky, the kernel continues to run and no restart is needed!

MINIX 3.2.1 released

Posted Feb 25, 2013 13:23 UTC (Mon) by ibukanov (subscriber, #3942) [Link]

> MINIX 3 allows drivers to crash just like other userland programs under Linux

Does MINIX support IOMMU when available to really prevent the driver from affecting the rest of the system?

MINIX 3.2.1 released

Posted Mar 1, 2013 13:45 UTC (Fri) by renox (subscriber, #23785) [Link]

> Does MINIX support IOMMU when available to really prevent the driver from affecting the rest of the system?

I doubt it; a Google search turned up this "Not assigned yet":
http://wiki.minix3.org/en/StudentProjects/DependabilityIn...

But Genode/NOVA seems to have it:
http://www.osnews.com/story/26819/Genode_13_02_supports_I...

MINIX 3.2.1 released

Posted Mar 2, 2013 11:42 UTC (Sat) by keesj (guest, #55221) [Link]

>> MINIX 3 allows drivers to crash just like other userland programs under Linux
>Does MINIX support IOMMU when available to really prevent the driver from affecting the rest of the system?

No, this is not supported, but it would certainly fit the design goals of MINIX.

Microkernels are better

Posted Feb 25, 2013 15:25 UTC (Mon) by Wol (guest, #4433) [Link]

Actually, I *don't* think they've met "the reality outside the ivory tower". I think they've met "the reality that is Intel".

Linux is written to be portable across multiple processors, but it started on Intel. It assumes it has just two rings available, a privileged ring for the kernel and an unprivileged ring for user space. And ON INTEL ring-switching is an expensive operation.

I worked on Pr1me 50-series, and Pr1mos was multics-based. The hardware was segmented-memory, and ring-switching was FAST FAST FAST. (Okay, in those days 1MHz hardware was fast! :-)

But put a microkernel on modern 50-series-style hardware, with the kernel in ring 0, the drivers in ring 1, and user-space in ring 3, and you'd probably have a system that could give a monolithic kernel a run for its money for speed, and blow it away for security.

(Pr1mos never got ported to Intel, shame, but I think the 386 (as it was then) was a very poor hardware match and they just couldn't get it to work.)

Cheers,
Wol

Microkernels are better

Posted Feb 25, 2013 20:08 UTC (Mon) by khim (subscriber, #9252) [Link]

It assumes it has just two rings available, a privileged ring for the kernel and an unprivileged ring for user space.

And this is related to Intel architecture... how exactly? Intel has four rings.

Okay, in those days 1MHz hardware was fast

Yup - and that's why ring-switching was FAST FAST FAST.

But put a microkernel on modern 50-series-style hardware, with the kernel in ring 0, the drivers in ring 1, and user-space in ring 3, and you'd probably have a system that could give a monolithic kernel a run for its money for speed, and blow it away for security.

For that you need to first make such hardware. Think about it: a quarter-century ago 1MHz (OK, more like 3-4MHz) was "fast" but memory latency was about 150ns; today 1GHz (OK, more like 3-4GHz) is "fast" but memory latency is about 15ns. How come the memory speedup was just 10x but the CPU speedup was 1000x? The answer is well-known, of course: a bunch of caches and deep, deep pipelining. This approach is pretty hard to combine with FAST FAST FAST ring switching: either you keep all the data for the different rings "at the ready" all the time (which increases the latency of, e.g., the L1 cache from 3 ticks to 4 ticks) and slow down all the other things by 30-50% too, or you have a slow ring switch but a fast CPU.

I doubt a 30-50% slower CPU with FAST FAST FAST ring switching can beat a faster CPU no matter the OS: in the end most of the time is spent doing "real work", not CPU ring switching, even with a microkernel.

Microkernels are better

Posted Feb 25, 2013 20:31 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Actually, hardware with fast process switching exists. ARM actually has a version of it, but it's not really used.

Microkernels are better

Posted Feb 25, 2013 21:15 UTC (Mon) by khim (subscriber, #9252) [Link]

I'm not at all sure the Cortex-A15 has fast switching. Previous cores were a joke speed-wise (and even the Cortex-A15 is not all that fast: its fair competitor is the Atom, not the Core i7). We'll see if it retains this advantage in the future when ARM becomes comparable in speed to something from AMD, Intel, or IBM.

Microkernels are better

Posted Feb 25, 2013 22:50 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

I think it has, but it's not used that much because it's not compatible with the Linux process model. The L4 kernel can use it, though.

It will get more and more complicated as the speeds go up, but it might be actually still feasible.

Microkernels are better

Posted Feb 25, 2013 23:56 UTC (Mon) by Wol (guest, #4433) [Link]

Yup, you need to make that fast hardware BUT.

Firstly, you're going on about deep pipelines. Which causes processor stalls. Which was, I believe, a major reasoning for abandoning the Pentium 4 architecture - it was so prone to massive stalls it wasn't true.

And secondly, while I can't remember / don't know an awful lot about the 50-series architecture, I don't understand why ring-switching should be slow. It had something to do with the memory segmentation, but the point was that the segmentation gave you fast AND SAFE switching.

The Intel architecture won. Intel architecture cannot do a fast ring-switch. Doesn't mean that other architectures can't, doesn't mean that Intel architecture is the best. It just happened to be the one that gained the market share needed for network effects to knock out the competition.

If Pr1me hadn't lost out in the market, and had continued development of their CPUs, I'm sure they could have taken advantage of all the same things as Intel, and we would expect fast ring-switching as a matter of course. IIRC, the difference in speed between a same-ring and a ring-switch call (for the second invocation; the first was, I believe, somewhat slower) was pretty near nothing.

The Pentium 4 was the last gasp of the MegaHurtz wars - it wasn't a good architecture - it was a good marketecture which blew up badly in the real world.

Cheers,
Wol

Microkernels are better

Posted Feb 26, 2013 9:18 UTC (Tue) by renox (subscriber, #23785) [Link]

What I find quite amusing in your post is that Intel had (a limited form of) segmentation(*) whereas most other architectures (PPC, MIPS, ARM) don't have it.

*: in x86-32 mode; it lost it in x86-64 mode

Microkernels are better

Posted Feb 26, 2013 9:56 UTC (Tue) by khim (subscriber, #9252) [Link]

*: in x86-32 mode; it lost it in x86-64 mode

It lost it before that. If you try to use a segment with a non-zero base address on the Atom you'll see a 4-5x slowdown. Other, larger cores are not affected as much, but still some operations become slower. AMD and Intel keep segmentation supported for compatibility's sake, but in reality it was already pushed to the slow path before x86-64 was introduced. And the situation is the same with fancy high-level instructions like BOUND. It's all uneven, of course: BOUND is fast on most AMD CPUs (maybe on all, but I've not measured the latest AMD creations) but it's fantastically slow on Intel CPUs.

That's what I'm talking about: fat is squeezed out of the fast path to raise CPU frequency.

Microkernels are better

Posted Feb 26, 2013 10:38 UTC (Tue) by Wol (guest, #4433) [Link]

Didn't Intel use that to get round a small address bus? And wasn't it absolutely useless for security?

IIRC, assuming 4K segments, addressing 4K+1 in segment 1 would get you the first byte of segment 2. On Primes, a different segment meant different memory.

Cheers,
Wol

Microkernels are better

Posted Feb 26, 2013 10:46 UTC (Tue) by khim (subscriber, #9252) [Link]

Didn't Intel use that to get round a small address bus? And wasn't it absolutely useless for security?

They were designed to be used for security, and, e.g., OS/2 1.x used them for security. Sadly they broke backward compatibility along the way: you could use segments to extend memory or for security on the 80286, but not both simultaneously. The 80386 solved that problem, but it introduced a UNIX-like paging model and everyone forgot about segments.

iirc, assuming a 4K segment, addressing 4K+1 in segment 1 would get you the first byte of segment 2. On Primes, different segment meant different memory.

You can do that on the 80386, too: it really depends on how your GDT/IDT/LDT are organized. You can even change the sizes of segments on the fly. Actually this architecture was pretty sophisticated and flexible, but it was pushed to the slow path (and eventually eradicated in x86-64) when AMD and Intel found out that nobody used it.

Microkernels are better

Posted Feb 26, 2013 10:51 UTC (Tue) by mpr22 (subscriber, #60784) [Link]

  • real mode: 16-bit segment register; left-shift segment by four bits and add the offset to get a 20-bit physical address.
  • 16-bit protected mode (80286 and later): 16-bit segment register; when loading a segment register, use the top 14 bits to do a table lookup to get the base address of the segment; add the offset to the base address to get a 24-bit physical address.
  • 32-bit protected mode (80386 and later): 16-bit segment register; when loading a segment register use the top 14 bits to do a table lookup to get the virtual base address and size of the segment; add the offset to the base address to get a 32-bit virtual address, which is then resolved to a physical address either on a 1:1 basis (paging disabled) or via the paging unit.

Microkernels are better

Posted Feb 26, 2013 9:48 UTC (Tue) by khim (subscriber, #9252) [Link]

Firstly, you're going on about deep pipelines. Which causes processor stalls. Which was, I believe, a major reasoning for abandoning the Pentium 4 architecture - it was so prone to massive stalls it wasn't true.

Well, it had 31 stages and was able to execute up to three μops per cycle. Which meant you needed to have almost a hundred μops in flight at all times. That was unfeasible. Today's fastest CPUs have 16 stages and can execute up to four μops per cycle. That's half the depth, but it is still pretty hard to keep all those pipes filled, yes. What's your point? That you can reduce the depth of the pipeline and this will solve most problems? Yes, but speed will suffer: you'll have larger stages in the pipeline and they will naturally be slower.

And secondly, while I can't remember / don't know an awful lot about 50-series architecture, I don't understand why ring-switching should be slow. It's something to do with the memory segmentation, but the point was the segmentation gave you fast AND SAFE switching.

No matter how exactly the switching is done, it changes context. Either you need more context to keep all rings "in the loop" (which means larger pieces of CPU core, which means a slower frequency, which means a slower CPU overall) or you need to load and unload said context (which means the ring switch is slow).

The Intel architecture won. Intel architecture cannot do a fast ring-switch.

Yes, but why do you think it's a coincidence? It's not. The fact that Intel won the war may be an accident, but the fact that the architecture which won can't do a fast ring-switch is not a coincidence. The very same tricks which bring you more raw speed for the same price (and that is how the Intel architecture won) make it harder to have a fast ring switch.

Doesn't mean that other architectures can't, doesn't mean that Intel architecture is the best. It just happened to be the one that gained the market share needed for network effects to knock out the competition.

Yes and no. The Intel architecture won because it was faster. And it was faster because it used tricks to make the CPU core pieces smaller (that's the only way to keep the CPU frequency high enough), and to have smaller CPU core pieces you need less stuff in them.

If Pr1me hadn't lost out in the market, and had continued development of their cpus, I'm sure they could have taken advantage of all the same things as Intel, and we would expect fast ring-switching as a matter of course.

Nope. To make a fast CPU you need to make its synchronously-executing pieces small. And that means you need to push "useless fat" out of them. You make a fast path which only executes the most important pieces and a slow path which does everything else. Either you keep the machinery needed for optional, rare things like ring switches in the fast path, or you keep it on the slow path. In the first case you have a slow CPU (basically a CPU with a 2-3-4x lower frequency than streamlined AMD, IBM, or Intel CPUs); in the second case you have a slow ring switch.

P.S. The PowerPC 601 had a 32 KiB cache back in 1992. The latest and greatest Intel CPUs still have a 32 KiB L1 cache. Think about it, and about the implications for fancy techniques (like GC support or fast ring-switching or… whatever you can stuff into the CPU core to simplify life for OS and application writers). Twenty years ago "fancy techniques" meant "bigger price", and thus people used them where price was not the most important aspect. But fifteen or ten years ago (and most definitely today) the trade-offs changed and "fancy techniques" started to mean "slower CPU". And people chose "faster CPU" over fancy techniques. The fact that all these interesting architectures died off at that time and were replaced by dull AMD, IBM, and Intel (and for some time SGI and Sun) creations is not a coincidence.

Microkernels are better

Posted Feb 27, 2013 14:11 UTC (Wed) by gmatht (subscriber, #58961) [Link]

> No matter how exactly switching is done it changes context. Either you need more context to keep all rings "in the loop" (which means larger pieces of CPU core which means slower frequency which means slower CPU overall) or you need to load and unload said context (which means ring switch is slow).

How much context do we need per ring? According to Wikipedia, ring switches can be relatively fast, presumably because they don't need to reload the page table.

Microkernels are better

Posted Feb 27, 2013 14:36 UTC (Wed) by khim (subscriber, #9252) [Link]

How much context do we need per ring?

Enough to distinguish an access from ring 0 from an access from ring 3, heh. Either you add tags to all the commands and all the data in the pipelines, or you flush the pipeline after each ring switch.

Basically the question is: if "mov [some_address], register" should succeed in ring 0 and fail in ring 3, then how do you detect this? Either you keep this metainformation next to the information itself (that is, when you assign registers you now have 2-3-4x more physical registers and thus more complex logic to assign them) or you need to flush the pipeline after a ring switch. The first approach means larger core pieces (and thus a slower CPU frequency); the second approach means a slow ring switch.

According to Wikipedia, ring switches can be relatively fast, presumably because they don't need to reload the page table.

The key word here is "relatively". If you flush the pipeline there is a 15-20 tick stall, and in that time the CPU could have executed about 30-40 simple instructions.

Microkernels are better

Posted Feb 27, 2013 18:09 UTC (Wed) by ARealLWN (guest, #88901) [Link]

Although a number of the points you make are accurate, let's at least do our best not to rewrite history. Intel won the war because theirs was the processor architecture used in the IBM PC. If you want to talk about what was faster, the DEC Alpha was faster than the Pentium. If you want to talk about what was more affordable, Be made a computer with a pair of processors that ran faster than a 386 for less money. You could build a computer with a cheap RISC CPU and a DSP that would have much better MIPS per dollar/pound/franc than something with an Intel processor. Intel won because they offered decent price/performance which was still reasonably competitive with workstation offerings, by having a standardized way to add components to the CPU or motherboard chipset, thereby allowing competition to thrive in a commodity market. I thought everyone knew this.

Microkernels are better

Posted Feb 27, 2013 18:57 UTC (Wed) by hummassa (subscriber, #307) [Link]

I think you just revealed your age... ;-)
(and I'm probably half a dozen years older)

Microkernels are better

Posted Feb 27, 2013 20:34 UTC (Wed) by khim (subscriber, #9252) [Link]

Intel won the war because theirs was the processor architecture used in the IBM PC.

Nope. Intel got the money for the war because it built the architecture used in the IBM PC, that's true. But it won the war because it was faster. Do you think the developers of the monsters in the top500 list care about IBM PC compatibility? Nope: they care about performance. And this list has been dominated by x86 CPUs for years.

If you want to talk about what was faster, the DEC Alpha was faster then the Pentium.

For tasks with floating point, maybe at first, but for tasks which only use integers it was actually slower. And when you compare the Alpha 21364 with the Pentium 4 HT 3.06… it was no longer faster even for floating point.

You could build a computer with a cheap RISC CPU and a DSP that had much better MIPS per dollar/pound/franc than anything with an Intel processor.

Then why are people not doing it? Take a look at the list once more: 75% Intel x86-64, 12% AMD x86-64, 12% IBM POWER, and 1% SPARC. Where are these RISC CPUs and DSPs? Why are there so few of them on the list?

Microkernels are better

Posted Feb 27, 2013 21:51 UTC (Wed) by dlang (subscriber, #313) [Link]

The fact that x86 was used on the most common platform meant that there was more money for speeding up the x86 chips, which made them more popular, which provided more money for speeding them up…

This is why small companies like Transmeta folded: they were compatible, but they didn't have the R&D budgets and manufacturing capability to compete with Intel. AMD is barely hanging on, and if Intel hadn't made the Itanium blunder (leaving the gap open for the AMD64 chips), I doubt AMD would have survived.

Network effects matter: when everyone is running binary software, being binary compatible matters. Once the IBM PC became the standard, any chips that weren't PC compatible became marginal, and the popularity -> money -> R&D -> speed -> popularity cycle started.

With mobile devices NOT being x86 compatible, we are seeing a resurgence of competition at the architecture level for consumer devices (enabled by Linux's cross-platform support). Microsoft and Intel have tried for years to ignore and block this, but now they are having to really recognize the competition.

Microkernels are better

Posted Feb 27, 2013 22:07 UTC (Wed) by khim (subscriber, #9252) [Link]

Once the IBM PC became the standard, any chips that weren't PC compatible became marginal, and the popularity -> money -> R&D -> speed -> popularity cycle started.

Sure, but even if you have enough money you are still constrained by the laws of physics.

With mobile devices NOT being x86 compatible, we are seeing a resurgence in competition at the architecture level again for consumer devices

Sure, but will fast ring switching survive this push? I very much doubt it. Note that POWER (which is actually slightly faster than x86, although more expensive) is also not all that fast at context switches, AFAICS.

Microkernels are better

Posted Feb 27, 2013 22:41 UTC (Wed) by dlang (subscriber, #313) [Link]

I am not trying to say that context switches will be fast; I was merely responding to the logic of why the x86 architecture won. It isn't because it's the best, it's because it has had the most R&D effort pumped into it to work around its problems.

This includes, to a large extent, being produced on the most advanced fab processes. If you took the competing designs and produced them at the same feature size that Intel uses for its x86 chips, they would be much smaller, cheaper, faster, and use significantly less power than they currently do. The fact that, with all these handicaps, they are competitive with Intel chips in many uses is a good indication of how bad the x86 architecture is.

Microkernels are better

Posted Mar 1, 2013 1:22 UTC (Fri) by ARealLWN (guest, #88901) [Link]

I would like to argue that the Itanium chip wasn't really a blunder on Intel's part. Its technical merits can certainly be called into question, but it killed off the DEC Alpha, SGI's interest in MIPS technology, and HP's PA-RISC architecture with marketing alone, because everyone bought into the idea of EPIC being the future of high-performance computing. Intel eliminated a large class of potential threats to its interests in the server and workstation market before shipping any silicon. That hardly seems like a blunder to me. The easiest way to make sure you win a race is to make sure anyone faster than you doesn't show up. Intel simply diverted attention away from other competition to make certain that the players who might pose a more immediate risk were out of the equation first. IMHO.

Microkernels are better

Posted Feb 28, 2013 15:20 UTC (Thu) by deater (subscriber, #11746) [Link]

> Then why people are not doing it? Take a look on the list once more:

I did. Notice that in the November 2012 list Intel doesn't make the top 5 at all. Yet POWER and SPARC do, both considered RISC chips by most people, I think (although POWER is debatable).

x86 got to the top just because of economies of scale and because it is good enough and relatively cheap. Being able to buy things off the shelf does help. Having spent some time in an HPC group, I can tell you that x86 is used because it's there, not because it has any real benefits. How long has it taken them to get a fused multiply-add instruction?

Microkernels are better

Posted Mar 1, 2013 2:23 UTC (Fri) by ARealLWN (guest, #88901) [Link]

I believe that POWER (or PowerPC) claims to be a performance-optimized RISC architecture (my source would be O'Reilly's High Performance Computing, second edition). As I understand it, that means they say they are RISC-based but will include additional instructions if it seems like those could improve the performance of software written for the architecture. I do appreciate that you have given backing to my initial statements and would like to thank you for doing so.

Microkernels are better

Posted Mar 1, 2013 1:57 UTC (Fri) by ARealLWN (guest, #88901) [Link]

I was going to type a rebuttal stating how you are wrong and don't know what you're talking about. After carefully reading your reply, I must say that I don't think I expressed my statements clearly the first time, and I believe you are probably correct about Intel being faster. I was trying to state that Intel was not faster, or faster per money invested, in the past, not currently. Currently, if you want a fast general-purpose processor, Intel isn't a bad choice.

If you want pure processing you can get a GPU, but those don't work well as general-purpose processors and only suit certain workloads, much like a DSP in the past. I'm not sure if anyone ever made a processing system combining RISC and DSP architectures, but in the past someone developed a system based on a bunch of TI DSPs with 2 MB of RAM on 72-pin SIMM modules (if memory serves) that had the best performance per dollar for its time. To break DES, the EFF built a machine with custom chips which certainly weren't Intel compatible but had much better performance. Building a computer with good performance depends as much on what applications you are running as on which CPU you choose and which peripherals you put inside it.

As for your comparison of the Pentium 4 with the Alpha, I won't comment except to mention that Alpha was dead by then as far as DEC was concerned, and had been for a while. The engineers had moved to AMD or other companies, and the Athlon was competing very favorably with that processor in SPEC benchmarks without needing a 6 GHz ALU.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds