
Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Android Police takes a look at a new OS from Google. "Enter “Fuchsia.” Google’s own description for it on the project’s GitHub page is simply, “Pink + Purple == Fuchsia (a new Operating System)”. Not very revealing, is it? When you begin to dig deeper into Fuchsia’s documentation, everything starts to make a little more sense. First, there’s the Magenta kernel based on the ‘LittleKernel’ project. Just like with Linux and Android, the Magenta kernel powers the larger Fuchsia operating system. Magenta is being designed as a competitor to commercial embedded OSes, such as FreeRTOS or ThreadX." Fuchsia also uses the Flutter user interface, the Dart programming language, and Escher, "a renderer that supports light diffusion, soft shadows, and other visual effects, with OpenGL or Vulkan under the hood".


Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 15, 2016 19:30 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

Microkernel. Written in C, with toy implementation of the core primitives.

Yawn.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 15, 2016 23:06 UTC (Mon) by csamuel (✭ supporter ✭, #2624) [Link]

But with an MIT license, not GPL, and so I suspect more attractive to Google for that reason. :-(

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 9:23 UTC (Wed) by k3ninho (subscriber, #50375) [Link] (1 responses)

[meta: this feels like the lwn.net equivalent of "No Wireless. Less Space Than a Nomad. Lame." :-) ]

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 9:24 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

LOL. Would be interesting to be proven wrong.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 15, 2016 20:35 UTC (Mon) by mm7323 (subscriber, #87386) [Link] (51 responses)

I was always surprised they didn't buy QNX and port Android to that. QNX had really, really good POSIX compliance and supported gcc, so porting most userspace code was quite easy. Being a true RTOS with a strong history in automotive entertainment and dash systems, it didn't seem far off the mark for a smartphone either. And it had lightweight containers almost a decade ago.

When BlackBerry started producing Android compatibility on top of QNX, I thought Google might take an interest then.

Alas, perhaps they have bigger plans.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 15, 2016 21:17 UTC (Mon) by davidstrauss (guest, #85867) [Link] (1 responses)

> When BlackBerry started producing Android compatibility on top of QNX, I thought Google might take an interest then.

When BlackBerry started doing that, they already owned QNX -- not just a license, but the entire product.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 5:46 UTC (Tue) by mm7323 (subscriber, #87386) [Link]

Yep - Google missed the chance to buy QNX that time around, but as BlackBerry fell on hard times, I wondered if Google might have liked to buy QNX and the Android layer, or maybe all of BlackBerry.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 15, 2016 23:30 UTC (Mon) by drag (guest, #31333) [Link] (38 responses)

You don't want a 'true RTOS' when it comes to things like phones and tablets. You'll just end up getting lousy performance and lousy battery life. RTOS does not mean it's fast. RTOS means that it's deterministic. It means you can predict how long it takes to do things. When you have consumer-facing products the only thing people care about is getting it done as fast as possible.

QNX worked really well in things like gas station pumps because it only had one application that needed to be run at a time. You know what is going on and you only have one application to deal with, so it's relatively easy to keep everything deterministic.

With phones or tablets you have lots of stuff going on, with audio servers and network servers and identity servers and so on and so forth. Plus you are dealing with a JIT-compiled, garbage-collected language used for most of the OS itself, so how are you going to get deterministic performance even if you are using a deterministic OS kernel? What is the point?

Linux running Java was the de facto embedded platform for anything beyond the most resource-strapped devices. Google choosing to reproduce and improve on it for Android was an exceptionally good decision.

> When BlackBerry started producing Android compatibility

BlackBerry had a chance to save itself by porting their messaging platform over to Android. Nobody gave a damn about their OS. Nobody ever gives a crap about OSes. It's their messaging platform that was valuable. They could have marketed 'BlackBerry on Android', slapped on a decent physical keyboard, and people would have bought them in droves. It would be the de facto standard for corporate phones given out to employees.

---------

One approach to the 'Internet of Things' is to take the entire flash, CPU, basic networking, OS, and userland environment and put it into a single integrated component. People don't want to design their devices around 'running Linux' or run a bunch of different chips... They want a simple, generic-as-hell, very robust jelly-bean component they can pick up from Digikey in the thousands, slap on their device, and get some sort of basic network/internet functionality. Something to spend 2-5 dollars on.

The 'IoT chip', if this sort of OS works out, would just be something like an AVR or PIC microcontroller on steroids that has a more OS-like environment on it. Something that can program itself and do basic multitasking.

Having an RTOS environment means they may be able to save money by avoiding a companion microcontroller to do the automation. If they had a device that would control a robot servo or garage door or some sensor for a car... then if they used Linux they would probably still need a second programmable microcontroller to do the 'realtime' function. If they can avoid needing that, it would save a lot of people a lot of money. And if they can use the same 'generic chip' for lots of different things, that will save a lot of time and effort on the part of developers, since they only need to learn one platform.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 15, 2016 23:59 UTC (Mon) by xenu (guest, #95849) [Link] (1 responses)

Windows Phone (7.x) used to be based on Windows CE, which is a RTOS. However, they quickly moved to the NT kernel.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 2:49 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

WinCE is only incidentally an RTOS. It was pretty much an afterthought during the OS design, so it's only soft RT and Microsoft haven't even bothered to upgrade it to a true hard-realtime OS.

It's not too complicated to create a simple RTOS, so pretty much all small OSes are.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 14:00 UTC (Tue) by mm7323 (subscriber, #87386) [Link] (35 responses)

> RTOS means that it's deterministic.

Determinism is a pretty useful property. Not only does it make it easy to keep things like the UI consistently responsive and have animation and media running smoothly, even when the CPU is at full load doing other lower-priority things, but a lot of bugs can lurk in non-deterministic execution. You can often simplify hardware design and reduce buffering if you can guarantee the software will respond within a set time too.

> You'll just end up getting lousy performance and lousy battery life.

I'm not sure why an RTOS causes that - an RTOS doesn't need to add overhead. Perhaps you were thinking of microkernel designs where the message passing overhead can stack up.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 14:40 UTC (Tue) by zdzichu (subscriber, #17118) [Link] (32 responses)

Just how does a guarantee like “the UI will respond in at most 10 seconds” make it responsive?
Again: realtime is about having an upper bound on response time. That bound can be in *minutes* and it's still real time if it exists.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 17:14 UTC (Tue) by mm7323 (subscriber, #87386) [Link] (31 responses)

I think you are being silly. In practice, RTOSes on the types of processor found in smartphones measure their bounded latencies in fractions of a microsecond. See this page for an example http://rtos.com/products/threadx/ where they give sub-microsecond latencies, apparently on a 200MHz processor. It is marketing material, so it needs a large grain of salt, but those are pretty good values.

Sure you could technically have an RTOS with limits in minutes, hours or days, but I don't think there would be much market for such a thing.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 17:51 UTC (Tue) by Wol (subscriber, #4433) [Link] (30 responses)

And I'm sorry but you're not using your brain! In order for it to be an RTOS, it has to be possible to calculate the minimum time required. And then if you're trying to do several things at once (like run a gui) you need to add them all together. And then you need to calculate how much they're going to interfere with each other.

Which promptly falls foul of the indeterministic factor of what the user is going to try and run. If I decide I want to calculate !1024000, it's a pretty safe bet any RTOS is either (a) going to throw its hands in the air and say "all bets are off", or (b) it's going to shunt my calculation off into swap space so it can get on with providing a deterministic response to everything else.

You can NOT have a deterministic online system. It's not going to happen. Blame it on the users.

Cheers,
Wol

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 22:05 UTC (Tue) by zlynx (guest, #2285) [Link] (14 responses)

All RTOS run multiple things at the same time. They're worthless otherwise. They'd be DOS.

The key is priorities. An RTOS _will_ run the highest priority task. It _will_ finish first. If the programmer does not screw this up by waiting on unpredictable external resources.

Many modern programmers have forgotten how to do time-sensitive programming and have their code go off doing stupid things while it should be focused on its one task. Like loading icons from disk or running JIT while rendering desktop compositor frames. But it doesn't have to be that way.

A user focused device _does_ have hard real time requirements. It is responding to its user and the input to response loop should be under 20 ms. User response should be in the top five priorities. The only things higher are keeping the hardware alive.

Modern hardware should be able to do this easily even if it's running at 200 MHz in low-clock power-save mode without any data in the CPU cache. I had a Sharp Zaurus with about those specs back in 2005 and it easily hit 20 ms response.

People seem to think 200 MHz is slow or something. That's roughly _200_million_ operations per second. If your software can't get an interrupt, read the data, compute the necessary response to the user and update the display in 4 million instructions, please turn in your programmer card. Note that's a response to the user, not the final result. If it is taking a lot of time just indicate that things are indeed happening.

Back to RTOS, calculating the worst possible time isn't hard. Use the lowest possible clock speed and assume no cached data and maximum RAM contention from the GPU and other CPUs.

If Google wants to be smart about their new OS they might consider requiring applications to be more like BeOS and implement a dedicated user event thread which would get special handling. It should also get an application error abort if that thread ever touches any non-realtime resource or exceeds its allocated time-slice.
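
As a rough sketch of that idea (invented names, plain pthreads; not any real Fuchsia or BeOS interface), the "abort if the user event thread blows its budget" policy could be as simple as a watchdog thread that kills the process when the UI thread stops updating a heartbeat within its frame budget:

```c
/* Hypothetical sketch: a watchdog that aborts the process if the dedicated
 * user-event thread stops meeting its frame deadline. FRAME_BUDGET_MS,
 * ui_thread and watchdog are invented names for illustration only. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define FRAME_BUDGET_MS 20                  /* the 20 ms input-to-response goal */

static atomic_uint_fast64_t last_heartbeat_ms;

static uint64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

static void *ui_thread(void *arg)
{
    (void)arg;
    for (;;) {
        /* ... poll input, update the scene, submit a frame ... */
        atomic_store(&last_heartbeat_ms, now_ms());
        usleep(5 * 1000);                   /* pretend one frame of work */
    }
    return NULL;
}

static void *watchdog(void *arg)
{
    (void)arg;
    for (;;) {
        usleep(FRAME_BUDGET_MS * 1000);
        if (now_ms() - atomic_load(&last_heartbeat_ms) > FRAME_BUDGET_MS) {
            fprintf(stderr, "UI thread missed its %d ms deadline\n", FRAME_BUDGET_MS);
            abort();                        /* the "application error abort" */
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t ui, dog;
    atomic_store(&last_heartbeat_ms, now_ms());
    pthread_create(&ui, NULL, ui_thread, NULL);
    pthread_create(&dog, NULL, watchdog, NULL);
    pthread_join(ui, NULL);
    return 0;
}
```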

BeOS, again

Posted Aug 16, 2016 23:14 UTC (Tue) by tialaramex (subscriber, #21167) [Link] (5 responses)

Well you mentioned BeOS, so let's get into that

In BeOS what _everybody_ actually did was divide their app into essentially two parts. One half, often made of dozens or even hundreds of threads, is just UI processing. It does very, very simple things, and all the actual work is handed off to the other part. The extraneous threads make it look like you've really done a great job dividing up the app, but actually they offer negligible advantage over having a single thread. BeOS just baked the inheritance (literally, in C++) of the Thread class into the Window class, and this weird choice was spun as a marvellous new way to write apps.

All the actual work is done in the other half, often one remaining thread, and it looks like everybody else's apps on every platform: a big message loop and then blocking operations like disk accesses and network requests; it takes locks, it fires timers, very conventional. It processes one message from the UI layer at a time.
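
For illustration, here is a minimal sketch of that "one worker thread behind a message loop" half, using a plain pthread queue rather than BeOS's actual BLooper/BMessage machinery; every name below is invented:

```c
/* Sketch of the "single worker thread draining a message queue" pattern
 * described above. Plain pthreads, invented names; not a BeOS API. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct msg {
    int what;                               /* message code from the UI layer */
    struct msg *next;
};

static struct msg *head, *tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Called from any UI thread: cheap, never touches disk or network. */
void queue_push(int what)
{
    struct msg *m = malloc(sizeof *m);
    if (!m)
        return;
    m->what = what;
    m->next = NULL;
    pthread_mutex_lock(&lock);
    if (tail) tail->next = m; else head = m;
    tail = m;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* The one worker thread: handles one message at a time and may block freely. */
void *worker_loop(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (!head)
            pthread_cond_wait(&nonempty, &lock);
        struct msg *m = head;
        head = m->next;
        if (!head)
            tail = NULL;
        pthread_mutex_unlock(&lock);

        printf("handling message %d (disk/network/locks allowed here)\n", m->what);
        free(m);
    }
    return NULL;
}
```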

And on the surface this seems to work really well, very responsive. But just below the surface you realise that your app is stalled just the same as the equivalent app on Windows or any other OS, except that the UI is "responsive" in the sense that you can still click on buttons and there's no busy cursor. There's never a busy cursor, the OS doesn't have one. It _is_ in fact busy under the same circumstances of course, the buttons won't DO anything when you click them, and you won't get your result any sooner.

In the past few weeks a lot of people have been experiencing what this feels like inadvertently. Pokemon Go is an app you run on a phone AND of course it's a huge distributed system behind the scenes. The app is responsive, you can grab your map view and spin it around, press things, very responsive, nice. BUT, what happens when you press things needs the huge distributed system to figure stuff out. That (due to a mixture of bugs and system load) was not very responsive. So beneath the thin veneer of "responsive UI" is a slow, clunky system still. That's what the BeOS does. Why bother?

The "worst possible" stuff you mentioned isn't how anybody ended up doing anything in BeOS either by the way. It's all finger in the air estimates. 100ms here, 250ms there, round numbers all over the place. There was no calculation, not even on the back of an envelope.

Travis said it himself, writing operating systems is something he enjoys doing. Sometimes an employer actually pays him to do it, and in this case it's Google. Is it a _good idea_ to keep writing new ones? Not his department.

BeOS, again

Posted Aug 17, 2016 7:59 UTC (Wed) by mm7323 (subscriber, #87386) [Link]

Interestingly, the Slashdot coverage of this notes that Travis Geiselbrecht and Brian Swetland, who both worked on BeOS, will be involved in this project.

> In BeOS what _everybody_ actually did was divide their app into essentially two parts.

Isn't this a bit like how a good Android app is structured? i.e. one or more disposable Activities implementing the UI behaviour and screens, with some Service persisting in the background to do any heavy lifting or long-running work?

> And on the surface this seems to work really well, very responsive. But just below the surface you realise that your app is stalled just the same as the equivalent app on Windows or any other OS, except that the UI is "responsive" in the sense that you can still click on buttons and there's no busy cursor.

In this example, you have a poorly programmed app. The UI could quite happily display a busy cursor while it waits on the lower-priority backend to complete some task - if the app has been programmed to do so. Nothing about the split architecture prevents use of busy cursors or other visual user feedback. The difference with one big event loop is that the UI stops responding, maybe even stops repainting, and is effectively frozen while some other activity completes (e.g. a trivial disk access which may be held up due to some other heavy disk load). This can look glitchy and laggy to the user.

With a split foreground/background type of architecture, using different threads for each, you also have the option to offer the user 'cancel' or 'abort' operations, which can be handled by the foreground UI thread to dispatch a message to the background task to stop it (granted, the background task may need to poll its message queue or similar to pick up that it should stop).
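
A minimal sketch of that cancel path (invented names, C11 atomics rather than any particular toolkit): the foreground UI thread flips a flag, and the background task polls it between units of work:

```c
/* Sketch of a cancellable background task: the UI thread sets a flag,
 * the worker checks it between units of work. Names are invented. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool cancel_requested;

/* UI thread: handler for the 'Cancel' button. */
void on_cancel_clicked(void)
{
    atomic_store(&cancel_requested, true);
}

/* Background task: long job split into units, checking the flag in between. */
void background_job(int units)
{
    for (int i = 0; i < units; i++) {
        if (atomic_load(&cancel_requested)) {
            printf("aborted after %d of %d units\n", i, units);
            return;
        }
        /* ... do one unit of the heavy work ... */
    }
    printf("finished all %d units\n", units);
}
```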

Given today's mobile apps seem to be all about having whizzy UIs with lots of smooth animations, scrolling, transitions and other eye-candy, the BeOS approach you describe may be appropriate.

BeOS, again

Posted Aug 17, 2016 8:38 UTC (Wed) by renox (guest, #23785) [Link] (3 responses)

I think that you're a bit too critical of BeOS here:
1) it booted very fast (much faster than Linux or Windows)
2) I don't really care how it was done under the hood, it really felt responsive (much, much more than Linux or Windows).

IMHO the reason it was responsive is that their first hardware target was a dual-processor computer, so they wrote applications that really used multiple threads, and it felt very responsive even on a single-core CPU, nice!

The only real reason I would say 'why bother reproducing what BeOS did' is the SSD; with an SSD even a normal OS seems responsive.
That said, I wonder how Haiku would feel on an SSD-equipped computer.

BeOS, again

Posted Aug 17, 2016 21:07 UTC (Wed) by bronson (subscriber, #4806) [Link] (2 responses)

I loved the crisp feeling too, but you might care how it was done... It was responsive because the window manager and all drawing code were linked right into each application (instead of being in separate processes, as on all other OSes). BeOS even mapped all graphics card registers and the framebuffer into each application's memory space! Without context switches, yes, it felt lightning fast.

Problem is, now it's easy to program the graphics card to DMA anything to anywhere. If not, the network card or PCI bridge chips were happy to oblige. Basically, zero inter-application security. Or kernel for that matter. And we think X Windows keyboard capture vulns are bad!

This was fixed post-Apple-deal iiuc, but that removed some of Be's unique magic.

BeOS, again

Posted Aug 17, 2016 22:09 UTC (Wed) by pboddie (guest, #50784) [Link] (1 responses)

Which Apple deal? Was there some kind of arrangement between the companies before Apple bought NeXT? Or was it a consequence of Apple not buying Be?

BeOS, again

Posted Aug 17, 2016 22:23 UTC (Wed) by bronson (subscriber, #4806) [Link]

Right, I should have said "failed Apple deal." At the time, Be was doing everything it possibly could to get bought by Apple. Well, everything except lowering the asking price to something a little more reasonable. :/

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 6:57 UTC (Wed) by Felix (guest, #36445) [Link] (5 responses)

> A user focused device _does_ have hard real time requirements.

No - at least in my text books, "hard real time" is needed if the failure to meet a deadline endangers life or causes great physical/economic damage.

Let's try not to redefine words - otherwise there is no sensible discussion if "real time" just means "very fast, consumes only very few resources, basically flawless". In computer science you have to accept trade-offs. A free lunch is a very rare thing.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 10:24 UTC (Wed) by nye (subscriber, #51576) [Link] (4 responses)

>No - at least in my text books, "hard real time" is needed if the failure to meet a deadline endangers life or causes great physical/economic damage.

Hard real time just means that certain nominated jobs (sometimes all jobs, for simpler systems used in predictable ways) have a guaranteed bound to their worst case execution time. Whatever reason you might have for wanting that is orthogonal.

>Let's try not to redefine words - otherwise there is no sensible discussion if "real time" just means "very fast, consumes only very few resources, basically flawless".

Nobody said that.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 12:20 UTC (Wed) by farnz (subscriber, #17727) [Link] (3 responses)

That's not the definition I learnt; there are three grades of "real time" task in the text books I used, distinguished by the effects of missing a wall-clock deadline (note that all three types need real time guarantees from the OS):

Interactive
Interactive tasks are those where the value of the result is inversely proportional to its lateness - animating a progress bar, for example, where the longer it takes per frame, the less useful the progress bar is.
Soft real time
Soft real time tasks are those where the result is worthless after the deadline, but where missing the deadline is a transient failure. An example might be Opus decoding for playback - if I can't complete decoding the sample by the time it's due for playback, then I might as well abort decoding that sample; however, once I start meeting deadlines again, I resume flawless playback.
Hard real time
Hard real time tasks are those where missing the deadline causes a complete failure of the task; e.g. a pacemaker is hard real time, as missing the deadline can render the pacemaker worthless.

In this model, a mobile or desktop OS benefits from being soft real time, but does not need to be hard real time.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 13:58 UTC (Wed) by bfields (subscriber, #19510) [Link] (2 responses)

"In this model, a mobile or desktop OS benefits from being soft real time, but does not need to be hard real time."

As one counterexample, it's increasingly common to use ordinary mobile or desktop OSes to create live music.

So, maybe you have a MIDI controller keyboard plugged into your iPad, and when you press middle C, the keyboard sends a message to your iPad, and a software synth on your iPad does a bunch of data processing to produce a sound.

If that takes too long (I'm not sure exactly--a few tens of ms?), then people notice. Nobody dies, but the iPad has failed at the job it was bought for, and it's a failure that somebody making their living as a performer can't afford with any frequency.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 14:05 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

That doesn't make it hard real time - in both soft and hard real time tasks, the failure to meet the deadline is a failure of the task. The distinction is in terms of recovery if you miss some number of deadlines then start meeting deadlines again; in a soft real time task, you fail every time you miss a deadline, but you recover once you start meeting deadlines again. In a hard real time task, missing a deadline means that the task can never recover without external assistance.

So, the situation you've described is soft real time - you've failed by missing the deadline, but once you start meeting deadlines again, the device is useful for the job it was bought for. If it were a hard real time situation, missing the deadline would mean that until you rebooted the ipad, the software synth did not function at all - no sound came out.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 16:07 UTC (Wed) by zlynx (guest, #2285) [Link]

If the goal is to make application responsiveness a priority then I would _make_ it hard real time by crashing the app when it failed to hit the deadline. During app verification I would force it to run under simulated worst case conditions.

In such a situation the app has completely failed by not hitting its timing requirements and it can't recover to be soft real time because it is dead. And I consider this completely reasonable. Maintaining 60 Hz interactive response should be _trivial_ on modern hardware.

It isn't any more artificial a limitation than in robotics. A robot arm that fails to stop in time does not necessarily mean "hard" real time either, since after bashing its way through the target it could just keep going. As long as it reaches _most_ of its timing targets everything is fine right? No, not really, so if that does happen, safety features force the hardware to shut down, which is the same deal.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 16:15 UTC (Wed) by drag (guest, #31333) [Link] (1 responses)

> All RTOS run multiple things at the same time. They're worthless otherwise. They'd be DOS.

But an RTOS makes running multiple things at the same time slower and less responsive than if you are using something like Linux.

> It is responding to its user and the input to response loop should be under 20 ms.

Right. If you want things to be done ASAP you shouldn't be using an RTOS. ASAP is not RT.

If the UI doesn't redraw every 20 ms, do you want it to simply give a blank image to the user? If that is what you want then using an RTOS makes sense.

If you want a redraw to go as fast as possible and try to keep it under 20 ms, but if it can't then 25 ms is better than nothing... then you want an OS that is optimized for that type of performance, and an RTOS is not that.

> If Google wants to be smart about their new OS they might consider requiring applications to be more like BeOS and implement a dedicated user event thread which would get special handling.

BeOS did it wrong by forcing every application to have a 'rendering thread'.

Apple did it right with OS X, by which I mean they use compositing for the display. Applications render into a buffer and that buffer is used as a texture in a rendered display. That way the UI you are interacting with is not tied to the rendering performance of each and every application you happen to have open with some part visible.

You end up with a responsive display that is very efficient, because applications now only have to render if something in their window changes. It looks nicer, too, because the UI is rendered as one big image rather than a bunch of little sections.
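
For illustration only, a toy version of that idea (no GPU, no real window system; every name here is invented): each window owns an offscreen buffer plus a dirty flag, and the compositor reassembles the screen from those buffers, skipping windows whose contents haven't changed:

```c
/* Toy compositor sketch: apps draw into their own buffers at their own pace;
 * the compositor copies only changed buffers into the framebuffer. Purely
 * illustrative - a real compositor would blend textures on the GPU. */
#include <stdbool.h>
#include <string.h>

#define W 640
#define H 480

typedef unsigned int pixel;

struct window {
    pixel buf[W * H];                       /* the app renders into this */
    bool dirty;                             /* set when the contents change */
};

static pixel framebuffer[W * H];

/* App side: render, then mark the buffer dirty. */
void app_draw(struct window *w, pixel color)
{
    for (int i = 0; i < W * H; i++)
        w->buf[i] = color;
    w->dirty = true;
}

/* Compositor side: runs every frame, independent of how slow any app is. */
void compose(struct window **wins, int n)
{
    for (int i = 0; i < n; i++) {
        if (!wins[i]->dirty)
            continue;                       /* unchanged window: no work at all */
        /* Stand-in for "use the buffer as a texture"; a real compositor would
         * place it at the window's position and blend it. */
        memcpy(framebuffer, wins[i]->buf, sizeof framebuffer);
        wins[i]->dirty = false;
    }
}

int main(void)
{
    static struct window a, b;              /* static: too big for the stack */
    struct window *wins[] = { &a, &b };

    app_draw(&a, 0xff336699);               /* only window "a" changed this frame */
    compose(wins, 2);                       /* "a" is copied, "b" is skipped */
    return 0;
}
```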

Compositing gets a bad rap in Linux-land, however. Not because it is slow or it sucks, but because X is incapable of doing it efficiently. It's not up to the task.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 18, 2016 11:16 UTC (Thu) by HenrikH (subscriber, #31152) [Link]

Hmm, the way Apple does it sounds a lot like how Intuition on the Amiga worked. And isn't this how Wayland does things also?

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 8:26 UTC (Wed) by mm7323 (subscriber, #87386) [Link] (14 responses)

Hi Wol!

>In order for it to be an RTOS, it has to be possible to calculate the minimum time required.

Possibly you mean maximum.

Still, I think you may be forgetting that an RTOS generally is fully preemptive with task priorities. At any point in time the highest-priority task is expected to be running and will run until completion or until it blocks. The RTOS will specify the upper bound on task-switch latency, such that you can be sure the highest-priority ready-to-run task is running within some number of microseconds of a task becoming ready to run. Becoming ready-to-run generally means receiving a message, or a shared lock or semaphore being released/signalled.

With this guarantee, you can be sure that even at maximum load, the highest priority tasks are able to continue running in the same way as they would on an unloaded system.
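
As a rough illustration of that model (Linux-specific, and not a hard real-time guarantee; just the same "highest-priority runnable task always runs" idea), fixed-priority preemptive scheduling can be approximated with SCHED_FIFO threads:

```c
/* Sketch: fixed-priority preemptive threads via Linux's SCHED_FIFO policy.
 * Not an RTOS guarantee - only an illustration of priority-based preemption.
 * Requires root or CAP_SYS_NICE; task bodies are placeholders. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static int start_fifo_thread(pthread_t *t, int priority,
                             void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = priority };
    int err;

    pthread_attr_init(&attr);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    /* Without this, the attribute's policy/priority are ignored and the
     * new thread just inherits the creator's scheduling. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    err = pthread_create(t, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return err;
}

static void *audio_task(void *arg)  { (void)arg; /* time-critical work */ return NULL; }
static void *logger_task(void *arg) { (void)arg; /* background work */    return NULL; }

int main(void)
{
    pthread_t audio, logger;
    int err;

    /* Higher number = higher priority for SCHED_FIFO (1..99 on Linux). */
    if ((err = start_fifo_thread(&audio, 80, audio_task, NULL)) != 0 ||
        (err = start_fifo_thread(&logger, 10, logger_task, NULL)) != 0) {
        fprintf(stderr, "SCHED_FIFO needs CAP_SYS_NICE/root: %s\n", strerror(err));
        return 1;
    }
    pthread_join(audio, NULL);
    pthread_join(logger, NULL);
    return 0;
}
```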

> And then if you're trying to do several things at once (like run a gui) you need to add them all together. And then you need to calculate how much they're going to interfere with each other.

Yes. And an RTOS gives you the guarantees that enable you to make and rely upon such a calculation.

QNX also had a nice feature a bit like containers, where you could group tasks and set CPU, RAM and IO limits. You could also allow any unused CPU resource from a container to be passed to other containers rather than going idle. As a tool, this makes it a bit easier to dimension resources for different sub-systems.

> If I decide I want to calculate !1024000, it's a pretty safe bet any RTOS is either (a) going to throw its hands in the air and say "all bets are off", or (b) it's going to shunt my calculation off into swap space so it can get on with providing a deterministic response to everything else.

It would do neither. It would just continue running things according to the task priorities and limits set. Those limits may also include IO use, and so your swapping will only delay lower priority tasks.

Cheers,

mm

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 13:25 UTC (Wed) by Wol (subscriber, #4433) [Link] (11 responses)

> > If I decide I want to calculate !1024000, it's a pretty safe bet any RTOS is either (a) going to throw its hands in the air and say "all bets are off", or (b) it's going to shunt my calculation off into swap space so it can get on with providing a deterministic response to everything else.

> It would do neither. It would just continue running things according to the task priorities and limits set. Those limits may also include IO use, and so your swapping will only delay lower priority tasks.

Except you miss my point. The UI is no longer responsive. It is not responding to my request to calculate !1024000.

To me, a responsive system is a system that gets back to me, with what I asked it for, quickly. And to me, the difference between a system and UI is irrelevant.

In other words, a real-time, on-line, responsive system is an oxymoron. You can't have it. Sorry, laws of physics and all that ... :-)

Cheers,
Wol

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 14:35 UTC (Wed) by nye (subscriber, #51576) [Link] (3 responses)

>To me, a responsive system is a system that gets back to me, with what I asked it for, quickly. And to me, the difference between a system and UI is irrelevant.

And to me, there's no difference between water and potato, and the "Attack of the Snarblefarx" was the greatest film that Orson Welles ever made, and anyone who doesn't agree that blue tastes hot is clearly deaf.

I demand that you use my definitions of these words, because to me they're correct, and the only way it could be otherwise would require changing the fundamental laws of physics.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 22:57 UTC (Wed) by Wol (subscriber, #4433) [Link] (2 responses)

Except I didn't say the system and the UI are the same thing. I said the difference is irrelevant :-)

What's that saying about X? That the best use for a window manager is to run multiple xterms at once?

Seriously. As a user, what's the point of having a fast UI response, if it's actually doing sweet fa under the covers?

And oh, for those people who said that providing a fast UI should be a high priority, linux (the kernel) doesn't agree with you. Last I knew, the defaults prioritised throughput over responsiveness.

Actually getting the job done is seen as more important than being smooth and slick - we all know the type, the snake-oil salesmen...

Cheers,
Wol

Point of a fast UI response

Posted Aug 20, 2016 4:50 UTC (Sat) by gmatht (guest, #58961) [Link] (1 responses)

> Seriously. As a user, what's the point of having a fast UI response, if it's actually doing sweet fa under the covers?
Off the top of my head:
1) If the computation takes longer than expected having a still functioning Cancel button is great.
2) Immediate feedback that the interaction was accepted, so we don't think "I can't have pressed hard enough" and try again.
3) So that we can enter data while the last request is still processing. E.g. imagine that it takes 4 seconds for me to type "lwn.net" on my tiny keypad and 3 seconds for Firefox to load; if we can do both at the same time, Firefox would seem to load instantly.
4) Don't break users' abstractions. If I push a page up I don't expect the page to stay still and then suddenly shoot up seconds after I have touched it.
5) Immediate feedback is important for some tasks. For example if I want to push the word "Fuchsia" to the top of the screen it will be immediately clear if I have pushed far enough (unless the GUI lags).
6) Users usually find a responsive GUI "nicer". Even when it doesn't make it faster to use, life is to be enjoyed. A buttery interface is a less unhealthy pleasure than a buttery cake!

Also, if a phone needs to do hard-core computation, it is often best to push it out to the cloud where it won't drain the phone's battery.
> And oh, for those people who said that providing a fast UI should be a high priority, Linux (the kernel) doesn't agree with you. Last I knew, the defaults prioritized throughput over responsiveness.
This is a little circular, since we are discussing whether Linux's priorities are appropriate for mobile devices. Linux is mainly tuned for servers where latency can't be less than your ping time anyway.

Even for desktops it is not clear this is a good default. About once a month I find myself power-cycling Linux because I accidentally opened a couple too many tabs, and the reset button was the only thing that would respond within an hour. This kind of jank never happened on my old 6502 machine, which was obviously much faster than me... it had a 1MHz processor! Even if I could just reserve 1% of my CPU and RAM so that top, kill, and perhaps even xterm would be fast even under heavy load that would be great.

Point of a fast UI response

Posted Aug 21, 2016 17:55 UTC (Sun) by flussence (guest, #85566) [Link]

>Even if I could just reserve 1% of my CPU and RAM so that top, kill, and perhaps even xterm would be fast even under heavy load that would be great.
That sounds like exactly the problem the “Automatically cgroup processes by tty session” hack added to the kernel was supposed to solve…

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 18, 2016 14:02 UTC (Thu) by robbe (guest, #16131) [Link] (6 responses)

> The UI is no longer responsive. It is not responding to my request
> to calculate !1024000.

Is this reverse² polish notation for factorial?

You want the UI to do what exactly? Give you an answer within 20ms, and you are not accepting „I’ll get back to you“ as an intermediate response? If I understood you correctly, you don't want an RTOS, you want a magic wand.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 19, 2016 2:05 UTC (Fri) by Wol (subscriber, #4433) [Link] (5 responses)

:-)

Yes I would like a magic wand :-)

But put yourself in the end user's shoes. Does the end user want an "I'll get back to you", or do they want what they asked for?

Yes I know today's "internet generation" expect everything yesterday, but unfortunately it seems to be the case that responsiveness - "a crisp UI" - and throughput (ie actually getting the job done) are mutually incompatible.

And imho "a crisp UI" can actually be a real pain in the neck. If I'm trying to get a job done, what I do *NOT* need is a nice responsive UI shoving stuff I don't want in my face. Case in point: I was trying to take a photo with my phone a couple of days back, and the phone was determined to tell me the battery was dying. I couldn't get at the camera function, because the UI wouldn't let me! I regularly curse this sort of design (mis)feature!

Cheers,
Wol

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 19, 2016 3:55 UTC (Fri) by bronson (subscriber, #4806) [Link] (1 responses)

I can't tell if you're talking about RTOSes or not. Doesn't sound like it (?).

The end user of an RTOS wants (nay, demands) guaranteed latency.

The end user of a phone will be more forgiving and, yes, will probably want higher throughput and quicker answers.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 19, 2016 11:10 UTC (Fri) by Wol (subscriber, #4433) [Link]

There are people who want the desktop to have real-time response. Might float their boat. But for people who actually want to get the job done, as others have said, a real-time desktop is actually a hindrance. Don't force it on everybody, just because you think it's a good idea ... :-)

Cheers,
Wol

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 19, 2016 18:11 UTC (Fri) by nybble41 (subscriber, #55106) [Link] (2 responses)

> But put yourself in the end user's shoes. Does the end user want an "I'll get back to you", or do they want what they asked for?

The user wants the rest of the UI to remain responsive so that they can go browse the web (or whatever) while waiting for the factorial computation to complete.

If you only have one task to complete then it doesn't matter what kind of OS you use; all the available resources (CPU time, memory, etc.) will be dedicated to that one task and it will complete as soon as physically possible. Realistically, most computers above the level of 8-bit microcontrollers are continually multitasking, and real-time scheduling ensures that the system can deal with these competing demands on its resources without creating priority inversions (a higher-priority task unable to meet its requirements due to interference from a lower-priority task).
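
The classic mitigation for that inversion is priority inheritance on the shared lock: while a low-priority task holds the mutex, it temporarily runs at the priority of the highest-priority waiter. A minimal sketch using POSIX's PTHREAD_PRIO_INHERIT protocol (not tied to any particular OS mentioned here):

```c
/* Sketch: a priority-inheritance mutex via POSIX. While a low-priority
 * thread holds shared_lock, it is boosted to the priority of the highest
 * waiter, avoiding the priority inversion described above. */
#define _GNU_SOURCE
#include <pthread.h>

pthread_mutex_t shared_lock;

int init_pi_mutex(void)
{
    pthread_mutexattr_t attr;
    int err;

    if ((err = pthread_mutexattr_init(&attr)) != 0)
        return err;
    /* Ask for priority inheritance on this mutex. */
    if ((err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT)) == 0)
        err = pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}
```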

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 19, 2016 22:08 UTC (Fri) by micka (subscriber, #38720) [Link] (1 responses)

The user also probably wants any task they launch to progress, so you can add "no starvation" to "no priority inversion".

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 20, 2016 3:00 UTC (Sat) by nybble41 (subscriber, #55106) [Link]

True, though one could approach "no starvation" as a special case of "no priority inversion" by granting the thread an elevated priority for some fraction of each cycle, in much the same way that one can limit the CPU time a Linux thread can spend at real-time priority. In practice, for most threads, you would probably want something closer to Linux's fair scheduling than the fixed real-time priorities common in RTOS environments, with exceptions for the most critical and time-sensitive functions.
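
For what it's worth, Linux's version of "an elevated priority for some fraction of each cycle" is the real-time throttle: SCHED_FIFO/RR tasks may consume at most sched_rt_runtime_us out of every sched_rt_period_us. A quick check of the current settings (a sketch, assuming the usual procfs layout):

```c
/* Read Linux's real-time throttling knobs and report what fraction of each
 * period SCHED_FIFO/RR tasks are allowed to use. Illustrative only. */
#include <stdio.h>

static long read_long(const char *path)
{
    long v = -2;                            /* -2 = could not read */
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -2;
        fclose(f);
    }
    return v;
}

int main(void)
{
    long runtime = read_long("/proc/sys/kernel/sched_rt_runtime_us");
    long period  = read_long("/proc/sys/kernel/sched_rt_period_us");

    if (runtime == -2 || period <= 0) {
        puts("could not read the RT throttling settings");
        return 1;
    }
    if (runtime < 0)                        /* -1 means throttling disabled */
        puts("RT throttling disabled: RT tasks may monopolize the CPU");
    else
        printf("RT tasks may use %ld of every %ld us (%.1f%%)\n",
               runtime, period, 100.0 * runtime / period);
    return 0;
}
```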

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 15:44 UTC (Wed) by drag (guest, #31333) [Link] (1 responses)

> Possibly you mean maximum.

Yes, sorry.

> Still, I think you may be forgetting that an RTOS generally is fully preemptive with task priorities.

No I don't.

My point is that an RTOS is antithetical to performance and responsiveness. RTOS does not mean 'things happen faster'.

To have this 'real time performance' you have to pre-empt things continuously. When processes get shuffled around in the scheduler they lose their processor cache and have to go back to main memory and sit and wait around until the OS gets back to them.

The more of these interrupts that you have then the worse the performance gets. The more 'things' you are doing the more interrupts you need and the worse performance gets.

When you combine that with the fact that userland applications in Android are programmed using a VM-using, GC-using language and cannot be predictably scheduled anyway... then you lose any benefit you have from having a real RTOS system.

If you cannot predict how long it takes to do X, then what is the point of taking the performance hit of an RTOS kernel?

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 21:31 UTC (Wed) by mm7323 (subscriber, #87386) [Link]

> RTOS does not mean 'things happen faster'.

Correct. The CPU runs at its clock rate, which isn't a function of OS type.

> To have this 'real time performance' you have to pre-empt things continuously.
> The more of these interrupts that you have then the worse the performance gets. The more 'things' you are doing the more interrupts you need and the worse performance gets.

Nope. You have to pre-empt things when _necessary_ in an RTOS, and only then. You might be thinking of time-slicing schedulers using a tick, which RTOSes generally don't do (though some offer configuration for time-slicing priority levels if needed). More generally, in a fully pre-emptive, priority-based system, processes run until one of two things happens: either a) a higher-priority task becomes ready to run, or b) the current process blocks, allowing the next-highest-priority task to run. A context switch, with its overhead, is needed in both these cases. Case a) occurs either when an interrupt comes in and unblocks a higher-priority task (this could be a peripheral or a hardware timer interrupt triggering a higher-priority task), or when the current task directly unblocks a higher-priority task (e.g. by posting a message to it, or releasing some shared lock/semaphore). Interrupts happen as and when needed, but the unblocking of higher-priority tasks is in the control of the system architect/designer/programmer. Case b) generally means that the current task has completed its useful work and is now idle.

Compared with a 'tick' based time sliced scheduler, an RTOS scheme may be reasonably efficient and not need any tick interrupt at all. With careful design, tasks can easily perform a unit of processing and run to completion without ever being pre-empted, which is most cache efficient. Bad design with an RTOS can have lots of overhead though, particularly if a low priority task continually enables a higher priority task and causes excessive context switching back and forth.

Like all things, there is no silver bullet, but an RTOS gives the programmer control and predictability over when context switches occur, and as such, good and bad designs can be implemented with corresponding performance gains or losses. With time-slicing schedulers, you just have the constant overhead of a regular tick, except in the special cases where there is only one runnable task on a CPU or the CPU is idle, when the scheduling tick can be disabled. With time slicing, the OS picks a scheduler tick rate as a compromise between high (lower latency but more context-switch overhead) and low (better for bulk processing, but more latency). With an RTOS, the design and architecture of the task set determines the context-switch rate.

> When you combine that with the fact that userland applications in Android are programmed using a VM-using, GC-using language and cannot be predictably scheduled anyway... then you lose any benefit you have from having a real RTOS system.

Yeah - this is a good point. You can write native apps for Android, and parts of Android are undoubtedly using native libs for things like media codecs, display drivers and hardware supervision, which may benefit from an RTOS. But actually, almost all SoCs will come with a Linux BSP for this stuff, so it would be a huge cost to move these things to an RTOS. Either Google sees other benefits as well, or this is them trying something else to keep options open.

A bit of serious competition in the OS space can only be good though.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 16:38 UTC (Tue) by excors (subscriber, #95769) [Link] (1 responses)

You're always going to get trouble with contention over things like memory bandwidth and thermal budget. You need a lot of bandwidth to render a fancy 3D UI, and a lot of bandwidth to encode a high-res video, and to run the user's favourite CPU benchmarking app, and to Miracast the display over the wifi network, etc, and they all generate heat too - and the user might want to run any combination of those features simultaneously.

If you wanted hard-real-time guarantees that all of those features would run with perfect performance and never miss a deadline, even when all turned on at once, you'd have to massively overprovision the hardware, making it expensive and power-hungry. Alternatively you could choose hardware for the expected 'reasonable' use cases, design everything to cope sensibly with failures to meet performance targets (e.g. cleanly dropping frames), have some QoS mechanism to keep everything reasonably balanced so one feature doesn't get completely starved, and design the UI to generally keep the user within the set of reasonable use cases (e.g. don't allow multiple arbitrary apps on screen at once). The second approach seems more sensible for non-safety-critical feature-rich devices like phones.

>> RTOS means that it's deterministic.

Depends who you talk to - from what I've seen in the context of mobile SoCs and IoT, nobody ever means hard real-time, they just use "RTOS" to mean any lightweight OS (as in, a few tens of thousands of lines of code that provide threads and mutexes and interrupt handlers and a memory allocator and probably not very much else). That works okay for soft real-time requirements because the OS stays out of the application's way, and because there's only a single application and it's usually only responsible for a fairly small and predictable set of features (since the SoCs usually contain multiple small processors all running their own independent RTOS instances).

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 17:58 UTC (Tue) by Wol (subscriber, #4433) [Link]

>> RTOS means that it's deterministic.

> Depends who you talk to - from what I've seen in the context of mobile SoCs and IoT, nobody ever means hard real-time, they just use "RTOS" to mean any lightweight OS (as in, a few tens of thousands of lines of code that provide threads and mutexes and interrupt handlers and a memory allocator and probably not very much else).

And there we have the Humpty-Dumpty syndrome - people (who should know better) taking a word with a clear and well-defined meaning, and using it to mean something else - like computer salesmen using the word "memory" to mean "disk space" :-( Or the computer professor I had an email contretemps with because he said "on line" was the new "real time", and didn't see any need for true real time systems any more, so didn't see anything wrong with (ab)using the term.

Cheers,
Wol

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 5:55 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

QNX kernel itself is nice. Everything above it... not so much.

Ultimately, there's no free lunch and if you want to implement something that is comparable in scope to Linux then you'll get complexity of about the same level.

Which is also why this effort to develop Fuchsia looks kinda ridiculous.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 9:17 UTC (Tue) by roc (subscriber, #30627) [Link] (7 responses)

Seems to me that if you were going to go the trouble of writing a new OS you'd want to do something significantly different from Linux and other existing OSes, like write it in a better programming language, or use formal verification, or use a radically different API design. Otherwise, why bother?

Hopefully eventually someone will explain what Fuchsia's raison d'être is. I hope it's not just the license.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 9:31 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 10:03 UTC (Tue) by Wol (subscriber, #4433) [Link]

roflol

:-)

Cheers,
Wol

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 11:56 UTC (Tue) by petur (guest, #73362) [Link]

I fondly remember working with ATL and thought it a shame that WTL never took off; it was what MFC should have been and, contrary to the linked article, it exposed you to the internals rather than hiding them. Sadly the whole ATL/WTL thing was a small-team product. </OT>

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 18:51 UTC (Tue) by markhb (guest, #1003) [Link] (1 responses)

Did anyone else see the initialism "MTS" in that article and think of something very different?

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 19:05 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Sure: https://en.wikipedia.org/wiki/MTS_(network_provider)

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 21:38 UTC (Tue) by martin.langhoff (subscriber, #61417) [Link]

Glad Fuchsia will solve DLL hellˆWˆWdriver maintenance at last.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 18, 2016 11:34 UTC (Thu) by HenrikH (subscriber, #31152) [Link]

The article author missed the Visual C Redistributable Hell that Microsoft introduced with Visual Studio .NET in 2003; this time, though, it was not due to NIH, at least I don't think so.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 18:19 UTC (Tue) by jspaleta (subscriber, #50639) [Link]

It was okay POSIX compliance...

But really, QNX wasn't scaling to the modern processors showing up in cellphones and tablets. Sure, it's an RTOS, but it had some very large limitations with regard to running a lot of threads, due to its clock handling. It needed work to be able to do significant multi-tasking on something like the quad-core ARMs you are seeing in cellphones now. Just trying to get QNX to give me reliable sub-10-millisecond sleeps/timers with just a handful of threads running was problematic... it was using Linux-kernel-2.0-like concepts of clock ticks, and you could quickly overwhelm the QNX kernel's ability to service timers with just a few timers trying to reach for 1 ms clock ticks. It needed work.
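
That kind of measurement is easy to reproduce in portable POSIX form (a sketch, nothing QNX-specific): request a few short sleeps and see how far each one overshoots; on a tick-based kernel the overshoot tends to cluster around the tick period:

```c
/* Measure how far nanosleep() overshoots a requested 2 ms sleep.
 * Illustrative sketch of the timer-resolution test described above. */
#include <stdio.h>
#include <time.h>

static long long ns_between(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    const long requested_ns = 2 * 1000000;  /* 2 ms */
    struct timespec req = { 0, requested_ns }, t0, t1;

    for (int i = 0; i < 10; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("asked %ld ns, got %lld ns (overshoot %lld ns)\n",
               requested_ns, ns_between(t0, t1),
               ns_between(t0, t1) - requested_ns);
    }
    return 0;
}
```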

I'm not saying it wasn't possible to fix... but compared to the readily obtainable Linux kernel, Android-based Linux was absolutely the better choice, based on my experience trying to work with QNX in my day job.

-jef

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 15, 2016 20:46 UTC (Mon) by ssmith32 (subscriber, #72404) [Link]

"Full blown desktop kernels, like Linux" - the year of desktop linux has arrived! Woohoo! :P

Yeah, not the best article. But interesting to note...

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 19:57 UTC (Tue) by xtifr (guest, #143) [Link]

Hmm, not necessarily a very smart choice of name, given that "fuchsia" is probably the most misspelled color name ever. So much so that even Google has trouble with it! :)

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 18, 2016 14:06 UTC (Thu) by robbe (guest, #16131) [Link]

How is FreeRTOS „commercial“? Because it’s developed mainly by one company? Like … Magenta?


Copyright © 2016, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds