
Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 16, 2016 22:05 UTC (Tue) by zlynx (guest, #2285)
In reply to: Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police) by Wol
Parent article: Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

All RTOS run multiple things at the same time. They're worthless otherwise. They'd be DOS.

The key is priorities. An RTOS _will_ run the highest priority task. It _will_ finish first, provided the programmer does not screw this up by waiting on unpredictable external resources.
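
As a rough sketch of what I mean by fixed-priority scheduling (POSIX SCHED_FIFO standing in here for whatever API a given RTOS provides; the priority value is illustrative and raising it this far normally needs elevated privileges):

    #include <cstdio>
    #include <cstring>
    #include <pthread.h>
    #include <sched.h>

    // The time-critical work; it must not block on unpredictable external
    // resources (disk, network) or the priority guarantee buys you nothing.
    static void *urgent_task(void *) {
        return nullptr;
    }

    int main() {
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

        sched_param sp{};
        sp.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
        pthread_attr_setschedparam(&attr, &sp);

        pthread_t t;
        int err = pthread_create(&t, &attr, urgent_task, nullptr);
        if (err != 0)
            std::fprintf(stderr, "pthread_create: %s\n", std::strerror(err));
        else
            pthread_join(t, nullptr);
        pthread_attr_destroy(&attr);
        return 0;
    }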

Many modern programmers have forgotten how to do time-sensitive programming and have their code go off doing stupid things while it should be focused on its one task. Like loading icons from disk or running JIT while rendering desktop compositor frames. But it doesn't have to be that way.

A user focused device _does_ have hard real time requirements. It is responding to its user and the input to response loop should be under 20 ms. User response should be in the top five priorities. The only things higher are keeping the hardware alive.

Modern hardware should be able to do this easily even if it's running at a 200 MHz power-save clock with no data in the CPU cache. I had a Sharp Zaurus with about those specs back in 2005, and it easily hit 20 ms response.

People seem to think 200 MHz is slow or something. That's roughly _200_million_ operations per second, which gives you about 4 million instructions inside a 20 ms budget. If your software can't get an interrupt, read the data, compute the necessary response to the user and update the display in 4 million instructions, please turn in your programmer card. Note that's a response to the user, not the final result. If it is taking a lot of time, just indicate that things are indeed happening.

Back to RTOS, calculating the worst possible time isn't hard. Use the lowest possible clock speed and assume no cached data and maximum RAM contention from the GPU and other CPUs.

If Google wants to be smart about their new OS they might consider requiring applications to be more like BeOS and implement a dedicated user event thread which would get special handling. The application should also be aborted with an error if that thread ever touches any non-realtime resource or exceeds its allocated time-slice.
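
Concretely, I'm imagining something along these lines (the names and the 20 ms budget are mine, purely illustrative; nothing here is a real Fuchsia or BeOS API):

    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <functional>

    using Clock = std::chrono::steady_clock;

    // Budget for handling one user input event on the dedicated UI thread.
    constexpr std::chrono::milliseconds kEventBudget(20);

    // Run one event handler; if it blows the budget, treat that as a hard
    // failure and abort the application.
    void run_event(const std::function<void()> &handler) {
        const auto start = Clock::now();
        handler();  // must not touch disk, network, JIT...
        if (Clock::now() - start > kEventBudget) {
            std::fprintf(stderr, "UI event handler missed its %lld ms budget\n",
                         static_cast<long long>(kEventBudget.count()));
            std::abort();
        }
    }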



BeOS, again

Posted Aug 16, 2016 23:14 UTC (Tue) by tialaramex (subscriber, #21167) [Link] (5 responses)

Well, you mentioned BeOS, so let's get into that.

In BeOS what _everybody_ actually did was divide their app into essentially two parts. One half, often made of dozens or even hundreds of threads, is just UI processing. It does very, very simple things, and all the actual work is handed off to the other part. The extraneous threads make it look like you've really done a great job dividing up the app, but they actually offer negligible advantage over having a single thread. It's just that BeOS baked the inheritance (literally, in C++) of the Thread class into the Window class, and this weird choice was spun as a marvellous new way to write apps.

All the actual work is done in the other half, often a single remaining thread, which looks like everybody else's apps on every platform: a big message loop, then blocking operations like disk accesses and network requests; it takes locks, it fires timers, very conventional. It processes one message from the UI layer at a time.
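
In plain C++ terms the shape was roughly this (ordinary std::thread and a hand-rolled queue standing in for BWindow/BLooper; the message strings are made up for illustration):

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct MessageQueue {
        std::mutex m;
        std::condition_variable cv;
        std::queue<std::string> q;

        void post(std::string msg) {
            { std::lock_guard<std::mutex> lk(m); q.push(std::move(msg)); }
            cv.notify_one();
        }
        std::string wait() {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return !q.empty(); });
            std::string msg = std::move(q.front());
            q.pop();
            return msg;
        }
    };

    int main() {
        MessageQueue work;
        std::thread worker([&] {      // the "other half": one message at a time
            for (;;) {
                std::string msg = work.wait();
                if (msg == "quit") break;
                std::printf("worker handling: %s\n", msg.c_str());  // disk, network, locks...
            }
        });
        work.post("open-file");       // what a UI/window thread would do on a click
        work.post("quit");
        worker.join();
        return 0;
    }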

And on the surface this seems to work really well, very responsive. But just below the surface you realise that your app is stalled just the same as the equivalent app on Windows or any other OS, except that the UI is "responsive" in the sense that you can still click on buttons and there's no busy cursor. There's never a busy cursor; the OS doesn't have one. It _is_ in fact busy under the same circumstances, of course: the buttons won't DO anything when you click them, and you won't get your result any sooner.

In the past few weeks a lot of people have been experiencing what this feels like inadvertently. Pokemon Go is an app you run on a phone AND of course it's a huge distributed system behind the scenes. The app is responsive, you can grab your map view and spin it around, press things, very responsive, nice. BUT, what happens when you press things needs the huge distributed system to figure stuff out. That (due to a mixture of bugs and system load) was not very responsive. So beneath the thin veneer of "responsive UI" is a slow, clunky system still. That's what the BeOS does. Why bother?

The "worst possible" stuff you mentioned isn't how anybody ended up doing anything in BeOS either by the way. It's all finger in the air estimates. 100ms here, 250ms there, round numbers all over the place. There was no calculation, not even on the back of an envelope.

Travis said it himself, writing operating systems is something he enjoys doing. Sometimes an employer actually pays him to do it, and in this case it's Google. Is it a _good idea_ to keep writing new ones? Not his department.

BeOS, again

Posted Aug 17, 2016 7:59 UTC (Wed) by mm7323 (subscriber, #87386) [Link]

Interestingly, the Slashdot coverage of this notes that Travis Geiselbrecht and Brian Swetland, who both worked on BeOS, will be involved in this project.

> In BeOS what _everybody_ actually did was divide their app into essentially two parts.

Isn't this a bit like how a good Android app is structured? i.e. one or more disposable Activities implementing the UI behaviour and screens, with some Service persisting in the background to do any heavy lifting or long-running work?

> And on the surface this seems to work really well, very responsive. But just below the surface you realise that your app is stalled just the same as the equivalent app on Windows or any other OS, except that the UI is "responsive" in the sense that you can still click on buttons and there's no busy cursor.

In this example, you have a poorly programmed app. The UI could quite happily display a busy cursor while it waits on the lower-priority backend to complete some task - if the app has been programmed to do so. Nothing about the split architecture prevents the use of busy cursors or other visual user feedback. The difference with one big event loop is that the UI stops responding, perhaps even stops repainting, and is effectively frozen while some other activity completes (e.g. a trivial disk access which may be held up due to some other heavy disk load). This can look glitchy and laggy to the user.

With a split foreground/background type of architecture, using different threads for each, you also have the option of offering the user a 'cancel operation' or 'abort' action, which the foreground UI thread can handle by dispatching a message to the background task to stop it (granted, the background task may need to poll its message queue or similar to pick up that it should stop).
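
Something like this, as a sketch of that cancel path (an atomic flag standing in for the message queue; all names invented for illustration):

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    std::atomic<bool> cancel_requested{false};

    void background_task() {
        for (int chunk = 0; chunk < 1000; ++chunk) {
            if (cancel_requested.load(std::memory_order_relaxed)) {
                std::puts("background task: aborted by user");
                return;
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(5)); // "work"
        }
        std::puts("background task: finished");
    }

    int main() {
        std::thread worker(background_task);
        // The UI thread stays free to repaint, show a busy indicator, and
        // react to an 'abort' button press:
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        cancel_requested = true;
        worker.join();
        return 0;
    }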

Given that today's mobile apps seem to be all about having whizzy UIs with lots of smooth animations, scrolling, transitions and other eye-candy, the BeOS approach you describe may be appropriate.

BeOS, again

Posted Aug 17, 2016 8:38 UTC (Wed) by renox (guest, #23785) [Link] (3 responses)

I think that you're a bit too critical of BeOS here:
1) it booted very fast (much faster than Linux or Windows)
2) I don't really care how it was done under the hood, it really felt responsive (much, much more than Linux or Windows).

IMHO the reason it was responsive is that their first hardware target was a dual-CPU computer, so they wrote applications that really used multiple threads, and it felt very responsive even on a single-core CPU. Nice!

The only real reason I would say 'why bother reproducing what BeOS did' is SSDs: now, with an SSD, even a normal OS seems responsive.
That said, I wonder how Haiku would feel on an SSD-equipped computer...

BeOS, again

Posted Aug 17, 2016 21:07 UTC (Wed) by bronson (subscriber, #4806) [Link] (2 responses)

I loved the crisp feeling too, but you might care how it was done... It was responsive because the window manager and all the drawing code were linked right into each application (instead of being separate processes, as on all other OSes). BeOS even mapped all the graphics card registers and the framebuffer into each application's memory space! Without context switches, yes, it felt lightning fast.

Problem is, now it's easy to program the graphics card to DMA anything to anywhere. If not, the network card or PCI bridge chips were happy to oblige. Basically, zero inter-application security. Or kernel security, for that matter. And we think X Windows keyboard capture vulns are bad!

This was fixed post-Apple-deal iiuc, but that removed some of Be's unique magic.

BeOS, again

Posted Aug 17, 2016 22:09 UTC (Wed) by pboddie (guest, #50784) [Link] (1 responses)

Which Apple deal? Was there some kind of arrangement between the companies before Apple bought NeXT? Or was it a consequence of Apple not buying Be?

BeOS, again

Posted Aug 17, 2016 22:23 UTC (Wed) by bronson (subscriber, #4806) [Link]

Right, I should have said "failed Apple deal." At the time, Be was doing everything it possibly could to get bought by Apple. Well, everything except lowering the asking price to something a little more reasonable. :/

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 6:57 UTC (Wed) by Felix (guest, #36445) [Link] (5 responses)

> A user focused device _does_ have hard real time requirements.

No - at least in my text books "hard real time" is needed if the failure to meet a deadline endangers life or causes great physical/economic damage.

Let's try not to redefine words - otherwise there is no sensible discussion if "real time" just means "very fast, consumes only very few resources, basically flawless". In computer science you have to accept trade-offs. A free lunch is a very rare thing.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 10:24 UTC (Wed) by nye (subscriber, #51576) [Link] (4 responses)

>No - at least in my text books "hard real time" is needed if the failure to meet a deadline endangers life or causes great physical/economic damage.

Hard real time just means that certain nominated jobs (sometimes all jobs, for simpler systems used in predictable ways) have a guaranteed bound to their worst case execution time. Whatever reason you might have for wanting that is orthogonal.

>Let's try not to redefine words - otherwise there is no sensible discussion if "real time" just means "very fast, consumes only very few resources, basically flawless".

Nobody said that.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 12:20 UTC (Wed) by farnz (subscriber, #17727) [Link] (3 responses)

That's not the definition I learnt; there are three grades of "real time" task in the text books I used, distinguished by the effects of missing a wall-clock deadline (note that all three types need real time guarantees from the OS):

Interactive
Interactive tasks are those where the value of the result is inversely proportional to its lateness - animating a progress bar, for example, where the longer it takes per frame, the less useful the progress bar is.
Soft real time
Soft real time tasks are those where the result is worthless after the deadline, but where missing the deadline is a transient failure. An example might be Opus decoding for playback - if I can't complete decoding the sample by the time it's due for playback, then I might as well abort decoding that sample; however, once I start meeting deadlines again, I resume flawless playback.
Hard real time
Hard real time tasks are those where missing the deadline causes a complete failure of the task; e.g. a pacemaker is hard real time, as missing the deadline can render the pacemaker worthless.

In this model, a mobile or desktop OS benefits from being soft real time, but does not need to be hard real time.
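
A minimal sketch of the soft real time behaviour in the Opus example: a block that has already missed its playback deadline is simply dropped, and normal playback resumes with the next on-time block (decode_block()/play_block() are placeholders, not a real codec API):

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    struct Block { Clock::time_point due; /* encoded samples... */ };

    bool decode_block(const Block &) { return true; }  // placeholder
    void play_block(const Block &) {}                  // placeholder

    void playback_loop(const Block *blocks, int n) {
        for (int i = 0; i < n; ++i) {
            if (Clock::now() > blocks[i].due)  // deadline already missed:
                continue;                      // drop it, a transient failure
            if (decode_block(blocks[i]))
                play_block(blocks[i]);         // on time: flawless playback resumes
        }
    }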

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 13:58 UTC (Wed) by bfields (subscriber, #19510) [Link] (2 responses)

"In this model, a mobile or desktop OS benefits from being soft real time, but does not need to be hard real time."

As one counterexample, it's increasingly common to use ordinary mobile or desktop OSes to create live music.

So, maybe you have a MIDI controller keyboard plugged into your iPad, and when you press middle C, the keyboard sends a message to your iPad, and a software synth on your iPad does a bunch of data processing to produce a sound.

If that takes too long (I'm not sure exactly how long - a few tens of ms?), then people notice. Nobody dies, but the iPad has failed at the job it was bought for, and it's a failure that somebody making their living as a performer can't afford with any frequency.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 14:05 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

That doesn't make it hard real time - in both soft and hard real time tasks, the failure to meet the deadline is a failure of the task. The distinction is in how you recover if you miss some number of deadlines and then start meeting deadlines again: in a soft real time task, you fail every time you miss a deadline, but you recover once you start meeting deadlines again. In a hard real time task, missing a deadline means that the task can never recover without external assistance.

So, the situation you've described is soft real time - you've failed by missing the deadline, but once you start meeting deadlines again, the device is useful for the job it was bought for. If it were a hard real time situation, missing the deadline would mean that until you rebooted the iPad, the software synth did not function at all - no sound came out.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 16:07 UTC (Wed) by zlynx (guest, #2285) [Link]

If the goal is to make application responsiveness a priority then I would _make_ it hard real time by crashing the app when it failed to hit the deadline. During app verification I would force it to run under simulated worst case conditions.

In such a situation the app has completely failed by not hitting its timing requirements and it can't recover to be soft real time because it is dead. And I consider this completely reasonable. Maintaining 60 Hz interactive response should be _trivial_ on modern hardware.

It isn't any more artificial a limitation than in robotics. A robot arm that fails to stop in time isn't necessarily "hard" real time either, since after bashing its way through the target it could just keep going. As long as it hits _most_ of its timing targets, everything is fine, right? No, not really; so if that does happen, safety features force the hardware to shut down, which is the same deal.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 17, 2016 16:15 UTC (Wed) by drag (guest, #31333) [Link] (1 responses)

> All RTOS run multiple things at the same time. They're worthless otherwise. They'd be DOS.

But an RTOS makes running multiple things at the same time slower and less responsive than if you are using something like Linux.

> It is responding to its user and the input to response loop should be under 20 ms.

Right. If you want things done ASAP you shouldn't be using an RTOS. ASAP is not RT.

If the UI doesn't redraw every 20 ms, do you want it to simply give the user a blank image? If that is what you want, then using an RTOS makes sense.

If you want a redraw to go as fast as possible and to try to keep it under 20 ms, but if it can't then 25 ms is better than nothing... then you want an OS that is optimized for that type of performance, and an RTOS is not that.

> If Google wants to be smart about their new OS they might consider requiring applications to be more like BeOS and implement a dedicated user event thread which would get special handling.

BeOS did it wrong by forcing every application to have a 'rendering thread'.

Apple did it right with OS X, by which I mean they use composition in the display. Applications render into a buffer, and that buffer is used as a texture in the composed display. That way the UI you are interacting with is not tied to the rendering performance of each and every application you happen to have open with some part visible.

You end up with a responsive display that is very efficient, because applications now only have to render if something in their window changes. It looks nicer, too, because the UI is rendered as one big image rather than a bunch of little sections.
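
As a toy sketch of that model (all the types here are invented for illustration): each window owns an off-screen buffer it redraws only when its own contents change, and the compositor just recombines the existing buffers every frame, so a slow application only stalls its own window:

    #include <vector>

    struct Buffer { /* pixels, owned by the application's window */ };

    struct Window {
        Buffer buffer;
        bool dirty = false;
        void render_contents() { /* the app draws into its own buffer */ dirty = false; }
    };

    struct Compositor {
        std::vector<Window *> windows;
        void frame() {
            for (Window *w : windows)
                if (w->dirty)              // unchanged or slow windows just keep
                    w->render_contents();  // showing their last buffer
            // blend every window's buffer into one final frame (e.g. as GPU textures)
        }
    };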

Composition gets a bad rap in Linux-land, however. Not because it is slow or it sucks, but because X is incapable of doing it efficiently. It's not up to the task.

Google is developing an OS called “Fuchsia,” runs on All the Things (Android Police)

Posted Aug 18, 2016 11:16 UTC (Thu) by HenrikH (subscriber, #31152) [Link]

Hmm, the way Apple does it sounds a lot like how Intuition on the Amiga worked. And isn't this how Wayland does things also?

