
Google Chrome OS and the community

By Jonathan Corbet
July 8, 2009
On July 7, Google let the world know about a project called "Google Chrome OS." It is a new operating system, meant to run (initially) on netbooks. As would be expected from Google, there will be a strong emphasis on web applications; much work is also apparently going into fast booting, security, and a simplified user interface. Google promises to open-source the code toward the end of the year; commercial shipments are expected in the latter half of 2010.

Much of the mainstream press sees this move as a frontal assault on Microsoft, and that it may well be. Microsoft appears to have regained the upper hand on the netbook platform for now, but Windows does not come across as a perfect fit for that sort of platform. But it might not just be Microsoft which feels discomfort from this new operating system; it's not clear that this effort will be good for Linux either. Much depends on how Google works with the free software community; past experience suggests that there could be cause for worry.

Those who would criticize Linux like to point at the vast number of distributions available. They charge (rightly) that fragmentation did a lot of damage to proprietary Unix; Linux, they say, is far more fragmented than Unix ever was. In truth, fragmentation has been a relatively small problem for Linux. It is worth spending a moment to look at why.

One of the reasons, clearly, is that all Linux distributions are based on the same kernel. Some distributors apply more patches than others, but it is, for all practical purposes, the same platform underneath. The accelerated development process adopted for 2.6 has helped in this regard; useful code gets into the mainline quickly enough that there is little reason for distributors to patch significant functionality into their own kernels. On top of that, the "upstream first" ethic ensures that enhancements to the kernel are available to all distributors and, thus, to all users.

Beyond the kernel itself, much of the "plumbing layer" is also common to all distributors. The availability and management of libraries works well enough that it's often possible to move complicated binaries between distributions and expect them to run. That is a high degree of compatibility for a "fragmented" platform. The end result is almost zero lock-in for most Linux users. The ability to move to a different distribution while still running Linux is one of the greatest strengths of the platform; it is a direct manifestation of the value of free software for users. As long as the ability to switch remains such a fundamental feature of Linux, we need not fear fragmentation.

So the real question is: will Google's new operating system play by the rules which have provided such consistency across Linux distributions? The real answer won't be known for some time. But Google Chrome OS will not be Google's first Linux-based operating system; that distinction belongs to Android. So perhaps we can get a foreshadowing of how things will work by looking at what was done with Android:

  • The kernel was indeed Linux, but what Android ships is far removed from a mainline release. A great deal of code was added behind closed doors and committed to the platform before any sort of public release or review. Much of it has no real hope of getting into the mainline kernel ever. Even now, Android kernel code, while being available in a public git tree, is developed separately from the mainline. With some small exceptions, nobody from Google is making any real effort to get Google's code reviewed in the wider community or merged into the official kernel.

  • The plumbing layer is totally different; Google rolled its own C library for Android. The motivations for this work are not entirely clear, but it does seem that Google has gone out of its way to avoid GPL-licensed code, and code owned by the Free Software Foundation in particular.

  • Several of the applications are proprietary.

The end result is that, while Android is based on the Linux kernel, it does not, in its default form, feel much like a Linux system. Ordinary Linux applications do not just run on Android. With effort, one can supplement Android with the features needed to run "normal" Linux; one can even put a full Debian environment onto it. But it's an add-on, not part of the platform itself.

One could argue that Android sits in a special niche: it runs on mobile phones and must, among other things, operate in a way acceptable to handset manufacturers and cellular providers - not always the most accommodating sorts of companies. Google Chrome OS is, instead, aimed at desktop-like applications. It will operate in a niche where ordinary Linux can be found; perhaps, as a result, it will be more like ordinary Linux. Time for a closer look at the announcement:

  • Code is to be released "later this year." But this is a project which has been underway for a while, and which will, undoubtedly, proceed quickly during this time. So we are not starting with community-based development; we'll get another code dump some months from now.

  • Google is "completely redesigning the underlying security architecture of the OS". How that security model will be enforced is unclear - it could involve kernel changes, or it could be embedded within a virtual machine. Either way, it does not sound like a feature which will enhance compatibility with other Linux distributions. Security is important, and it does not come out well when designed behind closed doors. If Google has a better way to do security on Linux, it should be sharing its ideas and getting community input now; presenting a new security model as a fait accompli months from now will not be helpful.

  • There will be a new windowing system; no more details than that are available. How new and different will it be? Will Google Chrome OS be able to run X applications?

The picture which emerges looks a lot like Android: a platform which takes a number of pieces from Linux, but which is not like Linux, and which does not really give back to Linux.

Perhaps that picture is wrong. Perhaps Google is secretly working with one or more Linux distributors, or with projects like Moblin or Maemo, which are doing a great job of achieving many of the objectives Google has set for its new operating system. Just maybe, Google is working to strengthen the projects its work is based upon, rather than trying to supplant them. Possibly, when Google says:

We have a lot of work to do, and we're definitely going to need a lot of help from the open source community to accomplish this vision.

it really means to work with the community and not just absorb work from the community. Your editor very much hopes so, but your editor also recognizes that this would require a different approach to the community than Google has shown in the past.

Android is a good thing: it has brought Linux to a new class of platforms and created a new development community based on free software. The Android developers have taken the time to rethink how the system works and to attempt some innovative new approaches; we can never have too much of that. There can be no doubt that the same will be true of Google Chrome OS; it will be interesting to see what they come up with. But there can also be no doubt that Google Chrome OS could be a lot better if it were developed within the community instead of on top of it. Your editor wishes Google the best of luck with this ambitious project and hopes that the larger community will truly be able to be a part of it.



Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 3:10 UTC (Thu) by khim (subscriber, #9252) [Link] (26 responses)

The availability and management of libraries works well enough that it's often possible to move complicated binaries between distributions and expect them to run.

Sorry, but this is not true at all. When you grab stuff from an older distribution you need to install some kind of compat package. If you grab stuff from a newer distribution - you often need a new version of some vital library (like glibc). Sometimes you can use alien libraries and LD_LIBRARY_PATH instead (this is how I used 7z from Hardy in Dapper). Are these obstacles insurmountable? Not at all - there are a lot of people who can do it. Millions of them - 1% of PC users is a good estimate, I guess - and coincidentally this is the size of Linux "desktop penetration"...

Fragmentation does not hurt Linux development... much, but... it sure as hell does hurt Linux adoption. Things are not working "out of the box" if you go beyond your distribution's repository - and this is a big deal for Joe Average...

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 7:03 UTC (Thu) by Frej (guest, #4165) [Link] (7 responses)

Almost agree ;) It's basically impossible to distribute linux software as an ISV (i hate using that term). Sure, if you like to manage software or run a server, packages are very nice. But it's all very centralized.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 7:47 UTC (Thu) by halla (subscriber, #14185) [Link] (3 responses)

We manage alright. We have a fairly complicated application that uses Qt and about a dozen other
third party libraries. Our binaries (http://www.hyves.nl/hyvesdesktop/download/) have been
reported to work successfully on many linux distributions.

Of course, we ran into trouble when we wanted to support sound. Qt's phonon uses GStreamer as a
backend and that's a mess. We came to the conclusion that we'd better use QSound, since the
alternative would have been to redistribute the right gstreamer with all plugins ourselves.
Something compiled against a platform gstreamer on one distribution will crash on all other
distributions (that we tried).

Sound

Posted Jul 9, 2009 12:52 UTC (Thu) by rfunk (subscriber, #4054) [Link] (2 responses)

I thought Phonon was a KDE thing, not a Qt thing.

I also thought that the whole point of Phonon was that you don't have to
deal with the underlying Gstreamer (or Xine or whatever) engine at all, so
that engine could be swapped out or upgraded at will, as long as your
Phonon can handle the varying engines. Thus the confusion from Gstreamer
people who thought (wrongly) that Phonon was intended to replace
Gstreamer.

Of course, if you have a Qt app rather than a KDE app, it would seem to
make sense to use Qsound rather than Phonon anyway.

Sound

Posted Jul 9, 2009 13:09 UTC (Thu) by johnflux (guest, #58833) [Link] (1 responses)

Phonon was originally KDE, but was moved from KDE into Qt.

Sound

Posted Jul 9, 2009 13:33 UTC (Thu) by halla (subscriber, #14185) [Link]

It's a Qt thing. But at least in Qt 4.4.x it was quite problematic. On Windows XP, it would play wav
files, which it wouldn't play on Windows Vista, where phonon would be able to play mp3, which it
wasn't able to play on XP... On OSX everything was fine. And on Linux, an app compiled on Ubuntu
crashed in gstreamer on OpenSUSE and Fedora -- and all other permutations gave the same result.

We'll have to try again with 4.5 -- 4.5 is in many ways a great series of releases with lots of fixes
and cool things.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 13:04 UTC (Thu) by marcH (subscriber, #57642) [Link] (2 responses)

> Almost agree ;) It's basically impossible to distribute linux software as an ISV

If your goal is to distribute only one "universal" binary, you are right this is seldom possible. If, on the other hand, you start from source, then it often just works, or works with only minor effort.

Porting your source code from one Unix vendor to the other was generally much more difficult than from one Linux distribution to the other. Simply because Linux distributions share most of their underlying source code.

Minor effort for WHO?

Posted Jul 9, 2009 16:16 UTC (Thu) by khim (subscriber, #9252) [Link] (1 responses)

If your goal is to distribute only one "universal" binary, you are right this is seldom possible.

Your goal is to give users something they can use.

If, on the other hand, you start from source, then it often just works, or works with only minor effort.

...if you know how to install stuff from source. I was surprised at first when I found, a few years ago, that some of my friends who work as admins don't know C and don't know how to compile programs from source... but then - why should they? It's rarely needed, and when it's the only possibility - they can contact me and I'll help them (at first it was for free; later, when I tried to say "enough is enough", they just offered to pay for my skills, and now everyone is happy). If even admins cannot compile stuff from source - what chance do you think "Joe Average" will have?

Porting your source code from one Unix vendor to the other was generally much more difficult than from one Linux distribution to the other. Simply because Linux distributions share most of their underlying source code.

Sure - this was the point of the article. And my point is that such effort is still required if you are talking about programs for "normal users"... and there are more Linux distributions than there were Unix vendors...

Minor effort for WHO?

Posted Jul 16, 2009 16:08 UTC (Thu) by Wol (subscriber, #4433) [Link]

May I suggest you look at lilypond (www.lilypond.org). They distribute ONE universal binary which is pretty much guaranteed to run on ALL linux distros. (Firefox and OOo do the same ...)

And the tools lilypond uses to do that are open-source...

Cheers,
Wol

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 9:28 UTC (Thu) by DonDiego (guest, #24141) [Link] (6 responses)

Static linking is your friend. I never understood why ISVs don't simply distribute statically linked binaries.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 11:20 UTC (Thu) by da4089 (subscriber, #1195) [Link] (1 responses)

Often, they do. It's the least bad alternative. The major downside is that the vendor needs to deal
with their own security patches for the embedded dependencies.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 10, 2009 5:53 UTC (Fri) by pabs (subscriber, #43278) [Link]

Surely most of the time they don't bother to care about the embedded code copies?

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 14:32 UTC (Thu) by elanthis (guest, #6227) [Link] (3 responses)

Except that doesn't really work well for real applications. Static linking GTK means that your application doesn't use the same theme as the rest of the desktop that's running a newer GTK. Static linking sound libraries means that codec plugins don't work, and you need to statically link those too, which can be a legal nightmare. And static linking libc is a horrifically bad idea.

Linux compatibility at the ABI level is an absolute joke. Source compatibility can even be an issue now and then, because so many projects just change the damn API with every release, and distributions generally only ship the latest version of most projects (few distributions ship every 2.x release of Python, for example, even though each release has minor API and ABI breakages).

That's all fairly irrelevant though, since Linux distributions go out of their way to impose entirely artificial barriers to compatibility. Even if you make a solid portable Linux binary, there's no way to make that binary installable in a cross-platform way that doesn't rely on the user opening a shell and having wasted weeks/months/years of their life learning how to use a shell instead of spending their time doing something more important (like spending time with real people, instead of enslaving themselves to babysitting and hand-holding their "time saving" computational apparatus).

Until Linux distributions either agree on a common package manager (and standardize package names, virtual provides, etc.) or agree on shipping a second cross-platform installation tool (there are a ton of these, some of which I believe can even integrate with RPM/DPKG, but if these tools are not installed on the system then installing packages still requires shell magic to install the damn installation tool), this situation won't improve.

The various framework developers don't put a lot of effort into testing binary compatibility I believe, and that's largely because few people ask for binary compatibility, because binary compatibility is useless on an OS where getting the binaries installed is a nightmare. In turn, companies don't bother trying to make universal binaries because they know it's entirely pointless, and companies that flat out _can't_ release source just don't bother with Linux... which is why a great deal of us still have Windows installations around. The only thing stopping a great deal of those applications being ported to Linux is the fact that they'd be absolutely impossible to install on Linux.

Take a modern game for example. Even assuming the game source was released, you can't package those things in an RPM. They come on DVDs packed to the brim with textures, sounds, music, meshes, maps, scripts, videos, and so on. Are we supposed to install a single 4.4GB RPM? And then every time there's a minor update to a few models, we're supposed to download a new 4.4GB RPM because there's no standard delta-RPM mechanism shared by all the RPM distros? That doesn't even include Debian/Ubuntu of course.

Until there is a cross-platform way of installing software -- and I don't even care if it's a graphical frontend to a compilation script to make the GPL fans happy, so long as it can figure out how to install dependencies on its own -- including a way of updating that software in a realistic fashion given all of today's applications' needs, binary compatibility isn't worth testing and developing for on Linux. It's there mostly to make a nice bullet point for a few enterprise distros that don't really need it, and that's it.

In turn, Linux is still just an "appliance" OS and anybody who needs to do more than run a web browser and email client and word processor (which is a far, far greater percentage of users than the Linux desktop advocates continually claim -- I can't name a single person, even my 80+ year old grandparents, who limits themselves to just those three things) simply can't use Linux, because the repositories don't include the software they want (be it Bejeweled 2 or their local genealogy club's favorite software package) and there's no possible way they could ever figure out how to install that software even if there was a Linux version.

Linux's future in the mass consumer market, assuming these things don't change (and I'm 100% convinced that they never will), is going to be handhelds and other appliance-like devices... assuming Linux developers can ever manage to beat the popularity of Apple's competing devices, anyway. Which, as of yet and for the foreseeable future, they can't. Android and Pre have nothing on the iPhone's sales. And if you care about Linux from an Open Source/Free Software perspective, those Linux devices don't even matter to you because they rely on proprietary software to get full functionality!

I can't stress it enough though. Software installation is Linux's Achilles' heel. Until that's fixed, Linux is just a niche nerd OS in the desktop space.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 15:48 UTC (Thu) by marcH (subscriber, #57642) [Link]

> anybody who needs to do more than run a web browser and email client and word processor (which is a far, far greater percentage of users than the Linux desktop advocates continually claim [...]) simply can't use Linux because the repositories don't include the software they want

So a typical Linux repository holds only web browsers, email clients and word processors, that is all? I agree with most of your (very good) post, except for this exaggeration above.

You are right that anything happening outside of repositories is a nightmare. So maybe the future of Linux software distribution is just "more repositories". See for instance this: http://www.virtualbox.org/wiki/Linux_Downloads or this: http://rpmfusion.org/Configuration

Performing such automated re-compilation and packaging is possibly easier than ensuring portability across multiple Windows versions.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 16, 2009 19:15 UTC (Thu) by oak (guest, #2786) [Link]

> Are we supposed to install a single 4.4GB RPM? And then every time
there's a minor update to a few models, we're supposed to download a new
4.4GB RPM because there's no standard delta-RPM mechanism shared by all
the RPM distros?

Any competent packager would of course split the thing into suitable
packages.

In the case of a game and related data files, the game itself would be in one package and all the
huge data files could be split, e.g. so that there's one level/scenario/campaign per additional
package (depending on how huge they are).

I think this fits nicely with the model of how shareware & id Software worked earlier. You get the
(open sourced) game and some single-player tutorial/demo level as package(s) (maybe included in the
distributor's repository) and, based on how much the user likes it, s/he can then buy the full game.
When the demo level is finished, the user would see something like: "Demo ended, buy the full game?
[enter <publisher>'s shop]" which would open the browser to the shop.

The non-code level data could be packaged in the publisher's own repo which provides it in
general packaging formats (RPM/DEB/tar.gz). As data files don't have dependencies, these can be
installed on any distro version. Only the game binary package needs to be recompiled & kept
separate for each distro version, and that could be done by the distros themselves if the game
source is open and this is agreed with the publisher.

I'm not sure how one could get copy protection to this. DRM doesn't work
very well and isn't approved by anybody. Maybe this kind of games should
have some kind of network element where user needs to log in with his/her
game registration?

Static linking

Posted Jul 29, 2009 3:44 UTC (Wed) by jeff@uclinux.org (guest, #8024) [Link]

"Are we supposed to install a single 4.4GB RPM?" [when installing a static linked distribution-
portable game]

Well, game? Maybe 400M. Other things, sure. Doesn't make it right, but the incompatibility
problem is really quite serious, and disk space is cheap. Just for example... Download Xilinx
WebPack 11.1:

Jeff$ ls -l Xilinx_11.1_WebPack_SFD.tar
-rw-r--r-- 1 jeff users 2868316160 Apr 30 13:39 Xilinx_11.1_WebPack_SFD.tar

It's on that order of magnitude even with compressed installer files inside the tar. Mostly static
linked. Yes, I think for serious programs, this is a serious option. These programs have to work
today, next month, next year... if they achieve that, we'll see. If I need to distribute tools
binaries, --enable-static --disable-shared.

I think Bionic libc is a stellar library for a cell phone. I have never built any embedded system
with glibc; if you're doing that, you're not making the right engineering choices.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 10:27 UTC (Thu) by nye (guest, #51576) [Link] (8 responses)

Well, I was pleasantly surprised the other week when I tried running an old copy of Mosaic. The Linux binary from something like 1995 ran without my having to do anything more than extract and execute. Of course, this is because it wasn't built to rely on shared libraries.

(Sadly it turns out that NCSA's own website was the only one I could find which still uses HTTP/1.0, so it was all for naught.)

You were surprised? Why?

Posted Jul 9, 2009 16:24 UTC (Thu) by khim (subscriber, #9252) [Link] (4 responses)

Well, I was pleasantly surprised the other week when I tried running an old copy of Mosaic.

I run old DOS/Windows programs quite often - and I expect that they'll work on new versions of Windows (the fact that Windows Vista x64 dropped support for DOS/Win16 was an unpleasant surprise, but AFAICS it's a problem with AMD64 mode, not the result of Microsoft's negligence). For you, the fact that an old copy of Mosaic can run is a pleasant surprise. Nuff said.

The really strange thing here is that the GNU/Linux core is 99% bullet-proof: the Linux kernel and glibc go to huge pains to guarantee backward compatibility. But the further you go up the software stack, the flakier ABI stability becomes - by the time you've reached the layers where you can do something useful, it's gone almost completely...

You were surprised? Why?

Posted Jul 9, 2009 16:30 UTC (Thu) by martinfick (subscriber, #4455) [Link] (2 responses)

There are plenty of old dos programs that do not run on windows. Anything that polled a mouse will chew up CPU. Anything that did out of the ordinary graphics will not work, games...

Yeah, but then Linux's task is simpler from the start...

Posted Jul 9, 2009 19:07 UTC (Thu) by khim (subscriber, #9252) [Link] (1 responses)

There are plenty of old dos programs that do not run on windows. Anything that polled a mouse will chew up CPU. Anything that did out of the ordinary graphics will not work, games...

Sure. DOS programs were written as if they owned the computer. Which they did. So it's not easy to containerize them without huge overhead. The comparable thing in the Linux world is OSS programs - they also like to hog part of the computer (the /dev/dsp device). And like MS-DOS games, these old programs worked poorly with new distributions. And as with Windows, the idea that you can just rewrite all programs didn't fly (Windows only took off with version 3.0, which was the first version with decent support for MS-DOS programs). Why the hell must Linux developers repeat all of Microsoft's mistakes?

Yeah, but then Linux's task is simpler from the start...

Posted Jul 11, 2009 9:55 UTC (Sat) by nix (subscriber, #2304) [Link]

Because they're not "Microsoft"'s mistakes, per se: they're mistakes made
by software developers in general. MS just tripped over them, and now so
are we.

Actually, this one isn't even a mistake: it's an inevitable consequence of
what happens when a stable foundation grows that everything relies on,
when that foundation is then shown to be faulty by design. At least we
*can* rip it out: when biology does the same thing, we get stuck with the
same unfixable faults for hundreds of millions of years...

You were surprised? Why?

Posted Jul 11, 2009 9:46 UTC (Sat) by nix (subscriber, #2304) [Link]

But the further you go up the software stack, the flakier ABI stability becomes - by the time you've reached the layers where you can do something useful, it's gone almost completely...
Is it? I see only one or two small ABI breakages a year (much less if you ignore libdb, OpenSSL, ffmpeg, libperl and libpython, which all break ABI at the drop of a hat), and the only specific complaints I've seen on this thread have been people trying to run things that expect new ABIs of old libraries, which isn't going to work until we all have our time machines.

The major high-in-the-stack desktop libraries and the things they use go to great lengths to maintain back-compatibility, and it seems to work. What they do instead is introduce new libraries with better APIs every so often (e.g. gvfs replacing gnome-vfs), and, sure, if you don't have those and you install something that needs them, you'll have to install them. But, again, this has nothing to do with ABI breakages, which pretty much aren't happening.

Binary compatibility

Posted Jul 10, 2009 18:31 UTC (Fri) by anton (subscriber, #25547) [Link] (2 responses)

Yes, my binary of Mosaic 2.7b5 still works on my Debian Lenny AMD64 system, and it also understands current HTTP (but often likes to download instead of display HTML pages due to charset issues (IIRC)). The binary is from 1998 and is statically linked.

A Mosaic binary from 1994 just segfaults, as well as all the other ZMAGIC binaries (1994-1995) I have lying around.

The QMAGIC binaries (1995-1997) all report "can't load dynamic linker '/lib/ld.so nor /usr/i486-linux/lib/ld.so'" and could probably be made to work by copying the appropriate file to the right place (plus any libraries needed).

Then we get into the ELF era (since 1998), and the binaries (e.g., ssh 1.2.25) work if (compatible versions of) the libraries they use are present; for that ssh binary the libraries are libnsl.so.1 libcrypt.so.1 libutil.so.1 libc.so.6 /lib/ld-linux.so.2, which came with the libc6-i386 package on Lenny. Not bad.

I usually preserve old distributions I used to use, so I can easily copy the old libraries to my new distribution if I need them (or maybe just include the old library directories in ld.so.conf). I have not needed to do that for quite some time, though.

Binary compatibility

Posted Jul 11, 2009 10:04 UTC (Sat) by nix (subscriber, #2304) [Link] (1 responses)

I think you have to turn off address space randomization to get old ZMAGIC
binaries to work (but I'd not be surprised if they'd rotted completely:
does anyone other than Alan Cox run them anymore? :) )

Binary compatibility

Posted Jul 13, 2009 13:26 UTC (Mon) by anton (subscriber, #25547) [Link]

Address-space randomization was turned off in my experiments, so that alone is not enough. Apart from such experiments I don't run them anymore.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 15:10 UTC (Thu) by NAR (subscriber, #1313) [Link]

Oh, yeah. I've just had a user last week who didn't care to read the release notes and tried to run the software on SuSE 9 instead of SuSE 10. Of course, it didn't work due to glibc version differences.

Small problem for Linux ? Sure. Big problem for Linux user? Of course.

Posted Jul 9, 2009 22:34 UTC (Thu) by Baylink (guest, #755) [Link]

And this happens on many levels.

I just bought a Sierra 598 USB "Internet on a Rope" dongle from Sprint.

Sprint's shelf-talker card in the store -- offset color printed -- says it's
compatible with XP, Vista, Mac *and Linux* (without specifying a kernel version or distro).

Alas, though unsurprisingly, Sprint's support doesn't know from Linux, and Sierra "doesn't support" Linux -- though they do have a linux@ address which is ticket-tracked (with custhelp.com's absolutely *miserable* system, but don't get me started on that) -- the best thing Sierra has for me is "it works on Fedora and Ubuntu".

The sticking point is apparently that the USB devices that expose the modem as ttyUSB *are mode switched* from the USB devices that expose the onboard "TRU-install" PRAM and MicroSD card slot; the driver is supposed to switch from one mode to the other... and hald or udev may be what's getting in its way, which brings me back on topic: how portable your software {is,can be} depends on *how low level stuff it has to do*.

SysVinit is fairly well disseminated across distros now, so large packages can make reasonable assumptions about how they'll have to set their daemons up to run and suchlike, but while we used to consider that "low-level" stuff, there are lots of new middleware layers in the average Linux distro these days, and it's (again) not so easy...

(If anyone has any pointers on the USB thing; the gory story is at: http://www.evdoforums.com/thread12302.html)

Telcos

Posted Jul 9, 2009 4:56 UTC (Thu) by jmorris42 (guest, #2203) [Link]

Somehow I suspect Google's always-connected-to-the-mothership mentality will lead naturally to Chrome OS being intended to ship with netbooks equipped with 3G cards and a contract. Just like Android, only on bigger displays and with a keyboard. So expect the same vendor locks.

I'll be shocked if you can get to root on one, to say nothing of actually loading a home rolled image or unsigned app. Like the gPhone we might get an option to buy a 'devel' version.

Google Chrome OS and the community

Posted Jul 9, 2009 6:59 UTC (Thu) by hickinbottoms (subscriber, #14798) [Link] (1 responses)

"If Google has a better way to do security on Linux, it should be sharing its ideas and getting community input now; presenting a new security model as a fait accompli months from now will not be helpful."

Whilst I would also be interested to see their ideas and developments, and agree that security is always improved by peer review from security grown-ups, if they really are "completely redesigning the underlying security architecture of the OS" that suggests that their security model would be a radical departure from that currently supported by Linux, perhaps breaking Posix and Posix-like compatibility. Such a change would, I suspect, be unlikely to ever get near being included upstream and hence may not have got much interest from that community anyway.

Personally, I'm much too wary of giving my data away on the cloud to a commercial infrastructure I've got no visibility of, and which could be accessed by who-knows-who within Google or whichever company hosts these kind of services. It might be paranoia, but I always feel that's far more of a security risk than access control on my laptop - at least I have some control over that.

It certainly will be interesting, though. I also noticed that the BBC initially pitched the story as a "potential blow to Linux on netbooks" until, presumably, someone informed them that Linux was underneath it after all...!

Google Chrome OS and the community

Posted Jul 10, 2009 9:40 UTC (Fri) by lbt (subscriber, #29672) [Link]

You said:
the BBC initially pitched the story as a "potential blow to Linux on netbooks" until, presumably, someone informed them that Linux was underneath it after all...!

And actually I think they had stumbled upon a point. Chrome OS is more likely to compete with and take market share from Moblin and Maemo than Windows; and as discussed it's not looking like Google are anywhere near as community friendly as, say, Nokia.

Google Chrome OS and the community

Posted Jul 9, 2009 7:09 UTC (Thu) by pranith (subscriber, #53092) [Link] (3 responses)

"With some small exceptions, nobody from Google is making any real effort to get Google's code reviewed in the wider community or merged into the official kernel."

I guess the trouble is not worth it for Google to get the code peer-reviewed. Hey, if it works(tm), why would they bother? It is the community's responsibility, if they are interested, to clean up the code and get it into mainline.

Google Chrome OS and the community

Posted Jul 9, 2009 9:31 UTC (Thu) by modernjazz (guest, #4185) [Link] (2 responses)

The problem is that it makes more work, in the long run, for everyone.
Google is surely developing some features that "we" would like; but even
more surely, "we" will continue to develop features that Google would like
(for one, compatibility with evolving hardware). If they have to keep
porting their modifications to newer kernels, it also makes more work for
them. If instead they get their work into the mainline, then most of the
work would be done for them. That's a pretty big win-win situation.
Hopefully they'll figure that out someday.

Google Chrome OS and the community

Posted Jul 9, 2009 13:17 UTC (Thu) by marcH (subscriber, #57642) [Link] (1 responses)

> The problem is that it makes more work, in the long run, for everyone.

Sure, but: "in the long run we are all dead" (Keynes)

For some people months or even weeks is already a "long" run. And for financial markets it can even be days. GOOG?

Long term survival

Posted Jul 9, 2009 21:47 UTC (Thu) by man_ls (guest, #15091) [Link]

The community always lives on. This is not just a rhetorical point; many people developing the Linux kernel were not doing it 10 years ago, many have moved on, and all of us benefit from the efforts put in the kernel since the beginning. (A community can also die, but while it lasts it doesn't depend on specific individuals.)

But it also works for the private enterprise. When a company looks just at the short term, and doesn't stop and see the big picture, then it tends to last that same short term. Such a position is understandable for a company which didn't exist 12 years ago, but we expect better from them.

Google will contribute more than with Android

Posted Jul 10, 2009 7:48 UTC (Fri) by walles (guest, #954) [Link] (1 responses)

With ChromeOS, Google will have stronger incentives to work with the kernel community than what they have with Android.

The reason is that while Android is available only for very specific hardware, ChromeOS has the potential to be available for any PC. People are unlikely to upgrade their phone with a new network card (or whatever) and then go complaining to Google when it doesn't work, but they *could* do that with their ChromeOS devices.

For the ChromeOS Linux kernel Google has two choices:
1. Working with the community and having the community do most / all of the porting work for free.
2. Working without the community and as time goes by having to put more and more resources into keeping up with the upstream kernel.

Working with the community sounds like the cheaper option here in the long run. If Google has long term plans for ChromeOS, I can't see what they would gain by not letting the community work for them.

Google will contribute more than with Android

Posted Jul 16, 2009 10:39 UTC (Thu) by kragil (guest, #34373) [Link]

_*I*_think_ Google will be much more upstream friendly with this one. I can imagine that they will go rolling release and actively push new kernels etc. They could use a snapshotting FS for disaster recovery. I think they will use an OpenGL WM and they will use KMS, PulseAudio and maybe even Ksplice, because they want to push updates silently without user notification. It should just work and not bother people with technicalities.

I think such an approach is sufficiently different that it might work.

( Oh and if they do as I predict they will probably also use an established LSM framework for security, but http://code.google.com/p/nativeclient/ for native code )

But I suck at predictions .. my predictions for 2009 were e-ink ebook readers everywhere and books dying and other stuff that might still take a century to make a dent.

“new windowing system”

Posted Jul 10, 2009 11:21 UTC (Fri) by pjm (guest, #2080) [Link] (2 responses)

In the context of a press release, I took the phrase ‘new windowing system’ to mean something more like ‘new window manager’ than outright replacing X11. I'd suggest changing the article to put ‘windowing system’ in quotation marks: in the context of LWN, I'd take ‘windowing system’ to mean what the X Window System is, and ‘new windowing system’ to mean something like Berlin/Fresco or Qtopia.

(The second paragraph http://en.wikipedia.org/wiki/Windowing_system is evidence of window-manager-like uses of the term.)

“new windowing system”

Posted Jul 16, 2009 14:26 UTC (Thu) by ariveira (guest, #57833) [Link] (1 responses)

Maybe they "just" will be the first users of Wayland http://cgit.freedesktop.org/~krh/wayland/ ?

“new windowing system”

Posted Jul 16, 2009 19:26 UTC (Thu) by oak (guest, #2786) [Link]

It's a pity that Fresco (http://fresco.org/) didn't proceed further. It
was an interesting concept. Server side had loadable widget toolkits and
clients used the toolkit over CORBA. Because the calls were high level
i.e. few, performance isn't an issue in this case (even over network). All
drawing was done using a scenegraph i.e. UI was fully transformable.

Y-windows (http://www.y-windows.org/) had later a similar "widgets on
server side" idea and I remember also another earlier attempt at this that
went a bit further than Y-windows (it was aimed more at embedded devices).

Google's behavior makes sense

Posted Jul 11, 2009 17:11 UTC (Sat) by mikov (guest, #33179) [Link] (5 responses)

I have said it before: there are many parts of the "conventional" OS which are outdated or simply broken (POSIX in particular is horrible). Incremental backwards compatible improvement can only go so far.

Google certainly has the resources to do something new. Their changes will never be accepted in the mainline, so it is understandable if they don't want to bother with it.

I can't imagine how a business which has immediate goals and deadlines, which has to release a product, etc., coordinates pushing all of its patches into the mainline. It has to be a tremendous PITA:
- first they have to release their product with a custom kernel. This kernel has to be supported for the life of the product.
- some indeterminate time in the future, some of their patches will hopefully be accepted in the mainline, probably in a modified form.
- now a new, different custom kernel has to be created and also maintained.

It is madness. I perfectly understand why most embedded vendors don't do it and why Google doesn't bother with it.

Also, as trollish as it may sound, the comparative stability of the Windows kernel and ABIs sometimes seems like heaven compared to this nightmare.

Don't get me wrong - I use and recommend Linux all the time - but I am trying to be objective here.

Google's behavior makes sense

Posted Jul 12, 2009 8:47 UTC (Sun) by quotemstr (subscriber, #45331) [Link] (4 responses)

POSIX in particular is horrible
Care to back this claim up? Name a particular thing you couldn't implement without breaking POSIX. There are many things that are broken, but POSIX isn't all that bad.

Google's behavior makes sense

Posted Jul 13, 2009 1:32 UTC (Mon) by mikov (guest, #33179) [Link] (3 responses)

Obviously you can implement anything with a sufficient amount of effort - but it is still a bad API full of kludges. The only thing really going for POSIX is that it is real and ultimately is a useful standard (in the sense that you can use it to get work done).

Anytime I browse through Stevens's seminal books I want to cry - the books are great, but what they are describing is a mess.

If you take the core Win32 API for comparison - only the IO operations and thread management - it is really clean and orthogonal, quite unlike POSIX. (Note that I am not necessarily saying that Win32's philosophy is better; only that the API is much cleaner).

One example of the POSIX mess that comes to mind is fork()/wait()/SIGCHLD/zombie processes/etc. Compare that to Win32's simple and straight-forward approach of WaitForMultipleObjects()/GetExitCodeProcess()/CloseHandle().
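
For concreteness, here is a minimal C sketch of the POSIX sequence being criticized: fork the child, then reap it with waitpid() so it does not linger as a zombie. The program being launched (/bin/true) is just a placeholder.

/* Spawn a child and collect its exit status with the POSIX primitives. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: replace ourselves with the program to run. */
        execlp("/bin/true", "true", (char *)NULL);
        _exit(127);                  /* only reached if exec failed */
    }

    /* Parent: until waitpid() reaps it, an exited child stays a zombie. */
    int status;
    if (waitpid(pid, &status, 0) < 0) {
        perror("waitpid");
        return 1;
    }
    if (WIFEXITED(status))
        printf("child exited with %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("child killed by signal %d\n", WTERMSIG(status));
    return 0;
}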

In an ideal world I would have liked most of the POSIX semantics with Win32's cleanness and orthogonality. (Well, WaitForMultipleObjects() wouldn't hurt either.)

Google's behavior makes sense

Posted Jul 13, 2009 2:05 UTC (Mon) by quotemstr (subscriber, #45331) [Link] (2 responses)

If you take the core Win32 API for comparison - only the IO operations and thread management - it is really clean and orthogonal, quite unlike POSIX
Clean and orthogonal? Like the WaitForMultipleObjects arbitrary 64-object limitation? Like the inability to wait on sockets using it without a WSAEventSelect? But wait -- if two threads are waiting on a single socket using WSAEventSelect, the second thread's event notification silently overwrites the first; i.e., only one thread can really wait on a given socket at a time. How about the mess that's the named pipe interface, and its bastard stepchild, the anonymous pipe interface? How about having to care about the distinction between MsgWaitForMultipleObjects and WaitForMultipleObjects? And how on earth is it clean for COM initialization to implicitly create a GUI window for the thread?

Or how about the fact that sockets are in an undefined state after timing out? How about the horror that is CreateProcess, compared to fork? How about condition variables only being introduced in Windows Vista? How about the half of the functions that return NULL for failure and the half that return INVALID_HANDLE_VALUE?

What about handle inheritance? This deserves its own paragraph. Win32 can pass a HANDLE (for some objects) from a parent process to a child process, but unless you make that HANDLE one of the standard descriptors of a process (i.e., stdin, stdout, or stderr), you need to pass the *numeric value* of the handle to the child through some other mechanism (say, a command-line parameter), have the child pick that value up, and start using it. It's a kludge. Even SCM_RIGHTS is clean compared to that.
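
A rough C sketch of the kludge being described, assuming the handle's numeric value is passed to a hypothetical child.exe as a decimal string on its command line:

/* Parent side of Win32 handle inheritance: mark the handle inheritable,
 * then smuggle its numeric value to the child via the command line.
 * "child.exe" is a hypothetical program that converts the number back
 * into a HANDLE with strtoull(). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };  /* bInheritHandle */
    HANDLE ev = CreateEventA(&sa, TRUE, FALSE, NULL);
    if (ev == NULL)
        return 1;

    char cmdline[64];
    sprintf(cmdline, "child.exe %llu", (unsigned long long)(ULONG_PTR)ev);

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    if (!CreateProcessA(NULL, cmdline, NULL, NULL,
                        TRUE /* inherit handles */, 0, NULL, NULL, &si, &pi))
        return 1;

    /* The child would do:
     *   HANDLE ev = (HANDLE)(ULONG_PTR)strtoull(argv[1], NULL, 10); */

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    CloseHandle(ev);
    return 0;
}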

Granted, the Win32 event stuff is better than the rest of Win32. But while the worst part of POSIX is the setuid/seteuid mess, the setuid kind of arbitrary ugliness is par for the entire win32 API.

Google's behavior makes sense

Posted Jul 13, 2009 3:02 UTC (Mon) by mikov (guest, #33179) [Link] (1 responses)

As I said, I was not trying to compare Win32 and POSIX, nor their specific implementations - only the relative cleanliness of the (specific subset of the) APIs.

Also, I explicitly restricted my qualification to the core "IO operations and thread management". Most of the problems you raised relate to Winsock or User32 or COM, and so are completely beside the point.

I agree that ideally WaitForMultipleObjects() should have a much larger limit, but that doesn't negate the fact that it is conceptually a very clean and powerful API. Read/WriteFile[Ex], GetOverlappedResult, CancelIO, etc (even DeviceIoControl) - it is all very very simple and orthogonal.

The thread management and the IO use the same APIs. You can wait on anything (including custom IOCTLs!) using the same call, and you also have a useful set of atomic operations (InterlockedXXX()).
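
A small C sketch of that point, with a placeholder file name: a single WaitForMultipleObjects() call waits on a worker thread and on the event attached to an overlapped read.

/* Wait on a thread handle and an async read with the same call. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    Sleep(100);                      /* stand-in for real work */
    return 0;
}

int main(void)
{
    HANDLE thread = CreateThread(NULL, 0, worker, NULL, 0, NULL);

    HANDLE file = CreateFileA("input.dat", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    OVERLAPPED ov = { 0 };
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);
    char buf[4096];
    ReadFile(file, buf, sizeof(buf), NULL, &ov);     /* start the async read */

    HANDLE handles[2] = { thread, ov.hEvent };
    DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
    if (which == WAIT_OBJECT_0) {
        printf("worker thread finished first\n");
    } else if (which == WAIT_OBJECT_0 + 1) {
        DWORD got = 0;
        GetOverlappedResult(file, &ov, &got, FALSE);
        printf("read completed: %lu bytes\n", (unsigned long)got);
    }

    CloseHandle(ov.hEvent);
    CloseHandle(file);
    CloseHandle(thread);
    return 0;
}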

I think that there is no need to compare this in detail to POSIX.

POSIX is what it is. It wasn't an effort to define a new clean API - it simply ratified the existing state of affairs, which had grown organically. It was a success because it exists and is portable and there are implementations from multiple vendors.

But we can't lie to ourselves by pretending that it is elegant or orthogonal or anything but a horrible mess.

Google's behavior makes sense

Posted Jul 13, 2009 10:57 UTC (Mon) by jlokier (guest, #52227) [Link]

I've spent a huge amount of time working with POSIX on lots of platforms, and with WIN32 in the areas of I/O, events and threads to a lesser extent.

WaitForMultipleObjects is not a strong point of WIN32 because it has so many limitations, and because you need to deal with the windows message queue and Winsock differently, and async I/O has lots of ways to report status, each with different performance characteristics and working with different versions of Windows. Apart from waiting for events and objects, there's APCs and Completion Ports and Vista's GetQueuedCompletionStatusEx, and you can wait on I/O handles or OVERLAPPED objects. For some things you need to create a "window" to receive events, though it's not really a "GUI window" as another post suggests, just a Windows object for demultiplexing the message queue.

POSIX is equally ugly when it comes to waiting for different things at the same time: select, aio_suspend, semop. With POSIX and WIN32 both, you have to use threads to wait for different classes of thing together. Like WIN32, POSIX select() has a fixed limit on the number of handles you can wait for: FD_SETSIZE, typically 256. Like WIN32, there are methods and alternate APIs to overcome the limits.
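
For reference, the portable POSIX version of "wait for several descriptors" looks roughly like this in C; every descriptor has to fit under FD_SETSIZE, which is the limit being complained about (the two pipes are just placeholders).

/* select() on two pipe read ends; fds >= FD_SETSIZE cannot be waited on. */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

int main(void)
{
    int a[2], b[2];
    pipe(a);
    pipe(b);
    write(b[1], "x", 1);             /* make one of them readable */

    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(a[0], &readable);
    FD_SET(b[0], &readable);
    int maxfd = (a[0] > b[0] ? a[0] : b[0]);

    int n = select(maxfd + 1, &readable, NULL, NULL, NULL);
    if (n > 0) {
        if (FD_ISSET(a[0], &readable))
            printf("pipe a is readable\n");
        if (FD_ISSET(b[0], &readable))
            printf("pipe b is readable\n");
    }
    return 0;
}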

WIN32 I/O API is quite sensible. Basically equivalent to POSIX with a few extra options, and longer names for things. It's always had async I/O, and the I/O event dispatch is versatile (too versatile: no less than 4 ways to wait for async I/O), though awkwardly the different options aren't orthogonal. However, in practice you can't use all the options on all versions of Windows in every combination; it's not as orthogonal as it looks. The async I/O doesn't work on every version, and lacks important functionality prior to Vista.

It's all much the same as Linux really. WIN32 API is better documented and makes sense, but has a bunch of limitations and version-specificity, which isn't very well documented. Linux is similar when you use the non-POSIX APIs like epoll and eventfd.
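
And a sketch of the non-POSIX Linux route mentioned here: epoll waiting on a pipe and an eventfd in a single call, much as WaitForMultipleObjects() does for Win32 handles.

/* Wait on two different kinds of file descriptor with one epoll_wait(). */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

int main(void)
{
    int pipefd[2];
    pipe(pipefd);
    int efd = eventfd(0, 0);
    int ep = epoll_create1(0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = pipefd[0] };
    epoll_ctl(ep, EPOLL_CTL_ADD, pipefd[0], &ev);
    ev.data.fd = efd;
    epoll_ctl(ep, EPOLL_CTL_ADD, efd, &ev);

    uint64_t one = 1;
    write(efd, &one, sizeof(one));   /* make something happen */

    struct epoll_event out[2];
    int n = epoll_wait(ep, out, 2, -1);
    for (int i = 0; i < n; i++)
        printf("fd %d is ready\n", out[i].data.fd);

    close(ep);
    close(efd);
    close(pipefd[0]);
    close(pipefd[1]);
    return 0;
}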

WIN32's DLLs are a mess compared with ELF. Especially dynamic loading - enjoy the crashes, lockups (prior to Vista) and lack of load-time initialisation. People have to stick to certain patterns of DLL use to avoid the obscure problems, which are quite nasty.

WIN32's threading primitives: mutexes, condition variables and so on were poor compared with POSIX threads, until Vista. Vista copies some of the good stuff and is comparable with Linux NPTL, that is to say, quite good. Vista's async I/O is better than Linux's and POSIX generally. But who wants to write Vista-specific code?


Copyright © 2009, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds