HP dropping webOS devices
"In addition, HP reported that it plans to announce that it will discontinue operations for webOS devices, specifically the TouchPad and webOS phones. HP will continue to explore options to optimize the value of webOS software going forward."
Posted Aug 19, 2011 8:13 UTC (Fri)
by imgx64 (guest, #78590)
[Link] (7 responses)
On the other hand, I've always been a huge fan of Palm, and I'm a bit relieved to know that someone has finally put it out of a misery that has dragged on for far too long. I won't have false hope anymore.
Posted Aug 19, 2011 10:11 UTC (Fri)
by epa (subscriber, #39769)
[Link] (6 responses)
Posted Aug 19, 2011 13:26 UTC (Fri)
by ejr (subscriber, #51652)
[Link] (4 responses)
Posted Aug 20, 2011 0:58 UTC (Sat)
by cry_regarder (subscriber, #50545)
[Link] (3 responses)
Cry
Posted Aug 20, 2011 19:28 UTC (Sat)
by sbergman27 (guest, #10767)
[Link] (1 responses)
I go to the outlets where handheld mathematics-related devices are sold, and instead of drooling, I yawn. There's nothing appreciably better than the 48SX/GX. Most seem more aimed at being the "Cliffs Notes" of math than anything else. And if you need more than what the HP48s can do, Python packages are your recourse. (Perhaps not the only one, but certainly the obvious one.) In mentioning the HP48 series, you have insightfully noted an effective border between eras.
It still surprises me how the expected "Moore's Law" evolution in that area simply terminated. Scissors cut paper. Paper smothers rock. Rock breaks scissors. Market forces trump Moore's Law.
And yeah, Moore's law specifically deals with transistor density. But still...
Posted Aug 22, 2011 11:24 UTC (Mon)
by marcH (subscriber, #57642)
[Link]
http://www.google.com/search?q=science+enrolment+drop
More work for us...
Posted Aug 25, 2011 15:21 UTC (Thu)
by achiang (guest, #47297)
[Link]
In certain ways, it's better than the real hardware due to the smaller form factor of your phone. :)
Posted Aug 19, 2011 15:33 UTC (Fri)
by tjc (guest, #137)
[Link]
Two weeks! ;)
Posted Aug 19, 2011 9:19 UTC (Fri)
by Hausvib6 (guest, #70606)
[Link] (6 responses)
Posted Aug 19, 2011 9:42 UTC (Fri)
by alonso (guest, #2828)
[Link] (4 responses)
Posted Aug 19, 2011 11:33 UTC (Fri)
by petur (guest, #73362)
[Link] (1 responses)
Posted Aug 19, 2011 16:04 UTC (Fri)
by alonso (guest, #2828)
[Link]
Posted Aug 19, 2011 13:31 UTC (Fri)
by Hausvib6 (guest, #70606)
[Link] (1 responses)
The recent Motorola Mobility acquisition by Google can become a really strong reason to seriously develop Bada further to be on par with Android/iOS, although it is unlikely that Google will try to alienate its allies.
The more the merrier.. except for the nasty fragmentation that leads to duplicated efforts and incompatibilities.
Posted Aug 23, 2011 8:56 UTC (Tue)
by jospoortvliet (guest, #33164)
[Link]
Posted Aug 19, 2011 17:44 UTC (Fri)
by vblum (guest, #1151)
[Link]
Sad to see WebOS go.
Posted Aug 19, 2011 9:38 UTC (Fri)
by robert_s (subscriber, #42402)
[Link] (1 responses)
Posted Aug 20, 2011 16:49 UTC (Sat)
by hingo (guest, #14792)
[Link]
Posted Aug 19, 2011 10:37 UTC (Fri)
by gus3 (guest, #61103)
[Link] (5 responses)
Posted Aug 19, 2011 12:02 UTC (Fri)
by renox (guest, #23785)
[Link] (2 responses)
Posted Aug 19, 2011 12:55 UTC (Fri)
by ndye (guest, #9947)
[Link]
(I've got an illuminating story about their EVA disk array . . . .)
We renters (since that's how these hardware/software vendors treat us customers) need to get more uppity about escrowing what's required for support after the original vendor wants to move on.
Posted Aug 20, 2011 9:02 UTC (Sat)
by Los__D (guest, #15263)
[Link]
It would probably be as easy as loading it.
However, the performance today would be no match for the DSPs built into newer FPGAs.
Posted Aug 19, 2011 17:10 UTC (Fri)
by bk (guest, #25617)
[Link] (1 responses)
According to Wikipedia all the Alpha IP was sold to Intel.
Posted Aug 19, 2011 22:59 UTC (Fri)
by gus3 (guest, #61103)
[Link]
Posted Aug 19, 2011 11:47 UTC (Fri)
by spaetz (guest, #32870)
[Link]
Just as enticing as buying a Meego phone.
Posted Aug 19, 2011 13:13 UTC (Fri)
by dufkaf (guest, #10358)
[Link]
Posted Aug 19, 2011 15:32 UTC (Fri)
by pr1268 (guest, #24648)
[Link] (61 responses)
This article seems to imply that HP's WebOS ran "slow" due to outdated hardware. I wonder if this might have precipitated a lack of interest (and subsequent sales) in the device (or maybe a paucity of 3rd-party applications?). According to this page (in which the above was linked), WebOS ran twice as fast on an iPad2 as a TouchPad (which has me really curious now as to how they did that, since Apple's hardware is notoriously locked-down and/or difficult to root). Interesting...
Posted Aug 19, 2011 17:18 UTC (Fri)
by cmccabe (guest, #60281)
[Link] (60 responses)
WebOS ran slow and power-hungry due to being based around Javascript, a language that was never designed for high efficiency.
I guess the theory was that being based around HTML/CSS/Javascript would make it easier for developers to hop on board. But the reality is that WebOS got a late start and didn't really offer developers anything that other platforms didn't already.
The business model was also pretty questionable. HP is a company built around selling lots and lots of hardware. Did they really expect other OEMs to enthusiastically jump on board their proprietary OS, knowing they would be competing directly with HP?
The only surprise here is that it took HP so long to reach the obvious conclusion and focus its resources elsewhere.
Posted Aug 19, 2011 18:35 UTC (Fri)
by tstover (guest, #56283)
[Link] (57 responses)
Posted Aug 19, 2011 19:29 UTC (Fri)
by cmccabe (guest, #60281)
[Link] (56 responses)
Most mobile applications are all about the graphical user interface, though. Writing graphical user interfaces in C is not really a good choice.
I never understood the hate for Java. The mobile platforms insulate you from the most irritating parts, like JVM startup time, classpath, bundled libraries, JVM differences, and old-fashioned UI toolkits.
Posted Aug 20, 2011 13:37 UTC (Sat)
by khim (subscriber, #9252)
[Link] (55 responses)
It's very easy to create a "passable UI" with Java. It's hard to create a "great UI" with Java. GC is the primary reason: it actively encourages designs which lead to a slow, sloppy UI - and you cannot fix that without basically a full redesign of everything. The only solution is to paper over the problem with hugely excessive horsepower (basically, if you throw 10x the CPU power and 10x the memory at Java compared to sane languages, you'll get the same experience). Note that Apple supports GC on Macs but does not support it on iOS: it knows the iPhone/iPad are not yet powerful enough for that.
Posted Aug 20, 2011 15:42 UTC (Sat)
by cmccabe (guest, #60281)
[Link] (54 responses)
All of these problems have been fixed. The new version of Android has incremental garbage collection. This helps a lot with the responsiveness of the UI. JVM start up time is not an issue because the JVM is always started. The UI toolkit is fully native rather than just being an ugly shim on top of something else.
In fact, nearly all UI development today is done in garbage-collected languages. On the desktop, you have .NET, Javascript, and HTML; on the server-side, you have more .NET, Java, Ruby, and Python. Java and .NET are actually the fastest of the bunch.
There are still some people developing UIs in C++ or C on the desktop. The main reason you would do this is because the rest of the app needs to be in C++ or C for speed reasons.
Typical Android handsets tend to clock in at somewhere between 800 MHz and 1.2 GHz today. Apple's flagship device is still at 800 MHz (I think?) Experience suggests that you don't need "10x horsepower" to use GC.
iPhones still do tend to get better battery life than Android handsets. I expect this difference to narrow as users start expecting their phones to do more. For example, up until recently the iPhone had no way to run user-installed background processes. But due to user demand, and the fact that Android had one, they had to add it in.
Posted Aug 21, 2011 6:58 UTC (Sun)
by tzafrir (subscriber, #11501)
[Link] (5 responses)
Javascript, Ruby (most implementations - except, maybe, JRuby and IronRuby), Python (except JPython?), etc. C++ likewise has its own reference counting partial garbage collection facility.
With reference counting there's no inherent need for a single pass (or an elaborate way to work around the need for one).
Posted Aug 21, 2011 8:39 UTC (Sun)
by khim (subscriber, #9252)
[Link] (1 responses)
With refcounting you still must think about object lifetimes, so you will still create sane designs, because you must make sure you don't create reference loops. But full GC encourages "don't know and don't care when this object will be removed" designs - and those can only be fixed with a full rewrite.
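To make the loop problem concrete, here is a minimal C++ sketch (modern std::shared_ptr/std::weak_ptr; the tr1 and boost versions behave the same way, and the Node type is made up for illustration). Two objects holding strong references to each other would never be freed; making the back-reference weak keeps destruction deterministic.

    #include <memory>

    struct Node {
        std::shared_ptr<Node> child;   // strong reference: keeps the child alive
        std::weak_ptr<Node>   parent;  // weak back-reference: breaks the loop
    };

    int main() {
        std::shared_ptr<Node> a(new Node);
        std::shared_ptr<Node> b(new Node);
        a->child  = b;   // a owns b
        b->parent = a;   // if this were a shared_ptr, a and b would keep each
                         // other alive forever and neither would be freed
        return 0;        // both nodes are destroyed here, deterministically
    }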
Posted Aug 31, 2011 14:25 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Aug 27, 2011 18:30 UTC (Sat)
by BenHutchings (subscriber, #37955)
[Link] (2 responses)
Reference counting can also result in long pauses due to cascading destruction. In fact, it can be worse than incremental GC in this respect.
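To illustrate the cascading-destruction point, a hedged C++ sketch with a made-up Node type: dropping the last reference to the head of a long shared_ptr chain frees every node in one synchronous cascade, so the pause lands wherever that final reset happens (and a deep enough chain can even overflow the stack through the recursive destructors).

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;
    };

    int main() {
        // Build a long singly linked chain, one refcount per link.
        std::shared_ptr<Node> head(new Node);
        std::shared_ptr<Node> cur = head;
        for (int i = 0; i < 100000; ++i) {
            cur->next.reset(new Node);
            cur = cur->next;
        }
        cur.reset();   // drop our extra reference to the tail
        head.reset();  // the whole chain is destroyed right here, recursively,
                       // as one long pause rather than a little at a time
        return 0;
    }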
Posted Aug 27, 2011 23:47 UTC (Sat)
by andresfreund (subscriber, #69562)
[Link] (1 responses)
I guess the point is that predictable deletion has nice properties for implementing things in an RAII-ish fashion, which probably outweighs the problems of expensive cascading deletion.
Posted Aug 28, 2011 23:03 UTC (Sun)
by foom (subscriber, #14868)
[Link]
There's a reason that Python introduced the "with" statement: to allow for easily-written *actually* predictable resource closing.
Posted Aug 21, 2011 8:35 UTC (Sun)
by khim (subscriber, #9252)
[Link] (47 responses)
> In fact, nearly all UI development today is done in garbage-collected languages.
...with just a single, yet significant, exception: Apple. It actively fights the GC disease on iOS. There are other, smaller bastions: Microsoft Office, for example. And every time you see a "latest and greatest" Java or .NET-based UI, you see "choppy and laggy"; when you see something developed using the old-school non-GC-based approach, you see "slick and beautiful". Coincidence? I think not.
> Typical Android handsets tend to clock in at somewhere between 800 MHz and 1.2 GHz today. Apple's flagship device is still at 800 MHz (I think?) Experience suggests that you don't need "10x horsepower" to use GC.
Note that the typical Android handset already has a dual-core CPU while Apple is using a slower single-core CPU. And still the experience of iOS is "smooth butter" while Android's experience is "choppy and laggy". I guess when Android handsets get four-core 2GHz CPUs they will finally be able to reach the level of responsiveness of Apple's single-core 800MHz CPU. That is exactly 10x the horsepower :-)
> There are still some people developing UIs in C++ or C on the desktop. The main reason you would do this is because the rest of the app needs to be in C++ or C for speed reasons.
No. The main reason is that you want something good-looking on lower-end systems. Note how Visual Studio (which was initially C/C++) is slowly morphing into a .NET-based monster while the Microsoft Office team fights .NET tooth and nail. That's because MS Office has to work great on low-end systems, not just on 16-core/16GiB monsters.
Posted Aug 21, 2011 12:16 UTC (Sun)
by alankila (guest, #47141)
[Link] (46 responses)
I'm testing the application on Galaxy S. No choppiness is evident. According to logcat, GC pause lengths vary from < 10 ms to 37 ms, majority of the pauses being 20 ms. I don't notice them at all. There is unfortunately one thing that makes an app of this type a bit choppy: sometimes a new texture must be uploaded to the GPU, and it must be done from the rendering thread, because it has the only OpenGL context! And for the 2048x2048 (8 MB) texture, this operation takes something like 60 ms to do, and you can clearly see how the app misses a beat there.
I've also written the iOS version of the app. It behaves virtually the same, as the architecture is of course the same. JPEG decoding is done in an NSOperationQueue which is backed by a private thread, and the texture upload is inline, as this is how it has to be. On iOS there's also a slight pause in the animation during the texture upload.
However, iOS version was much harder to write because objC was new to me, and xcode 4 is fairly buggy and crashes quite often, and then there's all that additional complexity around object allocation initialization, and autorelease pools and references that you need to explicitly care for. The class files are longer than their java counterpart, and the need to free every little object explicitly adds a chance for making mistakes. I really don't believe in requiring programmers to forcibly care about this sort of stuff when most of these allocations that add to the programming-type complexity are so small that it doesn't even matter if they hang around for a while and are taken care of by GC at some convenient time later.
To summarize: no, I don't think GC is an issue anymore. Some API issues like the possibility for only synchronous texture uploads are far more important.
Posted Aug 21, 2011 15:26 UTC (Sun)
by endecotp (guest, #36428)
[Link] (12 responses)
> sometimes a new texture must be uploaded to the GPU, and it must be done from the rendering thread, because [on Android] it has the only OpenGL context!
My understanding is that in principle you can have multiple OpenGL contexts on Android, but in practice (according to gurus on the Android forums) this is likely to be unreliable due to implementation bugs. Since there are many different OpenGL implementations on Android - one per GPU vendor - you would need to do a lot of testing before trying to use this.
(This was one of several bad things that I discovered about Android when I experimented with it last year; another was that the debugger didn't work with multi-threaded programs. I was actually quite shocked by how poor a lot of it was once I poked below the surface. I believe some things have got better in the meantime.)
> On iOS there's also a slight pause in the animation during the texture upload.
On iOS, you can definitely have multiple OpenGL contexts, one per thread, both in theory and in practice.
> objC was new to me, and xcode 4 is fairly buggy and crashes quite often, and then there's all that additional complexity around object allocation initialization, and autorelease pools and references that you need to explicitly care for. The class files are longer than their java counterpart, and the need to free every little object explicitly adds a chance for making mistakes. I really don't believe in requiring programmers to forcibly care about this sort of stuff when most of these allocations that add to the programming-type complexity are so small that it doesn't even matter if they hang around for a while and are taken care of by GC at some convenient time later.
Right, I agree. So does Apple, and they now have GC on Mac OS and "automatic reference counting" on iOS.
However, my suggestion would be to use C++ with smart pointers. This has the advantage of working on every platform that you might ever want to use - even WebOS! - but not Windows Mobile.
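For what it's worth, a minimal sketch of that smart-pointer style (std::shared_ptr with a custom deleter; boost::shared_ptr or std::tr1::shared_ptr work the same way, and the file name is just an example): the resource is released deterministically when the last owner goes out of scope, with no garbage collector involved.

    #include <cstdio>
    #include <memory>

    static void close_file(std::FILE* fp) {
        if (fp)
            std::fclose(fp);
    }

    int main() {
        // shared_ptr with a custom deleter: fclose() runs exactly when the
        // last owner of the handle goes away - on any platform with a C++
        // compiler, no garbage collector required.
        std::shared_ptr<std::FILE> f(std::fopen("settings.cfg", "r"), close_file);
        if (!f)
            return 1;                      // open failed, nothing to clean up

        char buf[256];
        while (std::fgets(buf, sizeof buf, f.get()))
            std::fputs(buf, stdout);
        return 0;                          // close_file() runs here
    }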
Posted Aug 21, 2011 16:16 UTC (Sun)
by alankila (guest, #47141)
[Link]
Posted Aug 22, 2011 1:52 UTC (Mon)
by cmccabe (guest, #60281)
[Link] (10 responses)
Multi-threaded debugging has always worked fine for pure Java code on Android. Debugging multi-threaded native code (aka NDK code, aka C/C++ code), is broken on Android 2.2 but it works on Android 2.3 and above.
> However, my suggestion would be to use C++ with smart pointers. This has the advantage of working on every platform that you might ever want to use - even WebOS! - but not Windows Mobile.
Um, it depends on what "every platform you might ever want to use" is. Neither C nor C++ are supported at all on Blackberry or Windows Phone 7.
Android supports C and C++ through the NDK. However, the older NDK kits do not support C++ exceptions. There are reports that some people have gotten exceptions to work with the older NDK by including glibc and libstdc++ in their application, but that increases the application size by many megabytes.
Without exceptions, you cannot use std::tr1::shared_ptr, which is more or less the standard smart pointer in the C++ world these days. Most of the stuff in the STL uses exceptions too, which is inconvenient to say the least.
There is this thing called Objective C++ that you can use on iOS if you want. However, that is not necessarily a good idea. Basically, Apple views Objective C as replacement for C++, and only supports Objective C++ for compatibility reasons.
Posted Aug 22, 2011 16:25 UTC (Mon)
by endecotp (guest, #36428)
[Link]
Right. I own 4 Android devices, and currently only one of them has >=2.3 available; that's my Galaxy Tab, and Samsung released the update just a few weeks ago. So on my other 3 devices I can still only do "printf-style" debugging. My Motorola Defy has only just got an update to 2.2!
It's actually even worse than that; on the 2.2 Galaxy Tab some vital symlink or somesuch was missing, which made even single-threaded native debugging impossible.
> the older NDK kits do not support C++ exceptions.
Right, that's one of the other surprising "bad things" that I was referring to. I was able to work around it by installing a hacked version of the tools from crystax.net.
> There is this thing called Objective C++ that you can use on iOS if you want.
I'm very familiar with it :-)
> However, that is not necessarily a good idea. Basically, Apple views Objective C as replacement for C++, and only supports Objective C++ for compatibility reasons.
"Citation Required".
Posted Aug 23, 2011 10:44 UTC (Tue)
by jwakely (subscriber, #60262)
[Link] (8 responses)
GCC's C++ standard library can be used with -fno-exceptions, and I'd be very surprised if other implementations don't have something equivalent. In normal use there are few places where the C++ standard library throws exceptions, and they can often be avoided by checking preconditions first (e.g. don't call std::vector::at() without first checking that the index isn't out of range).
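A tiny illustration of that precondition-checking style, assuming a build along the lines of g++ -fno-exceptions; the helper function is made up for the example.

    #include <cstdio>
    #include <vector>

    // With exceptions disabled, v.at(i) is not an option: on a bad index it
    // would have to throw std::out_of_range. Checking the precondition first
    // and using operator[] (which never throws) avoids the throwing path.
    int element_or_default(const std::vector<int>& v, std::size_t i, int def) {
        if (i >= v.size())
            return def;
        return v[i];
    }

    int main() {
        std::vector<int> v;
        v.push_back(42);
        std::printf("%d %d\n", element_or_default(v, 0, -1),
                               element_or_default(v, 5, -1));
        return 0;
    }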
Posted Aug 23, 2011 19:24 UTC (Tue)
by cmccabe (guest, #60281)
[Link] (7 responses)
That is technically true, but a little bit misleading.
Code using tr1::shared_ptr will not compile without support for RTTI. Now, you could enable RTTI without enabling exceptions, but nobody actually does, because RTTI requires exceptions in order to function in any reasonably sane way. Otherwise, the entire program aborts when a dynamic_cast to a reference type fails. And I don't think even the most die-hard C++ advocate could put a positive spin on that.
Realizing this, Google compiled their old libc without support for exceptions or RTTI. So you will not be able to use shared_ptr with the old NDK, only with the new one-- sorry.
There is talk of removing the dependency on RTTI from tr1::shared_ptr. But of course that will take years to be agreed on by everyone and rolled out, assuming that it goes forward.
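To spell out the dynamic_cast point with a made-up class hierarchy: the pointer form reports a failed cast as a null pointer you can test, while the reference form has no null to return, so its only way to report failure is to throw std::bad_cast - and with exceptions disabled that degenerates into terminating the program.

    #include <cstdio>

    struct Base    { virtual ~Base() {} };
    struct Derived : Base {};
    struct Other   : Base {};

    int main() {
        Other o;
        Base* b = &o;

        // Pointer form: failure is just a null result that can be handled.
        if (Derived* d = dynamic_cast<Derived*>(b))
            std::printf("it is a Derived (%p)\n", static_cast<void*>(d));
        else
            std::printf("not a Derived, and we can recover\n");

        // Reference form: a failed cast must throw std::bad_cast, because
        // there is no such thing as a null reference. Without exceptions the
        // only remaining behaviour is to abort the whole program:
        //
        //     Derived& d2 = dynamic_cast<Derived&>(*b);  // throws (or aborts)

        return 0;
    }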
Posted Aug 23, 2011 20:09 UTC (Tue)
by cmccabe (guest, #60281)
[Link] (6 responses)
er, I meant libstdc++
Posted Aug 24, 2011 3:00 UTC (Wed)
by njs (subscriber, #40338)
[Link]
Posted Aug 24, 2011 21:53 UTC (Wed)
by jwakely (subscriber, #60262)
[Link] (4 responses)
> Without exceptions, you cannot use std::tr1::shared_ptr, which is more or less the standard smart pointer in the C++ world these days. Most of the stuff in the STL uses exceptions too, which is inconvenient to say the least.
Both boost::shared_ptr and GCC's tr1::shared_ptr can be used without exceptions. Failed memory allocations will abort. The only other throwing operation is converting a weak_ptr to a shared_ptr, which can be replaced by calling weak_ptr::lock(), which is non-throwing.
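A small sketch of that non-throwing pattern (names made up): test the result of weak_ptr::lock() instead of using the shared_ptr(weak_ptr) constructor, which throws std::bad_weak_ptr when the object is already gone.

    #include <cstdio>
    #include <memory>

    int main() {
        std::shared_ptr<int> owner(new int(42));
        std::weak_ptr<int>   observer(owner);

        owner.reset();   // the object is now gone; the weak_ptr has expired

        // std::shared_ptr<int> p(observer);  // would throw std::bad_weak_ptr
        if (std::shared_ptr<int> p = observer.lock())
            std::printf("still alive: %d\n", *p);
        else
            std::printf("already gone, and no exception was needed\n");
        return 0;
    }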
Posted Aug 24, 2011 22:12 UTC (Wed)
by jwakely (subscriber, #60262)
[Link] (3 responses)
> There is talk of removing the dependency on RTTI from tr1::shared_ptr. But of course that will take years to be agreed on by everyone and rolled out, assuming that it goes forward.
What talk exactly? You know TR1 is finished, right? It is what it is; there will be no more changes to the document. But if you want changes to libstdc++'s implementation of tr1::shared_ptr, just ask me; if it's reasonable I'll consider it.
But please stop making misleading comments about C++ that ignore facts. As the bug I linked to shows, it didn't take years to agree on; it took 8 days.
Posted Aug 24, 2011 22:38 UTC (Wed)
by jwakely (subscriber, #60262)
[Link]
Posted Sep 4, 2011 20:45 UTC (Sun)
by cmccabe (guest, #60281)
[Link] (1 responses)
I did not mean to imply that the libstdc++ maintainers were slow. However, rollout of new libstdc++ versions can be quite delayed, as you know. Using shared_ptr without exceptions on older Android versions just isn't going to compile, and it would be misleading to suggest otherwise. That was what I was trying to avoid.
Just out of curiosity, are the -fno-rtti and -fno-exceptions modes part of any standard, or just something that GCC and a few other compilers implement?
P.S. as a former C++ user, thanks for all your work on libstdc++
Posted Sep 4, 2011 21:08 UTC (Sun)
by jwakely (subscriber, #60262)
[Link]
There is (or was) an "Embedded C++" dialect which omits RTTI and exceptions, among other features, but it's not a standard and as Stroustrup has said "To the best of my knowledge EC++ is dead (2004), and if it isn't it ought to be."
Posted Aug 21, 2011 17:55 UTC (Sun)
by khim (subscriber, #9252)
[Link] (32 responses)
> I'm testing the application on Galaxy S. No choppiness is evident. According to logcat, GC pause lengths vary from < 10 ms to 37 ms, majority of the pauses being 20 ms.
That's the problem with GC: it works quite well in tests, but not so well in practice. What happens when your application has been running for half an hour and memory is badly fragmented? What happens when there are other applications in the background which also need to run GC frequently? This is where 2-4-8 2GHz cores will be helpful to mitigate the GC disease. Eventually, when hardware is significantly more powerful, GC-based UIs will finally work as well as the non-GC-based original iPhone did back in 2007 with its 412MHz CPU... Perhaps by then Apple will decide that it's OK to give GC to iOS developers too.
> However, iOS version was much harder to write because objC was new to me, and xcode 4 is fairly buggy and crashes quite often, and then there's all that additional complexity around object allocation initialization, and autorelease pools and references that you need to explicitly care for.
And this was the whole point, right? Java makes it easy to write a mediocre UI, but not so easy to write a good UI. Objective-C and the iOS tools in general are geared toward a great UI, but sometimes it's hard to write something which "just barely works". Which was my original point.
Posted Aug 21, 2011 19:23 UTC (Sun)
by HelloWorld (guest, #56129)
[Link] (27 responses)
> What happens when your application has been running for half an hour and memory is badly fragmented?
Actually, fragmentation is usually less of an issue on garbage-collected systems, because the GC can defragment memory, which isn't feasible in languages like C where pointers aren't opaque.
> What happens when there are other applications in the background which also need to run GC frequently?
Why should that be a problem?
Posted Aug 22, 2011 12:43 UTC (Mon)
by khim (subscriber, #9252)
[Link] (26 responses)
> Actually, fragmentation is usually less of an issue on garbage-collected systems, because the GC can defragment memory, which isn't feasible in languages like C where pointers aren't opaque.
Right. And this is where you experience the most extreme dropouts and slowdowns. How will you compact a heap with multimegabyte arrays without significant delays?
Posted Aug 22, 2011 15:04 UTC (Mon)
by HelloWorld (guest, #56129)
[Link] (25 responses)
> Right. And this is where you experience the most extreme dropouts and slowdowns.
Really? Do you have any data to back this up? Can you cite any measurements to that effect?
> How will you compact a heap with multimegabyte arrays without significant delays?
I don't know, but that doesn't mean it's not possible. I mean, people have been writing books about GCs (e.g. http://www.amazon.com/dp/0471941484/), do you really expect me to answer this kind of question in an LWN comment?
Posted Aug 22, 2011 17:00 UTC (Mon)
by khim (subscriber, #9252)
[Link] (24 responses)
> Really? Do you have any data to back this up? Can you cite any measurements to that effect?
Ah, now we are back to the whole BFS debate. No, I have no benchmarks to present. I know how to make a Java interface not "extremely sucky" but "kind-of-acceptable" - but yes, it's kind of black magic and I'm not 100% sure all the techniques are actually required and proper. The main guide is dropout benchmarks for real programs: you just log the timing of operations and tweak the architecture till they show acceptable timings. And I know I never need such black magic for simple, non-GC-driven programs: there I can measure timings without a lot of complicated experiments and be reasonably sure they will translate well to the end product. Not so with GC.
> I don't know, but that doesn't mean it's not possible. I mean, people have been writing books about GCs (e.g. http://www.amazon.com/dp/0471941484/), do you really expect me to answer this kind of question in an LWN comment?
Sure. These are typical problems for real programs. And if the best answer you can offer is "there are a lot of papers on the subject, surely the CS wizards solved the problem long ago", then I'm not convinced. Because these are the same "wizards from the Ivory Tower" that proclaimed 20 years ago: "Among the people who actually design operating systems, the debate is essentially over. Microkernels have won." That was a nice theory, but practice is different: in practice two out of three surviving major OSes are microkernel-based only in name and one is not a microkernel at all. I suspect the same will happen with GC: the wizards promised that with GC you can just ignore memory issues and concentrate on what you need to do, but in practice that only works for batch computation (things like compilers or background indexers), while in a UI you spend so much time fighting the GC that the whole savings become utterly pointless.
Posted Aug 22, 2011 17:42 UTC (Mon)
by cmccabe (guest, #60281)
[Link] (2 responses)
Why don't you check out the incremental garbage collector that was implemented in Android 2.3? It exists and is deployed in the real world, not an ivory tower.
Posted Aug 25, 2011 13:30 UTC (Thu)
by renox (guest, #23785)
[Link] (1 responses)
To have no pauses, you need a real-time GC, not merely an incremental GC! And real-time GCs, especially free ones, are rare indeed.
Posted Sep 4, 2011 20:41 UTC (Sun)
by cmccabe (guest, #60281)
[Link]
Posted Aug 22, 2011 18:06 UTC (Mon)
by pboddie (guest, #50784)
[Link] (13 responses)
Someone gives you data and you just come back with more conjecture! The good versus bad of GC is debated fairly actively in various communities. For example, CPython uses a reference-counting GC whose performance has been criticised from time to time by various parties for different reasons. As a consequence, implementations like PyPy have chosen different GC architectures. The developer of the HotPy implementation, who now appears to be interested in overhauling CPython, also advocates a generational GC, I believe, which means that there is some kind of emerging consensus. There has even been quite a bit of work to measure the effect of garbage collection strategies on the general performance of virtual machines, and that has definitely fed into actual implementation decisions. This isn't a bunch of academics hypothesising, but actual real-world stuff.
Posted Aug 22, 2011 19:38 UTC (Mon)
by zlynx (guest, #2285)
[Link] (12 responses)
I can back that up with my own personal experience. Java software and C#/.NET too will show unexpected and very annoying pauses whenever the GC is required to run. C or C++ software, even Perl and Python software never demonstrated this erratic behavior.
If you would like to see it for yourself, please run JEdit on a system with 256MB RAM while editing several files of several megabytes each. That is one application I know I experienced problems with while Emacs and vi never acted funny.
Posted Aug 22, 2011 22:40 UTC (Mon)
by cmccabe (guest, #60281)
[Link] (3 responses)
The JVMs in use on servers do not use incremental garbage collection. I mentioned this in the post that started this thread.
> C or C++ software, even Perl and Python software never demonstrated this erratic behavior.
CPython's garbage collector is based on reference counting, which is inherently incremental. So it's no surprise that you don't see long pauses there.
Perl's garbage collector is "usually" based on reference counting. It does a full mark and sweep when a thread shuts down, apparently. See http://perldoc.perl.org/perlobj.html#Two-Phased-Garbage-C...
> If you would like to see it for yourself, please run JEdit on a system with 256MB RAM while editing several files of several megabytes each.
I would like to, but unfortunately systems with 256MB of RAM are no longer manufactured or sold.
> That is one application I know I experienced problems with while Emacs and vi never acted funny.
Ironically, emacs actually implements its own garbage collector, which is based on mark-and-sweep. So apparently even on your ancient 256 MB machine, old-fashioned stop-the-world GC is fast enough that you don't notice it. As a side note, it's a little strange to see emacs held up as a shining example of efficient programming. The sarcastic joke in the past was that emacs stood for "eight megs and constantly swapping." Somehow that doesn't seem like such a good punchline any more, though. :)
The bottom line is that well-implemented garbage collection has its place on modern systems.
Posted Aug 23, 2011 2:26 UTC (Tue)
by viro (subscriber, #7872)
[Link] (1 responses)
Posted Aug 24, 2011 18:07 UTC (Wed)
by cmccabe (guest, #60281)
[Link]
Posted Aug 24, 2011 9:07 UTC (Wed)
by khim (subscriber, #9252)
[Link]
> Ironically, emacs actually implements its own garbage collector, which is based on mark-and-sweep. So apparently even on your ancient 256 MB machine, old-fashioned stop-the-world GC is fast enough that you don't notice it.
This machine may be ancient, but emacs is beyond ancient.
> As a side note, it's a little strange to see emacs held up as a shining example of efficient programming. The sarcastic joke in the past was that emacs stood for "eight megs and constantly swapping."
You said it yourself: it's still a good punchline - but now it can be used as a showcase for GC. If your system is so beefy, so overpowered, so high-end that you can actually throw 10 times as much at the problem as it actually needs... then sure as hell GC is acceptable. But this is not what GC proponents will tell you, right?
Posted Aug 23, 2011 5:55 UTC (Tue)
by alankila (guest, #47141)
[Link] (3 responses)
I do know, however, that even the mark-and-sweep collectors in practice limit pauses to less than 100 ms, because I have written audio applications in Java with buffers shorter than 100 ms and they run without glitching. This sort of application should have its heap size tuned appropriately, because too large a heap will have a lot of objects to collect when the cycle triggers, and this in turn can cause glitches, whereas a small heap will have frequent but fast enough collections.
The G1GC strategy looks very promising, because it concentrates GC effort on memory regions that are likely to be freed with very little work, and supports soft real-time constraints for limiting the length of a single GC cycle. It looks to be something like an order of magnitude faster than the other strategies, but I haven't personally tried it yet.
Posted Aug 24, 2011 9:00 UTC (Wed)
by khim (subscriber, #9252)
[Link] (2 responses)
> This sort of application should have its heap size tuned appropriately, because too large a heap will have a lot of objects to collect when the cycle triggers, and this in turn can cause glitches, whereas a small heap will have frequent but fast enough collections.
Yeah, it works. But the main stated goal of GC, its raison d'être, is the ability to forget about memory allocation. Remember: no memory leaks, no more confusion about ownership, etc.? GC pseudoscience fails to deliver, sorry. Just like relational database theory fails to deliver. That does not mean both are useless - if your goal is "something working" they are often "good enough". But the problem with Java is the fact that GC is imposed: you cannot avoid it, because the standard library requires it. So in the end you fight to the death with the one thing which was supposed to "free you" from some imaginary tyranny.
Posted Aug 25, 2011 10:37 UTC (Thu)
by alankila (guest, #47141)
[Link]
Think about what I just said: I have positive experience working with a relatively low-latency application in the real world. To get it, all I have to do is adjust one tunable -- the heap size. And I hinted that with G1GC even that adjustment is now unnecessary, but I'll wait until G1GC is actually the default. JDK7 maybe, when it rolls out for OS X I'll probably check it out.
Posted Aug 31, 2011 14:37 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Hint: GCs are actually studied, quite intensively, by actual computer scientists. Science is what scientists do: thus, GC research is science. Enough of the badmouthing. (It is quite evident to me at least that no amount of evidence will change your opinion on this score: further discussion is clearly pointless.)
Posted Aug 23, 2011 11:00 UTC (Tue)
by pboddie (guest, #50784)
[Link] (3 responses)
Yet Perl and Python employ garbage collection. My point was that blanket statements about GC extrapolated from specific implementations of specific platforms don't inform the discussion, but I'm pretty sure that this is a replay of a previous discussion, anyway, fuelled by the same prejudices, of course.
Posted Aug 24, 2011 8:52 UTC (Wed)
by khim (subscriber, #9252)
[Link] (2 responses)
1. Perl and Python are unbearably slow and laggy when used by themselves. Thankfully no one in their right mind will ever try to use them without a wide array of supporting C libraries - and these "fast core" libraries don't employ GC.
2. Most Perl and Python scripts are batch-mode scripts. GC is perfectly OK for such use (when you only care about throughput and not about latency).
3. Current implementations of Perl and Python use a refcounting GC which is, of course, as reliable WRT latency as manual memory allocation. We'll see what happens when PyPy and other "advanced" implementations become mainstream.
Posted Aug 25, 2011 10:55 UTC (Thu)
by alankila (guest, #47141)
[Link] (1 responses)
1. The languages' slowness is due to their interpreting loop rather than the fact that they use GC. As a rule of thumb, an interpreter evaluating an opcode stream appears to be about 10 times slower than a compiler that translates it into native form.
2. Probably true. Missing from this discussion is the observation that even malloc/free can be unpredictable with respect to latency, because they must maintain the free list and occasionally optimize it to maintain allocation performance. No doubt advances in malloc technology have happened and will happen, and there are multiple implementations to pick from that expose different tradeoffs.
3. Python, I've been told, also contains a true GC in addition to the refcounter. I think the largest single advantage of a refcounter is that it gives a very predictable lifecycle for an object, often removing it as soon as the code exits a block. This makes user code simpler to write because filehandles don't need to be closed and database statement handles go away automatically. Still, this is synchronous object destruction, and scheduling object destruction during user think-time would give a better user experience.
Posted Aug 25, 2011 11:41 UTC (Thu)
by pboddie (guest, #50784)
[Link]
Indeed. One can switch off GC and just let programs allocate memory until they exit to see the performance impact of GC, if one is really interested in knowing what that is. This is something people seem to try only infrequently and in pathological cases - it's not a quick speed-up trick.
Quite right. This is not so different from any discussion of latency around garbage collectors.
CPython uses reference counting and a cycle detector. PyPy uses a generational GC by default, if I remember correctly. The PyPy people did spend time evaluating different GCs and found that performance was significantly improved for some over others.
This advantage is almost sacred to the CPython developers, but I don't think it is entirely without its own complications. Since we're apparently obsessed with latency now, I would also note that a reference counting GC is also unlikely to be unproblematic with regard to latency, purely because you can have a cascade of unwanted objects and the GC would then need to be interruptible so that deallocation work could be scheduled at convenient times. This could be done (and most likely is done) in many different kinds of GC.
Posted Aug 22, 2011 19:39 UTC (Mon)
by HelloWorld (guest, #56129)
[Link] (3 responses)
> Sure. These are typical problems for real programs. And if the best answer you can offer is "there are a lot of papers on the subject, surely the CS wizards solved the problem long ago", then I'm not convinced.
Well, then you don't really have a point, do you?
> I suspect the same will happen with GC: the wizards promised that with GC you can just ignore memory issues and concentrate on what you need to do, but in practice that only works for batch computation (things like compilers or background indexers), while in a UI you spend so much time fighting the GC that the whole savings become utterly pointless.
Again, do you have any data to back this up? Because I really doubt that this is a problem for 99% of all applications.
> Because these are the same "wizards from the Ivory Tower" that proclaimed 20 years ago: "Among the people who actually design operating systems, the debate is essentially over. Microkernels have won."
> That was a nice theory, but practice is different: in practice two out of three surviving major OSes are microkernel-based only in name and one is not a microkernel at all.
That doesn't mean anything as long as you don't show that the failure of microkernel-based OSes on the desktop can be attributed directly to the microkernel design. There may well have been other factors: inertia, lack of hardware and software vendors, bad luck, etc. Also, microkernels were actually a success in the embedded world; QNX is just one example of this.
Posted Aug 23, 2011 5:00 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Aug 23, 2011 17:19 UTC (Tue)
by bronson (subscriber, #4806)
[Link] (1 responses)
Like atoms vs. the solar system, they might look similar if you look from afar. It's certainly possible to cherry-pick theoretical similarities. For real-world work, though, they tend to be quite different.
Posted Aug 25, 2011 20:37 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Like you said, I'm certainly standing from afar and squinting more than a little. 8-)
Posted Aug 23, 2011 23:36 UTC (Tue)
by rodgerd (guest, #58896)
[Link] (2 responses)
Boy microkernels didn't get anywhere, did they.
Posted Aug 24, 2011 0:12 UTC (Wed)
by rahulsundaram (subscriber, #21946)
[Link]
Posted Aug 24, 2011 7:44 UTC (Wed)
by anselm (subscriber, #2796)
[Link]
MacOS may be based on the Mach microkernel, but given that it has a big monolithic BSD emulation layer and nothing else on top it can by no stretch of the imagination be called a »microkernel OS«. (Andrew Tanenbaum probably wouldn't like it any more than he likes Linux.) Very similar considerations apply to Windows; having the graphics driver in the kernel is not what one would expect in a microkernel OS.
It is fair to say that the microkernel concept, while academically interesting, has so far mostly failed to stand up to the exigencies of practical application. There are exceptions (QNX comes to mind), but despite previous claims to the contrary no mainstream operating system would, in fact, pass as a »microkernel OS«. At least Linux is honest about it :^)
Posted Aug 22, 2011 3:35 UTC (Mon)
by alankila (guest, #47141)
[Link] (3 responses)
Does objC run object releases in batches after an iteration in the main loop has executed, or does it release them synchronously when I type [foo release]? One of the supposed advantages of GC over malloc/free style management is that the VM can attempt to arrange GC to occur asynchronously with respect to other work being done. I think Dalvik does something like this.
Dalvik does not have a compacting GC last time I heard, so heap fragmentation could in theory result in more unusable memory and therefore more common GC cycles. There's nothing I can do about heap fragmentation on either iOS or Android, so I don't worry about it.
Posted Aug 22, 2011 16:37 UTC (Mon)
by endecotp (guest, #36428)
[Link] (2 responses)
If you [foo release], it releases it synchronously. If you [foo autorelease], it releases it later.
> One of the supposed advantages of GC over malloc/free style management is that the VM can attempt to arrange GC to occur asynchronously with respect to other work being done.
Right. I have never heard anyone claim that objC's autorelease is faster than synchronous release, however. My guess is that any benefit of postponing the actual release is offset by the effort needed to add it to the autorelease pool in the first place.
The most successful example of this sort of "postponed release" that I've seen is Apache's memory pools. Apache manages per-connection and per-request memory pools from which allocations can be made contiguously, with no tracking overhead. These allocations don't need to be individually freed, but rather the whole pool is freed at the end of the request or connection. I am surprised that this sort of thing is not done more often, as it should have both performance and ease-of-coding benefits.
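A toy C++ sketch of that pool idea - this is not Apache's actual apr_pool API, just an illustration of the pattern: allocations are bumped out of one per-request block with no individual frees, and the whole block is released at once when the request ends. Objects that need destructors require separate handling, which is one of the caveats raised in the reply below.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // A deliberately tiny bump allocator: one malloc()'d block per pool,
    // no per-allocation free(), everything released when the pool goes away.
    class Pool {
    public:
        explicit Pool(std::size_t size)
            : buf_(static_cast<char*>(std::malloc(size))),
              size_(buf_ ? size : 0), used_(0) {}
        ~Pool() { std::free(buf_); }          // one free() for the whole request

        void* alloc(std::size_t n) {
            n = (n + 7) & ~static_cast<std::size_t>(7);   // keep 8-byte alignment
            if (used_ + n > size_)
                return 0;                     // pool exhausted
            void* p = buf_ + used_;
            used_ += n;
            return p;
        }

        char* strdup(const char* s) {
            std::size_t len = std::strlen(s) + 1;
            char* p = static_cast<char*>(alloc(len));
            if (p)
                std::memcpy(p, s, len);
            return p;
        }

    private:
        Pool(const Pool&);                    // not copyable (it owns raw memory)
        Pool& operator=(const Pool&);

        char*       buf_;
        std::size_t size_;
        std::size_t used_;
    };

    int main() {
        Pool request(64 * 1024);              // one pool per "request"
        char* method = request.strdup("GET /index.html");
        char* agent  = request.strdup("ExampleBrowser/1.0");
        if (method && agent)
            std::printf("%s (%s)\n", method, agent);
        return 0;                             // the whole pool is freed at once here
    }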
Posted Aug 22, 2011 19:31 UTC (Mon)
by kleptog (subscriber, #1183)
[Link] (1 responses)
It's not free, though. If you want data to survive for longer periods, you need to copy it to a new context. It means that for functions the context of the associated memory becomes part of the API, and you need to be careful that people respect the conventions or you easily get dangling pointers. And valgrind gets confused. And external libraries sometimes don't get along well with it. And you don't get destructors (although I understand Samba has a memory pool architecture with destructors).
But you never get memory leaks, which is good for reliability. And that makes up for a lot.
Posted Aug 23, 2011 10:59 UTC (Tue)
by jwakely (subscriber, #60262)
[Link]
Posted Aug 20, 2011 3:30 UTC (Sat)
by pr1268 (guest, #24648)
[Link] (1 responses)
> WebOS ran slow and power-hungry due to being based around Javascript, a language that was never designed for high efficiency.
Be that as it may, it still doesn't explain the speed difference of WebOS on the two different hardware platforms. Unless the iPad 2 has some JS speed booster/optimizer and WebOS runs on top of this interpreter, I still don't see how "twice as fast" could be realized in software alone. I do agree that JS seems like a poor choice on which to base an entire OS.
Posted Aug 20, 2011 15:53 UTC (Sat)
by cmccabe (guest, #60281)
[Link]
For example, WebOS uses the V8 Javascript engine, whereas iOS uses Nitro. As far as I know, V8 is still the faster of the two. So I'm sure you could come up with some benchmark like loading a complicated web page where webOS wins.
Posted Aug 19, 2011 16:07 UTC (Fri)
by martinfick (subscriber, #4455)
[Link] (2 responses)
Heck, why anyone would not make their hardware run Android, even if it came with another OS, boggles my mind. As a customer, I appreciate having hardware with OS choices...
Posted Aug 19, 2011 17:04 UTC (Fri)
by anselm (subscriber, #2796)
[Link]
Nobody says they're dropping WebOS, the software, yet. They said they would stop making WebOS devices.
On the other hand, whether HP will manage to get somebody else interested in making devices to run WebOS, to a point where it makes sense to put resources into developing the software further, is anyone's guess so we shouldn't be surprised to see them shut down WebOS altogether at some point down the road.
Posted Aug 20, 2011 21:03 UTC (Sat)
by ksmathers (guest, #2353)
[Link]
The only proprietary part was pretty much the look and feel, and the application manager as far as I could tell from my reading.
Posted Aug 22, 2011 16:42 UTC (Mon)
by endecotp (guest, #36428)
[Link] (1 responses)
Is it worth buying one and trying to run something else on it?
Is there anyone here with any experience of e.g. the boot architecture who can say how easy it would be to repurpose these devices?
Posted Aug 22, 2011 17:21 UTC (Mon)
by rfunk (subscriber, #4054)
[Link]
http://rootzwiki.com/showthread.php?t=3327
http://fzservers.com/touchdroid/
Posted Aug 22, 2011 19:15 UTC (Mon)
by xxiao (guest, #9631)
[Link]
"Badas overall market share jumped from .9 percent to 1.9 percent, but thats still better than Microsofts take, which fell from 4.9 percent (mostly consisting of legacy Windows Mobile users) to 1.6 percent."
http://venturebeat.com/2011/08/11/bada-beats-windows-phone/
Five weeks.
Confusingly, there is a Twitter post by Ari Jaaksi giving some hope, with the text "We will continue webOS platform full speed!" posted 15 hours ago; see http://twitter.com/#!/jaaksi
And by the way, it looks like Ari's jump from the dying MeeGo pan into HP's fire is the source of some (not so) funny comments here: http://jaaksi.blogspot.com/2011/08/first-webos-update-for-touchpad.html
Not sure anyone should mourn the passing of another proprietary OS.