Opera moves to WebKit and V8
Opera has announced that it will stop using its own rendering engine and will migrate its browser to WebKit and the V8 JavaScript engine—specifically, the Chromium flavor of WebKit. Opera Mobile will be ported first, with the desktop edition to follow later. The announcement downplays the significance of the change, saying: "Of course, a browser is much more than just a renderer and a JS engine, so this is primarily an 'under the hood' change. Consumers will initially notice better site compatibility, especially with mobile-facing sites - many of which have only been tested in WebKit browsers."
Posted Feb 15, 2013 22:14 UTC (Fri)
by Company (guest, #57006)
[Link] (47 responses)
Posted Feb 15, 2013 22:33 UTC (Fri)
by philipstorry (subscriber, #45926)
[Link] (33 responses)
I'd agree that the software monoculture aspect of it is less attractive - WebKit is going to have to be very secure!
I've been using Opera for years. (Since v3, and I paid for several versions.) I'm typing this in Opera right now. It's a great browser, but there are some websites that it still doesn't quite work with.
Chrome and Firefox aren't perfect either. They all have either a rendering behaviour or an interface quirk which bugs me. Opera just bugs me the least.
This at least means that Opera will be feeding back code to WebKit. Perhaps WebKit will get an option to handle zooming in Opera's much better way. Perhaps some of the excellent work on reflowing that Opera has done over the years will slip into WebKit.
Overall, this looks like a good thing for all concerned...
Posted Feb 15, 2013 23:16 UTC (Fri)
by josh (subscriber, #17465)
[Link] (2 responses)
When "broken", sure; however, that doesn't stop sites from saying "best viewed in a WebKit-based browser". Already a fair number of sites only work on Chrome or other WebKit-based browsers; most commonly, mobile sites that just break, and supposedly "HTML5" demo sites that use -webkit-* prefixed CSS and otherwise only work in WebKit-based browsers.
Posted Feb 15, 2013 23:58 UTC (Fri)
by philipstorry (subscriber, #45926)
[Link] (1 responses)
Idiot web designers will be idiot web designers. And they'll assume WebKit means Safari, or that it means a mobile device. And they'll code for prefixed CSS but do it incompletely and sloppily.
I can't really do anything but condemn that.
But on the other hand, if these idiots can't even handle Chrome also using WebKit, then why does it matter if Opera is using Presto or WebKit?
They were going to support Opera badly either way. Just like they were going to support Chrome badly, or non-mobile WebKit instances badly.
Idiot web designers will be idiot web designers. :-(
Posted Feb 16, 2013 0:12 UTC (Sat)
by josh (subscriber, #17465)
[Link]
Posted Feb 15, 2013 23:57 UTC (Fri)
by jke (guest, #88998)
[Link] (20 responses)
For one it's not solely influenced by one "culture." Adding Opera developers adds to the mix, doesn't it?
It also differs from the IE6 extreme because there doesn't seem to be a lot of evidence (so far) that we're getting stuck on one version of it. Is it going to be the case that we'll need to go dig up some ancient unmaintained version to keep compatibility with some mission-critical junk that no one can fix? From what I see, everyone's updating to newer versions of WebKit willy-nilly and not looking back much.
Maybe I'm not buying into the right paranoia but is monoculture an overstated problem with open source software in active development?
Posted Feb 16, 2013 0:06 UTC (Sat)
by philipstorry (subscriber, #45926)
[Link]
Otherwise, I don't disagree with you - the monoculture of an open source project is far preferable to a closed source one.
I also think that such monocultures - like the Linux kernel itself, of which there is only one but which is used by many different distributions - have shown that they can be reactive enough and diverse enough that the security concerns aren't much worse than with any other monoculture.
Which brings us back to the fact that Opera's action is effectively trading off the strengths of diversity for a hopeful strengthening of a monoculture. It's kind of easy to see why some people are a little uneasy about it.
(But as I've said, I'm for it - providing I lose no features in Opera!)
Posted Feb 16, 2013 19:27 UTC (Sat)
by kripkenstein (guest, #43281)
[Link] (18 responses)
An open source monoculture is better than a proprietary one, for sure. But it's still bad.
Aside from security issues, there is the concern for standards. Standards are meaningless with a single implementation. And without standards, making additional implementations is extremely hard. Right now, it is possible for someone to write a new web browser with new technical improvements - they would implement the standards. If WebKit is the new IE6, then a new browser would have to do "what WebKit does."
> For one it's not solely influenced by one "culture." Adding Opera developers adds to the mix, doesn't it?
Not much. Opera is going to be shipping a rebranded Chrome - they aren't even taking WebKit, they're taking Chromium. Opera will differentiate through UI, it appears - it's hard to see what else they can do.
Posted Feb 17, 2013 18:13 UTC (Sun)
by khim (subscriber, #9252)
[Link] (16 responses)
And this is a bad thing… exactly why? Standards' only raison d'être is interoperability (well, some companies think it's PR - see OOXML, but let's ignore those aberrations for now). If you have just one implementation then interoperability is achieved automatically. In fact most languages achieve interoperability this way: Perl, Tcl, Python… they all have one "canonical" implementation which defines what the language is. Why is it such a bad thing to have that for HTML or JavaScript?
Posted Feb 17, 2013 18:53 UTC (Sun)
by viro (subscriber, #7872)
[Link] (1 responses)
It boils down to this: unless the damn thing includes strong AI, you need interoperability of sorts, at least with the mental models in the heads of programmers writing in that language. Learning a language means building such a mental implementation, just to be able to reason about the expected program behaviour. Without that people are reduced to cargo-culting their way through every problem and that's *not* a way to write well.
sh(1) sucked well before there were other implementations (not that they helped when they appeared), and in large part that was caused by lack of predictability...
Posted Feb 17, 2013 19:23 UTC (Sun)
by pboddie (guest, #50784)
[Link]
It also doesn't help that in some projects, comments and documentation strings are seen as superfluous fluff, meaning that one has to get into the exact mindset of the developers to first of all discover what they were trying to do, and only then to figure out what they meant to do.
Standards can be fairly awful things that are mostly exercises in formalising various vendor implementations, and I perceive Opera Software to be yet another vendor in this respect, even though the various Web standards involved have been fairly comprehensive and coherent. But they do serve a genuine purpose.
Posted Feb 17, 2013 22:02 UTC (Sun)
by anselm (subscriber, #2796)
[Link] (12 responses)
Having just one »canonical« implementation of a programming language doesn't imply interoperability across all platforms where that implementation runs. There are lots of things that can go wrong even so unless whoever maintains that implementation is very careful indeed. The case in point would be Java, whose tag-line, famously, is »write once, debug everywhere«.
In general, it is very useful to have a notion of what programs in a language mean which is independent of a particular implementation of that language – even if there is only one implementation. Otherwise it is impossible to distinguish actual intentional properties of the language from quirks of the (albeit »canonical«) implementation.
Posted Feb 17, 2013 22:56 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (11 responses)
But in reality Java works pretty well across all supported platforms. People routinely use Windows to develop and debug Java software that is later deployed on Linux (or, earlier, on Solaris).
Posted Feb 17, 2013 23:50 UTC (Sun)
by anselm (subscriber, #2796)
[Link] (9 responses)
Yep. They have had 20 years or so to get their act together, after all.
The fun observation when Java was new was that Java basically claimed to do, as a big innovation, what many other languages (including Tcl, Perl, and Python) were already doing as a matter of course – while Java failed abysmally. The »everywhere« in »run everywhere« essentially meant »Windows and Solaris«, and that was only if you knew what you were doing.
Java got to where it ended up because Sun spent millions in the 1990s marketing it to suits; based purely on its technical merits it would have sunk like a lead balloon.
Posted Feb 18, 2013 3:13 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
There was NO fast cross-platform language in 95-97. Python was still very young, Perl was not fast (and, well, Perl), C++ was not nearly cross-platform (for GUI, in particular).
Java provided a way to create programs that run without large changes pretty much on all major platforms (even classic Mac OS).
Posted Feb 18, 2013 7:51 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (7 responses)
Java was well worth bashing even then. As I said, it became popular only because Sun was flogging it to the suits. Eventually it had had so much money poured into it that it had to become a half-way usable language despite itself, but at least for the first five years of its existence it really, really sucked. Few technical people wanted anything to do with it if they could help it at all – the big use case it was being touted for (browser applets) never really got off the ground, and for most everything else, many folks thought of Java as a kind of C++ with non-removable training wheels and abysmal performance.
That didn't really matter with respect to Java because Java at the time was by no means »fast«, either. In fact, compared to other interpreted languages of the time it was pretty slow. Just-in-time compilation for Java – which was the technique that eventually did make a difference – became popular only later.
So? Other popular languages did, too.
Posted Feb 18, 2013 14:51 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
A lot of technical people were thrilled with Java, because it FINALLY allowed them to write large programs without spending hours compiling stuff or waiting for the Perl interpreter to muddle through the code.
Java never became successful on the client side, but on the server side it was an instant hit. The great explosion of open-source Java projects in the '90s attests to that.
> So? Other popular languages did, too.
I know only of Tcl/Tk which requires, shall we say, quite a bit of getting used to.
Posted Feb 18, 2013 15:42 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (4 responses)
Yes, because Java in the 1990s gave you the joint benefit of both spending hours compiling stuff and waiting for the JVM to muddle through the code.
Now you're projecting.
Tcl/Tk is a lot better than its reputation. It is safe to say that if Sun had spent all that money they spent pushing Java on pushing Tcl/Tk instead (which would have been quite feasible given that Java was a SunLabs project at the time) the world would be a different (and arguably better) place. Remember that in the 1990s Tcl/Tk was a language that actually had serious commercial users all over the place, while Java was a language desperately in search of something – anything, really – it could make itself useful for. Set-top boxes, web applets, you name it.
It is probably telling that Tcl/Tk, while never having had much of a marketing force behind it, is still popular in many places – and not unimportant ones, either.
Posted Feb 18, 2013 16:50 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
> Tcl/Tk is a lot better than its reputation.
Now, Smalltalk (and Strongtalk) might have been more widespread. That might have been more interesting.
> It is probably telling that Tcl/Tk, while never having had much of a marketing force behind it, is still popular in many places – and not unimportant ones, either.
Posted Feb 18, 2013 17:19 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (2 responses)
For the record, HotSpot came out in 1999 and became the default in Java 1.3, which was released in 2000. The HotSpot JVM was also not a particularly portable program, so the performance benefits of JIT were by no means universal to all Java platforms. (Which was fine by Sun because it meant that for reasonable server-side performance you pretty much had to be running Solaris.)
Posted Feb 18, 2013 17:28 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
The first HotSpot JIT was present in JRE 1.2 (that's why it was called 'Java 2'), in late 1998.
Posted Feb 18, 2013 21:23 UTC (Mon)
by pboddie (guest, #50784)
[Link]
For the record, I stumbled upon Java, Tcl/Tk, Python and a bunch of other languages at the same time, in around 1995. Java was the only one that needed me to get a disk quota upgrade and an account on a flaky Solaris server (whereas the others all ran on SunOS and a multitude of other platforms). I recall a colleague during my summer job showing me Java for the first time: Duke the Java mascot waving in an applet; premium UltraSPARC workstation required.
To be fair, I did get some mileage out of Java for a university project, doing a bit of graphics in AWT instead of using Xlib like everybody else, but a few months later I would have saved myself the hassle of the dubious AWT implementation and used something like Python and Tk instead.
Sorry, how did we get onto this again?
Posted Feb 21, 2013 4:40 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link]
Qt was started in 1991. I don't know when QtGui popped up.
Posted Feb 24, 2013 11:51 UTC (Sun)
by Jandar (subscriber, #85683)
[Link]
Only if you restrict the JRE to a specified version and test the program on all used platforms.
Up to now this "runs everywhere" has been total bogus. Every time a new JRE version is rolled out by a customer, reports about changed behaviour start rolling in.
Posted Feb 19, 2013 3:47 UTC (Tue)
by rgmoore (✭ supporter ✭, #75)
[Link]
Not really. Fred Brooks goes into some detail of the drawbacks of implementation as specification in The Mythical Man-Month. The problem is that you never have just one implementation. Every version is a subtly different implementation from the previous version. If you define the implementation to be the specification, there are no such things as bugs. Every quirk of your implementation is part of the API, and any attempt to fix what you see as bugs may turn out to be eliminating a feature that somebody has come to depend on. Even if you have just one implementation, it can still be useful to have a formal specification to prevent yourself from getting stuck with that kind of problem.
Posted Feb 17, 2013 20:53 UTC (Sun)
by pjm (guest, #2080)
[Link]
As for "What's wrong with having only one implementation", the problem is that not all uses of HTML/CSS are for web browsing: you want your word processor to be able to read & write HTML/CSS, and publishers want to use HTML/CSS for writing books, but there are fundamental conflicts between the needs of an interactive web browser and the needs of printing. (One needs to be fast and support interactive technologies, the other needs to look good: well-chosen page breaks, line breaks, float placements, table column widths etc.)
Posted Feb 16, 2013 1:00 UTC (Sat)
by Company (guest, #57006)
[Link]
Fixing a monoculture is hard, in particular when that monoculture defines an interface. It usually takes forks (Xorg) or reimplementations from scratch (LLVM) to get things moving again. Both of these usually take a while to happen and require a huge amount of effort. No matter if the original was open or closed.
Posted Feb 16, 2013 12:14 UTC (Sat)
by marduk (subscriber, #3831)
[Link]
Yeah, except when it comes to web engines it seems that they go by their own law:
Given enough eyeballs, all bugs are features.
These browsers become so non-compliant/forgiving of errors because more and more users (and popular web sites) depend on them.
Posted Feb 16, 2013 19:50 UTC (Sat)
by robert_s (subscriber, #42402)
[Link] (2 responses)
If that were true then there wouldn't be any odd quirks of behavior specific to WebKit. And there are plenty.
It's not so much about going by the core specifications - browsers have become better at that - it's about all the tiny pieces of behavior around the edge that aren't in specifications. And with a monoculture of browser engines it's far more likely that a piece of unspecified behavior *becomes* a de-facto standard because "most browsers" "do it like that".
And that's where you start entering dangerous territory, standards-wise.
Posted Feb 18, 2013 16:36 UTC (Mon)
by maderik (guest, #28840)
[Link] (1 responses)
Posted Feb 19, 2013 18:30 UTC (Tue)
by sorpigal (guest, #36106)
[Link]
I can graph Linux adoption and project an inaccurate date in the future when there will be no other notable *nix implementations. Whether everything will continue to be as okay as it is now is debatable; I'd say that we've already reached "only one notable implementation" land when it comes to certain categories of use. It will get worse, and I just hope that nothing really bad happens as a result.
Posted Feb 16, 2013 21:52 UTC (Sat)
by elanthis (guest, #6227)
[Link] (1 responses)
WebKit is already the de facto standard mobile browser engine. Opera switching affects only a tiny fraction of the market. WebKit already needed to be super secure to keep 80+% of the mobile market safe. Safari has 60% of the market and the Android Browser has 25%. Third place is held by the Java browsers on feature phones.
Opera moving over only makes sense as a business, and barely affects the mobile browser space; they are tiny and wasting a lot of resources just trying to stay compatible with WebKit browsers as is.
Posted Feb 18, 2013 23:51 UTC (Mon)
by Drongo (guest, #60513)
[Link]
My concern, justified or not, is just how isolated the MVC elements are from each other, and thus whether the rendering engine conversion will blow back to the interface in ways I regret. We will soon see.
Posted Feb 18, 2013 11:03 UTC (Mon)
by job (guest, #670)
[Link] (1 responses)
"Can be fixed by anyone when it's broken", sure, but do they? According to the JQuery developers, they have more bug workarounds in their codebase for Webkit than for any other browser(!). If anybody knows these things it's probably them. That leaves me skeptical about how well Webkit is taken care of. Especially since Google and Apple keep piling new features on it. That can never be be a suitable reference implementation of the HTML standard, which some people purports it to be. One less rendering engine is not something to celebrate.
Posted Feb 26, 2013 1:19 UTC (Tue)
by pjm (guest, #2080)
[Link]
It's one thing to worry about this development, but we also need to think about what we can actually do about it.

Regarding bugginess: among the comments in that jQuery-related page is a WebKit developer asking whether these workarounds are for bugs still present in current WebKit, or whether they're just for old versions of WebKit. Another couple of comments ask whether the jQuery developers made any attempt to have the bugs fixed (even if only by filing bug reports). So far there's no reply to any of these queries, so it's not clear to me that current WebKit (as adopted by Opera) is more buggy than other browsers.

As noted both here and in the comments to the referenced blog post, many (or even most) differences between WebKit and Gecko are where the specs aren't clear what the right behaviour is. Opera switching to WebKit will surely only make this problem worse. Opera adopting WebKit says that the CSS specs (and other, not-yet-specified, expected browser behaviour) are too onerous not just to implement to begin with, but even just to keep up with when you already have an implementation. These two observations do not bode well for the future of an open web, given that not everyone's HTML/CSS-related needs can be met by a single implementation.

If we want an Open Web, then it has occurred to several people (independently) that we need a different approach to providing advanced functionality, making more use of JavaScript polyfills and providing useful basic building blocks. Tab Atkins, who works on both WebKit and spec development, is one person thinking along these lines.

LWN readers might compare HTML/CSS interoperability with (La)TeX interoperability. In TeX, the usual way of adding functionality is with a package file. I don't remember having to upgrade the latex binary to read a document; I've only needed to install packages.

For web pages, "installing packages" is even easier, because the browser does it automatically from the URIs in the referring document. With proper versioning, implementers could even provide native versions of commonly-used libraries that prove to be performance-costly. The limiting factor for a polyfill approach is what hooks CSS provides for changing functionality, and what building blocks are available. A lot of polyfills today work well enough in common cases, but fail once there's a user stylesheet, or the document gets viewed in a medium other than screen, or the content uses a layout feature beyond the basics.

The challenge, then, is for each new CSS feature to think how it could be implemented with a JavaScript polyfill, and what changes to CSS (hooks and building blocks) would make a polyfill implementation more practical; and to get these generally useful extensibility changes implemented instead of once-off features. Comments, suggestions?
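The polyfill pattern referred to above can be sketched in plain JavaScript. This is a hypothetical illustration, not code from the discussion: the page feature-detects at load time and supplies a script implementation only when the native one is absent, so browsers that already have the feature keep their (faster) built-in version.

```javascript
// Sketch of the classic polyfill pattern: feature-detect, then fill in
// the gap only when the host environment lacks the feature.
if (!String.prototype.trim) {
  String.prototype.trim = function () {
    // Strip leading and trailing whitespace with a regular expression
    return String(this).replace(/^\s+|\s+$/g, '');
  };
}

console.log('  hello  '.trim()); // "hello"
```

The open question raised in the comment is whether CSS can grow equivalent hooks, so that new layout features could be "filled in" the same way instead of each engine shipping them natively first.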
Posted Feb 16, 2013 2:11 UTC (Sat)
by drag (guest, #31333)
[Link] (7 responses)
Posted Feb 16, 2013 6:42 UTC (Sat)
by wahern (subscriber, #37304)
[Link] (6 responses)
Posted Feb 16, 2013 14:34 UTC (Sat)
by drag (guest, #31333)
[Link] (5 responses)
I don't think we have anything to worry about.
I am still kinda surprised anybody cares about Opera, actually.
Posted Feb 17, 2013 10:52 UTC (Sun)
by nhippi (subscriber, #34640)
[Link]
http://blogs.windows.com/windows_phone/b/wpdev/archive/20...
There is some deep irony of fate here: when the world was full of "IE only" websites, mobile IE sucked and was unable to show those sites. Now that mobile IE is on par with desktop IE, and both are nicely standards-compliant, both are having trouble showing some sites, as the world shifts to WebKit-only...
And with IE as the only browser for Windows Phone, IE has become one of the weak links of the Windows Phone platform...
Posted Feb 17, 2013 18:22 UTC (Sun)
by khim (subscriber, #9252)
[Link] (3 responses)
Posted Feb 18, 2013 15:51 UTC (Mon)
by drag (guest, #31333)
[Link] (2 responses)
> You may as well say "I am still kinda surprised anybody cares about iPhone" (which has similar penetration in many countries).
No. Opera isn't anything remotely like the iPhone in terms of popularity or widespread usage.
Posted Feb 18, 2013 21:27 UTC (Mon)
by pboddie (guest, #50784)
[Link]
That said, Opera has been in full strategic retreat from what the company was (and still is) known for, hence the emphasis on services and things like Opera Mini. I just interpret this news to be a continuation of that retreat.
Posted Feb 19, 2013 18:23 UTC (Tue)
by khim (subscriber, #9252)
[Link]
Nope. Site statistics? SITE? I think you don't understand. These are not stats for some website; these are stats for the whole Russian internet. liveinternet.ru is a company which provides something like Google Analytics - but it also publishes aggregate numbers. As for individual sites... on a lot of Russian sites, over 50% of visitors still come using Opera! Last year Google's Chrome finally became the number-one browser in Russia by most measures (by some it's still #2). To do that it needed to beat Opera, not Internet Explorer or Firefox! This was a serious problem for us in Russia for years, because most developers don't live in Russia and it was pretty hard to convince them to test their stuff with Opera - and as a result a huge number of users never even tried our products. It was a huge problem for our Russian subsidiary. In Russia it is. The 20% of iPhone users among smartphones matches the 20% of Opera users among browsers almost exactly. If anything, the percentage of Opera users is higher.
Posted Feb 16, 2013 10:17 UTC (Sat)
by geofft (subscriber, #59789)
[Link] (2 responses)
A lot of the usual reasons to worry about monocultures go away, I think, when you have a large public development community and when there are independently-driven forks in use with somewhat different code, especially for newer code (the distro kernels, in Linux's case). So I'm not too worried.
What worries me more is, for instance, Apple's requirement that everyone writing a browser on iOS must be using the system WebKit (the so-called Chrome for iOS included), since that keeps all power to patch that web engine for an entire platform in one entity's hands. Opera for Android and Chrome for Android both using WebKit, using different forks of WebKit, not so much.
Posted Mar 12, 2013 5:38 UTC (Tue)
by Duncan (guest, #6647)
[Link] (1 responses)
That sounds very much like the distros vs. upstream argument on bundled libraries... for exactly the same reason: unbundled system libs allow the distro/OS to patch all users of that lib with a single patch to the system lib, instead of requiring that dozens of apps - some of them obscure and rarely used enough not to be as well tracked as the major browsers - be patched.
Seems quite reasonable to me.
Now if, say, Firefox isn't allowed to run on iOS (I don't know; iOS is a walled garden I don't even visit), or links or lynx, the text-mode browsers, because they use some non-WebKit library, then that's a different matter indeed. But requiring unbundled system libraries is standard practice among distros as well... for good reason!
Duncan
Posted Mar 12, 2013 7:24 UTC (Tue)
by dlang (guest, #313)
[Link]
Posted Feb 16, 2013 16:11 UTC (Sat)
by thebluesgnr (guest, #37963)
[Link] (1 responses)
Posted Feb 16, 2013 23:14 UTC (Sat)
by kris.shannon (subscriber, #45828)
[Link]
Posted Feb 16, 2013 17:03 UTC (Sat)
by alankila (guest, #47141)
[Link] (17 responses)
Posted Feb 17, 2013 23:10 UTC (Sun)
by marcH (subscriber, #57642)
[Link] (16 responses)
I've always wondered what kind and how much portability work web developers typically perform. Is this really useful work like trying to work around horrible bugs in some browsers? Or do some web developers just waste their time trying to get the same, "pixel perfect" layout across all browser versions instead of reading and applying Nielsen Norman's free advice?
Posted Feb 18, 2013 4:43 UTC (Mon)
by dlang (guest, #313)
[Link] (15 responses)
This doesn't work for even a single browser because people do not all have their browser maximized to take their entire screen, and even if they do, you don't know how many 'helpful' toolbars they've got loaded that eat up screen space.
But they keep trying.
Posted Feb 18, 2013 9:02 UTC (Mon)
by alankila (guest, #47141)
[Link] (10 responses)
I use chrome to develop websites because of its built-in development tools, then once ready check it on the major browsers like Firefox $LATEST_VERSION and IE8 (if I know customer still uses XP) or just IE9. The fact Opera has no JS and HTML rendering implementation of its own means I can leave Opera out of my routine altogether, which is what I am pleased about.
I do not care one bit if things are not exactly laid out as I intended. This is not the sort of thing that matters. The site just has to be usable, with controls laid out approximately where I intended them to be placed originally. If something isn't quite where I intended but the site is usable, I'll just state that it's a browser issue that will likely fix itself in some future update.
The layout issues that are too severe to ignore, I circumvent with IE8/IE9-targeted conditional comments and Firefox-specific CSS workarounds. Most of the time I only need some simple hacks for IE8, as all the other browsers (including Opera) tend to render everything pretty much the same way.
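For readers unfamiliar with the technique: IE conditional comments are HTML comments with a special syntax that only Internet Explorer 9 and earlier evaluate, so every other browser skips the extra stylesheet entirely. A minimal sketch (the file names are illustrative, not from the comment):

```html
<!-- Loaded by every browser -->
<link rel="stylesheet" href="main.css">

<!-- Only IE8 parses this and loads the workaround stylesheet -->
<!--[if IE 8]><link rel="stylesheet" href="ie8-fixes.css"><![endif]-->

<!-- Only IE9 parses this one -->
<!--[if IE 9]><link rel="stylesheet" href="ie9-fixes.css"><![endif]-->
```

This keeps the hacks quarantined in per-version files instead of polluting the main stylesheet.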
Posted Feb 18, 2013 9:19 UTC (Mon)
by epa (subscriber, #39769)
[Link] (9 responses)
After all, the purpose of testing is to find bugs, and anything that helps you find more of them is a good thing. But you seem to be approaching it from the other way: you see the testing process as a way to prove that something works (rather than finding ways in which it doesn't work), and thus by testing less exhaustively you can find fewer problems and go home earlier.
Posted Feb 18, 2013 10:13 UTC (Mon)
by alankila (guest, #47141)
[Link] (8 responses)
My ideal world would have exactly one browser engine, with everyone running the same version. That would allow skipping the entirety of the testing-after-development phase, and using everything that engine provides -- a huge improvement on the current status quo. And if it was nailed down in such a way that I could specify "I designed this to look and work right with version 12.34" (similar to what Microsoft is doing, incidentally, with their IE version compatibility tagging, and what Android is doing with the manifest version specification), I would then simply trust that future versions of browsers retain compatibility with past versions, solving this thorny problem for me.
The difference in our mindsets could hardly be greater. I think browsers grew up a huge deal when the acid tests arrived, because they provided a nice score that anybody could check and solved a lot of incompatibility problems back in the day when browser vendors rushed to compete against each other to beat their rivals' scores. So I would credit those tests, and the long-awaited death of IE6 and now IE7 for making our lives quite easy these days.
Posted Feb 18, 2013 12:15 UTC (Mon)
by epa (subscriber, #39769)
[Link] (7 responses)
An alternative nirvana would be to have a single reference browser and for all current and future browsers to promise compatibility with that. Then you could test with the reference engine only. But that is no more realistic than the single-browser scenario.
In the real world, of course you have to program to some pathological subset of the language which is supported on all browsers. You said as much yourself; you end up writing compatibility code for older IE versions. I would not advocate testing on additional browsers just to narrow that subset even further, but to flag up possible problems with the dialect you are using. You may inadvertently rely on a behaviour which happens to work one way at the moment, but was never specified anywhere (formally or informally), and might even be considered buggy and changed in future browser versions. You may make assumptions which look fine when rendered on a standard screen today, but will cause the site to break with very small or very large screens tomorrow - this was certainly the case a few years back with sites that had only been 'tested' on typical PC monitors. If you consciously decide to rely on these implementation details, that's fine. You may judge it is a better use of your time to write something that happens to work for 99% of users today, rather than agonize over whether it strictly complies with all possible standard-conforming browsers in the future. But it is surely better to make an informed choice.
Posted Feb 18, 2013 12:53 UTC (Mon)
by alankila (guest, #47141)
[Link] (4 responses)
From my point of view, all the multiple browsers accomplish is:
1) make it harder to write a single acceptable codebase that works for all of them;
The compat crud and ridiculous kludges you end up with do not result in any improved code quality or catching of bugs; that is the standard-mindset speaking. But there is no standard: they're just ridiculous kludges that mar otherwise simple and nice designs.
Posted Feb 18, 2013 13:28 UTC (Mon)
by alankila (guest, #47141)
[Link] (3 responses)
My message is this: implementations dominate. And I have fully resigned myself to updating the code whenever some new popular browser or new screen dimension comes along and causes stuff to not work optimally. It's just a fact of life, and the alternative of spending a lot of effort trying to guess where you'll be 5 years from now is probably going to go wrong anyway.
Posted Feb 18, 2013 14:26 UTC (Mon)
by epa (subscriber, #39769)
[Link] (2 responses)
It seems to me that Javascript is like the Unix-variants question: much better to have a single implementation. But pure HTML and CSS are more like the choice of compilers. CSS is declarative anyway and you don't expect to get pixel-identical results (not if you are wise), and there is such a large diversity of screens (not to mention print and screen readers) that you are not building a fixed artefact so much as giving some general instructions which you hope somebody will interpret in the way you intended. Given that, you want to make sure that there are no hidden ambiguities or assumptions in the instructions you have written.
Posted Feb 18, 2013 16:16 UTC (Mon)
by alankila (guest, #47141)
[Link] (1 responses)
CSS was also meant to solve the multi-device presentation problem by allowing overriding of the instructions based on screen type (somehow), so the pixel's reign can probably be extended even to devices shaped like a Star of David in due course. But as with all abstractions, you can spend a lot of effort doing something that probably ends up being a very small benefit. The only really good use case I've come across for @media so far is the ability to remove elements from the page when printing. Not very glorious.
All this CSS stuff is very complicated, and historically most of it didn't work reliably. Today it might work, but I don't care. As I admitted from early on, my attitude is best summarized in the word "apathy".
Posted Mar 12, 2013 6:12 UTC (Tue)
by Duncan (guest, #6647)
[Link]
Perhaps it was meant to be at one point...
As someone in another recent discussion pointed out, even pixels are relative, these days. Even if on first visit there's a 1:1 mapping of physical to logical pixels (and for all I know a browser accounts for the window size and possible average zoom on previous sites and does an initial zoom even on first render), the first thing many people do the first time they hit a site is zoom it to a comfortable size. I know firefox (and I presume all modern browsers) remember the zoom-state, so from there on out, every time a user comes back, the browser automatically zooms to whatever the user preferred, and any page-specified pixel sizes are relative to that.
So even pixel metrics are relative, these days...
(Of course, that's not even considering user controlled rewriting proxies such as privoxy, or technologies such as greasemonkey. I long ago taught my instance of privoxy to replace font px with pt, along with a number of related replacements like ms sans serif with just sans serif. From my filter file (JED are my initials, nice convenient way to avoid namespace pollution; many of these were from when I was using konqueror as my default browser; I could probably revisit some of them now, but if they're not causing issues...):
FILTER: JED-Font Kill various font attributes
# Kill MSSansSerif, as the default replacement for it
# here is virtually unreadable
s+ms sans serif(,?)+m\$ sans serif($1)+isg
# Kill clean, as it's single-sized tiny
s+((\{\s*font-family:[^;]*)|(<font[^>]*))clean+$1unclean+isg
# Kill dynamic-loaded font-faces as they're
# crashing konqueror (2011.0426)
s+@font-face+@font-face-jed-font-killed+isg
# Replace Lucida Console
s+lucida\sconsole+luxi mono+isg
# Replace pixels with points (style)
s+({[^}]*font-size:\s*[0-9][0-9]*)px+$1pt+isUg
# Kill sub-100% font sizes
s+({[^}]*font-size)(:\s*[0-9][0-9]%)+$1-JED-Font-pct$2+isUg
)
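For readers unfamiliar with privoxy's sed-like filter syntax, the px-to-pt substitution above behaves roughly like this Python sketch (an approximation only; privoxy uses PCRE, and its 'U' flag inverts greediness, which is written here explicitly with a non-greedy quantifier):

```python
import re

# Rough Python equivalent of the privoxy filter
#   s+({[^}]*font-size:\s*[0-9][0-9]*)px+$1pt+isUg
# i.e. inside a CSS rule block, rewrite pixel font sizes to points.
def px_to_pt(css):
    # 'i' maps to re.IGNORECASE; 'g' is re.sub's default (replace all
    # matches); 'U' (ungreedy) is approximated with '*?'.
    return re.sub(r"({[^}]*?font-size:\s*[0-9]+)px", r"\1pt",
                  css, flags=re.IGNORECASE)
```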
Posted Feb 18, 2013 13:10 UTC (Mon)
by marcH (subscriber, #57642)
[Link] (1 responses)
Lack of backward compatibility: this is indeed where the "single browser" dream/nightmare breaks down. It's surprising how seldom browser versions are mentioned in discussions like this one.
Posted Feb 19, 2013 19:52 UTC (Tue)
by sorpigal (guest, #36106)
[Link]
Posted Feb 18, 2013 9:29 UTC (Mon)
by pboddie (guest, #50784)
[Link] (3 responses)
Perhaps the only thing that is more annoying than pixel-precise layout is the apparent fad for needing JavaScript to set up the layout in the first place, meaning that if you use NoScript to stop the tens of superfluous "analytics" parasites from loading their scripts, you get a blank page or maybe content overwriting itself.
Posted Feb 18, 2013 15:54 UTC (Mon)
by drag (guest, #31333)
[Link] (2 responses)
Of course it's irritating when a 3-column design where everything is just jammed together leaves you with real content that is only about 15 characters wide. That's almost as bad.
Posted Feb 18, 2013 19:08 UTC (Mon)
by dlang (guest, #313)
[Link] (1 responses)
The problem is that lots of websites will have a 2-3" wide column of text in a 19" wide window, the rest of the browser window is blank.
This is even worse than super wide text.
if you let your text go the width of the browser window the user can narrow the window if they want. If you lock it to a really narrow column, the user can't widen it.
Posted Feb 18, 2013 21:40 UTC (Mon)
by pboddie (guest, #50784)
[Link]
People seem to forget that observations about the readability of wide columns of text are accompanied by recommendations that do not simply end with "restrict the width to ten words". Such recommendations came about through centuries of experience with the printed word and included other measures such as increasing the line spacing or "leading", or using shorter paragraphs (a general trend in mainstream writing, particularly in electronic media).
I do agree that CSS is a fairly poor tool for dealing with flexible layout and thus delivering multiple columns without a lot of extra work, however.
Posted Feb 16, 2013 19:18 UTC (Sat)
by ibukanov (subscriber, #3942)
[Link] (18 responses)
For example, Opera was the first to have HTTP pipelining on by default. That initially broke many sites for users, and Opera was forced to develop rather complex code to blacklist servers and proxies that cannot tolerate it. But the end result is that Opera loads complex sites faster than, for example, Firefox, where the user has to edit about:config to activate pipelining.
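The blacklist-and-fallback idea described here can be sketched as follows (purely illustrative; this is not Opera's actual code, and all names are invented):

```python
# Illustrative sketch only -- not Opera's code. Track hosts that have
# misbehaved and fall back from pipelining to one-request-at-a-time.
class PipelineBlacklist:
    def __init__(self):
        self.broken_hosts = set()

    def can_pipeline(self, host):
        return host not in self.broken_hosts

    def report_failure(self, host):
        # Any sign of trouble (timeout, garbled response, odd headers)
        # demotes the host to serial requests from then on.
        self.broken_hosts.add(host)

def fetch_all(host, requests, send_pipelined, send_serial, blacklist):
    """Try pipelining first; on failure, blacklist the host and retry
    each request serially so the user still gets the page."""
    if blacklist.can_pipeline(host):
        try:
            return send_pipelined(host, requests)
        except IOError:
            blacklist.report_failure(host)
    return [send_serial(host, r) for r in requests]
```

The key property is that a pipelining failure costs one extra round of serial requests, after which the host never triggers the problem again for that user.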
Posted Feb 17, 2013 13:44 UTC (Sun)
by tialaramex (subscriber, #21167)
[Link] (17 responses)
How long ago was this "first"? Because Mozilla spent (arguably wasted) a bunch of time trying to get to a point where blacklisting was enough to turn on pipelining, and comprehensively failed. There were reverse proxies, corporate firewalls, all sorts of things that felt it was OK to violate the end-to-end rule when it came to HTTP, or to share sockets across threads with no locking. The experience was that both proprietary software vendors and end sites either felt that "it works in Internet Explorer, stop bothering us about this" or just outright ignored bug reports.
Over at Google they also spent a bunch of time fighting one of these protocol violations, and eventually they gave up and switched off the speed-up for any site that doesn't speak their new non-HTTP protocol.
It's great news if today Opera has this _working_, but your reference to blacklists suggests that they just haven't done enough research, or got enough users, to notice that blacklists aren't enough to actually get pipelining working reliably out on the public web. They certainly weren't when Mozilla experimented with this.
(Some people will say this is a great failing of HTTP, but actually the same happens even down in the lower levels, there are big companies where it's strictly forbidden to use IP multicast, a core feature of the protocol for decades, because their network hardware crashes when exposed to "too much" multicast and they can't get the supplier to fix it.)
Posted Feb 17, 2013 17:37 UTC (Sun)
by raven667 (subscriber, #5198)
[Link] (15 responses)
Tangent alert: Multicast is broken because it requires the network equipment to be smart which fundamentally breaks a core assumption of IP networking that the endpoints are smart but the network is dumb.
Posted Feb 17, 2013 19:41 UTC (Sun)
by butlerm (subscriber, #13312)
[Link] (9 responses)
Posted Feb 18, 2013 3:44 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (8 responses)
Posted Feb 18, 2013 4:42 UTC (Mon)
by dlang (guest, #313)
[Link] (7 responses)
most people who could use IP TV are not going to be doing this. They will watch the show they want to watch at the time they want to watch it, and the number of other people who start watching the same show at the same time is so small as to be meaningless.
For broadcasts of live events (political speeches, Sports events, etc) there may be a small niche, but is that really worth the effort of implementing it across such a large infrastructure?
remember that if people aren't consuming the content, all you are doing is wasting available bandwidth.
Posted Feb 18, 2013 12:12 UTC (Mon)
by ewan (guest, #5533)
[Link] (2 responses)
A model that worked like that over multicast IP, plus a relatively smaller number of unicast streams for 'catch up' services, should be more bandwidth efficient than unicasting to everyone.
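The bandwidth argument is easy to see with toy numbers (a purely illustrative model, not a claim about any real deployment):

```python
def source_uplink_mbps(viewers, stream_mbps, multicast):
    """Toy model: bandwidth one channel needs at the broadcaster's uplink.

    With multicast, a single copy leaves the source no matter how many
    viewers there are; routers duplicate packets only where paths diverge.
    With unicast, the source sends one copy per viewer.
    """
    return stream_mbps if multicast else viewers * stream_mbps
```

A million viewers of an 8 Mbit/s stream need 8 Tbit/s of unicast capacity at the source, versus a single 8 Mbit/s copy with multicast, which is why the savings only matter once audiences are both large and synchronized.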
Posted Feb 18, 2013 13:27 UTC (Mon)
by tialaramex (subscriber, #21167)
[Link] (1 responses)
Posted Feb 18, 2013 16:35 UTC (Mon)
by butlerm (subscriber, #13312)
[Link]
Posted Feb 18, 2013 14:47 UTC (Mon)
by nye (subscriber, #51576)
[Link] (3 responses)
There actually seems to be a fairly large market for live-streamed video, like Twitch and Ustream, or YouTube's live option. However, I don't think any services like this actually try to multicast over the internet; it's unicast all the way[0].
So I guess the answer is 'no' - it's not really worth the effort, even when live video broadcast to tens of thousands of destinations is your principal business; possibly once it scales up to millions is where things start to look different, but at that volume you can invest in the infrastructure to avoid having to send it over the internet.
[0] Last year's edition of TCP/IP Illustrated still describes multicasting over the internet as "ongoing effort...for more than a decade", which seems to correlate with the general consensus I got from Google of "don't even try"
Posted Feb 18, 2013 15:54 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (2 responses)
Here in Germany, Deutsche Telekom is happy to sell you access to IP-based high-definition broadcast television, in competition with traditional cable TV providers. They do have a couple of million users. In this context, IP multicast, which is in fact being used, makes a great deal of sense.
Posted Feb 18, 2013 16:02 UTC (Mon)
by johill (subscriber, #25196)
[Link]
Posted Feb 18, 2013 19:12 UTC (Mon)
by dlang (guest, #313)
[Link]
If so, it's not really a broadcast situation.
Posted Feb 18, 2013 2:14 UTC (Mon)
by tialaramex (subscriber, #21167)
[Link] (4 responses)
Any manufacturer that does even a little testing will notice if they've actually completely broken link-layer multicast on a LAN because such a variety of things break in that scenario, including features in famous brand name products like Office and Mac OS X.
But if their implementation does something dumb with multicast that only shows up under /load/ then these features (which are mostly discovery mechanisms, using multicast to avoid wasting CPU on uninterested nodes) won't trigger it. A multicast video payload, even on a LAN (so no actual multicast routing) will run into problems though, as will pseudo-reliable multicast file delivery and other techniques. Whoops.
Posted Feb 20, 2013 2:25 UTC (Wed)
by butlerm (subscriber, #13312)
[Link] (3 responses)
Or short of that an IP multicast option that does not use link layer multicast addresses, to avoid this problem? Where an actual router does all the multicasting?
Posted Feb 20, 2013 5:54 UTC (Wed)
by dlang (guest, #313)
[Link] (2 responses)
There is IP-based multicast (224.x IP addresses). This layer is managed by the routers knowing which downstream routers need copies of the traffic and sending it to all of them.
There is Ethernet link-layer multicast (a '1' in the least significant bit of the first byte of the MAC address, i.e. 01:00:00:00:00:00); traffic to these MAC addresses gets handled by the Ethernet switch.
These two can be used in combination with each other, but you can use link-layer multicast with any IP address (and with no modification to the sending or receiving software), and I assume that you could use IP-based multicast without using link-layer multicast, but I also wouldn't be surprised to learn that most of the time things default to using link-layer multicast if you are using the IP multicast range.
I also think that you will find that even rather cheap switches handle link-layer multicast nowadays.
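The IP-to-MAC mapping in question is in fact standardized (RFC 1112): the low 23 bits of the IPv4 group address are copied into the fixed prefix 01:00:5e, whose first byte already carries that link-layer multicast bit. A small sketch:

```python
import ipaddress

def ipv4_multicast_mac(group):
    """Map an IPv4 multicast group to its Ethernet multicast MAC address.

    Per RFC 1112, the OUI prefix 01:00:5e has the '1' in the least
    significant bit of the first byte; the group's low 23 bits fill the
    remainder. Because 5 bits of the IP address are discarded, 32
    different groups share each MAC address.
    """
    addr = ipaddress.IPv4Address(group)
    if not addr.is_multicast:
        raise ValueError(f"{group} is not in 224.0.0.0/4")
    mac = (0x01005E << 24) | (int(addr) & 0x7FFFFF)
    return ":".join(f"{(mac >> shift) & 0xFF:02x}"
                    for shift in range(40, -1, -8))
```

For example, `ipv4_multicast_mac("224.1.2.3")` yields `"01:00:5e:01:02:03"`; the 32:1 overlap is one reason switches cannot fully disambiguate groups at the link layer without IGMP snooping.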
Posted Feb 20, 2013 17:59 UTC (Wed)
by butlerm (subscriber, #13312)
[Link] (1 responses)
A practical example of this is where you have multicast IPTV traffic arriving from a ISP network into a home/office network. You generally have to filter that out and direct it over a separate network of some kind to every IPTV device, because if you just forward it directly onto the local subnet, inexpensive switches will broadcast everything to every port, which is a problem.
Does anyone really want to run a separate network to their set top boxes simply because link layer multicast is synonymous with link layer broadcast? It makes it difficult to watch television on ordinary desktops because they are connected to the wrong network, for example. Perhaps inexpensive Ethernet switches will implement IGMP snooping in the future for this reason. It isn't common yet though.
Posted Feb 21, 2013 4:03 UTC (Thu)
by foom (subscriber, #14868)
[Link]
E.g. Netgear GS108 ($53) has no IGMP snooping, while Netgear GS108T does ($80).
Posted Feb 17, 2013 21:01 UTC (Sun)
by ibukanov (subscriber, #3942)
[Link]
Once I talked to a person at Opera who had been involved in their networking stack. Their approach was to make sure that they could always restart connections in non-pipelining mode at the first sign of trouble. And trouble could mean unexpected timing in a response, suspicious header content or ordering, etc. He claimed that after a couple of years they had gathered enough information to make this work.
Applying such extensive blacklisting at so many implementation levels to Mozilla's networking stack would require a very substantial code rewrite.
Posted Feb 17, 2013 3:32 UTC (Sun)
by jimmyj (guest, #89388)
[Link] (5 responses)
Posted Feb 17, 2013 18:03 UTC (Sun)
by heijo (guest, #88363)
[Link] (2 responses)
Developers probably check the code, but if they are all conspiring together across companies we are likely fucked unless some random guy happens to be looking and notices.
Anyway, the real issue is that you can apparently make $50-100k for an exploitable bug, so there's quite an incentive for individual developers to put them in or otherwise not report them.
Posted Feb 18, 2013 0:29 UTC (Mon)
by butlerm (subscriber, #13312)
[Link] (1 responses)
Posted Feb 18, 2013 8:46 UTC (Mon)
by deepfire (guest, #26138)
[Link]
In other words, it is really this bad, that the use of this word is completely warranted in this situation.
Posted Feb 17, 2013 21:16 UTC (Sun)
by ibukanov (subscriber, #3942)
[Link]
Posted Feb 19, 2013 4:33 UTC (Tue)
by rgmoore (✭ supporter ✭, #75)
[Link]
Posted Feb 18, 2013 22:52 UTC (Mon)
by SecretEuroPatentAgentMan (guest, #66656)
[Link]
Sources (for feeding into your favourite translation system):
Patches: http://www.digi.no/911489/opera-vraker-egen-webmotor
Nothing major - and often worked around by simply using Site Preferences to spoof the user-agent. But it's not perfect.
Standards are meaningless with a single implementation.
Yeah, "a witty saying might allow your name to live forever" (c) anonymous.
But in reality Java works pretty fine across all supported platforms.
Come on. I realize that it's popular now to bash Java, but remember the 90s.
There was NO fast cross-platform language in 95-97.
Java provided a way to create programs that run without large changes on pretty much all major platforms (even classic Mac OS).
You're projecting.
Which ones (with cross-platform GUI frameworks)?
A lot of technical people were thrilled with Java, because it FINALLY allowed them to write large programs without spending hours compiling stuff or waiting for a Perl interpreter to muddle through the code.
I know only of Tcl/Tk which requires, shall we say, quite a bit of getting used to.
Java compilation was fast enough not to care about it. And JVM has had a JIT compiler since 96.
Blergh. I remember hours of debugging trying to find where it concatenated strings incorrectly. Thanks, but no thanks.
Thankfully, it's getting used less and less.
And JVM has had a JIT compiler since 96.
If you have just one implementation then interoperability is achieved automatically.
effect on open web
Just like XFree86!
Or gcc's failure to be a library!
Webkit not well taken care of
What to do about it
It's hard to ignore a browser which drives over 20% of traffic. You may as well say "I am still kinda surprised anybody cares about the iPhone" (which has similar penetration in many countries).
You are Russian, I suppose?
Otherwise, how hard did you have to dig before you found a site statistic like that?
No. Opera isn't anything remotely like the iPhone in terms of popularity or widespread usage.
Didn't you hear? There are no versions any more! All anyone cares about is that things are up-to-date. That's why Chrome and Firefox silently update and why the Firefox devs wanted to remove the version number from the About dialog. Even the HTML5 spec doesn't have a version, it's a "Living document"--live in the now, man! Versions are so last decade.
multicast
Link Layer Multicast
I don't think it's malware and spyware that are the greatest threat. The risk of being caught is too high and the damage to the reputation of the guilty party would be too great for anyone reputable to risk it. The big danger is in subtle but exploitable bugs that can be plausibly blamed on sloppy coding rather than malicious intent.
and: https://bugs.webkit.org/show_bug.cgi?id=15553
Redundancies: http://www.digi.no/911787/opera-sendte-90-paa-doren
