Canonical Goes It Alone with Unity
Posted May 15, 2010 3:01 UTC (Sat) by drag (guest, #31333)
In reply to: Canonical Goes It Alone with Unity by jspaleta
Parent article: Canonical Goes It Alone with Unity
Which is to make Linux/Gnome usable and friendly. This is much tougher than most people assume it is.
Seriously... before Ubuntu came along, systems like Debian were about as comfortable for the average person to use as an inside-out softball shoe.
If Canonical can figure out how to turn that into a profitable enterprise then that will make me happier still.
As long as they stick with improving something, I am happy about it. Let the kernel heavy hitting, the GNU userland stuff, and the rest get done by people with much more expertise and desire. Canonical is better off expending their resources on something like this than on anything else in the OS.
Redhat cannot make a usable and friendly desktop system any better than Ubuntu can develop a top-notch virtualization infrastructure or file system.
In fact I think it's sad that Linux distributions are not able to benefit more from each other's work in a much more direct fashion. So much wasted effort, time, and opportunity re-doing everything that half a dozen other groups have already done. Ubuntu is guilty of this as much as anybody else, of course.
Posted May 15, 2010 8:51 UTC (Sat)
by bojan (subscriber, #14302)
[Link] (4 responses)
This user has zero technical background and wouldn't know the first thing about UI usability studies. Go figure.
Posted May 15, 2010 13:05 UTC (Sat)
by jwakely (subscriber, #60262)
[Link] (1 responses)
presumably as punishment for some awful crime they'd committed?
Posted May 16, 2010 12:14 UTC (Sun)
by bojan (subscriber, #14302)
[Link]
:-)
Seriously, no - just unavailability of a machine with Fedora/Gnome at that point in time.
Posted May 15, 2010 14:04 UTC (Sat)
by rvfh (guest, #31018)
[Link] (1 responses)
Posted May 16, 2010 12:13 UTC (Sun)
by bojan (subscriber, #14302)
[Link]
Posted May 15, 2010 13:55 UTC (Sat)
by paulj (subscriber, #341)
[Link] (26 responses)
As long as they stick with improving something, I am happy about it. Let the kernel heavy hitting, the GNU userland stuff, and the rest get done by people with much more expertise and desire. Canonical is better off expending their resources on something like this than on anything else in the OS.
Except those others then take great delight in bashing Canonical for not having, say, kernel-fs expertise.
Posted May 15, 2010 15:47 UTC (Sat)
by arjan (subscriber, #36785)
[Link] (25 responses)
If you want to provide commercial support to server customers, you need to have expertise in basically all the pieces of your stack. That doesn't mean you need people doing proactive development there (although that helps), but you need at least enough involvement that you can fix bugs...
... at least, rather than filing them in the Fedora bugzilla (that was hilarious).
Posted May 15, 2010 15:54 UTC (Sat)
by ewan (guest, #5533)
[Link] (7 responses)
Posted May 15, 2010 15:56 UTC (Sat)
by arjan (subscriber, #36785)
[Link] (6 responses)
Posted May 15, 2010 18:17 UTC (Sat)
by AlexHudson (guest, #41828)
[Link] (5 responses)
Posted May 15, 2010 19:32 UTC (Sat)
by seyman (subscriber, #1172)
[Link] (4 responses)
Because filing bug reports against 200+ distributions is an immense waste of time, both for the reporter and for the people doing triage on the 200+ bug trackers. Once you're able to reproduce the bug with an unpatched upstream, I think it's safe to consider all distributions to be suffering from the problem, and filing a bug upstream should be the only thing you need to do.
Note that Fedora has a policy that all upstream bugs should be taken care of in the upstream bug tracker. That's why bugzilla.redhat.com has an UPSTREAM resolution, and that's why this bug was closed with that resolution. The only thing filing it accomplished was wasting people's time.
Posted May 15, 2010 20:38 UTC (Sat)
by paulj (subscriber, #341)
[Link] (3 responses)
I see your point that the posting to the Fedora bug-tracker was redundant and perhaps a waste of people's time, so far as Fedora's processes go. However, it does NOT seem like Kees was trying to sneakily get RedHat to work on Canonical's bugs. All it looks like is that he's trying to raise awareness amongst the relevant technical people about a fairly serious ext4 performance regression, in an open, technical manner.
So the "Go fix your own vendor bugs, nyeeh nyah!" responses still don't sit quite right with me.
FWIW, I'm a general free Unix/Linux user. The logos and branding on my preferred Linux distro say "Fedora", but the software I use is maintained by engineers/hackers from a *variety* of vendors, including Canonical.
Posted May 16, 2010 4:18 UTC (Sun)
by jspaleta (subscriber, #50639)
[Link] (2 responses)
I would dare say that Ted's comment actually sort of encouraged, indirectly, Kees to do the additional posting in the hopes of getting Red Hat engineering resources interested in solving the problem on a mutually beneficial timescale.
The timeline, for reference:
2010-03-21: Launchpad bug filed
2010-04-14: Comment from Ted Ts'o
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/5436...
2010-05-04 20:36:58: kernel and Fedora bugs filed
Though I do sort of have to wonder why it took 3 weeks after Ted Ts'o confirmed it was happening with an upstream kernel for the upstream kernel report to be filed... and only after the Ubuntu-specific workaround was found to be insufficient.
-jef
Posted May 16, 2010 9:12 UTC (Sun)
by paulj (subscriber, #341)
[Link] (1 responses)
Isn't part of the benefit of Linux that it provides a way for commercial organisations to work semi-mutually to further the interests of *shared* code?
Again, these accusations appear less than solidly founded. It surely can not be good to start creating an atmosphere where people are afraid to talk to other developers of a project about a bug just because they work for a different vendor.
Posted May 17, 2010 0:50 UTC (Mon)
by bryce (guest, #16388)
[Link]
If you're looking at this and merely seeing evidence of attempts to collaborate, well, that's hardly fun and interesting. Try harder to blur the facts around to support some sinister conspiracy theory. That is a LOT more interesting, and sells a lot more ads. This whole talk about thinking from solid foundations is plain silly; everyone knows better than to do that.
But whatever you do, DON'T just go talk to the developer directly to get the actual facts. That makes it a *lot* harder to maintain all the lovingly crafted anti-Canonical memes we've got. Next you'll be saying Kees contributes to upstream or some other madness like that.
Posted May 15, 2010 16:41 UTC (Sat)
by paulj (subscriber, #341)
[Link] (14 responses)
I wonder whether someone can acquire sufficient expertise in some area of, say, the kernel fs to be able to quickly *fix* a problem without having, or developing, an inclination to keep working on that area. If they can't pursue that because their employer is focused elsewhere, they may move on. I.e. a distro is either going to have to:
a) acquire its own broad-based expertise, replicating the expertise of every other major distro. I'm not sure this is scalable, but even if it is it may be wasteful.
b) work out agreements to get support from every other vendor
c) be at the mercy of those other vendors
Further, I have to say the public ridiculing by one vendor's employees of another vendor for filing bugs with their *community* bug-tracker left a very bad taste in my mouth. It smacks of all the worst vendor tribalism of the old Unix war days.
If someone can replicate a bug on Fedora, why shouldn't they be able to file it with Fedora, if that's where the maintainers of that software are most focused? Why should it matter /who/ files the bug? Shouldn't it be judged on its technical merit, as a record of fact?
Posted May 15, 2010 18:36 UTC (Sat)
by ewan (guest, #5533)
[Link] (3 responses)
Because at the point that you can replicate it in multiple distributions it's clearly an upstream bug, not a Fedora bug.
Besides which, I'm not sure that's the point - surely the problem is that Canonical is selling people contracts in which they agree to support Ubuntu, despite (apparently) not having the necessary expertise to actually do that. Would you buy a support contract from them if all they're going to do with the hard problems is file a bug with Redhat? That's an awfully expensive way to avoid having to deal with bugzilla.
Posted May 16, 2010 10:36 UTC (Sun)
by AlexHudson (guest, #41828)
[Link]
You wouldn't want to be doing this across the board, but complaining that someone has tested your distro for a bug, and then filed a report when they found it, seems like extremely bad manners to my mind.
Posted May 20, 2010 7:59 UTC (Thu)
by jschrod (subscriber, #1646)
[Link]
AFAIR, Kees opened an upstream bug before. He included a link to that in his RH report.
FTR: I use neither RHEL, Fedora, nor Ubuntu on my company or personal systems; I use openSUSE. So I'm not partial to any party. But having read about this storm in a teapot, I side with Kees and find your and others' accusations distasteful, as they leave out a very important part of the picture: a non-kernel developer who wanted to raise awareness of a serious bug, went to great lengths to develop test cases, and confirmed that it is an upstream bug by testing the problem on another distribution, now gets flamed for all his activity.
Obviously he is only allowed to do so after Canonical has hired more kernel developers. That's ridiculous.
Posted May 26, 2010 21:32 UTC (Wed)
by BackSeat (guest, #1886)
[Link]
"Because at the point that you can replicate it in multiple distributions it's clearly an upstream bug"
No. At the point when you can reproduce it with upstream source (as was the case here) it's clearly an upstream bug.
Posted May 16, 2010 15:25 UTC (Sun)
by acathrow (guest, #13344)
[Link]
If the goal was to file the bug so Fedora could fix it, then I'd say there was a problem.
But the only person who knows the motivation is the person who filed the bz.
Posted May 22, 2010 6:58 UTC (Sat)
by riteshsarraf (subscriber, #11138)
[Link] (8 responses)
In the early days there was no problem, because Red Hat served upstream and could not live without these "Community Users". These users were invaluable testers and bug triagers.
But things have changed now. Now it is a TTM (time-to-market) game: every vendor that finds a bug or a fix wants to keep it a secret until its release. All vendors effectively maintain forks and carry them for the entire lifecycle.
Novell changed the definition of "Linux Enterprise Product" and Canonical slapped all who said "Linux is not yet ready for Desktop".
There does not seem to be a "Community".
There's also the war of "I must be the upstream for everything I ship in my product". So, if you are not the upstream of something you ship: drop it, fork it, re-label it, re-invent it, and then re-ship it.
It will be interesting to see how well, and for how long, the GNU/Linux community model can sustain itself going forward.
Posted May 22, 2010 21:49 UTC (Sat)
by dlang (guest, #313)
[Link] (7 responses)
Posted May 22, 2010 21:56 UTC (Sat)
by rahulsundaram (subscriber, #21946)
[Link] (6 responses)
Posted May 22, 2010 22:02 UTC (Sat)
by dlang (guest, #313)
[Link] (5 responses)
I remember trying to run some commercial closed-source apps and finding that they would only run on the Red Hat kernel, not on _any_ vanilla kernel, because Red Hat had added features that were not upstream and never did go upstream.
Posted May 23, 2010 1:46 UTC (Sun)
by rahulsundaram (subscriber, #21946)
[Link] (4 responses)
Posted May 23, 2010 2:05 UTC (Sun)
by dlang (guest, #313)
[Link] (1 responses)
The new 2.6 development model was created specifically to address these sorts of problems.
I'm not saying that when RedHat implemented a feature they did so to deliberately lock users in to their flavor of Linux; but when the upstream developers went in a different direction and didn't accept what RedHat had already shipped to customers (and had companies like Oracle depending on), that's the effect it had.
Posted May 23, 2010 3:12 UTC (Sun)
by rahulsundaram (subscriber, #21946)
[Link]
Posted May 23, 2010 15:26 UTC (Sun)
by corbet (editor, #1)
[Link] (1 responses)
TUX is a clear example of a kernel feature shipped by Red Hat which was never upstreamed - or even attempted to be upstreamed. I also had fun in the first revision of the driver book because there were RH-specific API features in their kernel.
Whether competition has anything to do with any perceived changes is not clear to me, though. I suspect it has more to do with both the company and the communities it works in growing up.
Posted May 24, 2010 18:30 UTC (Mon)
by foom (subscriber, #14868)
[Link]
RH's non-upstreamed TUX patches caused me pain recently(ish). It turns out TUX added a flag to the open syscall named O_ATOMICOPEN, using the next available O_ flag value after those used in upstream kernels. That flag was never upstreamed. Upstream then added a flag to open called O_CLOEXEC and, of course, also used the next available flag value.
So, new binaries which attempted to use O_CLOEXEC (even when they tested for its existence before using it) would fail badly when open returned some completely unexpected random error value...
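To make the failure mode concrete, here is a minimal sketch of the collision. The numeric flag values below are illustrative assumptions, not the actual constants those kernels used:

    #include <stdio.h>

    /*
     * Sketch of the bit collision described above: a vendor kernel
     * grabs the next free O_ bit for a private flag, and mainline
     * later hands the very same bit to a new public flag.
     */
    #define VENDOR_O_ATOMIC  0x02000000u  /* vendor-only flag (assumed value) */
    #define NEW_O_CLOEXEC    0x02000000u  /* later mainline flag (assumed value) */

    int main(void)
    {
        /*
         * A binary built against newer headers passes NEW_O_CLOEXEC to
         * open(), expecting close-on-exec behaviour.  On the vendor
         * kernel the same bit selects the vendor feature instead, so
         * the call can fail with an errno the program never expected.
         *
         * A compile-time test like "#ifdef O_CLOEXEC" cannot catch
         * this: the macro comes from userspace headers, while the
         * meaning of the bit is decided by the running kernel.
         */
        if (VENDOR_O_ATOMIC == NEW_O_CLOEXEC)
            printf("bit 0x%08x means two different things "
                   "depending on the kernel\n", NEW_O_CLOEXEC);
        return 0;
    }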
Posted May 17, 2010 22:06 UTC (Mon)
by tzafrir (subscriber, #11501)
[Link]
This blog post by Kees seems relevant.
Posted May 17, 2010 22:07 UTC (Mon)
by paulj (subscriber, #341)
[Link]
Posted May 16, 2010 12:39 UTC (Sun)
by marcH (subscriber, #57642)
[Link] (3 responses)
Yeah, yeah...
Now compare to the old days before the Internet and widespread open source, when proprietary was the norm, and rejoice!
Like most professionals, I guess you have already witnessed much worse: developers re-doing what has already been done *in their own company*! That's just how things work; most developers feel that it takes more effort to collaborate than to just do it (sometimes they are right). And you get less credit when collaborating. A pity, but c'est la vie.
Posted May 17, 2010 18:56 UTC (Mon)
by drag (guest, #31333)
[Link] (2 responses)
------------------------------------------
The thing I have discovered over the years of being a Linux user, and then a professional working with Linux, is that the differences between Linux distributions are negligible on a technical level. There is, basically, nothing that I can do in Redhat or CentOS that I cannot do in Debian or Ubuntu or Slackware.
The common advice people give to fix problems, which revolves around 'Oh, ZYX sucks, try using XYZ instead', is just about the worst and most inappropriate advice possible the majority of times it's used (not every time, but the majority of times).
The idea that having all these different approaches will yield a superior result in the long run is overrated. It was probably true at some point in the past, when the whole concept of 'what is a Linux OS' was still up in the air, but nowadays there is actually very little difference in approaches. Instead, the different value each distro offers is based on the level of support, the social environment, and the other policies of that distro.
My favorite example is the RPM file format vs the Deb file format. The RPM format, while it has its warts, is technically superior to the Deb format as far as I can tell. Yet if you were to ask a typical end user who used Redhat/Fedora in the past and uses Ubuntu now which is better or easier to use, they will generally tell you that the Deb format is better.
Why is this so?
It's because the technical features and advantages that RPM has at the lowest level are completely and totally overshadowed by the massive amount of effort, time, and dedication that the Debian folks put into making their package management work. There is nothing magical about how Deb or apt-get works; it's just a huge amount of work that makes it work.
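To underline the "nothing magical" point: a .deb is just a standard Unix ar archive holding a version marker plus control and data tarballs. A minimal sketch that lists the members of a .deb, with error handling trimmed for brevity:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * List the members of a .deb package.  A .deb is a plain Unix "ar"
     * archive: an 8-byte magic string followed by 60-byte member
     * headers, each preceding its data (padded to an even length).
     */
    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s package.deb\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        char magic[8];
        if (fread(magic, 1, 8, f) != 8 || memcmp(magic, "!<arch>\n", 8)) {
            fprintf(stderr, "not an ar archive\n");
            return 1;
        }
        char hdr[60];
        while (fread(hdr, 1, 60, f) == 60) {
            char name[17], size_str[11];
            memcpy(name, hdr, 16);           name[16] = '\0';
            memcpy(size_str, hdr + 48, 10);  size_str[10] = '\0';
            long size = strtol(size_str, NULL, 10);
            printf("%-16s %ld bytes\n", name, size);
            fseek(f, size + (size & 1), SEEK_CUR);  /* 2-byte alignment */
        }
        fclose(f);
        return 0;
    }

Run against any .deb this typically prints debian-binary, control.tar.* and data.tar.*; all the intelligence lives in dpkg and apt, not in the container format.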
And I think this applies to pretty much most of what makes up a Linux OS. There is some value to introducing new approaches and being different... but nowadays all new stuff that is successful or shows high levels of promise (like PulseAudio, Upstart, Dbus, KVM, etc.) needs to fit into existing systems in a minimally disruptive way, and needs to be applied across all major Linux distributions before its true value is realized.
Anybody (like Ubuntu or Redhat) introducing changes to just their own system, without first putting huge amounts of effort into making sure those changes are not only available for other distros to use but _actually_integrated_ into those other distros, will see a much diminished benefit from those changes. Very few application developers and few end users will be able to take full advantage of the improvements, because they cannot depend on those improvements being available elsewhere; unless, that is, they are first willing to completely abandon their ability (or end users' ability) to choose a different distro (which is what you see in enterprise-ish environments).
So for a Linux-based OS to be most successful has less to do with different approaches and much more to do with its ability to efficiently use 'human capital'. Developers' time and resources need to be used as efficiently as possible. Duplicate work is extremely inefficient... and creating technical differences between systems with almost no benefit to end users or application developers is inefficient.
Posted May 17, 2010 19:20 UTC (Mon)
by nix (subscriber, #2304)
[Link]
There is one difference: the administration frontends are radically different. You can avoid most of them and bash the config files directly, except that you can't avoid the package manager. With the advent of yum, simple operations ('upgrade everything', 'install this') are much the same on most major distros (even source-based ones such as Gentoo), and it only takes reading one manpage to get used to the differences between apt and yum on a simple-use level.
Posted May 18, 2010 5:17 UTC (Tue)
by pjm (guest, #2080)
[Link]
To anyone wishing to discuss this further: I suggest/request that you put your points somewhere useful, such as Wikipedia, and simply post a link here, so that any further discussion can take place there. The data files linked from http://kitenet.net/~joey/pkg-comp/ would be relevant to such a discussion.