Care to share your stats?

Posted Jun 3, 2010 10:02 UTC (Thu) by khim (subscriber, #9252)
In reply to: Upstream first policy by neilbrown
Parent article: A suspend blockers post-mortem

From what I'm seeing, the companies which employ the 'upstream first' tactic routinely fail in the marketplace. And this is understandable: they cannot ship stuff when there is market demand for it - they are stuck with pleasing the upstream.

Sure, if you try to support your changes indefinitely it'll become a huge drain over time and you'll lose too - so you need to upstream your changes at some point. Sometimes a different solution is accepted, not what you proposed initially - but that's not a problem; the goal is to solve the problem end-users are having, not to have your code in the kernel. This is how Red Hat worked for a long time (till they got enough clout in the kernel community to muscle their changes through without much effort), this is how successful embedded developers work (including Google), etc. Novell tried to play the 'upstream first' game and the end result does not look good for the company (even if it may be good for the kernel).

If you have stats which show that 'upstream first' is indeed the best policy for the developers, please share them - I've certainly heard this claim often enough, but rarely, if ever, with numbers.

The only exception is "leaf" drivers which don't change any infrastructure at all and are usually accepted with barely a look - here upstreaming is so cheap that it really makes sense to do it.



Care to share your stats?

Posted Jun 3, 2010 10:25 UTC (Thu) by neilbrown (subscriber, #359) [Link]

Nope, no statistics. Just "a stitch in time saves nine" style anecdotal observations.

And it is only a long-term benefit. I can easily imagine a situation where the short term cost of going upstream-first would cause the business to fail so there is no possibility of a long term reward. But as soon as the horizon stretches out a bit, the more you get upstream the less you have to carry yourself.

Care to share your stats?

Posted Jun 3, 2010 13:00 UTC (Thu) by corbet (editor, #1) [Link]

So companies like Intel, which are very strongly in the "upstream first" camp these days (most of the time), are failing in the marketplace?

"Upstream first" is not a hard and fast rule. It's also not exactly "get the code into the mainline kernel first"; it's more along the lines of "be sure that the code can get into the mainline kernel first." There is a difference there.

I'm not sure I see "upstream first" holding back Novell. Citation needed. Instead, I see that the times they didn't do things that way (AppArmor) didn't work out all that well for them.

Care to share your stats?

Posted Jun 3, 2010 17:43 UTC (Thu) by jwarnica (guest, #27492) [Link]

Well, it seems that what counts as the "right thing" for a given company depends on what kind of market the company is in.

Component hardware companies typically don't sell software. Getting their new code into the kernel means *poof* they now have a bazillion systems that can use their hardware. It isn't to Intel's advantage to keep their own git repository somewhere. If I, as an end user of some Intel chipset, can't get it to work on my software far, far removed from Intel's repo, maybe next time I won't get a mobo with Intel Inside.

Appliance/embedded hardware companies, or OS companies, are a different story. Doing the globally "right thing" - "upstream first" - means they are slower to deliver their actual product, and (it should be noted) their actual product has less distinction than its competitors'. Sure, the patch may very well be GPL'd, but their competitors' patch, which was just thrown over the wall, is harder for someone to use than something upstream. In a sense, it may as well be a secret.

More simply: if the end user is likely to interact directly with a single vendor, then that vendor can put their patches wherever they want, and not running the gauntlet of the LKML is cheaper. If the end user is far removed from the provider, the provider should try to spread that patch far and wide, which means getting it into the upstream kernel.

So companies that do the globally "right thing" are rewarded by being slower, and less distinct, than those that don't.

Moving on:

I think part of the lesson here is that "be sure that the code can get into the mainline kernel first" is impossible to test. Until you actually submit code to the LKML, you have no idea what kinds of helpful, productive, petty, or absurd comments you will get in response. No one can predict with any level of accuracy whether something will be accepted until it actually shows up in a release.

Care to share your stats?

Posted Jun 4, 2010 12:46 UTC (Fri) by kpvangend (guest, #22351) [Link]

I don't think bringing in Intel as an example is fair or correct.
Intel can ship their processors without specific Linux support if they want to, and the Linux code is not inside the box they ship.

Doing feature development the way Intel or IBM can afford to has interesting dynamics. For starters: not much secrecy. Secondly, no time-to-market pressure. Thirdly, the freedom to pick the versions and platforms you want.

In contrast, most embedded vendors (and for now, I'm putting Google in that box, too) ship a Linux inside their box, running on some platform the software guys didn't choose.

If they take the time to merge their code upstream, they cannot ship.
And yes, many companies have failed by spending too much time in the community. Just compare the number of announcements on LinuxDevices.com with the amount of code merged and the number of products shipped.

When doing embedded development, your boss will only allow you a small window in which you can merge stuff upstream and benefit from it at the same time:
* after the prototype starts working
* before the code freeze happens
That period - in most cases I've seen, only a month or so - will quickly be over if you get push-back.
And then the madness of everyday work (bug hunts, etc) will draw you back inside your company.

Care to share your stats?

Posted Jun 11, 2010 21:00 UTC (Fri) by aliguori (subscriber, #30636) [Link]

Doing feature development the way Intel or IBM can afford to has interesting dynamics. For starters: not much secrecy. Secondly, no time-to-market pressure. Thirdly, the freedom to pick the versions and platforms you want.

I can promise you, there certainly is time-to-market pressure. And a publicly traded company cannot discuss products before they've been officially announced, so that does mean working with the community on a feature for a product that you can't talk about.

Care to share your stats?

Posted Jun 3, 2010 16:11 UTC (Thu) by anton (subscriber, #25547) [Link]

So I guess you are saying that "upstream first" costs more in opportunity costs (worse time-to-market) than releasing before upstreaming costs in additional maintenance work and in the extra effort of upstreaming later.

