
GNU + Linux = broken development model

Posted Jul 29, 2009 18:18 UTC (Wed) by mheily (guest, #27123)
In reply to: GNU + Linux = broken development model by me@jasonclinton.com
Parent article: A tempest in a tty pot

> For a counter example of how this "un-scalable" development model can actually work just fine--regardless of OS--look no further than the DRI2 and KMS changes coordinated across X.org, other user space and the Linux kernel, all at the same time with many hands involved in the work and with very little user-visible disruption.

Your counter-example actually illustrates my point. It is relatively easy to coordinate changes between the kernel and a single userspace project such as X.org. Once you try to make kernel changes that impact many userspace programs, it becomes very difficult to coordinate the necessary changes. The story of what happened with the Linux TTY changes is concrete evidence of the drawbacks of the split kernel vs. userspace development model.



GNU + Linux = broken development model

Posted Jul 29, 2009 18:23 UTC (Wed) by me@jasonclinton.com (✭ supporter ✭, #52701) [Link]

You ignored the part of my response where I pointed out that there are thousands of TTY consumers in *BSD. At this point, I'm just going to assume that you're trolling and not respond to this thread any further.

GNU + Linux = broken development model

Posted Jul 29, 2009 18:55 UTC (Wed) by mheily (guest, #27123) [Link]

Fine. Please ignore the rest of this comment because anything you don't agree with must be a troll. Continuing with the hypothetical example of TTY changes to a BSD-based operating system...

In order to coordinate changes to the kernel TTY code that potentially impact thousands of userland programs, you would need a team of developers and testers to go through the code looking for problems. First, you fix problems in the core userland programs (a.k.a. the "base system"). The source code for everything is under /usr/src, so you can start with:

$ find /usr/src -type f -exec grep -l '#include <pty.h>' {} \;

After you have a patch for the base system, you install the entire ports tree and repeat the search. Ports are installed under /usr/ports, so the command is:

$ find /usr/ports -type f -exec grep -l '#include <pty.h>' {} \;

Once you have a list of potentially problematic ports, you send out a notice to the ports maintainers and users of the -CURRENT branch asking them to test the affected ports against the experimental TTY patch for the base system. Some ports, such as Emacs, may rely on the old behavior, so they will need to be patched to work with the new kernel. Other ports may not need any changes at all.

Once all the changes are tested and reviewed (kernel, base system, and ports), the combined patchset is applied to the -STABLE branch for inclusion in the next stable release. All of this development and testing is costly, so hopefully the kernel changes were worth it :)

GNU + Linux = broken development model

Posted Jul 29, 2009 19:12 UTC (Wed) by bronson (subscriber, #4806) [Link]

KDE and Emacs, the two things that broke in the article, aren't a part of BSD. It's true that fixing code in ports is easy, just like patching a .deb or .rpm is easy.

That's not where the problem lies. Here are the hard parts:

- testing the app, figuring out whether there are bugs
- finding the bugs, fixing the bugs
- getting code review, regression-testing your fixes
- upstreaming your patches. When all they do is work around a new and obscure kernel bug, good luck with that!

And THAT is why the kernel <-> user space API should be stable. I like ports as much as anybody else, but they just don't help much here.

GNU + Linux = broken development model

Posted Jul 29, 2009 20:02 UTC (Wed) by nix (subscriber, #2304) [Link]

And if TTY code were that easy to grep for, maybe it would be simple: we have distributors who can point to such code.

But it is not. TTY users can be people who accept file descriptors via pipes and have no idea there are TTYs at the other end; they can be people who use the Unix98 or the old BSD pty interface (which still has users!); every use of ioctl() has to be audited; the signal handling in TTY users has to be checked; it ties in with process groups...

The TTY layer introduced in early BSD Unix is, let's be blunt, a bloody design mess, and a pervasive one. It's not a nice simple <pty.h> interface, by any means, although it should have been.
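
To make that concrete, here's a minimal sketch (a hypothetical program of my own, not any of the ones at issue) of a TTY user that a grep for <pty.h> would never find: it fiddles with the line discipline of an fd it simply inherited, using only <termios.h> and <unistd.h>:

/* Hypothetical example: a "TTY user" that never includes <pty.h>
 * and never opens a terminal itself. */
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios t;

    if (!isatty(STDIN_FILENO)) {
        fprintf(stderr, "stdin is not a tty\n");
        return 1;
    }

    /* Turn off echo on the inherited fd: exactly the kind of
     * line-discipline manipulation a kernel TTY audit has to find. */
    if (tcgetattr(STDIN_FILENO, &t) == -1)
        return 1;
    t.c_lflag &= ~ECHO;
    if (tcsetattr(STDIN_FILENO, TCSANOW, &t) == -1)
        return 1;
    return 0;
}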

Passing fds via pipes?!

Posted Jul 30, 2009 0:01 UTC (Thu) by i3839 (guest, #31386) [Link]

> TTY users can be people who accept file descriptors via
> pipes and have no idea there are TTYs at the other end

How is this possible? Did you mean unix domain sockets instead of pipes?

Passing fds via pipes?!

Posted Jul 30, 2009 0:37 UTC (Thu) by nix (subscriber, #2304) [Link]

Wrong way round. An fd to a TTY can be passed over a unix-domain socket and then used (which will trigger line discipline magic even though the app has no idea it's using it), so it's using a TTY even though it never opened it or looked at /dev/ptmx. (This is probably not common, but it makes a comprehensive audit of TTY users ridiculously hard, because use of AF_UNIX sockets *is* common and fd passing is not particularly rare. One variation of it, in which the fd is passed into the application as one of fds 0 to 2, is of course exceedingly common. You don't even need AF_UNIX sockets for that.)

GNU + Linux = broken development model

Posted Jul 29, 2009 22:46 UTC (Wed) by jond (subscriber, #37669) [Link]

...and then your user's custom code, not part of the OS, breaks when you release.

GNU + Linux = broken development model

Posted Jul 30, 2009 4:47 UTC (Thu) by daniels (subscriber, #16193) [Link]

On GNU userspace, you could use grep -r. ;)

(This is a blatant, content-free, troll.)

GNU + Linux = broken development model

Posted Aug 7, 2009 11:01 UTC (Fri) by efexis (guest, #26355) [Link]

"(This is a blatant, content-free, troll.)"

You kiddin'? That's the most helpful thing I've read in this thread! :-)

GNU + Linux = broken development model.

Posted Jul 29, 2009 19:02 UTC (Wed) by berndp (guest, #52035) [Link]

Your perception of the kernel development model is broken: it's not as if 2.6.30-rc5 (or 2.6.31) hits the desktop of Joe Plumber the day after publication on http://www.kernel.org/. Shipping kernels to users is the job of distributors (and it's the same in the BSD world: the BSD projects distribute the complete OS, so they should be compared to the Linux distributors).

Assuming that "kdesu" is broken, there is plenty of time for "kdesu" to get fixed (and pushed downstream on an expedited basis, because it's a bug fix!). Maintaining bug compatibility is simply not worth the effort. It is, IMHO, a violation of any sane open-source development model, where it's at least possible to *fix* bugs rather than maintain an enormous amount of bug-compatibility code, and the maintenance effort that goes with it, for ages. Why else would Windows need exponentially more resources with each release?

If someone wants eternal backward bug compatibility, please *do* it, but do not complain to others or even ask them to. Especially not for some random buggy app which just happened not to trigger a race condition before.

GNU + Linux = broken development model.

Posted Jul 29, 2009 19:32 UTC (Wed) by mheily (guest, #27123) [Link]

Sometimes it's not clear which codebase is "broken" and needs to be fixed. Alan Cox's changes may have been totally legal according to the RS-232 standards, but if Emacs and many other programs depend on the old behavior, it's difficult to say that all userspace programs must be changed. One person's "bug" is another person's "feature" :)

I do agree with Linus that increased kernel parallelism should avoid impacting userland code that depends (rightly or wrongly) on serialization.
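
To illustrate the kind of userland assumption I mean (my own contrived example, not kdesu's actual code): a reader that assumes each write() at the other end of a pty will arrive as exactly one read() works under one kernel buffering behavior and silently breaks under another, even though POSIX never promised any such thing:

#include <string.h>
#include <unistd.h>

/* Contrived example: treat each read() from the pty master as one
 * complete, newline-terminated message. If the kernel starts
 * splitting or coalescing writes differently, this "protocol" breaks
 * with no API change at all. */
int read_message(int master_fd, char *buf, size_t len)
{
    ssize_t n = read(master_fd, buf, len - 1);
    if (n <= 0)
        return -1;
    buf[n] = '\0';
    /* Fragile: assumes the writer's single write() == this one read(). */
    return strchr(buf, '\n') ? 0 : -1;
}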

GNU + Linux = broken development model.

Posted Aug 7, 2009 11:24 UTC (Fri) by efexis (guest, #26355) [Link]

"Why else should Windows needs exponentially more resources with each release?"

Sloppy development, a greater number of features we're grateful for, and a greater number of features that we're really, really not.

Windows uses subsystems for different API sets (whether it's DOS, POSIX, Win16, Win32, .NET, etc.) that are loaded on use, with backward compatibility confined within the subsystem, or even within a particular version of a library, or handled by live patching. MS are also always pushing their newest 'bestist' API sets, which means that if you're not running old software, you tend not to need to load the old APIs (you can have Windows run with Win16/DOS support completely removed, for example), so there's no extra resource requirement there... and if you are running old software, then of course you need the old support.

Win7 looks to have lower requirements than Vista (I can even run it on my laptop, something I couldn't dream of with Vista), but it's still much slower for basic functionality than my 2003 install. Basic stuff like recursive file metadata scanning and more progress bars (which require two-pass operations: one to find out how much work there is to be done, and a second to do it) slows down the UI. Services to control CPU scheduling for media applications add complexity there, as do the various annoyance^H^H^H^H^H^H^H^H^Hsecurity services. It's genuine complexity of the OS combined with less-than-optimal coding that makes it chug; backward-compatibility cruft really isn't as big a player as you might think.

