Object-oriented design patterns in the kernel, part 1
Posted Jun 2, 2011 22:08 UTC (Thu) by chad.netzer (✭ supporter ✭, #4257)
I'd bet a kernel project that was started from scratch using C++ could probably *not* sustain the rate of development that Linux has seen, given the nature of its contributor base. Those projects you mentioned all started in C++ (AFAIK), typically with a small, focused team of programmers, all commercially funded (except LLVM, which had research funding initially), whereas Linux had an explosion of early volunteer contributors. And I'm not completely sure, but it seems that Linux still has the widest range of commercial and volunteer contributors compared to any of the other projects you mentioned. It's just not comparable; a C++ codebase *would* drastically affect the set of contributors.
Posted Jun 2, 2011 22:23 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
Any real refactoring of Linux-sized codebases MUST be gradual. Hypothetically, I'd first clean up kernel source to allow C++ drivers, introduce kernel-side C++ library and then slowly convert subsystems to it.
Though by now it might not get us much benefit, I admit. A lot of kernel code is fairly conservative and rewriting it just for the sake of rewriting won't get us anything useful.
>I'd bet a kernel project that was started from scratch using C++ could probably *not* sustain the rate of development that Linux has seen, given the nature of its contributor base. Those projects you mentioned all started in C++ (AFAIK), typically with a small, focused team of programmers, all commercially funded (except LLVM, which had research funding initially), whereas Linux had an explosion of early volunteer contributors.
KDE started the same way - an explosion of contributors working on the same goal. By now it's comparable in size with the Linux kernel. Ditto for Haiku OS (though it can't compare with Linux).
At smaller scales, we're seeing addition of C++ to the Mesa project right now (new shader compiler and optimizer is written in C++) and it seems to be working out fine. Though they use it in C-with-classes fashion, mainly for the 'Visitor' pattern.
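The "C with classes" Visitor style described above can be sketched roughly like this (hypothetical class names, not Mesa's actual ones): node classes expose a single accept() hook, and each compiler pass is a visitor subclass, so new passes can be added without touching the IR classes.

```cpp
// Forward declarations of the node types the visitor can handle.
class ir_constant;
class ir_add;

// Abstract visitor: one virtual hook per IR node type.
class ir_visitor {
public:
    virtual ~ir_visitor() {}
    virtual void visit(ir_constant *node) = 0;
    virtual void visit(ir_add *node) = 0;
};

// Base IR node: each subclass dispatches to the matching visitor hook.
class ir_instruction {
public:
    virtual ~ir_instruction() {}
    virtual void accept(ir_visitor *v) = 0;
};

class ir_constant : public ir_instruction {
public:
    explicit ir_constant(int value) : value(value) {}
    void accept(ir_visitor *v) { v->visit(this); }
    int value;
};

class ir_add : public ir_instruction {
public:
    ir_add(ir_instruction *lhs, ir_instruction *rhs) : lhs(lhs), rhs(rhs) {}
    // Traverse children first, then report this node.
    void accept(ir_visitor *v) {
        lhs->accept(v);
        rhs->accept(v);
        v->visit(this);
    }
    ir_instruction *lhs, *rhs;
};

// A concrete pass: counts constants without modifying any node class.
class count_constants : public ir_visitor {
public:
    count_constants() : count(0) {}
    void visit(ir_constant *) { count++; }
    void visit(ir_add *) {}
    int count;
};
```

Running a pass is then just `tree->accept(&pass)`; the double dispatch through accept()/visit() picks the right overload for each node.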
There are some case studies on transitioning large codebases from C to C++: http://www.bleading-edge.com/Publications/C++Report/v9507...
Posted Jun 2, 2011 23:07 UTC (Thu) by chad.netzer (✭ supporter ✭, #4257)
But KDE was also a collection of many independent, even orthogonal, applications, such that developers could work on their piece without any effect on another (i.e. less code merging across larger groups, etc.). Even the dependent pieces were often linked together by abstractions at a higher level than the language (object brokers, interprocess communication protocols, etc.), right? So the real core C++ libraries, the ones linked into programs directly, involved a much smaller group of developers and contributors (I'm guessing). And furthermore, a significant portion of that core was based around the commercially developed Qt, which started completely as C++ (and hell, even extended it) and was built by a small, tight team. KDE is definitely a great example of a large and active C++-based project, but still, I think, a *very* different contribution model than a community-driven C++ monolithic kernel would be. It all comes down to merging, and merging lots of separately contributed C++ code requires a lot more pre-planning, design, discipline, coordination, and review *just at the language level* than C does (imo).
However, my summary of KDE could be wrong, so someone please correct me if so. I've not used it much personally.
Posted Jun 2, 2011 23:20 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
Ah, and I've forgotten about another large C project switching to C++ - http://gcc.gnu.org/wiki/gcc-in-cxx
Posted Jun 3, 2011 2:34 UTC (Fri) by chad.netzer (✭ supporter ✭, #4257)
Sure it is. Certainly in the classic sense of not having memory access isolation between all services, drivers, etc. The Linux kernel may be modular, and have some kernel threads, but it is "exactly monolithic" by the standard definition, is it not?
Thus, it isn't paranoia on the part of the developers to use a language that allows one to fairly easily see what memory accesses are occurring on a roughly line per line basis (such as C).
> Ah, and I've forgotten about another large C project switching to C++ (gcc)
That's interesting, because the gcc development seems so different from Linux's; things like the copyright assignment provisions may (or may not, I'm speculating) affect the contributor pool in a way that makes the transition more practical. In any case, compilers work at a different level, so the memory access abstractions of C++ aren't so objectionable. In fact, I suspect they have more data structures and inheritance possibilities that would benefit directly from it, and it's probably a good choice for them. But I don't think that necessarily translates into a reason for a kernel project to do the same.
In any case, while I disagree with daglwn's assertion that Linus was "flat out wrong" about C++, it's exciting to see projects that implement a kernel in a language like C++ (or D, or Go, etc.) to know how the language features influence design decisions and ease of implementation, to see if it really matters.
Posted Jun 3, 2011 14:46 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
I don't mean 'monolithic' in the sense of 'monolithic vs. microkernels'. I meant it in the sense of 'one giant C file vs. modular code'.
The Linux kernel is divided into subsystems which are pretty independent: the network layer doesn't really care about DRI, for example.
>In any case, while I disagree with daglwn's assertion that Linus was "flat out wrong" about C++, it's exciting to see projects that implement a kernel in a language like C++ (or D, or Go, etc.) to know how the language features influence design decisions and ease of implementation, to see if it really matters.
It doesn't look like C vs. C++ matters much in kernel development (look at the L4 kernel, for example).
Posted Jun 3, 2011 17:49 UTC (Fri) by chad.netzer (✭ supporter ✭, #4257)
Well, that's confusing then. The term already has meaning in kernel discussions, and you were responding to *my* usage of the term. But, ok.
The concerns I mentioned still apply for pretty independent codebases: if you intend to build the whole kernel with C++, you have to deal with all the legacy C code and interaction issues (function namespaces, type incompatibilities, etc.), so as you said it must be gradual. But if it's gradual, you now have a complicated mixed build system and have to worry about how to interact across the C/C++ layers (since there is no language-agnostic "message passing" layer for the components, like a microkernel would have). It could be done, I'm sure; it's just a matter of what is motivating it.
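The function-namespace issue here is C++ name mangling: a C++ compiler encodes parameter types into symbol names, so C code can't link against them directly. The standard bridge is an extern "C" guard in the shared header; a minimal sketch (with a made-up function name) looks like this:

```cpp
// A header shared between C and C++ translation units. The __cplusplus
// guard makes the extern "C" block invisible to a C compiler, while a
// C++ compiler uses it to emit the unmangled symbol "scale_value".
#ifdef __cplusplus
extern "C" {
#endif

// Callable from both sides of the C/C++ boundary. (Hypothetical name.)
int scale_value(int x, int factor);

#ifdef __cplusplus
}
#endif

// The definition (in a .cpp file) may use C++ freely internally,
// as long as the exported signature stays C-compatible.
extern "C" int scale_value(int x, int factor) {
    return x * factor;
}
```

This only covers linkage; type incompatibilities (C++-only types in structs, differing struct layouts under different language rules) still have to be kept out of the shared headers by hand.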
It might make sense if there already existed some well-tested code bases that were worth integrating; let's say hypothetically that ZFS had been released as GPL years ago, but its implementation was in C++. Then I could see dealing with the pain (or attempting it), rather than a rewrite.
> It doesn't look like C vs. C++ matters much in kernel development (look at the L4 kernel, for example).
Well, L4/Fiasco *is* a microkernel, with a well defined message passing ABI, built by a small team, and is *tiny*. But it demonstrates that design of the OS is the much bigger issue than language implementation for the most part. The language issue really matters more (imo) from a community and potential contributor perspective.
Posted Jun 3, 2011 8:48 UTC (Fri) by tialaramex (subscriber, #21167)
In their kernel they're using a subset of 1990s C++ that gives them much less functionality than the C++ aficionados have been talking about in these comments.
All non-core components use only C APIs even though they may be in C++. So many of the examples mentioned for Linux would still have to be done the same way since they're available in blessed module APIs for Haiku.
And despite a decade's work what they have is basically a BeOS clone. Nasty shortcuts to rush BeOS to market before Be Inc. ran out of money, faithfully reproduced. That goes from big picture things like no privilege separation and no power management to little annoyances like no real hot plug (they have a hack that lets them hotplug USB devices by first loading all the drivers they might want...)
Posted Jun 3, 2011 13:07 UTC (Fri) by cmccabe (guest, #60281)
*Everyone* is using some kind of subset of C++.
Firefox and Chrome, as well as WebKit, use -fno-exceptions and -fno-rtti. This is a pretty important design choice because it means that you can't do things that can fail in your constructors, since constructors have no way to report errors except by throwing exceptions.
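The usual workaround in exception-free codebases is to keep the constructor trivial and move all fallible work into a static factory that can return NULL. A minimal sketch (hypothetical class, not from any of the projects named above):

```cpp
#include <cstddef>
#include <new>

// With exceptions disabled, a constructor that might fail is replaced
// by a private, infallible constructor plus a factory that reports
// failure by returning NULL instead of throwing.
class Buffer {
public:
    static Buffer *Create(size_t size) {
        if (size == 0)
            return NULL;                      // invalid request
        char *data = new (std::nothrow) char[size];
        if (data == NULL)
            return NULL;                      // allocation failed
        Buffer *b = new (std::nothrow) Buffer(data, size);
        if (b == NULL)
            delete[] data;                    // don't leak on failure
        return b;
    }
    size_t size() const { return size_; }
    ~Buffer() { delete[] data_; }

private:
    // Kept trivial: nothing in here can fail, so nothing needs to throw.
    Buffer(char *data, size_t size) : data_(data), size_(size) {}
    char *data_;
    size_t size_;
};
```

Every caller now has to check the return value, which is exactly the C error-handling discipline the language was supposed to improve on.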
XNU, which became the basis of the Mac OS X kernel, uses a restricted subset of C++ that doesn't allow exceptions, multiple inheritance, or templates.
If projects do use exceptions, they all do it differently. Some old Microsoft APIs throw pointers to exceptions, which the caller must then manually delete. Most projects roll their own exception hierarchy. Sometimes they inherit from std::exception; other times not. Sometimes they throw other things entirely. I heard from a friend that his team is writing new code that throws ints! Yes, new code, written in 2010, that throws ints.
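To make the incompatibility concrete, here are two of the conventions mentioned above side by side (hypothetical names): a hierarchy rooted at std::exception, and code that throws a bare int. A generic `catch (const std::exception &)` handler silently misses the latter.

```cpp
#include <exception>
#include <string>

// Convention A: a project-specific hierarchy rooted at std::exception,
// so callers can catch (const std::exception &) generically.
class ParseError : public std::exception {
public:
    explicit ParseError(const std::string &msg) : msg_(msg) {}
    ~ParseError() throw() {}
    const char *what() const throw() { return msg_.c_str(); }
private:
    std::string msg_;
};

// Convention B: throw a bare error code. Only catch (int) will see it;
// a std::exception handler lets it escape.
void open_config(bool exists) {
    if (!exists)
        throw 42;   // new code that throws ints, as described above
}
```

Mixing libraries that follow different conventions means every call boundary needs catch clauses for both styles, or errors leak through.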
Some projects use char* almost everywhere, other projects use std::string. Qt has its own string class, which is supposed to be better at internationalization, that a lot of projects use. Some projects use a mix of all of this stuff. Some projects roll their own string class.
A lot of projects rolled their own smart pointer, or used one from boost, prior to the introduction of tr1::shared_ptr. Some of them work similarly, others not. Some projects barely use smart pointers; other projects use them almost everywhere.
*Everyone* is using some kind of subset of C++. Everyone is bitterly convinced that they are right and everyone else is wrong. When someone advocates "using C++," a legitimate question is "which C++"? When you add a new person to your team, you can expect to spend quite a bit of time getting him or her up to speed.
And of course, the different subsets of C++ don't interoperate that well at the library level. So when designing APIs, everyone just uses the lowest common denominator, which is C or something that looks almost exactly like it.
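The "lowest common denominator" API style being described is typically an opaque handle plus free functions with C linkage, with the C++ object hidden behind them. A minimal sketch (hypothetical names):

```cpp
#include <string>

// The C++ implementation, invisible to API consumers.
class Logger {
public:
    void log(const std::string &msg) { last_ = msg; }
    const std::string &last() const { return last_; }
private:
    std::string last_;
};

// The exported surface: an opaque handle and unmangled free functions,
// callable from plain C or from any C++ subset whatsoever.
extern "C" {
    typedef void *logger_t;

    logger_t logger_create(void) { return new Logger(); }

    void logger_log(logger_t h, const char *msg) {
        static_cast<Logger *>(h)->log(msg);
    }

    // Valid until the next logger_log() or logger_destroy() call.
    const char *logger_last(logger_t h) {
        return static_cast<Logger *>(h)->last().c_str();
    }

    void logger_destroy(logger_t h) { delete static_cast<Logger *>(h); }
}
```

Note that everything C++ gives you (RAII, type safety, the string class) stops at this boundary; the caller is back to manual create/destroy pairs and raw char pointers.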
Posted Jun 3, 2011 15:43 UTC (Fri) by daglwn (subscriber, #65432)
Not true, and if they are, they're Doing It Wrong.
Boost, for example, places no such restrictions on the project. Instead, members use vigorous code review to ensure quality. That is the right way to go because terrible interfaces get designed in every language every day. Restricting the set of allowed language features doesn't solve that problem, it exacerbates it.
Posted Jun 3, 2011 19:16 UTC (Fri) by cmccabe (guest, #60281)
Really? Let me ask you: when was the last time you wrote code that used throw specifications? Or the "export" keyword for templates? Or wide character streams (wchar_t)? Have you ever used protected inheritance?
Wake up and smell the coffee. You're programming in a subset of C++. You are no doubt convinced that your subset is "modern" and "progressive", whereas everyone else's is "backwards" and "old-fashioned". But it's still a subset.
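For readers who have never seen the feature asked about above: protected inheritance makes the base's public members protected in the derived class, visible to further subclasses but hidden from outside callers. A tiny illustration (made-up classes):

```cpp
// With protected inheritance, Engine's public interface is absorbed
// into Car's protected section: subclasses of Car may call start(),
// but outside code cannot, and a Car cannot be used as an Engine.
class Engine {
public:
    int start() { return 1; }
};

class Car : protected Engine {
public:
    // The class must deliberately re-expose anything it wants public.
    int ignite() { return start(); }
};
```

That obscurity is the point of the question: nearly every real codebase has quietly decided such corners aren't worth using.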
Posted Jun 3, 2011 21:09 UTC (Fri) by daglwn (subscriber, #65432)
The point is that the tools to use shouldn't be artificially restricted. If someone wants to use protected inheritance, let them as long as they can show why it's necessary or beneficial.
Posted Jun 4, 2011 1:07 UTC (Sat) by cmccabe (guest, #60281)
> The point is that the tools to use shouldn't be artificially restricted.
> If someone wants to use protected inheritance, let them as long as they
> can show why it's necessary or beneficial.
You didn't answer my question. When was the last time you used those features?
You say that programmers shouldn't be "artificially restricted" from doing things that are "necessary and beneficial", but those are weasel words. The reality is, you'll just define necessary and beneficial as whatever you've been doing. So if you've been throwing exceptions as pointers, it's obviously "necessary and beneficial" for the new code to do the same. If you haven't been using throw specs, obviously the new code shouldn't have them. But you're not using a subset of the language, oh no.
As a side note, what's with the gets() obsession in these programming language debates? I don't think I was even alive the last time someone used gets() in a real program.
Posted Jun 4, 2011 1:11 UTC (Sat) by cmccabe (guest, #60281)
Literally every comment has a completely different view of which C++ features are "evil." I don't think you can find even two distinct answers that agree. I can only imagine what a novice programmer, fresh out of school, would think after reading this :)
Posted Jun 4, 2011 5:52 UTC (Sat) by elanthis (guest, #6227)
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds