
Russell: On C Library Implementation

Rusty Russell has some suggestions for C library implementers on his blog. Among various other hacking efforts, Russell is behind the Comprehensive C Archive Network (CCAN). "3. Context creation and destruction are common patterns, so stick with "mylib_new()" and "mylib_free()" and everyone should understand what to do. There's plenty of bikeshedding over the names, but these are the shortest ones with clear overtones to the user. [...] 14. There's a standard naming scheme for C, and it's all lower case with underscores as separators. Don't BumpyCaps."
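As a minimal sketch of the context pattern described in point 3 (the library name and members here are hypothetical, not taken from CCAN):

/* mylib.h -- a hypothetical library following the mylib_new()/mylib_free()
 * convention: an opaque context that the library creates and destroys. */
struct mylib;                           /* opaque to callers */
struct mylib *mylib_new(void);          /* returns NULL on allocation failure */
void mylib_free(struct mylib *ctx);     /* safe to call with NULL */

/* mylib.c */
#include <stdlib.h>

struct mylib {
    int flags;
    char *scratch;
};

struct mylib *mylib_new(void)
{
    return calloc(1, sizeof(struct mylib));   /* NULL propagates to the caller */
}

void mylib_free(struct mylib *ctx)
{
    if (!ctx)
        return;
    free(ctx->scratch);
    free(ctx);
}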


My advice on implementing stuff in C:

Posted Oct 14, 2010 21:03 UTC (Thu) by HelloWorld (guest, #56129) [Link] (102 responses)

Don't. It's that easy :)

My advice on implementing stuff in C:

Posted Oct 14, 2010 21:34 UTC (Thu) by butlerm (subscriber, #13312) [Link] (80 responses)

If you want your compiler / parser / runtime library / image / compression / encryption / numeric processing code to run at a competitive speed there aren't many alternatives. Virtually every other language is implemented in C and uses a number of runtime libraries written in C for a reason.

We would all be better off if that could all be done in a comparably efficient statically compiled language with some semblance of pointer safety of course. Not too many people developing those these days, unfortunately.

My advice: implement stuff in D:

Posted Oct 14, 2010 22:49 UTC (Thu) by Ed_L. (guest, #24287) [Link]

Perhaps not too many people. But it only takes two, and the D community is by now far larger than that, including gdc, a front-end for gcc. See The D Programming Language by Andrei Alexandrescu. D seems extremely well thought-out, including "extern (C)" which (ahem) does exactly what one would hope, and expect...

My advice on implementing stuff in C:

Posted Oct 15, 2010 13:38 UTC (Fri) by HelloWorld (guest, #56129) [Link] (78 responses)

C++ has been around for a long time, and while it's not exactly a beautiful language, it does provide huge benefits over C. Templates is one, destructors (which facilitate resource management immensely) is another, and the list doesn't end there.
There are also numerous other languages, like Go, Rust or D, which are all comparable to C in terms of performance (and miles ahead in most other respects).

My advice on implementing stuff in C:

Posted Oct 15, 2010 14:00 UTC (Fri) by mjthayer (guest, #39183) [Link] (76 responses)

> C++ has been around for a long time, and while it's not exactly a beautiful language, it does provide huge benefits over C.

Perhaps it is just me, but C++ always seems to me like a poisoned chalice. It has lots of useful and powerful features, but they all seem to come with lots of fine print, so that by the time you have made them work the way you want and debugged all the hidden edge cases, you have spent more time than you would have needed with C. For example:

C++ classes with their automatic destruction when they go out of scope (if they are on the stack, which one quickly gets into the habit of overusing) - very useful, but you also have to get your head around exceptions and copy constructors, which are firmly embedded in them.

Templates are also very powerful but have a tendency to start taking over all your programming time before you get them right.

Exceptions themselves would also be great, except that they are also horribly hard to do right, witness the number of long papers telling you that "exceptions are easy, just follow this methodology to use them".

Or typecast operators, which are very hard to predict, especially as the compiler may decide to cast an object to an intermediate form (without telling you which) before casting it to its final incarnation.

Anyone feel like adding to the list?

My advice on implementing stuff in C:

Posted Oct 15, 2010 14:09 UTC (Fri) by dskoll (subscriber, #1630) [Link] (39 responses)

I agree with the comments about C++. It's a horrible, horrible language that doesn't know if it wants to be object-oriented or not. Other brokenness besides that mentioned in the parent:

Virtual Base Classes. A real WTF if ever there was one.

Insanely complex rules about how overloaded functions are picked, what you can and can't do in constructors/destructors, etc. that mostly come about because of compiler design constraints rather than intentional language design.

RTTI. Doesn't that go against OO design?

My advice on implementing stuff in C:

Posted Oct 15, 2010 15:16 UTC (Fri) by nix (subscriber, #2304) [Link]

I think I agree with michaeljt's comments more than yours. Most of the things you complain about are either very rare (virtual base classes) or provided by every other OO language (RTTI) *and* rare in C++ code.

The overloaded function resolution rules aren't all that bad... however, the name lookup rules all added together are fiendish: add 'export' and I can understand people's brains dribbling out of their ears.

My advice on implementing stuff in C:

Posted Oct 15, 2010 16:57 UTC (Fri) by HelloWorld (guest, #56129) [Link] (2 responses)

A language doesn't need to choose if it's object-oriented or not, it can support many different programming styles. I think that this is a good thing, and many other people seem to think so too. For example, Python is also a multi-paradigm language. It allows you to do imperative, object-oriented and functional programming, yet nobody seems to complain about it.

Also, I don't see your point about virtual inheritance. It was added to the language in order to solve the diamond inheritance problem, which occurs very rarely anyway, and it does solve that problem for me. If you absolutely can't stand the feature, then just avoid it by avoiding diamond inheritance.

My advice on implementing stuff in C:

Posted Oct 15, 2010 19:20 UTC (Fri) by mjthayer (guest, #39183) [Link] (1 responses)

> Also, I don't see your point about virtual inheritance. It was added to the language in order to solve the diamond inheritance problem, which occurs very rarely anyway, and it does solve that problem for me.

I'm wondering if I understood dskoll correctly, but pure virtual classes are actually one of the things in C++ that I find good without having to qualify. I find that inheriting behaviour often makes code harder to understand without a good reason.

I do like the way go handles this with its semi-implicit interfaces (from reading about it, not actually programming in it). But of course go is still too young and little used for people to have discovered its warts.

My advice on implementing stuff in C:

Posted Oct 18, 2010 0:10 UTC (Mon) by HelloWorld (guest, #56129) [Link]

> I'm wondering if I understood dskoll correctly,

I don't think so. I think that dskoll was talking about virtual inheritance. Say, you have code such as this:
struct A { int a; };
struct D1 : A { int d1; };
struct D2 : A { int d2; };
struct B : D1, D2 { int b; };
Then, B will inherit A::a twice, once from D1 and once from D2. So, inside B, you'll actually have two fields named a, and if you want to use one of them, you always have to use full qualification, that is, you have to write D1::a or D2::a instead of just plain a. Virtual inheritance solves this problem. If you change the code as follows,
struct A { int a; };
struct D1 : virtual A { int d1; };
struct D2 : virtual A { int d2; };
struct B : D1, D2 { int b; };
then the two A sub-objects in B will be collapsed to a single one. It's also explained in the C++ FAQ lite: http://www.parashift.com/c++-faq-lite/multiple-inheritance.html

My advice on implementing stuff in C:

Posted Oct 15, 2010 17:19 UTC (Fri) by HelloWorld (guest, #56129) [Link] (33 responses)

Oh, and by the way, if you honestly think that rules such as "Don't call virtual functions in constructors or destructors. Don't throw exceptions in a destructor." are "insanely complex", then perhaps you shouldn't be programming at all (at least not low level programming, which is what C++ is about). There are very good reasons for these rules, and you probably wouldn't be complaining if you had understood them.

You do have a point about the name lookup rules, but at least some of those problems ultimately come from C, for example the "implicit int" rule from C89 assigns a meaning to some constructs that could otherwise be made illegal (and will be made illegal in the next C++ standard). Also the rule that a function declaration must come before the first call to the function comes from C. It's actually funny to see how almost every design mistake in C++ can be traced back to some kind of fuckup in C.

My advice on implementing stuff in C:

Posted Oct 15, 2010 17:28 UTC (Fri) by dskoll (subscriber, #1630) [Link] (18 responses)

> There are very good reasons for these rules, and you probably wouldn't be complaining if you had understood them.

I understand the rules perfectly. They came about because Stroustrup et al. threw everything + the kitchen sink into C++. Then when the dust settled, they discovered all kinds of corner-cases that needed clarification, or weird combinations that just don't work so need to be forbidden, or silly rules that came about because of how C++ compilers must be implemented.

I've programmed since 1982 (professionally since 1990) and used C and C++ extensively. While it's obvious that C was designed carefully and thoughtfully and the C standardization committees have done a stellar job, it's also obvious that C++ was a "yeah, throw in that feature!" design followed by "Oh crap... now we have to document the weird corner-cases."

My advice on implementing stuff in C:

Posted Oct 15, 2010 18:25 UTC (Fri) by HelloWorld (guest, #56129) [Link] (17 responses)

Oh, another victim of the C hacker syndrome. Please get well soon :)

My advice on implementing stuff in C:

Posted Oct 15, 2010 21:33 UTC (Fri) by dskoll (subscriber, #1630) [Link] (1 responses)

What did I write (in grandparent to this comment) that isn't factual?

My advice on implementing stuff in C:

Posted Oct 15, 2010 22:36 UTC (Fri) by HelloWorld (guest, #56129) [Link]

Err, like, everything? Aside from the statement that you've been programming since 1982, all of what you've written is not a fact but your personal opinion, and it's also so vague that it's basically impossible to disprove it, making any discussion pointless. The good thing is that I don't need to convince you, which is why this comment ends here :)

My advice on implementing stuff in C:

Posted Oct 15, 2010 23:21 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (9 responses)

Why does the author of this essay not sign his name to it? It took some hunting to find the presumed author.

In any case, C++ has had 25 years to convince the "C hackers" that it is the logical and valid step up from C. It should stand on its own merits, and not need a bunch of evangelists *still* trying to convince people from year to year. It isn't just stubbornness; even after all this time, there are many valid reasons not to use it. I mean, if you set out to design a language from scratch, today, who in their right mind would come up with C++?

It would be much more convincing (IMO) to simply tell the C hackers to skip C++, and start using one of the languages that has learned the lessons of C++, and is trying to replace it. And in fact, many organizations have been essentially doing that exact thing throughout the lifetime of C++. It's quite telling...

My advice on implementing stuff in C:

Posted Oct 15, 2010 23:34 UTC (Fri) by dlang (guest, #313) [Link] (8 responses)

what language 'learned the lessons of C++' and is so clearly superior?

the fact that C++ is still being used so heavily (in spite of people saying for 25 years that it is junk and should be skipped in favour of a 'real' language ;-) makes me believe that all of these next-generation languages are missing something.

companies keep trying to ignore C++, but they aren't succeeding so spectacularly that C++ is dying away.

My advice on implementing stuff in C:

Posted Oct 16, 2010 1:25 UTC (Sat) by dskoll (subscriber, #1630) [Link] (6 responses)

> what language 'learned the lessons of C++' and is so clearly superior?

C++ is "good enough" for many purposes, and it's certainly possible to write decent C++. The problem is that you have to restrict yourself to a subset of the language and have a very disciplined programming team with strict style guidelines.

Once you start going mad with esoteric C++ features (or even not-so-esoteric ones like templates), your code can become unreadable and unmaintainable.

Too bad Objective-C didn't catch on more than C++. I think it is a much better language if your goal is "C with objects".

My advice on implementing stuff in C:

Posted Oct 16, 2010 10:59 UTC (Sat) by marcH (subscriber, #57642) [Link]

> C++ is "good enough" for many purposes, and it's certainly possible to write decent C++. The problem is that you have to restrict yourself to a subset of the language and have a very disciplined programming team with strict style guidelines.

and the result of such discipline is called... Java.

OK, maybe too much discipline :-)

My advice on implementing stuff in C:

Posted Oct 16, 2010 13:15 UTC (Sat) by HelloWorld (guest, #56129) [Link] (3 responses)

If your code is unmaintainable, it's usually not because of language features, but because you have misused a language feature or you have chosen the wrong one for the problem.

My advice on implementing stuff in C:

Posted Oct 16, 2010 14:31 UTC (Sat) by Baylink (guest, #755) [Link]

And I believe that the assertion being made here by the anti-C++ faction is that *the fundamental design of the language and its library/template environment is such* that this is much, much harder than seems warranted, given the competition.

My advice on implementing stuff in C:

Posted Oct 16, 2010 19:50 UTC (Sat) by dskoll (subscriber, #1630) [Link] (1 responses)

> If your code is unmaintainable, it's usually not because of language features, but because you have misused a language feature or you have chosen the wrong one for the problem.

What I assert is that there are many dangerous features in C++ that are easy to misuse. This is what I mean when I write that C++ is a horrible language; there are much better-designed languages that take a lot more effort to misuse. :) (C, Tcl, Lisp spring to mind immediately...)

My advice on implementing stuff in C:

Posted Oct 19, 2010 10:23 UTC (Tue) by nix (subscriber, #2304) [Link]

Tcl and Lisp are so flexible that they can be very easy to misuse in the wrong hands. This is mostly because of the very feature that gives them expressivity: the macro system (for Lisp) or ability to redefine every word in the language (for Tcl and for that matter Forth).

Subsets of C++

Posted Oct 17, 2010 4:53 UTC (Sun) by CChittleborough (subscriber, #60775) [Link]

dskoll is right: if you select an appropriate subset of C++, you can use it to write perfectly good code, even very large systems. Several teams have done this.

The problem is that the other teams using C++ are very unlikely to be using the same subset. Worse still, the team that wrote a library that your team would like to use probably selected a different subset than your team ...

My advice on implementing stuff in C:

Posted Oct 16, 2010 3:24 UTC (Sat) by chad.netzer (subscriber, #4257) [Link]

Note that the context is convincing diehard "C hackers", not necessarily C++ programmers, to migrate. Although, for the latter case, certainly Java is an example of a language directly intended to replace many of C++'s use cases, and both it and C# have been quite successful overall. C++ obviously has a long road ahead of it, and some exciting changes are on the horizon.

That said, of the languages meant to be "a better C++ for C hackers", several have been mentioned, and I don't claim any of them will ever be "superior" in the sense of mindshare, marketshare, etc. Just that convincing those C hackers who have repeatedly objected to C++ (pardon the pun) is perhaps a fruitless battle. Personally, I'd kind of like "ooc" to become popular…

As for the "lessons learned", its no surprise that many of the newer languages deliberately make an effort to ease the burden of implemention, speed of compilation, etc. Waiting for non-buggy implementations of C++'s newer features over the years left quite an impression on people…

Ignorance at work

Posted Oct 16, 2010 11:07 UTC (Sat) by marcH (subscriber, #57642) [Link] (3 responses)

> Oh, another victim of the C hacker syndrome.

I read as far as the fifth line:

"[Linus'] opposition to any programming paradigms and concepts related to those paradigms which are not possible or very awkward to use in C. These include things like object-oriented design, abstraction, etc."

... which immediately helped me stop wasting my time.

Ignorance at work

Posted Oct 19, 2010 10:32 UTC (Tue) by nix (subscriber, #2304) [Link] (2 responses)

There's no object-oriented design or abstraction evident in the kernel until you look really deep into it, like the directory structure or the header files.

(oops)

Ignorance at work

Posted Oct 25, 2010 1:00 UTC (Mon) by vonbrand (subscriber, #4458) [Link] (1 responses)

Au contraire, it is very evident each time you take a peek at device drivers, filesystems, ...

It is in operating systems (and then simulation) where OOP was first used...

Ignorance at work

Posted Oct 25, 2010 6:54 UTC (Mon) by nix (subscriber, #2304) [Link]

Exactly my point. :)

My advice on implementing stuff in C:

Posted Oct 16, 2010 14:28 UTC (Sat) by Baylink (guest, #755) [Link]

Yes, I've read the first third of that page, and the C++ partisan is clearly purposefully failing to interpret Linus' words in a reasonable context, so as to have something to attack.

Nope, sorry.

My advice on implementing stuff in C:

Posted Oct 15, 2010 18:22 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (5 responses)

> then perhaps you shouldn't be programming at all

That suggestion seems overly condescending.

> (at least not low level programming, which is what C++ is about).

I assert that "low level programming" is *not* what C++ is about. Modern C++ style recommends that you use smart pointers, rather than C pointers, for example. Nor should you be using C strings, C arrays, C structures, C stdlib functions, C-like error handling, C macros, etc. Basically, modern C++ encourages using full high-level abstractions for data structures and algorithms, and is thus, fundamentally, high-level. And when all these new features and abstractions are used properly, it can be a beautiful, elegant thing IMO (that takes a lot of time and memory to compile). But it's not low-level.

The fact that many people still want to use C++ as only "a better C" (i.e. no exceptions, multiple inheritance, namespaces, virtual functions, the stdlib, RTTI, or even templates and RAII), and thus *not* use the new features added in the last 15 years, is a direct consequence of many of those features being "insanely complex".

But the C++ FAQ, and C++ FQA make both ends of this argument in a more elegant fashion than I can:

http://www.parashift.com/c++-faq-lite/index.html
http://yosefk.com/c++fqa/

Note how rarely the FAQ mentions pointers, btw.

My advice on implementing stuff in C:

Posted Oct 15, 2010 19:40 UTC (Fri) by Ed_L. (guest, #24287) [Link] (1 responses)

"But its not low level."
To a certain extent its a circular argument. As others have observed, if you want to do system level (low level) programming on *nix, then you will ultimately end up calling libc, which libraries like glibmm admittedly do a wonderful job of wrapping. For the most part. But for that small part they don't, I've yet to find a substitute for just calling libc (or a syscall) directly. And for me that's one of the beautiful things about C++: its not dogmatic, and allows one to write grotty Fortran when nothing else will do.

:-)

My advice on implementing stuff in C:

Posted Oct 16, 2010 14:32 UTC (Sat) by Baylink (guest, #755) [Link]

> And for me that's one of the beautiful things about C++: it's not dogmatic, and allows one to write grotty Fortran when nothing else will do.

How come that's not one of the Quotes of the Week?

My advice on implementing stuff in C:

Posted Oct 15, 2010 20:29 UTC (Fri) by HelloWorld (guest, #56129) [Link] (2 responses)

I assert that "low level programming" is *not* what C++ is about. Modern C++ style recommends that you use smart pointers, rather than C pointers, for example.

How does that make the language any less "low level"? A smart pointer just automates stuff you'd normally do by hand (i.e. free resources or decrement a reference counter), and it's just as efficient.

> Nor should you be using C strings, C arrays, C structures, C stdlib functions, C-like error handling, C macros, etc. Basically, modern C++ encourages using full high-level abstractions for data structures and algorithms, and is thus, fundamentally, high-level.

The use of C arrays isn't discouraged because they're "low-level", but because there are better alternatives. std::tr1::array is just as low-level-ish as a C array, it doesn't do bounds checking or anything fancy, and it's just as efficient. The difference is that it offers the interface of an STL container, allowing you to use STL algorithms with it.

The same basically applies to C macros. The MAX(x,y) macro kind of works, but std::max(x,y) works better. It'll complain if x and y are not of the same type, and it won't evaluate its arguments more than once. std::max isn't somehow higher-level than MAX, it just sucks less.

Some things in C++ actually raise the level of abstraction, for example with std::string you don't have to worry about memory allocation any longer, since the class will do it for you when needed. If you can't afford that, nobody is going to blame you for not using it. C++ was deliberately designed not to force some style of programming on the user, be it a high or a low level one (unlike C, which forces you to program on a low level of abstraction all the time).

My advice on implementing stuff in C:

Posted Oct 15, 2010 22:49 UTC (Fri) by chad.netzer (subscriber, #4257) [Link] (1 responses)

> unlike C, which forces you to program on a low level of abstraction all the time

And so why did you claim above (while admonishing others) that: "low level programming [...] is what C++ is about"? My claim is that it is about much more than that. Agree?

My advice on implementing stuff in C:

Posted Oct 15, 2010 22:55 UTC (Fri) by HelloWorld (guest, #56129) [Link]

Yes, perhaps I should have made it more clear that C++ is also about low level programming.

My advice on implementing stuff in C:

Posted Oct 15, 2010 22:23 UTC (Fri) by wahern (guest, #37304) [Link] (4 responses)

the "implicit int" rule from C89 assigns a meaning to some constructs that could otherwise be made illegal (and will be made illegal in the next C++ standard). Also the rule that a function declaration must come before the first call to the function comes from C. It's actually funny to see how almost every design mistake in C++ can be traced back to some kind of fuckup in C.

Both of those were features at the time (and the latter at least arguably still). They made writing a compiler and linker significantly easier. Given that ease of implementation was evidently at the very, very, very bottom of C++'s list of priorities, you should blame C++, not C, if those were carried forward.

Compatibility is no excuse because C++ is not compatible with C at the source level. C++ people always seem to demur on this issue, arguing that they're only incompatible at the fringes. As a primarily C developer who occasionally has to muck around w/ C++, getting real C code to compile in "extern C" mode is a nightmare. In my experience, I prefer to view C++ and C as not compatible at all; this makes for fewer headaches. In practice compatibility really stems from shared ABIs, and languages like Go and D make little pretense about this reality.

I don't have any real gripes with C++. I choose not to use it for very idiosyncratic reasons; namely that it dropped implicit conversion of void pointers. I also eschew Java largely because it has no unsigned integers, and also because Java is incredibly unportable outside of Windows and Linux.

My advice on implementing stuff in C:

Posted Oct 16, 2010 21:40 UTC (Sat) by jzbiciak (guest, #5246) [Link] (3 responses)

> Compatibility is no excuse because C++ is not compatible with C at the source level. C++ people always seem to demur on this issue, arguing that they're only incompatible at the fringes. As a primarily C developer who occasionally has to muck around w/ C++, getting real C code to compile in "extern C" mode is a nightmare. In my experience, I prefer to view C++ and C as not compatible at all; this makes for fewer headaches.

Hmmm... I haven't had too much trouble moving C code to C++. If your point is that the experience isn't edit-free, I'll give you that though. C++ is generally much pickier. My main issues have been that the C++ compiler is much more righteously indignant about const-abuse [1], and it wants me to explicitly cast pointers to void * (which you also mentioned).

I never thought I'd get into C++ much, but I have totally gotten hooked on templates and the stricter type checking. Modern compilers do a fantastic amount of work at compile time, and I love bringing that force to bear on programming problems. I also actually like that I have to propagate const around more proactively: it exposes thinkos in my design earlier. And I especially like the new reinterpret_cast vs. static_cast vs. dynamic_cast vs. const_cast. It's like having a torque wrench, flat head screwdriver, Philips head screwdriver and a hammer, rather than just having a 20lb sledge.

All that said, I've written way more C than I have C++ and still find C my default go-to language when writing in a compiled language. And lately, I've been writing a lot more Perl. You won't catch me CamelCasing in C or C++, although I will name classes Like::This in Perl. When in Rome...

What I don't understand is all the language hate between C and C++. Save your ire for Python. ;-)

(Just kidding on the Python part!)


[1] Yes, I know you can get the same with C code if you use a good compiler and crank up the compiler warnings. And believe me, I do crank them up.

My advice on implementing stuff in C:

Posted Oct 17, 2010 19:14 UTC (Sun) by wahern (guest, #37304) [Link] (2 responses)

C99 has diverged considerably from C89, the forking point of "extern C". The biggest headaches for me are named initializers and compound literals, both of which are used in headers and macros of newish C code.

While the different kinds of casts are nice in C++, casting is frowned upon in both C and C++. Bjarne says that he purposefully made casting in C++ ugly to dissuade people from casting, and that part of the design criteria of C++ was to reduce the need to cast. And yet in actual code I see casting as far more prevalent in C++ than in C, maybe because people see the feature and feel it was put there to use freely; I dunno. Much complexity was added to replace the loss of implicit void pointer conversions, and I'm not sure there was any net gain. In any event, it's a PITA at the boundary of C and C++.
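For readers who haven't met them, here is a rough illustration of the two C99 features being referred to ("named initializers" are what the standard calls designated initializers); the struct and functions are made up:

/* C99 constructs that a C++ compiler of this era rejects, even inside an
 * extern "C" block.  The types and functions are purely illustrative. */
struct point { int x, y; };

/* designated ("named") initializer: members are set by name, in any order */
static const struct point origin = { .y = 0, .x = 0 };

static void plot(struct point p) { (void)p; /* stub */ }

void example(void)
{
    /* compound literal: an unnamed struct point constructed in place */
    plot((struct point){ .x = 3, .y = 4 });
}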

My advice on implementing stuff in C:

Posted Oct 17, 2010 19:50 UTC (Sun) by jzbiciak (guest, #5246) [Link]

I hear you on the missing named initializers. I had forgotten about that. I guess, other than using restrict generously, I haven't started using too many C99-specific features.

I didn't know about the compound literals.... nifty!

I don't use a lot of casting, personally, but where I do, I like the ability to specify what exactly I'm trying to accomplish. Where I use casting most is in embedded programming, where I need to cast between a pointer type and an unsigned int. That comes up a lot when talking to peripherals. reinterpret_cast makes it so much clearer what I'm trying to do, IMHO.

My advice on implementing stuff in C:

Posted Oct 17, 2010 23:47 UTC (Sun) by foom (subscriber, #14868) [Link]

> And yet in actual code I see casting as far more prevalent in C++ than in C

I suspect this is simply because you *can* see the casting in C++: a "static_cast<Whatever *>(x)" sticks out like a sore thumb, vs the almost-invisible C-style parenthesized type expression.

My advice on implementing stuff in C:

Posted Oct 19, 2010 9:54 UTC (Tue) by nix (subscriber, #2304) [Link] (2 responses)

What? Implicit int doesn't have anything to do with the complexity of name lookup. I was thinking of things like Koenig lookup, the effect of templates on name lookup in general, and what happened to it when 'export' came in. All the rules in isolation are sensible, but in combination it's fearsome.

My advice on implementing stuff in C:

Posted Oct 19, 2010 14:46 UTC (Tue) by HelloWorld (guest, #56129) [Link] (1 responses)

It does have to do with the name lookup rules in very non-obvious ways. I quote from "Design and Evolution of C++", page 141/142:
typedef int P();
typedef int Q();
class X {
  static P(Q); // define Q to be a P. 
               // equivalent to ''static int Q()''
               // the parentheses around Q are redundant

               // Q is no longer a type in this scope

  static Q(P); // define Q to be a function taking an argument of type P
               // and returning an int.
               // equivalent to ''static int Q(int());
};

Declaring two functions with the same name in the same scope is fine as long as their argument types differ sufficiently. Reverse the order of member declarations, and we define two functions called P instead. Remove the typedef for either P or Q from the context, and we get yet other meanings.

This example ought to convince anybody that standards work is dangerous to your mental health. The rules we finally adopted makes[sic] this example undefined.

Note that this example -- like many others -- is based on the unfortunate ''implicit int'' rule inherited from C.

My advice on implementing stuff in C:

Posted Oct 19, 2010 15:32 UTC (Tue) by nix (subscriber, #2304) [Link]

Ah yes, I forgot that ingenious example. However, my point stands: this is not a name lookup problem, it is a particularly ingenious use of the parsing rules around implicit int to produce radically different parse trees from nearly identical input (and by no means the only example: see Alexandrescu's wonderful code in _Modern C++ Design_ to execute arbitrary code at compile time via abuse of sizeof().)

But, yes, this sort of example is probably an indictment of C++. Clarity in coding this is not!

My advice on implementing stuff in C:

Posted Oct 21, 2010 19:02 UTC (Thu) by ccurtis (guest, #49713) [Link]

> Virtual Base Classes. A real WTF if ever there was one.

I've used a virtual base class to serialize access to a piece of hardware with software fallback. It allows for easy rate-limiting when the number of queries might exceed the hardware precision and reading the device is (potentially) slow ...

My advice on implementing stuff in C:

Posted Oct 15, 2010 16:03 UTC (Fri) by Ed_L. (guest, #24287) [Link] (35 responses)

It's not just you, but for the sake of argument it may as well be :) If you feel you personally are a better, more productive programmer using C rather than C++, by all means use C. Aside from corner cases, it's a subset :) :)

Me, I've been productive with C++ for over twenty years, and really like it. I'll grant there are more modern languages, but for HPC purposes I haven't found any more powerful, until I recently stumbled across D. (You know, the language C++ always wanted to be but was too rushed.) And that trip is too recent for me to draw a firm conclusion.

Some will argue that Java is just as good at HPC, and for them they are probably right. (Insert obligatory Fortran dereference here.) I also dabble in system programming, and just personally prefer one language that does it all. Others prefer to mix and match. And surely there must be places for Perl and its ilk -- provided they are kept brief and to the point.

"Although programmers dream of a small, simple languages, it seems when they wake up what they really want is more modelling power." -- Andrei Alexandrescu

My advice on implementing stuff in C:

Posted Oct 15, 2010 16:21 UTC (Fri) by mjthayer (guest, #39183) [Link] (34 responses)

> It's not just you, but for the sake of argument it may as well be :) If you feel you personally are a better, more productive programmer using C rather than C++, by all means use C.

I do now prefer to use C for that reason. But I still find C++ tantalisingly tempting, as it can do so many things that are just painful in C. I do know from experience though that it will come back to haunt me if I give in to the temptation. And I am experimenting to find ways to do those things more easily in C. The two that I miss most are automatic destruction of local objects (which is actually just a poor man's garbage collection) and STL containers.

Oh yes, add binary compatibility with other things to my list of complaints above; dvdeug's comment below is one example of the problem. That is something that has hurt me more often than I expected.

My advice on implementing stuff in C:

Posted Oct 15, 2010 20:25 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

I gave up on C for recreational programming for one very simple reason: It is impossible to write vector arithmetic in a civilized syntax in C.

My advice on implementing stuff in C:

Posted Oct 16, 2010 10:18 UTC (Sat) by paulj (subscriber, #341) [Link] (32 responses)

Have you looked at Vala? Modern OOP language that builds on GLib and spits out C. Seems reasonably sane, certainly compared to C++...

My advice on implementing stuff in C:

Posted Oct 18, 2010 8:41 UTC (Mon) by marcH (subscriber, #57642) [Link] (4 responses)

> Have you looked at Vala? Modern OOP language that builds on GLib and spits out C.

Compiling to a lower-level yet still "human-writable" language is an interesting approach that can be successful to some extent. However, it always has this major drawback: debugging & profiling become much more difficult. It also gives a really hard time to fancy IDEs. All these need tight integration, and the additional layer of indirection breaks that. So handing maintenance of average/poor quality code over to other developers becomes nearly impossible.

My advice on implementing stuff in C:

Posted Oct 18, 2010 9:09 UTC (Mon) by mjthayer (guest, #39183) [Link] (3 responses)

> Compiling to a lower-level yet still "human-writable" language is an interesting approach that can be successful to some extent. However, it always has this major drawback: debugging & profiling become much more difficult.

Without having looked at Vala, I don't see why this has to be the case. C itself is implemented as a pipeline, this would just add one stage onto the end. The main problem to solve that I can see is how to pass information down to the lower levels about what C code corresponds to which Vala code.

My advice on implementing stuff in C:

Posted Oct 18, 2010 9:28 UTC (Mon) by cladisch (✭ supporter ✭, #50193) [Link] (2 responses)

> The main problem to solve that I can see is how to pass information down to the lower levels about what C code corresponds to which Vala code.

C has the #line directive for that (GCC doc); AFAIK Vala generates it when in debug mode.
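For illustration, the generated C looks roughly like this (file names and function invented); the C compiler and gdb then attribute errors and breakpoints to hello.vala rather than to the generated file:

/* hello.c, as emitted by a hypothetical Vala-like compiler.  The #line
 * directives make the C compiler and the debugger attribute the code that
 * follows to hello.vala, so diagnostics read "hello.vala:13: ..." instead
 * of pointing into the generated file. */
#include <stdio.h>

#line 12 "hello.vala"
void greeter_greet (const char *name)
{
#line 13 "hello.vala"
    printf ("Hello, %s!\n", name);
#line 14 "hello.vala"
}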

My advice on implementing stuff in C:

Posted Oct 18, 2010 10:58 UTC (Mon) by mjthayer (guest, #39183) [Link]

> C has the #line directive for that (GCC doc); AFAIK Vala generates it when in debug mode.
Sounds reasonable as long as they skip the pre-processor stage, otherwise things might get rather confused. I assume that their variables map one-to-one to C variables to simplify debugging.

My advice on implementing stuff in C:

Posted Oct 19, 2010 11:23 UTC (Tue) by nix (subscriber, #2304) [Link]

I don't entirely understand why they don't generate it always. GCC's own preprocessor does. If you don't, even compiler error messages will be wrong, and you want them to be right even if you're not in debug mode.

My advice on implementing stuff in C:

Posted Oct 18, 2010 9:14 UTC (Mon) by mjthayer (guest, #39183) [Link] (26 responses)

> Have you looked at Vala? Modern OOP language that builds on GLib and spits out C.
I haven't looked at GLib that closely though. Is it used anywhere other than user space/desktop programming? If you are careful about what language features you use - and to disable exceptions! - C++ can be used very close to the bone (or iron or whatever).

My advice on implementing stuff in C:

Posted Oct 18, 2010 18:01 UTC (Mon) by paulj (subscriber, #341) [Link]

You can make Vala not use GLib if you wish, on a class-by-class basis by marking them "Compact". You lose some things, like automatically refcounted classes and inheritance.

My advice on implementing stuff in C:

Posted Oct 19, 2010 11:24 UTC (Tue) by nix (subscriber, #2304) [Link] (24 responses)

Yes, glib is used all over the place these days. syslog-ng and dbus aren't desktop programs by any means.

My advice on implementing stuff in C:

Posted Oct 19, 2010 15:19 UTC (Tue) by mjthayer (guest, #39183) [Link]

> Yes, glib is used all over the place these days. syslog-ng and dbus aren't desktop programs by any means.

They are still definitely user space though. If you are careful, C++ can be used for driver or even kernel code (e.g. the TU-Dresden implementation of the L4 micro-kernel with its unfortunate name was implemented in C++). Perhaps GLib would be too with a bit of work on it, I haven't used it enough to know.

My advice on implementing stuff in C:

Posted Oct 21, 2010 2:35 UTC (Thu) by wahern (guest, #37304) [Link] (22 responses)

Perfect. Core system daemons using a library that aborts on malloc() failure.

Geez.

This is why I never use Linux on multi-user systems.

My advice on implementing stuff in C:

Posted Oct 21, 2010 3:00 UTC (Thu) by foom (subscriber, #14868) [Link] (21 responses)

With the default settings on many distros, you're much more likely to just get a random process on your box forcibly killed when you run out of memory than for malloc to fail. So, there's really not much point in being able to gracefully handle malloc failure...

Just so long as pid 1 can deal with malloc failure, that's pretty much good enough: it can just respawn any other daemon that gets forcibly killed or aborts due to malloc failure.

My advice on implementing stuff in C:

Posted Oct 21, 2010 19:55 UTC (Thu) by nix (subscriber, #2304) [Link] (20 responses)

Quite so. Note that things like bash also abort on malloc() failure.

My advice on implementing stuff in C:

Posted Oct 21, 2010 20:15 UTC (Thu) by mjthayer (guest, #39183) [Link] (19 responses)

> Quite so. Note that things like bash also abort on malloc() failure.

Isn't that the FSF's standard recommendation (/requirement)? I find the thought amusing that if you subdivide your application well into different processes and make sure that you set atexit() functions for those resources that won't be freed by the system, that isn't so far away from throwing an exception in C++.
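A small sketch of that atexit() idea, with a made-up temp file standing in for a resource the kernel won't reclaim. The handler runs on any exit(), including the exit() of an out-of-memory bail-out, though not on abort() or _exit():

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static const char *scratch_file = "/tmp/myapp.scratch";   /* hypothetical */

static void cleanup(void)
{
    unlink(scratch_file);              /* the kernel won't do this for us */
}

static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (!p) {
        fprintf(stderr, "out of memory, giving up\n");
        exit(EXIT_FAILURE);            /* atexit() handlers still run */
    }
    return p;
}

int main(void)
{
    atexit(cleanup);
    char *buf = xmalloc(1 << 20);
    /* ... do the actual work ... */
    free(buf);
    return 0;                          /* cleanup() runs here as well */
}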

My advice on implementing stuff in C:

Posted Oct 21, 2010 22:01 UTC (Thu) by nix (subscriber, #2304) [Link] (18 responses)

Yes, it is. It's really the only sensible thing to do. If you want to do something more complex than die on malloc() failure, do it in a parent monitor process: anything else is too likely to be unable to do whatever the recovery process is, because, well, you're still out of memory. (Bonus: overcommit and the OOM killer work fine with this model, as long as your monitor process is much smaller than the OOMing one, which is very likely. It's even more certain to work if the monitor oom_adj/oom_score's itself away from being OOM-killed.)
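A rough sketch of such a monitor process (the worker binary name is made up; /proc/self/oom_adj is the knob available on kernels of this vintage, later replaced by oom_score_adj):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void set_oom_adj(const char *value)
{
    FILE *f = fopen("/proc/self/oom_adj", "w");
    if (f) {
        fputs(value, f);
        fclose(f);
    }
}

int main(void)
{
    set_oom_adj("-17");                /* monitor: never OOM-kill me */

    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            set_oom_adj("0");          /* the worker is fair game again */
            execl("./worker", "worker", (char *)NULL);
            _exit(127);                /* exec failed */
        }
        if (pid < 0) {                 /* even fork() can fail under OOM */
            sleep(1);
            continue;
        }
        int status;
        waitpid(pid, &status, 0);
        fprintf(stderr, "worker exited (status %d), respawning\n", status);
        sleep(1);                      /* avoid a tight respawn loop */
    }
}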

My advice on implementing stuff in C:

Posted Oct 22, 2010 21:42 UTC (Fri) by wahern (guest, #37304) [Link] (17 responses)

That's horrible design for a system, especially a server system. All of my daemons handle malloc failure. If I'm streaming a video feed to 2,000 clients and get a failure on the 2,001st (descriptor failure, malloc failure, any other failure), why would I destroy all 2,000 contexts when I can just fail one!? The practice is called graceful failure for a reason.

The first thing I do on any of my server systems is to disable overcommit. Even w/ it disabled I believe the kernel will still overcommit in some places (fork, perhaps), but at least I don't need to worry about some broken application causing some other critical service to be terminated.

If an engineer can't handle malloc failure, how can he be expected to handle any of the myriad other possible failure modes? Handling malloc failure is hardly any more difficult, if at all, than handling other types of failures (disk full, descriptor limit, shared memory segment limit, thread limit, invalid input, etc, etc, etc). With proper design all those errors should share the same failure path; if you can't handle one you probably aren't handling any of them properly.

Plus, it's a security nightmare. If the 2,001st client can cause adverse results to the other 2,000 clients... that's a fundamentally broken design. Yes, there are other issues (bandwidth, etc), but those are problems to be addressed, not justifications for shirking responsibility.

And of course, on embedded systems memory (RAM and swap) isn't the virtually limitless resource it is on desktops or servers.

Bailing on malloc is categorically wrong for any daemon, and most user-interactive applications. Bailing on malloc failure really only makes sense for batch jobs, where a process is doing one thing, and so exiting the process is equivalent to signaling inability to complete that particular job. Once you start juggling multiple jobs internally, bailing on malloc failure is a bug, plain and simple.
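By way of illustration, a minimal sketch of failing only the client whose setup hit the allocation failure; struct client, the buffer size and the event-loop step are all hypothetical:

#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

struct client {
    int   fd;
    char *buffer;
};

struct client *client_new(int fd, size_t bufsize)
{
    struct client *c = malloc(sizeof *c);
    if (!c)
        goto fail;
    c->fd = fd;
    c->buffer = malloc(bufsize);
    if (!c->buffer)
        goto fail;
    return c;

fail:                                  /* graceful failure: only this client pays */
    free(c);
    close(fd);
    return NULL;
}

void on_new_connection(int listen_fd)
{
    int fd = accept(listen_fd, NULL, NULL);
    if (fd < 0)
        return;

    struct client *c = client_new(fd, 64 * 1024);
    if (!c)
        return;                        /* the existing streams are untouched */

    /* ... register c with the event loop ... */
}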

My advice on implementing stuff in C:

Posted Oct 22, 2010 22:18 UTC (Fri) by nix (subscriber, #2304) [Link] (14 responses)

Well, you still do need to worry about that. Not because of fork(): because of the stack. Unless your programs carefully start with a huge deep recursion to blow the stack out, you're risking an OOM kill every single time you make a function call. So you do need to deal with it anyway.

I don't know of any programs (other than certain network servers doing simple highly decoupled jobs, and sqlite, whose testing framework is astonishingly good) where malloc() failure is usefully handled. Even when they try, a memory allocation easily slips in there, and how often are those code paths tested? Oops, you die. From a brief inspection glibc has a number of places where it kills you on malloc() failure too (mostly due to trying to handle errors and failing), and a number of places where the error handling is there but is obviously leaky or leads to the internal state of things getting messed up. And if glibc can't get it right, who can? In practice this is not a problem because glibc also calls functions so can OOM-kill you just by doing that.

(And having one process doing only one job? That's called good design for the vast majority of Unix programs. Massive internal multithreading is a model you move to because you are *forced* to, and one consequence of it is indeed much worse consequences on malloc() failure.)

Even Apache calls malloc() here and there instead of using memory pools. Most of these handle errors by aborting (such as some MPM worker calls) or don't even check (pretty much all of the calls in the NT service-specific worker, but maybe NT malloc() never returns NULL, I dunno).

In an ideal world I would agree with you... but in practice handling all memory errors as gracefully as you suggest would result in our programs disappearing under a mass of almost-untestable massively bitrotten error-handling code. Better to isolate things into independently-failable units. (Not that anyone does that anyway, and with memory as cheap as it is now, I can't see anyone's handling of OOM improving in any non-safety-critical system for some time. Hell, I was at the local hospital a while back and their *MRI scanner* sprayed out-of-memory errors on the screen and needed restarting. Now *that* scared me...)

My advice on implementing stuff in C:

Posted Oct 23, 2010 1:14 UTC (Sat) by wahern (guest, #37304) [Link] (13 responses)

glib or glibc? Those are completely different libraries. If glibc is aborting on allocation error then it's non-conforming and it should be reported as a bug. There's a reason C and POSIX define ENOMEM.

As for the stack, the solution there is easy: don't recurse. Any recursive algorithm can be re-written as an iterative algorithm. Of course, if you use a language that optimizes tail-calls then you're already set. C doesn't, and therefore writing recursive algorithms is a bad idea, and it's why it's quite uncommon in C code.
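For illustration, a sketch of what "don't recurse" typically turns into: the call-stack recursion becomes an explicit, heap-allocated stack, so running out of memory shows up as a checkable realloc() failure rather than a stack overflow (the node type and visitor callback are hypothetical):

#include <stdlib.h>

struct node { int value; struct node *left, *right; };

/* Pre-order traversal without recursion; returns -1 if the explicit stack
 * cannot be grown, leaving the call stack untouched. */
int visit_all(struct node *root, void (*visit)(struct node *))
{
    size_t cap = 64, top = 0;
    struct node **stack = malloc(cap * sizeof *stack);
    if (!stack)
        return -1;

    if (root)
        stack[top++] = root;

    while (top > 0) {
        struct node *n = stack[--top];
        visit(n);

        if (top + 2 > cap) {                     /* grow before pushing children */
            struct node **tmp = realloc(stack, 2 * cap * sizeof *stack);
            if (!tmp) {
                free(stack);
                return -1;                       /* fail gracefully, don't abort */
            }
            stack = tmp;
            cap *= 2;
        }
        if (n->right) stack[top++] = n->right;
        if (n->left)  stack[top++] = n->left;
    }
    free(stack);
    return 0;
}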

As for testing error paths: if somebody isn't testing error paths then they're not testing error paths. What difference does it make whether they're not testing malloc failure or they're not testing invalid input? It's poor design; it creates buggy code. And if you use good design habits, like RAII (not just a C++ pattern), then the places for malloc failure to occur are well isolated. It's not a very good argument to point out that most engineers write crappy code. We all know this; we all do it ourselves; but it's ridiculous to make excuses for it. If you can't handle the responsibility, then don't write applications in C or for its typical domain. If I'm writing non-critical or throw-away code, I'll use Perl or something else. Why invest the effort in using a language with features--explicit memory management--that I'm not going to use?

Using a per-process context design is in many circumstances a solid choice (not for me because I write HP embedded network server software, though I do prefer processes instead of threads for concurrency, so I might have 2 processes per cpu each handling hundreds of connections). But here's another problem w/ default Linux--because of overcommit, it's not always--perhaps not even often--that the offending process gets killed; it's the next guy paging in a small amount of memory that gets killed. It's retarded. It's a security problem. Can you imagine your SSH session getting OOMd because someone was fuzzing your website? It happens.

Why make excuses for poor design?

My advice on implementing stuff in C:

Posted Oct 23, 2010 3:18 UTC (Sat) by foom (subscriber, #14868) [Link]

> it's not always--perhaps not even often--that the offending process gets killed; it's the next guy paging in a small amount of memory that gets killed.

Actually, the OOM-killer tries *very* hard to not simply kill the next guy paging in a small amount of memory, but to determine what the real problem process is and kill that instead. It doesn't always find the correct culprit, but it often does, and at least it tends not to kill your ssh session.

My advice on implementing stuff in C:

Posted Oct 23, 2010 18:29 UTC (Sat) by paulj (subscriber, #341) [Link] (6 responses)

> Why make excuses for poor design?

Nix isn't making excuses, he's pointing out reality. Which, sadly, is always far from perfect. A programme which is designed to cope with failure *despite* the suckiness of reality should do better than one that depends on perfection underneath it...

My advice on implementing stuff in C:

Posted Oct 23, 2010 19:39 UTC (Sat) by wahern (guest, #37304) [Link]

Robustness, like security, should be applied in-depth. Of course I use monitor processes and dead man switches to restart processes. But I don't rely on one to the exclusion of another.

My advice on implementing stuff in C:

Posted Oct 24, 2010 15:17 UTC (Sun) by nix (subscriber, #2304) [Link] (4 responses)

Indeed. It is simply reality that nobody ever tests malloc() failure paths -- at least, they do not and cannot test every combination of malloc-fails-and-then-it-doesn't, because there is an exponential explosion of them. People do not armour most programs, even important ones, to survive malloc() failure, because it would make the code unreadable and because available memory continues to shoot upwards so most people prefer to assume that reasonably sized allocations will not fail unless something is seriously wrong with the machine. And, guess what? They're right nearly all the time.

The suggestion to avoid stack-OOM by converting recursive algorithms to iterative ones is just another example of this, because while deep recursion is more likely to stack-OOM than the function calls involved in an iterative algorithm, the latter will still happen now and then. The only way to avoid *that* is to do a deep recursion first, and then ensure that you never call functions further down in the call stack than you have already allocated, neither in your code nor in any library you may call. I know of no tools to make this painful maintenance burden less painful. So nobody at all armours against this case, either.

I think it *is* important to trap malloc() failure so that you can *log which malloc() failed* before you die (and that means your logging functions *do* have to be malloc()-failure-proof: I normally do this by having them take their allocations out of a separate, pre-mmap()ed emergency pool). Obviously this doesn't work if you are stack-OOMed, nor if the OOM-killer zaps you. Note that this *is* an argument against memory overcommit: that overcommit makes it harder to detect which of many allocations in a program is buggy and running away allocating unlimited storage. But 'we want to recover from malloc() failure' is not a good reason to not use overcommmitment, because very few programs even try, and of those that try, most are surely lethally buggy in this area in any case: and fixing this is completely impractical.

Regarding my examples above: glib always aborts on malloc() failure, and so do all programs that use it. glibc does not, but its attempts to handle malloc() failure are buggy and leaky at best, and of course it (like everything else) remains vulnerable to stack- or CoW-OOM.
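A sketch of that "log which malloc() failed" idea (names and sizes made up): the message buffer is mmap()ed and touched at startup, and the report goes out via write(2), so the failure path itself allocates nothing:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define EMERGENCY_BUF_SIZE 4096

static char *emergency_buf;

void oom_log_init(void)
{
    emergency_buf = mmap(NULL, EMERGENCY_BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (emergency_buf != MAP_FAILED)
        memset(emergency_buf, 0, EMERGENCY_BUF_SIZE);    /* fault the pages in now */
    else
        emergency_buf = NULL;
}

void oom_log(const char *file, int line, size_t size)
{
    if (!emergency_buf)
        return;
    int n = snprintf(emergency_buf, EMERGENCY_BUF_SIZE,
                     "OOM: %s:%d: allocation of %zu bytes failed\n",
                     file, line, size);
    if (n <= 0)
        return;
    if (n >= EMERGENCY_BUF_SIZE)
        n = EMERGENCY_BUF_SIZE - 1;
    write(STDERR_FILENO, emergency_buf, (size_t)n);
}

/* Typical use: a malloc() wrapper calls oom_log(__FILE__, __LINE__, size)
 * before deciding whether to unwind or exit(). */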

My advice on implementing stuff in C:

Posted Oct 25, 2010 10:05 UTC (Mon) by hppnq (guest, #14462) [Link] (3 responses)

> The only way to avoid *that* [stack-OOM] is to do a deep recursion first, and then ensure that you never call functions further down in the call stack than you have already allocated, neither in your code nor in any library you may call.

You would have to know in advance how deep you can recurse, or you should be able to handle SIGSEGV. The maximum stack size can be tuned through rlimits, and that should solve wahern's problem of some other process draining out all available memory. This problem is not the result of bad programming, but of bad systems management.

(That said, rlimits are horribly broken. Just add more memory. ;-)

My advice on implementing stuff in C:

Posted Oct 25, 2010 22:28 UTC (Mon) by paulj (subscriber, #341) [Link] (2 responses)

FWIW, it's not defined what happens if you overflow the stack. You can't rely on getting a SEGV (isn't that a very recent addition to Linux, thanks to that Xorg security hole)?

My advice on implementing stuff in C:

Posted Oct 25, 2010 22:36 UTC (Mon) by nix (subscriber, #2304) [Link] (1 responses)

Even if you do get SIGSEGV from a stack-OOM, well, you'd better hope the system supports sigaltstack() as well, or you'll not be able to call the signal handler... oh, and, btw, it is (even now) easier to make a list of the systems on which sigaltstack() works properly than the systems on which it does not :(
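For reference, a sketch of the sigaltstack() arrangement on a system where it does work; the handler confines itself to async-signal-safe calls, reports, and re-raises the signal:

#include <signal.h>
#include <string.h>
#include <unistd.h>

static char altstack[64 * 1024];       /* a fixed size; SIGSTKSZ would also do */

static void segv_handler(int sig)
{
    static const char msg[] = "fatal: SIGSEGV (possibly stack overflow)\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    signal(sig, SIG_DFL);
    raise(sig);                        /* die with the default action */
}

int main(void)
{
    stack_t ss;
    memset(&ss, 0, sizeof ss);
    ss.ss_sp = altstack;
    ss.ss_size = sizeof altstack;
    sigaltstack(&ss, NULL);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = segv_handler;
    sa.sa_flags = SA_ONSTACK;          /* without this the handler has no stack to run on */
    sigaction(SIGSEGV, &sa, NULL);

    /* ... the rest of the program ... */
    return 0;
}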

My advice on implementing stuff in C:

Posted Oct 26, 2010 7:55 UTC (Tue) by hppnq (guest, #14462) [Link]

The point is, you can't safely expand the stack by recursing deeply in order to prevent running out of stack.

My advice on implementing stuff in C:

Posted Oct 25, 2010 11:04 UTC (Mon) by mjthayer (guest, #39183) [Link] (4 responses)

> As for the stack, the solution there is easy: don't recurse.

Just out of interest, are there really no simple ways (as nix suggested) to allocate a fixed-size stack at programme start in Linux userland? I can't see any theoretical reasons why it should be a problem.

> And if you use good design habits, like RAII (not just a C++ pattern), then the places for malloc failure to occur are well isolated.

Again, I am interested in how you do RAII in C. I know the (in my opinion ugly and error-prone) goto way, and I could think of ways to do at run time what C++ does at compile time (doesn't have to be a bad thing, although more manual steps would be needed). Do you have any other insights?

My advice on implementing stuff in C:

Posted Oct 25, 2010 11:52 UTC (Mon) by hppnq (guest, #14462) [Link]

> Just out of interest, are there really no simple ways (as nix suggested) to allocate a fixed-size stack at programme start in Linux userland?

ld --stack or something similar?
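As far as I know, --stack is a PE (Windows-target) linker option, so on Linux something along these lines is needed. A rough sketch, assuming the goal is a stack whose pages are all committed up front (sizes arbitrary, compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define APP_STACK_SIZE (8 * 1024 * 1024)     /* 8 MiB, chosen arbitrarily */

static void *real_main(void *arg)
{
    (void)arg;
    /* ... the application proper, free to use up to 8 MiB of stack ... */
    return NULL;
}

int main(void)
{
    void *stack = mmap(NULL, APP_STACK_SIZE, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (stack == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }
    memset(stack, 0, APP_STACK_SIZE);        /* touch every page so it is backed now */

    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstack(&attr, stack, APP_STACK_SIZE);

    pthread_t tid;
    if (pthread_create(&tid, &attr, real_main, NULL) != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return EXIT_FAILURE;
    }
    return pthread_join(tid, NULL);
}

This only guarantees that stack growth never needs a fresh allocation from the kernel; the pages can still be swapped like any other anonymous memory.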

My advice on implementing stuff in C:

Posted Oct 25, 2010 22:41 UTC (Mon) by nix (subscriber, #2304) [Link] (2 responses)

You do RAII in C by wrapping everything up in opaque structures allocated by dedicated allocators and freed either by dedicated freers or by APR-style pool destructors. If you're using mempools, you can even get close to the automagic destructor calls of C++ (you still have to free a mempool, but if you free the pool the free cascades down all contained pools and all their destructors.)
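A bare-bones sketch of that pool idea (loosely APR-flavoured; all names here are made up): each allocation registers a destructor with its pool, and freeing the pool cascades through them:

#include <stdlib.h>

struct cleanup {
    void (*dtor)(void *);
    void *obj;
    struct cleanup *next;
};

struct pool {
    struct cleanup *cleanups;
};

struct pool *pool_new(void)
{
    return calloc(1, sizeof(struct pool));
}

/* Allocate size bytes owned by the pool, with an optional destructor. */
void *pool_alloc(struct pool *p, size_t size, void (*dtor)(void *))
{
    struct cleanup *c = malloc(sizeof *c);
    void *obj = malloc(size);
    if (!c || !obj) {
        free(c);
        free(obj);
        return NULL;
    }
    c->dtor = dtor;
    c->obj = obj;
    c->next = p->cleanups;
    p->cleanups = c;
    return obj;
}

/* The "destructor call" for the whole pool: run every registered destructor
 * (most recent first), then release the memory in one go. */
void pool_free(struct pool *p)
{
    if (!p)
        return;
    while (p->cleanups) {
        struct cleanup *c = p->cleanups;
        p->cleanups = c->next;
        if (c->dtor)
            c->dtor(c->obj);
        free(c->obj);
        free(c);
    }
    free(p);
}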

My advice on implementing stuff in C:

Posted Oct 26, 2010 8:06 UTC (Tue) by mjthayer (guest, #39183) [Link] (1 responses)

> You do RAII in C by wrapping everything up in opaque structures allocated by dedicated allocators and freed either by dedicated freers or by APR-style pool destructors.

Right, roughly what I was thinking of. Thanks for the concrete pointers!

My advice on implementing stuff in C:

Posted Oct 26, 2010 8:18 UTC (Tue) by mjthayer (guest, #39183) [Link]

> Right, roughly what I was thinking of.

Except of course that there is no overriding need to use memory pools. You can also keep track of multiple allocations (possibly also with destructors) in some structure and free them all at one go when you are done. Freeing many allocations in one go rather than freeing each as soon as it is no longer needed might also be more efficient cache-wise.

My advice on implementing stuff in C:

Posted Oct 25, 2010 1:56 UTC (Mon) by vonbrand (subscriber, #4458) [Link]

No overcommit makes OOM kills much more likely (even in cases which would work fine otherwise). You've got your logic seriously backwards...

My advice on implementing stuff in C:

Posted Oct 25, 2010 16:10 UTC (Mon) by bronson (subscriber, #4806) [Link]

> Bailing on malloc is categorically wrong for any daemon, and most user-interactive applications. Bailing on malloc failure really only makes sense for batch jobs

OK, let's say your interactive application has just received a malloc failure. What should it do? Display an error dialog? Bzzt, that takes memory. Free up some buffers? There's good chance that any memory you free will just get sucked up by a rogue process and your next malloc attempt will fail too. And the next one. And the next one. And be careful with your error-handling code paths because, if you cause more data to get paged in from disk (say, a page of string constants that are only accessed in OOM conditions), you're now in even deeper trouble.

Bailing out is about the only thing ANY process can reliably do. If you try to do anything more imaginative, you are almost guaranteed to get it wrong and make things worse.

The days of cooperative multitasking and deterministic memory behavior are long gone (or, more accurately, restricted to a tiny sliver of embedded environments that no general purpose toolchain would ever consider a primary target). And good riddance! Programming is so much nicer these days that, even though this seems heinous, I'd never want to go back.

I can virtually guarantee you've never actually tested your apps in OOM situations or you would have discovered this for yourself. Try it! Once you fix all the bugs in your untested code, I think you'll be surprised at how few options you actually have.

My advice on implementing stuff in C:

Posted Oct 15, 2010 14:45 UTC (Fri) by dvdeug (guest, #10998) [Link]

The problem is, for a standard Unix library, you have to offer an interface usable from C, Python, Perl, Fortran, Ada, Java and the rest of the bunch. Which means if you're not C, you have to fake it, and that in particular means you can't use any feature that might not work if the main or the calling code is written in C. Templates are unusable in an interface that has to be called by *any* other language, and objects won't work with any non-OO language like C.

My advice on implementing stuff in C:

Posted Oct 15, 2010 21:44 UTC (Fri) by cmccabe (guest, #60281) [Link] (20 responses)

Seriously? A programming languages flamewar? What's next-- vi vs. emacs? And why isn't there a crazy guy posting about how all software will be rewritten in Lisp within the next 5 years? If people could just accept that different programming languages are good for different things, 99% of these discussions would be over before they began.

Your comment is particularly silly because Rusty isn't even talking about how libraries are implemented. He's talking about the API that you should present to users, and how the library should be built and packaged. Almost all of his comments apply to C++ libraries just as much as C libraries.

C++ libraries pretty much *have* to present a C-style interface to the world. Firstly, if you want your library to be used in a language like Ruby, you need a C-style binding, because C++ bindings are not available. Secondly, using C++ constructs like templates and virtual classes in the API forces you to rebuild your entire application every time the library version changes. Thirdly, there are many different dialects of C++ in use. Some people like exceptions; other people never use them. (Google, for example, does not allow its C++ programmers to use exceptions.) Some people compile with -fno-rtti; other people never do. Some people use smart pointers; other people prefer to let the caller manage the memory. You're never going to write a C++ API that will please all of these people. So just create a C API and be happy.
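
A sketch of what such a facade usually looks like -- widget_* is an invented library here, the point is the shape: opaque handles, error codes instead of exceptions, and an extern "C" guard so the same header serves C and C++ callers as well as FFI generators:

    /* widget.h */
    #ifndef WIDGET_H_INCLUDED_
    #define WIDGET_H_INCLUDED_

    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct widget widget;           /* opaque: no templates, no classes */

    widget *widget_new(const char *name);   /* NULL on failure, never throws */
    int widget_frob(widget *w, int level);  /* 0 on success, negative error code */
    void widget_free(widget *w);

    #ifdef __cplusplus
    }
    #endif

    #endif /* WIDGET_H_INCLUDED_ */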

My advice on implementing stuff in C:

Posted Oct 15, 2010 23:03 UTC (Fri) by HelloWorld (guest, #56129) [Link] (16 responses)

> If people could just accept that different programming languages are good for different things, 99% of these discussions would be over before they began.

I do accept that different languages are good at different things, I just happen to think that C is bad even at what it's supposed to be good at. There is no support for generic programming (read: type-safe container classes). The type system is generally a mess, as early C didn't have a cast operator and therefore needed all kinds of crazy implicit conversions. There is no module system. The preprocessor is a sucky workaround used to emulate features that ought to be in the language proper (genericity and modules), and it makes it much harder to build good tools for the C language.

None of the above conflict with the language being a systems programming language, so that's not an excuse for C sucking as hard as it does.

My advice on implementing stuff in C:

Posted Oct 16, 2010 22:46 UTC (Sat) by marcH (subscriber, #57642) [Link] (1 responses)

The real excuse is age. To be fair to C and put it in perspective, you have to compare it with languages that are just as old. That won't solve anything, but it explains a lot.

My advice on implementing stuff in C:

Posted Oct 17, 2010 22:20 UTC (Sun) by HelloWorld (guest, #56129) [Link]

OK, so C was invented in 1972, ML was invented in 1973. ML had a revolutionary type system (including polymorphism, type inference and algebraic data types), it had a powerful module system, and it didn't have a preprocessor because it didn't need one.
At least some of these innovations could have been copied by C without compromising the suitability for systems programming, for example the idea of strong typing and the module system; they just didn't do it.

Also, even if they hadn't known about the necessity of modules, generic algorithms and data structures etc., they could have added them later on when it became apparent; for example Ada had generic programming features in '83.

My advice on implementing stuff in C:

Posted Oct 18, 2010 5:10 UTC (Mon) by cmccabe (guest, #60281) [Link] (13 responses)

> The type system is generally a mess, as early C didn't have a cast
> operator and therefore needed all kinds of crazy implicit conversions

C has a few implicit conversions that might be confusing to novices. That hardly qualifies it as "a mess." I think C's type system is fine for low-level programming, where you're playing with bits and bytes.

> There is no module system.

In C and C++, functions and variables that are declared "file static" can't be referenced outside that file. If you combine that with a sane policy about how to split up functionality between multiple files, it does some of the things that module systems do in other languages.

C also has a built-in way of loading code at runtime in the form of shared libraries. C even has the ability to run code when a shared library is loaded, and when it is unloaded.

As Rusty would no doubt say: modularity isn't a programming language feature. It's a programmer feature.
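
Something like the following, with invented names, is all that pattern amounts to:

    /* counter.h -- the "module interface": the only names other files see */
    #ifndef COUNTER_H_INCLUDED_
    #define COUNTER_H_INCLUDED_
    int counter_next(void);
    #endif

    /* counter.c -- the "module body": static names have internal linkage */
    #include "counter.h"

    static int current;          /* invisible outside this file */

    static int step(void)        /* helper, also invisible */
    {
        return 1;
    }

    int counter_next(void)
    {
        current += step();
        return current;
    }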

> The preprocessor is a sucky workaround used to emulate features that ought
> to be in the language proper (genericity and modules), and it makes it
> much harder to build good tools for the C language.

Any language that doesn't have eval() probably needs a macro system.

Look at what happened with Java. In their arrogance, the designers believed that they were beyond the need for a macro system. But the language wasn't expressive enough to describe a lot of the things that people needed to do. The result was that an ugly mass of automatic code generators got written to do the things that a macro system would have done. Now, in addition to learning the core language, you have to read huge tomes describing these automatic code generators. Usually they come with clunky XML interfaces and specialized tools.

> There is no support for generic programming (read: type-safe container
> classes).

Ok, I agree with you here. Generic programming support would be a big improvement. Templates are probably C++'s best feature.

My advice on implementing stuff in C:

Posted Oct 18, 2010 8:51 UTC (Mon) by marcH (subscriber, #57642) [Link] (2 responses)

> In C and C++, functions and variables that are declared "file static" can't be referenced outside that file. If you combine that with a sane policy about how to split up functionality between multiple files, it does some of the things that module systems do in other languages.

Sorry but this is not good enough: it does not scale. Whereas most module systems allow nesting, "static" in C gives you only one level. And this puts severe constraints on how you split your project into files.

> Java designers believed that they were beyond the need for a macro system. The result was that an ugly mass of automatic code generators got written to do the things that a macro system would have done.

While I share the hate for these code generators, a macro system does not seem like the solution to me. Examples?

My advice on implementing stuff in C:

Posted Oct 18, 2010 19:17 UTC (Mon) by cmccabe (guest, #60281) [Link] (1 responses)

> Sorry but this is not good enough: it does not scale. Whereas most module
> systems allow nesting, "static" in C gives you only one level. And this
> puts severe constraints on how you split your project into files.

It's easy enough to split your project into multiple libraries. Then each library can do its own self-contained thing. Hopefully the graph of dependencies is a tree. I have programmed C in this style before. Build systems like CMake make it easy.

> While I share the hate for these code generators, a macro system does not
> seem like the solution to me. Examples?

Macros can be used for the following (a rough sketch of the first two appears after the list):

* Initializing structs in a very repetitive way.

* Generating short functions that do something trivial, like accessors. High level languages like ruby have :attr_accessor, but that relies on metaprogramming which isn't available in a low-level language like C.

* Generic programming
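
To make the first two concrete -- DEFINE_ACCESSORS and CMD are invented names, not anything standard:

    /* Trivial accessors, roughly what Ruby's :attr_accessor generates: */
    struct point { int x; int y; };

    #define DEFINE_ACCESSORS(type, field)                        \
        static type point_get_##field(const struct point *p)     \
        { return p->field; }                                      \
        static void point_set_##field(struct point *p, type v)   \
        { p->field = v; }

    DEFINE_ACCESSORS(int, x)
    DEFINE_ACCESSORS(int, y)

    /* Repetitive struct initialization, e.g. a command table: */
    struct command {
        const char *name;
        int (*handler)(void);
    };
    #define CMD(name) { #name, cmd_##name }

    static int cmd_help(void) { return 0; }
    static int cmd_quit(void) { return 0; }

    static const struct command commands[] = {
        CMD(help),
        CMD(quit),
    };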

My advice on implementing stuff in C:

Posted Oct 19, 2010 11:36 UTC (Tue) by nix (subscriber, #2304) [Link]

A better alternative for short functions that do something trivial is static inline functions. You can't make something a macro unless it's just shortening something the user can do anyway, and for *that* you generally want static inlines in a header file. A decent compiler will inline them for you if appropriate.

What macros really can do that nothing else can is anything involving stringization or token pasting, e.g. filling up a structure statically with information describing other C identifiers (fields in some other structure, that sort of thing). Thanks to stringization you can fill in the *name* of the identifier and its size without worrying about skew.
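
For instance (packet and field_desc are made up; the trick is #f plus offsetof/sizeof keeping the table in sync with the real struct):

    #include <stddef.h>
    #include <stdio.h>

    struct packet {
        unsigned short id;
        unsigned int   length;
        unsigned char  flags;
    };

    struct field_desc {
        const char *name;
        size_t offset, size;
    };

    /* Stringization (#f) records the field's name; offsetof and sizeof
     * follow the real layout, so nothing can drift out of sync by hand. */
    #define FIELD(f) { #f, offsetof(struct packet, f), \
                       sizeof(((struct packet *)0)->f) }

    static const struct field_desc packet_fields[] = {
        FIELD(id),
        FIELD(length),
        FIELD(flags),
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof(packet_fields) / sizeof(packet_fields[0]); i++)
            printf("%-6s offset %2zu size %zu\n", packet_fields[i].name,
                   packet_fields[i].offset, packet_fields[i].size);
        return 0;
    }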

My advice on implementing stuff in C:

Posted Oct 18, 2010 16:52 UTC (Mon) by HelloWorld (guest, #56129) [Link] (8 responses)

> C has a few implicit conversions that might be confusing to novices. That hardly qualifies it as "a mess."
C has value-destroying conversions (double -> float, int -> char, hell, even float -> int), it has cycles in the graph of conversions (int -> char, char -> int), and the void* -> T* conversion frequently leads to bugs. enums aren't really a type but a means to define integer constants. 'a' isn't a char literal but an integer literal (check it, sizeof 'a' == 4 on most compilers). If all this isn't a mess, what is? Even Bjarne Stroustrup called C's implicit conversions "rather chaotic" (Design & Evolution of C++, page 224).
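
All of the following compiles without a cast (and, depending on warning flags, without a peep):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char c = 1000;   /* int -> unsigned char: reduced mod 256, now 232 */
        int i = 3.9;              /* double -> int: fraction silently dropped */
        float f = 0.1;            /* double -> float: precision silently lost */
        int *p = malloc(8);       /* void * -> int *: no cast needed in C */

        printf("%d %d %f %zu\n", c, i, f, sizeof 'a');  /* sizeof 'a' == sizeof(int) */
        free(p);
        return 0;
    }
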
> In C and C++, functions and variables that are declared "file static" can't be referenced outside that file. If you combine that with a sane policy about how to split up functionality between multiple files, it does some of the things that module systems do in other languages.
Actually, it's a rather lame workaround. You have to put header guards everywhere, which is just useless clutter. When a header file changes, you have to recompile every file that includes it, even if the change is totally irrelevant for some files, for example when you removed an unused function prototype or added a comment. If you forget to include a header file into its implementation file, the compiler won't check that the prototypes match the implementation. Also, the issue of name collisions isn't resolved that way. I learned about this when I worked on a project that typedefed its own fixed-size integer types (u64, u32 etc.), and one day they realized that some system header file also typedefed those names.
> Any language that doesn't have eval() probably needs a macro system.
I'm not so sure about that, but let's just assume it's true. Then why did they build such a crappy one for C? Proper macro systems, such as those from LISP, work on the AST level, while the C preprocessor works on the token level. The result is that you can't write #define square(x) x*x; you have to write #define square(x) ((x)*(x)). Also, the C preprocessor doesn't really know anything about the C language. You can't do #if sizeof (int) >= 4, and you can't do stuff like declare a function only if it's not declared already.
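
The square example in full, for anyone who hasn't been bitten by it (the macro names here are arbitrary):

    #include <stdio.h>

    #define SQUARE_NAIVE(x) x * x        /* plain token substitution */
    #define SQUARE(x)       ((x) * (x))  /* parenthesized to survive it */

    int main(void)
    {
        printf("%d\n", SQUARE_NAIVE(1 + 2)); /* expands to 1 + 2 * 1 + 2  ->  5 */
        printf("%d\n", SQUARE(1 + 2));       /* expands to ((1 + 2) * (1 + 2))  ->  9 */
        return 0;
    }

And even the "fixed" version still evaluates its argument twice, something an AST-level macro system would not force on you.
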
> Look at what happened with Java. In their arrogance, the designers believed that they were beyond the need for a macro system. But the language wasn't expressive enough to describe a lot of the things that people needed to do. The result was that an ugly mass of automatic code generators got written to do the things that a macro system would have done. Now, in addition to learning the core language, you have to read huge tomes describing these automatic code generators. Usually they come with clunky XML interfaces and specialized tools.
The C preprocessor is just as extralinguistic as those code generators, the only difference is that it's called by the compiler driver, while other such tools aren't. Also, the C preprocessor clearly isn't powerful enough to replace code generators even for C. Just look at the numerous code generators in use every day, like flex, bison or ecpg, or those things that generate "config.h" for a C program. On the other hand, Java has annotations and annotation processors, which can replace many uses of code generators.

My advice on implementing stuff in C:

Posted Oct 18, 2010 18:23 UTC (Mon) by cmccabe (guest, #60281) [Link] (6 responses)

> Actually, it's a rather lame workaround. You have to put header guards
> everywhere, which is just useless clutter. When a header file changes, you
> have to recompile every file that includes it, even if the change is
> totally irrelevant for some files, for example when you removed an unused
> function prototype or added a comment

Well, it sounds like you should be advocating Google Go then, rather than C++.

> You can't do #if sizeof (int) >= 4, and you can't do stuff like declare a
> function only if it's not declared already

Actually, you can. Check out static_assert in boost and BUILD_BUG_ON in the kernel.
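
A minimal version of the same trick, with invented macro names -- __LINE__ is pasted in so the assertion can be used more than once per file:

    /* If cond is false the typedef gets a negative array size and the
     * compiler (not the preprocessor) rejects the translation unit. */
    #define CONCAT_(a, b) a##b
    #define CONCAT(a, b)  CONCAT_(a, b)
    #define STATIC_ASSERT(cond) \
        typedef char CONCAT(static_assert_line_, __LINE__)[(cond) ? 1 : -1]

    STATIC_ASSERT(sizeof(int) >= 4);   /* the "#if sizeof (int) >= 4" case */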

> The C preprocessor is just as extralinguistic as those code generators,
> the only difference is that it's called by the compiler driver, while
> other such tools aren't.

This is a case where what seems like a small detail to academics is actually a huge deal for practical programmers. Having one preprocessor instead of hundreds really simplifies a programmer's life.

> Also, the C preprocessor clearly isn't powerful
> enough to replace code generators even for C. Just look at the numerous
> code generators in use every day, like flex, bison or ecpg, or those
> things that generate "config.h" for a C program. On the other hand, Java
> has annotations and annotation processors, which can replace many uses of
> code generators.

I haven't thought about it deeply, but I don't think that any macro preprocessor would be powerful enough to replace flex or bison. If you want to define your own domain-specific language without using code generators, just use a high-level language like Ruby.

Also, my understanding is that Java annotations are usually used in conjunction with code generators and static analyzers, rather than apart from them.

My advice on implementing stuff in C:

Posted Oct 18, 2010 21:15 UTC (Mon) by HelloWorld (guest, #56129) [Link] (5 responses)

> Well, it sounds like you should be advocating Google Go then, rather than C++.

I'm not advocating C++. If you read my comments carefully, you'll see that I said that C++ is not a beautiful language, and I also admitted that the name lookup rules are too complex. And of course, some of my criticisms of C (lack of a module system, the C preprocessor) apply to C++ as well; it also takes too long to compile.
I merely defend C++ when I feel it's being criticized unfairly, that doesn't mean I'm a fan of the language. I also mentioned other languages than C++ (specifically, Go, Rust and D) in my second comment; I just don't know them as well as C++.

> Actually, you can. Check out static_assert in boost and BUILD_BUG_ON in the kernel.

You're missing the point. The point is that the C preprocessor doesn't know anything about the C language. When you use BUILD_BUG_ON(x), it's not the preprocessor that complains when x is true, but the C compiler; the preprocessor is merely used to generate a C program fragment that is illegal if x is true. It's the same with BOOST_STATIC_ASSERT. There are plenty of other things that can't be done with the C preprocessor due to this, for example you can't check whether two arguments of a macro are of the same type.

> This is a case where what seems like a small detail to academics is actually a huge deal for practical programmers. Having one preprocessor instead of hundreds really simplifies a programmer's life.

Actually, the C preprocessor is a wonderful example of how having a sucky hack that is thought to be "good enough" by many people prevents progress. If the preprocessor hadn't been there, somebody would have invented proper facilities for generic programming and modules, which would have simplified programmers' lives much more than the preprocessor did.
Btw, I'm still not convinced that any language lacking eval needs a macro facility; could you elaborate on that?

> I haven't thought about it deeply, but I don't think that any macro preprocessor would be powerful enough to replace flex or bison.

Then go and learn about Lisp macros. With those, you have the full power of Lisp available to do source code transformations. Writing a Lisp macro that generates parsers isn't harder than writing a Lisp program that does.

> Also, my understanding is that Java annotations are usually used in conjunction with code generators and static analyzers, rather than apart from them.

I only recently learned about what annotations can do. Project Lombok uses them to get rid of all those trivial functions for Java classes (getters, setters, hashCode etc.). This is clearly a case where code generators are actively being made redundant by annotations.

My advice on implementing stuff in C:

Posted Oct 18, 2010 22:01 UTC (Mon) by cmccabe (guest, #60281) [Link] (4 responses)

> There are plenty of other things that can't be done with the C
> preprocessor due to this, for example you can't check whether two
> arguments of a macro are of the same type

From kernel.h:

> /*
> * min()/max()/clamp() macros that also do
> * strict type-checking.. See the
> * "unnecessary" pointer comparison.
> */
> #define min(x, y) ({ \
> typeof(x) _min1 = (x); \
> typeof(y) _min2 = (y); \
> (void) (&_min1 == &_min2); \
> _min1 < _min2 ? _min1 : _min2; })

I'm getting a little tired of being told what I can't do, especially when it turns out that I can do it after all.

From the Project Lombok web page:

> Data is nice, but its certainly not the only boilerplate buster that
> lombok has to offer. If you need more fine grained control, there's
> @Getter and @Setter, and to help you in correctly cleaning up your
> resources, @Cleanup can automatically and without cluttering your source
> files generate try/finally blocks to safely call close() on your resource
> objects. That's not all, but for the complete list you'll need to head
> over to the feature overview

So in other words, it's just yet another code generator.

> Writing a Lisp macro that generates parsers isn't harder than writing a
> Lisp program that does.

And writing a domain-specific language in Ruby is easier still. Apparently high-level languages do high level things better than low-level languages do. Film at 11.

My advice on implementing stuff in C:

Posted Oct 18, 2010 22:43 UTC (Mon) by foom (subscriber, #14868) [Link]

> From kernel.h:
The linux kernel is not written in C: it is written in an extended GCC-proprietary C variant.

>> #define min(x, y) ({ \
({ is a GCC extension (statement-expression), it is not in C.

>> typeof(x) _min1 = (x); \
>> typeof(y) _min2 = (y); \
typeof is a GCC extension, it is not in C.

>> (void) (&_min1 == &_min2); \
Comparison of distinct pointer types is not an error in C.

>> _min1 < _min2 ? _min1 : _min2; })
>I'm getting a little tired of being told what I can't do, especially when it turns out that I can do it after all.
Just because you can do it in GCC, doesn't mean you can do it in C.

My advice on implementing stuff in C:

Posted Oct 18, 2010 22:45 UTC (Mon) by HelloWorld (guest, #56129) [Link] (2 responses)

> I'm getting a little tired of being told what I can't do, especially when it turns out that I can do it after all.

That code uses two GNU extensions, statement expressions and typeof. It's not possible to do this in C89 or C99.

Also, you're still missing the point. The point is that the preprocessor doesn't understand the language, which is a _fundamental_ shortcoming of that facility.

> So in other words, it's just yet another code generator.

Wake up.
The preprocessor, cpp, is "just yet another code generator", it generates C code from a (pretty dumb) macro language.
The C compiler, cc1, is "just yet another code generator", it generates assembly from C code.
The assembler, gas, is "just yet another code generator", it generates machine code from assembly.

Just because all this stuff is hidden from you by the compiler driver, gcc, doesn't mean it doesn't exist. However, unlike gcc's compiler driver, the java compiler allows you to hook up arbitrary stuff, like lombok. Being flexible is an advantage in my book.

> And writing a domain-specific language in Ruby is easier still. Apparently high-level languages do high level things better than low-level languages do.

You clearly have _no_ idea what Lisp is about. Here's a hint: it's not a low level language, and it's _at least_ as suitable for writing domain-specific languages as Ruby. Actually, it had most of what ruby has before ruby even existed.

My advice on implementing stuff in C:

Posted Oct 19, 2010 20:13 UTC (Tue) by cmccabe (guest, #60281) [Link] (1 responses)

My point isn't what you can do with ISO-standard C. My point is what you can do with C, period. Most good extensions to the language get rolled into the standard eventually.

Yes, I am aware that the C pre-processor is a code generator. My point is that if code generation is going to be required, it ought to be part of the language rather than separate.

> However, unlike gcc's compiler driver, the java compiler allows you to
> hook up arbitrary stuff, like lombok. Being flexible is an advantage in my
> book.

C/C++ has annotations too. Look up doxygen. It's not that hard to put an at-sign in a comment and run a macro generator over it later. The fact that Java has chosen to give this construct the pompous name of an "annotation" doesn't make it any better. Before annotations were added to Java, the code generators would match method names against regular expressions. For example, test_FOO would be used to generate a test of the FOO class.

> You clearly have _no_ idea what Lisp is about. Here's a hint: it's not a
> low level language, and it's _at least_ as suitable for writing
> domain-specific languages as Ruby. Actually, it had most of what ruby has
> before ruby even existed

What part of my statement makes you think I don't know what Lisp is? I'm well aware of Lisp's history as the first functional language.

I'm also aware of standard ML and OCaml. I have written compilers in these languages, and I'm well aware of their features.

You see, I have used many different languages, and I'm aware of what they're good at. Java works pretty well to write high-level apps on Android. Ruby works pretty well to write web applications. C works well in the kernel. And in some cases, C++ is the right choice (its best feature is templates.)

> I'm not advocating C++. If you read my comments carefully, you'll see that
> I said that C++ is not a beautiful language, and I also admitted that the
> name lookup rules are too complex. And of course, some of my criticisms of
> C (lack of a module system, the C preprocessor) apply to C++ as well; it
> also takes too long to compile.
> I merely defend C++ when I feel it's being criticized unfairly, that
> doesn't mean I'm a fan of the language. I also mentioned other languages
> than C++ (specifically, Go, Rust and D) in my second comment; I just don't
> know them as well as C++.

When I read back through this thread, I'm not at all sure what you're advocating. You start off by condemning C and casually commenting "C++ has been around for a long time, and while it's not exactly a beautiful language, it does provide huge benefits over C." Then, when people challenge you to explain these "huge benefits," you air a bunch of minor grievances about small things like implicit conversions in the type system and lack of features present in Standard ML (Seriously? We're comparing a low-level language to Standard ML?) Pretty much all of these gripes could equally well be raised against C++, because almost every valid C program is also a valid C++ program. You go on to raise a bunch of "but I can't do X" whines, which I then counter with "here is how you do X in C."

Probably the only valid point you raised at all is that generic programming would make a good addition to C. In fact, I think it's probably C++'s best feature. I suspect that a lot of people write very C-like code in C++ and use g++ just to make use of the STL.

I'm sorry that you feel so negatively about C. And really, I'm sorry that you feel so negatively about programming languages in general. If you could offer a constructive suggestion rather than "everything sucks," people might actually listen to you. Language extensions get made all the time. The truth is, though, a lot of the comments you have made come across as bikeshedding. Perhaps if you had been Kernighan or Ritchie, you would have put an "e" in the creat() function. Or changed C so that shorts did not get implicitly promoted to int. But you weren't, and they didn't, so we're just going to have to get on with our lives.

PS. I probably can't continue replying to this thread. Cheers.

My advice on implementing stuff in C:

Posted Oct 19, 2010 22:02 UTC (Tue) by HelloWorld (guest, #56129) [Link]

> My point isn't what you can do with ISO-standard C. My point is what you can do with C, period.
Oh, so C isn't defined by a standard any more, but by what you think C is? I guess "you know it when you see it", right?

> Yes, I am aware that the C pre-processor is a code generator. My point is that if code generation is going to be required, it ought to be part of the language rather than separate.
And my point is that cpp is pretty much the worst way to make it part of the language.

> C/C++ has annotations too. Look up doxygen.
The Java equivalent to Doxygen is Javadoc. Annotations are something completely different; they are not (special) comments. And actually, C++0x will be supporting annotations, though they'll be called attributes there.

> What part of my statement makes you think I don't know what Lisp is?
First, you didn't know that domain-specific languages can be implemented with Lisp macros. Then you made the completely unfounded claim that Ruby is more suitable for implementing DSLs than Lisp. Your follow-up remark that "high-level languages do high level things better than low-level languages do" looked like a justification for that claim, and in that context it reads as if you're thinking of Lisp as a low-level language, which it isn't.

> When I read back through this thread, I'm not at all sure what you're advocating. You start off by condemning C and casually commenting "C++ has been around for a long time, and while it's not exactly a beautiful language, it does provide huge benefits over C." Then, when people challenge you to explain these "huge benefits," you air a bunch of minor grievances about small things like implicit conversions in the type system and lack of features present in Standard ML (Seriously? We're comparing a low-level language to Standard ML?)
I explicitly pointed out that some ideas from Standard ML, such as having strong typing, polymorphism and a module system, have nothing at all to do with whether it's a high level language or a low level one. Please, just read carefully

> You go on to raise a bunch of "but I can't do X" whines, which I then counter with "here is how you do X in C."
All you've shown is that you're missing the point. The point is that cpp doesn't understand the C language. You then pointed out that it is possible to use cpp to generate a C program that fails to compile under certain circumstances. That is something completely and utterly different, and the fact that you still didn't understand it doesn't speak for you.

> Probably the only valid point you raised at all is that generic programming would make a good addition to C.
Yeah right, because I didn't point out how con- and destructors (or RAII for short) massively simplify resource management at the very beginning.

> I'm sorry that you feel so negatively about C. And really, I'm sorry that you feel so negatively about programming languages in general. If you could offer a constructive suggestion rather than "everything sucks," people might actually listen to you.
I've pointed out what I think the worst problems with C are (sucky preprocessor, lack of a module system, lack of support for generic programming, lack of support for resource management), and I honestly think that these problems are so bad that nobody should be using C, except if no compiler for a better language is available for your target platform(s).
Also, contrary to your claim, I did point out alternatives: D, C++, Rust and Go, at the very beginning of this thread. I don't want to advocate any particular one, because I think that people should make up their own minds about what language they want. And also because I think that it'd obscure my point which is not "Language X is better than C", but "get rid of C if you can". If you don't like that -- tough.

> Perhaps if you had been Kernighan or Ritchie, you would have put an "e" in the creat() function. Or changed C so that shorts did not get implicitly promoted to int. But you weren't, and they didn't, so we're just going to have to get on with our lives.
Unlike you, I'd rather try to fix the mistakes of the past than carry them around and cope with them forever.

My advice on implementing stuff in C:

Posted Oct 19, 2010 11:40 UTC (Tue) by nix (subscriber, #2304) [Link]

> The C preprocessor is just as extralinguistic as those code generators
While it seems so from cpp's quite remarkable lack of expressive power, it actually isn't possible to write a C preprocessor that is truly extralinguistic, because (as you yourself said), it operates on a token stream, not a character stream. It really is integrated into the language: the problem is that it's mostly (but not entirely) bolted onto the front end of it, as befits something that at one point was an extralinguistic addon that operated on character streams.

My advice on implementing stuff in C:

Posted Oct 19, 2010 11:30 UTC (Tue) by nix (subscriber, #2304) [Link]

> C also has a built-in way of loading code at runtime in the form of shared libraries. C even has the ability to run code when a shared library is loaded, and when it is unloaded.
No it doesn't. All C has is atexit(). The features you mention are POSIX, not C.
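
For the record, the POSIX facility in question is dlopen()/dlsym() from <dlfcn.h>; a minimal use looks something like this (the library name is Linux-specific):

    #include <dlfcn.h>   /* POSIX, not ISO C */
    #include <stdio.h>

    int main(void)
    {
        void *handle = dlopen("libm.so.6", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }
        /* Converting the void * from dlsym() to a function pointer is a
         * POSIX-ism too; ISO C by itself does not define it. */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (cosine)
            printf("cos(0) = %f\n", cosine(0.0));
        dlclose(handle);
        return 0;
    }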

My advice on implementing stuff in C:

Posted Oct 16, 2010 14:40 UTC (Sat) by Baylink (guest, #755) [Link] (2 responses)

Aw, c'mon; I haven't had a good R-war since Usenet died. :-)

My advice on implementing stuff in C:

Posted Oct 19, 2010 11:44 UTC (Tue) by nix (subscriber, #2304) [Link] (1 responses)

I've never seen an R-war at all. Your statistical language is better than my statistical language!

My advice on implementing stuff in C:

Posted Oct 21, 2010 19:58 UTC (Thu) by njs (subscriber, #40338) [Link]

R has some excellent bits that are difficult/impossible to get without baking them into the core language design (support for NA ubiquitous across all data types, the formula syntax, the quirky call-by-thunk convention that lets you get ad-hoc macro-like behavior even for non-macro functions). BUT the interpreter is a piece of junk by modern standards -- and honestly even by not-so-modern standards, given that R is a Lisp. It does refcount-based COW, where the refcount values are "1, 2, many", so in practice it's an imperative vector language that does not support in-place mutation. The C API is awful and exposes all of the interpreter's implementation details (so you can't fix any of them). And while in theory it has nice namespace support, in practice you end up with a giant flat soup of poorly organized functions that do random things, like PHP.

TAKE THAT

Russell: On C Library Implementation

Posted Oct 14, 2010 21:29 UTC (Thu) by cpeterso (guest, #305) [Link]

There’s a standard naming for “I know what I’m doing” low-level alternate functions: the single-underscore prefix (e.g. _exit()).
But the POSIX 2008 standard says:
... all identifiers that begin with an underscore and either an uppercase letter or another underscore are always reserved for any use by the implementation. All identifiers that begin with an underscore are always reserved for use as identifiers with file scope in both the ordinary identifier and tag name spaces.
So technically, you would probably be safe if you follow Rusty's advice to never use uppercase letters. But you would still be walking through a POSIX minefield if you use a leading underscore. Consider #ifndef _MY_HEADER_H_INCLUDED_. I now prefer trailing_ underscores_ for "I know what I’m doing" functions.
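
Put together, that preference might look like this (mylib_foo is a stand-in name):

    /* mylib/foo.h -- no leading underscores anywhere near the guard */
    #ifndef MYLIB_FOO_H_INCLUDED_
    #define MYLIB_FOO_H_INCLUDED_

    int mylib_foo(int fd);
    int mylib_foo_unlocked_(int fd);   /* trailing underscore: "I know what I'm doing" */

    #endif /* MYLIB_FOO_H_INCLUDED_ */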

Russell: On C Library Implementation

Posted Oct 14, 2010 21:29 UTC (Thu) by ncm (guest, #165) [Link] (8 responses)

The correct name is StudlyCaps, but you have to say it with a sneer.

StudlyCaps

Posted Oct 14, 2010 22:00 UTC (Thu) by pr1268 (guest, #24648) [Link] (5 responses)

I always thought it was CamelCase (with the BactrianCamelCase and dromedaryCamelcase variants). ;-)

StudlyCaps

Posted Oct 15, 2010 8:20 UTC (Fri) by rvfh (guest, #31018) [Link] (4 responses)

I think thisIsCamelCase.

StudlyCaps

Posted Oct 15, 2010 10:21 UTC (Fri) by peregrin (guest, #56601) [Link] (3 responses)

Depends on whether the camel is currently standing upright or drinking (StandingCamelCase vs. drinkingCamelCase).

StudlyCaps

Posted Oct 15, 2010 17:02 UTC (Fri) by marcH (subscriber, #57642) [Link]

> StandingCamelCase vs. drinkingCamelCase

Brilliant! Noted.

StudlyCaps

Posted Oct 21, 2010 22:28 UTC (Thu) by speedster1 (guest, #8143) [Link]

> (StandingCamelCase vs. drinkingCamelCase)

Depending on what it's drinking, it could be dRInk1ngcAM3LCaSE

:)

StudlyCaps

Posted Oct 22, 2010 21:54 UTC (Fri) by foom (subscriber, #14868) [Link]

Or, if you're using the Go language, you have privateCamelCase and PublicCamelCase. :)

In Go the rule about visibility of information is simple: if a name (of a top-level type, function, method, constant or variable, or of a structure field or method) is capitalized, users of the package may see it. Otherwise, the name and hence the thing being named is visible only inside the package in which it is declared. This is more than a convention; the rule is enforced by the compiler. In Go, the term for publicly visible names is "exported".

Woo.

Russell: On C Library Implementation

Posted Oct 15, 2010 21:25 UTC (Fri) by xtifr (guest, #143) [Link]

Studlycaps (available as studly(6) in the Debian filters package) involves _random_ capitalization (raNdom capItalIzatIon).

Russell: On C Library Implementation

Posted Oct 25, 2010 12:04 UTC (Mon) by Seegras (guest, #20463) [Link]

BiCapitalisation?


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds