
The return of syslets

Posted May 31, 2007 20:05 UTC (Thu) by vmole (guest, #111)
In reply to: The return of syslets by RobSeace
Parent article: The return of syslets

> Any app thinking they have sole domain over all FDs, and no lib will ever create any behind its back, is a totally broken app,

Correct.

> which is unlikely to work in normal usage anywhere...

Unfortunately, this is not the case. They *do* work in normal usage almost everywhere. That's why they survive: they don't break in the presence of two or three "unknown" descriptors. But when glibc starts chewing up many descriptors in a hidden/unexpected way, those apps will break. And guess who gets the blame? "My app works everywhere except with glibc on Linux 2.6.25, so it must be glibc/Linux that's broken." There's a whole long history of this kind of thing, and a whole long history of vendors (in a very general sense that includes free software developers) accommodating this kind of lossage.

For example, why does C99 have the abomination "long long", even though 64-bit code could easily be accommodated by char/short/int/long? Because far too many people wrote code that assumed "long" was 32 bits, and the C compiler vendors didn't want to break that. (Well, that, and wanting to avoid breaking existing ABIs, which also seems outside the purview of a language standard, and could have been dealt with in better ways.) Who got screwed? Those who could read the C89 standard and made no assumptions about "long" except what was *promised* in the C89 standard: "long is the largest integer type".
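(A minimal sketch of the kind of assumption that breaks; the code is illustrative, not from any particular app:)

    #include <stdio.h>

    /* Broken: assumes sizeof(long) == 4, which fails on LP64 systems. */
    void save_broken(FILE *f, unsigned long v)
    {
        fwrite(&v, 4, 1, f);
    }

    /* Relies only on what the standard promises: emits exactly four
     * bytes, low-order first, whatever size long actually is. */
    void save_portable(FILE *f, unsigned long v)
    {
        int i;
        for (i = 0; i < 4; i++)
            fputc((v >> (8 * i)) & 0xff, f);
    }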

But I'm not bitter.



The return of syslets

Posted May 31, 2007 21:49 UTC (Thu) by RobSeace (subscriber, #4435) [Link]

> For example, why does C99 have the abomination "long long", even though 64
> bit code could easily be accomodated by char/short/int/long? Because far
> too many people wrote code that assumed "long" was 32 bits, and the C
> compiler vendors didn't want to break that.

Well, I wouldn't really choose to complain about THAT particular example, personally... I think it would be kind of awkward to have "long" be 64 bits on a 32-bit system... Not to mention probably inefficient, since LOTS of stuff uses longs, and manipulating 64-bit ints on a 32-bit system has to be less efficient... With a separate "long long", people only use it when they need a potentially 64-bit value... Yes, it's a bit of a pain and not as clean as just using "long", but I can certainly see the logic in it, above and beyond just supporting people who write broken code assuming a 32-bit "long"...

And, the ABI issue you mentioned is a big deal-breaker as well... HOW would you propose to solve that other than leaving "long" alone?? You can't just change all standard lib functions that used to take/return "long" to "int" (or some new typedef), because all existing code quite properly assumes they take/return a "long", since that's how they've always been defined... Plus, there are tons of non-standard third-party libs to think of, which would also be affected and which you could never hope to change all of...

(On a side note: am I the only one who hates the fact that various socket functions these days take stupid typedefs like "socklen_t" instead of the traditional "int"?? I wouldn't mind so much, but apparently that's being defined as unsigned instead of signed "int", which is what it's historically always been... Sure, unsigned makes more sense in retrospect, but geez... And now GCC complains about passing in a pointer to an "int" (which is how things have always been done) for stuff like accept()/getsockname()/etc., since it's not unsigned... ;-/ Yeah, you can disable it, thankfully, but still it might be a nice warning to leave enabled for OTHER stuff where it legitimately IS a mistake; here it's a case of the API changing, which just isn't cool...)
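(A minimal sketch of that warning, assuming a POSIX accept(2); the helper name is made up:)

    #include <sys/socket.h>

    int get_peer(int listen_fd)
    {
        struct sockaddr_storage addr;
        socklen_t addrlen = sizeof(addr);   /* modern, warning-free */
        /* int addrlen = sizeof(addr);         the traditional way; GCC now
                                               warns: int * vs. socklen_t * */
        return accept(listen_fd, (struct sockaddr *)&addr, &addrlen);
    }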

> Those who could read the C89 standard, and made no assumptions about
> "long", except what was *promised* in the C89 standard: "long is the
> largest integer type".

Well, if you change it to "largest integer type native to the current platform", it still works... ;-) No, I know what you're saying... I'm old enough to remember the conversion from 16-bit systems to 32-bit; there, "long" was 32-bit, even though the system was 16-bit, so what you say certainly makes sense... I just don't really have a problem with "long long", personally...

The real fun is going to come if/when we ever go to 128-bit systems: I guess the only choice at that point will be to keep "long" 64-bit and make "long long" the only 128-bit integer, or else invent another new native type... Either choice is kind of ugly...

The return of syslets

Posted May 31, 2007 22:24 UTC (Thu) by vmole (guest, #111) [Link]

I honestly don't remember what the alternative ABI solution was; I *think* it was better than "just recompile everything", but I don't have a reference to it now, and I'm not willing to re-read all of comp.std.c from that era, so maybe not. My main gripe is that the chosen solution broke only *correct code*. Also, IMO, "long long" is ugly; it's the only core type whose name is two words.

Anyway, new code shouldn't use it. If you need an integer of a certain size, use the intN_t, int_leastN_t, or int_fastN_t typedefs in <stdint.h>, so that your code has a chance of working on past and future platforms, and doesn't break when someone flips on the ILP16 compiler switch.
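(For instance, a minimal sketch of the three flavors, assuming a C99 environment:)

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t      crc   = 0xFFFFFFFFu;  /* exactly 32 bits */
        int_least16_t count = 12345;        /* smallest type with >= 16 bits */
        int_fast16_t  i     = 42;           /* fastest type with >= 16 bits */

        printf("%" PRIu32 " %" PRIdLEAST16 " %" PRIdFAST16 "\n",
               crc, count, i);
        return 0;
    }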

I think that it's generally agreed socklen_t was misguided, causing more problems than it solved, but we're stuck with it now.

long long

Posted Jun 1, 2007 0:20 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

> Anyway, new code shouldn't use it. If you need an integer of a certain size, use the intN_t, int_leastN_t, or int_fastN_t typedefs in <stdint.h>, so that your code has a chance of working on past and future platforms,

Unfortunately, you really have to go further than that to have a reasonable chance. Old systems don't have those types defined, or have them defined somewhere other than <stdint.h>. So you really have to use local types which you laboriously define to whatever types, long long or whatever, work on that system. I distribute some software used on a wide variety of systems, some quite old, and this has been a nightmare for me. The inability to test for the existence of a type at compile time, or to redefine one, is the worst part.

It was wishful thinking of the original C designers that a vague type like "the longest integer available" would be useful. In practice, you almost always need a certain number of bits. Because such types were not provided, programmers did what they had to do: assume long or int is 32 bits.

long long

Posted Jun 1, 2007 1:32 UTC (Fri) by roelofs (guest, #2599) [Link]

> Unfortunately, you really have to go further than that to have a reasonable chance. Old systems don't have those types defined, or have them defined somewhere other than <stdint.h>. So you really have to use local types which you laboriously define to whatever types, long long or whatever, work on that system.

Yes, but fortunately there aren't any more of those, so you set up your own typedefs once (e.g., based on predefined macros) and you're done.
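(A minimal sketch of the predefined-macro approach; the macros and the typedef name here are just examples, and real code needs one branch per supported port:)

    #if defined(__LP64__) || defined(_LP64)
    typedef unsigned int  u32;   /* int is 32 bits on LP64 systems */
    #else
    typedef unsigned long u32;   /* long is 32 bits on ILP32/LP32 systems */
    #endif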

> I distribute some software used on a wide variety of systems, some quite old, and this has been a nightmare for me. The inability to test for the existence of a type at compile time, or to redefine one, is the worst part.

Yup, been there, done that, got the scars. And I 100% agree (heh) that the failure to link typedefs to macros (or something else the preprocessor can test) was a massive mistake on the part of the standardization committee(s). "Let's see, now... It's an error to re-typedef something, so why don't we make such cases completely undetectable!"

Fortunately that's mostly water under the bridge at this point, though. And you can get pretty far on old systems by detecting them on the basis of macros. Back in the Usenet days I maintained a script called defines, which did a fair job of sniffing out such things (and also reporting native sizes), along with a corresponding database of its output. I think Zip and UnZip still use some of the results, though I don't know if any of those code paths have been tested in recent eras.

Greg

long long

Posted Jun 1, 2007 18:38 UTC (Fri) by vmole (guest, #111) [Link]

This might help: Instant C99.

Yes, typedefs should be testable in the preprocessor. You certainly won't get any argument from me on that point :-) But for stdint, you can check __STDC_VERSION__ to determine whether to use your local version or the implementation-provided one.
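(Something like this minimal sketch; the fallback typedefs are per-platform assumptions that each port has to verify:)

    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    #  include <stdint.h>
    #else
    typedef unsigned int       uint32_t;   /* assumes int is 32 bits here */
    typedef unsigned long long uint64_t;   /* assumes a GCC-style long long */
    #endif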

A key point is that even if you do have to create your own defs, at least name them after the stdint.h types, so that you can later switch without pain and not require other people looking at your code to learn yet another set of typedef names.

long long

Posted Jun 2, 2007 18:56 UTC (Sat) by giraffedata (subscriber, #1954) [Link]

> A key point is that even if you do have to create your own defs, at least name them after the stdint.h types

You have to do substantially more work if you want to do that, because you have to make sure nobody else defines the type. If you just do something as simple as checking __STDC_VERSION__, you can't then do a typedef of uint32_t, because it might be defined even though the environment is not totally C99.

And if it's part of an external interface, you surely have no right to define as generic a name as uint32_t. It could easily conflict with header files from other projects that had the same idea.

> so that you can later switch without pain

The "switching" that I think is most important is where someone extracts your code for use in a specific environment where uint32_t is known to be defined. That's why I do all that extra work to be able to use uint32_t (and I don't claim that I've got it right yet) instead of a private name for the same thing.

The return of syslets

Posted Jun 11, 2007 9:19 UTC (Mon) by forthy (guest, #1525) [Link]

> except what was *promised* in the C89 standard: "long is the largest integer type".

Or like GCC promised that "long long" is twice as long as "long", and broke that promise when GCC was ported to its first 64-bit architecture (MIPS). Now, if you are lucky, you can use typedef int int128_t __attribute__((__mode__(TI))); to create a real 128-bit type on some 64-bit platforms.
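(A minimal sketch of that extension; TImode exists only on some 64-bit GCC targets:)

    typedef int          int128_t  __attribute__((__mode__(TI)));
    typedef unsigned int uint128_t __attribute__((__mode__(TI)));

    uint128_t mul64(unsigned long a, unsigned long b)
    {
        return (uint128_t)a * b;   /* full 128-bit product of 64-bit values */
    }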

There are only two choices: sanity, or backward compatibility with idiots. The idiots are the majority; they always win.

