
The return of syslets


Posted May 31, 2007 22:24 UTC (Thu) by vmole (guest, #111)
In reply to: The return of syslets by RobSeace
Parent article: The return of syslets

I honestly don't remember what the alternative ABI solution was; I *think* it was better than "just recompile everything", but I don't have a reference to it now, and I'm not willing to re-read all of comp.std.c from that era, so maybe not. My main gripe is that the solution only broke *correct code*. Also, IMO, "long long" is ugly; it's the only core type that is two words.

Anyway, new code shouldn't use it. If you need an integer of a certain size, use the intN_t, int_leastN_t, or int_fastN_t typedefs in <stdint.h>, so that your code has a chance of working on past and future platforms, and doesn't break when someone flips on the ILP16 compiler switch.

I think that it's generally agreed socklen_t was misguided, causing more problems than it solved, but we're stuck with it now.



long long

Posted Jun 1, 2007 0:20 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

Anyway, new code shouldn't use it. If you need an integer of a certain size, use the intN_t, int_leastN_t, or int_fastN_t typedefs in <stdint.h>, so that your code has a chance of working on past and future platforms,

Unfortunately, you really have to go further than that to have a reasonable chance. Old systems don't have those types defined, or have them defined elsewhere than <stdint.h>. So you really have to use local types which you laboriously define to whatever types, long long or whatever, work on that system. I distribute some software used on a wide variety of systems, some quite old, and this has been a nightmare for me. The inability to test for the existence of a type at compile time, or redefine one, is the worst part.

It was wishful thinking of the original C designers that a vague type like "the longest integer available" would be useful. In practice, you almost always need a certain number of bits. Because such types were not provided, programmers did what they had to do: assume long or int is 32 bits.

long long

Posted Jun 1, 2007 1:32 UTC (Fri) by roelofs (guest, #2599) [Link]

Unfortunately, you really have to go further than that to have a reasonable chance. Old systems don't have those types defined, or have them defined elsewhere than <stdint.h>. So you really have to use local types which you laboriously define to whatever types, long long or whatever, work on that system.

Yes, but fortunately there aren't any more of those, so you set up your own typedefs once (e.g., based on predefined macros) and you're done.

I distribute some software used on a wide variety of systems, some quite old, and this has been a nightmare for me. The inability to test for the existence of a type at compile time, or redefine one, is the worst part.

Yup, been there, done that, got the scars. And I 100% agree (heh) that the failure to link typedefs to macros (or something else the preprocessor can test) was a massive mistake on the part of the standardization committee(s). "Let's see, now... It's an error to re-typedef something, so why don't we make such cases completely undetectable!"

Fortunately that's mostly water under the bridge at this point, though. And you can get pretty far on old systems by detecting them on the basis of macros. Back in the Usenet days I maintained a script called defines, which did a fair job of sniffing out such things (and also reporting native sizes), along with a corresponding database of its output. I think Zip and UnZip still use some of the results, though I don't know if any of those code paths have been tested in recent eras.

Greg

long long

Posted Jun 1, 2007 18:38 UTC (Fri) by vmole (guest, #111) [Link]

This might help: Instant C99.

Yes, typedefs should be testable in the preprocessor. You certainly won't get any argument from me on that point :-) But for stdint, you can check __STDC_VERSION__ to determine whether to use your local version or the implementation-provided version.

A key point is that even if you do have to create your own defs, at least name them after the stdint.h types, so that you can later switch without pain and not require other people looking at your code to learn yet another set of typedef names.

long long

Posted Jun 2, 2007 18:56 UTC (Sat) by giraffedata (subscriber, #1954) [Link]

A key point is that even if you do have to create your own defs, at least name them after the stdint.h types

You have to do substantially more work if you want to do that, because you have to make sure nobody else defines the type. If you just do something as simple as checking __STDC_VERSION__, you can't then do a typedef of uint32_t, because it might be defined even though the environment is not totally C99.

And if it's part of an external interface, you surely have no right to define as generic a name as uint32_t. It could easily conflict with header files from other projects that had the same idea.

so that you can later switch without pain

The "switching" that I think is most important is where someone extracts your code for use in a specific environment where uint32_t is known to be defined. That's why I do all that extra work to be able to use uint32_t (and I don't claim that I've got it right yet) instead of a private name for the same thing.


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds