The return of syslets
Zach's motivation for this work, remember, was to make it easier to implement and maintain proper asynchronous I/O (AIO) support in the kernel. His current work continues toward that goal: in particular, one part of the new syslet patch is a replacement for the io_submit() system call, which is the core of the current AIO implementation. Rather than start the I/O and return, the new io_submit() uses the syslet mechanism, eliminating a lot of special-purpose AIO code in the process. Zach's stated goal is to get rid of the internal kiocb structure altogether. The current code is more of a proof of concept, though, with a lot of details yet to fill in. Some benchmarks have been posted, though, as Zach says, "They haven't wildly regressed, that's about as much as can be said with confidence so far." It is worth noting that, with this patch, the kernel is able to do asynchronous buffered I/O through io_submit(), something which the mainline has never yet supported.
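For reference, this is roughly what submission through the existing io_submit() path looks like from user space, via the libaio wrappers (a minimal sketch, not taken from Zach's patch; the file name is arbitrary, error handling is pared down, and it builds with -laio). A buffered file like this one is exactly the case that mainline AIO has so far tended to complete synchronously rather than truly asynchronously.

```c
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    struct io_event ev;
    char buf[4096];

    if (io_setup(1, &ctx) != 0)                  /* create an AIO context */
        return 1;

    int fd = open("/etc/hostname", O_RDONLY);    /* ordinary buffered file */
    if (fd < 0)
        return 1;

    io_prep_pread(&cb, fd, buf, sizeof(buf), 0); /* describe an async read */
    if (io_submit(ctx, 1, cbs) != 1)             /* hand it to the kernel */
        return 1;

    io_getevents(ctx, 1, 1, &ev, NULL);          /* wait for the completion */
    printf("read %ld bytes\n", (long)ev.res);

    io_destroy(ctx);
    return 0;
}
```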
The biggest area of discussion, though, has been over Jeff Garzik's suggestion that the kevent code should be integrated with syslets. Some people like the idea, but others, including Ingo, think that kevents do not provide any sort of demonstrable improvement over the current epoll interface. Ulrich Drepper, the glibc maintainer, disagreed with that assessment, saying that the kevent interface was a step in the right direction even if it does not perform any better.
The reasoning behind that point of view is worth a look. The use of the epoll interface requires the creation of a file descriptor. That is fine when applications use epoll directly, but it can be problematic if glibc is trying to poll for events (I/O completions, say) that the application does not see directly. There is a single space for file descriptors, and applications often think they know what should be done with every descriptor in that space. If glibc starts creating its own private file descriptors, it will find itself at the mercy of any application which closes random descriptors, uses dup() without care, etc. So there is no way for glibc to use file descriptors independently from the application.
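A minimal sketch of that failure mode, with a hypothetical library standing in for glibc: the library quietly creates an epoll descriptor, and an application that assumes it owns the whole descriptor space closes it out from under the library.

```c
#include <sys/epoll.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int lib_epfd = -1;

/* Imagine this buried inside glibc or some other library. */
static void lib_init(void)
{
    lib_epfd = epoll_create(8);
}

int main(void)
{
    lib_init();

    /* A common application idiom: "clean up" every descriptor above
     * stderr, for example while daemonizing. */
    for (int fd = 3; fd < 1024; fd++)
        close(fd);

    /* The library's hidden descriptor is gone; its next call fails. */
    struct epoll_event ev;
    if (epoll_wait(lib_epfd, &ev, 1, 0) < 0)
        printf("library epoll_wait failed: %s\n", strerror(errno));
    return 0;
}
```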
Possible solutions exist, such as giving glibc a set of private, hidden descriptors. But Ulrich would rather just go with a memory-based interface which avoids the problem altogether. And Linus would rather not create any new interfaces at all. All told, it has the feel of an unfinished discussion; we'll be seeing it again.
Index entries for this article | |
---|---|
Kernel | Asynchronous I/O |
Kernel | Events reporting |
Kernel | Kevent |
Kernel | Syslets |
Posted May 31, 2007 12:29 UTC (Thu)
by RobSeace (subscriber, #4435)
[Link] (10 responses)
I must say, that FD argument seems very silly... Already, libc can (and does) create hidden file descriptors behind an app's back for various other reasons... (Eg: syslog()... And, libc isn't alone in doing it: eg. xlib talking to the X server...) This is hardly an unexpected thing to happen from an app's point of view... Any app thinking they have sole domain over all FDs, and no lib will ever create any behind its back, is a totally broken app, which is unlikely to work in normal usage anywhere... So, I can't see any merit at all to that sort of argument against an FD-based API... Plus, an FD-based API fits naturally with the standard Unixy way of doing things, and is easy for most Unix coders to grasp, so again I'm not seeing any valid argument against it based on it being FD-based...
Posted May 31, 2007 14:33 UTC (Thu)
by pflugstad (subscriber, #224)
[Link]
I also really like the whole signalfd and timerfd interface. I think it's much cleaner than some kind of memory interface and it fits in really well with FD process loop. I like being able to fold signals and timers into my normal select (or epoll) style interface.
Granted this is probably not a high performance setup, but I think you should always work out the best clean/correct interface, then make it perform fast, vs trying to work out a fast "ugly" interface. Signals and timers have always been the ugly stepchildren in the Unix environment, and this makes them feel more Unix-like. Now, if we could just get System V IPC to interact with FD's as well...
I also like that signalfd and timerfd interfaces would possibly be easily portable to other Unixen and even something like Cygwin (natural fit with win32's WaitFor functions? - I doubt it's even on the Cygwin people's radar at this point though). Just conjecture.
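A minimal sketch of the loop being described, folding a signal and a periodic timer into one epoll set. It uses the signalfd()/timerfd_create()/timerfd_settime() calls as they eventually stabilized, which differ in detail from the interfaces being discussed at the time.

```c
#include <sys/epoll.h>
#include <sys/signalfd.h>
#include <sys/timerfd.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    /* Signals handled through signalfd() must be blocked first. */
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);
    sigprocmask(SIG_BLOCK, &mask, NULL);
    int sfd = signalfd(-1, &mask, 0);

    /* A one-second periodic timer. */
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = { .it_value = { 1, 0 }, .it_interval = { 1, 0 } };
    timerfd_settime(tfd, 0, &its, NULL);

    /* Both descriptors go into the same epoll set as any socket would. */
    int ep = epoll_create(8);
    struct epoll_event ev = { .events = EPOLLIN };
    ev.data.fd = tfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);
    ev.data.fd = sfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, sfd, &ev);

    for (;;) {
        struct epoll_event out[2];
        int n = epoll_wait(ep, out, 2, -1);
        for (int i = 0; i < n; i++) {
            if (out[i].data.fd == tfd) {
                uint64_t expirations;
                read(tfd, &expirations, sizeof(expirations));
                printf("tick\n");
            } else if (out[i].data.fd == sfd) {
                struct signalfd_siginfo si;
                read(sfd, &si, sizeof(si));
                printf("got signal %u, exiting\n", si.ssi_signo);
                return 0;
            }
        }
    }
}
```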
Posted May 31, 2007 18:55 UTC (Thu)
by MisterIO (guest, #36192)
[Link]
I agree. Any app that assumes it owns the FD space is just asking to be borked. My first major unix application (in grad school) involved forking and exec'ing another process and using dup and friends to control I/O to it. I learned fast to check return codes and be careful with dup.
Posted May 31, 2007 20:05 UTC (Thu)
by vmole (guest, #111)
[Link] (7 responses)
> Any app thinking they have sole domain over all FDs, and no lib will ever create any behind its back, is a totally broken app,

Correct.

> which is unlikely to work in normal usage anywhere...

Unfortunately, this is not the case. They *do* work in normal usage almost everywhere. That's why they survive, because they don't break in the presence of two or three "unknown" descriptors. But when glibc starts chewing up many descriptors in a hidden/unexpected way, those apps will break. But guess who gets the blame? "My app works everywhere except with glibc on Linux 2.6.25, so it must be glibc/Linux which is broken." There's a whole long history of this kind of thing, and a whole long history of vendors (in a very general sense that includes free software developers) accommodating this kind of lossage. For example, why does C99 have the abomination "long long", even though 64-bit code could easily be accommodated by char/short/int/long? Because far too many people wrote code that assumed "long" was 32 bits, and the C compiler vendors didn't want to break that. (Well, and wanting to avoid breaking existing ABIs, which also seems outside the purview of a language standard, and could have been dealt with in better ways.) Who got screwed? Those who could read the C89 standard, and made no assumptions about "long", except what was *promised* in the C89 standard: "long is the largest integer type".

But I'm not bitter.
Posted May 31, 2007 21:49 UTC (Thu)
by RobSeace (subscriber, #4435)
[Link] (5 responses)
Well, I wouldn't really choose to complain about THAT particular example, personally... I think it would be kind of awkward to have "long" be 64-bits on a 32-bit system... Not to mention probably inefficient, since LOTS of stuff uses longs, and manipulating 64-bit ints on a 32-bit system has to be less efficient... With a separate "long long", people only use it when they need a potentially 64-bit value... Yes, it's a bit of a pain and not as clean as just using "long", but I can certainly see the logic in it, above and beyond just supporting people who write broken code assuming a 32-bit "long"...
And, the ABI issue you mentioned is a big deal-breaker, as well... HOW would you propose to solve that other than leaving "long" alone?? You can't just change all standard lib functions that used to take/return "long" to "int" (or some new typedef), because all existing code quite properly assumes they take/return a "long", since that's how they've always been defined... Plus, there's tons of non-standard third-party libs to think of, which would also be affected and which you could never hope to change all of... (On a side-note: am I the only one who hates the fact that various socket functions these days take stupid typedefs like "socklen_t", instead of the traditional "int"?? I wouldn't mind so much, but apparently that's being defined as "unsigned" instead of signed "int", which is what it's historically always been... Sure, unsigned makes more sense, in retrospect, but geez... And, now GCC complains about passing in a pointer to an "int" (which is how things have always been done) for stuff like accept()/getsockname()/etc., since it's not unsigned... ;-/ Yeah, you can disable it, thankfully, but still it might be a nice warning to leave enabled for OTHER stuff where it legitimately IS a mistake, but here it's a case of the API changing, which just isn't cool...)
> Those who could read the C89 standard, and made no assumptions about "long", except what was *promised* in the C89 standard: "long is the largest integer type".
Well, if you change it to "largest integer type native to the current platform", it still works... ;-) No, I know what you're saying... I'm old enough to remember the conversion from 16-bit systems to 32-bit; there, "long" was 32-bit, even though the system was 16-bit, so what you say certainly makes sense... I just don't really have a problem with "long long", personally...
The real fun is going to come if/when we ever go to 128-bit systems: I guess the only choice at that point will be to keep "long" 64-bit, and make "long long" the only 128-bit integer; or else, invent another new native type... Either choice is kind of ugly...
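On the socklen_t point above, a minimal sketch of what the current prototypes expect; declaring the length as a plain int and passing its address is what now draws the GCC warning.

```c
#include <sys/socket.h>
#include <netinet/in.h>

/* Accept one connection; the length argument must be a socklen_t now,
 * where it historically was an int. */
int accept_one(int listen_fd)
{
    struct sockaddr_in peer;
    socklen_t len = sizeof(peer);   /* "int len" here is what triggers the warning */
    return accept(listen_fd, (struct sockaddr *)&peer, &len);
}
```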
Posted May 31, 2007 22:24 UTC (Thu)
by vmole (guest, #111)
[Link] (4 responses)
I honestly don't remember what the alternative ABI solution was; I *think* it was better than "just recompile everything", but I don't have a reference to it now, and I'm not willing to re-read all of comp.std.c from that era, so maybe not. My main gripe is that the solution only broke *correct code*. Also, IMO, "long long" is ugly; it's the only core type that is two words.
Anyway, new code shouldn't use it. If you need an integer of a certain size, use the intN_t, int_leastN_t, or int_fastN_t typedefs in stdint.h, so that your code has a chance of working on past and future platforms, and doesn't break when someone flips on the ILP16 compiler switch.
I think that it's generally agreed socklen_t was misguided, causing more problems than it solved, but we're stuck with it now.
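A small illustration of the three stdint.h families mentioned above; the exact-width type may not exist on an exotic platform, while the least/fast variants always do, and the sizes printed are whatever the local ABI provides.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Exact width: exists only if the platform has a type of exactly 32 bits. */
    printf("int32_t:       %zu bytes\n", sizeof(int32_t));
    /* At-least width: always present, the smallest type with >= 32 bits. */
    printf("int_least32_t: %zu bytes\n", sizeof(int_least32_t));
    /* At-least width, but whatever the platform handles fastest. */
    printf("int_fast32_t:  %zu bytes\n", sizeof(int_fast32_t));
    return 0;
}
```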
Posted Jun 1, 2007 0:20 UTC (Fri)
by giraffedata (guest, #1954)
[Link] (3 responses)
> Anyway, new code shouldn't use it. If you need an integer of a certain size, use the intN_t, int_leastN_t, or int_fastN_t typedefs in stdint.h, so that your code has a chance of working on past and future platforms,

Unfortunately, you really have to go further than that to have a reasonable chance. Old systems don't have those types defined, or have them defined elsewhere than <stdint.h>. So you really have to use local types which you laboriously define to whatever types, long long or whatever, work on that system. I distribute some software used on a wide variety of systems, some quite old, and this has been a nightmare for me. The inability to test for the existence of a type at compile time, or redefine one, is the worst part.
It was wishful thinking of the original C designers that a vague type like "the longest integer available" would be useful. In practice, you almost always need a certain number of bits. Because such types were not provided, programmers did what they had to do: assume long or int is 32 bits.
Posted Jun 1, 2007 1:32 UTC (Fri)
by roelofs (guest, #2599)
[Link]
> Old systems don't have those types defined, or have them defined elsewhere than <stdint.h>. So you really have to use local types which you laboriously define to whatever types, long long or whatever, work on that system.

Yes, but fortunately there aren't any more of those, so you set up your own typedefs once (e.g., based on predefined macros) and you're done.

> I distribute some software used on a wide variety of systems, some quite old, and this has been a nightmare for me. The inability to test for the existence of a type at compile time, or redefine one, is the worst part.
Yup, been there, done that, got the scars. And I 100% agree (heh) that the failure to link typedefs to macros (or something else the preprocessor can test) was a massive mistake on the part of the standardization committee(s). "Let's see, now... It's an error to re-typedef something, so why don't we make such cases completely undetectable!"
Fortunately that's mostly water under the bridge at this point, though. And you can get pretty far on old systems by detecting them on the basis of macros. Back in the Usenet days I maintained a script called defines, which did a fair job of sniffing out such things (and also reporting native sizes), along with a corresponding database of its output. I think Zip and UnZip still use some of the results, though I don't know if any of those code paths have been tested in recent eras.
Greg
Posted Jun 1, 2007 18:38 UTC (Fri)
by vmole (guest, #111)
[Link] (1 responses)
This might help: Instant C99.
Yes, typedefs should be testable in the preprocessor. You certainly won't get any argument from me on that point :-) But for stdint, you can check __STDC_VERSION__ to determine whether to use your local version or the implementation-provided version.
A key point is that even if you do have to create your own defs, at least
name them after the stdint.h types, so that you can later switch without pain and not require other people looking at your code to learn yet another set of typedef names.
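A sketch of that arrangement as a project-local header (the header and guard names are hypothetical): use the real stdint.h when __STDC_VERSION__ says C99, and otherwise fall back to local definitions spelled like the standard names. The fallback widths are an assumption a real project would have to verify per platform, and, as the following comment points out, defining the standard names yourself carries its own risks.

```c
/* Hypothetical project header, "myint.h": use the real stdint.h when the
 * compiler claims C99, otherwise supply lookalike definitions. */
#ifndef MYPROJECT_MYINT_H
#define MYPROJECT_MYINT_H

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#include <stdint.h>
#else
/* Pre-C99 fallback.  These widths are an assumption (a typical ILP32
 * compiler with a 64-bit "long long" extension) and would have to be
 * verified per platform, e.g. by a configure script. */
typedef signed char        int8_t;
typedef short              int16_t;
typedef int                int32_t;
typedef long long          int64_t;
typedef unsigned char      uint8_t;
typedef unsigned short     uint16_t;
typedef unsigned int       uint32_t;
typedef unsigned long long uint64_t;
#endif

#endif /* MYPROJECT_MYINT_H */
```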
Posted Jun 2, 2007 18:56 UTC (Sat)
by giraffedata (guest, #1954)
[Link]
> A key point is that even if you do have to create your own defs, at least name them after the stdint.h types, so that you can later switch without pain

You have to do substantially more work if you want to do that, because you have to make sure nobody else defines the type. If you just do something as simple as checking __STDC_VERSION__, you can't then do a typedef of uint32_t, because it might be defined even though the environment is not totally C99.
And if it's part of an external interface, you surely have no right to define as generic a name as uint32_t. It could easily conflict with header files from other projects that had the same idea.
The "switching" that I think is most important is where someone extracts your code for use in a specific environment where uint32_t is known to be defined. That's why I do all that extra work to be able to use uint32_t (and I don't claim that I've got it right yet) instead of a private name for the same thing.
Posted Jun 11, 2007 9:19 UTC (Mon)
by forthy (guest, #1525)
[Link]
> except what was *promised* in the C89 standard: "long is the largest integer type".

Or like GCC promised that "long long" is twice as long as "long", and broke the promise when they ported GCC to the first 64-bit architecture (MIPS). Now, if you are lucky, you can use typedef int int128_t __attribute__((__mode__(TI))); to create a real 128-bit type on some 64-bit platforms. There are only two choices: sanity or backward compatibility with idiots. The idiots are the majority, and they always win.
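A compilable sketch of that GCC extension (the type and function names here are arbitrary); it only builds on targets where GCC provides the 128-bit TImode integer, typically 64-bit ones.

```c
/* The GCC mode attribute mentioned above; TImode is a 128-bit integer
 * on targets that support it. */
typedef int int128 __attribute__((__mode__(TI)));

/* 64x64 -> 128-bit multiply without losing the high bits. */
int128 mul64(long a, long b)
{
    return (int128)a * b;
}
```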
Posted Jun 2, 2007 10:24 UTC (Sat)
by kleptog (subscriber, #1183)
[Link]
Drool....