LWN: Comments on "A perf ABI fix"
http://lwn.net/Articles/567894/
This is a special feed containing comments posted
to the individual LWN article titled "A perf ABI fix".
More perf bitfield fun
http://lwn.net/Articles/569337/rss
2013-10-03T16:04:45+00:00 by deater
<div class="FormattedComment">
Other bitfields in the perf interface continue to cause trouble.<br>
See this recent proposed patch: <a href="https://lkml.org/lkml/2013/8/10/154">https://lkml.org/lkml/2013/8/10/154</a><br>
that tries to sanely provide access to perf_mem_data_src on both big and little endian systems.<br>
<p>
There's got to be a better way of doing this, but it's likely too late.<br>
</div>
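<p>(For reference, the approach in that patch follows the kernel's usual endian-conditional bitfield convention. A minimal sketch of the idea, with abbreviated field widths rather than the real perf_mem_data_src layout:)</p>
<pre>
/* Kernel context assumed: __u64 from linux/types.h,
 * __LITTLE_ENDIAN_BITFIELD from asm/byteorder.h. */
union mem_data_src {
	__u64 val;
	struct {
#if defined(__LITTLE_ENDIAN_BITFIELD)
		__u64 mem_op:5,    /* type of opcode */
		      mem_lvl:14,  /* memory hierarchy level */
		      mem_rsvd:45;
#else /* __BIG_ENDIAN_BITFIELD: same bits, declared in reverse */
		__u64 mem_rsvd:45,
		      mem_lvl:14,
		      mem_op:5;
#endif
	};
};
</pre>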
A perf ABI fix
http://lwn.net/Articles/569301/rss
2013-10-03T11:38:12+00:00 by heijo
<div class="FormattedComment">
cap_bit0 needs to be set to (cap_user_time && cap_user_rdpmc).<br>
<p>
Setting it always to zero is idiotic and degrades older applications...<br>
<p>
Stop pushing crap into the kernel.<br>
<p>
</div>
Need for a special ABI/API team?
http://lwn.net/Articles/568924/rss
2013-09-30T15:36:59+00:00 by proski
Perhaps all ABI changes should be vetted by a person or a group of persons who would go through a checklist and test the change.
<p>
The Linux kernel is too big to rely solely on bright minds, who can devise new ideas but cannot be tasked with checking code against existing rules.
Good one
http://lwn.net/Articles/568598/rss
2013-09-27T12:05:27+00:00 by etienne
<div class="FormattedComment">
<font class="QuotedText">> opposite ways on little endian and big endian systems</font><br>
<p>
On my side of the world, you have little-endian systems and bi-endian systems: the processor may be big-endian, but then it always has to interact with at least one little-endian subsystem (it could be as simple as a PCI card; more usually it is most subsystems).<br>
Then they added stuff at the virtual memory layer to describe a memory-mapped area as either little- or big-endian, which solves only a small part of the problem: a two-bit field still increments as 0b00, 0b10, 0b01, 0b11.<br>
Then big-endian processors sort of disappeared.<br>
<p>
I still prefer:<br>
struct status {<br>
#ifdef LITTLE_ENDIAN<br>
unsigned b1:1, b2:1, b3:1, unused:29;<br>
unsigned xx;<br>
#else<br>
unsigned xx;<br>
unsigned unused:29, b3:1, b2:1, b1:1;<br>
#endif<br>
};<br>
over the 40 equivalent lines of #defines, if I have a lot of those status registers.<br>
<p>
</div>
A perf ABI fix
http://lwn.net/Articles/568594/rss
2013-09-27T11:33:30+00:00 by etienne
<div class="FormattedComment">
<font class="QuotedText">> require defining two or three more macros</font><br>
<p>
In that case, the 10,000-odd lines of #defines are automatically generated by some TCL command nobody is really interested in reading, while "compiling" the VHDL.<br>
As a software engineer you have the choice either to use that file or not to use it; and if you do not use it, what do you replace it with?<br>
For me, having an array of 2048 structures, each of them containing a hundred different control/status bits and a few read and write buffers, fully memory-mapped and with most areas not even declared volatile, leads to source code ten times smaller with a lot fewer bugs.<br>
Obviously my knowledge of the preprocessor is sufficient to use the 10,000-line file and "concat" names to counters in macros to access all the defines, if my employer wants that. I can do so for the 20 different parts of the VHDL chip, on each of the chips.<br>
Note that there is always an exception to every rule, and someone will, in the future, modify the TCL-generated file.<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568583/rss
2013-09-27T09:54:43+00:00 by mpr22
<blockquote>What I am saying is that ten lines of #define to write a memory map register do not scale; once the single block works, FPGA teams just put 2048 of them on one corner of the FPGA.</blockquote>
<p>It seems to me that dealing with an FPGA containing 2048 instances of the same functional block <em>should</em> only require defining two or three more macros than dealing with an FPGA containing one instance of that block. If it doesn't... you need to have a quiet word or six with your FPGA teams about little things like "address space layout".</p>
A perf ABI fix
http://lwn.net/Articles/568571/rss
2013-09-27T09:25:58+00:00 by etienne
<div class="FormattedComment">
I also work with hardware, but mine may be working better.<br>
Maybe FPGAs work better; at least read/write issues are dealt with by the VHDL teams.<br>
What I am saying is that ten lines of #define to write a memory map register do not scale; once the single block works, FPGA teams just put 2048 of them on one corner of the FPGA.<br>
Then most of the errors you find are that the wrong "ENABLE_xx" mask has been used with a memory-mapped register, or that someone defined<br>
#define FROBNICATE_1 xxx<br>
#define FROBNICATE_2 xxx+2<br>
...<br>
#define FROBNICATE_256 xxx+510<br>
but failed to increment at (only) FROBNICATE_42.<br>
<p>
When using C-described memory-mapped registers (with a volatile struct of bitfields), you can read a single bit directly (knowing that the compiler will read the struct once and extract the bit), but when you want to access multiple bits you read the complete volatile struct into a locally declared (non-volatile) struct of the same type.<br>
If you want to modify and write, you do it on your locally declared struct and write the complete struct back.<br>
The reading and writing of volatiles appears clearly in the source, and you can follow it on your analyser, but the compiler is still free to optimize any treatment of the non-volatile structs.<br>
</div>
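<p>(A minimal sketch of the read/copy/modify/write pattern described above; the register layout and address are made up for illustration:)</p>
<pre>
struct status {
	unsigned enable:1, irq_pending:1, mode:2, unused:28;
};

#define STATUS_REG (*(volatile struct status *)0xFD000000)

void set_mode(unsigned mode)
{
	struct status local = STATUS_REG; /* one volatile read of the whole word */

	local.mode = mode;   /* compiler may optimize the non-volatile copy freely */
	local.irq_pending = 0;
	STATUS_REG = local;  /* one volatile write back */
}
</pre>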
A perf ABI fix
http://lwn.net/Articles/568511/rss
2013-09-26T20:25:32+00:00 by ncm
I despair.
<p>
Might such travesties have led Brian Kernighan to say that Linux kernel code was even worse than Microsoft NT kernel code he had seen?
<p>
At X.org, they take long-time brokenness of a feature to demonstrate that the feature is unused and may be eliminated. That would not be inappropriate in this case. If the feature is expected to be useful in the future, the sensible approach is to design another interface <i>with another name</i>, and leave the busted one the hell alone.
A perf ABI fix
http://lwn.net/Articles/568505/rss
2013-09-26T20:13:25+00:00 by ncm
What mpr said. Further, any use of bitfields to control hardware makes the driver non-portable to any other architecture. Further further, there is no way to know, ABI notwithstanding, how any particular compiler version will implement a series of bitfield operations, so use of bitfields makes your driver code non-portable even to the next release of the same compiler.
<p>
Categorically, there is <i>never any excuse</i> to use bitfields to operate hardware registers. Use of bitfields in a driver is a marker of crippling incompetence. Publishing code written that way will blight your career more reliably than publishing designs for amateur road mines.
A perf ABI fix
http://lwn.net/Articles/568501/rss
2013-09-26T20:06:47+00:00 by pr1268
<p>Perhaps I shouldn't have said "Seriously"... My facetiousness extended to the second part of my original comment. Not to mention a typo: s/<tt>__uu64</tt>/<tt>__u64</tt>/. Of course, I <i>could</i> simply do a <tt>typedef __u64 __uu64;</tt> and <i>voilà!</i> Typo gone. :-D</p>
<p>I'm actually intrigued that some above mention that using bitfields is perhaps preferable to preprocessor macros. I was under the impression (based on my 2003-2005 undergraduate CS education) that they're frowned upon. As are <tt>union</tt>s. (Personally, I'm not bothered by either; I have used bitfields and unions, even very recently, in code I've written for demonstrating <a href="http://babbage.cs.qc.cuny.edu/IEEE-754/">IEEE-754</a> floating point representation in binary. A quick look at <tt>/usr/include/ieee754.h</tt> will show lots of bitfields.)</p>
<p>P.S.1: Even COBOL has a union programming structure (the <tt>REDEFINES</tt> keyword).</p>
<p>P.S.2: I <i>do</i> think the Perf developers' solution is quite elegant. Well done, folks!</p>
Good one
http://lwn.net/Articles/568457/rss
2013-09-26T16:17:41+00:00 by deater
<div class="FormattedComment">
One thing not really addressed is how bitfields run opposite ways on little endian and big endian systems.<br>
<p>
Not a problem in most cases, but perf_event describes some bitfields<br>
such as struct perf_branch_entry that get written to disk directly.<br>
<p>
So if you record a session, then move it to an opposite-endian machine and try to read it back in, you have problems.<br>
</div>
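<p>(A self-contained way to see the problem, loosely modeled on the flag bits of struct perf_branch_entry; dump the raw bytes and compare across machines. The set bit lands in a different position depending on the bitfield allocation order:)</p>
<pre>
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

struct entry { unsigned long long mispred:1, predicted:1, reserved:62; };

int main(void)
{
	struct entry e = { .mispred = 1 };
	unsigned char raw[sizeof e];
	size_t i;

	memcpy(raw, &e, sizeof e);
	for (i = 0; i < sizeof e; i++)
		printf("%02x", raw[i]);  /* e.g. 01... on x86, 80... with MSB-first allocation */
	printf("\n");
	return 0;
}
</pre>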
Good one
http://lwn.net/Articles/568412/rss
2013-09-26T12:56:29+00:00 by khim
This will only work for read, not for writes. And even then only if <b>int</b> and not <b>_Bool</b> is used.
A perf ABI fix
http://lwn.net/Articles/568409/rss
2013-09-26T12:40:43+00:00 by mpr22
<p>Yes, you're describing exactly the situation I'm implying with my comment.</p>
<p>I've worked with hardware a lot. I've worked with hardware that has default settings useful to exactly no-one. I've worked with hardware that sometimes fails to assert its interrupt output and then won't attempt to assert an interrupt again until the interrupt it didn't assert has been serviced. I've worked with hardware with complex functional blocks that were pulled in their entirety from a previous device, but only half-documented in the new device's manual. I've worked with hardware with read-to-clear status bits, hardware with write-zero-to-clear status bits, hardware with write-one-to-clear status bits, and hardware with combinations of those.</p>
<p>Thanks to that, I've spent enough time staring at bus analyser traces that I have come to appreciate code of the form "read register at offset X from BAR Y of PCI device Z; compose new value; write register at offset X from BAR Y of PCI device Z", because I can directly correlate what I see on the analyser to what I see in the code - and, even better, I can quickly tell when what I see on the analyser <em>doesn't</em> correlate to what I see in the code.</p>
<p>Most hardware isn't bit-addressable. Bitfields in device drivers look an awful lot like a misguided attempt to make it look like it is.</p>
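<p>(The style being praised, sketched for a hypothetical device; the register offset and bit are invented. Every bus access is explicit, so each line maps to one analyser event:)</p>
<pre>
#include &lt;stdint.h&gt;

#define REG_CTRL   0x10        /* hypothetical register offset */
#define CTRL_START (1u << 3)   /* hypothetical control bit */

static inline uint32_t reg_read(volatile uint32_t *bar, uint32_t off)
{
	return bar[off / 4];   /* one read on the bus */
}

static inline void reg_write(volatile uint32_t *bar, uint32_t off, uint32_t val)
{
	bar[off / 4] = val;    /* one write on the bus */
}

void start_device(volatile uint32_t *bar)
{
	uint32_t v = reg_read(bar, REG_CTRL); /* read register */
	v |= CTRL_START;                      /* compose new value */
	reg_write(bar, REG_CTRL, v);          /* write register */
}
</pre>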
This is the "right kind of version number"
http://lwn.net/Articles/568407/rss
2013-09-26T11:53:25+00:00 by davecb
<div class="FormattedComment">
An elegant approach!<br>
<p>
It's a lot easier in modern programming languages, where a new variant can be introduced by adding a parameter. For common cases, this can hide the need for versioning and future-proofing from the developer.<br>
<p>
Unless, of course, you're making a change from an absolute date to a relative one, both expressed as an integer (:-()<br>
<p>
--dave<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568402/rss
2013-09-26T11:51:52+00:00 by etienne
<div class="FormattedComment">
<font class="QuotedText">> bitfield, I wonder why the author didn't just set up an unsigned char</font><br>
<p>
Well, I was talking about describing the hardware, for instance a PCIe memory-mapped window which controls complex behaviour.<br>
I do not like to see stuff like:<br>
fpga.output_video.channel[3].sound.dolby.volume = 45;<br>
expressed with #defines:<br>
#define FPGA ((volatile unsigned char *)0xFD000000)<br>
#define OUTPUT_VIDEO (FPGA + 0x10000)<br>
#define CHANNEL (OUTPUT_VIDEO + 0x100)<br>
#define SIZEOF_CHANNEL 0x20<br>
#define OUTPUT_VIDEO_CHANNEL(n) (CHANNEL + ((n) * SIZEOF_CHANNEL))<br>
#define SET_SOUND_DOLBY_VOLUME(channel, v) ((stuff1 & stuff2) << 12) ... etc...<br>
<p>
For code unrelated to hardware, and not mapped to a fixed format (like, for instance, the structure of an Ethernet frame), using bitfields is a lot less important.<br>
<p>
</div>
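<p>(A sketch of the struct-based description being argued for; the layout, field names and base address are invented, sized so the channels land at the same offsets as the macros above:)</p>
<pre>
#include &lt;stdint.h&gt;

struct dolby    { uint32_t volume:8, mode:4, reserved:20; };
struct sound    { struct dolby dolby; uint32_t gain; };
struct channel  { struct sound sound; uint32_t pad[6]; };   /* 0x20 bytes */
struct video    { uint8_t regs[0x100]; struct channel channel[16]; };
struct fpga_map { uint8_t pad[0x10000]; struct video output_video; };

#define fpga (*(volatile struct fpga_map *)0xFD000000)

void example(void)
{
	fpga.output_video.channel[3].sound.dolby.volume = 45;
}
</pre>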
This is the "right kind of version number"
http://lwn.net/Articles/568401/rss
2013-09-26T11:11:25+00:00 by jnareb
<div class="FormattedComment">
There was a similar situation that the Git DVCS developers faced when adding new features to its network protocol. The first version was not designed with extensibility in mind, but because the exchange was done with pkt-lines, with the length as part of the payload, and the original parsing stopped at the NUL ("\0") character, they shoe-horned information about extensions ('capabilities', this time in an extensible space-separated list-of-capabilities format) in after the NUL character; old clients skip the capabilities list, new clients parse it and reply with which they want to use.<br>
<p>
Backward compatibility has been preserved, with very few exceptions, for server-client transfer throughout the whole existence of Git.<br>
</div>
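<p>(A sketch of the parsing trick described above: a ref-advertisement payload looks roughly like "SHA refs/heads/master\0multi_ack side-band-64k\n". An old client stops at the NUL; a new one reads the capability list after it. The payload shown is illustrative, not a real exchange:)</p>
<pre>
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

void parse_ref_line(const char *payload, size_t len)
{
	const char *nul = memchr(payload, '\0', len);

	if (nul && nul + 1 < payload + len)
		printf("capabilities: %.*s\n",
		       (int)(payload + len - (nul + 1)), nul + 1);
	/* the ref advertisement itself is payload..nul (or the whole
	 * payload if there is no NUL, as an old server would send it) */
}
</pre>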
Good one
http://lwn.net/Articles/568399/rss
2013-09-26T11:05:42+00:00 by etienne
<div class="FormattedComment">
<font class="QuotedText">> Wouldn't that break the ABI?</font><br>
<p>
No: instead of reading the 2nd bit of an unaligned byte, the compiler emits code to read bit (2+8) of an aligned word. Bits stay in the same place.<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568397/rss
2013-09-26T10:40:13+00:00 by mpr22
<p>Any time I see a :1 bitfield, I wonder why the author didn't just set up an unsigned char/short/int/long/long long and define compile-time constants for the bit(s).</p>
<p>Any time I see a :n (n > 1) bitfield, I wonder what makes the author simultaneously believe that (a) it's important to squash that value into a bitfield instead of just using an int*_t or uint*_t (b) it's not important for people to be able to look at the code and predict what it will do.</p>
<p>(And any time I see a bitfield without an explicit signedness specifier, I wonder if I can revoke the author's coding privileges.)</p>
A perf ABI fix
http://lwn.net/Articles/568379/rss
2013-09-26T09:38:41+00:00 by etienne
<div class="FormattedComment">
<font class="QuotedText">> All the operations defined above can be expressed directly in type-checked C</font><br>
<p>
C cannot have functions with bitfield parameters (i.e. a parameter of 3 bits), so the simple bitfield version:<br>
struct { unsigned dummy : 3; } a_var;<br>
void fct (void) { a_var.dummy = 9; }<br>
generates a warning (gcc-4.6.3):<br>
large integer implicitly truncated to unsigned type [-Woverflow]<br>
<p>
The equivalent in C is:<br>
unsigned avar;<br>
extern inline void WARN_set_dummy_too_high(void) {<br>
//#warning set_dummy value too high<br>
char overflow __attribute__((unused)) = 1024; // to get a warning<br>
}<br>
inline void set_dummy(unsigned val) {<br>
if (__builtin_constant_p(val) && (val & ~0x7))<br>
WARN_set_dummy_too_high();<br>
avar = (avar & ~0x7) | (val & 0x7);<br>
}<br>
void fct (void) { set_dummy(9); }<br>
<p>
If the bitfield is signed, the C function gets even more complex, and prone to off-by-one bugs.<br>
<p>
I have seen so much crap with #defines (files with 10000+ #define lines, with bugs) that I would say bitfields are the future... let the compiler manage bits and bytes and let the linker manage addresses.<br>
</div>
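<p>(The signed case alluded to above, sketched for a signed 3-bit field with valid range -4..3; the sign has to be re-extended on extraction, which is where the off-by-one traps live:)</p>
<pre>
static unsigned avar;

static inline void set_sdummy(int val)
{
	/* a range check like the WARN trick above would test val < -4 || val > 3 */
	avar = (avar & ~0x7u) | ((unsigned)val & 0x7u);
}

static inline int get_sdummy(void)
{
	int v = avar & 0x7;

	return (v ^ 0x4) - 0x4;  /* sign-extend bit 2: 7 -> -1, 4 -> -4, 3 -> 3 */
}
</pre>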
Good one
http://lwn.net/Articles/568385/rss
2013-09-26T09:30:47+00:00 by khim
Wouldn't that break the ABI? Long-term this may be a good idea, but short-term it'll be quite a problem.
Good one
http://lwn.net/Articles/568377/rss
2013-09-26T08:51:59+00:00 by etienne
<div class="FormattedComment">
<font class="QuotedText">> bitfield ... insistence on aligned memory accesses</font><br>
<p>
There isn't any relation between bitfields and alignment, so long-term it would probably be better to fix the compiler than to fix a few random source files...<br>
</div>
This is the "right kind of version number"
http://lwn.net/Articles/568330/rss
2013-09-26T01:26:54+00:00 by davecb
<div class="FormattedComment">
Literal version numbers are what most people use, but they need not be that simple-minded. The pre-IP ARPANET also used a one-bit version number, according to an old colleague.<br>
<p>
This is also an elegant solution to the "how do I introduce versioning" problem, exactly as was faced by the RCS developers when they first had to introduce an incompatible change. Something that's at least physically there (albeit not always "logically" there) gets used as the indicator, and everything thereafter can have as wide a version number as it needs. <br>
<p>
If this structure only changes every 10-20 years, a one bit width will probably do for all time (;-))<br>
<p>
See also Paul Stachour's paper at <a href="http://cacm.acm.org/magazines/2009/11/48444-you-dont-know-jack-about-software-maintenance">http://cacm.acm.org/magazines/2009/11/48444-you-dont-know...</a> for a more conventional worked example.<br>
<p>
--dave (who edited Paul's paper) c-b<br>
<p>
</div>
A perf ABI fix
http://lwn.net/Articles/568324/rss
2013-09-26T00:19:59+00:00 by ncm
<div class="FormattedComment">
Gcc and Clang both support C99 inline functions. All the operations defined above can be expressed directly in type-checked C, without preprocessor macros, with identical runtime performance.<br>
<p>
There are still places for CPP macros, but this isn't one of them.<br>
</div>
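<p>(Applied to the macros a few comments down, that would look something like this; the constants are kept as in that tongue-in-cheek example:)</p>
<pre>
#include &lt;stdbool.h&gt;
#include &lt;stdint.h&gt;

#define CAP_USR_TIME  (1ULL << 63)
#define CAP_USR_RDPMC (1ULL << 62)

static inline bool has_cap_usr_time(uint64_t caps)    { return caps & CAP_USR_TIME; }
static inline bool has_cap_usr_rdpmc(uint64_t caps)   { return caps & CAP_USR_RDPMC; }
static inline void set_cap_usr_time(uint64_t *caps)   { *caps |= CAP_USR_TIME; }
static inline void unset_cap_usr_time(uint64_t *caps) { *caps &= ~CAP_USR_TIME; }
</pre>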
A perf ABI fix
http://lwn.net/Articles/568302/rss
2013-09-25T20:56:25+00:00 by geofft
<div class="FormattedComment">
Isn't it wonderful that we're writing our kernel in a language where preprocessor macros can defensibly be called "beautiful" by comparison to the alternative?<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568293/rss
2013-09-25T19:55:09+00:00 by pr1268
<p>I'm reminded of this gem from some GNU humor Web page I read a few years ago:</p>
<pre>
#define struct union
</pre>
<p>I'm not sure that would fix the perf ABI mess, though. ;-)</p>
<p>Seriously, though, why use the bit fields at all? Why not:</p>
<pre>
__uu64 capabilities;
#define CAP_USR_TIME (1ULL<<63)
#define CAP_USR_RDPMC (1ULL<<62)
#define HAS_CAP_USR_TIME(x) ((x)&CAP_USR_TIME)
#define HAS_CAP_USR_RDPMC(x) ((x)&CAP_USR_RDPMC)
#define SET_CAP_USR_TIME(x) ((x)|=CAP_USR_TIME)
#define SET_CAP_USR_RDPMC(x) ((x)|=CAP_USR_RDPMC)
#define UNSET_CAP_USR_TIME(x) ((x)&=~CAP_USR_TIME)
#define UNSET_CAP_USR_RDPMC(x) ((x)&=~CAP_USR_RDPMC)
</pre>
<p>Now you have the full complement of query, set, and unset operations in beautiful preprocessor code.</p>
Good one
http://lwn.net/Articles/568291/rss
2013-09-25T19:20:08+00:00 by mathstuf
<div class="FormattedComment">
<font class="QuotedText">> Don't go by the example RDPMC code in perf_event.h, it's out of date and possibly never really worked. I've been meaning to send a patch to fix that.</font><br>
<p>
Could a patch which replaces it with "TODO: Add an example" (or similar) be pushed for 3.12 at least? If there's anything worse than no documentation, it's bad documentation.<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568248/rss
2013-09-25T15:34:58+00:00 by jfasch
<div class="FormattedComment">
I personally like the positiveness of the article, and that of the entire LWN site.<br>
</div>
Good one
http://lwn.net/Articles/568239/rss
2013-09-25T13:39:01+00:00 by deater
<div class="FormattedComment">
<font class="QuotedText">> Are you using some library or doing this directly? I'd like to do the same</font><br>
<font class="QuotedText">> thing, but the API seems to be (intentionally) poorly documented.</font><br>
<p>
I'm currently doing the RDPMC accesses directly. The eventual goal is to have the PAPI performance library use the interface; there are overhead issues with the interface I was dealing with first (sometimes it is slower to use RDPMC than to just use the read() syscall, for reasons that took me a long time to figure out. Thankfully there are workarounds).<br>
<p>
In any case yes, the documentation is awful. I wrote the perf_event_open() manpage in an attempt to address this. I've been working on updating the RDPMC part of that recently, although had to spend time trying to sanely document this ABI issue instead.<br>
<p>
Don't go by the example RDPMC code in perf_event.h, it's out of date and possibly never really worked. I've been meaning to send a patch to fix that.<br>
</div>
Good one
http://lwn.net/Articles/568234/rss
2013-09-25T12:31:46+00:00 by busterb
<div class="FormattedComment">
I used to think bitfields were neat, until I found out how badly they performed on an embedded MIPS.<br>
<p>
Switching from dereferencing a bitfield to a plain (flags & FLAG) test was generally a 20-30% speedup on inner loops on an ISA like MIPS, due to its insistence on aligned memory accesses. Similar thing with the 'packed' GCC attribute.<br>
</div>
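<p>(The two styles being compared, sketched; the struct and FLAG_READY are invented. On a strict-alignment ISA, the packed struct's alignment of 1 forces the compiler to assume unaligned access and emit byte-by-byte loads, while the mask test is a single load plus an AND:)</p>
<pre>
#include &lt;stdint.h&gt;

struct pkt_flags {
	uint32_t ready:1, urgent:1, unused:30;
} __attribute__((packed));

#define FLAG_READY 0x1u

int test_bitfield(const struct pkt_flags *p) { return p->ready; }
int test_mask(uint32_t flags)                { return flags & FLAG_READY; }
</pre>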
Good one
http://lwn.net/Articles/568221/rss
2013-09-25T09:19:24+00:00 by luto
<div class="FormattedComment">
Are you using some library or doing this directly? I'd like to do the same thing, but the API seems to be (intentionally) poorly documented.<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568200/rss
2013-09-25T04:18:03+00:00 by iabervon
<div class="FormattedComment">
Reading the old code, it looks like bit 0 was *actually* true if either capability was available. So you could leave bit 0 with that behavior, have bit 1 indicate one capability and bit 2 indicate the other. Then you've got the following properties:<br>
<p>
Old binary, new kernel: same as old kernel, buggy but not a regression.<br>
New binary, new kernel: works correctly.<br>
New binary, old kernel, no code change: doesn't use either feature, but the system might not have whichever feature you're actually interested in, so it's safer.<br>
New binary, old kernel, extra code: if bit 0 is set, but neither other bit is set, you know that the info is unreliable; if bit 0 is not set, you know the system has neither feature.<br>
<p>
The only possible regression is that a new build with only the old API and an old kernel, which explicitly tests for a feature, would no longer have its test subverted; it would no longer use a feature that might happen to work when there's no way to tell.<br>
<p>
If you interpret the old ABI as "kernel will only set the bit if it is definitely making the feature available", this wouldn't be an ABI change, in that the new code would conform to that ABI at least as well as the old code did.<br>
</div>
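<p>(How a new binary might consume that proposed layout; the field names are made up for this sketch, not the actual ABI:)</p>
<pre>
struct caps { unsigned bit0:1, bit1_user_time:1, bit2_user_rdpmc:1; };

int can_use_rdpmc(const struct caps *c)
{
	if (c->bit2_user_rdpmc)
		return 1;  /* new kernel explicitly advertising the feature */
	/* bit0 alone may be an old kernel's unreliable OR-ed bit, and no
	 * bits at all means neither feature: play it safe either way */
	return 0;
}
</pre>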
Good one
http://lwn.net/Articles/568199/rss
2013-09-25T03:41:16+00:00 by deater
<div class="FormattedComment">
Also I should probably disclose that I'm the Vince Weaver who apparently has become famous for being grumpy about the perf_event ABI.<br>
<p>
In this case I was grumpy because the initial Changelog for the structure re-arrangement did not mention anything at all about the ABI implications or the bit overlap.<br>
<p>
It was only by luck that I noticed this issue, because I had updated the perf_event.h header in my perf_event_tests testsuite to 3.12-rc1 but had rebooted back to 3.11 for other reasons. If I hadn't done that it's likely no one would have noticed this issue until after the 3.12 release.<br>
<p>
Not that it matters a lot though, as I'm possibly the only person in the world actually using RDPMC for anything right now. It's used by the High Performance Computing people for low-latency self monitoring, but the perf tool doesn't use the interface at all.<br>
<p>
</div>
Good one
http://lwn.net/Articles/568197/rss
2013-09-25T03:19:50+00:00 by deater
<div class="FormattedComment">
The perf_event interface is full of bitfields for reasons I don't fully understand.<br>
<p>
To make things more fun, there are proposals in the works to export the bit offsets in these bitfields (specifically the ones in struct perf_event_attr) via /sys so that the kernel can export event configs to the perf tool more "efficiently". I personally think this will only end in tears, especially once endianness is factored in.<br>
</div>
Good one
http://lwn.net/Articles/568180/rss
2013-09-24T23:54:26+00:00 by ncm
<div class="FormattedComment">
Without tracing the kernel discussion thread, I don't know if the rich vein of humor in this event has been fully worked out, but I don't see how it ever can be.<br>
<p>
The error was not to have put bitfields in a union, the error was to have put bitfields in at all. I gather that the C committee has considered deprecating bitfields so that any header using them will elicit warnings. In the meantime, we depend upon ridicule, ostracism, and the quirky mis-implementation of bitfields in every C compiler ever. Surely all suggestions to add even more bitfields were offered tongue-in-cheek? We can but hope.<br>
<p>
As an abstract feature, bitfields are acknowledged to have an eldritch appeal, like ear tufts, 5-cm-thick toenails, or webbed fingers, but (fair warning!) anyone who speaks up for _using_ bitfields must prepare to be taunted.<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568167/rss
2013-09-24T21:16:14+00:00 by khim
<blockquote><font class="QuotedText">Why is the API break required? If the fields were not renamed from usr to user the same code would compile under both.</font></blockquote>
<p>And <b>that</b> is exactly the problem: now you can have a code which can be compiled with old headers and new headers but which will only work if old headers are used. Not fun. It's <b>much</b> better to introduce explicit API breakage in such cases.</p>
<p>You see, API and ABI are different. APIs are used by programmers when they write programs, and if they are changed (subtly or not so subtly) then the best way to communicate the problem is to introduce deliberate breakage (the programmer will fix the problem or will use the old version of the headers). ABI breakage is handled by the end-user (or a system administrator who's only marginally more clueless than the end-user), and they don't have any sane means of handling it. Instead they will just change random stuff around till the damn thing starts.</p>
A perf ABI fix
http://lwn.net/Articles/568163/rss
2013-09-24T20:55:15+00:00 by cuviper
<div class="FormattedComment">
Perhaps a decent compromise would have bit0 = (time && rdpmc). This way, old userspace can keep its full performance advantage when both bits really are true, but it will never have a misinterpretation when only one was supposed to be set.<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568164/rss
2013-09-24T20:50:43+00:00 by smurf
<div class="FormattedComment">
You need to check the _bit0* fields to determine the kernel's version of this interface, so you have to code for the new struct anyway.<br>
<p>
Old code would probably compile, but it shouldn't -- it should use the new API. The rename makes sure of that.<br>
</div>
A perf ABI fix
http://lwn.net/Articles/568157/rss
2013-09-24T20:12:19+00:00 by kugel
<div class="FormattedComment">
Why is the API break required? If the fields were not renamed from usr to user the same code would compile under both.<br>
<p>
As for keeping the ABI compatible: can bit0 not still have the same buggy value that it has in the old code, instead of always-zero?<br>
</div>