Null pointers, one month later
C programmers normally expect that an attempt to dereference a null (zero) pointer will result in a hardware exception which, in turn, causes the program to crash. This happens not because there is anything special about a pointer containing zero, but because the trick of not mapping valid memory at the bottom of the virtual address space has been known and used for decades. If no valid memory is mapped near address zero, the hardware will trap attempts to access memory in that range; that includes attempts to dereference null pointers. It is a useful setup which minimizes the damage caused by misuse of null pointers.
The only problem is that, in the kernel environment, there is no guarantee that no valid memory is mapped at the bottom of the address space. The default is to not map anything there, but applications can request, via the mmap() system call, that the lowest addresses be made valid. So the null pointer address can be made to point to real memory, and this can happen entirely under the control of user space. User-space addresses remain valid when running in the kernel, so, if the kernel can be made to dereference a null pointer, it will be accessing user-controlled memory. Should the kernel try to jump to a null pointer, it will be running user-controlled code directly. Needless to say, this sequence of events would not be good for the security of the system.
After the July disclosure, it was reasonably evident that more null-pointer vulnerabilities had to exist in the kernel. Such bugs are easy to create; all that is required is a missing initialization. A new function pointer added to a structure will be silently set to null by the compiler in every declaration which does not include an explicit initialization for that pointer. Kernel programmers may be diligent about checking for null pointers, but they are human and will miss an occasional check. At times, these checks have been actively discouraged on the reasoning that dereferencing the pointer would, by virtue of oopsing the kernel, provide the same information as the check. For all of these reasons, one must conclude that there will be other situations in which the kernel can be tricked into dereferencing null pointers.
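The "missing initialization" failure mode is easy to reproduce in plain C. In this sketch (the structure and member names are made up, in the style of the kernel's operations tables, not actual kernel code), the newly added function pointer is silently zeroed because the designated initializer omits it:

```c
#include <stddef.h>

/* Hypothetical ops table. Any member left out of a designated
 * initializer is implicitly zero-initialized by the compiler,
 * i.e. it becomes a NULL function pointer. */
struct my_ops {
    int (*open)(void);
    int (*sendpage)(void);   /* added later; easy to forget */
};

static int my_open(void) { return 0; }

/* .sendpage is never mentioned, so it is NULL. */
static const struct my_ops ops = { .open = my_open };

int ops_has_open(void)     { return ops.open != NULL; }
int ops_has_sendpage(void) { return ops.sendpage != NULL; }
```

Any code that calls ops.sendpage() without checking it first will jump through a null pointer; whether that is a crash or a root hole depends on what is mapped at address zero.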
Given that, it would behoove us all to build our systems in ways which are resistant to null-pointer attacks. The /proc/sys/vm/mmap_min_addr parameter was meant to be the first line of defense here; it specifies the lowest address which can be mapped by unprivileged user-space code. Unfortunately, this protection proved porous. Systems with SELinux running, as it turns out, allowed "unconfined" users to map low memory regardless of the mmap_min_addr setting. For many other systems, it was possible to exploit a problem with pulseaudio to run code with the SVR4 personality, which resulted in a mapped zero page. All told, these problems enabled an attacker to bypass the low-memory limits and exploit null-pointer vulnerabilities.
On August 13, another null pointer vulnerability turned up; this one resulted from the combination of a missing function-pointer initialization and a failure to check the pointer before jumping to it. It was an easily exploited hole; demonstration code was duly posted, and there have been reports that attack code is already attempting to use this vulnerability. The kernel itself was patched quickly, even if the commit which closed this vulnerability was less than forthcoming about the problem.
Linus did mention the problem in the 2.6.31-rc6 announcement, though.
So, do "we" really have all of those issues fixed? We do not, though some important progress has been made in that direction. Take Fedora as an example: the SELinux policy problem which unconditionally allowed "unconfined" users to map low memory has been fixed; as a result, Fedora systems with SELinux running in the enforcing mode are not vulnerable. But the underlying means by which security modules bypass the mmap_min_addr check has not been fixed. So unpatched Fedora systems with SELinux in permissive mode are vulnerable, even though systems with SELinux disabled entirely are not. Updates for Fedora were released on August 15, two days after the disclosure of the vulnerability. Two days may seem slow for a problem of this nature, but, as it happens, only one distributor - Debian - got an update out more quickly.
Red Hat has not, as of this writing, issued an update for this vulnerability. That is unfortunate, because most RHEL systems are vulnerable as the result of a policy choice made by Red Hat. RHEL systems, by default, allow "unconfined" users to map low addresses. Red Hat's Dan Walsh explains: "We are not planning on changing the default in RHEL5, to maintain backwards compatibility." So, because compatibility trumps security, RHEL systems (and those running distributions based on RHEL) remain vulnerable to a trivial local root problem with exploit code easily available and in use. Not good.
As of this writing, no other distributors have fixed this problem (though Mandriva's update showed up just before publication). Given that this vulnerability affects every kernel released since 2001, every distribution will have shipped vulnerable kernels. Even those which do not enable SELinux and which have taken steps to mitigate the other zero-page mapping problems should really be moving quickly to close this hole. Leaving the barn door open may not be a wise course of action, even if one trusts the fence which has been built around the barn.
One also should not forget all of those older systems, including embedded systems like DSL routers, which will be exposed to this vulnerability. This hole could be a boon to people trying to liberate the devices they own, but it could also be an easy way to take control of important systems which have long since been forgotten about. 2.4 kernels, too, are affected by this problem; it is easy to imagine that the bulk of these older systems will never be fixed.
One month ago we got an undeniable warning that this kind of vulnerability was coming. The security of many of our systems has undoubtedly improved over the course of that month. Even so, the latest null pointer vulnerability would appear to have taken some distributors by surprise; important holes have not been closed and updates have, in some cases, been slow in coming. We can - and should - do better than this.
Index entries for this article:
Security: Linux kernel
Security: Vulnerabilities/Privilege escalation
Posted Aug 18, 2009 15:57 UTC (Tue)
by christian.convey (guest, #39159)
[Link] (2 responses)
Actually, didn't we get it *many* months ago, in the Coverity scan reports that people ignored?
I'm not complaining - the kernel developers don't owe me anything. I'm just saying that the warnings were sounded far earlier if only people were willing to work through the Coverity output.
Posted Aug 18, 2009 16:11 UTC (Tue)
by JoeBuck (subscriber, #2330)
[Link]
On the other hand, it does give a code auditor, or a black hat, a starting point to look for an exploitable problem.
Posted Aug 19, 2009 11:51 UTC (Wed)
by spender (guest, #23067)
[Link]
-Brad
Posted Aug 18, 2009 16:10 UTC (Tue)
by nirik (subscriber, #71)
[Link] (2 responses)
Fedora Updated F-10/F-11 this weekend...
https://admin.fedoraproject.org/updates/F11/FEDORA-2009-8684
Granted, this is not a fix for all bugs of this kind, but at least the obvious known ones right now.
Posted Aug 18, 2009 17:09 UTC (Tue)
by corbet (editor, #1)
[Link] (1 responses)
Posted Aug 18, 2009 17:30 UTC (Tue)
by nirik (subscriber, #71)
[Link]
Sorry, should know better than to try and post before the first cup of coffee. ;)
Posted Aug 18, 2009 16:35 UTC (Tue)
by Trou.fr (subscriber, #26289)
[Link] (4 responses)
Also, PaX introduced UDEREF in 2006 to protect against this (note that it's not complete, since the kernel can still execute code in userland; KERNEXEC protects against that).
Posted Aug 19, 2009 12:01 UTC (Wed)
by spender (guest, #23067)
[Link] (3 responses)
http://forums.grsecurity.net/viewtopic.php?f=3&t=2177...
-Brad
Posted Aug 20, 2009 5:25 UTC (Thu)
by pabs (subscriber, #43278)
[Link] (2 responses)
Posted Aug 20, 2009 12:28 UTC (Thu)
by spender (guest, #23067)
[Link] (1 responses)
From time to time, though, we may/do submit bug reports if, for instance, UDEREF or KERNEXEC catches a bug in the vanilla kernel. There's an example we saw recently where some module, if given a parameter, would attempt to modify some read-only memory; that was caught by KERNEXEC.
-Brad
Posted Aug 22, 2009 6:40 UTC (Sat)
by pabs (subscriber, #43278)
[Link]
Posted Aug 18, 2009 17:20 UTC (Tue)
by michaelkjohnson (subscriber, #41438)
[Link]
"As of this writing, no other distributors have fixed this problem"
As Brad Spengler pointed out, and as I noticed separately before becoming aware of Brad's analysis, rPath Linux is not vulnerable in its default configuration (and I doubt that anyone is changing vm.mmap_min_addr on their rPath Linux-based systems). We are still going to release a new kernel that has these specific issues addressed, but the priority isn't quite as high given the lack of vulnerability.
Posted Aug 18, 2009 17:33 UTC (Tue)
by cruff (subscriber, #7201)
[Link] (3 responses)
Posted Aug 18, 2009 17:56 UTC (Tue)
by fuhchee (guest, #40059)
[Link] (2 responses)
Perhaps that would destroy the performance benefits of sharing the VM information between kernel & user space (since the flag would have to be toggled on & off).
Then there would be no way to cause the execution of user code, even if there are additional missing NULL pointer checks?
There's also "return-oriented programming", a technique for breaking into even suchly configured machines.
Posted Aug 19, 2009 1:26 UTC (Wed)
by zlynx (guest, #2285)
[Link] (1 responses)
:-)
Posted Aug 19, 2009 4:04 UTC (Wed)
by bojan (subscriber, #14302)
[Link]
Posted Aug 18, 2009 18:00 UTC (Tue)
by xilun (guest, #50638)
[Link] (33 responses)
This is only true from the point of view of the hardware. Linux, like every Unix-like system, is programmed in C, so this is _not_ true on targets where the representation of a NULL pointer is zero - which is the case for, oh well, just about every target Linux and GCC support...
Even after the compiler is instructed that NULL is less special than it thought, and every single line of Linux is reviewed for this kind of problem, NULL pointers will stay special in the eyes of third-party tools. That's why it was a very very very bad idea in the first place to allow mapping a page at address zero, and I guess that if this "feature" stays there will again be security issues in the future because of it. So it's still a very bad idea, even if less so now that at least (some) people are conscious of one more thing they have to worry about when they write or review Linux code.
Posted Aug 18, 2009 18:20 UTC (Tue)
by patrick_g (subscriber, #44470)
[Link] (3 responses)
Posted Aug 18, 2009 18:41 UTC (Tue)
by drag (guest, #31333)
[Link] (1 responses)
Is this something that programmers of emulation machines (yes, I know Wine isn't emulation, but in this case it seems to want to do emulation-ish things?) typically want to be able to do?
Would it make sense for the kernel to simply lie? Make it so that address zero from the application's VM perspective isn't really address zero from the kernel's or machine's perspective?
(I am struggling to understand everything going on here. It seems like it wouldn't be difficult to do... I always understood the point of having virtual memory to be that applications can arbitrarily get their memory mapped to any section of memory.)
Posted Aug 18, 2009 19:56 UTC (Tue)
by taviso (subscriber, #34037)
[Link]
You could fake it, but then you wouldn't be using the "hardware accelerated" emulation that makes things like dosemu very fast despite being a relatively complex feat.
Posted Aug 18, 2009 20:55 UTC (Tue)
by jreiser (subscriber, #11027)
[Link]
"All memory is equal, but the memory near address zero is more equal than others." On x86 (protected mode, both 32-bit and 64-bit) and PowerPC (both 32-bit and 64-bit) the hardware itself supports the low 64KiB or 32KiB better than any other region. Some forms of every branch instruction can access low memory always, in addition to the usual region near the program counter. On the PowerPC this is explicit: the AA bit (Absolute Addressing: the bit with positional value (1<<1)) in the instruction. On x86 it is implicit: the 0x66 prefix byte, which performs target_address &= 0xffff; just before branching, and the 0x67 prefix byte, which makes the 0xe9 (and 0xe8) opcodes take a 16-bit displacement instead of a 32-bit displacement. On PowerPC the benefit is a larger set of target addresses, including some targets that are universally accessible regardless of the current value of the program counter. On x86, another benefit also is smaller size: 2, 3, or 4 bytes for a branch instead of 5, or only 5 bytes for some universally-accessible targets on x86_64. Also, do not overlook the advantage of using just 16 bits for storing pointers to an important collection.
Most traditional static compilers such as gcc never use these features. However, there are other compilers, program processors, and runtime re-writers which take advantage of the hardware to offer otherwise-impossible features.
Posted Aug 20, 2009 10:35 UTC (Thu)
by etienne_lorrain@yahoo.fr (guest, #38022)
[Link] (28 responses)
In fact the C language doesn't know the identifier "NULL"; it just knows that its value is zero because the preprocessor defines it that way.
The real problem is to tell the compiler not to optimise away the NULL test of this function in the general case:
Posted Aug 20, 2009 11:47 UTC (Thu)
by xilun (guest, #50638)
[Link] (27 responses)
Wrong:
Posted Aug 20, 2009 12:34 UTC (Thu)
by hppnq (guest, #14462)
[Link] (26 responses)
Posted Aug 20, 2009 18:25 UTC (Thu)
by hummassa (subscriber, #307)
[Link] (25 responses)
Posted Aug 20, 2009 20:22 UTC (Thu)
by nix (subscriber, #2304)
[Link] (7 responses)
How will you *create* one of these pointers?
Posted Aug 21, 2009 4:47 UTC (Fri)
by njs (subscriber, #40338)
[Link] (6 responses)
So... no problem?
Posted Aug 21, 2009 7:27 UTC (Fri)
by nix (subscriber, #2304)
[Link] (5 responses)
So at best it'd give you something like a dump of program state at the
Posted Aug 21, 2009 8:05 UTC (Fri)
by njs (subscriber, #40338)
[Link]
(This is all relatively common in languages with real type systems.)
Posted Aug 21, 2009 18:50 UTC (Fri)
by bronson (subscriber, #4806)
[Link] (3 responses)
More like it mandates the null checks that everybody is supposed to do but even the most skilled programmers can't get 100% correct. It should raise the quality of all C programs.
> at best it'd give you something like a dump of program state at the
Yes, that's better than dereferencing and getting rooted isn't it?
Posted Aug 21, 2009 19:06 UTC (Fri)
by nix (subscriber, #2304)
[Link] (2 responses)
So, yes, it's an improvement, but I'm not sure it's a large one. (I also
Posted Aug 27, 2009 19:30 UTC (Thu)
by hummassa (subscriber, #307)
[Link] (1 responses)
YOU CANNOT DEREFERENCE A NULLABLE POINTER
if you want to use the star, check if it is nullable. People will start to use non-nullable pointers everywhere in their interfaces because they don't want to be checking for null all the time. :-D Cunning, eh?
Posted Aug 27, 2009 19:31 UTC (Thu)
by hummassa (subscriber, #307)
[Link]
Posted Aug 20, 2009 22:19 UTC (Thu)
by hppnq (guest, #14462)
[Link] (12 responses)
The problem, however, is that in non-trivial programs you need to be able to dereference pointers that could be NULL even if they should not be NULL. The compiler may not be able to catch all of these situations for you.
Posted Aug 21, 2009 4:45 UTC (Fri)
by njs (subscriber, #40338)
[Link] (11 responses)
Yes, that's no problem. When you set up the sort of type system he or she describes, you include some syntax that lets you convert a "nullable" pointer into a non-null pointer by checking that it is, in fact, non-NULL. Once it's a non-null pointer, it becomes legal to dereference. (In the OP's sketch they overload the 'if' operator for this, but you could add some sort of extra syntax instead if you want to make it clearer.)
It does mean you can't dereference a maybe-NULL pointer *that is actually NULL*, but... that's the point :-).
Posted Aug 21, 2009 7:17 UTC (Fri)
by nix (subscriber, #2304)
[Link] (5 responses)
I still don't see any robustness benefit here.
(Of course proving that pointers cannot be null at compile time is
Posted Aug 21, 2009 7:48 UTC (Fri)
by dgm (subscriber, #49227)
[Link] (2 responses)
The gotcha is that null pointers are just _one_ type of invalid pointer.
Posted Aug 21, 2009 19:10 UTC (Fri)
by nix (subscriber, #2304)
[Link] (1 responses)
Posted Aug 22, 2009 1:10 UTC (Sat)
by njs (subscriber, #40338)
[Link]
Posted Aug 21, 2009 7:54 UTC (Fri)
by farnz (subscriber, #17727)
[Link]
It's also impossible to verify the C type system; this doesn't stop compilers from running. The trick is to go for a conservative assessment; you're not interested in the choices "will sometimes be null/will never be null", you're interested in "might or might not be null/will never be null". The second is tractable; imagine an "ifnull( <ptrexpression> ) { null-block } else { <ptrexpression is now nonnull> nonnull-block }". By requiring you to use ifnull to convert nullable pointers to nonnull pointers whenever you might encounter them, the compiler can force you to decide how you're going to handle unexpected nulls.
Whenever the compiler isn't sure that a pointer is nonnull, it gives a compile-time error message. So, examples:
This forces you to handle nulls sanely at some point, or fail to compile and link properly. Practical code handles nullness at boundary points, and then passes nonnull pointers around the place, to code which can assume that they're not null.
Posted Aug 27, 2009 19:35 UTC (Thu)
by hummassa (subscriber, #307)
[Link]
Posted Aug 21, 2009 8:00 UTC (Fri)
by hppnq (guest, #14462)
[Link] (4 responses)
This assumes that pointers do not change, which is only true in the trivial cases. If you want to be completely safe, your only option is to always check, right before using it, that a pointer is not NULL.
And then, by the way, you still have to worry about what will happen it turns out to be pointing to 0x1. ;-)
Posted Aug 21, 2009 22:53 UTC (Fri)
by nix (subscriber, #2304)
[Link] (3 responses)
Posted Aug 21, 2009 23:40 UTC (Fri)
by corbet (editor, #1)
[Link] (2 responses)
Posted Aug 22, 2009 0:34 UTC (Sat)
by nix (subscriber, #2304)
[Link] (1 responses)
... but ERR_PTR() has a somewhat comprehensible reason to exist. The thing
Posted Sep 9, 2009 6:59 UTC (Wed)
by cmccabe (guest, #60281)
[Link]
In higher level languages like OCaml, Java, etc., when you encounter an unrecoverable error in a function, you throw an exception. Then the function has no return value-- control just passes directly to the relevant catch() block.
ERR_PTR is the same thing. Normally, the function would return a foo pointer, but an unrecoverable error happened. So you get an error code instead. As a bonus, if you forget to check for the error code, you get a guaranteed crash (well, if some bonehead hasn't allowed the page starting at address 0 to be mapped). I say "bonus" because the alternative is usually a nondeterministic crash.
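The convention can be sketched in user space. The constants below mirror the spirit of the kernel's include/linux/err.h, but this standalone version is illustrative, not the kernel's actual code: small negative error codes are encoded as pointers into the top, never-mapped page range, so an unchecked use faults instead of silently reading data.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(intptr_t error)   { return (void *)error; }
static inline intptr_t PTR_ERR(const void *p) { return (intptr_t)p; }
static inline int IS_ERR(const void *p)
{
    /* Error pointers occupy the top MAX_ERRNO addresses. */
    return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}

static int the_object = 7;

/* Returns either a valid pointer or an encoded error code. */
void *lookup(int fail)
{
    return fail ? ERR_PTR(-2) /* e.g. -ENOENT */ : (void *)&the_object;
}
```

Callers are expected to test with IS_ERR() before dereferencing, and to recover the error code with PTR_ERR() when the test fires.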
Posted Sep 9, 2009 6:35 UTC (Wed)
by cmccabe (guest, #60281)
[Link] (3 responses)
References must point to valid objects.
Posted Sep 9, 2009 14:28 UTC (Wed)
by foom (subscriber, #14868)
[Link] (2 responses)
is perfectly valid. So they don't make a very good non-nullable pointer.
Posted Oct 18, 2009 22:52 UTC (Sun)
by cmccabe (guest, #60281)
[Link] (1 responses)
There shall be no references to references, no arrays of references, and no pointers to references. The declaration of a reference shall contain an initializer (8.5.3) except when the declaration contains an explicit extern specifier (7.1.1), is a class member (9.2) declaration within a class declaration, or is the declaration of a parameter or a return type (8.3.5); see 3.1. A reference shall be initialized to refer to a valid object or function. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the object obtained by dereferencing a null pointer, which causes undefined behavior. As described in 9.6, a reference cannot be bound directly to a bitfield. ]
ISO/IEC 14882:1998(E), the ISO C++ standard, in section 8.3.2 [dcl.ref]
C.
Posted Oct 19, 2009 3:47 UTC (Mon)
by foom (subscriber, #14868)
[Link]
I stand corrected.
I had always considered the "dereference" that occurs during the initialization of a reference
Posted Aug 23, 2009 8:23 UTC (Sun)
by oak (guest, #2786)
[Link]
Only (structure or other) variables that are static (global or static in
Other variables *may* be zero if their storage happens to be in a part of
Posted Aug 24, 2009 16:47 UTC (Mon)
by jimparis (guest, #38647)
[Link]
Yep, like Android phones. Nice!
Posted Aug 25, 2009 21:37 UTC (Tue)
by jtk@us.ibm.com (guest, #29832)
[Link]
Posted Aug 27, 2009 9:21 UTC (Thu)
by gebi (guest, #59940)
[Link] (1 responses)
WTF?
New, updated kernel packages became available for amd64 on 14-Aug-2009 at 14:35.
Posted Aug 27, 2009 12:58 UTC (Thu)
by corbet (editor, #1)
[Link]
Posted Aug 27, 2009 17:26 UTC (Thu)
by slack (guest, #12206)
[Link]
Posted Sep 9, 2009 7:16 UTC (Wed)
by cmccabe (guest, #60281)
[Link]
The most ironic thing about all of this is that unless you're running some hardware from the late Jurassic period, you'll be able to run your emulator at full speed even without the hack. I'm sure the sysadmins of the world will be happy to know that although their boxes got rooted, at least they can run DOSBox at 50000x speed rather than 45000x.
And don't tell me that we're going to eliminate NULL pointer dereferences from the kernel. That will never happen.
The real bug here is that we are not properly enforcing the mmap_min_addr setting. Running a properly configured selinux system should never be less secure than running with selinux off. That's just FAIL.
Null pointers, one month later
No, not really. When Coverity reports that a dereference of a pointer precedes, rather than follows, a test to see if that pointer is null, this doesn't tell you whether or not it is possible for a real null pointer to reach that point. It's possible that the compare against null is redundant.
https://admin.fedoraproject.org/updates/F10/FEDORA-2009-8647
Indeed, the article points out that Fedora issued updates on the 15th, beaten only by (one of) Debian's updates. Am I missing something? (And they've not fixed them all; Fedora in permissive mode still lets the zero page be mapped.)
Fedora updates
not new
merge
What about non-vulnerable systems?
Null pointers, one month later
Why don't they just force the use of the no-execute page-table bit (on processors that support it) for all kernel mappings of user space?
>>> I guess if this "feature" stays there will again be security issues in the future because of that
Is it just for Wine, or is there other software using the map at address zero?
Null pointers, one month later
Is it just for Wine or [are] there other softwares using the map at adress zero ?
Uses of pages near zero
Null pointers, one month later
You can ask the compiler not to optimise tests against zero with a compilation switch, as is done in the latest Linux source.
Another solution is to define NULL as an external pointer, and let the linker set its value to zero (either in the linker command file or via an ld parameter). Then the compiler cannot optimise tests against a value it doesn't know, namely NULL - but it will still optimise away tests against zero when some of them are obvious:
inline void fct (unsigned *cpt) { if (cpt != NULL) *cpt += 1; }
but to optimise it when it is called as:
static unsigned cpt1; // cpt1 address known not to be zero
void fct1 (void) { fct (&cpt1); }
or when called as:
void fct2 (void) {
    unsigned cpt2; // cpt2 address known not to be zero
    fct (&cpt2);
}
That is difficult to achieve.
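The kernel bug that prompted that compilation switch (-fno-delete-null-pointer-checks) followed the pattern below; the names are illustrative, not the actual driver code. Because the pointer is dereferenced before it is tested, the compiler may legitimately assume it is non-NULL and delete the later test as dead code.

```c
#include <stddef.h>

struct device { int flags; };

/* The dereference on the first line licenses the optimizer to treat
 * the subsequent NULL check as unreachable; building with
 * -fno-delete-null-pointer-checks keeps the check in place. */
int get_flags(struct device *dev)
{
    int f = dev->flags;      /* dereference first... */
    if (dev == NULL)         /* ...check afterwards: may be deleted */
        return -1;
    return f;
}
```

With a valid pointer the function behaves identically either way; the difference only matters on the NULL path that the check was supposed to guard.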
Null pointers, one month later
The fact that the C compiler, preprocessor excluded, does not know about the symbol NULL is irrelevant. NULL is defined as the null pointer constant, and the C language, even preprocessor excluded, does know about the null pointer constant. And even if the null pointer constant can be literally written (in a strictly conforming program) as (void*)0, that does not mean that the representation of the null pointer constant must be zero.
Dereferencing the pointer that is supposed to never point at a valid object (the NULL pointer) is always going to be a problem -- but that problem is made bigger if there are actually objects living at exactly that part of memory ("zero").
Null pointers, one month later
Put some pragma to deal with legacy code, etc...
C and C++ could have non_nullable pointers, easily
int *nonnull a = NULL; // syntax error
int *b = NULL; // Ok
int f(int *nonnull c) { return *c; } // ok
int g(int *d) { return *d; } // syntax error
int h(int *e) {
if( e ) {
// here, "e" is of type "int *nonnull" b/c of the check
return *e;
} else {
return 0;
}
} // ok
f(b); // syntax error
h(b); // ok
if( b ) f(b); // ok
C and C++ could have non_nullable pointers, easily
everyone should already be doing anyway, and replaces it with something
which is sufficiently automated that I can't see how it could provide
helpful output at runtime (unless it did a longjmp() or EH got added to C
or something).
time of the unintended NULL dereference: i.e., a core dump. The only
advantage is that the set of places you could get core dumps from might be
slightly smaller (at allocation, rather than at first dereference).
C and C++ could have non_nullable pointers, easily
time of the unintended NULL dereference
C and C++ could have non_nullable pointers, easily
kernels and those very rare userspace programs that dereference things at
address zero or have structures whose sizeof() is in the multimegabyte
range), dereferencing null pointers doesn't lead to a root hole, but to a
crash. DoSes are bad enough, and it's still a bug...
fear it would turn out like 'const' too often does: the semiclued majority
would just use nullable pointers everywhere because non-nullable ones
are 'too annoying'. But security-important software and software written
by clued people which can't use real languages like ocaml ;) would of
course benefit. And perhaps that's all we can hope for.)
C and C++ could have non_nullable pointers, easily
Well, yeah. For ages GCC has supported the nonnull function attribute, used to specify arguments of a function that should not be NULL so you can catch these at compile time.
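A minimal sketch of that attribute follows; the NONNULL wrapper macro is ours. With -Wnonnull (implied by -Wall), GCC flags call sites that pass a literal NULL for the annotated parameter. It is purely a compile-time diagnostic, not a runtime guarantee, and the compiler may also optimize on the assumption.

```c
#include <stddef.h>

/* Portability shim: expand to GCC's nonnull attribute where available. */
#if defined(__GNUC__)
#define NONNULL(...) __attribute__((nonnull(__VA_ARGS__)))
#else
#define NONNULL(...)
#endif

/* Parameter 1 is declared never-NULL. */
NONNULL(1)
int deref(const int *p)
{
    return *p;
}
/* deref(NULL) would draw a -Wnonnull warning at the call site;
 * deref(&x) compiles silently. */
```

Unlike the hypothetical type-system extensions discussed above, this catches only the NULLs the compiler can see at the call site; a maybe-NULL variable passes through without complaint.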
C and C++ could have non_nullable pointers, easily
cannot_convert_null exceptions from the pointer conversion?
impossible in the general case.)
C and C++ could have non_nullable pointers, easily
C and C++ could have non_nullable pointers, easily
compatibility with almost all previous code: this from a language so
conservative that by word-of-dmr the precedence of && and || was
intentionally set wrong so as to avoid breaking code running on three
sites :)
C and C++ could have non_nullable pointers, easily
int func1( int *pointer )
{
    return *pointer; // Compile error here - cannot dereference a nullable
}
int func2( int * nonnull pointer )
{
    return *pointer; // OK
}
int func3( int * pointer )
{
    return func2( pointer ); // Compile error here - even if pointer is actually non-null.
}
int func4( int * pointer )
{
    ifnull( pointer )
        return 0;
    else
        return func3( pointer ); // OK, but func3 still won't compile, as other callers might use a null pointer.
}
int func5( int * pointer )
{
    ifnull( pointer )
        return 0;
    else
        return func2( pointer ); // OK
}
C and C++ could have non_nullable pointers, easily
Once it's a non-null pointer, it becomes legal to dereference.
I've maintained code that actually went so far as to do this:
C and C++ could have non_nullable pointers, easily
struct blah *foo (...)
{
    if (error_1)
        return NULL;
    if (error_2)
        return (struct blah *)1;
    if (error_3)
        return (struct blah *)2;
    /* repeat for ten or so errors */
    return /* a real struct blah */;
}
After I'd finished being sick into the keyboard, I got a new keyboard and fixed it so it didn't do that anymore.
Ever seen the kernel ERR_PTR() macro? :)
C and C++ could have non_nullable pointers, easily
C and C++ could have non_nullable pointers, easily
I'm discussing had only half a dozen callers, and half of them ignored the
fact that it might return an error and just blindly dereferenced anyway
(but for all I know the same is true of ERR_PTR()s users).
C and C++ could have non_nullable pointers, easily
Obj *p = 0;
Obj &r = *p;
C and C++ could have non_nullable pointers, easily
> Obj *p = 0;
> Obj &r = *p;
>
> is perfectly valid. So they don't make a very good non-nullable pointer.
C and C++ could have non_nullable pointers, easily
variable as syntax, rather than an actual memory operation, and thus the value of the pointer is
irrelevant at that point. Clearly the standard says otherwise.
Null pointers, one month later
"...silently set to null by the compiler in every declaration which does not include an explicit initialization for that pointer."
Only (structure or other) variables that are static (global or static in the local scope) are guaranteed by C to be initialized to zero. The compiler does this by storing them into BSS, which is zeroed by the OS on process startup.
Other variables *may* be zero if their storage happens to be in a part of heap or stack that hasn't been used earlier by the process. And even this happens only on OSes that initialize new pages to zero; otherwise the variables contain a "random" value.
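The distinction can be checked directly; the names below are illustrative. Static-storage objects are zero-initialized by the C standard (C99 6.7.8p10), typically by placement in .bss, while automatic variables carry no such guarantee.

```c
#include <stddef.h>

/* Objects with static storage duration are zero-initialized by C;
 * implementations usually place them in .bss, which the OS zeroes
 * at process startup. */
static int counter;            /* guaranteed to start at 0 */
static int (*hook)(void);      /* guaranteed to start as NULL */

int counter_value(void) { return counter; }
int hook_is_null(void)  { return hook == NULL; }
```

An uninitialized automatic variable, by contrast, has an indeterminate value, and reading it is undefined behavior regardless of what happens to be on the stack.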
Null pointers, one month later
http://www.ryebrye.com/blog/2009/08/16/android-rooting-in...
what about bigmem?
It's a performance hit, but a security win: the kernel can't access user memory via the process's virtual addressing.
Null pointers, one month later
...
> As of this writing, no other distributors have fixed this problem (though Mandriva's update showed up just before publication).
http://www.debian.org/security/2009/dsa-1862
Not bad for a vulnerability reported on 13.8, though; Debian's kernel/security team does a _really_ good job at such things! (Thanks for your hard work!)
WTF indeed. If you read the text, you'll see that Debian is credited as having gotten the first update out. I seem to have written that in an especially confusing way, and I apologize, but the information is there.