It's normal, and can make sense, to succeed in dereferencing a pointer containing zero if you have readable memory mapped at address zero. Some older platforms used 'page zero' memory as a kind of cache, since it was faster to reach (or took less object code to address).
So the C standard can say nothing about it. Where the C standard does mention null pointers is in requiring that, when the constant zero is assigned to (or compared with) a pointer, it is converted to a specific value meaning 'null'. That 'null' value need not have the same representation as address zero (i.e. it may be some special nonzero bit pattern for a particular pointer type). This allows for weird platforms where zero is a commonly used address and some other special invalid address value can be efficiently detected, or used to raise an exception.
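To make that concrete, here's a minimal sketch (toy code, not from any particular program) of the guarantee the standard does give: the source-level constant 0, used in pointer context, always means 'null', whatever bit pattern the platform actually uses to represent it.

    #include <stdio.h>

    int main(void)
    {
        int *p = 0;   /* the source-level constant 0 becomes the null pointer */

        if (p == 0)   /* comparing against 0 compares against the null pointer */
            puts("p is the null pointer");

        /* The null pointer's in-memory representation need not be all-bits-zero;
         * the compiler maps the source-level 0 to whatever the platform uses. */
        return 0;
    }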
OTOH Linux could say something about it, e.g. by not permitting anyone to map anything at address zero. This might break one or two native-code emulators of ancient zero-page-using OSes, but surely no-one will cry over those in 2009.
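For what it's worth, you can probe what a given system does with a few lines. The sketch below just asks for one page at address zero (the 4096-byte page size is an assumption); on Linux the vm.mmap_min_addr sysctl already restricts such low mappings on many configurations, so an unprivileged run will typically be refused.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Try to map one readable/writable page fixed at address zero. */
        void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        if (p == MAP_FAILED)
            printf("mapping address zero refused: %s\n", strerror(errno));
        else
            printf("mapped page zero at %p\n", p);

        return 0;
    }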
None of which is relevant to whether it's legal for gcc, during optimisation, to remove code that relies on common-but-not-universal behaviour. This is definitely a compiler bug.
It's possible in practice for the pointer to be zero, and for execution to continue after reading the contents of address zero, so the test should never have been removed.
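For anyone who hasn't seen the pattern, here's a rough sketch of the kind of code involved (illustrative only, not the actual kernel source): the dereference precedes the test, gcc treats a null dereference as undefined behaviour, concludes the pointer cannot be null afterwards, and drops the check.

    #include <stddef.h>

    struct dev { int flags; };

    /* Sketch of the pattern at issue: the pointer is dereferenced before it is
     * tested.  Because dereferencing a null pointer is undefined behaviour in
     * standard C, gcc can conclude that `d` is non-null by the time of the test
     * and delete the check as dead code -- even on a system where the load from
     * address zero would have succeeded. */
    int poll_flags(struct dev *d)
    {
        int f = d->flags;    /* dereference happens first */

        if (d == NULL)       /* gcc may optimise this check away */
            return -1;

        return f;
    }

In a typical optimised build the check can silently disappear; gcc's -fdelete-null-pointer-checks is the relevant optimisation.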