If I understand the argument correctly, it goes like this:
1/ A null pointer is a known, reserved address that doesn't actually point to an object of the expected type. Whatever sits at that address is something else entirely, and the pointer should never be dereferenced.
2/ Dereferencing such a pointer is not only wrong; it leads to results that cannot be predicted by examining the original source code.
3/ Therefore, every attempt to dereference a pointer must be preceded by a test to see whether the pointer is NULL, raising an exception if it is (sketched in code after this list).
4/ (I'm guessing here; this wasn't explicit in the talk, but it's the only way the rest makes sense.) Performing that test was too expensive on the hardware of the time, so the test was omitted, and programs behaved in hard-to-understand ways, which carried a substantial cost either in debugging time or in dealing with incorrect or harmful results.
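To make point 3 concrete, here is a minimal sketch in C++ of the discipline being described: every dereference is guarded by an explicit test, and the failure path raises an exception instead of reading through a bogus address. The Node type and value_of() are invented for the example.

    #include <iostream>
    #include <stdexcept>

    // Invented example type; the point is the guarded dereference below.
    struct Node { int value; Node* next; };

    int value_of(const Node* n) {
        if (n == nullptr) {                      // the per-dereference test
            throw std::runtime_error("attempted to dereference a null pointer");
        }
        return n->value;                         // safe: n is known to be non-null
    }

    int main() {
        Node head{42, nullptr};
        std::cout << value_of(&head) << '\n';    // prints 42
        try {
            value_of(nullptr);                   // the test fires here
        } catch (const std::runtime_error& e) {
            std::cout << e.what() << '\n';
        }
    }

The cost argument in point 4 is about exactly those extra compare-and-branch instructions being executed on every single dereference.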
If I am right, then:
A/ While it may have been a mistake then, it isn't relevant any more: any hardware with an MMU (which, as far as I know, is essentially everything these days) can check every dereference for 'NULL' at essentially no cost, simply by leaving the page at address zero unmapped, and raise an exception when it is hit (sketched below), and
B/ The "billion dollars" is almost certainly an extreme over-estimate. The real problem is programmers writing bad code. Some other technique than NULL (e.g. explicit sentinal objects) might have made some of the problems easier to detect earlier and so might have saved something, but I really don't think it is justifiable to place that much blame on the 'NULL' pointer. Some maybe, but not much.
So I see absolutely no problem in a modern language allowing a NULL pointer, though certainly supporting non-NULLable pointers as well is very appropriate. And every runtime should check for dereferences of NULL (preferably in hardware) and raise an exception when it happens. But I'm sure they already do.
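As a closing sketch of that "both kinds" position, in C++ terms: a reference plays the role of a non-NULLable pointer, while a plain pointer stays nullable for the cases that genuinely need to express "no object". The names here are made up.

    #include <iostream>

    static int total = 0;

    void add_to(int& counter, int amount) {   // counter can never be NULL
        counter += amount;
    }

    int* maybe_find(bool found) {             // nullable by design
        return found ? &total : nullptr;
    }

    int main() {
        add_to(total, 5);
        if (int* p = maybe_find(true)) {      // the one place a test is needed
            add_to(*p, 1);
        }
        std::cout << total << '\n';           // prints 6
    }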