He specifically mentions (in Subtle Cheating, item d.) using "parkrun" as a good test clip for adaptive quantization, one that he explicitly tuned for. In the post you link to, he uses a frame from the very similar "parkjoy" clip to make the same point about the existing WebP encoder's (i.e. not the format's) current lack of adaptive quantization. But, as he says:
"... claiming that either is representative of most real content and thus can be used as a general determinant of how good encoders are is of course insane."
The subtlety of his argument (in both this and his WebM review) seems to have been lost on many people, and so this insanity has become common, with his stills being widely touted as proof that WebP/M looks "crappy".
Despite his apparent annoyance at WebP, he himself previously suggested that a still image format based on H.264 frames would be a good idea. Since he has also pointed out that WebM (and therefore WebP) is essentially an H.264 clone, it can't be far from what he suggested could be done with current H.264 tech, which he claimed would beat JPEG by 2x or more (see the paragraph beginning "JPEG2000 is a classic example"):
The article is actually about the problems with wavelet compression and why JPEG 2000 and Dirac are struggling to live up to their potential. The comment thread in particular is interesting, though: you get to see all the various WebP rivals currently touted as obviously better than WebP (JPEG2000, JPEG-XR, DLI) being roasted by random compression nerds for generally not being as good as JPEG (or alternatively being so compute-intensive that "you can fry an egg on your desktop"), many months before Google made it a political battle by actually implementing and releasing something with a vague chance of adoption.