I think it says more about what Dijkstra wanted to teach. Once you've been exposed to a language that just lets you get simple stuff done without much effort, Dijkstra's preferred approach of formally proving as much as possible starts to seem incredibly long-winded, even if in theory it eventually produces better results. (It is, of course, a good idea in some domains -- just not everywhere, as Dijkstra sometimes seems to have wished.)
For very large programs, especially in safety-critical domains, formal proof of core components starts to seem reasonable -- but when you're teaching, you have to use small examples, and those examples are typically too small to need formal methods of any kind. Expose someone to BASIC, or another language that is good for quickly whipping up something small that works but falls apart at larger scales, and they are likely to think 'why bother with formal methods?' when shown little teaching examples of their use -- examples for which, to be blunt, almost any language would suffice with no formal methods at all.
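To make the contrast concrete, here is a minimal sketch of what the classroom-scale version of 'formally proving' looks like. It's in Lean 4 purely as an illustration (the function and theorem names are mine, not anything from Dijkstra), set against the line or two of BASIC a learner would mentally compare it to:

    -- In BASIC the whole exercise is a line or two, e.g.:
    --   10 IF A > B THEN M = A ELSE M = B
    -- The formal-methods treatment: define the function, then prove it correct.

    def myMax (a b : Nat) : Nat :=
      if a ≤ b then b else a

    -- Prove the result is at least as large as either input.
    theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
      unfold myMax; split <;> omega

    theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
      unfold myMax; split <;> omega

Even a proof this mechanical is several times longer than the program it's about, and at teaching scale the program so obviously works that the proof looks like pure ceremony -- which is exactly the reaction described above.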