This comes up often. If a language or platform has a misfeature that makes it hard to write secure code, experts in that language have trouble seeing why it's a problem. Their reasoning goes: in principle there is workaround XYZ, which you should obviously use if you care about that stuff; otherwise the feature is working as designed. The argument that *in practice* lots of programs end up with security holes does not carry the weight it should.
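As a hypothetical illustration of the pattern (this specific example is mine, not from the text above): SQL string interpolation is exactly this kind of construct. The workaround, parameterized queries, has always existed, but the interpolated version looks natural and passes every test the developer thinks to run.

```python
import sqlite3

# A toy database to demonstrate the two query styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_naive(name):
    # The tempting path: string interpolation. Works fine in testing...
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # The workaround: a parameterized query treats input as data, not SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A real user (or attacker) supplies input the tester never tried:
payload = "' OR '1'='1"
print(lookup_naive(payload))  # dumps every secret in the table
print(lookup_safe(payload))   # matches no row: []
```

The naive version behaves identically to the safe one on every well-behaved input, which is precisely what makes the misfeature so hard for its defenders to take seriously.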
It's similar to user testing: you may test your application thoroughly, but when you give it to real users they do all sorts of things you didn't expect, and they will inevitably find bugs. Constructs that lure unsuspecting programmers into opening security holes (even when those programmers are neither clueless nor careless) should be treated as security bugs just as severe as the holes themselves.