Quite the contrary; in my humble opinion, curiosity is the primary legitimate reason for studying an exploit.
From the point of view of actually protecting a computer, I've always considered (and taught) that exploit-oriented work (when it's not evil) is counter-productive. Most of the time, it takes less effort to fix a potential security bug than to verify its practical exploitability. Often, you check exploitability only because you cannot get trustworthy feedback on the actual danger, in which case you should probably be considering a full solution switch instead.
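To make that asymmetry concrete, here is a minimal sketch (the function names are hypothetical, not from any real codebase): fixing a classic unbounded copy is a one-line change, whereas demonstrating that the same bug is practically exploitable means computing stack offsets, crafting a payload, and defeating mitigations such as ASLR and stack canaries.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable: no bounds check on the copy. Proving this is
 * exploitable in practice is days of offset-hunting; fixing it
 * is the one-line change below. */
void greet_unsafe(const char *name) {
    char buf[32];
    strcpy(buf, name);              /* overflows if name is >= 32 bytes */
    printf("Hello, %s\n", buf);
}

/* Fixed: bound the copy to the buffer size. */
void greet_safe(const char *name) {
    char buf[32];
    snprintf(buf, sizeof buf, "%s", name);  /* truncates instead of overflowing */
    printf("Hello, %s\n", buf);
}

int main(void) {
    greet_safe("world");
    return 0;
}
```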
(Note that there is a critical distinction between no feedback at all, as is frequent with proprietary software, and disagreement between developers over software that is open to public scrutiny, as is frequent with open source software. Though even public scrutiny does not necessarily solve the feedback problem.)
Furthermore, working on exploits means working with bad, obscure code that will not be reused (except for bad reasons or endless demonstration), while working on correcting errors is quality improvement and good programming. (Quality improvement is not as rewarding as writing a new scheduler - but still better than hunting for buffer-overflow offsets - and new Linux schedulers are so... common... nowadays.)
So... there are also real reasons why such a topic scratches so many itches; those forced (either by public pressure or by careless or maneuvering managers) to work on exploitability usually accumulate dissatisfaction. But that's certainly not due to your curiosity.