As the developer of a PDF text-extraction tool, I'm pretty concerned by this sort of bashing. Consider the parallel between this kind of copy protection and robots.txt: both are somewhat shady standards, but both have been in use for some time. Do people complain when major search engines honor robots.txt? Sure, features like this can be abused by some people, but both are easily circumvented by anyone determined to do so. Any "decent" search engine or crawler is expected to respect them, and departing from the expected behavior is considered rude. You can be rude anytime at your own risk, but please don't complain when people decline to provide a tool for it.
BTW, I'm personally against the idea of this "extraction protection" bit, because it poses a serious information barrier for visually impaired users who access PDF documents via speech synthesis (Adobe later added an "exception for blind people" bit, but it's far from perfect). Still, I had to implement this dreadful thing when I published the software. Am I missing something?
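For anyone curious what these two bits actually look like: in the standard security handler, permissions live in the /P entry of the encryption dictionary as a bitfield, with bit 5 (1-based, per the spec's numbering) controlling copy/extraction and bit 10 being the accessibility exception. Here's a minimal sketch of how a tool like mine might check them; the function name and the idea of passing the raw /P integer directly are just for illustration.

```python
# PDF permission bits use the spec's 1-based numbering, so "bit 5"
# is mask 1 << 4. /P is stored as a signed 32-bit integer.
COPY_EXTRACT = 1 << 4    # bit 5: copy or otherwise extract text and graphics
ACCESSIBILITY = 1 << 9   # bit 10: extract text/graphics for accessibility use

def extraction_allowed(p, for_accessibility=False):
    """Return True if the /P flags permit text extraction.

    `p` is the (possibly negative) signed integer from /P;
    reinterpret it as an unsigned 32-bit bitfield first.
    """
    perms = p & 0xFFFFFFFF
    if perms & COPY_EXTRACT:
        return True
    # The accessibility bit permits extraction only for assistive use,
    # e.g. feeding a screen reader or speech synthesizer.
    return for_accessibility and bool(perms & ACCESSIBILITY)

# All bits set (-1): extraction allowed for everyone.
print(extraction_allowed(-1))                            # True
# All bits set except bit 5 (-17): denied, unless the caller
# is extracting for accessibility and bit 10 is still set.
print(extraction_allowed(-17))                           # False
print(extraction_allowed(-17, for_accessibility=True))   # True
```

This is exactly why the "exception for blind people" bit is far from perfect: the honor-system check happens entirely in the extracting tool, and nothing stops a caller from claiming the accessibility path.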