> A spammer could set up an apparently legitimate account, post valid comments with it for long enough to get moderation privileges, and then use it to approve spamming sockpuppets.
In that case, it reverts to what we have now: the comment appears, people report it, and some moderator removes it. The "wait for moderator approval on posts from a new account" step is just an extra layer, which is why my suggestion makes it heavily biased towards allowing the comment to appear.
All this is not intended for troll comments, which are a different sort of creature. And the people who currently see the spam comments and reply to them would be the same people who could see a "do not allow this to appear, it is spam" button, so they would feel no need to answer (the negative reaction would mostly be directed towards obliterating the spam comment with that button).
Making the comment automatically approved once a child comment is posted avoids "restricted" conversations that only some people can see. If a conversation starts, it should become visible to everyone. If the original post turns out to be spam, again the moderators can remove it as they already do.
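To make the bias towards visibility concrete, the rule could be sketched roughly like this (a hypothetical illustration; all names and fields are made up, not any forum's actual code):

```python
# Hypothetical sketch of the visibility rule described above.
from dataclasses import dataclass, field


@dataclass
class Comment:
    author_is_new: bool                 # account still in the probation period
    moderator_approved: bool = False    # a moderator explicitly approved it
    removed_as_spam: bool = False       # a moderator (or the spam button) removed it
    replies: list = field(default_factory=list)


def is_visible(comment: Comment) -> bool:
    """A comment is hidden only if it was removed, or if it comes from a
    new account and has neither moderator approval nor any replies yet."""
    if comment.removed_as_spam:
        return False
    if not comment.author_is_new:
        return True  # established accounts bypass the approval queue
    # New-account comment: visible once approved OR once a conversation starts.
    return comment.moderator_approved or len(comment.replies) > 0
```

The point of the last line is the bias: any one of several events (approval, or simply someone replying) is enough to surface the comment, while only an explicit removal hides it.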
I believe the spam problem is not yet bad enough to risk damaging the community with overly aggressive filtering.
Finally, I posted my "report as spam" ideas in a separate (and less organized) comment.