The most obvious route to singularity (and therefore almost certainly wrong) is when the software starts writing the software. Shortly after, it starts making aesthetic judgements about the software, and moral ones about how to modify the software (itself), and then inevitably starts to ponder what to do about those slow meat-based creatures which started everything rolling, but which take literally seconds to have any sort of thought process at all.
Vernor Vinge has said it all a lot better than I can in his fiction. Read the prologue to "A Fire Upon the Deep" (and then the rest of the book) for a pessimistic vision, and "Rainbows End" for a more optimistic spin (though if I were living in that version of our near future, I'd be very hesitant to make any predictions more than two years out — which, of course, *is* the point).
Already in mathematics, we have (fully open!) "proofs" which are beyond the comprehension of any human being, except by meta-mathematical methods. In other words, a computer program generates a proof which, if printed on paper, would run anywhere from hundreds of thousands of pages upwards. The best a human can do is study the program which constructed the proof. You may even be able to prove that the abstract program is correct — but how do you do that for the hardware and the system in which it runs?
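To make the point concrete, here is a toy sketch in Lean (a proof assistant; my example, not one from any of the giant proofs alluded to above). The "proof" below is a computation carried out by the kernel rather than a human-readable argument, so your trust rests entirely on the proof checker, not on reading the proof object:

```lean
-- `decide` asks the kernel to evaluate a decidable proposition.
-- The resulting proof term is machine-generated; nobody reads it.
theorem small_check : 2 ^ 10 = 1024 := by decide

-- For a genuinely large computer-generated proof, the proof term could be
-- astronomically big. A human instead studies (and ideally verifies) the
-- program and the kernel that checked it -- the meta-mathematical route.
#check small_check
```

Scale this up by many orders of magnitude and you have the situation described above: the proof exists and is fully open, but the only human-sized artifact is the program that produced and checked it.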
(For related SF, read Greg Egan's "Luminous" and "Permutation City")