This may be true in theory (I'm not sure), but in practice it's completely wrong. Bandwidth only tells you how much data can be pushed through a link in a given time period. Saying a link is capable of 1 Gbit/sec means that if you fill every possible bit slot for a full second, you'll have transferred 1 Gbit of data over the wire. Many links also have a frames-per-second limit, so if your frames aren't completely full, you're wasting bandwidth and lowering the utilization of the link.
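As a rough illustration (the forwarding limit and frame sizes below are invented numbers, not the specs of any real device), here's how a frames-per-second ceiling caps effective throughput when frames are small:

```python
# Rough sketch: effective throughput when a link is limited by
# frames per second rather than raw bit rate (numbers are illustrative).

LINK_BPS = 1_000_000_000      # advertised bandwidth: 1 Gbit/s
MAX_FPS = 200_000             # hypothetical forwarding limit in frames/second
FRAME_BITS_FULL = 1500 * 8    # a full 1500-byte frame
FRAME_BITS_SMALL = 64 * 8     # a tiny 64-byte frame

def effective_throughput(frame_bits: int) -> float:
    """Throughput is capped by whichever limit is hit first:
    the raw bit rate or the frames-per-second ceiling."""
    by_fps = MAX_FPS * frame_bits
    return min(LINK_BPS, by_fps)

print(f"Full frames:  {effective_throughput(FRAME_BITS_FULL) / 1e6:.0f} Mbit/s")
print(f"Small frames: {effective_throughput(FRAME_BITS_SMALL) / 1e6:.0f} Mbit/s")
```

With these made-up numbers the same "1 Gbit/s" link delivers about 100 Mbit/s of useful throughput once the frames shrink, which is the wasted-bandwidth effect described above.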
Router latency is caused by many factors: CPU contention, memory pressure, the time it takes to move a frame from "the wire" into the internal hardware and vice versa, how quickly a packet can be processed, whether packet inspection is happening, and so on. Relatively speaking, this can be a very long time. Typically it's measured in microseconds, but certainly not always. Either way, it represents a minimum delay with only a practical ceiling (timeouts and retransmissions). So increasing bandwidth into an already slow router only makes the problem worse.
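A back-of-the-envelope calculation (all values here are illustrative guesses) shows why: only the serialization term shrinks when you add bandwidth, while the delay the router itself adds stays put.

```python
# Back-of-the-envelope per-hop delay (all values are illustrative guesses).

PACKET_BITS = 1500 * 8  # one full-size frame

def per_hop_delay_us(link_bps: float, router_delay_us: float) -> float:
    """Serialization delay shrinks as bandwidth grows; the delay the router
    itself adds (processing, inspection, queuing behind other traffic) does not."""
    serialization_us = PACKET_BITS / link_bps * 1e6
    return serialization_us + router_delay_us

# A struggling router that adds ~2 ms of its own delay per packet.
print(f"100 Mbit/s link: {per_hop_delay_us(100e6, 2000):.0f} us")
print(f"  1 Gbit/s link: {per_hop_delay_us(1e9, 2000):.0f} us")
# Ten times the bandwidth shaves off roughly 108 us of serialization time;
# the 2000 us the router adds is untouched.
```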
Also, if you have a path that passes through 1000 routers, it's bound to hit a dozen that are oversubscribed and performing horribly. This is especially true as your distance from the "core" increases and your distance to the "edge" decreases, which is why major datacenters sit next to, or house, the major peering points of the Internet.
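Because per-hop delays add up across the whole path, even a handful of badly behaved routers can dominate the end-to-end number. A toy example (hop counts and delays are invented):

```python
# Toy path latency: many healthy hops plus a few oversubscribed ones
# (all numbers are invented for illustration).

healthy_hops = 20
healthy_delay_us = 50        # per healthy hop
bad_hops = 3
bad_delay_us = 20_000        # per oversubscribed hop

total_us = healthy_hops * healthy_delay_us + bad_hops * bad_delay_us
print(f"Total one-way delay: {total_us / 1000:.1f} ms "
      f"({bad_hops * bad_delay_us / total_us:.0%} from the {bad_hops} bad hops)")
```

In this sketch three bad hops account for roughly 98% of the total delay, and no amount of extra bandwidth on the other twenty links changes that.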