> hence the reason a magic number for the whole internet will not work.
The 10-100ms range is anything but magic. These two numbers are fundamental requirements coming straight from physics and biology (and a tiny bit of maths).
100ms (give or take) is the amount of buffering required at every potential bottleneck to maximize the throughput of Van Jacobson's congestion control algorithm across a continent. This number does not come from some hairy research but straight from the speed of light and the average size of a continent; not exactly an arbitrary number. Buffer more than 100ms on any link and you will harm latency even more for NO throughput benefit.
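A quick back-of-envelope check of where that order of magnitude comes from (my own rough numbers, not sacred: light in fiber moves at about 2/3 of c, and a continental path is a few thousand km one way):

```python
# Rough sanity check of the ~100ms figure. The constants below are
# illustrative assumptions: ~2e8 m/s for light in fiber, ~4500 km
# for a one-way continental path (real routes detour, so the true
# RTT is larger).
C_FIBER_M_PER_S = 2e8        # approx. speed of light in fiber
PATH_M = 4_500e3             # assumed one-way continental distance

rtt_s = 2 * PATH_M / C_FIBER_M_PER_S   # propagation-only round trip
print(f"propagation RTT ~ {rtt_s * 1e3:.0f} ms")   # prints ~45 ms
```

Propagation alone already lands in the tens of milliseconds; add routing detours and queuing and you are at the ~100ms order of magnitude. Since a loss-based sender like Jacobson's needs roughly one round-trip's worth of in-flight data to keep the pipe full, buffering much more than one continental RTT buys nothing.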
10ms is a threshold in human perception - think VoIP and gaming. Again, no magic here: just biology. Buffer less than 10ms and you harm your throughput (even more) for no perceptible latency benefit.
It is a funny cosmic coincidence that playing Counter-Strike across an ocean sucks while it's OK on the same continent (well, maybe not between Alaska and Chile, but you get the point).
Now, these two numbers are orders of magnitude, rounded for convenience. If you think the ideal range is 15-150ms instead, I have absolutely no problem with that. What I have a problem with is:
> I work with cellular internet and 1 to 3 second ping times are far too common.
Researchers always focus on the complicated stuff (here: optimizing between 10 and 100ms), simply because trivial requirements do not get papers published. Do NOT let researchers distract you from simple facts like this one: a 1-second ping time is a plain bug/a joke. Reduce buffering to 100ms (or 150ms if you prefer) on every link and you will make most of your customers happier and upset practically NONE.
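The "cap every queue at ~100ms" rule is just arithmetic: buffer depth in bytes is link rate times target drain time. A minimal sketch (the link speeds here are illustrative, not from the parent comment):

```python
# Maximum queue depth that still drains within a target delay.
# buffer_bytes = link_rate_in_bytes_per_sec * max_delay_in_sec
def buffer_bytes(link_bps: float, max_delay_s: float = 0.100) -> int:
    """Deepest buffer (in bytes) that drains within max_delay_s."""
    return int(link_bps / 8 * max_delay_s)

# A hypothetical 10 Mbit/s cellular link should queue at most ~125 kB:
print(buffer_bytes(10e6))        # prints 125000
# A 1-second queue on the same link is 10x deeper - the bug in question:
print(buffer_bytes(10e6, 1.0))   # prints 1250000
```

Note the cap scales with link speed: the fix is not one magic byte count for every device, but one delay target (~100ms) applied per link.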