Provided interrupt handlers do nothing more than the absolute minimum in
interrupt context, this approach will not adversely affect latencies.
Allowing interrupt handlers themselves to be interrupted is actually far
worse: it introduces context-switching overhead and ruins overall
predictability.
If every interrupt handler runs to completion without interruption and does
nothing more than wake the thread that performs the associated work,
latencies can be tuned by design rather than by accident. Balancing
throughput against latency then becomes simply a matter of configuring your
scheduler.