Your specific example, "a 20 kHz signal played from 44 kHz samples versus 96 kHz samples," is a particularly poor one. I assume you meant a pure tone? By the sampling theorem, such a tone is fully captured by any sampling rate above 40 kHz, so 44 kHz and 96 kHz are equally good at representing it. If there is any difference at all favoring the 96 kHz system, it arises from relatively worse engineering in the 44 kHz system -- poorer handling of frequencies near 20 kHz, perhaps -- and not from any intrinsic difference between the two digital representations of the signal.
Many people seem to think---and I am not implying you are one---that digital-to-analog conversion works as if the output were linearly interpolated between the sample points. Under that mental model, higher sampling rates do look better, because a linearly interpolated 96 kHz signal would hug the "original analog waveform" far more closely than the same signal sampled at 44 kHz and interpolated the same way. But that is not how it works. A digital system does not reconstruct by fitting line segments between the sample points; it fits the unique band-limited waveform through them (mathematically, sinc interpolation). So in both cases the original 20 kHz sine can be reconstructed equally well.
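You can check this numerically. Below is a minimal sketch (not production DAC code) of ideal band-limited reconstruction via the Whittaker-Shannon interpolation formula, x(t) = sum_n x[n] sinc(fs*t - n). It samples a 20 kHz sine at 44 kHz and at 96 kHz, reconstructs it on a fine time grid away from the window edges, and compares against the true sine; the sample count `N` and the evaluation window are arbitrary choices for the demo.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Ideal band-limited reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n).

    np.sinc is the normalized sinc, sin(pi*x)/(pi*x), which is exactly
    the interpolation kernel the sampling theorem calls for.
    """
    n = np.arange(len(samples))
    # (len(samples), len(t)) matrix of kernel values, summed against samples
    return samples @ np.sinc(t[None, :] * fs - n[:, None])

f = 20_000.0  # the 20 kHz pure tone from the example

for fs in (44_000.0, 96_000.0):
    N = 4000                      # finite window of samples (demo-sized)
    ts = np.arange(N) / fs
    x = np.sin(2 * np.pi * f * ts)
    # evaluate one period in the middle of the window, where the
    # truncation error of the finite sinc sum is negligible
    t_eval = ts[N // 2] + np.linspace(0.0, 1.0 / f, 50)
    err = np.max(np.abs(sinc_reconstruct(x, fs, t_eval)
                        - np.sin(2 * np.pi * f * t_eval)))
    print(f"fs = {fs/1000:g} kHz: max reconstruction error = {err:.2e}")
```

Both rates recover the tone essentially exactly; the small residual errors come from truncating the (ideally infinite) sinc sum to a finite window, not from the sampling rates themselves.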