Of course, no existing software treats filenames purely as a string of bytes; that is just rhetoric. At the very least, filenames are assumed to be ASCII-encoded text and displayed to the user as such, an assumption that breaks down as soon as a filename contains control characters.
If Unix really did treat filenames as merely 'a string of bytes', with no implied character set or encoding, and displayed them to the user as a hex dump or something, then it would be truly encoding-agnostic and would have no difficulty with arbitrary byte values in filenames. Of course, it would also have been a total failure that nobody used. For a filesystem to be useful, it needs to have some amount of meaning (or 'policy' if you will) attached to the filenames it stores. The question is how much: is the current situation of 'ASCII for characters below 128, and above that you're on your own' the best one?
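To make the 'string of bytes' point concrete, here is a minimal sketch (Python, assuming a Linux filesystem such as ext4; filesystems that enforce an encoding, like APFS, will reject this). The kernel accepts any byte except '/' and NUL in a filename, even sequences that are not valid UTF-8, and hands the raw bytes back to user space, which must then decide how, or whether, to decode them for display:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    # 0xff can never appear in valid UTF-8, yet the kernel
    # stores it without complaint: only '/' and NUL are forbidden.
    name = b"report-\xff\xfe.txt"
    path = os.path.join(os.fsencode(d), name)
    with open(path, "wb") as f:
        f.write(b"hello")

    # Listing with a bytes argument returns the raw bytes unchanged;
    # interpreting them as characters is entirely user space's problem.
    entries = os.listdir(os.fsencode(d))
    print(name in entries)
```

Tools that insist on showing the name as text (a shell, a file manager) are exactly where the trouble starts: they have to guess an encoding, and the guess fails for names like this one.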