the filesystem inside an SSD is actually MUCH simpler than a general-purpose filesystem
it always works in fixed-size objects
it never sees more than one write to an object at a time (and if it gets conflicting data for a block, the latest version wins)
it also doesn't have to guess at the architecture and performance of the underlying storage
it has a very simple command set (store this block here) rather than many different ways to do the same thing.
it has a single command queue, whereas a general filesystem gets reads and writes in parallel from many processes.
it does not need to be able to run on multiple cpu cores at the same time
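to make those simplifications concrete, here's a toy sketch of the core job of a flash translation layer: map a logical block to a physical block, with the latest write winning. this is purely illustrative (the class name, sizes, and structures are made up); real SSD firmware also does wear leveling, garbage collection, and ECC, but the command model really is this narrow:

```python
class ToyFTL:
    """toy flash translation layer: fixed-size blocks, one command,
    single queue, latest write to a logical block wins. hypothetical sketch."""

    BLOCK_SIZE = 4096  # fixed-size objects: every write is exactly one block

    def __init__(self, num_physical_blocks):
        self.mapping = {}   # logical block address -> physical block address
        self.flash = {}     # physical block address -> data
        self.free = list(range(num_physical_blocks))

    def write(self, lba, data):
        """the one command: 'store this block here'. commands arrive from a
        single queue, so no two writes to the same lba are ever in flight."""
        assert len(data) == self.BLOCK_SIZE
        pba = self.free.pop()            # flash can't overwrite in place,
        old = self.mapping.get(lba)      # so remap to a fresh physical block
        self.mapping[lba] = pba          # latest version wins
        self.flash[pba] = data
        if old is not None:
            del self.flash[old]
            self.free.append(old)        # the stale block becomes reclaimable

    def read(self, lba):
        return self.flash[self.mapping[lba]]
```

writing the same logical block twice and reading it back returns only the newest data, and the old physical block goes back on the free list, which is the whole "conflicting data, latest version wins" behavior in a few lines.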
all of these things make the resulting code drastically smaller, and therefore easier to check and test.
the complexity of the code in a filesystem climbs significantly faster than the complexity of the problem it is trying to solve, and the chance of bugs climbs significantly faster still than the complexity of the code
but the bottom line of why people are willing to trust the proprietary filesystems in SSDs is that so far the SSD vendors have been getting it right (or at least close enough to right) that the resulting failure rate is down in the noise of mechanical and electrical failure rates. definitely not noticeably higher than any other firmware (even firmware that doesn't contain an internal filesystem)