I am not saying that the risk is equal; I am disputing the claim that ext3 is rock solid and won't lose your data without you needing to do anything.
Ext3 is one of the worst possible filesystems to use if you really care about your data not getting lost (and therefore implement the fsync dance to make sure you don't lose data), because its fsync performance is so horrid.
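For reference, here is a minimal sketch of that dance in C (the function name and paths are hypothetical): write the new contents to a temporary file, fsync it, then rename it over the original so the replacement is atomic. On ext3's default data=ordered mode that fsync can end up flushing far more than just the one file, which is widely blamed for its slowness.

    /* Sketch of the "fsync dance": temp file -> write -> fsync -> rename.
     * Hypothetical helper, not taken from any particular application. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int safe_replace(const char *path, const char *tmp_path,
                     const char *data, size_t len)
    {
        int fd = open(tmp_path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;

        if (write(fd, data, len) != (ssize_t)len ||  /* write new contents */
            fsync(fd) != 0) {                        /* force them to disk */
            close(fd);
            unlink(tmp_path);
            return -1;
        }
        close(fd);

        /* atomically replace the old file with the fully written new one */
        if (rename(tmp_path, path) != 0) {
            unlink(tmp_path);
            return -1;
        }
        return 0;
    }

The fsync in the middle is the whole point: without it, a crash between the write and the journal commit can leave you with a zero-length or stale file, which is exactly the failure people blamed on applications rather than on the filesystem.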
The applications are not keeping the data in buffers 'as long as they can'; they are keeping it in buffers just long enough to optimize disk activity.
The first is foolishly risking data for no benefit, the second is taking a risk for a direct benefit. These are very different things. Ext3 also keeps data in buffers and delays writing it out in the name of performance. Every filesystem available on every OS does so by default; they may differ in how long they buffer the data and in what order they write things out, but they all buffer the data unless you explicitly mount the filesystem with the sync option to tell it not to.
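For completeness, opting out of that buffering looks something like this (device and mount point are hypothetical); the sync option makes every write hit the disk before the call returns, at a severe performance cost:

    # /etc/fstab entry forcing synchronous writes on one filesystem
    /dev/sdb1   /mnt/data   ext3   sync,rw   0   2

Almost nobody runs this way, which is rather the point: everyone accepts some buffering because the performance cost of refusing it is enormous.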
You say that people will always pick reliability over performance, but time and time again this is shown not to be the case. As pointed out by another poster, MySQL grew almost entirely on its "performance over reliability" approach; only after it was huge did they start pushing reliability. The entire NoSQL family of servers is based on relaxing the classic ACID guarantees that SQL databases provide.
This extends beyond the computing field. There is no field where the objective is to eliminate every risk no matter the cost. It is always a matter of balancing risks against the cost of eliminating them, or against the benefits of accepting them.