In my experience, Linux with ext3 in its default configuration is quite dangerous: I suffered massive data loss (thousands of files) using ext3 on a hard disk with the default data=ordered setup, on LVM with no RAID. The cause seems to be drives reordering blocks in their write cache, combined with ext3's lack of journal checksumming to cope with that (and possibly also LVM issues). See http://lwn.net/Articles/342978/ for the details.
My standard setup now is to:
1. Avoid LVM completely
2. Disable write caching on all hard drives using hdparm -W0 /dev/sdX.
3. Enable data=journal on ext3. Running tune2fs -o journal_data /dev/sdX stores this as a default mount option in the superblock, which is the best way to ensure partitions are always mounted with it: it covers the root partition, the disk when attached to another system, and survives a reinstall.
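Steps 2 and 3 can be sketched as below. This is a configuration fragment, not a script to run blindly: /dev/sdX and /dev/sdX1 are placeholders for your actual drive and partition, and the commands need root.

```shell
# Disable the drive's write cache. Note this does not persist across
# reboots, so re-apply it at boot (e.g. from a udev rule or rc.local).
hdparm -W0 /dev/sdX

# Store data=journal as a default mount option in the ext3 superblock,
# so it applies even to the root filesystem and after a reinstall.
tune2fs -o journal_data /dev/sdX1

# Verify: 'journal_data' should appear under "Default mount options".
tune2fs -l /dev/sdX1 | grep 'Default mount options'
```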
The performance hit from these changes is trivial compared to the two days I spent rebuilding a PC where the root filesystem lost thousands of files and the backup filesystem was completely lost.
I suspect the reason LVM is seen as reliable, despite being the default for Fedora and RHEL/CentOS, is that enterprise Linux deployments use hardware RAID cards with battery-backed cache, and perhaps higher-quality drives that don't lie about write completion.
Unfortunately, a default ext3 setup on Linux is far more prone to losing data than I once thought. Correctly configured it's fine, but the average new Linux user has no way of knowing that, or how to configure it. I can't recall losing data like this on Windows in the absence of a hardware problem.