Your clueful comments from a database perspective are greatly appreciated. I am relieved to hear that I reinvented the wheel rather than going out on an untried limb. For the record, no study of database techniques was involved; it just looked like an obviously fast way to get changes onto disk.
We are thinking in terms of far more frequent rollups than every thirty minutes, more like every two or three seconds. We need to experiment and learn where the knee of the efficiency curve lies. This can easily be a mount-time tunable. Elapsed time is not the only criterion; write rate, cache size, and cache pressure from other tasks also matter.
Tux3 is definitely not journalling only metadata (or the logging equivalent), and we have no plans to even offer an option for that. I wouldn't even know how to do that within the Tux3 model. It seems we are able to provide full data consistency nearly for free.
One surprise: we have a compile-time option to overwrite file data in place instead of redirecting writes nondestructively, and it had no detectable effect on performance in either fsx or fsstress. Perhaps we did not measure hard enough, but for now we seem to get full data consistency more or less for free. We still plan to offer a per-file overwrite vs redirect flag, because somebody might want that for reasons other than performance, or maybe there are cases where it really does improve performance.
To clarify, Tux3 always maintains a clean prior image in case the latest commit fails. If the log gets trashed, then for sure we have a difficult recovery task ahead of us, harder than on Ext4 because we don't have fixed positions for different metadata. So we will try to compensate in various ways, with the goal of being able to fsck approximately as reliably. Though to get there we might need to bring in the big guns and convince Ted or Andreas to help out. We should have a nice, upgraded directory index by that time to offer in trade :)