> How? With an absolutely unoptimized default configuration, each commit involves a sync, which involves waiting for the disk.
Unoptimized != default. It was just the system-wide installation I had lying around; it certainly has some adjustments, just nothing optimized for this workload. I don't know off the top of my head which ones exactly, but I can look it up if you want.
> There's no way you can do more than a few hundred such transactions per second unless you have a battery-backed RAID array so you don't need to wait for on-the-average half a disk rotation to bring the data under the head
Yes, I had synchronous_commit turned off (I do that on all my development systems); that's why I wrote that I had a roughly 1/3 second window of possible data loss. That's acceptable in many scenarios.
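For reference, the relevant knobs look roughly like this (a sketch with illustrative values, not my exact settings):

    # postgresql.conf (illustrative, not the exact values from my install)
    synchronous_commit = off   # commits return before the WAL is fsync'ed
    wal_writer_delay = 100ms   # WAL writer flushes about this often; the
                               # worst-case loss window is roughly 3x this value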
By the way, you can get into the low thousands of transactions per second on a single spindle these days (since 9.2) with synchronous commit as well, due to the way WAL flushes are batched/grouped across sessions. It requires quite a bit of concurrency, though.
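The effect is easy to see with pgbench if you crank up the client count (invocation is illustrative, not the one I used): with synchronous_commit = on, sessions that happen to commit at the same time share a single WAL flush, so throughput scales well past what one flush per commit would allow.

    # initialize a test database, then run with many concurrent clients
    pgbench -i -s 100 bench
    pgbench -c 64 -j 8 -T 60 bench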
> (and then possibly another avg half rotation for associated metadata)
PG writes the WAL sequentially into pre-allocated files to avoid that...
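You can see that on disk: the WAL is a set of fixed-size segment files (16MB by default) that are created ahead of time and recycled, so a commit flush is a sequential append into already-allocated space rather than a scattered metadata update.

    # the directory is called pg_xlog on 9.x, pg_wal on 10 and later
    ls -l $PGDATA/pg_xlog | head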
> Doing 320,000 transactions per second with PostgreSQL is hard to imagine.
Those were the read-only ones. 320k write transactions per second to logged (i.e. crash-safe) tables is probably impossible right now regardless of hardware, due to the locking around the WAL.
Sorry, I should have given a bit more context, but it really was just 5 minutes of benchmarking without any seriousness to it.
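For a sense of what such a read-only run looks like, a pgbench select-only invocation along these lines is the usual way to produce numbers like that (illustrative; not necessarily what I ran):

    # select-only workload with prepared statements and lots of clients
    pgbench -S -M prepared -c 64 -j 8 -T 60 bench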