
Cool new Free software

Posted Dec 20, 2012 15:52 UTC (Thu) by andresfreund (subscriber, #69562)
In reply to: Cool new Free software by nix
Parent article: Status.net service to phase out, replaced by pump.io

> How? With an absolutely unoptimized default configuration, each commit involves a sync, which involves waiting for the disk.

unoptimized != default. It was just the system-wide installation I had lying around; it certainly has some adjustments, they just aren't optimized for anything in particular. I don't know off the top of my head which ones exactly, but I can look it up if you want.

> There's no way you can do more than a few hundred such transactions per second unless you have a battery-backed RAID array so you don't need to wait for on-the-average half a disk rotation to bring the data under the head

Yes, I had synchronous_commit turned off (I do that on all my development systems); that's why I wrote that I had a 1/3-second window of data loss. That's OK in many scenarios.

Btw, you can get into the lower thousands on a single spindle these days (since 9.2) with synchronous commit as well, due to the way WAL flushes are batched/grouped across different sessions. It requires quite a bit of concurrency, though.
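(To sketch what I mean by concurrency — this is not the benchmark I ran, and the database name and client/thread counts are made-up example values — a stock pgbench run with plenty of clients would look something like:)

    pgbench -i -s 100 bench          # one-time setup: initialize a scale-factor-100 pgbench database
    pgbench -c 64 -j 8 -T 60 bench   # 64 concurrent clients on 8 worker threads, 60 seconds of the default read/write workload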

> (and then possibly another avg half rotation for associated metadata)

PG writes the WAL sequentially into pre-allocated files to avoid that...

> Doing 320,000 transactions per second with PostgreSQL is hard to imagine.

Those were the read-only ones. 320k writing transactions to logged (i.e. crash-safe) tables is probably impossible right now, independent of the hardware, due to the locking around the WAL.

Sorry, I should have given a bit more context, but it really was just 5 minutes of benchmarking without any seriousness to it.



Cool new Free software

Posted Dec 20, 2012 23:23 UTC (Thu) by man_ls (guest, #15091)

So there is your "completely unoptimized" database: a heavily customized installation by an expert. Oh, and don't use INSERTs as you were told, just COPY. And a schemaless-specific datatype. And hope that performance is still good... At that point you might as well use a data store which at least has been designed with that scenario in mind, and which any idiot (e.g. me) can use to do thousands of writes per second -- out of the box.

Cool new Free software

Posted Dec 21, 2012 0:15 UTC (Fri) by andresfreund (subscriber, #69562)

Wait, what? "Heavily customized"? I checked just now, and I had changed 3 performance-critical parameters (and loads of logging/debugging ones, but those don't increase performance).

1) synchronous_commit = off. Is it OK to lose the newest (0.2 - 0.6s of) transactions in a crash? Older transactions are still guaranteed to be safe. That's a choice *YOU* have to make; it really depends on the data you're storing (and it can be toggled per session & transaction; see the note after this list). Obviously things are going to be slower if you require synchronous commits.

2) shared_buffers = 512MB. How much memory can be used to buffer reads/writes. Had I optimized, I would probably have set it to 4-6GB.

3) checkpoint_segments = 40. How much disk the transaction log may use. Had I optimized for benchmarking or write-heavy use, it would have been set to something up to 300.
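For concreteness, those three non-default settings amount to lines like these in postgresql.conf (values as quoted above, everything else left at its default):

    synchronous_commit = off    # accept losing the last fraction of a second of commits on a crash
    shared_buffers = 512MB      # memory used to buffer reads/writes
    checkpoint_segments = 40    # disk space the transaction log may occupy

And since synchronous_commit is an ordinary setting, it can also be flipped for just one session (SET synchronous_commit = off;) or one transaction (SET LOCAL synchronous_commit = off;) instead of globally.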

I don't know how those could be determined automatically. They depend on what the machine is used for.

And I used plain INSERT & SELECT, no COPY.

