I agree that it's way too hard to tune it and that a lot of folkloric knowledge is needed to do so.
Unfortunately the "benchmark & set" idea doesn't really work in real life. Many of the parameters really, really depend on the workload you want to run:
* a high shared_buffers setting hurts in write-intensive workloads if the dataset is much bigger than the available RAM
* a high shared_buffers setting greatly improves read-intensive workloads with a large hot set, if that set fits into shared_buffers entirely
* a high shared_buffers setting hurts predictable response times in write-intensive workloads pretty badly on certain Linux kernel versions
* a high shared_buffers setting hurts with high connection counts because of the large per-process page tables (can be alleviated with huge pages, probably coming in 9.3)
* a high max_connections setting hurts performance in high-throughput OLTP-ish workloads, but is often needed in beginner setups that don't use a connection pooler
* a high default_statistics_target noticeably hurts high-throughput OLTP workloads (planning gets more expensive) but greatly improves OLAP-ish workloads
* a high checkpoint_segments *greatly* improves write performance
* a high checkpoint_segments setting considerably increases recovery time after a crash/immediate restart
* a low checkpoint_timeout setting + small checkpoint_completion_target decreases response time jitter
* a low checkpoint_timeout setting + small checkpoint_completion_target considerably increases the overall write volume (due to more frequent checkpoints and full-page writes), especially if the workload is update-heavy
I could go on like this without a problem for quite some time; a rough sketch of the kind of contrast I mean follows below.
For some of those, ideas exist to make the setting more generally acceptable; for others, not.
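Purely to illustrate that contrast (the numbers below are invented for this mail, are not recommendations, and assume a dedicated machine with a decent amount of RAM), two postgresql.conf excerpts could plausibly diverge like this:

    # hypothetical write-heavy OLTP box, dataset much bigger than RAM, pooler in front
    shared_buffers = 4GB                 # kept modest, most of the data lives in the OS cache anyway
    max_connections = 100                # low, a connection pooler handles the fan-in
    default_statistics_target = 100      # default, keeps planning cheap for short queries
    checkpoint_segments = 64             # high, to help write performance, accepting longer crash recovery
    checkpoint_timeout = 15min
    checkpoint_completion_target = 0.9   # spread checkpoint I/O out

    # hypothetical read-mostly OLAP/reporting box, hot set fits in memory, few sessions
    shared_buffers = 32GB                # large, so the hot set can stay in shared_buffers
    max_connections = 40
    default_statistics_target = 1000     # planning cost matters little here, estimate quality a lot
    checkpoint_segments = 16             # few writes, so the crash-recovery trade-off isn't worth much
    checkpoint_timeout = 5min
    checkpoint_completion_target = 0.5

The exact values aren't the point; the point is that nearly every line wants to move in the opposite direction on the other box (and even within one box the checkpoint settings pull against each other, jitter vs. overall write volume), which is why a single benchmarked default can't cover both.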