PostgreSQL: the good, the bad, and the ugly

Posted May 21, 2015 23:12 UTC (Thu) by pizza (subscriber, #46)
In reply to: PostgreSQL: the good, the bad, and the ugly by jberkus
Parent article: PostgreSQL: the good, the bad, and the ugly

> 1. Even 5 minutes of downtime is a lot for someone's production-critical database.

...then WTF are you upgrading a production-critical database at all?

Never, never, never do live upgrades of critical stuff.



PostgreSQL: the good, the bad, and the ugly

Posted May 22, 2015 0:10 UTC (Fri) by flussence (guest, #85566)

> ...then WTF are you upgrading a production-critical database at all?

Working under people who won't budget enough to allow the developers to do their job sanely?

PostgreSQL: the good, the bad, and the ugly

Posted May 22, 2015 19:04 UTC (Fri) by dlang (guest, #313)

> Never, never, never do live upgrades of critical stuff.

you don't always have the storage to be able to replicate everything to a new copy for the upgrade.

Even if you can, how do you make the new copy have everything the old copy has if the old copy is continually being updated?

At some point you have to stop updates to the old copy so that you can be sure the new copy has everything before you cut over to it. If you have real-time replication to a copy that's got the same performance/redundancy as your primary (the slony approach that's mentioned several times here), then you can make the outage very short.

But if you aren't set up that way, you have to either never convert, or convert in place.
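
Sketched out, the cutover with such a replication setup looks roughly like this (the host and database names are made up, and the replication-specific steps depend on the tool in use):

    # the new-version cluster has been kept in sync with the old primary
    # via trigger-based replication (Slony, Londiste, or similar)

    # 1. stop application writes to the old primary -- e.g. pause the
    #    application, or make new sessions read-only (note this setting
    #    only affects sessions opened after it is changed, so existing
    #    connections must be drained as well):
    psql -h oldhost -d proddb -c \
      "ALTER DATABASE proddb SET default_transaction_read_only = on;"

    # 2. wait for the replica to apply the last outstanding changes
    #    (tool-specific; the replication system exposes its lag)

    # 3. repoint the application at the new cluster and allow writes again

The write outage is only steps 1-3, which is why the window can be kept very short.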

PostgreSQL: the good, the bad, and the ugly

Posted May 22, 2015 22:14 UTC (Fri) by pizza (subscriber, #46)

> you don't always have the storage to be able to replicate everything to a new copy for the upgrade.

You mean to tell me that you don't have any sort of backup for your system, at all? "pg_dump | xz > dump.xz" takes less space than the live PG database.
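
For instance (with a hypothetical database name "proddb"):

    # compressed plain-SQL dump of the production database
    pg_dump proddb | xz > proddb.sql.xz

    # replaying it into a freshly created database on the new-version cluster
    createdb proddb
    xzcat proddb.sql.xz | psql -d proddb

A plain-SQL dump like this is typically far smaller than the live cluster, since it carries no index data; the indexes get rebuilt when the dump is replayed.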

Again, I stand by my assertion that performing live upgrades of mission-critical stuff is a horrendously bad idea. Justifying it with excuses about lacking sufficient resources to do this properly is even worse, because that tells me you're one failure away from being put out of business entirely.

> But if you aren't setup that way, you have to either never convert, or convert in place.

I'm not saying converting in place is necessarily the wrong thing to do... just that doing it on a live system with no fallback option is insane -- what if there turns out to be an application bug with the new DB?

PostgreSQL: the good, the bad, and the ugly

Posted May 22, 2015 22:18 UTC (Fri) by andresfreund (subscriber, #69562)

> Again, I stand by my assertion that performing live upgrades of mission-critical stuff is a horrendously bad idea.

I think that's a statement pretty far away from reality. If downtime costs you in some form or another, and dump/restore-type upgrades take a long while due to the amount of data, in-place isn't a bad idea.

> just that doing it on a live system with no fallback option is insane -- What it there turns out to be an application bug with the new DB?

Why would in-place upgrades imply not having a fallback?
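
As a minimal sketch of the in-place case with pg_upgrade (the paths below are illustrative Debian-style locations for a 9.3-to-9.4 upgrade): run in its default copy mode, i.e. without --link, pg_upgrade leaves the old data directory untouched, so the old cluster itself remains the fallback as long as you can afford to discard whatever has since been written to the new one.

    # with both the old and new servers stopped:
    pg_upgrade \
      --old-bindir  /usr/lib/postgresql/9.3/bin \
      --new-bindir  /usr/lib/postgresql/9.4/bin \
      --old-datadir /var/lib/postgresql/9.3/main \
      --new-datadir /var/lib/postgresql/9.4/main
    # --link (hard-linking files instead of copying them) is much faster,
    # but gives up the old cluster as a fallback once the new server starts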

PostgreSQL: the good, the bad, and the ugly

Posted May 23, 2015 0:14 UTC (Sat) by dlang (guest, #313)

I didn't say there is no backup. I said that there is no extra system with all the CPU, RAM, fast disks, etc. needed to run the full system in parallel while you are replicating.

A backup can be on much slower storage, and it can be compressed (or stored without indexes, since those get recreated at reload time, etc.).

There are lots of ways a large database can be backed up so that it could be restored in a disaster, but that don't give you the ability to create a replacement without taking the production system down.

