Fedora ponders the Python 2 end game
Posted Aug 2, 2017 19:40 UTC (Wed) by drag (guest, #31333)
In reply to: Fedora ponders the Python 2 end game by wahern
Parent article: Fedora ponders the Python 2 end game
If your developers suck and your projects have bad habits, then static vs. dynamic dependencies are not going to save you. It's going to suck in slightly different ways, but it's still going to suck.
> There are similar problems wrt resource management when you rely heavily on multi-threaded processes. Go isn't designed for simply spinning up thousands of goroutines in a single process; both Go and Plan 9 were designed to make it easy (and predicated upon the ability to) scale a service across thousands of machines, in which case you really don't care about the resiliency of a single process or even a single machine.
Multi-threaded processes are for performance within an application. Dealing with resiliency is an entirely separate question.
If you have an application or database that isn't able to spread across multiple machines and can't deal with hardware failures and processes crashing, then you have a bad architecture that needs to be fixed.
Go users shoot for 'stateless applications', which allow easy scalability and resiliency. Getting fully stateless applications is not easy and many times not even possible, but when it is possible it brings tremendous benefits and cost savings. It's the same type of mentality that says functional programming is better than procedural programming. It's the same idea that is used for RESTful applications and APIs.
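(As a rough sketch of what 'stateless' looks like in Go, not taken from any particular project: the handler below keeps no per-process state, so any instance behind a load balancer can serve any request and a dead instance can simply be restarted. The names and the listen address are made up for illustration.)

    package main

    // Rough sketch of a "stateless" Go service: the handler keeps no
    // per-process state, so any instance behind a load balancer can
    // serve any request, and a crashed instance can simply be
    // restarted. Names and the listen address are illustrative.

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // Everything needed to answer comes from the request itself;
        // durable state (sessions, counters) would live in an
        // external database or cache, not in this process.
        name := r.URL.Query().Get("name")
        if name == "" {
            name = "world"
        }
        fmt.Fprintf(w, "hello, %s\n", name)
    }

    func main() {
        addr := os.Getenv("LISTEN_ADDR")
        if addr == "" {
            addr = ":8080"
        }
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(addr, nil))
    }

Run a few of those behind any load balancer and you can kill any one of them at any moment without losing anything.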
Eventually, however, you need to store state, and for that you want to use databases of one type or another.
The same rules apply, however, when it comes to resiliency. You have to design the database architecture with the anticipation that hardware is going to fail, databases crash, and file systems corrupt. It's just that the constraints are much higher, and thus so are the costs.
It really has nothing to do with static vs dynamic binaries, however, except that Go's approach makes it a lot easier to deploy and manage services/applications that are not packaged or tracked by distributions. The number of things actually tracked and managed by distributions is just a tiny fraction of the software that people run on Linux.
> For example, Go developers harbor no pretense about making OOM recoverable, and AFAICT generally believe that enabling strict memory accounting is pointless. That makes it effectively impossible to design and deploy high reliability Go apps for a small number of hosts that don't risk randomly crashing the entire system under heavy load, such as during a DDoS.
When you are dealing with an app that has hit OOM, it's effectively blocked anyway. It's not like your Java application is going to be able to serve up customer data when it's stuck using 100% of your CPU and 100% of your swap, furiously garbage collecting to try to recover in the middle of a DDoS.
In many situations the quickest and best approach is just to 'kill -9' the processes or power cycle the hardware. Having an application that can just die and automatically restart quickly is a big win in these sorts of situations. As soon as the DDoS ends, you are right back to running fresh new processes. With big applications, thousands of threads, and massive single processes that are eating up GBs of memory... it's a crapshoot whether they are going to be able to recover or enter a good state after a severe outage. Now you are stuck tracking down bad PIDs in a datacenter with hundreds or thousands of them.
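(To make that concrete: a minimal sketch of the "die fast, restart fresh" pattern in Go, assuming Go 1.19 or later for runtime/debug.SetMemoryLimit. The limit, the listen address, and the exit-on-error policy are my illustration, not something from the discussion.)

    package main

    // Sketch of the "die fast, restart fresh" idea, assuming Go 1.19+
    // for runtime/debug.SetMemoryLimit. Instead of trying to recover
    // in-process from memory pressure, the process caps its own heap,
    // exits on unrecoverable errors, and leaves restarting to a
    // supervisor (systemd, Kubernetes, whatever). The limit and the
    // listen address are arbitrary.

    import (
        "log"
        "net/http"
        "runtime/debug"
    )

    func main() {
        // Soft heap limit: the GC works harder as usage approaches it,
        // instead of the process ballooning until the kernel's OOM
        // killer picks a victim.
        debug.SetMemoryLimit(512 << 20) // 512 MiB, arbitrary for the sketch

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok\n"))
        })

        // On a fatal error, exit immediately; a fresh process is
        // cheaper and more predictable than a damaged one limping along.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }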
> And the kernel-process boundary isn't the only place where you care about independently upgrading components.
No, but right now with Linux distributions it's the only boundary that really exists.
Linux userland exists as one big spider web of interconnected inter-dependencies. There really are no 'layers' there. There really aren't any 'boundaries' there... It's all one big mush. Some libraries do a very good job of providing strong ABI assurances, but for the most part that doesn't happen.
Whether it's easier to just upgrade a single library, whether that's even possible, or whether it's easier to recompile a program... all of this is very much 'it depends'. It really, really depends. It depends on the application, how it's built, how the library is managed, and thousands and thousands of different factors.