Takeaways from a global malware disaster
Posted May 18, 2017 22:31 UTC (Thu) by gracinet (guest, #89400)
In reply to: Takeaways from a global malware disaster by HenrikH
Parent article: Vulnerability hoarding and Wcry
>Indeed. I routinely get questions from customers on how they should upgrade our particular software, and then they tell me that they currently use a version that we released several years ago. My agony with this is not that they have not updated our particular software suite, but that we put software on their machines via both DEB and RPM repositories, so this means that they have not even run "apt upgrade" or "yum update" for all that time either.
I don't know if that's your case, but with the kind of tailored software I'm producing (nothing technically fancy, just piles of business rules), the root cause is often that the admins just don't dare to do it, too afraid of breaking applications they can't even test on their own. Instead, they rely too much on what they can actually maintain: the surrounding infrastructure, firewalls, etc.
YMMV, but it's also true that very few customers can tolerate breakage from an update they didn't ask for (even if hundreds of previous ones went through silently and prevented lots of problems). All of this requires lots of prior explanation and mutual understanding - this is hard.
So, in my experience it's quite common to be called in for some application-level bugfix and to notice that the surrounding system hasn't had a single upgrade in years. I often raise the issue, hoping that the testing windows can be shared, but that's a double-edged sword: usually people call you with a specific (very urgent) goal in mind, and evaluate your work against that goal only. It's also quite common for the application to be scheduled for complete replacement (which is always late) after some years in production, and in that case it's of course very hard to plead for any extra work. And it's true that after too many upgrades have been skipped, things can get a bit dangerous.
This is the part where I heartily thank Debian for its stability: it makes applying upgrades automatically a reasonable trade-off.
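For the record, here is a minimal sketch of what "applying upgrades automatically" can look like on Debian, using the stock unattended-upgrades package (the file path and values below are the package defaults; adjust to taste):

    # install and enable the standard mechanism
    apt install unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

Out of the box this only pulls in security updates, which is exactly the trade-off that Debian's stability makes reasonable.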
A colleague of mine once even made the formal acknowledgement of such a situation a prerequisite to proceeding further (along the lines of: here's the list of outdated system packages with security issues; notice that it's almost all of them; that wasn't even in our mission, so we consider it your problem to fix; please acknowledge that we can't be held accountable for the consequences of that situation, or let's push the price up a bit if you want us to fix that too).
This is indeed the kind of human and organizational dysfunction that the dev-ops movement has been trying to solve, but I fear that dev-ops works better when it's driven from within an organization (usually a tech-savvy one), not by outside contractors. Also, for some people, dev-ops doesn't mean much more than being able to deploy Docker containers without dependency hell; it's easy to forget that those are meant to be upgraded too, even if nothing changes in the app itself.
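To make that concrete: a container image only picks up security fixes when it is rebuilt, even if the application inside hasn't changed. A minimal sketch (the image name is hypothetical):

    # force a fresh base image and skip stale cached layers,
    # so the rebuild actually picks up the latest security updates
    docker build --pull --no-cache -t myapp:latest .

Scheduling such a rebuild (and redeploy) periodically is the container-world equivalent of running "apt upgrade" on a long-lived machine.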
To be fair, I'm aware of an exception: hosting companies that, by law, have to abide by mandatory security regulations in specialized fields (as happens, e.g., with health-related personal data in France). Unfortunately, it's bureaucratic and very expensive. If it weren't mandatory for the client too, it wouldn't happen in many cases.
> Or the client who recently asked me if I could do a build for CentOS 3...
Oh, that's a nice one! EOL'ed on 2010-10-01!
Posted May 19, 2017 3:23 UTC (Fri) by zlynx (guest, #2285)
Virtual machines have been great for this. Years ago the company I worked for replaced all of our physical rack servers with a blade thing (Dell maybe?) running VMware ESX. Now our admins clone a machine, install all of the updates and test it, and then they can almost transparently shut down the old copy and replace it.
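For those without ESX at hand, roughly the same clone/patch/swap loop can be sketched with libvirt's standard tools (the guest names are hypothetical, and unlike ESX, virt-clone wants the source offline while it copies):

    virsh shutdown appserver        # virt-clone needs the source stopped
    virt-clone --original appserver --name appserver-next --auto-clone
    virsh start appserver           # bring production back up meanwhile
    virsh start appserver-next      # apply updates and test in the clone
    # once the clone checks out:
    virsh shutdown appserver
    # repoint DNS / the load balancer at appserver-next

The point is the same either way: testing happens on a disposable copy, and the swap is a quick, reversible cut-over.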
Posted May 19, 2017 10:12 UTC (Fri) by gracinet (guest, #89400)
Yes, of course, but… lots of applications out there have circular logical dependencies: their own URL encoded in the database, the potential to send thousands of emails (again, configured in the DB) once their scheduled tasks fire, data too big to move around that easily, etc. In short, they aren't designed to make moving back and forth between staging and production easy (of course some are). While you and I would certainly call that a design fault, that kind of thing is often not among the selling criteria, as it's too much of a technician's concern.
Anyway, in the cases I was referring to, with a classic customer/developer/sysadmin separation of concerns, the admins just don't know what the application does, how to test it, or with whom to share the results. Unit and integration tests can obviously help, but they are a developer thing.
Another anecdote: it's been a while now (six years), but I was once in a datacenter, wearing the developer hat, with the functional guy and the sysadmin for a major upgrade of a web application (the first in years), and it really pleased the admin to watch the functional guy actually testing the application. It was the first time he had even seen it in a browser; we had to do some prior work on the firewalls just to make it accessible from the datacenter network. What drove that exceptional gathering was management's perception that the upgrade was both necessary and very risky.
What we can hope for is that this is mostly a thing of the past, with the dev-ops, microservices, and release-and-test-often mantras slowly taking over, but we shouldn't underestimate the human communication gaps that lie behind all this if we don't want to end up with the same problems, just spelled differently.
More broadly, non-technical people in the IT/web business don't trust us to manage priorities: they fear that we'll drown in a self-generated pile of useless work that they don't understand at all. In their defense, I won't swear that never happens. We have to understand their point of view and provide better, more understandable feedback. It's so easy to feel contempt for them when you're the only one around who understands what's at stake, and that's why I've been advocating for a while that developers should have project management experience and vice versa.
As for customers outside the IT business, I've been trying to explain that a computer system is more akin to a living body that needs continuous care (some kind of virtual horse) than to an inert tool, with some, albeit limited, results.