Takeaways from a global malware disaster
Posted May 17, 2017 19:28 UTC (Wed) by fratti (guest, #105722)
Parent article: Vulnerability hoarding and Wcry
Lesson number one: people need to keep their systems up to date. Not enough education is happening in this field about how important it is that your system is up to date and still receiving security patches. Just today I had a conversation with someone who claimed that his Debian Squeeze system was still secure, because (and I quote) "internet banking works". For those too lazy to look this up, Debian Squeeze has not been receiving security fixes since February 2016.
I also feel there needs to be government regulation of software updates on mobile devices. While Android devices are not affected by this specific ransomware, the article rightfully points out that Android has a big issue in this regard. Since most manufacturers don't allow us to run whatever software we want on our smartphones, the manufacturers need to be held to account for any problems that arise from this. Ideally, they wouldn't be allowed to lock down devices like that in the first place, but baby steps.
Lesson number two: people will run old and vulnerable software, so when a contractor sets up a system as a one-time job, they should make sure everything non-essential to the running of the system is behind a firewall or disabled altogether. I don't see why a train timetable needs to have its SMB server exposed to either the local network or the world, but apparently whoever set it up didn't think of that.
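A minimal sketch of the kind of lockdown meant here, assuming a Linux box with iptables and systemd (service names are the Samba defaults; adapt to whatever is actually running):

```shell
# Drop inbound SMB/NetBIOS traffic that has no business being reachable
# from the network: 139/tcp and 445/tcp for SMB, 137-138/udp for NetBIOS.
iptables -A INPUT -p tcp -m multiport --dports 139,445 -j DROP
iptables -A INPUT -p udp -m multiport --dports 137,138 -j DROP

# Better yet, stop and disable the file-sharing services entirely if the
# machine doesn't need them.
systemctl disable --now smbd nmbd
```

On Windows systems like the timetable displays in question, the equivalent would be disabling SMBv1 and blocking those same ports at the perimeter.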
Additionally, I feel there's a disconnect between clients and contractors about purpose-built systems like this: clients believe it's a one-time job with no ongoing maintenance involved, while contractors know better but choose not to correct them.
Finally, I think the whole discussion as to whether the NSA is ultimately at fault here is orthogonal to the issue. Personally, I do believe hoarding vulnerabilities in civilian software infrastructure for espionage purposes is both immoral and dangerous, but I also think that the NSA not finding those vulnerabilities would not make the world any more secure. The vulnerabilities exist whether the NSA finds them or someone else does. Even if Microsoft had quietly rolled out the patches in March without there being a convenient PoC exploit tool, bad actors would have found the vulnerabilities by looking at what the patches changed. Malvertisement authors have been doing exactly this with Adobe Flash updates for a long time; they bet on people not having patched their systems.
There is much that we can do better even without complex self-protection mechanisms, and it all starts with educating people about the dangers of unmonitored, badly maintained systems.
Posted May 17, 2017 19:42 UTC (Wed)
by pizza (subscriber, #46)
[Link] (14 responses)
Part of folks' reluctance to do this is that these "updates" routinely come with intentional side effects -- an extreme example is Windows 10. Or how Netflix will now refuse to install on unlocked/rooted devices.
Vendors have a pretty poor track record, and they're getting worse, not better.
Posted May 17, 2017 19:55 UTC (Wed)
by fratti (guest, #105722)
[Link] (9 responses)
In the case of strictly only security updates for still supported software, I don't think it's that though. There was no excuse for anyone running Windows 7 to get affected by this exploit, but people did.
Posted May 17, 2017 20:01 UTC (Wed)
by pizza (subscriber, #46)
[Link] (3 responses)
I know a lot of people who turned off Windows 7 updates as a response to Microsoft's increasingly-underhanded attempts to force them to update to Win10 via Windows Update.
Posted May 18, 2017 4:58 UTC (Thu)
by eru (subscriber, #2753)
[Link] (2 responses)
> I know a lot of people who turned off Windows 7 updates as a response to Microsoft's increasingly-underhanded attempts to force them to update to Win10 via Windows Update.
Another reason is the unreliability of the Windows update software itself! I have seen it get wedged for good on three different Windows versions on home laptops: it tries to update, wastes fifteen minutes, and then gives up with a hex guru meditation. I last saw this on the WannaCry weekend; I checked whether the Windows 10 laptop was up to date and noticed the effect again. The log was full of failed attempts. Fortunately the last successful update was sometime in April, so it is possibly patched against the SMB issue in question. But the next one might get it.
Posted May 18, 2017 9:47 UTC (Thu)
by cpanceac (guest, #80967)
[Link] (1 responses)
Posted May 18, 2017 18:42 UTC (Thu)
by drag (guest, #31333)
[Link]
It's rare that I have to deal with this though. Usually only when people come to me with a jacked up PC and they want me to fix it.
Posted May 17, 2017 23:43 UTC (Wed)
by simcop2387 (subscriber, #101710)
[Link]
Because of the thought, "If it's not broken, don't fix it". The problem is getting them to understand that it can still be broken behind the scenes where you can't see anything, just like a leaky pipe under a building eventually causing foundation damage or a sinkhole.
Posted May 18, 2017 9:59 UTC (Thu)
by NAR (subscriber, #1313)
[Link] (3 responses)
Not much of an excuse, but pirated (non-activated) copies of Windows 7 might not be able to get updated.
Posted May 20, 2017 13:35 UTC (Sat)
by biergaizi (subscriber, #92498)
[Link] (1 responses)
I believe most Windows users see security updates as an annoyance, even if Windows Update itself is reliable. Patches pop up every few days and strongly push users to update, and users who don't understand the value of security updates just hate it... Large organizations also disable updates to ensure the consistency of their systems and to prevent updates from interrupting their workflow.
Posted May 20, 2017 21:59 UTC (Sat)
by nix (subscriber, #2304)
[Link]
Posted May 21, 2017 16:45 UTC (Sun)
by flussence (guest, #85566)
[Link]
Posted May 20, 2017 0:11 UTC (Sat)
by giraffedata (guest, #1954)
[Link]
And another part is the unintentional side effects - bugs.
I decided a while ago not to apply updates as they come out. I believe my risk of breaking something exceeds my risk of being hacked. I'd love to see a scientific study of that; my gut feeling is just based on the fact that I haven't been hacked yet and I've broken my system, sometimes very badly, dozens of times by applying updates.
The worst breakage-by-update that has happened to me so far is from the recent trend in browser publishers to discontinue the ability to use insecure communication protocols. Unfortunately for me, there are a bunch of servers I need to access that use these protocols. I was naive when I updated those browsers, not realizing backward compatibility is not as sacred as it used to be.
The only way to eliminate this update dilemma is to have finer grained updates through smaller software modules. If you didn't have to install thousands of kernel or browser updates to get one security fix, applying security fixes wouldn't be as risky.
Posted May 20, 2017 17:09 UTC (Sat)
by gezza (subscriber, #40700)
[Link] (1 responses)
Android was mentioned in the article. For me, when an update comes with a demand for new access rights, there is an immediate dilemma: do I apply it or not?

So yes, I should invest the time in CyanogenMod and fine-tune the access anything has. Who really has the time for that, on every system they use?
Posted May 25, 2017 20:19 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Part of the trouble is quite likely all of the crapware on the system that shouldn't be there!

Cheers,
Wol
Posted May 22, 2017 13:22 UTC (Mon)
by bandrami (guest, #94229)
[Link]
Posted May 17, 2017 19:46 UTC (Wed)
by NightMonkey (subscriber, #23051)
[Link]
There is a fundamental problem in the metaphors and abstractions we have made for computing resources and networks. The "use case" for open networks changed when the Internet was handed over from academic and defense research institutions to business, and we have suffered from that "scope creep" ever since.
On the topic of the NSA creating 'weapons' exploiting bugs and design flaws... Edward Snowden made plain the dangers that the NSA and other intelligence agencies present to ordinary people. And in making this plain, he showed that these organizations cannot be trusted with their digital assets. From the massive data vacuums they have created, to weaponized math (which is what software is), they are quite cavalier in how they secure these resources, and are creating dangerous threats where none existed before.
Posted May 18, 2017 4:28 UTC (Thu)
by pabs (subscriber, #43278)
[Link]
Posted May 18, 2017 9:00 UTC (Thu)
by Seegras (guest, #20463)
[Link] (3 responses)
This is beside the point. Demand for these exploits by secret services and law enforcement agencies has led to a sprawling industry trading in zero-days. Here's an example of what we're talking about:
https://www.zerodium.com/program.html
The NSA could start by not hoarding vulnerabilities for instance. But to really make the world more secure, the NSA would need to search for vulnerabilities and _publish_ them.
The cool thing about publishing a vulnerability is that it also denies the use of that vulnerability to your enemies. So if you want to increase security, your only option is to publish.
In fact, there is one thing that distinguishes the White Hats from the others. White Hats publish.
Right now, the NSA is as malicious a Black Hat as it gets.
Posted May 18, 2017 20:48 UTC (Thu)
by HenrikH (subscriber, #31152)
[Link]
Posted May 19, 2017 12:05 UTC (Fri)
by NAR (subscriber, #1313)
[Link] (1 responses)
I'm not sure it's their job to make the world secure. Making the US government computer network more secure is part of their job, but for example making random computers in the Brussels neighborhood of Molenbeek more secure interferes with their job.
Posted May 19, 2017 14:09 UTC (Fri)
by excors (subscriber, #95769)
[Link]
And for the US in particular, cyber warfare almost completely bypasses their conventional military advantage. A group of smart motivated hackers in North Korea with a few million dollars to buy zero-day vulnerabilities could cause as much damage to US computers as the US could to theirs. Better to eliminate that threat globally by improving security for everyone, so that warfare has to instead be done with missiles and trillion-dollar planes where the US has a big lead.
Posted May 18, 2017 10:30 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (4 responses)
I'm the CEO of a consulting company, and my experience is different. Contractors know about the need for maintenance and tell their customers. After all, they want maintenance contracts; these are a very good way to earn money: acquisition costs are low, and you stay in contact with the client to find out what other needs he has that you could help solve (and earn money...).
But customers often don't allocate a budget for ongoing maintenance; they don't see the business need for it. Or the IT department sees the business need, but the C[EFO]O doesn't. (Actually, events like WannaCry are good in this regard; they help illustrate the business case.)
Charging upfront for ongoing maintenance is only possible for mass-market software; for bespoke software it would raise the price to a point where one is no longer competitive in the market.
I.e., the state of affairs is even more complicated than you presented.
Posted May 23, 2017 3:19 UTC (Tue)
by ringerc (subscriber, #3071)
[Link] (3 responses)
Posted May 23, 2017 3:47 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted May 27, 2017 3:49 UTC (Sat)
by ghane (guest, #1805)
[Link] (1 responses)
Yes, but if I _have_ to be unhappy, why not be unhappy at some point in the future? Why be unhappy now?
(And all reboots are dangerous, you never know what unsaved configs you are running)
It is not only CEOs who have short-term objectives; sysadmins have them too.
--
Sanjeev, who is a smoker. Not died even once yet. So there!
Posted May 29, 2017 4:40 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
Posted May 18, 2017 13:26 UTC (Thu)
by habarnam (guest, #61672)
[Link]
Indeed, but in the case of the good guys finding them, responsible disclosure should be the first step, not storing them for darker days. I think this is, or should be, the basis of most of the criticism directed at the NSA.
Posted May 18, 2017 20:53 UTC (Thu)
by HenrikH (subscriber, #31152)
[Link] (3 responses)
Indeed. I routinely get questions from customers about how they should upgrade our particular software, and then they tell me that they are currently using a version we released several years ago. My agony with this is not that they have not updated our particular software suite, but that we put software on their machines via both DEB and RPM repositories, which means they have not even run "apt upgrade" or "yum update" in all that time.
Or the client who recently asked me if I could do a build for CentOS 3...
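For what it's worth, a quick way to gauge how stale such a machine is, using the standard package-manager logs on Debian- and Red Hat-style systems:

```shell
# Debian/Ubuntu: when did dpkg last upgrade anything?
grep ' upgrade ' /var/log/dpkg.log | tail -n 3

# RHEL/CentOS: packages sorted by install time, newest first
rpm -qa --last | head -n 3
```

If the newest timestamps are years old, no amount of application-level patching is going to save the box.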
Posted May 18, 2017 22:31 UTC (Thu)
by gracinet (guest, #89400)
[Link] (2 responses)
> Indeed. I routinely get questions from customers about how they should upgrade our particular software, and then they tell me that they are currently using a version we released several years ago. My agony with this is not that they have not updated our particular software suite, but that we put software on their machines via both DEB and RPM repositories, which means they have not even run "apt upgrade" or "yum update" in all that time.
I don't know if that's your case, but with the kind of tailored software I produce (nothing technically fancy, just piles of business rules), the root cause is often that the admins just don't dare do it, being too afraid of breaking applications they can't even test on their own. Instead, they rely too much on what they can actually maintain: the surrounding infrastructure, firewalls, etc.
So, in my experience it's quite common to be called in for some application-level bugfix and to notice that the surrounding system hasn't had a single upgrade in years. I often raise the issue, hoping that the testing windows can be shared, but that's a double-edged sword: usually people call you with a specific (very urgent) goal in mind and evaluate your work with respect to that goal only. It's also quite common for the application to be scheduled for complete replacement (which is always late) after some years in production, and in that case, of course, it's very hard to plead for any extra work. And it's true that after too many upgrades have been skipped, things can get a bit dangerous.
This is the part where I heartily thank Debian for its stability: it makes applying upgrades automatically a reasonable trade-off. YMMV, but it's also true that very few customers can tolerate a breakage due to an update they didn't ask for (even if hundreds of previous ones happened silently and prevented lots of problems). All of this requires lots of prior explanation and mutual understanding; this is hard.
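On Debian and its derivatives, that trade-off can be acted on with the stock unattended-upgrades package; a minimal setup (the file path is the standard one, and the values shown are one reasonable choice, not the only one):

```shell
# Install the automatic-update machinery
apt-get install unattended-upgrades

# Enable daily list refresh and daily unattended upgrades
cat <<'EOF' > /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```

By default on Debian, unattended-upgrades only pulls from the security suite, which is exactly the "stable distribution, security fixes only" trade-off described above.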
A colleague of mine once even made acknowledgement of such a situation a prerequisite to proceeding further (à la: here's the list of outdated system packages with security issues; notice that it's almost all of them; this wasn't even in our mission, so we consider it your problem to fix; please acknowledge that we can't be held accountable for the consequences of this situation, or let's push the price up a bit if you want us to fix that as well).
This is indeed the kind of organizational dysfunction that the dev-ops movement has been trying to solve, but I fear that dev-ops works better when done from within an organization (usually a tech-savvy one), not by outside contractors. Also, for some people, dev-ops doesn't mean much more than being able to deploy Docker containers without dependency hell; it's easy to forget that these are meant to receive upgrades too, even if there are no changes in the app itself.
To be fair, I'm aware of one exception: hosting companies that, by law, have to abide by mandatory security regulations in specialized fields (this happens, e.g., with health-related personal data in France). Unfortunately, it's bureaucratic and very expensive. If it weren't mandatory for the client too, it wouldn't happen in many cases.
> Or the client who recently asked me if I could do a build for CentOS 3...
Oh, that's a nice one! EOL'ed on 2010-10-01!
Posted May 19, 2017 3:23 UTC (Fri)
by zlynx (guest, #2285)
[Link] (1 responses)
Virtual machines have been great for this. Years ago the company I worked for replaced all of our physical rack servers with a blade thing (Dell maybe?) running VMware ESX. Now our admins clone a machine, install all of the updates and test it, and then they can almost transparently shut down the old copy and replace it.
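As an illustration of that clone-update-swap workflow (sketched here with the libvirt command-line tools rather than VMware ESX; the guest name "webapp" is made up):

```shell
# Clone the production guest, disks and all
virt-clone --original webapp --name webapp-staging --auto-clone

# Boot the clone, apply updates inside it, run your tests against it...
virsh start webapp-staging

# ...and once it passes, retire the old guest and promote the clone.
virsh shutdown webapp
```

The same idea works with ESX templates or cloud instance snapshots; the point is that the update is rehearsed on a disposable copy before it touches production.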
Posted May 19, 2017 10:12 UTC (Fri)
by gracinet (guest, #89400)
[Link]
Yes, of course, but… lots of applications out there have circular logical dependencies, such as having their own URL encoded in the database, the potential to send thousands of emails (again, specified in the DB) once their scheduled tasks fire, data too big to swap around that easily, etc. In short, they aren't designed to make moving back and forth between staging and production easy (of course some are). While you and I would certainly call that a design fault, that kind of thing is often not among the selling criteria, as it's too much of a technician's concern.
Anyway, in the cases I was referring to, with a classic customer/developer/sysadmin separation of concerns, the admins just don't know what the application does, how to test it and with whom to share the results. Unit and integration tests can help, obviously, but they are a developer thing.
Another anecdote: it's been a while now (six years), but I was once in a datacenter, wearing the developer hat, with the functional guy and the sysadmin for a major upgrade of a web application (the first in years), and it really pleased the admin to witness the functional guy actually testing the application. It was the first time he'd even seen it in a browser, and we actually had to do some prior work on the firewalls just to make it accessible from the datacenter network. What drove that exceptional gathering was the perception from management that the upgrade was both necessary and very risky.
What we can hope is that this is mostly a thing of the past, with the dev-ops, microservices, and release-and-test-often mantras slowly taking over, but we shouldn't underestimate the human communication gaps that lie behind all this if we don't want to end up with the same problems just spelled differently.
More broadly, non-technical people in the IT/web business don't trust us to manage priorities: they fear that we drown in our own useless, self-generated pile of work that they don't understand at all. To be fair, I won't swear that never happens. We have to understand their point of view and provide better, more understandable feedback. It's so easy to have contempt for them when one is the only person around who understands what's at stake, and that's why I've been advocating for a while that developers should have project-management experience and vice versa.
As for customers outside the IT business, I've been trying to explain that a computer system is more akin to a living body that needs continuous care (some kind of virtual horse) than to an inert tool, with some, albeit limited, results.