Vulnerability hoarding and Wcry
A virulent ransomware worm attacked a wide swath of Windows machines worldwide in mid-May. The malware, known as Wcry, Wanna, or WannaCry, infected systems at a number of high-profile organizations and struck at critical pieces of infrastructure, such as hospitals, banks, and train stations. While the threat seems to have largely abated—for now—the origin of some of its code, which is apparently the US National Security Agency (NSA), should give one pause.
At this point, 200,000 computers or more have been affected since Wcry started ramping up on May 12. So far, it is unknown how Wcry got its initial foothold on various systems, but it spreads to other computers using code derived from the EternalBlue remote code-execution exploit that is believed to have been developed by the NSA. EternalBlue exploits a bug in the Server Message Block (SMB) implementation in multiple versions of Windows; it was patched for still-supported Windows versions in March. EternalBlue was released as part of the Shadow Brokers trove of leaked NSA exploits in April; the timing of the Microsoft patches led some to wonder whether the company was tipped off about the upcoming leaks in time to fix them before the release. It is also possible that Microsoft was alerted by someone who saw exploits in the wild or that the company found them independently.
Once a system is infected, Wcry does what ransomware is known for: it encrypts files on the system and demands a ransom in Bitcoin to decrypt them. The amount requested is fairly small (on the order of $300-600, at least if it was paid by May 15), but some $71,000 had been collected from more than 250 victims by that deadline. Somewhat ironically, a bug in Wcry allowed the payments to be tracked more easily—the code to generate a Bitcoin wallet per victim suffered from a race condition, so a hardcoded set of three wallets was used by the malware instead.
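The wallet bug is reported only as "a race condition"; as a generic, hypothetical sketch of the check-then-act bug class involved (none of this is actual Wcry code, and all names are invented), consider:

```python
import threading

# Hypothetical illustration of a check-then-act race in per-victim
# resource assignment, and the lock-protected variant that avoids it.

class WalletAssigner:
    def __init__(self):
        self.assigned = {}               # victim id -> wallet
        self.lock = threading.Lock()

    def assign_racy(self, victim, new_wallet):
        # BUG: two threads can both observe "victim not present" and
        # both write, so one freshly generated wallet is silently lost.
        if victim not in self.assigned:
            self.assigned[victim] = new_wallet()
        return self.assigned[victim]

    def assign_safe(self, victim, new_wallet):
        # Holding a lock across the check and the write removes the race:
        # exactly one wallet is ever created per victim.
        with self.lock:
            if victim not in self.assigned:
                self.assigned[victim] = new_wallet()
            return self.assigned[victim]
```

Failing to get this right is presumably what pushed the authors back to a hardcoded set of wallets, which in turn made payment tracking easier.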
Another flaw was the key to the downfall of the initial version of Wcry. The malware had a "kill switch" (which was probably inadvertent) that allowed a security researcher to stop it from doing the encryption simply by registering a domain. The kill switch could simply be a bug in Wcry, but there is another possibility: in an attempt to avoid running its code in a sandboxed environment (where researchers can safely learn more about its inner workings), Wcry tries to connect to a (formerly) unregistered domain. In some sandboxed environments that are used to study malware, all domain names resolve to the sandbox's IP address; by checking whether a connection to a non-existent domain succeeds, the malware could avoid running in those environments. Luckily for the rest of us, registering the domain had the same effect—now all Wcry instances were, in effect, running in a "sandbox".
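The domain-check logic described above can be sketched as follows; this is a hypothetical reconstruction, not the actual Wcry code, and the domain name is a placeholder:

```python
import socket

# Placeholder: the real kill-switch domain was a long pseudo-random name.
PROBE_DOMAIN = "example-unregistered-domain.invalid"

def should_encrypt(resolve=socket.gethostbyname):
    """Return True only if the probe domain does NOT resolve.

    In a sandbox where every name resolves to the sandbox's IP, or once
    the real domain has been registered, the lookup succeeds and the
    malware stands down. The resolver is injected here so the logic can
    be demonstrated without touching the network.
    """
    try:
        resolve(PROBE_DOMAIN)
    except OSError:          # socket.gaierror subclasses OSError
        return True          # name does not exist: assume a real victim
    return False             # name resolved: sandbox, or kill switch hit
```

Registering the domain flipped every infected machine into the "resolved, stand down" branch at once, which is why it worked as a global kill switch.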
There has been a fair amount of finger pointing about Wcry. One of the main targets of the malware has been Windows XP, which has been beyond its end of life since 2014. Given the ferocity and reach of the attack, however, Microsoft did put out a patch for XP and two other no-longer-supported versions (Windows 8 and Windows Server 2003). That led some to criticize the software giant for not doing so back in March. The company deflected the criticism in the direction of the NSA, which is, in truth, where much of the real problem lies.
Intelligence organizations like the NSA and countless other agencies throughout the world are extremely interested in vulnerabilities that can be used to gather data or are beneficial to their missions in other ways. However, those same exploits can be used in more mundane ways—shaking down businesses for Bitcoin, for instance. The agencies can only benefit from the vulnerabilities as long as they remain unpatched, which gives them plenty of reason to keep them secret. Though, as we have seen with Wcry, the existence of a two-month-old patch is hardly an insurmountable barrier for an exploit to overcome.
One might argue that as long as the agencies hold the vulnerability information close to the vest—and do not bungle the security of their trove of "weaponized exploits" as the NSA evidently did—there should be little harm to those who are not targets. That is wishful thinking, however. The longer a vulnerability exists, the more people come to know about it, perhaps only in general terms, but that is often enough for skilled security researchers (of any colored hat) to figure out what it is. In particular, if a vulnerability is exploited in the wild, it is even more likely to be exposed to others. Exposure can happen in other ways as well, including the vulnerability being discovered, but not reported, long before the agency became aware of it. Gathering and hiding vulnerabilities is a kind of insecurity through obscurity; as a defense, it works about as well as its better-known sibling.
If the companies that are responsible for the software—or, in the free-software world, the projects—are not getting any information about the vulnerabilities, it is hard to see how they can be blamed for not fixing them. Things become trickier for older, unsupported versions, though. By fixing XP this time, Microsoft may have set itself up for future calls to patch the 16-year-old system.
Of course, Microsoft is hardly alone in having a lot of its older software still running in the wild. Closer to home, Android is in some ways in a similar spot. There are huge numbers of Android devices out there with unpatched vulnerabilities; some of the vulnerabilities are already known, but there are undoubtedly others being hoarded by agencies and malicious players. Some kind of attack that threatened a huge swath of Android devices might well lead to calls for patches to age-old kernels.
Unfortunately, in the Android case, even patching the kernels may not be enough—updates in the embedded space are nowhere near as easy as they are for desktops. Even though they may be easier for desktops, though, that doesn't mean that updates get applied, sadly. Thus the need for better self protection in our software. Whacking moles as the sole means of protecting our systems is not tenable any more.
The real culprits in this matter are, of course, those who created Wcry and loosed it on the world. There has been speculation that North Korea had a hand in it, based on some of the code in early variants of Wcry. That code has been linked to the Lazarus Group, which in turn has been connected to the North Korean government. All of the evidence, which is fairly scant, is circumstantial at best; there are plenty of other groups, countries, and organizations that might have an interest in a worldwide cyber attack. In the end, for the victims, it hardly matters who was behind it.
Index entries for this article:
Security: Bug reporting
Posted May 17, 2017 19:28 UTC (Wed)
by fratti (guest, #105722)
[Link] (31 responses)
Lesson number one. People need to keep their systems up to date. I don't feel like enough education is happening in this field as to how important it is that your system is up to date and still receiving security patches. Just today I've had a conversation with someone who claimed that his Debian Squeeze system was still secure, because (and I quote) "internet banking works". For those too lazy to look this up, Debian Squeeze has not been receiving security fixes since February 2016.
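As a rough illustration of checking whether a Debian release still receives security support, here is a minimal sketch; the squeeze date matches the February 2016 cutoff mentioned above, while the other dates are assumptions that should be verified against the Debian LTS pages:

```python
import datetime

# Approximate end of security (LTS) support per release codename.
# "squeeze" matches the date cited in the comment; the rest are
# illustrative and should be checked against official Debian sources.
SECURITY_SUPPORT_ENDS = {
    "squeeze": datetime.date(2016, 2, 29),
    "wheezy":  datetime.date(2018, 5, 31),
    "jessie":  datetime.date(2020, 6, 30),
}

def still_supported(codename, today):
    """Return True if the release still received security fixes on `today`."""
    end = SECURITY_SUPPORT_ENDS.get(codename)
    if end is None:
        raise ValueError("unknown release: " + codename)
    return today <= end
```

"Internet banking works" is, of course, not a test this function would accept as evidence of support.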
I also feel like there needs to be government regulations about software updates on mobile devices. While Android devices are not affected by this specific ransomware, the article rightfully points out that Android has a big issue in this regard. Since we're not allowed by most manufacturers to run whatever software we want on our smartphones, the manufacturers need to be held to account for any problems that arise from this. Ideally, they wouldn't be allowed to lock down devices like that in the first place, but baby steps.
Lesson number two. People will run old and vulnerable software, so when some contractor sets up a system as a one-time job they should make sure everything non-essential for the running of the system is behind a firewall or disabled altogether. I don't see why a train time table needs to have its SMB server exposed to either the local network or the world, but apparently whoever set it up didn't think of that.
Additionally, I feel like there's a disconnect between clients and contractors about purpose-built systems like this; clients believe that it is a one-time job with no on-going maintenance involved, while contractors know better but choose not to correct the clients in that belief.
Finally, I think the whole discussion as to whether the NSA is ultimately at fault here is orthogonal to the issue. Personally, I do believe hoarding vulnerabilities in civilian software infrastructure for espionage purposes is both immoral and dangerous, but I also think that the NSA not finding those vulnerabilities would not make the world any more secure. The vulnerabilities exist whether the NSA finds them or whether someone else does. Even if Microsoft had quietly rolled out the patches in March without there having been a convenient PoC exploit tool for it, bad actors will find the vulnerabilities by looking at what the patches changed. This has been a thing that malvertisement authors have been doing with Adobe Flash updates for a long time; they bet on people not having patched their systems.
There is much that we can do better even without complex self-protection mechanisms, and it all starts with educating people about the dangers of unmonitored badly maintained systems.
Posted May 17, 2017 19:42 UTC (Wed)
by pizza (subscriber, #46)
[Link] (14 responses)
Part of folks' reluctance to do this is that these "updates" routinely come with intentional side effects -- an extreme example is Windows 10. Or how Netflix will now refuse to install on unlocked/rooted devices.
Vendors have a pretty poor track record, and they're getting worse, not better.
Posted May 17, 2017 19:55 UTC (Wed)
by fratti (guest, #105722)
[Link] (9 responses)
In the case of strictly security-only updates for still-supported software, I don't think that's the reason, though. There was no excuse for anyone running Windows 7 to get affected by this exploit, but people did.
Posted May 17, 2017 20:01 UTC (Wed)
by pizza (subscriber, #46)
[Link] (3 responses)
I know a lot of people who turned off Windows 7 updates as a response to Microsoft's increasingly-underhanded attempts to force them to update to Win10 via Windows Update.
Posted May 18, 2017 4:58 UTC (Thu)
by eru (subscriber, #2753)
[Link] (2 responses)
I know a lot of people who turned off Windows 7 updates as a response to Microsoft's increasingly-underhanded attempts to force them to update to Win10 via Windows Update.
Another reason is the unreliability of the Windows update software itself! I have seen it get wedged for good on three different Windows versions on home laptops: it tries to update, wastes fifteen minutes, and then gives up with a hex guru meditation. I last saw this on the WannaCry weekend; I checked whether the Windows 10 laptop was up to date and noticed the effect again. The log was full of failed attempts. Fortunately, the last successful update was sometime in April, so it is possibly patched against the SMB issue in question. But the next time it might not be so lucky.
Posted May 18, 2017 9:47 UTC (Thu)
by cpanceac (guest, #80967)
[Link] (1 responses)
Posted May 18, 2017 18:42 UTC (Thu)
by drag (guest, #31333)
[Link]
It's rare that I have to deal with this though. Usually only when people come to me with a jacked up PC and they want me to fix it.
Posted May 17, 2017 23:43 UTC (Wed)
by simcop2387 (subscriber, #101710)
[Link]
Because of the thought, "If it's not broken don't fix it". The problem is getting them to understand that it can still be broken behind the scenes where you can't see anything. Just like a leaky pipe under a building eventually causing foundation damage or a sinkhole.
Posted May 18, 2017 9:59 UTC (Thu)
by NAR (subscriber, #1313)
[Link] (3 responses)
Not much of an excuse, but pirated (non-activated) copies of Windows 7 might not be able to get updated.
Posted May 20, 2017 13:35 UTC (Sat)
by biergaizi (subscriber, #92498)
[Link] (1 responses)
I believe most Windows users see security updates as an annoyance, even if Windows Update itself is reliable. Patches pop up every few days and strongly push users to update, and users who don't understand the value of security updates just hate it... Large organizations also disable updates to ensure the consistency of their systems and to prevent updates from interrupting their workflow.
Posted May 20, 2017 21:59 UTC (Sat)
by nix (subscriber, #2304)
[Link]
Posted May 21, 2017 16:45 UTC (Sun)
by flussence (guest, #85566)
[Link]
Posted May 20, 2017 0:11 UTC (Sat)
by giraffedata (guest, #1954)
[Link]
And another part is the unintentional side effects - bugs.
I decided a while ago not to apply updates as they come out. I believe my risk of breaking something exceeds my risk of being hacked. I'd love to see a scientific study of that; my gut feeling is just based on the fact that I haven't been hacked yet and I've broken my system, sometimes very badly, dozens of times by applying updates.
The worst breakage-by-update that has happened to me so far is from the recent trend in browser publishers to discontinue the ability to use insecure communication protocols. Unfortunately for me, there are a bunch of servers I need to access that use these protocols. I was naive when I updated those browsers, not realizing backward compatibility is not as sacred as it used to be.
The only way to eliminate this update dilemma is to have finer grained updates through smaller software modules. If you didn't have to install thousands of kernel or browser updates to get one security fix, applying security fixes wouldn't be as risky.
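The trade-off being argued here can be sketched with a toy model (all module names are invented for illustration):

```python
# Toy model of update granularity: with one monolithic package, taking a
# single security fix drags in every other change; with per-module
# packages, only the module carrying the fix is touched.

changes = [
    {"module": "net",    "security": True},   # the one fix we need
    {"module": "ui",     "security": False},  # unrelated churn...
    {"module": "render", "security": False},
    {"module": "cache",  "security": False},
]

def monolithic_update(changes):
    # One big package: any security fix forces accepting all changes.
    return changes if any(c["security"] for c in changes) else []

def modular_update(changes):
    # Finer-grained packages: only modules with a security fix change.
    return [c for c in changes if c["security"]]
```

The smaller the set of changes riding along with a fix, the smaller the regression risk of applying it, which is the commenter's point.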
Posted May 20, 2017 17:09 UTC (Sat)
by gezza (subscriber, #40700)
[Link] (1 responses)
Android was mentioned in the article. For me, when an update comes with a demand for new access rights, there is an immediate dilemma: do I apply it or not?
So yes, I should invest the time in Cyanogen-Mod, and fine tune the access anything has. Who really has the time for that, on every system they use?
Posted May 25, 2017 20:19 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Part of the trouble is quite likely all of the crapware on the system that shouldn't be there!
Cheers,
Wol
Posted May 22, 2017 13:22 UTC (Mon)
by bandrami (guest, #94229)
[Link]
Posted May 17, 2017 19:46 UTC (Wed)
by NightMonkey (subscriber, #23051)
[Link]
There is a fundamental problem in the metaphors and abstractions we have made for computing resources and networks. The "use case" for open networks changed when the Internet was handed over from academic and defense research institutions to business, and we have suffered from that "scope creep" ever since.
On the topic of the NSA creating 'weapons' exploiting bugs and design flaws... Edward Snowden made plain the dangers that the NSA and other intelligence agencies present to ordinary people. And in making this plain, he showed that these organizations cannot be trusted with their digital assets. From the massive data vacuums they have created, to weaponized math (which is what software is), they are quite cavalier in how they secure these resources, and are creating dangerous threats where none existed before.
Posted May 18, 2017 4:28 UTC (Thu)
by pabs (subscriber, #43278)
[Link]
Posted May 18, 2017 9:00 UTC (Thu)
by Seegras (guest, #20463)
[Link] (3 responses)
This is beside the point. Demand for those exploits by secret services and law enforcement agencies has led to a sprawling industry trading in zero-days. Here's an example of what we're talking about:
https://www.zerodium.com/program.html
The NSA could start by not hoarding vulnerabilities for instance. But to really make the world more secure, the NSA would need to search for vulnerabilities and _publish_ them.
The cool thing about publishing a vulnerability is that it also denies the use of that vulnerability to your enemies. So if you want to increase security, your only option is to publish.
In fact, there is one thing that distinguishes the White Hats from the others. White Hats publish.
Right now, the NSA is as malicious a Black Hat as it gets.
Posted May 18, 2017 20:48 UTC (Thu)
by HenrikH (subscriber, #31152)
[Link]
Posted May 19, 2017 12:05 UTC (Fri)
by NAR (subscriber, #1313)
[Link] (1 responses)
I'm not sure it's their job to make the world secure. Making the US government computer network more secure is part of their job, but for example making random computers in the Brussels neighborhood of Molenbeek more secure interferes with their job.
Posted May 19, 2017 14:09 UTC (Fri)
by excors (subscriber, #95769)
[Link]
And for the US in particular, cyber warfare almost completely bypasses their conventional military advantage. A group of smart motivated hackers in North Korea with a few million dollars to buy zero-day vulnerabilities could cause as much damage to US computers as the US could to theirs. Better to eliminate that threat globally by improving security for everyone, so that warfare has to instead be done with missiles and trillion-dollar planes where the US has a big lead.
Posted May 18, 2017 10:30 UTC (Thu)
by jschrod (subscriber, #1646)
[Link] (4 responses)
I'm the CEO of a consulting company, and my experience is different. Contractors know about the need for maintenance and tell their customers. After all, they want maintenance contracts; these are a very good way to earn money: you have low acquisition costs and remain in contact with the client to check what other needs he has that one can help to solve (and earn money...).
But customers often don't allocate a budget for on-going maintenance; they don't see the business need for it. Or the IT department sees the business need, but the C[EFO]O doesn't. (Actually, events like WannaCry are good in this regard; they help to illustrate the business case.)
Upfront charging for on-going maintenance is only possible for mass-market software; for bespoke software it would raise the price to a point where one is no longer competitive in the market.
I.e., the state of affairs is even more complicated than you presented.
Posted May 23, 2017 3:19 UTC (Tue)
by ringerc (subscriber, #3071)
[Link] (3 responses)
Posted May 23, 2017 3:47 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted May 27, 2017 3:49 UTC (Sat)
by ghane (guest, #1805)
[Link] (1 responses)
Yes, but if I _have_ to be unhappy, why not be unhappy at some point in the future? Why be unhappy now?
(And all reboots are dangerous, you never know what unsaved configs you are running)
It is not only CEOs who have short-term objectives; it is sysadmins too.
--
Sanjeev, who is a smoker. Not died even once yet. So there!
Posted May 29, 2017 4:40 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
Posted May 18, 2017 13:26 UTC (Thu)
by habarnam (guest, #61672)
[Link]
Indeed, but in the case of the good guys finding them, responsible disclosure should be the first step, not storing them for darker days. I think this is, or should be, the basis of most of the NSA directed criticism.
Posted May 18, 2017 20:53 UTC (Thu)
by HenrikH (subscriber, #31152)
[Link] (3 responses)
Indeed. I routinely get questions from customers about how they should upgrade our particular software, and then they tell me that they are currently using a version that we released several years ago. My agony with this is not that they have not updated our particular software suite, but that we put software on their machines via both DEB and RPM repositories, which means that they have not even run "apt upgrade" or "yum update" in all that time either.
Or the client who recently asked me if I could do a build for CentOS 3...
Posted May 18, 2017 22:31 UTC (Thu)
by gracinet (guest, #89400)
[Link] (2 responses)
> Indeed. I routinely get questions from customers about how they should upgrade our particular software, and then they tell me that they are currently using a version that we released several years ago. My agony with this is not that they have not updated our particular software suite, but that we put software on their machines via both DEB and RPM repositories, which means that they have not even run "apt upgrade" or "yum update" in all that time either.
I don't know if that's your case, but with the kind of tailored software I'm producing (nothing technically fancy, just piles of business rules), the root cause is often that the admins just don't dare doing it, being too afraid to create breakage of applications they can't even test on their own. Instead, they rely too much on what they can actually maintain: the surrounding infrastructure, firewalls etc.
So, it's quite common in my experience to be called in for some application-level bugfix and to notice that the surrounding system hasn't had a single upgrade in years. I often raise the issue, hoping that the testing windows can be shared, but that's a double-edged sword: usually people call you with a specific (very urgent) goal in mind, and evaluate your work with respect to that goal only. It's also quite common for the application to be scheduled for complete replacement (which is always late) after some years of production, and in that case, of course, it's very hard to plead for any extra work. And it's true that after too many upgrades have been skipped, things can get a bit dangerous.
This is the part where I heartily thank Debian for its stability: it makes applying upgrades automatically a reasonable trade-off. YMMV, but it's also true that there are very few customers that can tolerate a breakage due to an update they didn't ask for (even if hundreds of previous ones happened silently and prevented lots of problems). All of this requires lots of prior explanation and mutual understanding: this is hard.
A colleague of mine even once made the acknowledgement of a situation of that kind a prerequisite to proceeding further (à la: here's the list of outdated system packages with security issues, please notice that's almost all of them, that wasn't even in our mission, so we consider it's your problem to fix that, please acknowledge that we can't be accountable about consequences of that situation or let's push the price up a bit if you want us to fix that also).
This is indeed the kind of human organizational dysfunction that the dev-ops movement has been trying to solve, but I fear that dev-ops works better when it's done from within an organization (usually a tech-savvy one), not by outside contractors. Also, for some people, dev-ops doesn't mean much more than being able to deploy Docker containers without dependency hell; it's easy to forget that these are meant to get upgrades too, even if there are no changes in the app itself.
To be fair, I'm aware of an exception: hosting companies that, by law, have to abide by mandatory security regulations in specialized fields (as happens, e.g., with health-related personal data in France). Unfortunately, it's bureaucratic and very expensive. If it weren't mandatory for the client too, it wouldn't happen in many cases.
> Or the client who recently asked me if I could do a build for CentOS 3...
Oh, that's a nice one! EOL'ed on 2010-10-01!
Posted May 19, 2017 3:23 UTC (Fri)
by zlynx (guest, #2285)
[Link] (1 responses)
Virtual machines have been great for this. Years ago the company I worked for replaced all of our physical rack servers with a blade thing (Dell maybe?) running VMware ESX. Now our admins clone a machine, install all of the updates and test it, and then they can almost transparently shut down the old copy and replace it.
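The clone-update-swap workflow can be sketched abstractly; here the dict-based "inventory" stands in for a real hypervisor API (VMware's or libvirt's would look quite different), so this only illustrates the control flow, not actual tooling:

```python
# Sketch of a clone-then-swap update: patch a clone, smoke-test it, and
# only replace the running VM once the clone passes. The hypervisor is
# modeled as a plain dict mapping VM names to state.

def rolling_update(inventory, name, apply_updates, smoke_test):
    clone_name = name + "-staged"
    inventory[clone_name] = dict(inventory[name])   # clone the VM state
    apply_updates(inventory[clone_name])            # patch only the clone
    if not smoke_test(inventory[clone_name]):       # test before swapping
        del inventory[clone_name]                   # discard a bad clone
        return False                                # production untouched
    inventory[name] = inventory.pop(clone_name)     # near-transparent swap
    return True
```

The appeal is exactly what the comment describes: a failed update never touches the machine users are on.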
Posted May 19, 2017 10:12 UTC (Fri)
by gracinet (guest, #89400)
[Link]
Yes, of course, but… lots of applications out there have circular logical dependencies, such as having their own URL encoded in the database, the potential to send thousands of emails (again, specified in the DB) once their scheduled tasks fire, data too big to swap that easily, etc. In short, they aren't designed to make moving back and forth between staging and production easy (of course some are). While you and I would certainly call that a design fault, that kind of thing is often not among the selling criteria, as it's too much of a technician's concern.
Anyway, in the cases I was referring to, with a classic customer/developer/sysadmin separation of concerns, the admins just don't know what the application does, how to test it and with whom to share the results. Unit and integration tests can help, obviously, but they are a developer thing.
Another anecdote: it's been a while now (6 years), but once I was in a datacenter, wearing the developer hat, together with the functional guy and the sysadmin, for a major upgrade of a web application (the first in years), and it really pleased the admin to witness the functional guy actually testing the application. It was the first time he'd even seen it in a browser, and we actually had to do some prior work on the firewalls to make it possible to access it from the datacenter network. The thing that drove that exceptional gathering was the perception from management that the upgrade was both necessary and very risky.
What we can hope for is that this is mostly a thing of the past, with the dev-ops, microservices, and release-and-test-often mantras slowly taking over, but we shouldn't underestimate the human communication gaps that lie behind all this if we don't want to end up with the same problems just spelled differently.
More broadly, non-technical people in the IT/web business don't trust us to manage priorities: they fear that we drown in our own useless, self-generated pile of work that they don't understand at all. In their defense, I won't swear that never happens. We have to understand their point of view and provide better, more understandable feedback. It's easy to feel contempt for them when one is the only one around who understands what's at stake, and that's why I've been advocating for a while that developers should have project management experience and vice versa.
As for customers outside the IT business, I've been trying to explain that a computer system is more akin to a living body that needs continuous care (some kind of virtual horse) than to an inert tool, with some, if limited, results.
Posted May 18, 2017 9:51 UTC (Thu)
by cpanceac (guest, #80967)
[Link] (1 responses)
I have a question: the software does not belong to the user, it's licensed for use by Microsoft, right? So how come this is not Microsoft's problem?
Posted May 20, 2017 0:23 UTC (Sat)
by giraffedata (guest, #1954)
[Link]
Why would that make it Microsoft's problem?
And which problem are you talking about - the problem that people had to pay ransom and/or were deprived of their computers for a while, or the problem that people need to update their Windows?
Posted May 18, 2017 12:31 UTC (Thu)
by triddell (guest, #90933)
[Link] (1 responses)
It seems updates for Windows 8 will be available for some time to come: https://support.microsoft.com/en-ca/help/13853/windows-li...
Did the author mean Vista perhaps?
Posted May 18, 2017 12:59 UTC (Thu)
by smcv (subscriber, #53363)
[Link]
This is analogous to how, for example, Ubuntu 16.04 LTS users are expected to upgrade to the 16.04.2 point release to get continued support, even if they do not want to upgrade to a newer major version like 16.10.
You might reasonably think "well, obviously, if you don't update then you don't get updates" but historically Microsoft has made some effort to make fixes individually applicable to older versions for a while, as a response to users' unwillingness to risk regressions by applying service packs and other large updates early (or in some cases at all).
If I understand correctly, the complexity required to support applying arbitrary combinations of patches (and for that matter detecting which ones are missing) is a large part of why Windows Update is so slow and horrible, particularly on fresh installs of old Windows releases where bringing the system up to date requires a huge number of patches. Linux distributions have tended to dodge this by having well-defined packages with incrementing version numbers, and refusing to support anything other than a linear sequence of upgrades per package: if foobar version 1.2.3-4 fixed CVE-2014-12345 (but introduced a regression) and foobar 1.2.3-5 fixed CVE-2014-54321, then you can't opt to install the fix for CVE-2014-54321 but remain vulnerable to -12345, except by rebuilding foobar yourself (at which point you are the OS vendor for a very small fork).
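The linear-sequence model can be sketched as follows; the package name, versions, and CVE numbers come from the example above, and the changelog structure is invented for illustration:

```python
# In the dpkg/rpm model, fixes are cumulative per package: installing
# 1.2.3-5 necessarily includes everything 1.2.3-4 carried. There is no
# supported way to take 1.2.3-5's fix while skipping 1.2.3-4's.

CHANGELOG = [                       # oldest first, one entry per upload
    ("1.2.3-3", []),
    ("1.2.3-4", ["CVE-2014-12345"]),
    ("1.2.3-5", ["CVE-2014-54321"]),
]

def fixes_in(installed):
    """All CVEs fixed at or below the installed version."""
    fixed = []
    for version, cves in CHANGELOG:
        fixed.extend(cves)
        if version == installed:
            return set(fixed)
    raise ValueError("unknown version: " + installed)
```

Detecting what a machine is missing reduces to one version comparison per package, instead of Windows Update's search over arbitrary patch combinations.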
The other reasons we don't suffer from this in FOSS distributions to the extent that Microsoft does are that our major-version updates are free of charge, so some perverse financial incentives go away, leaving technical decisions (like how much regression risk to accept) as the only factor in how far to upgrade; and that if anyone feels sufficiently strongly that a particular distribution is doing it wrong (for example introducing too many regressions in their updates), forking the distribution is always an option.
Posted May 18, 2017 20:28 UTC (Thu)
by jchaxby (subscriber, #63942)
[Link] (2 responses)
So far so obvious.
We all know, now, that unpatched machines are going to get exploited.
What's unconscionable though is people deploying systems both from a distro revision that's years out of date and not even considering the processes by which it needs to be updated and, eventually upgraded or decommissioned.
It doesn't matter if it's home routers, MRI scanners or cloud data centres; they need to be *designed* so that their embedded software can be updated and upgraded for as long as the system still works.
You can probably get away with a $10 router not getting updates after two or three years but it needs to be sold under the premise that at the end of that time it _will_ get an update that will deactivate it and you can then recycle it or trade it in for an upgrade.
It's completely unacceptable for some medical bit of kit costing millions that might be running for decades to be designed with no way to update its embedded OS or eventually upgrade it to something supportable. (WinXP in MRI scanners?)
We know how to do this; we've all been designing distros that can be updated (and in some cases upgraded) for years now. It's about time that all that got put into practice.
Posted May 25, 2017 20:35 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
Shades of the company that complained that, with all the PCs moving over to USB, it was getting harder and harder to get computers with RS232 ports to drive the peripherals. "Well, get new peripherals, then" was the response of the guy they were complaining to (it might have been Bill Gates, spec'ing that new PCs should have USB not serial).
Problem was the guy complaining had a LOT of said peripherals, at typically $250K or more each ...
Cheers,
Wol
Posted May 27, 2017 14:49 UTC (Sat)
by flussence (guest, #85566)
[Link]
Posted May 21, 2017 7:28 UTC (Sun)
by johnjones (guest, #5462)
[Link] (1 responses)
So basically Windows 7 did not get the update and we have this problem... the only organisation that is a problem here is Microsoft.
Posted May 23, 2017 21:20 UTC (Tue)
by coolhandluke (guest, #114151)
[Link]
they had an amazing Press team that blamed others...