
CrowdStrike IT outage affected 8.5 million Windows devices, Microsoft says

Microsoft says it estimates that 8.5m computers around the world were disabled by the global IT outage.

It’s the first time a figure has been put on the incident, and it suggests this could be the worst cyber event in history.

The glitch came from a security company called CrowdStrike, which sent out a corrupted software update to its huge number of customers.

Microsoft, which is helping customers recover, said in a blog post: “We currently estimate that CrowdStrike’s update affected 8.5 million Windows devices.”

autotldr Bot ,

This is the best summary I could come up with:


Microsoft says it estimates that 8.5m computers around the world were disabled by the global IT outage. It’s the first time that a number has been put on the incident, which is still causing problems around the world. The glitch came from a cyber security company called CrowdStrike, which sent out a corrupted software update to its huge number of customers. Microsoft, which is helping customers recover, said in a blog post: “We currently estimate that CrowdStrike’s update affected 8.5 million Windows devices.”

The post by David Weston, vice-president, enterprise and OS at the firm, says this number is less than 1% of all Windows machines worldwide, but that “the broad economic and societal impacts reflect the use of CrowdStrike by enterprises that run many critical services”. The company can be very accurate about how many devices were disabled by the outage, as it has performance telemetry from many of them via their internet connections. The tech giant - which was keen to point out that this was not an issue with its software - says the incident highlights how important it is for companies such as CrowdStrike to use quality control checks on updates before sending them out. “It’s also a reminder of how important it is for all of us across the tech ecosystem to prioritize operating with safe deployment and disaster recovery using the mechanisms that exist,” Mr Weston said.

The fallout from the IT glitch has been enormous, and it was already one of the worst cyber-incidents in history. The number given by Microsoft means it is probably the largest ever cyber-event, eclipsing all previous hacks and outages. The closest to this is the WannaCry cyber-attack in 2017, which is estimated to have impacted around 300,000 computers in 150 countries.

There was a similar costly and disruptive attack called NotPetya a month later. There was also a major six-hour outage in 2021 at Meta, which runs Instagram, Facebook and WhatsApp. But that was largely contained to the social media giant and some linked partners.

The massive outage has also prompted warnings by cyber-security experts and agencies around the world about a wave of opportunistic hacking attempts linked to the IT outage. Cyber agencies in the UK and Australia are warning people to be vigilant to fake emails, calls and websites that pretend to be official. And CrowdStrike head George Kurtz encouraged users to make sure they were speaking to official representatives from the company before downloading fixes.

“We know that adversaries and bad actors will try to exploit events like this,” he said in a blog post. Whenever there is a major news event, especially one linked to technology, hackers respond by tweaking their existing methods to take into account the fear and uncertainty. According to researchers at Secureworks, there has already been a sharp rise in CrowdStrike-themed domain registrations – hackers registering new websites made to look official and potentially trick IT managers or members of the public into downloading malicious software or handing over private details. Cyber security agencies around the world have urged IT responders to only use CrowdStrike’s website to source information and help. The advice is mainly for IT managers, who are the ones affected by this as they try to get their organisations back online. But individuals might be targeted too, so experts are warning people to be hyper-vigilant and only act on information from official CrowdStrike channels.


The original article contains 551 words, the summary contains 552 words. Saved -0%. I’m a bot and I’m open source!

dentoid ,
@dentoid@sopuli.xyz avatar

Upvoted just for the tagline “reduced article from 551 to 552 words” 😁 Wacky bot

timewarp ,
@timewarp@lemmy.world avatar

CrowdStrike will ultimately have contract terms that put responsibility on the companies, and truth be told the companies should be able to handle this situation with relative ease. Maybe the discussion here should be on the fragility of Windows and why Linux is a better option.

avidamoeba ,
@avidamoeba@lemmy.ca avatar

Linux could have easily been bricked in a similar fashion by pushing a bad kernel or kernel module update that wasn’t tested enough. Not saying it’s the same as Windows, but this particular scenario where someone can push a system component just like that can fuck up both.

rozodru ,
@rozodru@lemmy.ca avatar

yeah but with Linux if that were the case it’s an easy fix. it’s not locked behind something like bitlocker. I mean I’m on an Arch distro which…yeah…I break all the time, including the kernel. the fix is simple. before I get too deep into something I always have my snapshots on an external drive that is updated at boot, twice a week, and 3 times a month. If I fuck it up I may, at most, lose a couple days of changes. and with Borg all my data is automatically backed up constantly so it’s not an issue.

worst comes to worst, if all that fails I can easily reinstall with the ISOs I have (or use it as an excuse to try out a different distro). And with distros today it takes all of 5 min to reinstall the OS.

hydrashok ,

Tell me you’ve never administered at scale without telling me you’ve never administered at scale.

magikmw ,

Bruh, disk encryption is not optional in many environments and dealing with unbootable LUKS Linux is pretty much on par with an unbootable Bitlocker Windows machine.

timewarp ,
@timewarp@lemmy.world avatar

Yes it can, but a kernel update is a completely different scenario, and managed individually by companies as part of their upgrades. It is usually tested and rolled out incrementally.

Furthermore, Linux doesn’t blue screen. I know some scenarios where Linux has issues, but I can count on one finger the amount of times I’ve had an update cause issues booting… and that was because I was using some newer encryption settings as part of systemd.

However, it would take all my fingers & toes, and then some, to count the number of blue screens I’ve gotten with Windows… and I don’t think I’m alone in that regard.

catloaf ,

Linux doesn’t blue screen, no. A kernel panic is a black screen.

huginn ,

And you’re running corporate kernel level security software on your encrypted Linux server?

timewarp ,
@timewarp@lemmy.world avatar

I guess it depends on what you consider corporate kernel level security. Would that include AppArmor, SELinux, and other tools that are open-source but used in some of the most secure corporate and government environments? Or are you asking if I’m running proprietary untrusted code on a Linux server with access to the system kernel?

Darkassassin07 ,
@Darkassassin07@lemmy.ca avatar

Terms which should be void as this update was pushed to systems that explicitly disabled automatic updates.

Companies were literally raped by Crowdstrike.

/edit Sauce (bottom paragraph)

timewarp ,
@timewarp@lemmy.world avatar

Companies were not raped by CrowdStrike. They were raped by their own ineptitude.

Nowhere have I seen evidence that these updates were disabled and still got pushed. I’m not saying it is impossible, but it’s unlikely if they followed any common sense and best practices. Usually, you’d be monitoring traffic and asking yourself why it is still checking for updates despite being disabled, before deploying it to your entire IT infrastructure.

I see a lot of bad faith arguments here against CrowdStrike. I agree that they messed up, but it pales in comparison in my book to how messed up these companies are for not doing any basic planning around IT infrastructure & automation to be able to recover quickly.

ricecake ,

In this case, it’s really not a Linux/windows thing except by the most tenuous reasoning.

A corrupted piece of kernel level software is going to cause issues in any OS.
CrowdStrike itself has actually caused kernel panics on Linux before, albeit less because of a corrupted driver and more because of programming choices interacting with kernel behavior. (Two bugs: you shouldn’t have done that, and it shouldn’t have let you.)

Tenuously, Linux is a better choice because it doesn’t need this type of software as much. It’s easier and more efficient to do packet inspection via dedicated firewall for infrastructure, and the other parts are already handled by automation and reporting tools you already use.
You still need something in this category if you need to solve the exact problem of “realtime network and filesystem event monitoring on each host”, but Linux makes it easier to get right up to that point without diving into the kernel.
Also, vendor-managed auto-updates are just less of a thing on Linux, so it’s more the cultural norm to manage updates in a way that’s conducive to the kind of staggering that would have caught this.

Contract-wise, I’m less confident that CrowdStrike has favorable terms.
It’s usually consumers who are saddled with atrocious terms, because they have neither the power nor the interest to dig into the specifics too far.
Businesses, particularly ones that need or are interested in this category of software, inevitably have lawyers to go over contract terms in much more detail, and much more ability to refuse terms and have it matter to the vendor. United Airlines isn’t going to accept contract terms of caveat emptor.

timewarp ,
@timewarp@lemmy.world avatar

You assume that businesses operate in good faith. That they thoroughly review contracts to ensure that they are fair and in the best interests of all their employees. Do you really think Greg, a VP of Cloud Solutions who makes 500k a year, who gets his IT advice on the golf course from AWS, Microsoft, & Oracle reps, who gets wined & dined by those reps almost weekly plus a speaking spot at re:Invent, and who believes Gartner when it says spending $5 million a month on cloud hosting and $90/TB on egress traffic is normal, has the company’s best interests in mind?

I’ve seen companies pay millions for things they never used, or that weren’t ever provided by the vendor. You go to your managers and say… “hey, why are we paying for this?” and suddenly you’re the bad guy. I’d love for you to prove me wrong. I’ve found pockets of progress before, within isolated teams when a manager wanted to actually accomplish something. It never lasts though… it’s like being an ice cube in a glass full of warm water.

danc4498 ,

I wonder how much this cost people & businesses.

For instance, people’s flights were canceled because of this resulting in them having to stay in hotels overnight. I’m sure there’s many other examples.

TexasDrunk ,

For businesses, a lot of them are hiring IT companies (consultants, MSPs, VARs, and whoever the hell else they can get) at a couple to a few hundred bucks an hour per person to get boots on the ground to fix it. Some of them have everyone below the C levels with any sort of technical background doing entry level work so there’s also lost opportunity cost.

I was in that industry for a long time and still have a lot of colleagues there. There’s a guy I know making almost $200k/yr out there at desks trying to help fix it. He moved into an SRE role years ago so that’s languishing this week while he’s going desk to desk and office to office with support staff and IT contractors.

At least two large companies have an API where they’re paying for a pile of compute and currently have a small fraction of use. Companies are paying to use those APIs but can’t.

I don’t know if there’s a good way to actually figure out how much this is costing because there are so many variables. But you can bet there are a few people at the top funneling that money directly to themselves, never to be seen again.

rozodru ,
@rozodru@lemmy.ca avatar

cool, now do away with bitlocker so it won’t happen again. if the easiest solution was to boot into safe mode and either delete the empty sys file or rename/delete the folder, then that should have been that. you shouldn’t lock the access needed to boot into safe or recovery mode behind bitlocker, where your recovery codes are…gee willykers…on servers tied in with crowdstrike.

Even before all this when I still used Windows, bitlocker was such a pain in the ass to deal with.

SketchySeaBeast ,
@SketchySeaBeast@lemmy.ca avatar

“Don’t encrypt your drives containing sensitive company data” is a hard sell.

r00ty Admin ,
r00ty avatar

I think there's a good argument for bitlocker on laptops.

It's much less of a sell for servers and workstations in what should be secure locations.

Having said that, where I work they just enabled an enforced Windows Hello PIN, numeric only, with a minimum of 6 digits. Seems like a pretty good way to entirely negate the protection bitlocker provides. But hey ho.

Mothra ,
@Mothra@mander.xyz avatar

8.5M worldwide? I was expecting higher numbers, interesting

ArtVandelay ,
@ArtVandelay@lemmy.world avatar

Even if 8.5m is correct, with many of those being servers, the total number of people affected is much, much higher.

negativenull ,
@negativenull@lemmy.world avatar

The downstream effects are likely much much greater. If an auth server/DB server/API server/etc (for example) got taken down, the failure cascades

teejay ,

The idea that any such servers would be running windows… shudder

thisbenzingring ,

All I know is that I had to personally fix 450 servers myself, and that doesn’t include the workstations that are probably still broken and will need to be fixed on Monday

😮‍💨

qjkxbmwvz ,

Is there any automation available for this? Do you fix them sequentially or can you parallelize the process? How long did it take to fix 450?

Real clustermess, but curious what fixing it looks like for the boots on the ground.

magikmw ,

You need to boot into emergency mode and replace a file. Afaik it’s not very automatable.

Jtee ,
@Jtee@lemmy.world avatar

Especially if you have bitlocker enabled. Can’t boot to safe mode without entering the key, which typically only IT has access to.

magikmw ,

You can give the key to the user and force a replacement on the next DC connection, but good luck getting people to enter a key that’s 32 characters long over the phone… Not automatable anyway.
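If a key does get read out over the phone, here’s a rough sketch of rotating it afterwards, assuming an AD-joined machine where manage-bde is available (the protector GUID below is just a placeholder):

REM list the current key protectors and note the ID of the numerical recovery password
manage-bde -protectors -get C:
REM remove the recovery password that was just disclosed
manage-bde -protectors -delete C: -Type RecoveryPassword
REM generate a fresh recovery password and back it up to AD
manage-bde -protectors -add C: -RecoveryPassword
manage-bde -protectors -adbackup C: -id {NEW-PROTECTOR-GUID}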

HeyJoe ,

Servers would probably be way easier than workstations if you ask me. If they were virtual, just bring up the remote console and you can do it all remotely. Even if they were physical I would hope they have an IP KVM attached to each server so they can also remotely access them as well. 450 sucks but at least they theoretically could have done every one of them without going anywhere.

There are also options to do workstations as well, but almost nobody ever uses those services so those probably need to be touched one by one.

thisbenzingring , (edited )

Thankfully I had cached credentials and our servers aren’t bitlocker’d. The majority of the servers had iLO consoles, but not all. Most of the servers are on virtual hosts, so once I got the failover cluster back it wasn’t that hard just working my way through them. But the hardware servers without iLO required physically plugging in a monitor and keyboard to fix, which is time consuming. 10 of them took a couple hours.

I worked 11+ hours straight. No breaks or lunch. That got our production domain up and the backup system back on. The dev and test domains are probably half working. My boss was responsible for those and he’s not very efficient.

So for the most part I was able to do the work from my admin PC in my office.

For the majority of them, I’d use the Windows recovery menu they were stuck at to make them boot into safe mode with network support (in case my cached credentials weren’t up to date). Then start a cmd prompt and type out that famous command

Del c:\windows\system32\drivers\crowdstrike\c-00000291*.sys

I’d autocomplete the folders with tab and the 5 zeros… Probably gonna have that file in my memory forever

Edit: one painful self-inflicted problem was that my password is a random 25-character LastPass-generated password. But IDK how I managed… I never typed it wrong. Yay for small wins
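For anyone curious, here’s a rough sketch of that safe-mode round trip as commands. It assumes you can reach a command prompt from the recovery menu, that BitLocker isn’t in the way, and that {default} is the right boot entry, so treat it as a sketch rather than a procedure:

REM force the next boot into Safe Mode with networking so cached/domain credentials still work
bcdedit /set {default} safeboot network
shutdown /r /t 0
REM once in Safe Mode, delete the bad CrowdStrike channel file
del C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys
REM clear the Safe Mode flag and reboot normally
bcdedit /deletevalue {default} safeboot
shutdown /r /t 0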

LavenderDay3544 ,

Why the hell were 450 servers running Windows?

mat ,

For some of these systems, like medical equipment that should be as secure as possible, I don’t understand why they are not running OpenBSD… And more broadly, most of the world depending on one OS and its ecosystem is just a path to disasters (this one, WannaCry, spying from three-letter agencies…)

istanbullu ,

In case you needed another reason to switch to Linux.

Windows is so unreliable that even Microsoft runs Linux internally.

markr ,

There are a lot of misunderstandings about what happened. First, the ‘update’ was to a data file used by the CrowdStrike kernel components (specifically ‘Falcon’). While this file has a ‘.sys’ name, it is not a driver; it provides threat definition data. It is read by the Falcon driver(s), not loaded as an executable.

Microsoft doesn’t update this file; CrowdStrike’s user-mode services do that, and they do it very frequently as part of their real-time threat detection and mitigation.

The updates are essential. There is no opportunity for IT to manage or test these updates other than blocking them via external firewalls.

The Falcon kernel components apparently do not protect against a corrupted data file, or the corruption in this case evaded that protection. This is such an obvious vulnerability that I am leaning toward a deliberate manipulation of the data file to exploit a discovered vulnerability in their handling of a malformed data file. I have no evidence for that, other than that resilience against malformed data input is very basic software engineering and CrowdStrike is a very sophisticated system.

I’m more interested in how the file got corrupted before distribution.
