
An angry admin shares the CrowdStrike outage experience

IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: “It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers.”

He isn’t alone. An administrator on Reddit said 40 percent of their servers were affected, along with 70 percent of client computers (approximately 1,000 endpoints) stuck in a boot loop.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down."

pelletbucket ,

I got super lucky. Got paid for my car just before the dealership systems went down, and got my return flight two days before this shit started.

catloaf ,

We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

Someone never tested their DR plans, if they even have them. Generally locking your keys inside the car is not a good idea.

SapphironZA ,

We also back up our BitLocker keys with our RMM solution for this very reason.

catloaf ,

I hope that system doesn’t have any dependencies on the systems it’s protecting (auth, MFA).

Buffalox , (edited )

At least no mission critical services were hit, because nobody would run mission critical services in Windows, right?

RIGHT??

OpenStars ,

[image]

gravitas_deficiency ,

Lmao this is incredible

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can’t boot into safe mode because our BitLocker keys are stored inside of a service that we can’t login to because our AD is down.

“Most of our comms are down, most execs’ laptops are in infinite bsod boot loops, engineers can’t get access to credentials to servers.”

N.B.: Reddit link is from the source

I hope a lot of c-suites get fired for this. But I’m pretty sure they won’t be.

SkybreakerEngineer ,

Fired? I hope they get class-actioned out of existence as a warning to anyone who skimps on QA

MagicShel ,

C-suites fired? That’s the funniest thing I’ve heard yet today. They aren’t getting fired - the sheer scale of the outage is their ass-coverage. How can they be to blame when all these other companies were hit as well?

I guess this is a good week for me to still be laid off.

CodexArcanum ,

Our administrator is understandably a little bitter about the whole experience as it has unfolded, saying, "We were forced to switch from the perfectly good ESET solution which we have used for years by our central IT team last year."

Sounds like a lot of architects and admins are going to get thrown under the bus for this one.

“Yes, we ordered you to cut costs in impossible ways, but we never told you specifically to centralize everything with a third party, that was just the only financially acceptable solution that we would approve. This is still your fault, so we’re firing the entire IT department and replacing them with an AI managed by a company in Sri Lanka.”

Boozilla ,

If you have EC2 instances running Windows on AWS, here is a trick that works in many (not all) cases. It has recovered a few instances for us:

  • Shut down the affected instance.
  • Detach the boot volume.
  • Attach the boot volume to a working instance in the same availability zone (us-east-1a or whatever; EBS volumes can only be attached within their AZ).
  • Remove the file(s) recommended by CrowdStrike:
    • Navigate to the C:\Windows\System32\drivers\CrowdStrike directory.
    • Locate the file(s) matching “C-00000291*.sys” and delete them (unless CrowdStrike has already fixed them).
  • Detach the volume and attach it back to the original instance.
  • Boot the original instance.

Alternatively, you can restore from a snapshot taken before the bad CrowdStrike update went out. But that is not always ideal.
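The volume shuffle above can be sketched with the AWS CLI. Everything here is illustrative: the instance and volume IDs are hypothetical placeholders, the device names (`xvdf` for the rescue attach, `/dev/sda1` for the boot volume) are common defaults you should verify against your own instances, and each command is echoed as a dry run so the sequence can be reviewed before anything actually executes.

```shell
#!/bin/sh
# Hypothetical IDs -- substitute your own.
INSTANCE_ID="i-0123456789abcdef0"  # the affected (boot-looping) instance
RESCUE_ID="i-0fedcba9876543210"    # a healthy instance in the SAME availability zone
VOLUME_ID="vol-0123456789abcdef0"  # boot volume of the affected instance

# Dry run: echo each command instead of executing it.
# Change to `run() { "$@"; }` to run for real.
run() { echo "+ $*"; }

run aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
run aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
run aws ec2 detach-volume --volume-id "$VOLUME_ID"
run aws ec2 attach-volume --volume-id "$VOLUME_ID" \
    --instance-id "$RESCUE_ID" --device xvdf

# ...log in to the rescue instance, bring the attached volume online, and
# delete C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys from it...

run aws ec2 detach-volume --volume-id "$VOLUME_ID"
run aws ec2 attach-volume --volume-id "$VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/sda1
run aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```

In a real script you would also wait on `aws ec2 wait volume-available` between the detach and attach steps; that is omitted here for brevity.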

MrNesser ,

Lemmy appears to be weathering the storm quite well…

…probably runs on linux

RootBeerGuy ,

It runs on hundreds of servers. If any of them ran Windows they might be down, but unless your account was on one of those, you’d be fine with the rest. That’s the whole point of federation.

cygnus , (edited )

The overwhelming majority of web servers run Linux (it’s not even close, like the high-90-percent range). Edit: Upon double-checking, it’s more like mid-80s, but the point stands.

bilb ,

I wonder if any Lemmy servers run on Windows without WSL. I can’t think of any hard dependencies on Linux, so it should be possible.

slacktoid ,

Sounds like the best time to unionize

db0 ,

Pity the administrators who dutifully kept a list of those keys on a secure server share, only to find that the server is also now showing a screen of baleful blue.

Lol, can you imagine? It empathetically hurts me even thinking of this situation. Enter that brave hero who kept the fileshare decryption key in a local keepass :D

kescusay ,

Seems like an argument for a heterogeneous environment, perhaps a solid and secure Linux server to host important keys like that.

pearsaltchocolatebar ,

Linux can shit the bed too. You need to maintain a physical copy.

StaySquared ,

CS did take down Linux a few years back… I forget the exact details.

gnutrino ,

Sure but the chances of your Windows and Linux machines shitting the bed at the same time is less than if everything is running Windows. It’s exactly the same reason you keep a physical copy (which after all can break/burn down etc.) - more baskets to spread your eggs across.

sugar_in_your_tea ,

That’s why the 3-2-1 rule exists:

  • 3 copies of everything on
  • 2 different forms of media with
  • 1 copy off site

For something like keys, that means:

  1. secure server share
  2. server share backup at a different site
  3. physical copy (USB drive, printout in a safe, etc.)

Any IT pro should be aware of this “rule.” Oh, and periodically test restoring from a backup to make sure the backup actually works.
