For all C-suite folks (CIOs, CTOs and CISOs in particular), the classic "Friday the 13th" omen shifted to Friday the 19th, as a global Microsoft outage "co-powered" by CrowdStrike struck. "GOD" – Generate, Operate, Destroy (or Disrupt, in this case) – is so true in technology! The analogy of Brahma, Vishnu and Mahesh works here too: technology fails, and we call the outage God's hand??
Being a Friday, most IT folks were either preparing for planned weekend shutdowns or for weekend celebrations… until their mobile alarms woke them up with news of the strike and disruption in the US and Australia. A few got alerted early, while others chose to stay ignorant until their inboxes started pouring in with messages from business stakeholders carrying blue dump screens (which reminded me of those screens from the Windows NT days!!).
The day for their higher-ups started with questions from management about DR availability and BCP (business continuity planning) plans and strategies. It was suddenly chaos, with businesses, airports and hotels becoming non-functional and employees already starting to celebrate the "paid day off"!!
We Indians, being gifted with "Jugaad", could still manage to run the business with manual workarounds, like issuing manual boarding passes to passengers at airports or disabling the CrowdStrike agents, whereas other countries had to face business downtime due to stringent role and process rules. See the irony of the situation: the very systems people had bought to cover a risk had themselves generated a risk to business continuity, impacting third-party systems along the way (good input for risk managers)!!
While all this was happening in the background, there was yet another community looking at this as an opportunity to dig gold mines!! Yes, they saw it as a honeypot working in reverse: panicked users, and IT experts advising them based on rumours and fake quick-fix solutions, were opening a floodgate of information to opportunists waiting to grab access to all these endpoints!!
The blackout period, until the official solution advisory was announced, was also used by many to register hundreds of fake domains (like crowdstrike.bluedump, crowdstrike.fix, etc.) to lure users onto this one-way information highway!! Meanwhile, competitors used the opportunity to tell customers to onboard their solutions and prove their mettle in helping businesses sail through this "strike".
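As a small illustration (my own sketch, not any vendor's tooling), even a crude string-similarity check against the official domain would have flagged most of these lookalikes. The domain list, threshold and function name below are purely hypothetical.

```python
# Hypothetical sketch: flag domains that look suspiciously close to an
# official vendor domain. Threshold and test domains are illustrative only.
from difflib import SequenceMatcher

OFFICIAL = "crowdstrike.com"

def looks_like_typosquat(domain: str, official: str = OFFICIAL,
                         threshold: float = 0.75) -> bool:
    """Return True if `domain` resembles the official domain but is not it."""
    domain = domain.lower().strip(".")
    if domain == official or domain.endswith("." + official):
        return False  # the real domain (or one of its subdomains) is fine
    similarity = SequenceMatcher(None, domain, official).ratio()
    return similarity >= threshold or official.split(".")[0] in domain

if __name__ == "__main__":
    for d in ["crowdstrike.com", "crowdstrike.fix",
              "crowdstrike-bluedump.com", "microsoft.com"]:
        print(f"{d:30} suspicious: {looks_like_typosquat(d)}")
```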
Let's look at what really happened, in layman's terms, and at some lessons learnt, before we move towards the sweet ending where all the fixes were done and businesses returned to normal.
CrowdStrike, as part of its regular product updates, released a sensor content update (a configuration file rather than a security patch) that was supposed to land at a particular path and be consumed cleanly once deployed; instead, the faulty file crashed the agent's Windows kernel driver, corrupting the boot sequence and throwing machines into a BSOD (Blue Screen of Death) loop. The actual issue then turned into word of mouth, spawning multiple versions, with people speculating about everything from a zero-day attack to a bug in the Microsoft OS. CrowdStrike quickly issued a fix, and slowly businesses started returning to normal. This was a combination of impact and probability that sent the RPN (risk priority number) through the roof, one that no risk manager would ever have thought of, except to label it as an unforeseen or God's-hand issue.
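For context, the fix that made the rounds in the early hours was a manual workaround: boot the affected machine into Safe Mode, remove the faulty channel file, and reboot. Below is a minimal sketch of that clean-up step, assuming the directory and file pattern that circulated in public advisories at the time; treat it as illustrative only and follow the vendor's official guidance for any real remediation.

```python
# Minimal sketch of the widely reported manual workaround (boot into Safe Mode
# first). The path and file pattern follow public advisories at the time and
# should be verified against official guidance before use.
from pathlib import Path

CHANNEL_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")
PATTERN = "C-00000291*.sys"  # the faulty channel file named in advisories

def remove_faulty_channel_files(directory: Path = CHANNEL_DIR,
                                pattern: str = PATTERN) -> list[Path]:
    """Delete channel files matching the reported faulty pattern, if present."""
    removed = []
    if not directory.exists():
        return removed
    for f in directory.glob(pattern):
        f.unlink()          # remove the offending file
        removed.append(f)
    return removed

if __name__ == "__main__":
    deleted = remove_faulty_channel_files()
    print(f"Removed {len(deleted)} file(s): {[str(p) for p in deleted]}")
```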
The end of this episode produced lots of learnings (relevant or irrelevant?) across the net, with everyone suddenly talking about best practices and hosting panel discussions on it!! However, in my limited knowledge and opinion, it all comes down to having alternative solutions in your kitty to kick-start at least your critical applications: say, for example, keeping some devices running on Chrome OS or an equivalent, or having an alternative collaboration and EDR solution ready for deployment, etc.