CrowdStrike clarifies the update that crippled Windows environments
CrowdStrike has provided an initial technical explanation for the update that crashed Windows machines worldwide, while emphasizing that a thorough root cause analysis is still required to fully understand the incident.
In a blog post, CrowdStrike explained that a “sensor configuration update to Windows systems … triggered a logic error resulting in a system crash and ‘blue screen of death’ (BSOD) on impacted systems.” They corrected the logic error by updating the content in the configuration file but acknowledged that further investigation is necessary to determine how the logic flaw occurred.
CrowdStrike committed to an ongoing effort to identify foundational or workflow improvements that could strengthen their processes. They typically update configuration files, known as “channel files,” for their Falcon sensors several times a day. The problematic update was intended to allow CrowdStrike Falcon sensors on endpoints to target newly observed, malicious named pipes used by common command and control (C2) frameworks in cyberattacks. According to Microsoft documentation, a named pipe is a mechanism used to transfer data between unrelated processes and between processes on different computers.
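The named-pipe concept can be sketched with its POSIX analogue, a FIFO. This is a minimal illustration of two endpoints exchanging data through a named rendezvous point, not of CrowdStrike's detection logic; Windows named pipes instead use \\.\pipe\&lt;name&gt; paths, and the pipe name below is made up:

```python
import os
import tempfile
import threading

# Minimal POSIX FIFO sketch of the named-pipe concept. (Windows named pipes
# are created with the CreateNamedPipe API; this analogue only illustrates
# unrelated endpoints communicating through a named channel.)
path = os.path.join(tempfile.mkdtemp(), "demo_pipe")
os.mkfifo(path)

received = []

def reader():
    # Opening for read blocks until a writer opens the other end.
    with open(path, "r") as f:
        received.append(f.read())

t = threading.Thread(target=reader)
t.start()

# In practice the writer would be an unrelated process; a thread keeps
# this sketch self-contained.
with open(path, "w") as f:
    f.write("hello over the pipe")

t.join()
print(received[0])  # -> hello over the pipe
os.unlink(path)
```

Because the pipe is addressed by name, C2 frameworks that reuse well-known pipe names give defenders something observable to match on, which is what the ill-fated channel file update was targeting.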
Systems running Falcon sensor for Windows, version 7.11 and above, that downloaded the updated configuration from 04:09 UTC to 05:27 UTC were susceptible to a system crash. The issue impacted numerous sectors across Australia, including airlines, airports, transportation networks, supermarkets, banks, and enterprises, causing widespread device crashes from Friday afternoon AEST. In response, the federal government convened an emergency meeting with CrowdStrike representation. The IT outages were also felt in other parts of the world. CrowdStrike has published a comprehensive list of actions and knowledge base articles to aid IT administrators in remediation efforts.
Additionally, CrowdStrike used its technical explanation blog to address and refute analysis on social media suggesting that blank or null values in the configuration file were part of the problem. The vendor clarified that the issue was not related to null bytes in the offending channel file or any other channel file.
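A claim like "the file is all null bytes" is straightforward to verify directly. The helper below is a hypothetical sketch, not a CrowdStrike tool, and the demo runs against a throwaway stand-in file rather than a real channel file:

```python
import os
import tempfile

# Hypothetical helper (not a CrowdStrike utility): count null bytes in a
# file so an "all zeroes" claim can be checked against the actual contents.
def null_byte_stats(path):
    with open(path, "rb") as f:
        data = f.read()
    return data.count(b"\x00"), len(data)

# Demo with a throwaway stand-in file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"\x00" * 10 + b"DATA")
tmp.close()
nulls, total = null_byte_stats(tmp.name)
print(f"{nulls}/{total} bytes are null")  # -> 10/14 bytes are null
os.unlink(tmp.name)
```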
David Weston, Microsoft's vice president of enterprise and OS security, stressed the importance of collaboration and cooperation across the sector to learn, recover, and move forward effectively, promising ongoing updates with new learnings and subsequent steps.
This incident highlights the need for rigorous testing and validation processes for software updates, especially for those impacting security software and critical systems. CrowdStrike’s dedication to thorough root cause analysis and process improvement aims to prevent similar issues in the future, enhancing the resilience and reliability of their services.