Cybercriminals Could Leverage AI in Majority of Cyberattacks, Says Positive Technologies

The integration of artificial intelligence (AI) into cybersecurity threats is rapidly transforming the landscape of cybercrime, according to a recent report. Attackers are now projected to use AI across nearly all tactics in the MITRE ATT&CK matrix, with experts indicating its potential use in 59% of the techniques outlined. Previously, AI had been incorporated into only 5% of MITRE ATT&CK techniques, with an additional 17% deemed feasible for AI application. The growing availability of legal AI tools has rapidly accelerated this trend, with the widespread deployment of large language models (LLMs) such as ChatGPT-4 fueling a dramatic rise in cyberattacks. For instance, within a year of ChatGPT-4’s release, phishing attacks increased by an astounding 1,265%. Experts predict that cybercriminals’ use of AI will continue to enhance the sophistication and reach of cyberattacks.

The rapid advancement of AI technologies has raised concerns among cybersecurity professionals, particularly over the insufficient safeguards implemented by language model developers. Because these models are not adequately protected from misuse, cybercriminals can use them to generate malicious text, code, and instructions, contributing to an uptick in cybercrime. Hackers are using AI to automate tasks such as writing attack scripts, verifying malicious code, and refining attack strategies, allowing even novice cybercriminals, who may lack advanced skills or resources, to execute sophisticated attacks with relative ease. This democratization of attack capabilities has led to a rise in incidents where criminals use AI tools to double-check their plans, explore alternative attack methods, or identify overlooked vulnerabilities in their strategies.

Several factors are driving the increased use of AI in cyberattacks. One is the relatively weak cybersecurity infrastructure in developing countries, where even imperfect tools can have a significant impact when combined with AI. Another is the ongoing “arms race” between attackers and defenders, which compels cybercriminals to adopt AI to gain a competitive advantage. AI gives attackers the ability to scale and automate various stages of their operations, such as managing botnets and generating malicious code or phishing messages. In some cases, AI is already being used to create deepfakes, automating aspects of cyberattacks that were once manual and labor-intensive.

Roman Reznikov, an Information Security Research Analyst at Positive Technologies, highlighted that while the advanced capabilities of AI in cyberattacks are concerning, they should not be seen as a cause for panic. Instead, he suggests that organizations should adopt a realistic approach, focusing on emerging technologies and developing proactive cybersecurity strategies. One such response to the rise of AI-driven attacks is the implementation of AI-powered defense systems. Positive Technologies has developed tools like the MaxPatrol O2 autopilot, which can automatically detect and block attacker actions before they cause significant damage. This approach harnesses AI’s capabilities to counteract the very threats that cybercriminals are deploying.
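To make this detect-and-block pattern concrete, the following is a minimal, hypothetical Python sketch of automated detect-and-respond logic. The Event structure, the 0.8 blocking threshold, and the isolate_host() action are illustrative assumptions; this is not a description of how MaxPatrol O2 itself works.

```python
# Hypothetical sketch of an automated detect-and-block loop; illustrative
# only, not the logic of MaxPatrol O2 or any real product.
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    technique: str   # e.g., a MITRE ATT&CK technique ID
    score: float     # 0.0-1.0 confidence that this is attacker activity

BLOCK_THRESHOLD = 0.8  # assumed cutoff for automatic response

def isolate_host(host: str) -> None:
    # Placeholder response action: a real system would call an EDR or
    # firewall API here to quarantine the machine.
    print(f"[response] isolating {host}")

def autopilot(events: list[Event]) -> None:
    # Block high-confidence attacker actions before they progress further;
    # queue lower-confidence events for analyst review.
    for ev in events:
        if ev.score >= BLOCK_THRESHOLD:
            isolate_host(ev.host)
        else:
            print(f"[triage] {ev.host}: {ev.technique} (score={ev.score:.2f})")

autopilot([
    Event("ws-042", "T1059 Command and Scripting Interpreter", 0.93),
    Event("srv-07", "T1046 Network Service Discovery", 0.41),
])
```

In practice, the hard part is producing trustworthy confidence scores; the value of an autopilot lies in acting on high-confidence detections faster than a human analyst could.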

Currently, only experienced hackers possess the technical expertise needed to develop and deploy AI-driven tools that automate and scale cyberattacks. However, experts predict that specialized AI modules will soon emerge to address specific tasks in established attack scenarios. As these modules evolve, they are expected to merge into more comprehensive clusters, automating the stages of attacks and potentially covering the full attack lifecycle. In a more advanced scenario, AI could even autonomously search for new targets, further expanding its scope and effectiveness in cybercrime.

To mitigate the risks posed by AI-enhanced cyberattacks, Positive Technologies advises organizations to adopt cybersecurity best practices, including maintaining strong vulnerability management protocols and participating in bug bounty programs. The use of machine learning to automate the exploitation of vulnerabilities poses a serious threat, enabling cybercriminals to exploit weaknesses faster and more frequently, so it is critical for organizations to quickly remediate discovered flaws, especially those with publicly available exploits. Vendors, in turn, are incorporating machine learning into their products to bolster defense capabilities. For example, MaxPatrol SIEM uses Behavioral Anomaly Detection (BAD) to assign risk scores to cybersecurity events and detect targeted attacks, including those that exploit zero-day vulnerabilities. PT Application Firewall uses AI to identify shell upload attacks, MaxPatrol VM uses AI to enhance asset information searches, PT NAD employs AI to generate custom rules for encrypted traffic analysis, and PT Sandbox leverages AI to detect unknown and anomalous malware.
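As a rough illustration of the anomaly-scoring idea behind tools of this kind, the following Python sketch fits a model on baseline event features and maps its output to a 0-100 risk score. The chosen features, thresholds, and the use of scikit-learn’s IsolationForest are assumptions for demonstration only; they do not reflect how BAD or any Positive Technologies product is actually implemented.

```python
# A minimal, hypothetical sketch of behavioral anomaly scoring for security
# events; illustrative only, NOT the implementation of MaxPatrol SIEM's BAD.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one observed event: [logins_per_hour, MB_sent_out, distinct_hosts]
# (the feature choice is an assumption for demonstration).
baseline = np.array([
    [4, 10, 2], [5, 12, 3], [3, 8, 2], [6, 15, 3], [4, 11, 2],
    [5, 9, 3], [4, 14, 2], [6, 10, 3], [5, 13, 2], [3, 12, 3],
])

# Fit on normal behavior; contamination is the assumed share of anomalies.
model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

new_events = np.array([
    [5, 11, 3],      # consistent with the baseline
    [40, 900, 25],   # login burst plus heavy data egress
])

# decision_function: higher means more normal; rescale to a 0-100 risk score.
raw = model.decision_function(new_events)
risk = np.clip((0.5 - raw) * 100, 0, 100)
flags = model.predict(new_events)  # 1 = normal, -1 = anomaly

for event, r, f in zip(new_events, risk, flags):
    label = "ALERT" if f == -1 else "ok"
    print(f"event={event.tolist()} risk={r:.0f} [{label}]")
```

A production system would score far richer feature sets and feed results into analyst workflows, but the core idea of learning a behavioral baseline and flagging deviations from it is the same.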

In conclusion, the growing role of AI in cyberattacks highlights the need for more robust defense mechanisms. As AI continues to evolve, organizations must integrate advanced AI-driven solutions into their cybersecurity strategies to stay ahead of increasingly sophisticated threats. By doing so, they can build stronger, more resilient defenses against the rising tide of AI-enhanced cybercrime.