Srikrishna Shankavaram, Principal Cyber Security Architect at Zebra Technologies, highlights the growing need for specialized resources to address the unique challenges of AI security governance. As AI adoption expands within organizations, so does the surface area for potential attacks. The complexity of AI systems, combined with increasingly sophisticated tools available to malicious actors, makes AI security more critical than ever. Governments, regional legislators, and the private sector are all recognizing and addressing these risks, underscoring the urgency of securing AI technologies.
The industry's growing response to these threats was on display at the Aspen Security Forum, where the Coalition for Secure AI (CoSAI) was launched. This coalition, formed by leading technology companies, focuses on AI security issues such as software supply chain security, the evolving landscape of AI security threats, and AI risk governance. These concerns are especially pertinent given the rise of AI-driven cyberattacks. Hackers are now using AI to make phishing emails and deepfake attacks far more convincing, making it increasingly difficult to distinguish legitimate communications from malicious ones. At the Black Hat security conference, the Singapore Government Technology Agency (GovTech) demonstrated an experiment in which AI-generated spear-phishing emails achieved a higher click-through rate than human-written emails, highlighting how effectively AI can aid cybercriminals.
A notable instance occurred earlier this year when a finance employee at a multinational company was tricked into transferring $25 million to fraudsters, after deepfake technology was used to impersonate the company's chief financial officer on a video conference call. These examples underline the importance of addressing AI security comprehensively, and the launch of CoSAI represents a significant step forward.
One of CoSAI’s key initiatives is tackling software supply chain security for AI systems. The AI supply chain spans the entire lifecycle of AI systems—from data collection and model training to deployment and maintenance. Due to the interconnectedness of this ecosystem, vulnerabilities at any point can have far-reaching effects. Many AI systems rely on third-party libraries, frameworks, and components, which, while facilitating faster development, may also introduce security risks. Automated tools are essential for regularly checking and addressing these vulnerabilities in third-party dependencies.
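As a minimal sketch of what such automated checking can look like, the snippet below queries the public OSV vulnerability database for each pinned dependency in a requirements.txt file; the file name and reporting format are illustrative assumptions, and a production pipeline would typically rely on a dedicated scanner such as pip-audit rather than hand-rolled code.

```python
# Minimal sketch: query the OSV database for known vulnerabilities in pinned
# Python dependencies. Assumes a requirements.txt with "name==version" pins.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def check_dependency(name: str, version: str) -> list[str]:
    """Return IDs of known vulnerabilities affecting one pinned package."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": "PyPI"},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return [vuln["id"] for vuln in result.get("vulns", [])]

def scan_requirements(path: str = "requirements.txt") -> None:
    """Report any pinned dependency with known vulnerabilities."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue  # only exact pins can be checked reliably
            name, version = line.split("==", 1)
            vulns = check_dependency(name, version)
            if vulns:
                print(f"{name}=={version}: {', '.join(vulns)}")

if __name__ == "__main__":
    scan_requirements()
```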
The proliferation of open-source large language models (LLMs) adds another layer of complexity. Robust provenance tracking is required to verify the origin and integrity of these models and their datasets. In addition, security tools should be used to scan LLMs for vulnerabilities and malware. On-device LLMs can offer enhanced security by performing computations locally, thus reducing reliance on cloud connections, which could potentially expose sensitive data.
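One simple building block of provenance tracking is verifying downloaded model artifacts against a trusted manifest of checksums before they are loaded; the manifest layout below is a hypothetical example rather than a published standard.

```python
# Sketch: verify model artifacts against a manifest of expected SHA-256 digests
# before loading them. The manifest format here is a hypothetical example.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest_path: str) -> bool:
    """Return True only if every listed artifact matches its recorded digest."""
    manifest = json.loads(Path(manifest_path).read_text())
    ok = True
    # Each entry is assumed to look like {"path": "model.safetensors", "sha256": "..."}
    for entry in manifest["artifacts"]:
        actual = sha256_of(Path(entry["path"]))
        if actual != entry["sha256"]:
            print(f"MISMATCH {entry['path']}: expected {entry['sha256']}, got {actual}")
            ok = False
    return ok

if __name__ == "__main__":
    if not verify_artifacts("model_manifest.json"):
        raise SystemExit("Refusing to load unverified model artifacts")
```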
Closed-source models, on the other hand, might benefit from security through obscurity, making it harder for malicious actors to exploit vulnerabilities. However, this could also slow down the process of identifying and addressing security issues. Open-source models benefit from collaborative scrutiny, where the wider community helps to identify and resolve security weaknesses, although the public exposure of code also presents risks.
The need for AI security governance is becoming increasingly urgent, and CoSAI’s focus on this area is timely. Earlier this year, the National Institute of Standards and Technology (NIST) published a report identifying four types of machine learning attacks against predictive and generative AI systems: evasion, data poisoning, privacy, and abuse attacks. The European Union’s AI Act similarly emphasizes the need for robust cybersecurity measures to mitigate risks such as data poisoning (manipulating training datasets), model poisoning (manipulating pre-trained components), adversarial examples (inputs designed to mislead AI models), and attacks aimed at exploiting model flaws or breaching confidentiality.
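To make the adversarial-example category concrete, the toy sketch below applies a fast-gradient-sign-style perturbation to a simple linear classifier, where the gradient of the score with respect to the input is known analytically; the model, data, and step size are purely illustrative.

```python
# Toy illustration of an evasion attack: a fast-gradient-sign-style perturbation
# against a linear classifier (score = w.x + b). Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)   # classifier weights, assumed known to the attacker
b = 0.1
x = rng.normal(size=20)   # a legitimate input

def predict(x: np.ndarray) -> int:
    """Binary decision of the linear scorer."""
    return int(w @ x + b > 0)

score = w @ x + b
# For a linear model the gradient of the score w.r.t. the input is simply w,
# so the sign-of-gradient step is sign(w). The step size below is the smallest
# uniform per-feature change that pushes the score across the decision boundary.
eps = abs(score) / np.sum(np.abs(w)) + 1e-3
x_adv = x - np.sign(score) * eps * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("max per-feature change:", np.max(np.abs(x_adv - x)))
```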
As part of the regulatory process, technology companies are encouraged to share their expertise, collaborating with customers, partners, industry leaders, and research institutions. This shared commitment to innovation depends on AI being secure enough to be trusted and widely adopted. To establish consistent AI security practices, organizations should focus on developing a standard library for risk and control mapping, which will help identify potential security gaps across the industry.
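A shared risk-and-control library could be as simple as a machine-readable mapping from named AI risks to the controls expected to mitigate them, which each organization can compare against the controls it has actually implemented; the risk and control identifiers below are illustrative placeholders, not an industry standard.

```python
# Sketch of a shared risk-and-control mapping for AI systems. The risk and
# control identifiers are illustrative placeholders, not a published standard.
from dataclasses import dataclass, field

@dataclass
class Risk:
    risk_id: str
    description: str
    controls: list[str] = field(default_factory=list)  # IDs of mitigating controls

RISK_LIBRARY = {
    "AI-R1": Risk("AI-R1", "Training data poisoning",
                  controls=["CTL-DATA-PROVENANCE", "CTL-DATA-VALIDATION"]),
    "AI-R2": Risk("AI-R2", "Vulnerable third-party model or library",
                  controls=["CTL-DEPENDENCY-SCANNING", "CTL-MODEL-SIGNING"]),
    "AI-R3": Risk("AI-R3", "Adversarial (evasion) inputs at inference time",
                  controls=["CTL-INPUT-FILTERING", "CTL-ADVERSARIAL-TESTING"]),
}

def gap_report(implemented_controls: set[str]) -> dict[str, list[str]]:
    """List, per risk, the mapped controls that are not yet implemented."""
    return {
        risk_id: [c for c in risk.controls if c not in implemented_controls]
        for risk_id, risk in RISK_LIBRARY.items()
    }

if __name__ == "__main__":
    print(gap_report({"CTL-DEPENDENCY-SCANNING", "CTL-INPUT-FILTERING"}))
```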
Additionally, a security maturity assessment checklist and standardized scoring system would enable organizations to conduct self-assessments of their AI security measures. Such a process would reassure customers about the safety and integrity of AI products. This approach mirrors the secure software development lifecycle (SDLC) practices already used by organizations to ensure software assurance through assessments like the Software Assurance Maturity Model (SAMM).
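In the same spirit, a standardized scoring system might average per-practice maturity levels, loosely modeled on SAMM-style maturity ratings; the practices and scores below are assumptions chosen for illustration only.

```python
# Sketch of a self-assessment score using SAMM-style maturity levels
# (0 = not practiced, 3 = fully mature). Practices and levels are illustrative.
CHECKLIST = {
    "threat_modeling_for_ai_features": 2,
    "supply_chain_scanning":           3,
    "model_provenance_tracking":       1,
    "adversarial_robustness_testing":  0,
    "incident_response_for_ai":        2,
}

MAX_LEVEL = 3

def maturity_score(checklist: dict[str, int]) -> float:
    """Average maturity across practices, normalized to a 0-100 scale."""
    if not checklist:
        return 0.0
    return 100.0 * sum(checklist.values()) / (MAX_LEVEL * len(checklist))

if __name__ == "__main__":
    print(f"AI security maturity: {maturity_score(CHECKLIST):.0f}/100")
```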
AI products and solutions can also be designed to help organizations comply with security standards and regulations such as HIPAA, PCI DSS, and GDPR, as well as FIPS 140 validation requirements. Organizations should leverage their technology partners’ AI enablers, software development kits (SDKs), APIs, and developer tools to build secure, scalable digital services that can be deployed efficiently without sacrificing performance.
Technology companies must commit to developing secure AI solutions that enhance productivity, particularly at the edge, by integrating multiple layers of protection. Security should be easy to deploy and should not impede performance. Much like cybersecurity initiatives that require company-wide coordination, AI processes, principles, tools, and training should evolve continuously. Companies should ensure consistency and compliance through an internal governance model, where AI security is integrated at every level to drive safe, secure, and innovative AI deployments.