Mindgard Raises $8M to Address Emerging AI Security Risks
Mindguard's industry-first DAST-AI solution revolutionizes AI security by providing continuous security testing and automated AI red teaming throughout the entire AI lifecycle. This groundbreaking solution enables organizations to proactively identify vulnerabilities and threats in AI systems in real-time, ensuring they remain secure at every stage, from development to deployment.
By automating red teaming, Mindguard's solution simulates advanced adversarial attacks to assess the resilience of AI models against potential threats, providing valuable insights for improving security. This continuous, dynamic approach not only enhances the overall security posture of AI systems but also makes AI security more actionable and auditable.With Mindguard's DAST-AI, businesses can ensure that their AI systems are safeguarded against evolving risks, comply with industry regulations, and maintain trust in the integrity of their AI-driven processes.
The solution empowers organizations to take a proactive stance in securing their AI assets while delivering transparency and accountability in AI security practices.Security AI startup Mindguard has raised $8 million in funding and appointed a new Head of Product and VP of Marketing to drive its next phase of growth. The company is addressing a critical gap in the AI security landscape, where many AI products are being launched without sufficient security measures in place, leaving organizations exposed to significant risks. These risks include LLM prompt injection and jailbreaks, which exploit the probabilistic and opaque nature of AI systems.
Such vulnerabilities often only become apparent during runtime, posing a major challenge for traditional security approaches.To effectively secure these AI-specific risks, Mindguard emphasizes the need for a fundamentally new approach to AI security. Their innovative solutions, such as the DAST-AI platform, provide continuous security testing and automated AI red teaming across the entire AI lifecycle, ensuring that AI models and toolchains remain secure and resilient against evolving threats. By addressing the unique challenges of AI security, Mindguard aims to offer actionable, auditable security that organizations can rely on as they integrate AI into their operations, helping to mitigate risks before they can be exploited.
Spun out of Lancaster University, Mindguard has developed its Dynamic Application Security Testing for AI (DAST-AI) solution to address the unique security challenges posed by AI systems. DAST-AI identifies and resolves vulnerabilities specific to AI that can only be detected during runtime, such as LLM prompt injection and jailbreaks. These issues are difficult to spot using traditional security tools but can have serious consequences if left unaddressed.As organizations increasingly adopt AI technologies or establish AI governance frameworks, continuous security testing becomes essential.
It provides the necessary risk visibility across the entire AI lifecycle, ensuring that AI systems remain secure from development through deployment and beyond."All software has security risks, and AI is no exception,” said Dr. Peter Garraghan, CEO of Mindguard and Professor at Lancaster University. "The complex, evolving nature of AI systems requires a tailored approach to security, one that can detect and mitigate vulnerabilities that arise in real-time as AI models operate. Our solution offers organizations the confidence they need to deploy AI with security built in from the start."
Mindguard’s DAST-AI is designed to continuously monitor and protect AI systems from emerging threats, making it an essential tool for companies striving to secure their AI products and systems in an ever-evolving technological landscape.“The challenge is that the way these risks manifest within AI is fundamentally different from other software. AI systems, due to their probabilistic and opaque nature, introduce unique vulnerabilities that traditional security tools and practices aren't equipped to handle,” explained Dr. Peter Garraghan, CEO of Mindguard.He continued, “Drawing on our 10 years of experience in AI security research, Mindguard was created to tackle this challenge.
We’re proud to lead the charge toward creating a safer, more secure future for AI. Our solution addresses the critical need for AI-specific security by continuously testing systems for runtime vulnerabilities, providing organizations with the assurance that their AI models and tools are secure from development through deployment.”Mindguard’s DAST-AI solution is seamlessly integrated into existing automation frameworks, ensuring that security is embedded throughout the AI lifecycle without disrupting established workflows.
This integration empowers security teams, developers, AI red teamers, and penetration testers to secure AI systems proactively, without requiring a major overhaul of existing processes.This continuous, real-time security testing approach ensures that AI systems are not only secure upon launch but also remain resilient to new and evolving threats as they operate in real-world environments. By providing this added layer of security, Mindguard is helping organizations to deploy AI systems with confidence and safeguard them against emerging risks in the rapidly evolving landscape of AI technology.
406 Ventures led the funding round for Mindgard, with participation from Atlantic Bridge, Willowtree Investments, and existing investors IQ Capital and Lakestar. As part of the company’s growth strategy, Mindgard has appointed two key executives: Dave Ganly, former Director of Product at Twilio, and Fergal Glynn, who most recently served as CMO at Next DLP (acquired by Fortinet). Both executives will play critical roles in driving Mindgard’s product development and supporting the company’s expansion into the North American market, with a leadership presence now established in Boston.According to Greg Dracon, Partner at .406 Ventures, the rapid adoption of AI technologies has introduced new and complex security risks that traditional security tools are not designed to address.
Dracon explained, “Mindgard’s approach, born out of the distinct challenges of securing AI, equips security teams and developers with the tools they need to deliver secure AI systems. The company’s DAST-AI solution is pioneering a new wave of security technology tailored to the unique vulnerabilities of AI systems, ensuring organizations can manage AI-specific risks effectively and seamlessly.”Mindgard's strategic focus on AI security, combined with its experienced leadership team, positions the company as a key player in securing the future of AI technology. With the support of its investors and executive appointments, Mindgard is poised for further expansion and continued innovation in the AI security space.
Related Courses and Certification
Also Online IT Certification Courses & Online Technical Certificate Programs