New research from NTT DATA Inc., a global leader in digital business and IT services, sheds light on the accelerating race among businesses to adopt artificial intelligence (AI). Despite widespread enthusiasm for AI, a growing responsibility gap threatens to undermine its long-term success: while organizations are investing heavily in AI-driven innovation, leadership, governance, and workforce readiness are failing to keep pace. More than 80% of executives acknowledge that their organizations struggle to establish the frameworks necessary to ensure AI is deployed ethically, securely, and effectively, putting investments, security, and public trust at risk.
This pressing issue is explored in NTT DATA’s latest report, “The AI Responsibility Gap: Why Leadership is the Missing Link”, which gathers insights from over 2,300 C-suite leaders and decision-makers across 34 countries. The findings highlight an urgent need for leadership-driven AI governance strategies to align technological advancement with ethical responsibility. Abhijit Dubey, CEO of NTT DATA Inc., underscores the risks of unchecked AI deployment, stating that while the enthusiasm for AI is undeniable, innovation without responsibility is a risk multiplier. He stresses the importance of organizations implementing leadership-driven AI governance strategies to prevent progress from stalling and to maintain public trust.
The research highlights a deep division within executive leadership regarding AI priorities. There is no consensus on whether responsibility or innovation should take precedence, creating internal conflicts that hinder the establishment of a unified AI strategy. While some executives emphasize the importance of ethical oversight, others focus primarily on accelerating technological advancement, often at the cost of security and compliance.
Regulatory uncertainty is identified as another major obstacle. More than 80% of leaders express concern that unclear government regulations are slowing down AI investment and adoption. The lack of well-defined legal frameworks has made AI implementation a high-risk endeavor, leading many organizations to proceed cautiously or delay their AI projects altogether.
Security and ethical concerns are also lagging behind AI ambitions. A striking 89% of C-suite executives acknowledge AI-related security risks, yet only 24% of Chief Information Security Officers (CISOs) believe their organizations have a strong framework in place to balance AI risk and value creation. While many executives recognize the potential dangers of AI misuse, few companies have taken sufficient action to develop and implement risk management strategies.
Workforce readiness presents another significant challenge. With AI rapidly transforming industries, 67% of executives admit that their employees lack the skills required to work effectively with AI. Additionally, 72% of companies still do not have an AI policy in place to ensure responsible use. This lack of preparedness creates operational inefficiencies and increases the risk of AI misuse, highlighting the need for structured training programs and policy development.
Sustainability concerns have also emerged as a point of contention. The energy-intensive nature of AI-driven solutions is prompting many organizations to reevaluate their environmental commitments: 75% of business leaders indicate that their AI ambitions directly conflict with their corporate sustainability goals, forcing companies to seek more energy-efficient AI solutions without compromising performance.
Without decisive leadership, AI advancements risk outpacing governance and ethical considerations, leading to security vulnerabilities, ethical dilemmas, and regulatory uncertainty. Organizations must take immediate action to close the AI responsibility gap by integrating responsibility-by-design principles, ensuring AI systems are secure, transparent, and compliant from inception. AI solutions, particularly those involving generative AI (GenAI), must be developed with built-in safeguards against bias, misinformation, and security threats.
A strong governance framework is essential. Business leaders must go beyond mere legal compliance and actively implement ethical and social standards for AI to ensure fairness, accountability, and transparency. AI governance should be structured, proactive, and adaptable, addressing emerging risks while maintaining alignment with global ethical standards. Workforce readiness must also be prioritized. As AI reshapes industries, organizations need to invest in employee training and upskilling to prepare teams for the evolving workplace. Employees must be equipped with the knowledge and tools required to navigate AI-driven workflows effectively while understanding the risks and opportunities associated with AI adoption.
Global collaboration on AI policy is critical to establishing clearer, more actionable governance frameworks. Businesses, regulators, and industry leaders must work together to develop standardized AI policies that provide guidance on ethical implementation, security protocols, and compliance measures. Without cross-industry cooperation, AI governance will remain fragmented, leading to inconsistencies in ethical oversight and regulatory enforcement. As AI's influence continues to expand, its impact on businesses, employees, and society will only grow, and organizations that fail to lead risk a future where technological innovation outpaces responsibility, leaving security vulnerabilities, ethical blind spots, and missed opportunities in its wake.
Abhijit Dubey reinforces the urgency of these challenges, emphasizing that AI’s trajectory is clear and its influence will continue to expand. He warns that failing to address ethical, security, and workforce-related issues will create significant long-term risks. The business community must take immediate action to embed responsibility into AI’s foundation by integrating robust governance frameworks, structured workforce development, and ethical AI policies. By taking a proactive, leadership-driven approach, businesses can ensure that AI serves not only their commercial interests but also employees and society at large. The time to act is now, before innovation outpaces responsibility, and the risks become irreversible.