30-05-2025
McKinsey Survey Finds Responsible AI Key to Unlocking Full Potential of Technology
The rapid advancement of generative AI and large language models (LLMs) is driving widespread adoption across business functions, as organizations seek to improve productivity, efficiency, and innovation. Realizing these benefits, however, depends on deploying AI technologies safely and responsibly. Responsible AI (RAI) practices are a foundational component of a broader AI trust strategy and are essential for building confidence among customers, employees, and stakeholders in how organizations develop and use AI.
RAI involves addressing key issues such as data governance, explainability, fairness, privacy, security, and transparency. By embedding these principles into their AI initiatives, organizations can mitigate risks, strengthen accountability, and build long-term trust—ultimately maximizing the impact of their AI solutions.
A recent McKinsey survey of over 750 leaders from 38 countries sheds light on the current state of RAI implementation in enterprises across diverse industries, including technology, healthcare, finance, legal, and engineering. Using the McKinsey AI Trust Maturity Model—a comprehensive framework with four main dimensions (strategy, risk management, data and technology, and operating model) and 21 subdimensions—the survey evaluates organizations’ RAI maturity across four levels, from early-stage foundational efforts to fully developed, proactive programs. The findings offer valuable insights into how organizations are progressing on their responsible AI journeys and where gaps still exist.