Responsible AI Deployment Linked to Better Business Outcomes: EY

As broader adoption of AI technologies continues to accelerate, companies that implement more advanced Responsible AI (RAI) measures are pulling ahead while others stall.

According to the second Responsible AI (RAI) Pulse survey from the EY organization, four in five respondents said their company has improved innovation (81%) and efficiency and productivity gains (79%), while about half report boosts in revenue growth (54%), cost savings (48%), and employee satisfaction (56%).

The global survey of large corporations also reveals that nearly all organizations report financial losses from AI-related risks, with widespread impacts from compliance failures, sustainability setbacks, and biased outputs.

According to EY, Responsible AI adoption involves defining and communicating principles before advancing to implementation and governance. The transition from principles to practice happens through 10 RAI measures that embed commitments into operations.

The survey suggests that greater adherence to RAI principles correlates with stronger business performance. For instance, respondents whose organizations use real-time monitoring are 34% more likely to report improvements in revenue growth and 65% more likely to report improved cost savings.

On average, organizations surveyed have already implemented seven RAI measures, and among those yet to act, the vast majority plan to do so. Across all measures, fewer than 2% of respondents reported having no plans for implementation.

“The widespread and increasing costs of unmanaged AI underscore a critical need for organizations to embed practices deep within their operations to not only reduce risks but also accelerate value creation,” commented Raj Sharma, EY Global Managing Partner, Growth & Innovation. “This is not simply a compliance exercise; it is a driver of trust, innovation, and market differentiation. Enterprises that view these principles as a core business function are better positioned to achieve faster productivity gains, unlock stronger revenue growth, and sustain competitive advantage in an AI-driven economy.”

EY’s survey responses were gathered in August and September 2025 from 975 C-suite leaders across 11 roles and 21 countries spanning the Americas, Asia-Pacific, Europe, the Middle East, India and Africa. All respondents had some level of responsibility for AI within their organization and represented organizations with over US$1 billion in annual revenue across all major sectors.

Other findings include:

Inadequate controls for AI risks lead to negative impacts: Almost all (99%) organizations surveyed reported financial losses from AI-related risks, with nearly two-thirds (64%) suffering losses of more than US$1 million. On average, the financial loss to companies that have experienced risks is conservatively estimated at US$4.4 million. The most common AI risks are non-compliance with AI regulations (57%), negative impacts on sustainability goals (55%) and biased outputs (53%).

C-suite knowledge gaps in identifying appropriate controls: On average, when asked to identify the appropriate controls against five AI-related risks, only 12% of C-suite respondents answered correctly. Chief risk officers, who are ultimately responsible for AI risks, performed slightly below average (11%). As agentic AI becomes more prevalent in the workplace and employees experiment with citizen development, the risks — and the need for appropriate controls — are only set to grow.

Citizen developers highlight governance and talent readiness gaps: Organizations face a growing challenge in managing “citizen developers” — employees independently developing or deploying AI agents. Two-thirds of surveyed companies allow this activity in some form, yet only 60% of those companies provide formal, organization-wide policies and frameworks to ensure these agents are deployed in line with responsible AI principles. Half also report that they do not have a high level of visibility into employee use of AI agents.

Companies that actively encourage citizen development were more likely to report a need for talent models to evolve in preparation for a hybrid human-AI workforce. These organizations cite the scarcity of future talent as their top concern.

The full RAI survey findings are available on EY’s website.

Source: EY
