Silent AI risks drive product development and reinsurance review: DAC Beachcroft


The risk of silent AI is prompting insurers to update policy wordings, innovate products, limit riskier exposures, and review reinsurance arrangements to ensure these risks are adequately covered, according to law firm DAC Beachcroft.

“Silent AI” refers to risks associated with artificial intelligence (AI) that are neither explicitly included nor excluded in insurance policies, leaving room for potential coverage gaps.

The international law firm noted that policy wordings are evolving to reflect AI’s increasing role in day-to-day business operations. Policyholders and brokers continue to seek clarification on the adequacy of their insurance programmes and want policy wordings that explicitly address AI-related risks.

DAC Beachcroft expects the risk of silent AI to continue driving product development in 2026, alongside insurers seeking to limit or condition riskier AI-related exposures.

“Policyholders should review policies for AI-specific exclusions, ensure ethical AI practices, and stay informed about how insurers use AI in pricing, risk assessment and claims handling,” the firm said.

The law firm anticipates that insurers will increasingly look to their outward reinsurance arrangements to ensure that AI-related risks affirmed at the primary level are also adequately covered under their reinsurance programmes. Equally, reinsurers may seek to condition or exclude AI-related risks, which could impact how reinsurance claims are adjusted.

DAC Beachcroft highlighted that Agentic AI—systems made up of autonomous agents capable of independent interaction and decision-making—poses heightened data protection risks.

The firm said, “Although it brings notable benefits in terms of efficiency and innovation, representing another evolution beyond generative AI, it also introduces new challenges.

“Unlike some earlier AI systems, many typical agentic AI use cases rely heavily on processing personal data, including special categories of personal data or other sensitive categories such as financial information. Although many organisations have so far managed to apply governance controls to the use of generative AI in the workplace, the reduced human oversight evident in agentic AI significantly increases the challenge of implementing the same controls. As a result, data protection risks are likely to intensify.”

In addition, DAC Beachcroft warned insurers to be alert to fraudulent AI-generated evidence of loss.

“We have seen an increase in claim submissions that would historically be considered acceptable evidence of loss, but in fact have been created to generate a claim or inflate an otherwise legitimate one. This is not only impacting personal lines, where, for example, AI-generated photographs of damage are being created, but also commercial lines, where AI is being used to generate fake invoices and statements, among other things,” said DAC Beachcroft.

The firm expects this trend in fraudulent claims to increase in 2026, emphasising that insurers need to scrutinise submitted evidence, even from commercial organisations that appear successful and legitimate.

DAC Beachcroft also noted that regulation is struggling to keep pace with AI’s rapid evolution. Current frameworks were not designed for self-learning systems or generative models, leaving gaps around accountability, transparency, and bias.

“For now, regulators are watching closely, with guidance rather than enforcement. But history tells us that regulatory intervention often comes after the first high-profile failures or consumer harms. When that moment arrives, we can expect tighter controls on explainability, governance and oversight of AI. For the sector, the message is clear: use this breathing space to build robust controls now, before regulators mandate them,” said DAC Beachcroft.
