Most businesses expect that artificial intelligence (AI) will save money and promote efficiency as it continues to be adopted and implemented across various industries. But what these companies may not expect – and AI cannot predict – are the myriad ways that AI will result in new and increasing potential for liability. From addiction-forming or defamatory chatbots to automated hiring tools with discriminatory impact, AI-driven technologies are being used in contexts where errors, bias, or misuse can lead to real legal, financial, and personal consequences. As AI tools become increasingly embedded in everyday operations, companies must start thinking about how to manage AI-based risks, and whether their current insurance programs will respond when something goes wrong.
While many AI-related exposures should fall within the scope of existing coverage, insurance companies are beginning to introduce AI-specific products, exclusions, and endorsements that could change how coverage applies going forward. This article examines the risks associated with AI, how traditional insurance policies can respond to those risks, and what new products and policy language are emerging.
What’s New About AI Risk?
AI poses a category of risk that differs from that of traditional software systems – particularly given its current level of generative ability. While software errors and vulnerabilities are familiar concerns, the failures of AI systems, and the repercussions of such failures, are far less predictable, less controllable, and more difficult to assess and remediate than those of traditional software.
Traditional software tends to be rule-based in that, given a specific input, it produces a consistent and repeatable outcome. AI systems, on the other hand, generate responses based on patterns learned from large data sets, and their outputs can vary significantly even with highly similar inputs. This variability in AI outputs stems from the complex, and oftentimes opaque, processes that govern their decision-making. AI models learn from vast datasets, internalizing patterns, assumptions, and correlations that are not always visible or known to human users or developers.
Depending on the particular model, algorithm, and inputs, AI-produced outputs can be not only inconsistent but also flawed, biased, or outright incorrect. A generative AI model can produce a correct result with one prompt, yet a highly similar prompt asked later can produce a completely different and incorrect result. The model will often present both responses with equal confidence, making it difficult for users to distinguish between reliable and unreliable information.
This inconsistency is concerning because AI tools are increasingly being deployed in high-stakes, public-facing settings. Businesses are using AI to respond to customers, make recommendations, automate decision-making, and even draft legal or financial documents. When these systems malfunction, the consequences extend beyond embarrassment or inconvenience.
There have already been many examples of these risks playing out. Dozens of lawyers across the United States have been sanctioned for submitting court briefs with fake case citations “hallucinated” by AI. AI systems have also produced false and defamatory material through hallucinations, leading to litigation. See, e.g., Walters v. OpenAI, L.L.C., 2024 U.S. App. LEXIS 7643 (11th Cir. Apr. 1, 2024); Complaint, Battle v. Microsoft Corp., No. 1:23-cv-01822 (D. Md. July 7, 2023); Eiswert v. Darien, No. 1:25-cv-00226-RDB (D. Md. Jan. 21, 2025).
Further complicating matters, AI performance is not static. Systems are updated or retrained over time, which can result in shifts in behavior. A model that was highly accurate at deployment may later become less reliable as it incorporates new data. And without transparency into how the model evolves, businesses may be wholly unaware that risk exposure has increased until a failure occurs. Accordingly, as AI proliferates, mitigating AI risks is critical for any business’s ongoing sustainability. And one key method to protect against AI liabilities is through both existing and new insurance products.
Existing Insurance Policies Should Cover AI-Related Liabilities
As companies adopt AI tools to automate services, generate content, and support decision-making, an important question they should be asking is whether traditional insurance policies cover the risks that come with these technologies. While some insurance companies have begun offering AI-specific policies or endorsements, many common AI-related risks should be addressed under existing insurance policies, depending on the specific policy terms, endorsements, and exclusions.
For instance, Technology Errors and Omissions (Tech E&O) policies typically aim to protect tech companies against risks triggered by mistakes, negligence, or other failures associated with their tech products or services. In its current form, Tech E&O applies to many situations involving AI. For example, if a software company develops and licenses an AI tool that produces inaccurate results or fails to function as advertised, the affected client may bring a claim against the developer for damages. If the AI system is part of the insured’s covered technology service or product, a Tech E&O policy should respond and provide coverage.
Cyber insurance is another likely existing source of coverage for AI liabilities. Cyber insurance primarily is designed to address risks related to data security, privacy, and network failures. These policies typically cover events such as unauthorized access to systems, data breaches, ransomware attacks, and the legal or regulatory consequences that follow. Most cyber policies also provide coverage for costs associated with breach response, including notification, credit monitoring, and forensic investigation. In the AI context, cyber insurance may apply when an AI system exposes personal or sensitive data due to a security flaw or unauthorized use. Importantly, in their current form, most cyber policies do not contain express exclusions for AI-related incidents.
Directors and Officers (D&O) insurance provides coverage for claims made against a company’s directors, officers, or employees, or against the company itself. Such claims may be based on mismanagement, breach of fiduciary duty, or violations of securities laws. In the context of AI, companies may face securities litigation if they make misleading statements about their use or development of AI. For example, if a company announces that it is implementing advanced AI capabilities in its products but in reality has not developed such capabilities, shareholders may allege that leadership misrepresented material information. The SEC has brought charges against companies alleging this so-called “AI washing.”
Other types of insurance that also may apply to AI-related risks, depending on the specific facts of the claim or case, include Commercial General Liability (if there is bodily injury, property damage, or personal or advertising injury), traditional Errors and Omissions Insurance, or Employment Practices Liability Insurance. Thus, any company facing potential AI-related risks or liabilities should always look to its existing insurance program as a potential source of protection.
New AI-Specific Products, Exclusions, and Endorsements
Although many AI-related risks can be addressed under existing insurance policies, some insurance companies have begun introducing AI-specific products, endorsements, and exclusions.
AXA recently released an endorsement for its cyber policies that directly addresses risks associated with generative AI, stipulating coverage for a “machine learning wrongful act.” Coalition, another cyber insurance company, expanded the definition of a security failure or data breach to include an “AI security event” and “expands the trigger for a funds transfer fraud (FTF) event to include fraudulent instruction transmitted through the use of deepfakes” or other AI technology. In contrast, some insurance companies seem to be exploring more restrictive approaches. A new Berkley-drafted exclusion, purportedly intended for use across its D&O, E&O, and fiduciary liability policies, would bar coverage for a broad array of artificial intelligence claims.
As to new products, Armilla – a startup backed by Lloyd’s of London – has developed a dedicated insurance product intended to cover financial losses tied to underperforming or malfunctioning AI models. Coverage is said to include damage from hallucinations, model drift, mechanical failures, and other deviations from expected behavior. According to Armilla, its product would have applied in incidents like the Air Canada chatbot claim, in which Canada’s Civil Resolution Tribunal ordered Air Canada to honor a discount invented by its customer service chatbot.
These developments reflect the growing effort by insurance companies to reallocate and price AI-related risk on their own terms – whether that be through revised definitions, new exclusions, or standalone products. As these changes continue, policyholders should assess whether new terms actually expand their pre-existing coverage or simply reframe existing risks under more restrictive or expensive conditions.
Conclusion
While artificial intelligence is introducing some new areas of risk, in most cases, those risks are not entirely new to the world of insurance. Many potential AI-related liabilities can still be addressed under existing policies. At the same time, insurance companies are beginning to release new AI-specific products, endorsements, and exclusions. While some of these products are framed as steps to cover “emerging” exposures, they largely reflect insurance companies’ efforts to define and price AI-related risk. As insurance companies update policy language, introduce new definitions or terms, and propose exclusions, it is important for policyholders to evaluate whether those changes actually provide new value or, more likely, seek to limit coverage or increase premiums. For companies using AI tools in their operations, existing policies and future renewals must be reviewed carefully to maximize coverage and protection for burgeoning AI liabilities.

