The chatbot generated a plausible-sounding refund policy that contradicted the airline's actual policy. Air Canada deployed the chatbot as a customer-facing policy authority without implementing guardrails to prevent policy confabulation.
First major ruling establishing that a company is liable for its AI chatbot's statements. Accelerated enterprise AI governance and disclaimer requirements, and sharpened the question of how far a customer-facing chatbot's authority extends.
## The Incident
In November 2022, Jake Moffatt's grandmother died. He needed to fly from Vancouver to Toronto for the funeral. He consulted Air Canada's customer service chatbot before booking.
The chatbot told him that Air Canada offered bereavement fares, and that he could purchase a full-price ticket, travel, and then apply for the bereavement discount retroactively within 90 days.
He did exactly that. He submitted the retroactive refund request. Air Canada denied it — because the policy the chatbot described did not exist. Air Canada's actual bereavement policy required the discounted fare to be applied at the time of booking, not after the fact.
Moffatt took Air Canada to British Columbia's Civil Resolution Tribunal. Air Canada argued that its chatbot was effectively a separate entity, that the airline was not responsible for information the chatbot provided independently, and that Moffatt should have checked the official website.
The Tribunal ruled against Air Canada.
## The Legal Finding
The Tribunal's reasoning: Air Canada deployed the chatbot as a customer service interface. Customers interacting with that interface are entitled to expect that the information it provides reflects the company's actual policies. The chatbot is not a separate entity; it is Air Canada's representative, and Air Canada is liable for what its representative says.
This was the first major tribunal ruling to establish that a company is liable for its AI chatbot's incorrect statements, regardless of whether the model generated those statements autonomously.
## The Pattern
This incident is Ambient Authority (Law II) at the deployment layer: the chatbot was granted customer-facing authority to speak about Air Canada's policies. No constraint was placed on what policies it could describe. The chatbot generated a plausible-sounding policy and stated it as fact. The authority to speak on behalf of Air Canada was real. The policy was fabricated.
This connects to [The Confident Confabulator](/exhibits/the-confident-confabulator): the chatbot did not know that the retroactive policy didn't exist. It generated the most plausible-sounding description of a bereavement travel policy, given its training. The plausibility of the output was not correlated with its accuracy.
## The Archaeologist's Note
Air Canada's legal argument — "the chatbot is a separate entity, not our responsibility" — is the single most important statement in the museum's AI section. It is the moment a corporation deployed an AI customer service agent, allowed it to state policy as fact, charged a customer based on that policy, denied the claim, and then argued in court that the agent's statements were not the company's statements.
The Tribunal's rejection of that argument is the beginning of AI liability jurisprudence. Every enterprise deploying a customer-facing AI agent now operates in the shadow of this ruling.
Katie's Law: The shortcut was deploying a chatbot that could say anything about your policies without verifying that what it said matched your policies. Because implementing that verification was hard. The Tribunal ruled that the shortcut cost exactly what it should have.