AI is Great – Until You Are Liable for it
You have probably heard of AI hallucinations – when the AI makes things up. Sometimes the risk is low, but other times, not so much.
Slowly the courts are weighing in on the subject.
In Moffatt v. Air Canada, the plaintiff asked for information about the airline’s bereavement policy. The chatbot told him he could submit a form within 90 days and get a partial refund.
It also included a link to the airline’s bereavement policy.
Moffatt took the chatbot at face value and submitted a claim. The airline denied it, saying the policy required bereavement fares to be requested before travel.
Moffatt hadn’t read the policy; he trusted the chatbot. Since the airline dug in (instead of fixing the software and refunding him a modest sum), he took the case to small claims court after several months of fighting with them.
The airline tried to claim that the chatbot was a separate legal entity (really? Can you show me where this chatbot filed for legal entity status and was accepted as such? I didn’t think so).
The court held the airline liable for negligent misrepresentation. The company was responsible for the accuracy of the information on its website, and the chatbot was, in the court’s view, part of that website.
It turns out the chatbot was an older tool that didn’t actually use AI, but the case still illustrates the liability risks that AI chatbots create.
Even if the company had tried to limit its liability in its terms of service, the court would consider whether that language is comprehensible, clearly defined, and meets legal requirements.
In another case, a mother sued Character.AI after its chatbot allegedly encouraged her teenage son to harm himself by cutting and suggested killing his mom for limiting his screen time. He is a high-functioning autistic teen who has since been spiraling downward.
Finally, in yet another case against Character.AI, a different family is suing the company for hosting a chatbot they say caused the suicide death of a 14-year-old. The company hosts thousands of chatbots. The family is seeking the profits the company made as well as an order forcing the removal of chatbots that are not safe.
The bottom line is that the courts are not buying the “devil made me do it” or other incoherent defenses. While “the jury is still out” on how this will ultimately play out, companies would be wise to consider the legal ramifications of their use of AI, certainly when it comes to chatbots. At a minimum, this will likely require very clear disclaimers.
Credit: Gluckstein Law Firm and MSN and Popular Science