
OpenAI Faces Defamation Suit After ChatGPT Fabricates Lawsuit Info

Armed America Radio host Mark Walters is suing OpenAI after its software fabricated a court filing claiming that he had been sued for embezzling funds from the Second Amendment Foundation, a gun rights group.

A so-called journalist stumbled onto this while using ChatGPT for research and relied on the hallucination.

The journalist even asked ChatGPT to point out the specific paragraphs that mentioned Walters, and ChatGPT simply invented paragraphs implicating him.

Needless to say, Walters is not pleased.

But the question is whether OpenAI is responsible.

OpenAI’s terms of use say the tool is prone to hallucinating. Is that sufficient?

If the owner's manual of your car says that it is prone to burst into flames, does that absolve the carmaker of responsibility for making a defective product?

On the other hand, Walters' case might be weakened because he didn't ask OpenAI to remove the information. I have no clue how one would do that, whether it is even possible, or whether it would undo the damage to Walters' reputation or appease his radio show's advertisers.

Section 230 might provide a shield for OpenAI – but likely not for the journalist who used it.

But Section 230 might not protect OpenAI if it materially contributed to the alleged unlawfulness of the material.

Since OpenAI admits that its software sometimes hallucinates, this could be an important factor.

Then again, a newspaper that publishes something false is not necessarily liable if it had no reason to think the information was wrong.

This will be an interesting case to watch – especially for OpenAI. It is also why companies need to have clear AI use policies.

Credit: ArsTechnica

