If ChatGPT Libels You, Who Do You Sue? – It Is A Problem Already
Everyone is in love with Large Language Model AIs, but they are far from perfect.
Someone suggested that ChatGPT is “mansplaining as a service” (MaaS). The term is perhaps somewhat pejorative (to either the GPTs or to men), but it is fairly accurate.
A couple of months ago, a professor at UCLA asked ChatGPT to give five examples of sexual harassment by U.S. law professors, with quotes from relevant newspaper articles to support them.
The AI did as requested and came up with five stories, including one about Professor Jonathan Turley, a constitutional law professor at George Washington University School of Law.
The AI bot said that Turley had sexually harassed a female student on a school trip to Alaska. There were just a few problems:
- Professor Turley never taught at the school the AI claimed he taught at
- The supposed trip never occurred
- The AI referenced a Washington Post article that doesn’t exist
- The AI provided a quote from the article that was never written
Other than that, ChatGPT was perfect. News outlets were later able to reproduce the results.
Turley reached out to OpenAI, which, at least so far, has ignored him.
As a law professor (who likely has a lot of practicing lawyer friends) and an outspoken speaker on the dangers of AI, he might be inclined to sue OpenAI.
If someone had run with the story, it certainly could have damaged his reputation and career.
It turns out that three of the five sexual harassment accusations that ChatGPT produced were made up. While batting .400 might be okay in baseball, it certainly is not when it comes to libeling people.
OpenAI would likely claim Section 230 immunity, but Section 230 only shields a provider from liability for content created by a third party, and here no third party created the content. Sure, OpenAI's model pointed to third-party sources as references, but since those references were themselves made up, that defense might not hold up.
The other problem with a potential lawsuit is that Turley is a public figure, so for libel he would have to prove actual malice, as the Supremes held in New York Times Co. v. Sullivan.
But let's assume a slightly different scenario:
A. The subject is an average person, not a public figure (and, like Turley, knew nothing about any of this)
B. Someone issued a GPT prompt and the GPT made up an answer. Let's assume the answer implicated the subject in a crime
C. The person who issued the prompt then published a story based on the GPT results and the fake quotes, without checking them
D. After the story was published, Google indexed it
E. After that, the subject's name came up in Google searches
F. Finally, the subject was turned down for jobs and/or lost customers after prospective employers and clients Googled him and found the adverse information
Who gets sued now?
Likely the person who wrote the story and the publisher; both could be in trouble.
But are the courts ready to deal with this? I doubt it. Will the courts have to deal with it? Soon? Very likely.
What if someone gets physically harmed after step (F) above? It kind of escalates then. Are the author and publisher accessories to murder?
It is going to happen sooner or later.
There is no way to stop this freight train, but the courts will get the job of dealing with it, even if the law, the lawyers, and the judges are completely unprepared to do so.
Credit: Fox