27 November 2023

Generative AI – a licence to libel?


The rise of generative AI has brought into focus the reputational and financial harm that companies and individuals can suffer when AI chatbots generate and publish inaccurate content about them. David Engel, who leads our Reputation & Information Protection Practice, examines who is liable for defamatory content generated by AI.

Any individual or company seeking to protect their reputation from false allegations in the online world already has a number of issues to contend with. But a new threat has now emerged in the shape of generative AI, and the use of content created by chatbots such as ChatGPT.

In June 2023, for example, Georgia radio host Mark Walters started legal proceedings in the US against OpenAI, the company behind ChatGPT, after the bot answered a query from a journalist by stating (incorrectly) that Mr Walters had been sued for "defrauding and embezzling funds".

Meanwhile, ChatGPT falsely accused a law professor at George Washington University of sexually assaulting and harassing students, and an Australian mayor of having served jail time for bribery.

In its recent White Paper, the Government has made clear that there are no plans to create new AI-specific legal rights in the UK, so victims of an AI libel are left with the same rights and remedies as they would have in relation to a defamatory newspaper article or website. 

The technology therefore raises some interesting questions which will doubtless play out in the Royal Courts of Justice in the coming years:

Who gets sued?
What makes a statement defamatory?
Does it matter in the real world?
Can AI platforms disclaim liability for defamation?

This article was first published by Digitalis.
