If you haven’t heard of ChatGPT, you are living under a rock. This artificial intelligence chatbot has taken the world by storm and is being experimented with by millions of users. Developed by OpenAI, ChatGPT is a natural language processing tool driven by AI technology and trained to hold human-like conversations.
When a user writes a prompt, ChatGPT generates a detailed response to the question by drawing on data available across the internet. People have even used this writing tool to help draft their emails and essays.
Yet the risk of a landmark defamation lawsuit against OpenAI, the company behind ChatGPT, is unavoidable. A primary difficulty with this technology is that the chatbot is not programmed to differentiate between the truth and inaccurate data.
OpenAI has cautioned users about the chatbot’s limitations with a disclaimer indicating that it may generate ‘plausible sounding but incorrect or nonsensical answers’.
Recently, ChatGPT erroneously identified the whistleblower Brian Hood as a perpetrator who was ‘involved in the payment of bribes to officials in Indonesia and Malaysia’ and sentenced to prison for bribery and corruption. To say that the conclusion the AI drew was wrong is an understatement; it was not merely incorrect but ridiculous.
Brian Hood, now a Victorian mayor, expressed his view that the false claims by ChatGPT had a highly negative impact on his reputation as a prominent figure in the community. In response, his legal representative sent a concerns notice on 21 March 2023, now an essential first step before a plaintiff can commence defamation proceedings.
If the matter goes to court, Hood will need to prove that OpenAI is the publisher of the defamatory material and that the material has caused or is likely to cause serious reputational harm to him.
We believe this lawsuit will be difficult to decide because precedent holds that Google is not responsible for the content of the pages it links to.[1] The High Court of Australia held that Google merely provided a link allowing people to access the content of other websites.[2]
It is worth exploring whether this reasoning applies to AI chatbots like ChatGPT. Rather than linking to other people’s pages, ChatGPT draws on information scraped from across the internet to generate its own responses, sometimes reaching patently absurd conclusions, such as confusing the whistleblower with the perpetrator. Is the legal situation different when an AI itself reaches such a conclusion?
If heard in court, the matter would compel judges to evaluate whether the operators of AI bots can be held responsible for defamatory statements their systems produce.
Whether or not this case becomes Australia’s first landmark defamation case involving AI technology, similar issues are certain to arise. Significant reforms will likely be needed to regulate how AI chatbots are managed and controlled.
[1] Google v Defteros [2022] HCA 27.
[2] Ibid [53] (Kiefel CJ and Gleeson J).
*Disclaimer: This is for general information only and should not be construed as legal advice. The above information may change over time. You should always seek professional advice before taking any action.*