The growing use of artificial intelligence (AI) is exposing a serious risk: the technology can generate false or fabricated information, known as hallucinations, and these errors can have real-world consequences.
Recent cases show that the legal and financial fallout is already unfolding around the world.
In Australia, Deloitte agreed to partially refund the government for a AUD 440,000 report after it was found to contain multiple AI-generated errors.
The errors included misquoted court rulings and citations to academic papers that do not exist. A revised version of the report disclosed that Azure OpenAI had been used, though the core analysis remained unchanged.
Similar high-profile cases have emerged elsewhere.
In Germany, a magazine published an AI-generated interview with Michael Schumacher presented as genuine; the Schumacher family received €200,000 in compensation, and the editor was dismissed. Earlier, an Australian mayor pursued a defamation claim after ChatGPT falsely accused him of accepting bribes. In the U.S., lawsuits have been filed against Meta and Google over AI tools that spread false information about individuals.
These incidents point to a growing global concern: as generative AI becomes more widespread, its mistakes can damage reputations, cause financial losses, and erode public trust.
They also raise pressing questions about liability and accountability for AI-generated content.