Italy’s data protection agency has just hit OpenAI with a whopping €15 million fine. The reason? ChatGPT’s alleged mishandling of personal data. Yep, you read that right. The AI powerhouse is in the proverbial hot water for not playing by the rules when it comes to user privacy.
The Italian watchdog, known as the Garante, wrapped up an investigation that began in 2023. It found that OpenAI had used personal data to train ChatGPT without an adequate legal basis and hadn’t been transparent enough with users about it.
OpenAI isn’t taking this lying down, though. They’ve called the fine “disproportionate” and are gearing up to appeal. But that’s not all. The investigation also revealed that OpenAI didn’t have a solid age verification system, meaning kids under 13 could have been exposed to some sketchy AI-generated content. Double ouch.
As part of the fallout, OpenAI has been ordered to run a six-month awareness campaign in Italy to educate the public on how ChatGPT works and how it handles data. This isn’t the first time Garante has gone after ChatGPT. Last year, they briefly banned it over similar privacy concerns, only lifting the ban after OpenAI made some changes.
Despite OpenAI’s claims of having excellent privacy practices, the Garante’s hefty fine suggests otherwise. Under the EU’s GDPR, fines can reach €20 million or 4% of a company’s global annual turnover, whichever is higher, so some tech pundits argue OpenAI got off with a light punishment, if you can call €15 million light.