ChatGPT creator investigated by US regulators over data privacy and wrong answers

The logo for OpenAI, the maker of ChatGPT, appears on a mobile phone in New York, Tuesday, Jan. 31, 2023. Copyright Richard Drew/AP
By Giulia Carbonaro

US regulators have asked OpenAI how it’s addressing the potential for ChatGPT to generate “false, misleading, disparaging or harmful” statements about real people.


The owner of ChatGPT, artificial intelligence company OpenAI, is once again under investigation by federal authorities in the US - this time over what it is doing to prevent the revolutionary technology from giving users wrong answers.

Since its launch in November 2022, many have pointed out that while ChatGPT can be extremely useful, it can also give irrelevant, false or incomplete answers.

OpenAI has acknowledged that ChatGPT is far from infallible, explaining that it can sometimes “hallucinate” outputs and make up facts. In short, ChatGPT, like any other machine or technology, can make mistakes.

But US regulators are still concerned.

The Federal Trade Commission (FTC) sent a letter to OpenAI this week, as first reported by the Washington Post, requesting precise information about how the company addresses the potential risk for ChatGPT “to generate statements about real individuals that are false, misleading, disparaging or harmful.”

US regulators are examining whether the company has put personal reputations and data at risk, running afoul of consumer protection laws.

Earlier this year, OpenAI chief executive Sam Altman warned US lawmakers about the risks inherent in ChatGPT, saying “if this technology goes wrong, it can go quite wrong”.

He called on the government to work with his company to “prevent that from happening”.

But Altman was “disappointed” by the latest probe into the company. “It is very disappointing to see the FTC's request start with a leak and does not help build trust,” he wrote on Twitter.

“We built GPT-4 [the latest version of ChatGPT] on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. We protect user privacy and design our systems to learn about the world, not private individuals,” he continued, adding that the company will still work with the FTC to make sure ChatGPT is “safe and pro-consumer”.

ChatGPT’s occasionally wrong answers are not the only thing US authorities will question the Microsoft-backed company about.

The FTC is also looking at how the company approaches data privacy and how it obtains data to train and inform its AI chatbot.

In April, Italy banned ChatGPT over privacy concerns, restoring access only after OpenAI provided more information about its privacy policy and added a tool to verify users’ ages.
