FTC investigating ChatGPT-maker OpenAI for possible consumer harm
The civil investigative demand asks OpenAI to explain how it obtains information to train its large language models.
OpenAI CEO Sam Altman. (Win McNamee | Getty Images)
The Federal Trade Commission is investigating ChatGPT-maker OpenAI to determine whether the company has violated consumer protection laws.
The Washington Post, which first reported the news, published the FTC's 20-page civil investigative demand, a document similar to a subpoena that outlines the key focuses of the probe. A source familiar with the matter confirmed the authenticity of the document to CNBC. The FTC declined to comment.
The FTC says in the document that the probe will focus on whether OpenAI has "engaged in unfair or deceptive privacy or data security practices" or "engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm, in violation of Section 5 of the FTC Act."
Artificial intelligence has become a hot issue in Washington, with lawmakers trying to understand whether new laws are needed to protect intellectual property and consumer data in the age of generative AI, which requires massive datasets for training. The FTC and other agencies have emphasized that they already have legal authority to pursue harm created by AI.
The probe is also an example of the FTC being proactive in its oversight of a relatively nascent technology, in line with Chair Lina Khan's stated goal of being "forward-looking" and paying attention to "next-generation technologies."
The CID asks OpenAI to list the third parties that have access to its large language models and its top ten customers or licensors, and to explain how it retains and uses consumer information, how it obtains information to train its LLMs and more. The document also asks how OpenAI assesses risk in its LLMs and how it monitors and handles misleading or disparaging statements about people.
The CID asks OpenAI to provide information about a bug the company disclosed in March 2023 that "allowed some users to see titles from another active user's chat history" and "may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window."
OpenAI CEO Sam Altman has mostly received a warm welcome in Washington so far, with lawmakers praising his openness to discussing the technology and his calls for regulation around it. But some AI experts have warned that policymakers should also be aware that the company has its own incentives in articulating its vision of regulation, and have urged them to engage a diverse set of voices.
Altman wrote on Twitter that "it is very disappointing to see the FTC's request start with a leak and does not help build trust."
"[W]e built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. we protect user privacy and design our systems to learn about the world, not private individuals," he added. "[W]e're transparent about the limitations of our technology, especially when we fall short."