American regulators are ramping up their scrutiny of generative AI. The Federal Trade Commission (FTC) has opened an investigation into OpenAI, the creator of ChatGPT and DALL-E. Officials are examining how the company manages the risks associated with its large language models.
The FTC is particularly concerned that OpenAI may be violating consumer protection laws through "unfair or deceptive" practices that could jeopardize public privacy, security, and reputation. This investigation includes a focus on a bug that exposed sensitive data from ChatGPT users, such as payment information and chat histories. Although OpenAI reported that only a small number of users were affected, the FTC is apprehensive about potential lapses in security.
Additionally, the FTC is seeking information regarding complaints about AI-generated false or malicious statements and insight into users' understanding of the accuracy of the products they use.
OpenAI has not yet commented, and the FTC, for its part, typically does not discuss ongoing investigations. The agency has previously cautioned, however, that generative AI could run afoul of the law if it causes more harm than good, such as by enabling scams, misleading marketing, or discriminatory advertising. The FTC has the authority to levy fines or issue consent decrees against companies found in violation.
Although AI-specific regulations are not expected soon, the government is intensifying its scrutiny of the tech sector. OpenAI CEO Sam Altman recently testified before the Senate, defending the company's privacy and safety measures while highlighting the benefits of AI. He gave assurances that protections are in place and that OpenAI will remain "increasingly cautious" in strengthening its safeguards.
It remains unclear whether the FTC will investigate other generative AI companies such as Google and Anthropic. Either way, the OpenAI inquiry offers a potential blueprint for how the agency may handle similar cases, signaling its intent to closely monitor AI developers.