A recent public challenge could halt the deployment of ChatGPT and similar AI systems. The nonprofit Center for AI and Digital Policy (CAIDP) has filed a complaint with the Federal Trade Commission (FTC), accusing OpenAI of violating the FTC Act with its large language models, such as GPT-4. CAIDP claims that the model is "biased" and "deceptive," posing risks to privacy and public safety. The group further asserts that it fails to meet the Commission's guidelines for transparency, fairness, and explainability.
CAIDP is urging the FTC to investigate OpenAI and to suspend future releases of large language models until they meet the agency's standards. The group also calls for independent reviews of GPT products before launch and advocates for the FTC to establish an incident reporting system and formal standards for AI generators.
Marc Rotenberg, CAIDP's president, is among those who signed an open letter requesting a six-month pause on AI development to facilitate ethical discussions, a letter that also includes signatures from industry figures like Elon Musk.
Critics of systems like ChatGPT and Google Bard highlight issues such as inaccurate information, hate speech, and inherent bias. CAIDP states that users may encounter unreliable outputs, and even OpenAI acknowledges that AI can "reinforce" false ideas. Although newer models such as GPT-4 are more reliable than their predecessors, concerns remain about users accepting AI outputs without verification.
While there is no guarantee that the FTC will act on the complaint, any new requirements could significantly affect the entire AI industry. Companies may face development delays and increased scrutiny if their models fail to meet FTC standards. Such oversight could enhance accountability, but it also risks slowing the rapid advancement of AI technology.