Since its founding in 1914, the U.S. Federal Trade Commission (FTC) has protected consumers against fraud and deception. The agency has addressed issues such as "review hijacking" on Amazon, simplified the cancellation of magazine subscriptions, and blocked predatory ad targeting.
Recently, Michael Atleson, an attorney in the FTC's Division of Advertising Practices, articulated how generative AI systems like ChatGPT and DALL-E 2 could run afoul of the fairness standard established under the FTC Act. Under that standard, a practice is unfair when its harms outweigh its benefits: “It’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable and not outweighed by benefits to consumers or competition.”
Atleson highlighted the influence of chatbots like Bing, Bard, and ChatGPT on users' beliefs and behaviors. These technologies have already been deployed in roles that shape individual decisions, from intermediaries in supply networks to digital therapists. Automation bias may amplify these effects, since users tend to trust the output of AI systems that appear neutral and objective. "People may mistakenly believe they're interacting with an entity that genuinely understands and supports them," he argued.
While he acknowledged that issues raised by generative AI extend beyond the FTC's immediate jurisdiction, Atleson stressed that the agency will closely monitor companies that use the technology to take advantage of consumers. He cautioned that companies exploring novel uses of generative AI, particularly for targeted advertising, should be aware that deceptive design practices are a recurring focus of FTC cases involving financial promotions, in-game purchases, and service cancellations.
The FTC's rules also cover advertising within generative AI applications, much as they apply to the ads Google integrates into search results. "Consumers should be informed if an AI product directs them to specific websites or services due to sponsorship," Atleson stated. He emphasized that clarity is essential, especially about whether users are engaging with a human or an AI.
Finally, Atleson issued a pointed warning to tech firms: "In light of the significant concerns surrounding AI tools, it may not be wise for companies to eliminate staff focused on AI ethics and responsibility. If the FTC investigates and you wish to demonstrate that you have adequately assessed risks and mitigated potential harms, such cutbacks could be detrimental." For businesses weighing cuts to their responsible-AI teams, the message is that those decisions may later be weighed against them in an investigation.