On Thursday, June 6, U.S. Treasury Secretary Janet Yellen spoke at a Financial Stability Oversight Council (FSOC) meeting on the implications of artificial intelligence (AI) for financial stability. She highlighted the potential risks financial institutions face when using AI technologies, marking a significant moment in her ongoing discourse on the topic. A Treasury official noted that Yellen has personally experimented with an AI chatbot.
The two-day conference, co-hosted with the Brookings Institution, included speakers from key regulatory bodies like the Office of the Comptroller of the Currency (OCC). The event aimed to explore the systemic risks AI poses to the financial services sector while fostering innovation alongside effective regulation. Attendees comprised executives from regulatory agencies, technology firms, insurance companies, asset management firms, academics, and bank representatives.
Yellen identified several critical risks AI introduces to the financial industry, particularly concerning market volatility and crowded positions. She emphasized the Biden administration's prioritization of AI and financial stability, predicting the topic's escalating importance in the coming years. The challenge of harnessing AI's benefits while managing risks has become a focal point for both the Treasury and the FSOC.
Yellen acknowledged AI's advantages in finance, such as improved portfolio management through predictive analytics and enhanced fraud detection. Additionally, AI can automate customer service, potentially reducing costs and making financial services more accessible. "When used appropriately, AI can boost efficiency, accuracy, and accessibility of financial products," she stated.
However, Yellen cautioned about the significant risks tied to AI use in finance. These risks include vulnerabilities linked to the complexity and opacity of AI models, insufficient risk management frameworks, and the peril of many market participants depending on the same data and models. A concentrated number of suppliers for models and cloud services can further heighten risks involving third-party providers. Furthermore, inadequate or flawed data may perpetuate existing biases or introduce new biases into financial decision-making.
Analysts suggest Yellen's warnings extend to the potential for overcrowded positions in popular trades if many investors depend on AI tools yielding similar outcomes. Should only a few firms provide AI models, any issues at those companies could have widespread repercussions across the financial sector.
While Yellen did not specifically address AI-generated inaccuracies, known as "hallucinations," she confirmed the Treasury's active use of AI to combat financial crimes. FSOC officials indicated that the council is working to understand how AI could undermine the financial system, including through increased monitoring of AI applications in the sector.
Yellen also noted the Treasury's efforts to tackle illicit financial activities such as money laundering and terrorist financing, which it considers major threats. The IRS, for its part, is using AI to enhance its detection of tax-evasion fraud.
Looking ahead, U.S. government and financial regulators plan to strengthen their capacity to utilize advanced technologies and improve their understanding of AI in financial services. They will seek public input from market participants, consumers, scholars, and the broader community regarding AI's applications in finance. Additionally, the Treasury is organizing a roundtable on AI in the insurance industry, is weighing consumer protection measures against AI-driven loan discrimination, and will engage with international regulatory bodies to monitor AI's effects on the global financial system.
Yellen emphasized that the FSOC and its members will enhance their AI regulatory capabilities while promoting information sharing and dialogue. They will build on existing risk management guidelines and introduce scenario analysis to identify future vulnerabilities and strategies to improve resilience against emerging challenges.
The FSOC first raised concerns about AI's potential threat to the financial system in its annual report last December, labeling the technology's widespread use an "emerging vulnerability." The report addressed risks related to AI, including cybersecurity threats, compliance issues, and consumer privacy concerns, while cautioning that complex AI models like ChatGPT can produce flawed outcomes and operate as opaque "black boxes."
Yellen reiterated the Treasury's commitment to addressing operational risks, cybersecurity threats, and fraud challenges associated with AI. The report underscored the need for developers, financial industry users, and regulators to remain vigilant as AI methods become increasingly complex, making errors and biases more challenging to identify and correct.