Generative AI is driving a significant transformation at Citi, enhancing data-driven decision-making. However, the bank has opted not to implement an external-facing chatbot due to the inherent risks involved.
Promiti Dutta, Citi's head of analytics technology and innovation, articulated this perspective during the AI Impact Tour in New York. She noted, “When I joined Citi four and a half years ago, data science and analytics were often secondary considerations. The advent of Generative AI marked a paradigm shift, placing data and analytics at the forefront. Suddenly, everyone was eager to explore AI solutions.”
Citi's Generative AI Priorities
Dutta shared how this cultural shift sparked enthusiasm for AI projects throughout the organization. Citi grouped its generative AI initiatives into three key areas, each chosen to deliver measurable results.
1. Agent Assist: Large language models (LLMs) aid call center agents by summarizing customer information and facilitating note-taking during interactions. Although not directly customer-facing, this application enhances agents' ability to address customer needs.
2. Task Automation: LLMs streamline manual processes, such as summarizing extensive compliance documents and helping employees locate necessary information.
3. Internal Search Functionality: Citi is developing a centralized internal search engine, enabling employees to obtain data-driven insights effortlessly. This tool will soon allow staff to generate analyses through natural language, improving efficiency across the organization.
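The first of these use cases, agent assist, follows a common pattern: wrap an LLM call in a prompt that turns a call transcript into structured agent notes. Citi has not published its implementation, so the sketch below is purely illustrative; `summarize_call` and the `llm` callable are hypothetical stand-ins for whatever model client such a system would use.

```python
# Hypothetical sketch of an "agent assist" summarizer. The llm argument is a
# stand-in for any text-completion client; names here are illustrative only,
# not Citi's actual stack.

def summarize_call(transcript: str, llm) -> str:
    """Build a note-taking prompt from a call transcript and return the model's summary."""
    prompt = (
        "Summarize this customer call for the agent's notes. "
        "List the customer's issue and any follow-up actions.\n\n"
        + transcript
    )
    return llm(prompt)

# Usage with a stub in place of a real model client:
stub = lambda p: "Issue: card declined. Follow-up: verify billing address."
note = summarize_call("Customer: my card was declined at checkout...", stub)
```

Keeping the model behind a simple callable like this also makes the human-oversight step easy to insert: the returned note can be shown to the agent for review rather than sent anywhere automatically.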
The Cautious Approach to External Engagement
While Citi embraces generative AI internally, Dutta cautioned against using LLMs for customer interactions, citing the potential risks. She highlighted concerns with LLM “hallucinations,” which, while beneficial in creative contexts, pose accuracy risks that are unacceptable in financial services. "In an industry where trust is paramount, we can't afford errors in customer interactions," she emphasized.
For now, Citi relies on pre-scripted natural language processing (NLP) methods for customer communication, a practice established before generative AI surged in late 2022.
Future Prospects for LLMs
Citi remains open to leveraging LLMs externally but wants to ensure all implementations include human oversight. Dutta mentioned that the highly regulated banking environment necessitates extensive testing before adopting new technologies. This measured approach contrasts with Wells Fargo, which actively uses generative AI for customer interactions through its Fargo virtual assistant.
Transforming Internal Processes
Citi's internal task force reviews generative AI initiatives to ensure they are deployed responsibly and keep customers safe. Dutta noted enthusiasm for generative AI across the organization and stressed the need to channel that excitement effectively.
Microsoft's Sarah Bird underscored the importance of stability in AI systems, sharing that the company is actively addressing LLM inaccuracies, particularly in applications employing retrieval augmented generation (RAG). Ongoing efforts aim to enhance the reliability of these models, which are essential for various applications.
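The RAG pattern Bird refers to grounds the model's answer in retrieved passages, leaving it less room to hallucinate. The sketch below is a minimal illustration of that pattern, not Microsoft's or Citi's implementation: retrieval here is naive keyword overlap, whereas production systems typically use vector embeddings.

```python
# Minimal sketch of retrieval augmented generation (RAG): fetch the passages
# most relevant to the query, then ask the model to answer only from them.
# The keyword-overlap scoring is a toy stand-in for embedding-based search.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query: str, docs: list[str], llm) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    prompt = (f"Answer using only this context:\n{context}\n\n"
              f"Question: {query}")
    return llm(prompt)
```

Constraining the prompt to retrieved context is what makes answers auditable: the sources the model saw can be logged and shown to a human reviewer, which fits the oversight requirement Dutta describes.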
At the event, Dr. Ashley Beecy from NewYork-Presbyterian highlighted how generative AI is reshaping healthcare through multimodal models, signifying a paradigm shift in patient care.
Conclusion
Citi is strategically navigating the generative AI landscape, focusing on internal enhancements while remaining vigilant about risks associated with customer-facing applications. This approach combines innovation with responsibility, ensuring that safety and customer trust remain paramount as the technology evolves.
For insights into future AI developments, join us at subsequent AI Impact Tour events in Boston on March 27 and Atlanta on April 10.