"Reduce Generative AI Hallucinations with Hybrid AI: Insights from Applied Intelligence Live! Austin 2023"

Before generative AI can be effectively implemented in enterprises, it must address a significant concern: hallucinations. The term refers to the tendency of AI models to fabricate nonsensical information or provide inaccurate responses. A recent study by Tidio reveals that 86% of users have encountered hallucinations while using chatbots such as ChatGPT and Bard. Despite this, an impressive 72% of users still express trust in these AI systems.

In highly regulated sectors like finance, the risks of using AI for investor inquiries are especially acute, since a model may dispense incorrect financial advice. To mitigate these risks, Alexander Nettekoven, CEO of Multi AI and developer of an AI research assistant, advocates for hybrid AI. Speaking at the Applied Intelligence Live! Austin 2023 conference, he cautioned that while general-purpose models such as ChatGPT excel at certain tasks, it is a mistake to rely on them for every problem.

To tackle the hallucination issue, Nettekoven recommends integrating large language models (LLMs) with additional AI systems. "You need to leverage the strengths of each AI," he explains. For instance, logic-driven components can constrain decision-making and make generative output more reliable, and rules-based methods can be tailored to specific applications. At Multi AI, a core principle is that critical information is never generated by the LLM alone; instead, the model is directed to retrieve it from verified databases.

"The solution at the end of the day is to construct this hybrid AI," Nettekoven asserts.

Pratik Gautam, vice president and lead technical product manager at Citi, agrees that hallucinations remain a persistent problem, especially in finance. Rather than having the AI prescribe specific securities to purchase, he suggests grounding guidance in established values and the client's historical data, which minimizes the risk of misinformation.

Srimoyee Bhattacharya, a senior data scientist at Shell, emphasizes the critical importance of providing accurate answers, particularly due to the heightened litigation risks in their industry. Nonetheless, generative AI plays a vital role in efficiently sifting through vast amounts of data, generating insights that benefit internal staff.

When selecting which business use cases to prioritize for AI implementation, Bhattacharya advocates for evaluating projects that promise the highest return on investment. One innovative project at Shell is the 'offer decision engine,' which tracks consumer purchases across 45,000 gas stations globally. This system not only recommends products but also alerts consumers about when to refuel.

Furthermore, Shell is utilizing AI to decrease emissions by optimizing transportation routes for buses and trucks. It also employs advanced AI for remote sensing, capable of analyzing 750,000 oil samples to monitor oil quality and signal when equipment requires maintenance.

At Citi, AI is empowering clients with deeper insights into securities, including exchange-traded funds (ETFs), while also facilitating access to important regulatory filings. Additionally, customer service benefits from smart chatbot technology, enhancing interaction efficiency.

As organizations explore the potential of generative AI, prioritizing accuracy and leveraging hybrid solutions can significantly enhance reliability and trust in these powerful tools.
