The Most Advanced Text-Generating AI Models: A Double-Edged Sword
Today’s leading text-generating AI models grapple with significant flaws. These models are known to "hallucinate," or fabricate facts, and they can exhibit various biases, including sexism, Anglocentrism, and racism. For instance, without careful moderation, OpenAI's GPT-4 can be coaxed into giving advice on self-harm, detailing methods for synthesizing dangerous substances, or wording ethnic slurs so they evade social media moderation.
These issues present considerable challenges for businesses aiming to integrate AI into their applications. According to a recent Gartner survey, 58% of companies express concern over the potential for inaccurate or biased outputs. A similar proportion worries about the risk of confidential information being leaked — another well-documented flaw of text-generating models.
Efforts to enhance AI models are ongoing. For organizations eager to utilize existing models, particularly open-source options, startups like Atla are stepping up to the plate. Co-founded by Maurice Burger and Roman Engeler, Atla is developing what Burger calls “guardrails” for text-analyzing and text-generating models in “high-stakes” environments.
Burger previously co-founded Syrup Tech, which focused on AI-driven inventory management for e-commerce. Engeler previously worked as an AI researcher at Stanford, where he studied the existential risks posed by text-generating systems.
“Atla’s mission is to construct safer AI technologies by enhancing their truthfulness, mitigating harmful outputs, and boosting reliability,” Burger stated. The company's inaugural product is a legal research model created in collaboration with teams from Volkswagen and N26, designed to respond to queries with citations from trusted legal sources.
Why prioritize AI for legal research? The demand is substantial, Burger asserts. Corporate lawyers frequently rely on external law firms to prevent mistakes, which can be both costly and time-consuming. It’s not unusual for legal professionals to spend hours poring over numerous documents to answer a single query—a task that an effective AI solution could dramatically lighten.
Burger expressed enthusiasm about the possibilities of generative AI: “We’re dedicated to enhancing the reliability of text-analyzing models in high-stakes scenarios.” He acknowledges that the goal is ambitious: even well-funded companies like Anthropic have struggled to curb bias and hallucination in their text-generating models, and Atla faces substantial hurdles ahead.
Moreover, Atla isn't alone in its pursuit of safer text-generating AI. Other startups, such as Protect AI, Fairly AI, and Kolena, along with more recent entrants like Vera and Calypso, are also tackling this issue.
Despite the competition, Atla has garnered attention from investors, securing $5 million in a seed funding round led by Creandum, with support from Y Combinator and Rebel Fund. Creandum partner Hanel Baveja noted, “From our initial meetings, we’ve been deeply impressed by the ambition, tireless work ethic, and extensive AI expertise of Maurice and Roman. We are excited to back Atla as they develop reliable and safe AI applications for the most critical sectors.”
Burger indicated that the new funding will go toward scaling Atla's technology and onboarding additional clients, as well as hiring for technical positions at its London operations.