Anthropic Launches Claude 3 AI Models Designed for Business Efficiency

Anthropic has launched its Claude 3 family of models, the company's first multimodal releases, designed to address key business concerns around generative AI: cost, performance, and accuracy. The startup, backed by substantial investments from Amazon and Google, aims to compete with established players such as Microsoft and OpenAI.

The Claude 3 family comprises three models—Haiku, Sonnet, and Opus—each designed to accept both text and image inputs and return outputs as text. The models vary in capability, with Opus the most advanced, followed by Sonnet and Haiku. A standout claim is that all three outperform OpenAI's GPT-3.5 and Google's Gemini 1.0 Pro across a range of tasks, including knowledge assessment, reasoning, math, problem-solving, coding, and multilingual work. Notably, Opus even surpasses OpenAI's GPT-4 and Google's Gemini Ultra, showing “near-human levels of comprehension and fluency” on complex tasks, according to Anthropic's researchers.

The Claude 3 models ship with a 200,000-token context window, and selected clients requiring greater processing capacity can supply inputs of over one million tokens. In terms of pricing, Opus is the priciest at $15 per million tokens (MTok) for input and $75/MTok for output. By comparison, OpenAI's GPT-4 Turbo is cheaper at $10/MTok for input and $30/MTok for output, though it has a smaller context window of 128,000 tokens. Sonnet is priced at $3/MTok for input and $15/MTok for output, while Haiku, the most economical option, costs $0.25/MTok for input and $1.25/MTok for output; Anthropic positions it as exceeding GPT-3.5's performance but not matching GPT-4's.
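The per-MTok prices above translate into per-request costs as a simple proportion. A minimal sketch, using the quoted rates; the token counts in the example are illustrative assumptions, not benchmarks:

```python
def request_cost(input_tokens, output_tokens, in_price_per_mtok, out_price_per_mtok):
    """Return the dollar cost of one request at per-million-token prices."""
    return (input_tokens / 1_000_000) * in_price_per_mtok \
         + (output_tokens / 1_000_000) * out_price_per_mtok

# Example: a request with 10,000 input tokens and 1,000 output tokens.
opus_cost = request_cost(10_000, 1_000, 15.00, 75.00)       # $0.225
gpt4_turbo_cost = request_cost(10_000, 1_000, 10.00, 30.00)  # $0.13
haiku_cost = request_cost(10_000, 1_000, 0.25, 1.25)         # $0.00375
```

At this (assumed) request size, Opus costs roughly 60× more than Haiku, which is why the three-tier lineup matters for high-volume workloads.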

The models were trained on data collected through August 2023, but they can access more recent information through integrated search applications. Opus and Sonnet are currently available through claude.ai and the Claude API in 159 countries, with Haiku set to launch soon. For enterprise users, Sonnet is primarily accessible via Amazon Bedrock as a managed service and in private preview on Google Cloud's Vertex AI Model Garden, while Opus and Haiku will soon be available on both platforms.
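For developers reaching the models through the Claude API, a request is a JSON body naming a model and a list of messages. A minimal sketch of assembling such a payload; the model identifier and prompt are assumptions for illustration, so check Anthropic's documentation for current model names:

```python
def build_claude_request(prompt, model="claude-3-sonnet-20240229", max_tokens=1024):
    """Assemble the JSON body for a POST to the Messages endpoint."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

# payload = build_claude_request("Summarize the attached research paper.")
# Send the payload with any HTTP client, supplying your API key in the
# request headers per Anthropic's API documentation.
```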

Upcoming features for the Claude 3 models include function calling, interactive coding (REPL), and more advanced, agent-like capabilities. The models promise “near-instant responses” for applications such as live customer chats, auto-completions, and data extraction, where speed is crucial. For instance, Haiku can analyze dense research papers and graphs in approximately three seconds, while Sonnet shows improved speeds on knowledge-retrieval and sales-automation tasks.

Addressing another critical concern with generative AI—hallucinations, or incorrect outputs—Anthropic reports substantial improvements. The company says Opus is twice as accurate as Claude 2.1 at generating correct responses while producing fewer errors. Researchers assessed accuracy by counting correct answers, incorrect answers, and instances where the model appropriately indicated it did not know the answer. Furthermore, Opus is noted for “near-perfect recall,” maintaining context over long prompts with 99% accuracy, and for adhering closely to a brand's voice and guidelines in consumer-facing applications.
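The three-way evaluation described above—correct, incorrect, or an honest "I don't know"—can be sketched as a simple tally. The labels and example counts here are illustrative assumptions, not Anthropic's actual rubric:

```python
from collections import Counter

def score_answers(labels):
    """Tally per-question labels into the three buckets as fractions."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        "correct": counts["correct"] / total,
        "incorrect": counts["incorrect"] / total,
        "unsure": counts["unsure"] / total,
    }

# Example: 7 correct answers, 1 wrong answer, 2 honest refusals.
rates = score_answers(["correct"] * 7 + ["incorrect"] + ["unsure"] * 2)
# rates["correct"] == 0.7
```

The point of the third bucket is that declining to answer is scored differently from answering wrongly, which rewards models that admit uncertainty instead of hallucinating.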

Yet, it is important to note that the Claude 3 models do not retain prompts from past interactions, nor do they process links or identify individuals in images.

Founded two years ago by former OpenAI engineers, Anthropic has focused on creating responsible AI, guided by its “Constitutional AI” philosophy. This approach incorporates human values into the model's operational rules, aiming to eliminate outputs that are sexist, racist, or otherwise toxic, in line with principles such as the U.N.'s Universal Declaration of Human Rights. The company announced an additional commitment today to respect disability rights, aiming to reduce potential stereotypes and biases in generated content.

While the Claude 3 models are classified at AI Safety Level 2, signifying early signs of dangerous capabilities, Anthropic says they were trained solely on a mix of public online data, non-public data from third parties, and data the company generated itself, without accessing password-protected sites or circumventing CAPTCHAs. Moreover, user prompts and generated outputs are excluded from training datasets.

Anthropic has adjusted the caution levels of these models, allowing for improved context understanding. The Claude 3 models are now “significantly less likely” to decline requests that skirt the edges of their guidelines compared to previous iterations, enhancing usability while maintaining safety.
