Reka Launches Reka Core: A New Multimodal Language Model Competing with GPT-4 and Claude 3 Opus

Reka, an AI startup based in San Francisco and founded by researchers from DeepMind, Google, and Meta, has launched a new multimodal language model called Reka Core. This model, touted as the company’s “largest and most capable,” was trained from scratch using thousands of GPUs.

Available today via API, with on-premise and on-device deployment options, Reka Core is the third addition to the company’s family of language models. It handles multiple modalities, including text, images, audio, and video. Impressively, despite being trained in less than a year, its performance rivals that of models from industry giants like OpenAI, Google, and Anthropic.
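For developers weighing the API option, a minimal sketch of a chat request might look like the following. The endpoint URL, auth header, payload fields, and model name here are illustrative assumptions rather than documented details of Reka’s interface; the official API reference should be treated as authoritative.

import os
import requests

# Hypothetical sketch of a chat request to Reka Core. The endpoint,
# auth header, payload schema, and model identifier below are assumptions
# made for illustration; consult Reka's official API documentation for
# the actual interface.
API_URL = "https://api.reka.ai/chat"  # assumed endpoint
API_KEY = os.environ["REKA_API_KEY"]

response = requests.post(
    API_URL,
    headers={"X-Api-Key": API_KEY},  # assumed auth header
    json={
        "model_name": "reka-core",  # assumed model identifier
        "conversation_history": [
            {"type": "human", "text": "Summarize this report in three bullet points."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())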

“This ability to train high-performing models in a short time frame sets us apart,” said Dani Yogatama, co-founder and CEO of the 22-person company, in a recent interview.

Reka Core was tested on Netflix’s “3 Body Problem,” successfully describing onscreen action in text. Yi Tay, Reka’s chief scientist and co-founder, noted that the model was developed using “thousands of H100s.” Competing against leading models like OpenAI’s GPT-4 and Anthropic’s Claude 3 Opus is no small feat; even so, Tay said that Core’s performance is still improving.

What Does Reka Core Offer?

While the exact number of parameters in Reka Core remains undisclosed, it is described as a “very large model” (with the previous version, Reka Flash, containing 21 billion parameters). It was trained on diverse data sources, including licensed, publicly available, and synthetic data in text, audio, video, and image formats.

This comprehensive training allows Reka Core to process multiple modalities and respond accurately across domains such as mathematics and coding, with strong reasoning ability. It supports 32 languages and has a context window of 128,000 tokens (roughly 96,000 words of English text), making it well suited to lengthy documents. Yogatama noted that Core is only the second model, after Google’s Gemini Ultra, to cover all of these modalities and deliver high-quality outputs.

In performance tests, Reka Core surpassed Gemini Ultra in video perception, scoring 59.3 to Gemini Ultra’s 54.3. On the MMMU benchmark for image tasks, it scored 56.3, close behind GPT-4 (56.8), Claude 3 Opus (59.4), and Gemini Ultra (59.4); by comparison, the vision-capable Grok model from Elon Musk’s xAI scored 53.6.

Independent evaluations have ranked Reka Core as the second-best model for multimodal performance.

Moreover, Core matched or exceeded the performance of prominent models across various benchmarks. On the MMLU knowledge test, it scored 83.2, closely trailing GPT-4, Claude 3 Opus, and Gemini Ultra. It also outperformed GPT-4 on reasoning and coding tasks, scoring 92.2 and 76.8, respectively.

To attain such performance in a brief timeframe, the company adopted a backward development approach. Rather than training first and measuring afterward, it set a target performance level and reverse-engineered the data volume and GPU budget required to reach it, in the style sketched below.
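Reka has not published the details of this process, but a common way to do this kind of planning is with compute scaling laws, where training compute is approximated as 6 × N × D FLOPs for N parameters and D training tokens. The sketch below illustrates that style of estimate; the model size, token count, and per-GPU throughput are assumed values for illustration, not Reka’s actual figures.

# Hedged sketch: estimating the GPU budget for a training run using the
# standard approximation FLOPs ~= 6 * N * D (N = parameters, D = tokens).
# Every concrete number here is an illustrative assumption, not Reka's.

def training_gpu_days(params: float, tokens: float,
                      flops_per_gpu_per_s: float = 4e14) -> float:
    """GPU-days needed, assuming ~4e14 sustained FLOP/s per H100 (~40% utilization)."""
    total_flops = 6 * params * tokens
    gpu_seconds = total_flops / flops_per_gpu_per_s
    return gpu_seconds / 86_400  # seconds per day

# Hypothetical example: a 100B-parameter model trained on 2T tokens.
gpu_days = training_gpu_days(params=100e9, tokens=2e12)
print(f"~{gpu_days:,.0f} GPU-days total")
print(f"~{gpu_days / 1_000:.0f} days of wall-clock time on 1,000 GPUs")

Mapping a benchmark target back to a parameter count and token budget is the harder step and relies on empirical scaling curves, but arithmetic like this is what fixes the size of the GPU order.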

Partnerships and Future Plans

With a focus on multimodal capabilities and competitive pricing of $10 per million input tokens and $25 per million output tokens, Reka aims to explore diverse use cases across industries like e-commerce, gaming, healthcare, and robotics. For reference, OpenAI’s GPT-4 Turbo charges the same $10 for input tokens but $30 per million output tokens.
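At these rates, the difference shows up only on output tokens. A quick worked comparison, using made-up request sizes:

# Cost comparison at the published per-million-token rates (USD).
# The request sizes below are arbitrary examples for illustration.
PRICES = {
    "reka-core":   {"input": 10.0, "output": 25.0},
    "gpt-4-turbo": {"input": 10.0, "output": 30.0},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a long document (100,000 input tokens) with a 5,000-token summary.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 100_000, 5_000):.3f}")
# reka-core: $1.125    ($1.00 input + $0.125 output)
# gpt-4-turbo: $1.150  ($1.00 input + $0.150 output)

For output-heavy workloads, the $5 gap per million output tokens compounds; for input-heavy ones, the two are priced identically.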

Though still in its early stages, Reka is actively working to challenge the market dominance of OpenAI, Anthropic, and Google. The startup has already struck collaborations with industry partners: Snowflake recently incorporated Reka Core and Flash into its Cortex service for building language model applications, and partnerships with Oracle and AI Singapore, which unites Singapore-based research institutions, are also in place.

Yogatama mentioned that since the launch of the initial models in the Reka family (Flash and Edge), strong interest from enterprises has resulted in a growing customer pipeline. More details on partnerships are expected soon.

With its first year focused on bringing models to market, Reka now plans to enhance its offerings while scaling its business operations. The team is committed to improving Core’s performance while developing its next version in parallel.

Despite ongoing advancements, Yogatama clarified that the company has no immediate plans to open-source its technology. He advocates for open-source principles but emphasizes the need to balance what is shared to ensure sustainable business growth.

Reka Core's Competitive Landscape

With competitive benchmark results, broad multimodal coverage, and aggressive pricing, Reka Core enters the market as a credible challenger to leading models such as GPT-4, Claude 3 Opus, and Gemini Ultra.
