Apple Considers Partnership with Meta to Enhance AI Capabilities

Apple is reportedly in discussions with Meta to incorporate Meta's generative AI models into Apple Intelligence, following a recent collaboration with OpenAI. Announced at this year's Worldwide Developers Conference, Apple Intelligence is designed to bring generative AI features to Apple devices, letting users create and rewrite content seamlessly within Apple and third-party applications.

According to a report from the Wall Street Journal, these talks could lead to the integration of Meta's models, such as Llama 3, into Apple Intelligence. The collaboration would let users select their preferred AI model within Apple Intelligence, akin to AI search applications like Perplexity, where users can switch between different underlying models.
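The "pick your model" pattern described here is, at its core, a dispatch table that routes each request to the user's chosen backend. The sketch below is purely illustrative: Apple has not published such an API, and the provider names and callables are hypothetical stand-ins.

```python
# Hypothetical sketch of a user-selectable model backend.
# The provider registry and its entries are assumptions for
# illustration, not a real Apple or Meta API.
from typing import Callable, Dict

# Each backend is modeled as a simple prompt -> completion callable.
PROVIDERS: Dict[str, Callable[[str], str]] = {
    "openai": lambda prompt: f"[GPT] {prompt}",
    "meta": lambda prompt: f"[Llama 3] {prompt}",
}

def complete(prompt: str, provider: str = "openai") -> str:
    """Route a request to the user's currently selected model."""
    try:
        backend = PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")
    return backend(prompt)

print(complete("Rewrite this email politely", provider="meta"))
```

The design choice worth noting is that the selection logic lives entirely on the routing side, so adding another provider (Google, Anthropic, and so on) only extends the registry without touching callers.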

Lian Jye Su, Chief Analyst of Applied Intelligence at Omdia, emphasizes that this partnership could significantly enhance the user experience for Apple users utilizing Meta's applications on Apple hardware. Current reports indicate that discussions are centered specifically on Apple Intelligence; however, Su raises questions about the possibility of this collaboration extending to mixed-reality devices, as both companies are competitors in this arena. Su points out that “Meta has been investing in spatial computing and embodied AI, which could also benefit Apple's premium XR products.”

Although the Vision Pro headset is powered by Apple's M2 chip, the first iteration of Apple Intelligence will not be available on it. Key candidates for integration into Apple products include Llama 3 for language processing and Emu for image creation. Meta is also exploring Chameleon, a model that can process both text and visuals, along with JASCO for music generation, although these models are still in the research phase. Most likely, the openly licensed Llama series of foundation models will lead the way in this integration. However, even the smallest current version, Llama 3 8B, may exceed the processing capabilities of some Apple devices.
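To see why an 8-billion-parameter model strains consumer hardware, a back-of-envelope memory estimate helps. The parameter count (~8.03B for Llama 3 8B) is public; the 1.2x overhead factor for activations and KV cache below is an illustrative assumption, not a measurement.

```python
# Rough RAM estimate for holding LLM weights on-device.
# The 1.2x overhead for activations/KV cache is an assumed
# fudge factor, not a benchmarked figure.
def model_memory_gb(n_params: float, bytes_per_param: float,
                    overhead: float = 1.2) -> float:
    """Approximate memory (GB) to run a model at a given precision."""
    return n_params * bytes_per_param * overhead / 1e9

PARAMS = 8.03e9  # Llama 3 8B

for label, bytes_pp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{model_memory_gb(PARAMS, bytes_pp):.1f} GB")
```

Even at aggressive 4-bit quantization the weights alone approach 5 GB, which is why a purpose-built lightweight variant, rather than the stock 8B model, is the more plausible on-device path.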

In contrast to conventional generative AI experiences that rely heavily on cloud computing, Apple aims to run these features directly on devices. To facilitate this, Meta might create a lightweight version suitable for on-device operation. Meanwhile, researchers at Meta are moving in the opposite direction, developing a 400-billion-parameter version of the model.

Alexander Harrowell, Omdia's Principal Analyst for Advanced Computing, mentions that if Apple opts to utilize any Llama models, they would need to negotiate usage agreements similar to those required by major hyperscalers like Microsoft, Amazon, and Google. Although Apple does not fall into the hyperscale category, it would likely have to secure permissions for any monetization related to these models.

In addition to discussions with Meta, Apple is also in talks with Google, Anthropic (the developer behind Claude), and Perplexity for the integration of their generative AI technologies into Apple Intelligence. This multi-partner strategy is expected to mitigate reliance on any single AI provider, particularly important given the recent outages experienced by ChatGPT. As noted by OpenAI CEO Sam Altman, the company faces increasing demands for robust infrastructure to manage rising workloads effectively.

By forging connections with multiple AI firms, Apple is positioning itself to diversify its generative AI capabilities, allowing for improved resilience and performance for its users.
