When OpenAI introduced its ChatGPT desktop app for macOS at its Spring Update event, many questioned why Windows didn’t get priority, especially given Microsoft’s role as OpenAI’s largest backer.
The answer became clearer after Microsoft announced its new Copilot+ PC at the Build conference, showcasing robust features seamlessly integrated into Windows that utilize AI models both on the device and in the cloud. By comparison, the ChatGPT desktop app for Mac appears limited.
AI Integration: Apple vs. Microsoft
Both Apple and Microsoft are incorporating AI into their applications, but their strategies differ. As it stands, Microsoft appears to be leading the charge.
On-Device AI Strategy
Apple has consistently aimed to ship polished products equipped with cutting-edge hardware and ample headroom for future enhancements. This is evident in devices like the Vision Pro, the M4 iPad Pro, and the latest MacBooks and iPhones, which carry more memory and compute power than most users currently need.
Apple tends to roll out operating system updates gradually, capitalizing on that hardware headroom over time. For instance, when the M1 chip launched, running large language models (LLMs) on it was not a selling point; today, Apple’s machine-learning research team maintains MLX, a library designed for running and optimizing models on Apple silicon.
However, the shortcoming of Apple’s strategy is that on-device generative AI has yet to mature fully. Despite significant advancements over the past year, on-device models still lack the reliability needed to perform multiple tasks autonomously. This creates a gap that the ChatGPT app for macOS fills until Apple develops capabilities for its own cloud-based models or achieves a level where on-device AI suffices independently. Currently, this results in a disjointed array of AI tools that fail to integrate smoothly.
Seamless AI Approach
In contrast, Microsoft’s approach involves delivering cutting-edge AI technologies and integrating them as closely to the user as possible. Its partnership with OpenAI allows for incorporating advanced models into Microsoft products, while also diversifying its offerings by supporting open models such as Llama and Mistral. Additionally, Microsoft has introduced small language models (SLMs) like Phi and Orca.
When asked whether Microsoft had become overly reliant on OpenAI, CEO Satya Nadella emphasized the company’s ability to determine its own future: “Our products are not about one model. We prioritize having the leading frontier model, which is GPT-4 today. However, we also utilize Llama and Phi, among others, ensuring diversity in our AI offerings while deeply partnering with OpenAI.”
To further mitigate reliance on any single model, Microsoft established the Copilot brand, providing users a consistent interface across Windows while the AI assistant selects the optimal model for each task.
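Conceptually, this "one interface, many models" design can be pictured as a simple router: the user-facing assistant stays fixed while each request is dispatched to whichever backend model suits the task. The sketch below is a hypothetical illustration; the model names and routing rules are assumptions for clarity, not Microsoft’s actual implementation.

```python
# Hypothetical sketch of a "one interface, many models" router.
# Model names and routing rules are illustrative assumptions only.

def pick_model(task: str, on_device_ok: bool = True) -> str:
    """Choose a backend model for a task; the user-facing interface stays the same."""
    if on_device_ok and task in {"summarize", "autocomplete"}:
        return "phi-3-mini"      # small language model, cheap enough to run on-device
    if task == "code":
        return "gpt-4"           # frontier cloud model for the hardest tasks
    return "llama-3-70b"         # open-weights cloud model as a general fallback

def assistant(prompt: str, task: str, on_device_ok: bool = True) -> str:
    model = pick_model(task, on_device_ok)
    # A real system would call the chosen model here; we just report the routing.
    return f"[{model}] {prompt}"

print(assistant("Fix this function", task="code"))  # → [gpt-4] Fix this function
```

The point of the pattern is that the backend can be swapped (a new frontier model, a better on-device SLM) without the user-visible interface changing at all.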
This extensive array of AI capabilities was showcased at the Microsoft Build conference, featuring image creation, live captions, productivity tools, and even the controversial Recall feature. These tools function in various configurations, utilizing both on-device and cloud-based resources.
As Microsoft continues to enhance its hardware, particularly with advanced ARM chips for laptops, expect on-device LLMs to grow in efficiency, with backend models evolving without disrupting user experience.
A Shift in the Landscape?
Following the ChatGPT app announcement, some mocked Microsoft for investing $10 billion in OpenAI merely to end up with a macOS app. However, in light of Microsoft’s Build announcements, ChatGPT looks more like a strategic foothold within the Apple ecosystem. Since ChatGPT runs on Azure, deeper integration into macOS and iOS will extend Microsoft’s presence into Apple’s user experience. Satya appears to have won this round, but the competition continues. We await Apple’s response at WWDC in June.