This week, Apple unveiled its highly anticipated approach to generative artificial intelligence (AI). Soon, Apple device users will benefit from AI for tasks like rewriting and summarizing emails, managing calendars, and interacting more dynamically with Siri. However, Apple's strategy intentionally steers clear of creating an all-knowing superintelligence.
At the Worldwide Developers Conference (WWDC) on June 10, Craig Federighi, Apple's Senior Vice President of Software Engineering, highlighted the company's cautious stance on generative AI, stating, "We won't let this teenager fly the plane." The caution is justified: generative AI can produce unpredictable and inaccurate responses, a real risk for major tech firms. Earlier this year, Microsoft's Copilot chatbot delivered bizarre responses that drew intense scrutiny, and Google's AI Overviews search feature suggested, among other things, putting glue on pizza, triggering widespread backlash. Apple aims to sidestep such pitfalls to protect its brand image.
Apple has long positioned itself as a responsible tech leader focused on privacy and user safety. CEO Tim Cook pointedly criticized Mark Zuckerberg during the Cambridge Analytica scandal, reinforcing Apple's commitment to protecting user data. The company asserts that it does not store personal data or use it to train its AI models, and the version of its AI showcased at the event was designed to avoid generating hyper-realistic images, addressing concerns about deepfakes.
As of now, Apple has not released an AI "agent" capable of autonomously managing user tasks—a feature that other companies prioritize, albeit with significant risks. Even Apple’s collaboration with OpenAI is carefully managed to minimize potential errors; Siri will only query ChatGPT with explicit user consent. Moreover, Apple's partnership with OpenAI is not exclusive and may extend to other AI chatbot developers.
Professor Ethan Mollick of the Wharton School described Apple's approach as making AI "very ordinary." OpenAI, by contrast, exposes users to the full range of its models' capabilities and leads with their strengths. Apple intends to deliver personalized recommendations without collecting or storing personal data, relying on on-device processing that eliminates the need to send information off the device.
For more complex tasks that require server-side processing, Apple plans to use a new "Private Cloud Compute" model that keeps user data encrypted. The company has a solid track record in encryption and says it will allow external audits to validate the system's security. Still, Apple's privacy measures are not immune to criticism. Elon Musk, who runs the rival AI startup xAI and has clashed with OpenAI, warned of privacy risks if ChatGPT were integrated into Apple's systems, raising the prospect of potential breaches.
Musk's comments, colored though they are by competitive interest, highlight a real concern: Apple devices hold deeply personal information, which raises the bar for strong privacy safeguards. Even with Apple's cautious strategy, missteps in AI development are inevitable. Cook admitted in a Washington Post interview that he cannot guarantee Apple's AI will never fabricate information. "I know there is a potential for a range of alarming issues," he said, underscoring the company's commitment to deliberate, thoughtful progress in the AI landscape.