Tomorrow, February 2, Apple will start delivering its highly anticipated mixed reality (MR) headset, the Vision Pro. I believe this innovative product will not only impress consumers but also reinvigorate the technology industry.
However, if I were a developer creating an app for the Vision Pro, I would have to navigate Apple's recent language guidelines, which prohibit using terms like "mixed reality headset." Instead, Apple insists on referring to the device as a "spatial computer." While I understand the importance of branding, Apple's attempt to suppress established terms like augmented reality (AR), virtual reality (VR), and MR seems excessive.
Having worked in immersive technology since the emergence of virtual reality, I have witnessed numerous shifts in terminology over the past 30-plus years, and the accumulated churn has often frustrated industry professionals. A notable example is the adoption of "extended reality" about a decade ago; I have always found "extended" vague and prefer "spatial computing" for its clarity.
My concern arises from Apple's drive to eliminate longstanding terminology. During my early career, "virtual reality" represented cutting-edge technology, and I remember conducting VR experiments at NASA, where setups like the one pictured below were a significant inspiration to me.
![NASA photo of a “virtual reality” experience circa 1992]
The kind of human experience shown in this image has been called virtual reality for nearly 40 years. If a developer creates a fully immersive experience on the Vision Pro, why shouldn't they describe it as virtual reality? After all, the VR headset pictured above is now part of the Smithsonian collection, a testament to our history and cultural heritage.
The Vision Pro greatly surpasses the NASA headset in fidelity and capabilities. Its standout feature is the seamless integration of the real world with spatially projected virtual content, resulting in a unified perceptual experience.
To clarify, augmented reality (AR) and mixed reality (MR) refer to different levels of blending the real and virtual worlds. AR typically involves overlaying virtual content onto the real environment, while MR creates a more interactive experience in which virtual elements respond to the physical surroundings. A floating label anchored over a storefront is AR, for example, while a virtual ball that bounces off your real coffee table is MR.
The confusion between AR and MR stems from the evolution of both the technology and its marketing. For years, the term "augmented reality" sufficed, but as simpler systems entered the market, its definition became muddled. Today the two terms serve distinct purposes, helping to define the capabilities of emerging technologies.
My journey in this field began in 1991, when I focused on aligning physical and virtual spaces to create a cohesive perceptual reality. Although "design for perception" was my term, it lacked catchiness. Fortunately, the phrase "augmented reality" emerged soon after, effectively capturing the essence of enhancing reality with virtual content.
The release of Google Glass in 2013 further complicated the AR landscape. Although it was an innovative product, it failed to meet the immersive, spatially registered, and interactive criteria that define true AR. Instead, it blurred the line between AR and smart glasses, diluting the meaning of augmented reality.
As smartphone makers began labeling simple overlays as augmented reality, the term lost its original significance, much to the dismay of industry professionals. This trend likely influenced Microsoft's decision to adopt "mixed reality" for the HoloLens, a headset that genuinely delivered the spatially registered, interactive capabilities the term AR originally described.
Today, we differentiate between AR, MR, and VR primarily based on user experience—not hardware. The distinction emphasizes the interaction and spatial registration of virtual content within real environments.
In this context, the Apple Vision Pro is indeed an MR headset. It merges the real world with interactive virtual content, offering a precise, unified experience. Additionally, it can facilitate simpler AR experiences and fully simulate VR environments.
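For developers, that distinction maps onto how a visionOS app presents its content. Below is a minimal sketch, assuming a SwiftUI-based visionOS app, of a single ImmersiveSpace that can run either with mixed immersion (virtual content composited with passthrough of the real room, the MR case) or fully immersive (the VR case). The app name, the "blendSpace" identifier, and the view names are hypothetical, not anything Apple ships.

```swift
import SwiftUI
import RealityKit

@main
struct BlendDemoApp: App {
    // Start in mixed immersion: virtual content composited with passthrough
    // of the real room (the MR-style experience described above).
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        // A conventional window for controls.
        WindowGroup {
            ControlsView(immersionStyle: $immersionStyle)
        }

        // One immersive scene whose presentation can be switched at runtime.
        ImmersiveSpace(id: "blendSpace") {
            RealityView { content in
                // A simple virtual object about a meter in front of the wearer.
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1))
                sphere.position = [0, 1.2, -1.0]
                content.add(sphere)
            }
        }
        // Allow the space to run as mixed (MR-style) or full (VR-style) immersion.
        .immersionStyle(selection: $immersionStyle, in: .mixed, .full)
    }
}

struct ControlsView: View {
    @Binding var immersionStyle: ImmersionStyle
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        VStack(spacing: 12) {
            Button("Enter the space") {
                Task { _ = await openImmersiveSpace(id: "blendSpace") }
            }
            Button("Blend with the room (MR)") { immersionStyle = .mixed }
            Button("Fully immerse (VR)") { immersionStyle = .full }
        }
        .padding()
    }
}
```

Changing the bound immersion style at runtime moves the same scene between the MR-style and VR-style presentations described above, which is exactly the spectrum a single brand term struggles to capture.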
The Vision Pro is poised to impress consumers with its unparalleled immersive quality. It represents a significant leap forward in spatial computing, featuring a spatial operating system, visionOS, that uses the wearer's gaze, paired with hand gestures, for input.
My recommendation to Apple is to refrain from excessively restricting the language within the field. As someone who recalls the impact of Apple's iconic "1984" ad, which warned against controlling language to manipulate perception, I advocate for allowing developers to reference VR, AR, and MR experiences on the Vision Pro.
Ultimately, language is essential for communication and understanding in the immersive technology sector. Preserving its richness will benefit developers and users alike.
Louis Rosenberg is the founder of Immersion Corp and Unanimous AI and developed the first mixed reality system at the Air Force Research Laboratory. His new book, "Our Next Reality," is available from Hachette.