Introducing mixed reality
Mixed reality is the result of blending the physical world with the digital world. It is the next evolution in human, computer, and environment interaction, and it unlocks possibilities once restricted to our imaginations. It is made possible by advances in computer vision, graphical processing power, display technology, and input systems. The term “mixed reality” was originally introduced in a 1994 paper by Paul Milgram and Fumio Kishino, “A Taxonomy of Mixed Reality Visual Displays.” Since then, the application of mixed reality has expanded beyond displays to include environmental input, spatial sound, and location.
Over the past several decades, the relationship between human input and computer input has been well explored, giving rise to the widely studied discipline of human-computer interaction, or HCI. Human input happens through a variety of means, including keyboards, mice, touch, ink, voice, and even Kinect skeletal tracking.
Advances in sensors and processing are giving rise to computer input from environments. The interaction between computers and environments is creating a new form of interaction based on computer perception; this is why the Windows APIs that capture environmental information are called perception APIs. Environmental input captures things like a person’s position in the world (e.g., head tracking), surfaces and boundaries (e.g., spatial mapping and spatial understanding), lighting, environmental sound, object recognition, and location.
Now, the combination of computer processing, human input and environmental input is opening the door to new opportunities to create true mixed-reality experiences. Movement through the physical world can translate to movement in the digital world. Boundaries in the physical world can influence application experiences such as gameplay. Without environmental input, experiences cannot blend physical and digital realities.
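The idea that movement through the physical world translates to movement in the digital world can be sketched as a coordinate transform: a hologram is anchored at a fixed world position, and the device re-expresses that position relative to the user's tracked head pose every frame. The function and pose values below are a hypothetical toy model (2D position plus yaw), not any actual Windows perception API; real head tracking uses a full six-degree-of-freedom pose.

```python
import math

def world_to_head(hologram_xy, head_xy, head_yaw_rad):
    """Express a world-anchored hologram position in head-relative
    coordinates: translate by the head position, then rotate by the
    inverse of the head's yaw. (Toy 2D sketch of head tracking.)"""
    dx = hologram_xy[0] - head_xy[0]
    dy = hologram_xy[1] - head_xy[1]
    cos_y = math.cos(-head_yaw_rad)
    sin_y = math.sin(-head_yaw_rad)
    return (dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y)

# A hologram anchored 2 m ahead of the world origin stays put:
# when the user walks 1 m toward it, it appears 1 m closer.
print(world_to_head((0.0, 2.0), (0.0, 1.0), 0.0))  # → (0.0, 1.0)
```

Because the hologram's world coordinates never change, walking or turning only changes the head pose fed into the transform, which is exactly how physical movement becomes movement in the digital scene.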
Since mixed reality is the blending of the physical world and the digital world, these two realities define the polar ends of a spectrum known as the “virtuality continuum.” For simplicity, we refer to this as the “mixed-reality spectrum.” On one end of the spectrum is the physical reality in which humans exist. On the other is the corresponding digital reality. Most mobile phones today have little or no environmental-understanding capability, so they can’t offer mixed-reality experiences. Experiences that overlay graphics on video streams of the physical world are augmented reality. Experiences that occlude your view of the physical world to present a fully digital experience are virtual reality. Experiences between these two extremes are mixed reality, including, for example, the projection of holograms onto the physical world, or the representation of real-world objects, like the walls of your living room, in the digital world.
Microsoft is working on a number of mixed-reality initiatives, most notably the Microsoft HoloLens, the first self-contained holographic computer, which enables you to interact with holograms in the world around you. A demonstration of the HoloLens is available here: https://www.microsoft.com/en-us/hololens.