Although it may seem futuristic, the concept of virtual reality has actually been around for a long time.
In 1938, playwright Antonin Artaud wrote a series of essays referring to the illusory nature of characters and objects in the theater as “la réalité virtuelle.” An English translation of his work published in 1958 was called “The Theater and Its Double,” which marked the earliest published use of the term “virtual reality.”
The first multimedia device in the form of an interactive theater experience was invented in 1957, according to the Virtual Reality Society. Known as the Sensorama, the technology consisted of a viewing screen within an enclosed booth that displayed stereoscopic images, audio output, and devices that emitted smells.
In 1968, a head-mounted display attached to a computer was introduced that enabled the wearer to see a virtual world. However, it was extremely heavy and had to be attached to a suspension device.
By the 1980s, virtual reality was being used in projects for NASA as well as in research into new forms of human-computer interaction.
Virtual reality continued to be popular throughout the 1990s, but eventually the public lost interest due to a lack of novel applications and advancements for the technology. Over the past few years, a new wave of interest in virtual reality (VR) has formed.
It started with the design of the first prototype of the Oculus Rift in 2010 followed by next-gen video game developer Valve’s introduction of low-persistence displays in 2013. This technology offered a lag-free experience and smear-free display for consuming VR content.
Today’s technology has pushed VR even further, and researchers continue to make environments more and more lifelike.
One crucial component in making a VR experience more realistic is low-persistence OLED, which has become the technology of choice for viewing devices that help users maintain “presence” in VR. Other important factors in this viewing experience include tracking, low latency, high resolution, and high-quality optics.
A new report from market research firm IDTechEx highlights the innovations underway that are moving the field toward even more realistic visuals and better user experiences.
Currently, most VR experiences occur via head-mounted displays. Companies like Avegant and Magic Leap are pursuing two potential breakthroughs that could enhance usability.
One, called focus-tunable displays, involves projecting virtual content in multiple focal planes, improving the user experience by reducing visual discomfort.
Light field cameras will be crucial to this process. This type of technology—currently made by companies like Lytro—can refocus pictures even after they have been taken, by harnessing a microlens array and a special light field sensor to record the direction from which rays of light enter the camera.
This permits a multidimensional light field to be recorded, which is then passed through special software. Algorithms synthesize the data to simulate what the image would look like if it were focused on a different plane or captured from a different angle.
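The refocusing step described above is commonly implemented as a “shift-and-sum” over the sub-aperture images of the recorded 4D light field. The sketch below is illustrative only—the array layout, the `alpha` parameterization, and the integer-pixel shifts are simplifying assumptions, not any particular camera maker’s actual pipeline:

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-sum synthetic refocusing over a 4D light field.

    light_field: array of shape (U, V, H, W) -- a grid of sub-aperture
      images indexed by angular coordinates (u, v).
    alpha: ratio of the new focal plane depth to the original one;
      alpha = 1.0 reproduces the originally focused image.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture image is shifted in proportion to its
            # angular offset from the center, then all are averaged.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += np.roll(light_field[u, v],
                           (int(round(du)), int(round(dv))),
                           axis=(0, 1))
    return out / (U * V)
```

Varying `alpha` sweeps the synthetic focal plane through the scene, which is how a single exposure can be refocused after the fact.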
Another approach is foveated rendering, which could yield performance optimizations. This strategy relies on eye-tracking technology built into the headset. Essentially, it reduces image quality in the wearer’s peripheral vision, since maximum resolution is needed only at the eye’s fixation point on the retina, a region called the fovea.
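The core idea can be sketched as assigning each pixel a level of detail based on its distance from the tracked gaze point. The ring-based falloff and the parameter names below are illustrative assumptions—production systems derive the falloff from the eye’s measured acuity curve:

```python
import numpy as np

def foveation_levels(width, height, gaze_x, gaze_y,
                     fovea_radius=100, num_levels=3):
    """Assign a detail level to each pixel based on distance from gaze.

    Level 0 = full resolution (inside the foveal region);
    higher levels = progressively coarser rendering in the periphery.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    # Distance of every pixel from the tracked gaze point.
    dist = np.hypot(xs - gaze_x, ys - gaze_y)
    # Each ring of width `fovea_radius` drops one detail level,
    # capped at the coarsest level.
    levels = np.minimum(dist // fovea_radius, num_levels - 1)
    return levels.astype(int)
```

A renderer would shade level-0 pixels at full resolution and shade higher levels from progressively downsampled buffers, saving most of the work in the periphery the viewer cannot resolve anyway.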
To improve functionality, engineers need to tackle several development challenges in the current generation of headsets. Tethering is still necessary, and displays and optics still fall short of the limits of human perception. The weight and ergonomics of the devices themselves also need improvement.
Mastering these issues, as well as refining advancements in visualization and user experience, could lead to new formats for viewing these virtual reality experiences.
One unique design for new VR headsets incorporates the technology into a helmet. DAQRI, a seven-year-old startup with 350 employees and over $130 million in investment, has created a device that integrates augmented reality technology into a protective helmet for use in industrial environments.
Emerson, Siemens, and Hyperloop are among the companies working with DAQRI to try the Smart Helmet in the field.
DAQRI also has a pair of augmented reality smart glasses under development.
The glasses tether the processing power to what the company calls a “compute pack,” a design that reduces the bulk and weight of the headset itself.
Another project in DAQRI’s pipeline is head-up displays for car windshields. The company acquired a UK-based firm called Two Trees Photonics in March 2016.
This firm specialized in building holographic technology based on research that was performed at Cambridge University, according to Forbes. Displays developed at the university employed laser holographic techniques that provided better color, brightness, and contrast compared to other systems.
The IDTechEx report highlights other up-and-coming designs in the headset space.
These include a standalone VR headset called the Alcatel Vision: a pair of goggles attached to a large back pad by flexible plastic arms and powered by a 3,000 mAh battery in the back. This design improves portability and time between recharges, and distributes weight evenly for greater comfort.
It also has a 120-degree field of view, the widest of any headset available.
Another evolving area of development is the fusion of augmented reality and virtual reality into a single all-in-one headset.
Intel introduced a headset called Project Alloy at its August 2016 developer forum, where the company called it “an all-in-one virtual reality solution made from the ground up.”
Intel built this device with its RealSense technology, which essentially lets you turn the physical environment around you into a digital one, reported Wired.
A fully powered computer featuring a Core M processor sits at the center of the device, along with vision processors, fish-eye cameras, and RealSense 200-series depth sensors. Cameras built into the device will assist the inertial measurement unit with motion tracking. The display will use conventional 1080p-per-eye panels running at 90 frames per second.
Sulon, a Canadian company, is working on a similar product in the merged reality space, the Sulon Q.
Its augmented reality capabilities will completely replicate the world around the wearer using what the company calls “real-time machine vision technologies.”