The two main categories of Devices are:
- Augmented Reality Devices, where the Metaverse Experience is superimposed on the Universe Experience.
- Virtual Reality Devices, where the Metaverse information is consumed by a User who is immersed in it.
Future VR headsets are expected to create audio-visual experiences close to real life, improving on today’s bulky and heavy devices, which typically demand a significant personal effort when worn for a long time. They will likely morph into devices of different form factors and functionalities and will be the melting pot of the technologies that will drive adoption of the Metaverse vision.
An AR headset needs most of the technologies required by a VR headset. The differences stem from the following considerations:
- In VR, the human’s view of a Universe Environment is blocked and replaced by the presentation of a Metaverse Environment. The human’s experience is completely mediated by the Metaverse Environment.
- In AR, the human’s experience is based on the Universe Environment in which they reside, augmented with elements drawn from a Metaverse Environment.
- In VR, the light from the screen on which the Metaverse Environment is rendered reaches the eye through a lens. The lens is adjusted based on eye movement using eye-tracking technologies. Visual, sound, and haptic stimuli are used to interact with a Metaverse Environment.
- In AR, the camera signal is analysed using computer vision, mapping, and depth-sensing technologies and transmitted to a Metaverse Environment, which provides appropriate elements relevant to what the human sees (see the sketch after this list).
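A minimal sketch of this per-frame AR flow, under loose assumptions, is given below. The helper names (`estimate_pose`, `estimate_depth`, `query_metaverse`, `composite`) are hypothetical placeholders standing in for the computer-vision, depth-sensing, and Metaverse Environment interfaces mentioned above; they do not belong to any real device or Metaverse API.

```python
# Illustrative per-frame AR loop; all helpers are hypothetical placeholders.

def estimate_pose(camera_frame):
    """Computer vision + mapping: where is the headset in the Universe Environment?"""
    ...

def estimate_depth(camera_frame):
    """Depth sensing: how far away is each visible surface?"""
    ...

def query_metaverse(pose):
    """Ask the Metaverse Environment for the elements relevant to this viewpoint."""
    ...

def composite(camera_frame, overlays, depth):
    """Draw the virtual elements over the real view, respecting occlusions."""
    ...

def ar_frame(camera_frame):
    """Process one camera frame and return the augmented image shown to the User."""
    pose = estimate_pose(camera_frame)
    depth = estimate_depth(camera_frame)
    overlays = query_metaverse(pose)
    return composite(camera_frame, overlays, depth)
```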
The operation of a VR headset can be described by the following steps:
- The User moves their head, e.g., they turn to look at somebody (virtually) sitting next to them.
- The User’s head rotation and movement are tracked by Gyroscope, Accelerometer, and Magnetometer to create a feeling of immersion and presence in the User (see the orientation-tracking sketch after this list).
- The User’s change of location can also be tracked (using both inboard and outboard devices) by following the position of the User’s head, body, and hands.
- The coded information of the movement is sent to the Metaverse Environment.
- The Metaverse Environment generates the scene that the User should see and hear.
- The coded information of the scene is sent back to the User.
- The coded information is rendered with a large Field of View (FoV) matching the capabilities of human vision (>180º) to create a feeling of immersion.
- The rendered scene is displayed, showing each eye an image of the scene viewed from a slightly different angle to create depth perception (see the stereo-rendering sketch after this list).
- The VR headset screen generates photons, and the loudspeakers generate sound waves.
- The photons traverse the HMD lenses, which make it easier for the eyes to accommodate the light from the displays even though they are only a few cm away. Fresnel lenses are used to make the lenses thinner and lighter and the images sharper.
- The User’s retina senses the photons, and the ear senses the sound waves.
- The User’s optic and acoustic nerves send millions of spikes per second to the brain.
- The User becomes aware of the new scene.
- The User activates a haptic device to convert their hand and finger movements into data understood by the Metaverse Environment.
- The User recharges the battery when it runs low.
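As a rough illustration of the head-tracking step above, the sketch below fuses gyroscope and accelerometer readings with a complementary filter to estimate the pitch and roll of the User’s head. It is a deliberately simplified, assumption-laden example (yaw from the magnetometer, drift correction, and quaternion arithmetic are omitted, and axis conventions vary between devices), not the method used by any particular headset.

```python
import math

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One update step of a complementary filter for head orientation.

    pitch, roll : current estimates in radians
    gyro        : (gx, gy, gz) angular rates in rad/s from the gyroscope
    accel       : (ax, ay, az) accelerations in m/s^2 from the accelerometer
    dt          : time since the previous sample in seconds
    alpha       : weight given to the fast but drifting gyroscope path
    """
    gx, gy, _ = gyro
    ax, ay, az = accel

    # Integrate the gyroscope: responsive, but drifts over time.
    pitch_gyro = pitch + gx * dt
    roll_gyro = roll + gy * dt

    # Derive absolute tilt from gravity: noisy, but drift-free.
    pitch_accel = math.atan2(ay, math.sqrt(ax * ax + az * az))
    roll_accel = math.atan2(-ax, az)

    # Blend the two estimates; the result is the orientation that gets
    # coded and sent to the Metaverse Environment.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_accel
    roll = alpha * roll_gyro + (1 - alpha) * roll_accel
    return pitch, roll
```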
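The stereo-rendering step can likewise be illustrated with a small sketch: the two eye cameras are offset from the head position by half of an assumed interpupillary distance (IPD), and each renders its own view of the scene. The 63 mm IPD value and the function name are illustrative assumptions, not parameters of an actual headset.

```python
import numpy as np

def stereo_eye_positions(head_position, head_right_axis, ipd_m=0.063):
    """Return the left- and right-eye camera positions for one rendered frame.

    head_position   : (3,) array, centre of the head in scene coordinates
    head_right_axis : (3,) unit vector pointing to the User's right
    ipd_m           : interpupillary distance in metres (63 mm is a common
                      average, used here purely as an illustrative value)
    """
    head_position = np.asarray(head_position, dtype=float)
    head_right_axis = np.asarray(head_right_axis, dtype=float)
    half_ipd = ipd_m / 2.0
    left_eye = head_position - half_ipd * head_right_axis
    right_eye = head_position + half_ipd * head_right_axis
    return left_eye, right_eye
```

Each eye then renders the same scene from its own position, and the two images are shown to the left and right eyes, producing the slightly different viewing angles that create depth perception.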