
The two main categories of Devices are:

  1. Augmented Reality Devices, where the Metaverse Experience is superimposed on the Universe Experience.
  2. Virtual Reality Devices, where the Metaverse information is consumed by a User who is immersed in it.

Future VR headsets are expected to create audio-visual experiences close to real life, improving on today’s bulky and heavy devices, which typically require a significant personal effort when worn for a long time. They will likely morph into devices of different form factors and functionalities and will be the melting pot of the technologies that will drive adoption of the Metaverse vision.

An AR headset needs most of the technologies required by a VR headset. The differences are caused by the following considerations:

  1. In VR, the human’s view of a Universe Environment is blocked and replaced by the presentation of a Metaverse Environment. The human’s experience is completely mediated by the Metaverse Environment.
  2. In AR, the human’s experience is based on the Universe Environment where they reside and augmented with elements drawn from a Metaverse Environment.
  3. In VR, the light of the screen on which the Metaverse Environment is rendered reaches the eyes through lenses. The lenses are adjusted based on eye movement using eye-tracking technologies. Visual, sound, and haptic stimuli are used to interact with a Metaverse Environment.
  4. In AR, the signal of the camera is analysed using computer vision, mapping, and depth-sensing technologies and transmitted to a Metaverse Environment, which provides appropriate elements relevant to what the human sees (see the sketch after this list).
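To make item 4 more concrete, the following is a minimal sketch of the AR input path. All names used here (ScenePoint, analyse_frame, query_metaverse, renderer.overlay) are illustrative assumptions standing in for the computer-vision, depth-sensing, and Metaverse-access components a real headset would use; they are not part of any specified API.

```python
# Illustrative AR input path: camera frame -> scene understanding -> Metaverse query -> overlay.
# All class and function names are hypothetical placeholders, not a defined interface.

from dataclasses import dataclass

@dataclass
class ScenePoint:
    x: float
    y: float
    depth_m: float   # distance from the camera, estimated by depth sensing
    label: str       # e.g. "table", "wall", produced by computer vision

def analyse_frame(rgb_frame, depth_frame) -> list[ScenePoint]:
    """Hypothetical computer-vision step: detect surfaces/objects and attach depth."""
    # A real device would run object detection, plane finding and mapping here.
    return [ScenePoint(x=0.4, y=0.6, depth_m=1.2, label="table")]

def query_metaverse(points: list[ScenePoint]) -> list[dict]:
    """Ask the Metaverse Environment for elements relevant to what the human sees."""
    # Placeholder: a real implementation would send the mapped scene over the network.
    return [{"asset": "virtual_vase", "anchor": p} for p in points if p.label == "table"]

def ar_frame_step(rgb_frame, depth_frame, renderer):
    points = analyse_frame(rgb_frame, depth_frame)    # mapping + depth sensing
    elements = query_metaverse(points)                # Metaverse Environment responds
    for e in elements:
        renderer.overlay(e["asset"], e["anchor"])     # superimpose on the Universe view
```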

The operation of a VR headset can be described by the following steps:

  1. The User moves their head, e.g., they turn to look at somebody (virtually) sitting next to them.
  2. The User’s head rotation and movement are tracked by gyroscope, accelerometer, and magnetometer to create a feeling of immersion and presence in the User (a sensor-fusion sketch follows this list).
  3. The User’s change of location can also be tracked (using both onboard and external devices) by following the position of the User’s head, body, and hands.
  4. The coded information of the movement is sent to the Metaverse Environment.
  5. The Metaverse Environment generates the scene that the User should see and hear.
  6. The coded information of the scene is sent back to the User.
  7. The coded information is rendered with a large Field of View (FoV) matching the capabilities of human vision (>180°) to create a feeling of immersion.
  8. The rendered scene is displayed by presenting to each eye an image of the scene viewed from a slightly different angle to create depth perception (see the stereo-rendering sketch at the end of this section).
  9. The VR headset screen generates photons, and the loudspeakers generate sound waves.
  10. The photons traverse the HMD lenses, which make it easier for the eyes to accommodate the light from displays that are only a few cm away. Fresnel lenses are used to make the optics thinner and lighter and the images sharper.
  11. The User’s retina senses the photons, and the ear senses the sound waves.
  12. The User’s optic and acoustic nerves send millions of spikes per second to the brain.
  13. The User becomes aware of the new scene.
  14. The User activates a haptic device to convert their hand and finger movements into data understood by the Metaverse Environment.
  15. The User recharges the battery when it is low.
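Steps 2 to 4 depend on fusing the gyroscope, accelerometer, and magnetometer readings into a single estimate of head orientation. The sketch below shows one common approach, a complementary filter; the sensor interfaces, the yaw computation without tilt compensation, and the 0.98 blend factor are assumptions for illustration, not a prescribed implementation.

```python
import math

# Minimal complementary-filter sketch for head-orientation tracking (steps 2-4).
# Gyroscope rates are integrated for responsiveness; the accelerometer (gravity)
# and magnetometer (north) slowly correct the resulting drift.

ALPHA = 0.98  # assumed blend factor: weight given to the integrated gyroscope estimate

def update_orientation(pitch, roll, yaw, gyro, accel, mag, dt):
    """Return (pitch, roll, yaw) in radians after one sensor sample."""
    # 1. Integrate angular rates from the gyroscope (rad/s).
    pitch_g = pitch + gyro[0] * dt
    roll_g  = roll  + gyro[1] * dt
    yaw_g   = yaw   + gyro[2] * dt

    # 2. Absolute pitch/roll from the gravity direction measured by the accelerometer.
    ax, ay, az = accel
    pitch_a = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll_a  = math.atan2(ay, az)

    # 3. Absolute yaw (heading) from the magnetometer; tilt compensation omitted for brevity.
    mx, my, _ = mag
    yaw_m = math.atan2(-my, mx)

    # 4. Blend the fast-but-drifting gyro estimate with the slow-but-stable references.
    pitch = ALPHA * pitch_g + (1 - ALPHA) * pitch_a
    roll  = ALPHA * roll_g  + (1 - ALPHA) * roll_a
    yaw   = ALPHA * yaw_g   + (1 - ALPHA) * yaw_m
    return pitch, roll, yaw
```

The resulting orientation, combined with positional tracking, would then be coded and sent to the Metaverse Environment as in step 4.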

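Steps 7 and 8 amount to rendering the same scene twice from viewpoints separated by the interpupillary distance (IPD), so that each eye receives a slightly different image. The sketch below derives the two view matrices from a head pose; the 63 mm IPD value and the matrix layout are assumptions for illustration.

```python
import numpy as np

# Stereo view setup (steps 7-8): the scene is rendered from two camera positions offset
# by half the interpupillary distance on each side of the head pose, producing the two
# slightly different images that create depth perception.

IPD_M = 0.063  # assumed average interpupillary distance in metres

def eye_view_matrices(head_position, head_rotation):
    """head_position: (3,) array; head_rotation: 3x3 rotation matrix of the head pose."""
    right = head_rotation[:, 0]                    # head-frame x axis points to the right
    views = {}
    for name, sign in (("left", -1.0), ("right", +1.0)):
        eye_pos = head_position + sign * (IPD_M / 2.0) * right
        view = np.eye(4)
        view[:3, :3] = head_rotation.T             # world-to-eye rotation
        view[:3, 3] = -head_rotation.T @ eye_pos   # world-to-eye translation
        views[name] = view
    return views

# Example: an identity head pose at the origin yields two views offset by ±31.5 mm on x.
views = eye_view_matrices(np.zeros(3), np.eye(3))
```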