
The need to process Data can only increase. Here two aspects are analysed:

Basic processing
Computing services

1      Basic processing

For decades, Moore’s law – the doubling, first every 24 and later every 18 months, of the transistor density achievable with the lowest-cost transistors – has been a good, even conservative predictor of the ability to pack processing capability into a silicon chip. Moore’s law was later complemented by Ray Kurzweil’s proposal that computational power – the number of calculations/s – should also consider the layout and clock speed, not just the transistor density.
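
As a rough illustration of these doubling periods, the sketch below (in Python) computes the cumulative density growth over an arbitrary 10-year horizon; the 24- and 18-month periods come from the text, while the horizon and the clock-speed factor are illustrative assumptions.

# Back-of-the-envelope growth under a fixed doubling period (Moore's law).
# The 24- and 18-month periods come from the text; the 10-year horizon and
# the clock-speed factor are illustrative assumptions.

def growth_factor(years: float, doubling_months: float) -> float:
    """Multiplicative growth after `years` when capacity doubles every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for months in (24, 18):
    print(f"Density growth over 10 years at {months}-month doubling: "
          f"x{growth_factor(10, months):.0f}")

# Kurzweil's point: calculations/s also depend on clock speed and layout, not
# only on density, e.g. a 1.5x clock gain on top of 18-month density doubling:
print(f"Compute growth with a 1.5x clock gain: x{1.5 * growth_factor(10, 18):.0f}")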

In the 2000s, the increase in processor performance slowed down substantially compared to the preceding decades. In the roughly 15 years after 1990, processor performance increased by ~50%/year, but by 2018 the annual increase had slowed to just 3.5%. The slowdown was caused by several factors:

  1. Hard to pack small transistors. Adding more transistors to a chip requires more space and more energy, and the physical limit is being reached. This marks the end of “Dennard scaling”, which held that the energy needed per unit of chip area stays roughly constant as transistors shrink, so that smaller transistors could be packed ever more densely.
  2. Difficult to increase clock speed. Increasing clock speed above a certain level requires more energy and produces more heat. CPU clock speed, especially for fan-less laptops, has not increased beyond 3-4 GHz since the first half of the 2000s.
  3. Inefficient to add more cores. To exploit the processing capability of dual-core, quad-core, and even octa-core CPUs, the processing must be well distributed among the cores, or the gain is lost (the sketch after this list quantifies this with Amdahl’s law). Exceptions are specialised tasks like graphics, where different parts of the screen can be updated simultaneously.
  4. Slow growth of supercomputer processing power. Instead of the ~80%/year of a decade ago, supercomputer processing power now grows by only ~40%/year: because individual chips have stopped improving, more chips must be packed into multi-million-core supercomputers, and more energy is required to operate and cool them.
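
The sketch below (Python) illustrates two of the numbers above: the cumulative effect of ~50%/year versus ~3.5%/year growth over 15 years, and Amdahl’s law, which quantifies why adding cores helps only if the work is well distributed (point 3). The 90% parallel fraction and the core counts are illustrative assumptions.

# (a) Cumulative effect of ~50%/year vs ~3.5%/year performance growth over 15 years.
print(f"15 years at 50%/year:  x{1.50 ** 15:.0f}")   # ~x438
print(f"15 years at 3.5%/year: x{1.035 ** 15:.1f}")  # ~x1.7

# (b) Amdahl's law: speed-up from n cores when only a fraction p of the work
# can be distributed among them (point 3 above).
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

for cores in (2, 4, 8):
    print(f"{cores} cores, 90% parallel work: x{amdahl_speedup(0.9, cores):.2f}")
# Even with 8 cores the speed-up is only ~x4.7, far below x8: the gain is lost
# unless the processing is well distributed.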

Possible research avenues that may maintain a sustainable future growth in processing performance are:

  1. Packing transistors in more layers to reduce chip size. This is already done in memory chips, which stack transistors in 128 layers or more, etched in one go. Such chips are well suited to portable/wearable and Internet of Things devices. Energy and heating problems, however, affect dynamic random-access memory (DRAM) because almost every part of the chip is constantly powered.
  2. Using AI to design efficient chip architectures. With AI it is possible to design 3D chips that make better use of the third dimension. AI can generate candidate designs in a process where they compete against one another, with the best designs evolving until an optimal design is found that can no longer be improved (a toy sketch of such a search is given after this list).
  3. Combining chiplets. These components take some of the load off CPUs (the CPUs themselves could be designed as chiplets). The recently established UCIe™ consortium intends to develop a specification defining the chiplet interconnection within a package, enabling an open chiplet ecosystem and interconnect at the package level [27], offering the following advantages:
    1. No need to use the same process node.
    2. Possibility to mix chiplets with different geometry.
    3. Flexibility in adopting 2.5D and 3D technology.
    4. Possibility to connect chiplets with those of other companies.
  4. Using other technologies:
    1. Graphene-based CPUs could achieve up to 1,000 times the speed of silicon chips while consuming 1/100th of the energy, enabling devices with smaller size and greater functionality.
    2. Superconducting computers offer the possibility of making 3D chips that are very dense yet use very little energy, at the cost of cooling to extremely low temperatures.
    3. Quantum computers can handle certain types of tasks far more quickly than traditional computers, solving problems that would otherwise take a very long time.
    4. Optical computers process, store, and communicate using light waves from lasers or incoherent sources. The electronic transistor is replaced by an “optical transistor” [26].
    5. Neuromorphic computers integrate features inspired by neurobiological systems and could provide energy-efficient solutions to AI problems.
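
The toy sketch below (Python) illustrates the kind of competitive, evolutionary search mentioned in point 2: candidate designs are mutated and selected over many generations. The design encoding and the scoring function are invented for illustration and bear no relation to a real chip-design objective.

import random

def score(design):
    # Hypothetical objective: reward density, clock and layer count, but
    # penalise excessive clock speed (a stand-in for power/heat limits).
    density, clock, layers = design
    return density * clock * layers - 0.05 * clock ** 2

def mutate(design):
    # Perturb each parameter slightly; keep values positive.
    return tuple(max(0.1, g + random.gauss(0, 0.1)) for g in design)

# Start from 20 random candidate "designs" (density, clock, layers).
population = [(random.random(), random.random() * 5, random.random() * 3)
              for _ in range(20)]
for generation in range(50):
    population.sort(key=score, reverse=True)   # the best designs "win"
    survivors = population[:5]                 # keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("Best design found:", max(population, key=score))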

2      Computing services

1     Cloud computing

Cloud computing is the delivery of hosted services over the internet. Adoption of this computing paradigm allows devices to outsource data processing and storage to external computers, but it requires balancing several elements:

  1. Powering and cooling costs. In appropriate conditions, these costs are reduced because powering and cooling a small number of large, centralised computers and servers is more efficient than doing so on many individual devices.
  2. Network traffic. Collecting data and distributing processed data usually imply a lot of traffic back and forth between the Devices and the cloud. Unless the bitrate available between Devices and data centres is sufficiently high, cloud computing ceases to be competitive (see the sketch after this list).
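
The sketch below (Python) illustrates the break-even behind point 2: offloading to the cloud pays off only when moving the data is fast enough relative to processing it locally. All data sizes, bitrates, and processing times are invented for illustration.

# Time to offload a task: upload the input data, compute remotely, download the result.
def offload_time_s(data_mbit: float, uplink_mbit_s: float, cloud_compute_s: float,
                   result_mbit: float, downlink_mbit_s: float) -> float:
    return data_mbit / uplink_mbit_s + cloud_compute_s + result_mbit / downlink_mbit_s

local_compute_s = 2.0                                   # processing the task on the Device
slow_link = offload_time_s(800, 50, 0.2, 80, 100)       # 50 Mbit/s uplink
fast_link = offload_time_s(800, 1000, 0.2, 80, 1000)    # 1 Gbit/s uplink

print(f"Local: {local_compute_s:.1f}s  "
      f"Cloud over slow link: {slow_link:.1f}s  "
      f"Cloud over fast link: {fast_link:.1f}s")
# Over the slow link the cloud loses (17.0s > 2.0s); over the fast link it wins (1.1s).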

When more than one geographically separated Device is involved, cloud computing is not just an option but a necessity. Therefore, cloud computing will be one of the computing paradigms supporting the Metaverse vision, because it will enable the level of data storage, processing, and distribution required by Metaverse Instances. Such Instances will likely take different forms: at one end of the spectrum, those of the size of today’s big social networks and, at the other end, many smaller Instances serving specific needs, e.g., those of a company.

Cloud computing can play a role in five areas:

  1. The Metaverse service providers may build private data centres or use managed services. However, most Metaverse Instances will likely be built on public cloud providers.
  2. Pay-as-you-go models for on-demand compute and storage. Public cloud providers can provide distributed points of presence around the world, thus lowering latency.
  3. Artificial intelligence and machine learning. As cloud computing is already offering computing and data storage, it is natural that AI/ML will also be offered.
  4. Metaverse-as-a-Service (MaaS), i.e., fully hosted and managed offerings allowing customers to deploy their own custom Metaverse Environments with minimal effort.
  5. Metaverse services built entirely from scratch, possibly leveraging Metaverse platforms (e.g., Vircadia and Metaverse.Network) as the foundation for Metaverse-as-a-Service offerings.

2     Edge computing

Cloud computing will not be the only computing paradigm relevant to the Metaverse. Performance and availability of Metaverse Environments can be improved by pushing Metaverse hosting and analytics to the edge, so that Devices can access storage and processing power from a close-by location. In the edge computing paradigm, edge data centres play the role of a gateway that pre-processes data before sending it to the cloud, or handles a share of the processing and data storage.
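
A minimal sketch (Python) of the gateway role just described: raw device data is pre-processed (here, simply summarised) at the edge and only a compact record is forwarded to the cloud. The function names and the aggregation are illustrative assumptions, not part of any specification.

from statistics import mean

def preprocess_at_edge(raw_samples: list[float]) -> dict:
    """Reduce a burst of raw sensor samples to a small summary record."""
    return {"count": len(raw_samples),
            "mean": mean(raw_samples),
            "max": max(raw_samples)}

def forward_to_cloud(summary: dict) -> None:
    # Placeholder for the actual upload; a real gateway would batch and compress.
    print("uploading summary:", summary)

raw = [20.1, 20.4, 19.8, 21.0, 20.6]         # e.g. readings from a wearable sensor
forward_to_cloud(preprocess_at_edge(raw))    # far less traffic than sending the raw data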

Irrespective of where the User is located, a Metaverse Environment should be downloaded into a local edge data centre near the User, and the other Users represented by Digital Humans should likewise download the same Environment into their local edge data centres. The copies of the Metaverse Environment must then be synchronised with one another so that the scene the Users perceive is as smooth as possible and the interaction with it is natural.
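
A minimal sketch (Python) of such synchronisation: two edge-hosted copies hold timestamped object states, and a simple last-writer-wins merge keeps the most recent state of each object. Real systems use richer schemes (prediction, interpolation, conflict resolution); the data layout here is an illustrative assumption.

# object id -> (timestamp, position)
State = dict[str, tuple[float, tuple[float, float, float]]]

def merge(local: State, remote: State) -> State:
    """Keep, for every object, the state with the most recent timestamp."""
    merged = dict(local)
    for obj, (ts, pos) in remote.items():
        if obj not in merged or ts > merged[obj][0]:
            merged[obj] = (ts, pos)          # the remote copy is newer: adopt it
    return merged

edge_a = {"avatar_1": (10.0, (0.0, 0.0, 0.0)), "door_3": (8.0, (5.0, 0.0, 1.0))}
edge_b = {"avatar_1": (11.5, (0.4, 0.0, 0.0))}   # holds a more recent avatar position
print(merge(edge_a, edge_b))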

Today’s data centres and networks typically lack the speed, capacity, and low latency needed to enable the immersive experiences envisaged by the Metaverse vision. The next Section will examine the state of the art of telecommunication networks.
