The need to process Data can only increase. Here two aspects are analysed:
- Basic processing
- Computing services
1 Basic processing
For decades, Moore's law – the doubling, every 24 and later every 18 months, of the transistor density achievable with the lowest-cost transistors – has been a good, even conservative, predictor of the ability to pack processing capability into a silicon chip. Moore's law was later complemented by Ray Kurzweil's proposal that computational power – the number of calculations per second – should also take layout and clock speed into account, not just transistor density.
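As a back-of-the-envelope illustration of what such doubling periods imply, the following minimal Python sketch (added here purely for illustration, not part of the original analysis) computes the compound growth factor over a decade:

```python
# Transistor-density growth factor after a given number of years,
# assuming a fixed doubling period (a Moore's-law-style projection).
def growth_factor(years: float, doubling_period_years: float) -> float:
    return 2.0 ** (years / doubling_period_years)

# Ten years of doubling every 24 months vs. every 18 months.
print(f"24-month doubling: x{growth_factor(10, 2.0):.0f}")   # ~x32
print(f"18-month doubling: x{growth_factor(10, 1.5):.0f}")   # ~x101
```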
In the 2000s, the increase in processor performance slowed substantially compared to the preceding decades. In the 15 years after 1990, processor performance increased by ~50% per year, but by 2018 the annual increase had slowed to ~3.5%. The slowdown was caused by several factors:
- Hard to pack more small transistors. Adding more transistors to a chip requires more space and more energy, and the physical limit is being reached. This is the end of “Dennard scaling”, the observation that the energy needed per unit of chip area stays roughly constant as transistors shrink, which is what allowed smaller transistors to be packed ever more densely.
- Difficult to increase clock speed. Raising the clock speed above a certain level requires more energy and produces more heat. CPU clock speeds, especially for fan-less laptops, have not risen beyond 3-4 GHz since the first half of the 2000s.
- Inefficient to add more cores. To exploit the processing capability of dual-core, quad-core, and even octo-core CPUs, the processing must be well distributed among the cores, or the gain is lost (see the sketch after this list). Exceptions are specialised tasks such as graphics, where different parts of the screen can be updated simultaneously.
- Slow growth of supercomputer processing power. Supercomputer processing power now grows by only ~40% per year, compared with ~80% per year a decade ago, because individual chips have stopped improving and ever more chips must be packed into multi-million-core machines. More energy is also required to operate and cool supercomputers.
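To illustrate why poorly distributed processing wastes the extra cores (the point made above), the following minimal Python sketch applies Amdahl's law to a hypothetical workload; the law is used here only as an illustrative model and is not cited in this report:

```python
# Amdahl's law: speedup achievable with n cores when a fraction p of the
# work can be parallelised and the remaining (1 - p) stays sequential.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Hypothetical workload in which 80% of the processing can be distributed.
for cores in (2, 4, 8, 64):
    print(f"{cores:>2} cores -> speedup {amdahl_speedup(0.8, cores):.2f}x")
# Even with 64 cores the speedup stays below 5x because the 20% sequential
# part dominates; graphics-style tasks with p close to 1 scale far better.
```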
Possible research avenues that may sustain future growth in processing performance are:
- Packing transistors in more layers to reduce chip size. This is already done in memory chips, which stack transistors in 128 or more layers etched in one go. Such chips are well suited to portable/wearable devices and Internet of Things devices. Energy and heating problems affect dynamic random-access memory (DRAM) because almost every part of the chip is constantly powered.
- Using AI to design efficient chip architectures. With AI it is possible to design 3D chips that make better use of the third dimension. AI can generate candidate designs in a process where they compete against one another, and the best designs evolve until an optimal design is found that can no longer be improved (a toy sketch of such an evolutionary search is given after this list).
- Combining chiplets. These components take some of the load off CPUs (the CPUs themselves could be designed as chiplets). The recently established UCIe™ initiative intends to develop a specification defining chiplet interconnection within a package, enabling an open chiplet ecosystem and interconnect at the package level [27], offering the following advantages:
- No need to use the same process node.
- Possibility to mix chiplets with different geometry.
- Flexibility in adopting 2.5D and 3D technology.
- Possibility to connect chiplets with those of other companies.
- Using other technologies:
- Graphene-based CPUs could run up to 1,000 times faster than silicon chips while consuming about 1/100 of the energy, enabling devices of smaller size and greater functionality.
- Superconducting computers offer the possibility of making 3D chips that are very dense yet use very little energy, at the cost of cooling to extremely low temperatures.
- Quantum computers can handle certain types of tasks far more quickly than classical computers, solving problems that current machines would take an impractically long time to solve.
- Optical computers process, store, and communicate data using light waves from lasers or incoherent sources. The electronic transistor is replaced by an “optical transistor” [26].
- Neuromorphic computers integrate features inspired by neurobiological systems and could provide energy-efficient solutions to AI problems.
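As a toy illustration of the competitive, evolutionary design process mentioned in the AI-based design avenue above, the sketch below evolves hypothetical design parameter sets against an invented scoring function; it is not any vendor's actual design tool:

```python
import random

# Toy evolutionary search: a candidate "design" is a (density, clock, power)
# tuple and the fitness function is a stand-in for a real layout evaluation.
def fitness(design):
    density, clock, power = design      # purely illustrative metrics
    return density * clock / power      # reward dense, fast, low-power layouts

def evolve(generations: int = 50, population_size: int = 20):
    population = [(random.uniform(0.1, 1.0),    # density
                   random.uniform(1.0, 5.0),    # clock
                   random.uniform(0.5, 2.0))    # power
                  for _ in range(population_size)]
    for _ in range(generations):
        # Designs compete: keep the best half, then mutate them to refill the pool.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            tuple(max(0.01, x + random.gauss(0, 0.05)) for x in parent)
            for parent in survivors
        ]
    return max(population, key=fitness)

best = evolve()
print("best design (density, clock, power):", best)
```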
2 Computing services
1 Cloud computing
Cloud computing is the delivery of hosted services over the internet. Adopting this computing paradigm allows devices to outsource data processing and storage to external computers, but whether it pays off depends on balancing several elements:
- Powering and cooling costs. In appropriate conditions these are reduced, because running a small number of large, centralised computers and servers is more efficient than performing the same processing on many individual devices.
- Network traffic. Collecting data and distributing processed data usually implies a lot of traffic back and forth between the Devices and the cloud. Unless the bitrate available between Devices and data centres is sufficiently high, cloud computing ceases to be competitive (a rough break-even sketch follows this list).
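As a rough, entirely hypothetical illustration of that trade-off (all figures below are invented for the example), the sketch compares the time to process data locally with the time to upload it, process it in the cloud, and download the result:

```python
# Rough cloud-offloading break-even check (all parameters are hypothetical).
def local_time(data_mb: float, local_mb_per_s: float) -> float:
    """Seconds to process the data on the device itself."""
    return data_mb / local_mb_per_s

def cloud_time(data_mb: float, result_mb: float, uplink_mbit_s: float,
               downlink_mbit_s: float, cloud_mb_per_s: float, rtt_s: float) -> float:
    """Seconds to upload the data, process it remotely, and download the result."""
    upload = data_mb * 8 / uplink_mbit_s
    download = result_mb * 8 / downlink_mbit_s
    return upload + data_mb / cloud_mb_per_s + download + rtt_s

data, result = 100.0, 10.0   # MB sent to the cloud, MB returned
print("local:", round(local_time(data, 5.0), 1), "s")
print("cloud:", round(cloud_time(data, result, 50.0, 200.0, 100.0, 0.05), 1), "s")
# With a fast enough link the cloud wins; on a slow uplink the transfer time
# alone can exceed the local processing time.
```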
When more than one geographically separated Device is involved, cloud computing is not an option but a necessity. Therefore, cloud computing will be one of the computing paradigms supporting the Metaverse vision, because it will enable the level of data storage, processing and distribution required by Metaverse Instances. Such Instances will likely take different forms: at one end of the spectrum, those of the size of today’s big social networks; at the other end, many smaller Instances serving specific needs, e.g., those of a company.
Cloud computing can play a role in five areas:
- Metaverse service providers may build private data centres or use managed services. However, most Metaverse Instances will likely be built on public cloud providers.
- Pay-as-you-go models for on-demand compute and storage. Public cloud providers can offer distributed points of presence around the world, thus lowering latency.
- Artificial intelligence and machine learning. As cloud computing already offers computing and data storage, it is natural that AI/ML services will also be offered.
- Metaverse-as-a-Service (MaaS), i.e., fully hosted and managed offerings allowing customers to deploy their own custom Metaverse Environments with minimal effort.
- Metaverse services built entirely from scratch, possibly leveraging Metaverse platforms (e.g., Vircadia and Metaverse.Network) as the foundation for Metaverse-as-a-Service offerings.
2 Edge computing
Cloud computing will not be the only computing paradigm relevant to the Metaverse. The performance and availability of Metaverse Environments can be improved by pushing Metaverse hosting and analytics to the edge, so that devices can access storage and processing power from a nearby location. In the edge computing paradigm, edge data centres act as gateways that pre-process data before sending it to the cloud, or take on a share of the processing and data storage.
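As an illustrative, hypothetical example of the gateway role described above, an edge node might aggregate raw device data and forward only compact summaries to the cloud, cutting upstream traffic:

```python
from statistics import mean

# Hypothetical edge gateway: summarise raw sensor readings locally and
# forward only the summaries to the cloud.
def summarise(readings: list[float], window: int = 10) -> list[dict]:
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({"mean": mean(chunk), "min": min(chunk), "max": max(chunk)})
    return summaries

raw = [20.0 + 0.1 * i for i in range(100)]   # 100 raw samples captured at the edge
payload = summarise(raw)                     # 10 summary records sent to the cloud
print(len(raw), "->", len(payload), "records forwarded")
```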
Irrespective of where a User is located, a Metaverse Environment should be downloaded into a local edge data centre near that User, and the other Users represented by Digital Humans should likewise download the same Environment into their own local edge data centres. These copies of the Metaverse Environment must then be synchronised with each other so that the scene each User perceives is as smooth as possible and the interaction with it feels natural (a minimal synchronisation sketch is given below).
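What such synchronisation could look like is sketched below in a minimal, hypothetical form (a last-writer-wins merge of per-object updates); the report does not prescribe any specific protocol:

```python
import time

# Each edge replica keeps the latest known state of every object in the
# shared Environment, tagged with the time of its last update.
class EdgeReplica:
    def __init__(self, name: str):
        self.name = name
        self.objects: dict[str, tuple[float, dict]] = {}   # id -> (timestamp, state)

    def local_update(self, obj_id: str, state: dict) -> None:
        self.objects[obj_id] = (time.time(), state)

    def merge(self, other: "EdgeReplica") -> None:
        # Last-writer-wins: keep whichever replica updated the object more recently.
        for obj_id, (ts, state) in other.objects.items():
            if obj_id not in self.objects or ts > self.objects[obj_id][0]:
                self.objects[obj_id] = (ts, state)

# Two edge data centres hosting the same Environment for different Users.
paris, tokyo = EdgeReplica("paris-edge"), EdgeReplica("tokyo-edge")
paris.local_update("avatar-1", {"x": 0.0, "y": 1.5})
tokyo.local_update("avatar-2", {"x": 4.2, "y": 0.0})

# Periodic pairwise synchronisation keeps the perceived scene consistent.
paris.merge(tokyo)
tokyo.merge(paris)
print(sorted(paris.objects) == sorted(tokyo.objects))   # True
```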
Today’s data centres and networks typically lack the speed, capacity, and low latency needed to enable the immersive experiences envisaged by the Metaverse vision. The next Section will examine the state of the art of telecommunication networks.