Moving Picture, Audio and Data Coding
by Artificial Intelligence

The MPAI metaverse standardisation proposal

1. Introduction

The metaverse is expected to create new jobs, opportunities, and experiences with transformational impacts on virtually all sectors of human interaction. Therefore, metaverse standards are important, but:

  1. There is no common agreement on what a “metaverse” is or should be.
  2. There are many potential users of the metaverse.
  3. There are many successful independent implementations of “metaverse”.
  4. Some important enabling technologies may be years away.

We need to agree on how to tackle metaverse standardisation, without necessarily:

  1. Reaching an agreement on what a metaverse is.
  2. Disenfranchising potential users.
  3. Alienating existing initiatives.
  4. Dealing with technologies (for now).

2. The MPAI proposal

MPAI – Moving Picture, Audio, and Data Coding by Artificial Intelligence – believes that developing a (set of) metaverse standards is a very challenging goal. Metaverse standardisation requires that we:

  1. Start small and grow.
  2. Be creative and devise a new working method.
  3. Test the method.
  4. Gather confidence in the method.
  5. Gather a wide consensus on the method.

Then, we could develop a (set of) metaverse standards.

MPAI has developed an initial roadmap:

  1. Build a metaverse terminology.
  2. Agree on basic assumptions.
  3. Collect metaverse functionalities.
  4. Develop functionality profiles.
  5. Develop a metaverse architecture.
  6. Develop functional requirements of the metaverse architecture data types.
  7. Develop a Table of Contents of Common Metaverse Specifications.
  8. Map technologies to the ToC of the Common Metaverse Specifications.

Step #1 – develop a common terminology

We need no convincing of the importance of this step, as many are developing metaverse terminologies. Unfortunately, there is currently no attempt to converge on an industry-wide terminology.

The terminology should:

  • Have an agreed scope.
  • Be technology- and business-agnostic.
  • Not use terms as defined by one industry if they are used by more than one industry.

The terminology is:

  • Intimately connected with the standard that will use it.
  • Functional to the following milestones of the roadmap.

MPAI has already defined some 150 classified metaverse terms and encourages the convergence of existing terminology initiatives. The MPAI terminology can be found here.

Step #2 – Agree on basic assumptions

Assumptions are needed for a multi-stakeholder project because designing a roadmap depends on the goal and on the methods used to reach it.

MPAI has laid down 16 assumptions which it proposes for discussion. The first three assumptions are presented here; all the assumptions can be found here.

  • Assumption #1: As there is no agreement on what a metaverse is, let’s accept all legitimate requests for “metaverse” functionalities.
    • Note: an accepted functionality does not imply that a Metaverse Instance shall support it.
  • Assumption #2: Common Metaverse Specifications (CMS) will be developed.
    • Note: they will provide the technologies supporting the identified Functionalities.
  • Assumption #3: CMS Technologies will be grouped into Technology Profiles in response to industry needs.
    • Note: a profile shall maximise the number of technologies supported by specific groups of industries.

The notion of profile is well known and widely used in digital media standardisation, where it is defined as:

A set of one or more base standards, and, if applicable, their chosen classes, subsets, options and parameters, necessary for accomplishing a particular function.

Step #3 – Collect metaverse functionalities

The number of industries potentially interested in deploying a metaverse is very large; MPAI has explored 18 of them. See here. A metaverse implementation is also likely to use external service providers, for which interfaces should be defined. See here.

MPAI has collected more than 150 functionalities, organised as Areas – Subareas – Titles. See here.

This task is not complete; it has only begun. Collecting metaverse functionalities should be a continuous activity.

Step #4 – Which profiles?

The traditional notion of profile is not currently implementable because some key technologies are not yet available and it is not clear which technologies, existing or otherwise, will eventually be selected.

MPAI proposes to introduce a new type of profile – functionality profile, characterised by the functionalities offered, not by the technologies implementing them. By dealing only with functionalities and not technologies, profile definition is not “contaminated” by technology considerations. MPAI is in the process of developing:

Technical Report – MPAI Metaverse Model (MPAI-MMM) – Functionality Profiles.

It is expected that the Technical Report will be published for Community Comments on the 26th of March and finally adopted on the 19th of April 2023. It will contain the following table of contents:

  1. Scalable Metaverse Operational Model.
  2. Actions (what you do in the metaverse):
    1. Purpose – what the Action is for.
    2. Payload – data to the Metaverse.
    3. Response – data from the Metaverse.
  3. Items (on what you do Actions):
    1. Purpose – what the Item is for.
    2. Data – functional requirements.
    3. Metadata – functional requirements.
  4. Example functionality profiles.

The Technical Report will neither contain nor make reference to technologies.
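To illustrate the structure above, here is a minimal sketch, in Python, of one possible way of representing the operational model and a functionality profile. All names, fields, and the example functionality identifiers are hypothetical illustrations; the Technical Report deliberately refrains from prescribing any data format or technology.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """Something a User does in the Metaverse (hypothetical representation)."""
    name: str        # e.g. an illustrative "MoveItem"
    purpose: str     # what the Action is for
    payload: dict    # data sent to the Metaverse
    response: dict   # data returned by the Metaverse

@dataclass
class Item:
    """Something Actions are performed on (hypothetical representation)."""
    name: str
    purpose: str     # what the Item is for
    data: dict       # functional requirements of the data
    metadata: dict   # functional requirements of the metadata

@dataclass
class FunctionalityProfile:
    """A functionality profile: a named set of functionalities, with no reference to technologies."""
    name: str
    functionalities: frozenset

    def is_supported_by(self, instance_functionalities: set) -> bool:
        # A Metaverse Instance matches the profile if it offers every listed functionality.
        return self.functionalities <= instance_functionalities

# Illustrative check with made-up functionality names:
baseline = FunctionalityProfile("Baseline", frozenset({"Persistence", "Import Item", "Export Item"}))
print(baseline.is_supported_by({"Persistence", "Import Item", "Export Item", "Economy Support"}))  # True
```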

3. The next steps of the MPAI proposal

The Technical Report will demonstrate that it is possible to develop metaverse functionality profiles (and levels) that do not make reference to technologies, only to functionalities.

Step #5 – Develop a metaverse architecture.

The goal is to specify a Metaverse Architecture, including the main functional blocks and the data types exchanged between them.

Step #6 – Develop functional requirements of the metaverse architecture data types.

The goal is to develop the functional requirements of the data types exchanged between functional blocks of the metaverse architecture.

Step #7 – Develop the Table of Contents of the Common Metaverse Specifications.

The goal is to produce an initial Table of Contents (ToC) of Common Metaverse Specifications to have a clear understanding of which Technologies are needed for which purpose in which parts of the metaverse architecture to achieve interoperability.

Step #8 – Map technologies to the ToC of the Common Metaverse Specifications.

MPAI intends to map its relevant technologies and see how they fit in the Common Metaverse Specification architecture. Other SDOs are invited to join the effort.

4. Conclusions

Of course, step #8 will not yet provide the metaverse specifications, but a tested method to produce them. MPAI envisages reaching step #8 in December 2023. This is a good price to pay before engaging in such a perilous project.


MPAI publishes Version 2 Audio Enhancement for Community Comments and the Neural Network Watermarking Reference Software

Geneva, Switzerland – 22 February 2023. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 29th General Assembly (MPAI-29), approving a new version of its Context-based Audio Enhancement (MPAI-CAE) Technical Specification, posted for Community Comments, and the Neural Network Watermarking (MPAI-NNW) Reference Software.

Version 2 of the Context-based Audio Enhancement (MPAI-CAE) Technical Specification, besides supporting the functionalities of Version 1, specifies new technologies to enable a device to describe an audio scene in terms of audio objects and their directions. MPAI uses this Technical Specification to enable human interaction with autonomous vehicles, avatar-based videoconference and metaverse applications. The document is posted with a request for Community Comments to be sent to secretariat@mpai.community until the 20th of March 2023.

The Reference Software of Neural Network Watermarking (MPAI-NNW) provides the means, including the software, to evaluate the performance of neural network-based watermarking solutions in terms of imperceptibility, robustness, and computational cost. The current version of the software is specific to image classification but can be extended to other application areas.
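To give an idea of what such an evaluation involves, here is a minimal, hypothetical sketch in Python: the two models, the watermark-retrieval function, the attack, and the test loader are placeholders supplied by the user, and the actual MPAI-NNW Reference Software defines its own interfaces.

```python
import time
import torch

def evaluate_watermarking(original_model, watermarked_model, retrieve_watermark,
                          test_loader, expected_watermark, attack=None):
    """Hypothetical evaluation of a neural-network watermarking solution.

    Imperceptibility: fraction of test inputs on which the watermarked model's
    predictions differ from the original model's (lower is better).
    Robustness: whether the watermark is still retrieved after the watermarked
    model has been modified by `attack` (e.g. pruning or fine-tuning).
    Computational cost: wall-clock time of watermark retrieval.
    """
    changed, total = 0, 0
    with torch.no_grad():
        for images, _ in test_loader:
            changed += (original_model(images).argmax(dim=1)
                        != watermarked_model(images).argmax(dim=1)).sum().item()
            total += images.shape[0]

    attacked = attack(watermarked_model) if attack is not None else watermarked_model
    start = time.perf_counter()
    retrieved = retrieve_watermark(attacked)
    retrieval_seconds = time.perf_counter() - start

    return {
        "prediction_change_rate": changed / total,                # imperceptibility proxy
        "watermark_recovered": retrieved == expected_watermark,   # robustness
        "retrieval_seconds": retrieval_seconds,                   # computational cost
    }
```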

MPAI is continuing its work plan including the development of the following Technical Specifications:

  1. The AI Framework (MPAI-AIF) V2 Technical Specification will enable an implementer to establish a secure AIF environment to execute AI Workflows (AIW) composed of AI Modules (AIM).
  2. The Avatar Representation and Animation (MPAI-ARA) V1 Technical Specification will support creation and animation of interoperable human-like avatar models expressing a Personal Status.
  3. The Multimodal Conversation (MPAI-MMC) V2 Technical Specification will generalise the notion of Emotion by adding Cognitive State and Social Attitude and will specify a new data type called Personal Status.

The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:

  1. AI Health (MPAI-AIH). Targets an architecture where smartphones store users’ health data processed using AI and AI Models are updated using Federated Learning.
  2. Connected Autonomous Vehicles (MPAI-CAV). Targets the Human-CAV Interaction Environment Sensing, Autonomous Motion, and Motion Actuation subsystems implemented as AI Workflows.
  3. End-to-End Video Coding (MPAI-EEV). Extends the video coding frontiers using AI-based End-to-End Video coding.
  4. AI-Enhanced Video Coding (MPAI-EVC). Improves existing video coding with AI tools for short-to-medium term applications.
  5. Server-based Predictive Multiplayer Gaming (MPAI-SPG). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
  6. XR Venues (MPAI-XRV). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases where venues may be both real and virtual.

Joining MPAI remains a good opportunity for legal entities that support the MPAI mission and are able to contribute to the development of standards for the efficient use of data.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.



MPAI Member Audio Innova awarded the Cannes Neurons 2023 Palm d’Or Award

Cannes, France – 9 February 2023. The Cannes Neurons jury, composed of 12 world-renowned AI experts, has awarded MPAI Member Audio Innova srl the Cannes Neurons Award 2023 Palm d’Or for the best creative AI project. The awarded Audio Innova project, audio archive preservation based on the MPAI Audio Recording Preservation (ARP) standard, was coordinated by Sergio Canazza and developed by Nadir Dalla Pozza and Niccolò Pretto. The project was selected among the 6 finalists shortlisted from many other candidates to honour the most innovative and impactful Artificial Intelligence (AI) projects.

The Cannes Neurons Award is the prestigious prize that is part of WAIFC, the world’s #1 event for AI in business and society, which takes place every year in Cannes. To know more, see here.

In the figure: the Audio Innova team with the Palm d’Or in the Salon des Ambassadeurs at the Palais des festivals, Cannes. From left: Cristina Paulon, Sergio Canazza, Alessandro Russo, Michele Patella, Nadir Dalla Pozza (kneeling).



Online presentation: MPAI’s AI-based End-to-End video codec has better compression than traditional codecs

Fifteen months ago, MPAI started an investigation on AI-based End-to-End Video Coding, a new approach to video coding not based on traditional architectures. Recently published results show that Version 0.3 of the MPAI-EEV Reference Model (EEV-0.3) has generally higher performance than the MPEG-HEVC video coding standard when applied to the MPAI set of high-quality drone video sequences.

This is now superseded by the news that the soon-to-be-released EEV-0.4 subjectively outperforms the MPEG-VVC codec using the low-delay P configuration. Join the online presentation of MPAI EEV-0.4:

At 15:00 UTC on the 1st of March 2023.

You will learn about the model, its performance, and its planned public release both for training and test, and how you can participate in the EEV meetings and contribute to achieving the amazing promises of end-to-end video coding.

The main features of EEV-0.4 are:

  1. A new model that decomposes motion into intrinsic and compensatory motion: the former, originating from the implicit spatiotemporal context hidden in the reference pictures, does not add to the bit budget while the latter, playing a structural refinement and texture enhancement role, requires bits.
  2. Inter prediction is performed in the deep feature space in the form of progressive temporal transition, conditioned on the decomposed motion.
  3. The framework is end-to-end optimized together with the residual coding and restoration modules.

Extensive experimental results on diverse datasets (MPEG test and non-standard sequences) demonstrate that this model outperforms the VTM-15.2 Reference Model in terms of MS-SSIM.
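For readers who want to reproduce this kind of measurement, MS-SSIM between original and decoded frames can be computed, for instance, with the third-party pytorch_msssim package (an assumption; the MPAI-EEV evaluation scripts may use a different implementation):

```python
import torch
from pytorch_msssim import ms_ssim  # assumed third-party dependency: pip install pytorch-msssim

def average_ms_ssim(original: torch.Tensor, decoded: torch.Tensor) -> float:
    """Mean MS-SSIM over a batch of frames shaped (N, C, H, W) with pixel values in [0, 1]."""
    return ms_ssim(decoded, original, data_range=1.0, size_average=True).item()

# Illustrative call with random tensors standing in for real original/decoded frames:
reference = torch.rand(8, 3, 256, 256)
decoded = (reference + 0.01 * torch.randn_like(reference)).clamp(0.0, 1.0)
print(average_ms_ssim(reference, decoded))
```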

You are invited to attend the MPAI EEV-0.4 presentation.


MPAI approves a new Technical Specification and a Technical Report

Geneva, Switzerland – 25 January 2023. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 28th General Assembly (MPAI-28) approving the Neural Network Watermarking (MPAI-NNW) Technical Specification, the MPAI Metaverse Model (MPAI-MMM) Technical Report, and the 2023 program of work on the Metaverse.

MPAI-28 has approved for publication the following two documents:

  1. Neural Network Watermarking (MPAI-NNW). Draft Technical Specification providing methodologies to evaluate the performance of neural network-based watermarking solutions in terms of imperceptibility, robustness, and computational cost. Further information: YouTube video, Non-YouTube video, MPAI-NNW.
  2. MPAI Metaverse Model (MPAI-MMM). Draft Technical Report, a document outlining a set of desirable guidelines to accelerate the development of interoperable Metaverses. The online presentation of the draft version of this document is available as a YouTube video or Non-YouTube video: The MPAI Metaverse Model.

MPAI has also approved the 2023 program of work related to the MPAI Metaverse Model:

  1. Functionality Profiles referencing MMM functionalities, not technologies.
  2. Metaverse Instance Architecture with the functions and data types of the building blocks.
  3. Functional requirements of the identified data types.
  4. Table of Contents of the Common Metaverse Specifications.
  5. Initial Common Metaverse Specifications that include MPAI Technologies.

MPAI is continuing its work plan comprising the following Technical Specifications:

  1. AI Framework (MPAI-AIF). Standard for a secure AIF environment executing AI Workflows (AIW) composed of AI Modules (AIM).
  2. Avatar Representation and Animation (MPAI-ARA). Standard for generation and animation of interoperable avatar models reproducing humans and expressing a Personal Status.
  3. Context-based Audio Enhancement (MPAI-CAE). Standard to describe an audio scene to support human interaction with autonomous vehicles and metaverse applications.
  4. Multimodal Conversation (MPAI-MMC). Standard for Personal Status, generalising the notion of Emotion by including Cognitive State and Social Attitude.

The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:

  1. AI Health (MPAI-AIH). Targets an architecture where smartphones store users’ health data processed using AI and AI Models are updated using Federated Learning.
  2. Connected Autonomous Vehicles (MPAI-CAV). Targets the Human-CAV Interaction Environment Sensing, Autonomous Motion, and Motion Actuation subsystems implemented as AI Workflows.
  3. End-to-End Video Coding (MPAI-EEV). Extends the video coding frontiers using AI-based End-to-End Video coding.
  4. AI-Enhanced Video Coding (MPAI-EVC). Improves existing video coding with AI tools for short-to-medium term applications.
  5. Server-based Predictive Multiplayer Gaming (MPAI-SPG). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
  6. XR Venues (MPAI-XRV). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases where venues may be both real and virtual.

As we enter the year 2023, this is a good time for legal entities supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data to join MPAI.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


MPAI is offering its high-quality drone sequences to the video coding community

Fifteen months ago, MPAI started an investigation into AI-based End-to-End Video Coding, a new approach that is not based on traditional video coding architectures. Recently published results from the investigation show that Version 0.3 of the MPAI-EEV Reference Model has generally higher performance than the MPEG-HEVC video coding standard when applied to the MPAI set of high-quality drone video sequences.

MPAI is now offering its Unmanned Aerial Vehicle (UAV) sequence dataset for use by the video community in testing compression algorithms. The dataset contains various drone videos captured under different conditions, including environments, flight altitudes, and camera views. These video clips are selected from several categories of real-life objects in different scene object densities and lighting conditions, representing diverse scenarios in our daily life.

Compared to natural videos, UAV-captured videos are generally recorded by drone-mounted cameras in motion and at different viewpoints and altitudes. These features bring several new challenges, such as motion blur, scale changes, and complex backgrounds. Heavy occlusion, non-rigid deformation, and the tiny scale of objects pose great challenges to drone video compression.

Please get an invitation from the MPAI Secretariat and come to one of the biweekly meetings of the MPAI-EEV group (starting from the 1st of February 2023). The MPAI-EEV group will showcase its fully neural-network-based video codec model and its superior performance on drone videos. The group is inclusive and is planning for the future of video coding using end-to-end learning. Please feel free to participate, leaving your comments or suggestions with the MPAI-EEV group. We will discuss your contribution and our state of the art with the goal of progressing this exciting area of coding of video sequences from drones.

Table 1 – Drone video test sequences

Source                             | Sequence Name    | Spatial Resolution | Frame Count | Frame Rate | Bit Depth | Scene Feature
Class A (VisDrone-SOT, TPAMI 2021) | BasketballGround | 960×528            | 100         | 24         | 8         | Outdoor
Class A (VisDrone-SOT, TPAMI 2021) | GrassLand        | 1344×752           | 100         | 24         | 8         | Outdoor
Class A (VisDrone-SOT, TPAMI 2021) | Intersection     | 1360×752           | 100         | 24         | 8         | Outdoor
Class A (VisDrone-SOT, TPAMI 2021) | NightMall        | 1920×1072          | 100         | 30         | 8         | Outdoor
Class A (VisDrone-SOT, TPAMI 2021) | SoccerGround     | 1904×1056          | 100         | 30         | 8         | Outdoor
Class B (VisDrone-MOT, TPAMI 2021) | Circle           | 1360×752           | 100         | 24         | 8         | Outdoor
Class B (VisDrone-MOT, TPAMI 2021) | CrossBridge      | 2720×1520          | 100         | 30         | 8         | Outdoor
Class B (VisDrone-MOT, TPAMI 2021) | Highway          | 1344×752           | 100         | 24         | 8         | Outdoor
Class C (Corridor, IROS 2018)      | Classroom        | 640×352            | 100         | 24         | 8         | Indoor
Class C (Corridor, IROS 2018)      | Elevator         | 640×352            | 100         | 24         | 8         | Indoor
Class C (Corridor, IROS 2018)      | Hall             | 640×352            | 100         | 24         | 8         | Indoor
Class D (UAVDT S, ECCV 2018)       | Campus           | 1024×528           | 100         | 24         | 8         | Outdoor
Class D (UAVDT S, ECCV 2018)       | RoadByTheSea     | 1024×528           | 100         | 24         | 8         | Outdoor
Class D (UAVDT S, ECCV 2018)       | Theater          | 1024×528           | 100         | 24         | 8         | Outdoor

See https://mpai.community/standards/mpai-eev/about-mpai-eev/
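If it helps to script experiments against the dataset, the sequences in Table 1 can be captured as structured data and iterated over with any encoder under test. The sketch below is only an illustration: run_encoder is a hypothetical user-supplied callable, and only a subset of the table rows is filled in for brevity.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DroneSequence:
    source: str
    name: str
    width: int
    height: int
    frames: int
    fps: int
    bit_depth: int
    scene: str

# A subset of Table 1; the remaining rows follow the same pattern.
SEQUENCES: List[DroneSequence] = [
    DroneSequence("Class A / VisDrone-SOT TPAMI 2021", "BasketballGround", 960, 528, 100, 24, 8, "Outdoor"),
    DroneSequence("Class B / VisDrone-MOT TPAMI 2021", "CrossBridge", 2720, 1520, 100, 30, 8, "Outdoor"),
    DroneSequence("Class C / Corridor IROS 2018", "Classroom", 640, 352, 100, 24, 8, "Indoor"),
    DroneSequence("Class D / UAVDT S ECCV 2018", "Campus", 1024, 528, 100, 24, 8, "Outdoor"),
]

def run_tests(run_encoder: Callable[[DroneSequence], Dict[str, float]]) -> None:
    """Run a user-supplied encoder on every sequence and print its metrics."""
    for seq in SEQUENCES:
        metrics = run_encoder(seq)  # e.g. {"bitrate_kbps": ..., "ms_ssim": ...}
        print(f"{seq.name} ({seq.width}x{seq.height} @ {seq.fps} fps): {metrics}")

# Example with a dummy encoder that returns placeholder numbers:
run_tests(lambda seq: {"bitrate_kbps": 0.0, "ms_ssim": 1.0})
```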

Join MPAI – Share the fun – Build the future!


A look inside MPAI XR Venues

XR Venues is an MPAI project (MPAI-XRV) addressing use cases enabled by Extended Reality (XR) technologies – the combination of Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR) – and enhanced by Artificial Intelligence (AI) technologies. The word “venue” is used as a synonym for “real and virtual environments”.

The XRV group has identified some 10 use cases and made a detailed analysis of three of them: eSports Tournament, Live theatrical stage performance, and Experiential retail/shopping.

How did XRV become an MPAI project? MPAI responds to industry needs with a rigorous process that includes 8 phases, from Interest Collection up to Technical Specification. The initial part of the process:

  1. Starts with the submission of a proposal triggering the Interest Collection stage where the interest of people other than the proposers is sought.
  2. Continues with the Use Cases stage where applications of the proposal are studied.
  3. Concludes with the Functional Requirements stage where the AI Workflows implementing the developed use cases and their composing AI Modules are identified with their functions and data formats.

Let’s see how things are developing in the XR Venues project (MPAI-XRV), now at the Functional Requirements stage. We will describe the eSports Tournament use case. This consists of two teams of 3 to 6 players arranged on either side of a real-world (RW) stage, each using a computer to compete within a real-time Massively Multiplayer Online game space.

Figure 1 – An eSports Tournament

The game space occurs in a virtual world (VW) populated by:

  1. Players represented by avatars each driven by role (e.g., magicians, warriors, soldier, etc.), properties (e.g., costumes, physical form, physical features), and actions (e.g., casting spells, shooting, flying, jumping).
  2. Avatars representing other players, autonomous characters (e.g., dragon, monsters, various creatures), and environmental structures (e.g., terrain, mountains, bodies of water).

The game action is captured by multiple VW cameras, projected onto a RW immersive screen surrounding the spectators, and live streamed to remote spectators as a 2D video with all the related sounds of the VW game space.

A shoutcaster calls the action as the game proceeds. The RW venue (XR Theatre) includes one or more immersive screens where the image of RW players, player stats or other information or imagery may also be displayed. The same may also be live streamed. The RW venue is augmented with lighting and special effects, music, and costumed performers.

Live stream viewers interact with one another and with commentators through live chats, Q&A sessions, etc. while RW spectators interact through shouting, waving and interactive devices (e.g., LED wands, smartphones). RW spectators’ features are extracted from data captured by camera and microphone or wireless data interface and interpreted.

Actions are generated from RW or remote audience behaviour and VW action data (e.g., spell casting, characters dying, bombs exploding).

At the end of the tournament, an award ceremony featuring the winning players on the RW stage is held with great fanfare.

eSports Tournament is a representative example of the XRV project where human participants are exposed to real and virtual environments that interact with one another. Figure 2 depicts the general model representing how data from a real or virtual environment are captured, processed, and interpreted to generate actions transformed into experiences that are delivered to another real or virtual environment.

Figure 2 – Environment A to Environment B Interactions

Irrespective of whether Environment A is real or virtual, Environment Capture captures signals and/or data from the environment, Feature Extraction extracts descriptors from the data, and Feature Interpretation yields interpretations by analysing those descriptors. Action Generation generates actions by analysing interpretations, Experience Generation translates actions into an experience, and Environment Rendering delivers the signals and/or data corresponding to the experience into Environment B, whether real or virtual. Of course, the same sequence of steps can occur in the right-to-left direction starting from Environment B.
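A minimal sketch of this chain, written in Python under the assumption that each stage is a simple callable, may help fix ideas; the stage implementations and the data they exchange are placeholders, not XRV-specified interfaces.

```python
from typing import Any, Callable, List

Stage = Callable[[Any], Any]  # each stage maps its input to the next representation in Figure 2

def make_pipeline(environment_capture: Stage,
                  feature_extraction: Stage,
                  feature_interpretation: Stage,
                  action_generation: Stage,
                  experience_generation: Stage,
                  environment_rendering: Stage) -> Stage:
    """Compose the Environment A -> Environment B chain of Figure 2."""
    stages: List[Stage] = [environment_capture, feature_extraction, feature_interpretation,
                           action_generation, experience_generation, environment_rendering]

    def run(environment_a_signals: Any) -> Any:
        data = environment_a_signals
        for stage in stages:
            data = stage(data)
        return data  # signals/data delivered into Environment B

    return run

# Illustrative stand-in stages (real XRV AI Modules would do far more):
pipeline = make_pipeline(
    environment_capture=lambda env: {"audio": env["microphones"], "video": env["cameras"]},
    feature_extraction=lambda captured: {"descriptors": ["cheering", "waving"]},
    feature_interpretation=lambda features: {"interpretation": "audience excitement rising"},
    action_generation=lambda interpretation: {"action": "trigger light burst"},
    experience_generation=lambda action: {"experience": "strobe pattern + crowd roar sample"},
    environment_rendering=lambda experience: f"rendered into Environment B: {experience['experience']}",
)
print(pipeline({"microphones": b"...", "cameras": b"..."}))
```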

A thorough analysis of the eSports Tournament use case has led the XRV group to develop the reference model depicted in Figure 3.

Figure 3 – Reference Model of eSports Tournament

The AI Modules on the left-hand side and in the middle of the reference model perform the Feature Extraction and Feature Interpretation functions identified in Figure 2. The data they generate are:

  1. Player Status is the ensemble of information internal to the player, expressed by Emotion, Cognitive State, and Attitude estimated from Audio-Video-Controller-App of the individual players.
  2. Participants Status is the ensemble of information, expressed by Emotion, Cognitive State and Attitude of participants, estimated from the collective behaviour of Real World and on-line spectators in response to actions of a team, a player, or the game through audio, video, interactive controllers, and smartphone apps. Both data types are similar to the Personal Status developed in the context of Multimodal Conversation Version 2.
  3. Game State is estimated from Player location and Player Action (both in the VW), Game Score and Clock.
  4. Game Action Status is estimated from Game State, Player History, Team History, and Tournament Level.

The four data streams are variously combined by the three AI Modules on the right-hand side to generate effects in the RW and VW, and to orientate the cameras in the VW. These correspond to the Action Generation, Experience Generation, and Environment Rendering functions of Figure 2.
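Purely as an illustration, the four data streams could be sketched in Python as follows; the field choices are hypothetical, and the actual formats will emerge from the Functional Requirements work.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PersonalStatusLike:
    """Shared shape of Player Status and Participants Status (hypothetical)."""
    emotion: str
    cognitive_state: str
    attitude: str

@dataclass
class PlayerStatus(PersonalStatusLike):
    player_id: str   # estimated from the individual player's audio, video, controller, and app data

@dataclass
class ParticipantsStatus(PersonalStatusLike):
    audience: str    # "real-world" or "online"; estimated from collective spectator behaviour

@dataclass
class GameState:
    player_locations: List[Tuple[float, float, float]]  # virtual-world positions
    player_actions: List[str]                            # virtual-world actions
    game_score: Tuple[int, int]
    clock_seconds: float

@dataclass
class GameActionStatus:
    game_state: GameState
    player_history: List[str]
    team_history: List[str]
    tournament_level: str
```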

The definition of interfaces between the AI Modules of Figure 3 will enable the independent development of those AI Modules with standard interfaces. An XR Theatre will be able to host a pre-existing game and produce an eSports Tournament supporting RW and VW audience interactivity. To the extent that the game possesses the required interfaces, the XR Theatre can also drive actions within the VW.

eSports has grown substantially in the last decade. Arena-sized eSports Tournaments of increasing complexity are now routine. An XRV Venue dedicated to eSports and enabled by AI can greatly enhance the participants’ experience with powerful multi-sensory, interactive, and highly immersive media, while lowering the complexity of the system and the required human resources. Standardised AI Modules for an eSports XRV Venue enhance interoperability across different games and simplify experience design.


The MPAI Metaverse Model has been launched

MPAI posted the MPAI Metaverse Model (MMM) on the 3rd of January 2023, calling for comments and contributions until the 23rd of January, and organised two online presentations. You can see the recording of one presentation and the PowerPoint file:

YouTube Non-YouTube The MPAI Metaverse Model WD0.5

The MMM is a proposal for a method to develop Metaverse standards. It is based on experience honed during decades of digital media standardisation and seeks to accommodate the extreme heterogeneity of industries, all needing a common technology complemented by industry specificities.

The MMM is not just a proposal of a method. It also includes a roadmap and implements the first steps of it. The steps of the roadmap are not intended to be implemented in a strict sequential way.

The steps are listed below. Steps 1 to 4 are ongoing and included in the MMM. Step 5 has started.

  1. Terms and Definitions – An interconnected and consistent set of terms and definitions.
  2. Assumptions – A set of assumptions guiding the development of metaverse standards, starting from:
    1. Collect functionalities.
    2. Develop the Common Metaverse Specifications (CMS).
    3. Establish industry-specific profiles based on CMS technologies.
  3. Use Cases – A set of 18 use cases with workflows used to develop metaverse functionalities.
  4. External Services – Services potentially used by a metaverse instance; analysed to develop metaverse functionalities.
  5. Functionality Profiles – Develop profiles that reference functionalities included in the MMM, not technologies.
  6. Metaverse Architecture – Develop a metaverse architecture with functional blocks and data exchanged between blocks.
  7. Functional Requirements of Data Formats – Develop functional requirements of the data formats exchanged between blocks.
  8. CMS Table of Contents – Identify and organise all technologies required to support the MMM functionalities.
  9. MPAI Standards – Enter MPAI standards relevant to the metaverse into the CMS Table of Contents.



A bird’s eye view of the MPAI Metaverse Model

MPAI is pleased to announce that, after a full year of effort, it has published the MPAI Metaverse Model, the master plan of a project designed to facilitate the establishment of standards promoting Metaverse Interoperability. Watch:

YouTube video Non-YouTube video

The industry is showing a growing interest in the Metaverse that is expected to create new jobs, opportunities, and experiences with transformational impacts on virtually all sectors of human interaction.

Standards and Artificial Intelligence are widely recognised as two of the main drivers for the development of the Metaverse. MPAI – Moving Picture, Audio, and Data Coding by Artificial Intelligence – plays a role in both thanks to its status as an international, unaffiliated, non-profit organisation developing standards for AI-based data coding with clear Intellectual Property Rights licensing frameworks.

The MMM is a full-bodied document divided into 9 chapters:

  1. Introduction gives a high-level overview of the MMM and explains the community-comments process: MPAI posts the MMM, anybody can send comments and contributions to the MPAI Secretariat, MPAI considers them, and the MMM is published in final form on 25 January.
  2. Definitions gives a comprehensive set of Metaverse-related terms and definitions.
  3. Assumptions details 16 assumptions that the proposed Metaverse standardisation process will adopt. Some of them are:
    1. the steps of the standardisation process.
    2. the availability of Common Metaverse Specifications (CMS).
    3. the eventual development of Metaverse Profiles.
    4. a definition of Metaverse Instance and Interoperability.
    5. the layered structure of a Metaverse Instance.
    6. the fact that Metaverse Instances already exist.
    7. the definition of Metaverse User.
  4. Use Cases collects a large number of application domains that will benefit from the use of the Metaverse. They are analysed to derive Metaverse Functionalities, such as:
    1. Automotive,
    2. Education,
    3. Finance,
    4. Healthcare
    5. Retail.
  5. External Services collects some of the services that a Metaverse Instance may require, either as platform-native or as externally provided services; these are analysed to derive Metaverse Functionalities. Examples are:
    1. content creation
    2. marketplace
    3. crypto wallets.
  6. Functionalities is a major element of the MMM in its current form. It collects a large number of Functionalities that a Metaverse Instance may support depending on the Profile it adopts. It is organised into 9 areas:
    1. Instance,
    2. Environment,
    3. Content Representation,
    4. Perception of the Universe by the Metaverse,
    5. Perception of the Metaverse by the Universe,
    6. User,
    7. Interaction,
    8. Information search
    9. Economy support.
    Each area is organised into subareas; e.g., Instance is subdivided into:
      1. Management
      2. Organisation
      3. Features
      4. Storage
      5. Process Management
      6. Security.
    Each subarea provides the Functionalities relevant to that subarea; e.g., Process Management includes the following Functionalities:
      1. Smart Contract
      2. Smart Contract Monitoring
      3. Smart Contract Interoperability.
  7. Technologies has the challenging task of verifying how well technologies match the requirements of the Functionalities. Currently, the following Technologies are analysed:
    1. Sensory information – namely, Audio, Visual, Touch, Olfaction, Gustation, and Brain signals.
    2. Data processing – how can we cope with the end of Moore’s Law and with the challenging requirements for distributed processing.
    3. User Devices – how Devices can cope with challenging motion-to-photon requirements.
    4. Network – the prospects of networks providing services satisfying high-level requirements, e.g., latency and bit error rate.
    5. Energy – the prospects of energy storage for portable devices and of energy consumption caused by thousands of Metaverse Instances and potentially billions of Devices.
  8. Governance identifies and analyses two areas:
    1. technical governance of the Metaverse System if the industry decides that this level of governance is in the common interest.
    2. governance by public authorities operating at a national or regional level.
  9. Profiles provides an initial roadmap from the publication of the MMM to the development of Profiles through the development of
    1. Metaverse Architecture
    2. Functional Requirements of Data types
    3. Common Metaverse Specification Table of Contents
    4. mapping of MPAI standard Technologies into the CMS
    5. inclusion of all required Technologies
    6. drafting of the mission of the Governance of the Metaverse System.

The MMM is a large integrated document. Comment on the MMM and join MPAI to make it happen!


Two MPAI documents published for community comments

Geneva, Switzerland – 21 December 2022. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 27th General Assembly (MPAI-27) celebrating the adoption without modifications of three MPAI Technical Specifications as IEEE standards, and approving the publication of the MPAI Metaverse Model (MPAI-MMM) draft Technical Report and the Neural Network Watermarking (MPAI-NNW) draft Technical Specification for community comments.

The Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) has adopted three MPAI Technical Specifications – AI Framework (MPAI-AIF), Context-based Audio Enhancement (MPAI-CAE), and Multimodal Conversation (MPAI-MMC) – as IEEE Standards 3301-2022, 3302-2022, and 3300-2022, respectively. The MPAI and IEEE versions are technically equivalent, and implementers of MPAI/IEEE standards can obtain an ImplementerID from the MPAI Store.

MPAI implements a rigorous process of standards development requiring publication of a draft Technical Specification or Technical Report with a request for community comments before final approval and publication.  MPAI-27 approved the following two documents for the said preliminary publication:

  1. MPAI Metaverse Model (MPAI-MMM). Draft Technical Report, a document outlining a set of desirable guidelines to accelerate the development of interoperable Metaverses:
    1. A set of assumptions laid at the foundation of the Technical Report.
    2. Use cases based on, and services provided to, Metaverse Instances.
    3. Application of the profile approach successfully adopted for digital media standards to Metaverse standards.
    4. An initial set of functionalities used by Metaverse Instances to facilitate the definition of profiles.
    5. Identification of the main technologies enabling the Metaverse.
    6. A roadmap to definition of Metaverse Profiles.
    7. An initial list of governance and regulation issues likely to affect the design, deployment, operation, and interoperability of Metaverse Instances.

An online presentation of MPAI-MMM will be made on 2023/01/10

08:00 UTC: https://us06web.zoom.us/meeting/register/tZEtcuuurTsuHdcbXCAy-we7soWkIqK5a2MK

18:00 UTC: https://us06web.zoom.us/meeting/register/tZcocuqtrjkuGdz0_nQWhLIJMvSHbfAkqP39

The MPAI Metaverse Model is accessible online.

  2. Neural Network Watermarking (MPAI-NNW). Draft Technical Specification providing methodologies to evaluate the performance of neural network-based watermarking solutions in terms of:
    1. The watermarking solution imperceptibility defined as a measure of the potential impact of the watermark injection on the result of the inference created by the model.
    2. The watermarking solution robustness defined as the detector and decoder ability to retrieve the watermark when the watermarked model is subjected to modifications.
    3. The computational cost of the main operations performed in the end-to-end watermarking process.

The documents are accessible from the links above. Comments should be sent to the MPAI secretariat. Both documents are expected to be released in final form on 2023/01/25.

MPAI is continuing its work plan comprising the following Technical Specifications:

  1. AI Framework (MPAI-AIF). Standard for a secure AIF environment executing AI Workflows (AIW) composed of AI Modules (AIM).
  2. Avatar Representation and Animation (MPAI-ARA). Standard for generation and animation of interoperable avatar models reproducing humans and expressing a Personal Status.
  3. Context-based Audio Enhancement (MPAI-CAE). Standard to describe an audio scene to support human interaction with autonomous vehicles and metaverse applications.
  4. Multimodal Conversation (MPAI-MMC). Standard for Personal Status, generalising the notion of Emotion by including Cognitive State and Social Attitude.

The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:

  1. AI Health (MPAI-AIH). Targets an architecture where smartphones store users’ health data processed using AI and AI Models are updated using Federated Learning.
  2. Connected Autonomous Vehicles (MPAI-CAV). Targets the Human-CAV Interaction Environment Sensing, Autonomous Motion, and Motion Actuation subsystems implemented as AI Workflows.
  3. End-to-End Video Coding (MPAI-EEV). Extends the video coding frontiers using AI-based End-to-End Video coding.
  4. AI-Enhanced Video Coding (MPAI-EVC). Improves existing video coding with AI tools for short-to-medium term applications.
  5. Server-based Predictive Multiplayer Gaming (MPAI-SPG). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
  6. XR Venues (MPAI-XRV). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases where venues may be both real and virtual.

As we enter the year 2023, it is a good opportunity for legal entities supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data to join MPAI.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.