Geneva, Switzerland – 21 December 2022. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation concluded its 27th General Assembly (MPAI-27), celebrating the adoption without modification of three MPAI Technical Specifications as IEEE standards, and approving the publication of the MPAI Metaverse Model (MPAI-MMM) draft Technical Report and the Neural Network Watermarking (MPAI-NNW) draft Technical Specification for community comments.

The Institute of Electrical and Electronics Engineers Standards Association (IEEE SA) has adopted three MPAI Technical Specifications – AI Framework (MPAI-AIF), Context-based Audio Enhancement (MPAI-CAE), and Multimodal Conversation (MPAI-MMC) – as IEEE standards number 3301-2022, 3302-2022, and 3300-2022, respectively. The MPAI and IEEE versions are technically equivalent, and implementers of MPAI/IEEE standards can obtain an ImplementerID from the MPAI Store.

MPAI implements a rigorous process of standards development requiring publication of a draft Technical Specification or Technical Report with a request for community comments before final approval and publication. MPAI-27 approved the following two documents for this preliminary publication:

  1. MPAI Metaverse Model (MPAI-MMM). Draft Technical Report, a document outlining a set of desirable guidelines to accelerate the development of interoperable Metaverses:
    1. A set of assumptions laid at the foundation of the Technical Report.
    2. Use cases based on, and services provided by, Metaverse Instances.
    3. Application of the profile approach successfully adopted for digital media standards to Metaverse standards.
    4. An initial set of functionalities used by Metaverse Instances to facilitate the definition of profiles.
    5. Identification of the main technologies enabling the Metaverse.
    6. A roadmap to the definition of Metaverse Profiles.
    7. An initial list of governance and regulation issues likely to affect the design, deployment, operation, and interoperability of Metaverse Instances.

An online presentation of MPAI-MMM will be made on 2023/01/10 in two sessions, at 08:00 UTC and at 18:00 UTC.

The MPAI Metaverse Model is accessible online.

  2. Neural Network Watermarking (MPAI-NNW). Draft Technical Specification providing methodologies to evaluate the performance of neural network-based watermarking solutions in terms of:
    1. The watermarking solution imperceptibility, defined as a measure of the potential impact of the watermark injection on the inference results produced by the model.
    2. The watermarking solution robustness, defined as the ability of the detector and decoder to retrieve the watermark when the watermarked model is subjected to modifications.
    3. The computational cost of the main operations performed in the end-to-end watermarking process.
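The three evaluation axes above can be illustrated with a minimal sketch. Assumptions: a model is represented as a function from inputs to outputs, and a watermark is a bit string recovered by a decoder; all function names and signatures here are illustrative, not part of the MPAI-NNW specification.

```python
import time

def imperceptibility(original_model, watermarked_model, test_inputs):
    """Mean absolute deviation between the two models' inference results.
    Lower values mean the watermark injection is less perceptible."""
    diffs = [abs(original_model(x) - watermarked_model(x)) for x in test_inputs]
    return sum(diffs) / len(diffs)

def robustness(decoder, modified_model, expected_bits):
    """Fraction of watermark bits correctly retrieved from a modified
    (e.g. pruned, quantised, or fine-tuned) model."""
    recovered = decoder(modified_model)
    matches = sum(a == b for a, b in zip(recovered, expected_bits))
    return matches / len(expected_bits)

def computational_cost(operation, *args):
    """Wall-clock time of one end-to-end watermarking operation
    (injection, detection, or decoding)."""
    start = time.perf_counter()
    operation(*args)
    return time.perf_counter() - start

# Toy example: a model, a watermarked variant with a small output shift,
# and a decoder that returns a fixed bit string.
model = lambda x: 2.0 * x
wm_model = lambda x: 2.0 * x + 0.01
decoder = lambda m: [1, 0, 1, 1]

imp = imperceptibility(model, wm_model, [0.0, 1.0, 2.0])   # 0.01
rob = robustness(decoder, wm_model, [1, 0, 1, 0])          # 0.75
```

In a real evaluation, the imperceptibility measure would compare task-level quality metrics rather than raw outputs, and robustness would be assessed against a catalogue of model modifications.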

The documents are accessible from the links above. Comments should be sent to the MPAI secretariat. Both documents are expected to be released in final form on 2023/01/25.

MPAI is continuing its work plan comprising the following Technical Specifications:

  1. AI Framework (MPAI-AIF). Standard for a secure AIF environment executing AI Workflows (AIW) composed of AI Modules (AIM).
  2. Avatar Representation and Animation (MPAI-ARA). Standard for generation and animation of interoperable avatar models reproducing humans and expressing a Personal Status.
  3. Context-based Audio Enhancement (MPAI-CAE). Standard to describe an audio scene to support human interaction with autonomous vehicles and metaverse applications.
  4. Multimodal Conversation (MPAI-MMC). Standard for Personal Status, generalising the notion of Emotion to include Cognitive State and Social Attitude.

The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:

  1. AI Health (MPAI-AIH). Targets an architecture where smartphones store users’ health data processed using AI and AI Models are updated using Federated Learning.
  2. Connected Autonomous Vehicles (MPAI-CAV). Targets the Human-CAV Interaction, Environment Sensing, Autonomous Motion, and Motion Actuation subsystems implemented as AI Workflows.
  3. End-to-End Video Coding (MPAI-EEV). Extends the video coding frontiers using AI-based End-to-End Video coding.
  4. AI-Enhanced Video Coding (MPAI-EVC). Improves existing video coding with AI tools for short-to-medium term applications.
  5. Server-based Predictive Multiplayer Gaming (MPAI-SPG). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
  6. XR Venues (MPAI-XRV). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases where venues may be both real and virtual.

As we enter 2023, legal entities supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data have a good opportunity to join MPAI.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.