Geneva, Switzerland – 22 February 2023. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 29th General Assembly (MPAI-29), approving a new version of its Context-based Audio Enhancement (MPAI-CAE) Technical Specification, posted for Community Comments, and the Neural Network Watermarking (MPAI-NNW) Reference Software.

Version 2 of the Context-based Audio Enhancement (MPAI-CAE) Technical Specification, besides supporting the functionalities of Version 1, specifies new technologies that enable a device to describe an audio scene in terms of audio objects and their directions. MPAI uses this Technical Specification to enable human interaction with autonomous vehicles, avatar-based videoconferencing, and metaverse applications. The document is posted with a request for Community Comments, to be sent to secretariat@mpai.community by 20 March 2023.
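As a purely illustrative sketch of what "audio objects and their directions" could look like as data, consider the following; the class and field names (AudioObject, azimuth_deg, and so on) are hypothetical and are not taken from the MPAI-CAE V2 Technical Specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AudioObject:
    """One sound source extracted from the captured scene (hypothetical fields)."""
    object_id: str
    azimuth_deg: float    # direction of arrival in the horizontal plane
    elevation_deg: float  # direction of arrival in the vertical plane
    samples: bytes        # the separated audio signal of this object

@dataclass
class AudioSceneDescription:
    """A device-level description of the audio scene as a set of objects."""
    timestamp_ms: int
    objects: List[AudioObject]

# Example: a scene with two talkers located to the left and right of the device.
scene = AudioSceneDescription(
    timestamp_ms=0,
    objects=[
        AudioObject("talker-1", azimuth_deg=-45.0, elevation_deg=0.0, samples=b""),
        AudioObject("talker-2", azimuth_deg=30.0, elevation_deg=5.0, samples=b""),
    ],
)
```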

The Reference Software of Neural Network Watermarking (MPAI-NNW) provides the means, including the software, to evaluate the performance of neural network-based watermarking solutions in terms of imperceptibility, robustness, and computational cost. This version of the software is specific to image classification but can be extended to other application areas.
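As a rough, hypothetical sketch of such an evaluation (the functions below are placeholders, not the Reference Software's actual interfaces): imperceptibility can be checked by comparing the decisions of the original and watermarked classifiers, robustness by detecting the payload after a simulated attack, and computational cost by timing the embedding step.

```python
import time
import random

# Placeholder stand-ins for a watermarking solution under test.
def embed_watermark(model, payload):
    """Return a copy of the model carrying the watermark payload."""
    return {"weights": [w + random.uniform(-1e-4, 1e-4) for w in model["weights"]],
            "payload": payload}

def detect_watermark(model):
    """Return the recovered payload, or None if detection fails."""
    return model.get("payload")

def classify(model, image):
    """Toy image classifier: sign of the dot product of weights and pixels."""
    score = sum(w * x for w, x in zip(model["weights"], image))
    return 1 if score > 0 else 0

original = {"weights": [random.uniform(-1, 1) for _ in range(16)]}
images = [[random.uniform(0, 1) for _ in range(16)] for _ in range(100)]

# Computational cost: time needed to embed the watermark.
t0 = time.perf_counter()
watermarked = embed_watermark(original, payload="owner-id-42")
embed_cost = time.perf_counter() - t0

# Imperceptibility: how often does the watermarked model change its decisions?
agreement = sum(classify(original, im) == classify(watermarked, im)
                for im in images) / len(images)

# Robustness: is the payload still detectable after a simulated pruning attack?
attacked = dict(watermarked,
                weights=[w if abs(w) > 0.1 else 0.0 for w in watermarked["weights"]])
robust = detect_watermark(attacked) == "owner-id-42"

print(f"imperceptibility (decision agreement): {agreement:.2%}")
print(f"robustness after pruning attack: {robust}")
print(f"embedding cost: {embed_cost * 1e3:.2f} ms")
```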

MPAI is continuing its work plan, which includes the development of the following Technical Specifications:

  1. The AI Framework (MPAI-AIF) V2 Technical Specification will enable an implementer to establish a secure AIF environment to execute AI Workflows (AIW) composed of AI Modules (AIM).
  2. The Avatar Representation and Animation (MPAI-ARA) V1 Technical Specification will support creation and animation of interoperable human-like avatar models expressing a Personal Status.
  3. The Multimodal Conversation (MPAI-MMC) V2 Technical Specification will generalise the notion of Emotion by adding Cognitive State and Social Attitude and specify a new data type, Personal Status (a minimal sketch follows this list).
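As a purely illustrative sketch, assuming hypothetical field names not taken from MPAI-MMC V2, a Personal Status could be represented as the combination of Emotion, Cognitive State, and Social Attitude factors:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalStatusFactor:
    """One factor of a Personal Status (hypothetical representation)."""
    label: str                          # e.g. "happy", "attentive", "polite"
    intensity: Optional[float] = None   # optional strength in [0, 1]

@dataclass
class PersonalStatus:
    """Emotion generalised by adding Cognitive State and Social Attitude."""
    emotion: PersonalStatusFactor
    cognitive_state: PersonalStatusFactor
    social_attitude: PersonalStatusFactor

# Example: the status a conversation module might attach to an utterance.
status = PersonalStatus(
    emotion=PersonalStatusFactor("happy", 0.7),
    cognitive_state=PersonalStatusFactor("attentive", 0.9),
    social_attitude=PersonalStatusFactor("polite"),
)
print(status)
```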

The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:

  1. AI Health (MPAI-AIH). Targets an architecture in which smartphones store users’ health data processed using AI, and AI Models are updated using Federated Learning.
  2. Connected Autonomous Vehicles (MPAI-CAV). Targets the Human-CAV Interaction, Environment Sensing, Autonomous Motion, and Motion Actuation subsystems implemented as AI Workflows.
  3. End-to-End Video Coding (MPAI-EEV). Extends the video coding frontiers using AI-based End-to-End Video coding.
  4. AI-Enhanced Video Coding (MPAI-EVC). Improves existing video coding with AI tools for short-to-medium term applications.
  5. Server-based Predictive Multiplayer Gaming (MPAI-SPG). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
  6. XR Venues (MPAI-XRV). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases where venues may be both real and virtual.

Legal entities that support the MPAI mission and can contribute to the development of standards for the efficient use of data still have a good opportunity to join MPAI.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.