Geneva, Switzerland – 26 October 2022. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 25th General Assembly (MPAI-25). Among the outcomes is the decision, based on substantial inputs received in response to its Calls for Technologies, to extend three of its existing standards and to initiate the development of two new standards.
The three standards being extended are:
- AI Framework (MPAI-AIF). AIF is an MPAI-standardised environment where AI Workflows (AIW) composed of AI Modules (AIM) can be executed. Based on substantial industry input, MPAI is in a position to extend the MPAI-AIF specification with a set of APIs that allow a developer to configure a security solution appropriate for the intended application.
- Context-based Audio Enhancement (MPAI-CAE). Currently, MPAI-CAE specifies four use cases: Emotion-Enhanced Speech, Audio Recording Preservation, Speech Restoration Systems, and Enhanced Audioconference Experience. The last use case includes technology to describe the audio scene of an audio/video conference room in a standard way. MPAI-CAE is being extended to support more challenging environments such as human interaction with autonomous vehicles and metaverse applications.
- Multimodal Conversation (MPAI-MMC). MPAI-MMC V1 specified a robust and extensible emotion description system. In the V2 currently under development, MPAI is generalising the notion of Emotion to cover two more internal statuses, Cognitive State and Social Attitude, and is specifying a new data format, called Personal Status, that covers all three internal statuses.
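The Personal Status data format is still being specified in MPAI-MMC V2, so purely as an illustration, a record combining the three internal statuses could be sketched as follows (the type and field names here are hypothetical, not taken from the standard):

```python
from dataclasses import dataclass

# Hypothetical sketch of a combined internal-status record. The actual
# Personal Status data format is defined by MPAI-MMC V2; these names
# are illustrative only.
@dataclass
class PersonalStatus:
    emotion: str           # internal status already standardised in MPAI-MMC V1
    cognitive_state: str   # new internal status covered by V2
    social_attitude: str   # new internal status covered by V2

ps = PersonalStatus(emotion="happy",
                    cognitive_state="attentive",
                    social_attitude="polite")
print(ps)
```

The point of the sketch is simply that one container carries all three statuses, which is what lets a single Personal Status value replace three separate descriptions in an interface.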
The two new standards under development are:
- Avatar Representation and Animation (MPAI-ARA). The standard intends to provide technology to enable:
- A user to generate an avatar model and descriptors for animating it, and an independent user to animate that model using the descriptors.
- A machine to animate a speaking avatar model expressing the Personal Status that the machine has generated during the conversation with a human (or another avatar).
- Neural Network Watermarking (MPAI-NNW). The standard specifies methodologies to evaluate neural network watermarking solutions in terms of:
- The impact of the watermark on the performance of the watermarked neural network (and its inference).
- The ability of the detector/decoder to detect/decode the payload after the watermarked neural network has been modified.
- The computational cost of injecting a watermark into, detecting a watermark in, or decoding a payload from a neural network.
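The second evaluation axis above — payload survival after the network is modified — can be made concrete with a toy experiment. The following sketch is not the MPAI-NNW methodology or any particular watermarking scheme; it is an assumed minimal example in which a payload is encoded in the signs of selected weights, the weights are then perturbed (simulating fine-tuning or compression), and the fraction of recovered payload bits is measured:

```python
import random

random.seed(0)  # fixed seed so the toy experiment is repeatable
n_weights, n_bits, strength = 1000, 64, 0.5
weights = [random.gauss(0.0, 1.0) for _ in range(n_weights)]
payload = [random.randint(0, 1) for _ in range(n_bits)]
carriers = random.sample(range(n_weights), n_bits)

# Injection: force the sign of each carrier weight to encode one payload bit.
for bit, i in zip(payload, carriers):
    weights[i] = strength if bit else -strength

# Modification: perturb the watermarked weights, simulating fine-tuning
# or lossy compression of the network.
modified = [w + random.gauss(0.0, 0.1) for w in weights]

# Decoding: read the payload back from the carrier weights' signs.
decoded = [1 if modified[i] > 0 else 0 for i in carriers]
bit_accuracy = sum(d == p for d, p in zip(decoded, payload)) / n_bits
print(f"payload bits recovered after modification: {bit_accuracy:.0%}")
```

With this mild perturbation the sign-encoded payload survives almost entirely; heavier modifications (stronger noise, pruning, quantisation) would lower the recovered-bit rate, and quantifying that trade-off is the kind of measurement the evaluation methodologies address.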
Development of these standards is planned to be completed in the early months of 2023.
MPAI-25 has also confirmed its intention to develop a Technical Report (TR) called MPAI Metaverse Model (MPAI-MMM). The TR will cover all aspects underpinning the design, deployment, and operation of a Metaverse Instance, especially interoperability between Metaverse Instances.
So far, MPAI has developed five standards for applications that have AI as the core enabling technology. It is now extending three of those standards, developing two new standards and one Technical Report, and drafting functional requirements for nine future standards. It is thus a good opportunity for legal entities that support the MPAI mission and can contribute to the development of standards for the efficient use of data to join MPAI, also considering that memberships taken out on or after 1 November 2022 are immediately active and last until 31 December 2023.
| Name | Status | Description |
|------|--------|-------------|
| Compression and Understanding of Industrial Data | Approved standard | Predicts the company’s performance from governance, financial, and risk data. |
| Governance of the MPAI Ecosystem | Approved standard | Establishes the rules governing the submission of and access to interoperable implementations. |
| AI Framework | Standard being extended | Specifies an infrastructure enabling the execution of implementations and access to the MPAI Store. |
| Context-based Audio Enhancement | Standard being extended | Improves the user experience of audio-related applications in a variety of contexts. |
| Multimodal Conversation | Standard being extended | Enables human-machine conversation emulating human-human conversation. |
| Avatar Representation and Animation | Standard being developed | Specifies descriptors of avatars impersonating real humans. |
| MPAI Metaverse Model | Technical Report being developed | Guides creation and operation of Interoperable Metaverses. |
| Neural Network Watermarking | Standard being developed | Measures the impact of adding ownership and licensing information to models and inferences. |
| AI Health | Functional requirements being developed | Specifies components to securely collect, AI-based process, and access health data. |
| Connected Autonomous Vehicles | Functional requirements being developed | Specifies components for Environment Sensing, Autonomous Motion, and Motion Actuation. |
| End-to-End Video Coding | Functional requirements being developed | Explores the promising area of AI-based “end-to-end” video coding for longer-term applications. |
| AI-Enhanced Video Coding | Functional requirements being developed | Improves existing video coding with AI tools for short-to-medium term applications. |
| Integrative Genomic/Sensor Analysis | Functional requirements being developed | Compresses high-throughput experiments’ data combining genomic/proteomic and other data. |
| Mixed-reality Collaborative Spaces | Functional requirements being developed | Supports collaboration of humans represented by avatars in virtual-reality spaces. |
| Visual Object and Scene Description | Functional requirements being developed | Describes objects and their attributes in a scene. |
| Server-based Predictive Multiplayer Gaming | Functional requirements being developed | Trains a network to compensate data losses and detects false data in online multiplayer gaming. |
| XR Venues | Functional requirements being developed | Supports XR-enabled and AI-enhanced use cases where venues may be both real and virtual. |
Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.
Most importantly: please join MPAI, share the fun, build the future.