Established in September 2020, MPAI has published five standards this week, bringing the total to nine. Let’s see what they are about.
MPAI Metaverse Model (MPAI-MMM) – Architecture is the first technical metaverse standard published by a standards body. MPAI-MMM specifies technologies enabling two metaverse instances M-InstanceA and M-InstanceB to interoperate if they rely on the same Operation Model, use the same Profile, and either use the same technologies or use independent technologies while accessing Conversion Services that losslessly transform data of M-InstanceA into data of M-InstanceB.
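As a rough, non-normative illustration of those conditions, the interoperability check can be summarised in a few lines of Python; the class and function names below are hypothetical and not part of MPAI-MMM.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MInstance:
    """Hypothetical summary of the M-Instance properties relevant to interoperability."""
    operation_model: str
    profile: str
    technologies: frozenset[str]

def can_interoperate(a: MInstance, b: MInstance,
                     lossless_conversion_available: bool) -> bool:
    """Two M-Instances interoperate if they share Operation Model and Profile,
    and either use the same technologies or can access Conversion Services
    that losslessly map one instance's data to the other's."""
    return (a.operation_model == b.operation_model
            and a.profile == b.profile
            and (a.technologies == b.technologies or lossless_conversion_available))
```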
AI Framework (MPAI-AIF) V2 specifies a secure environment called AI Framework (AIF) enabling dynamic configuration, initialisation, and control of AI Workflows (AIW) composed of AI Modules (AIM). AIMs and AIWs are defined by their functions and interfaces; AIWs additionally by the topology of their AIMs.
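A minimal sketch of that composition, using hypothetical Python classes rather than the normative AIF API, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AIM:
    """An AI Module: defined by its function and its input/output interfaces."""
    name: str
    function: str
    inputs: list[str]
    outputs: list[str]

@dataclass
class AIW:
    """An AI Workflow: a set of AIMs plus the topology connecting them."""
    name: str
    aims: dict[str, AIM] = field(default_factory=dict)
    # Topology: (source AIM, output port) -> (destination AIM, input port)
    topology: dict[tuple[str, str], tuple[str, str]] = field(default_factory=dict)

    def add(self, aim: AIM) -> None:
        self.aims[aim.name] = aim

    def connect(self, src: str, out_port: str, dst: str, in_port: str) -> None:
        """Record one edge of the AIM topology inside the workflow."""
        self.topology[(src, out_port)] = (dst, in_port)
```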
Connected Autonomous Vehicle (MPAI-CAV) – Architecture is the first technical standard on connected autonomous vehicles published by a standards body. MPAI-CAV specifies the Architecture of a CAV based on a Reference Model comprising a CAV composed of Subsystems (AIWs) with specified Functions, I/O Data, and Topology. Each Subsystem is made up of Components with specified Functions and I/O Data.
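Since the CAV Reference Model reuses the AIW/AIM pattern, a similarly hedged sketch of the hierarchy (hypothetical names, not the standard’s data formats) is:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A Subsystem building block, defined by its Function and I/O Data."""
    name: str
    function: str
    io_data: list[str] = field(default_factory=list)

@dataclass
class Subsystem:
    """A CAV Subsystem (implemented as an AIW) made up of Components."""
    name: str
    function: str
    io_data: list[str] = field(default_factory=list)
    components: list[Component] = field(default_factory=list)

@dataclass
class CAV:
    """Reference Model: a CAV composed of Subsystems plus their Topology."""
    subsystems: list[Subsystem] = field(default_factory=list)
    topology: list[tuple[str, str]] = field(default_factory=list)  # data flows between Subsystems
```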
Multimodal Conversation (MPAI-MMC) V2 specifies data formats for the analysis of text, speech, and non-verbal components as used in human-machine and machine-machine conversation applications, together with Multimodal Conversation-related AIWs and AIMs using data formats from MPAI-MMC and other MPAI standards.
Portable Avatar Format (MPAI-PAF) specifies the Portable Avatar and related data formats allowing a sender to enable a receiver to decode and render an Avatar as intended by the sender; the Personal Status Display Composite AI Module allowing the conversion of Text and a Personal Status into a Portable Avatar; and the AIWs and AIMs used by the Avatar-Based Videoconference Use Case.
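Purely as an illustration, and not the normative interface, the Personal Status Display Composite AIM can be thought of as a function from Text and Personal Status to a Portable Avatar; all names below are placeholders.

```python
from dataclasses import dataclass

@dataclass
class PersonalStatus:
    """Placeholder for the Personal Status data type (internal-state factors
    such as emotion conveyed by the sender)."""
    emotion: str
    cognitive_state: str

@dataclass
class PortableAvatar:
    """Placeholder for the Portable Avatar: what a receiver needs to decode
    and render the Avatar as the sender intended."""
    avatar_model_id: str
    text: str
    status: PersonalStatus

def personal_status_display(text: str, status: PersonalStatus,
                            avatar_model_id: str = "default-avatar") -> PortableAvatar:
    """Sketch of the Personal Status Display Composite AIM: combine Text and
    Personal Status into a Portable Avatar ready for rendering."""
    return PortableAvatar(avatar_model_id=avatar_model_id, text=text, status=status)
```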
Let’s now look at the previously developed standards.
Context-based Audio Enhancement (MPAI-CAE) specifies data types for improving the user experience in audio-related applications across a variety of contexts using context information, together with Audio-related AIWs and AIMs using data formats from MPAI-CAE and other MPAI standards.
Neural Network Watermarking (MPAI-NNW) specifies methodologies to evaluate the following aspects of neural network (NN) watermarking-related technologies: the impact on the performance of a watermarked NN and its inference; the ability of an NN watermarking detector/decoder to detect/decode a payload of a modified watermarked NN; and the computational cost of injecting, detecting, or decoding a payload in the watermarked NN.
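A hedged sketch of how such an evaluation harness could be organised (all function names are placeholders, not part of MPAI-NNW):

```python
import time
from typing import Callable, Sequence

def performance_impact(evaluate: Callable[[object], float],
                       original_nn: object, watermarked_nn: object) -> float:
    """Aspect 1: change in task performance (e.g. accuracy) caused by the watermark."""
    return evaluate(original_nn) - evaluate(watermarked_nn)

def detection_robustness(detect: Callable[[object], bool],
                         modified_watermarked_nns: Sequence[object]) -> float:
    """Aspect 2: fraction of modified (e.g. pruned or fine-tuned) watermarked
    networks in which the detector/decoder still finds the payload."""
    hits = sum(1 for nn in modified_watermarked_nns if detect(nn))
    return hits / len(modified_watermarked_nns)

def computational_cost(operation: Callable[[], object]) -> float:
    """Aspect 3: wall-clock cost of injecting, detecting, or decoding a payload."""
    start = time.perf_counter()
    operation()
    return time.perf_counter() - start
```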
Compression and Understanding of Industrial Data (MPAI-CUI) specifies data formats, AIMs and an AIW to predict a company’s probability of default and business discontinuity, and to provide an organisational model index (Company Performance Prediction Use Case).
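Illustratively, and with hypothetical field names rather than the normative MPAI-CUI data formats, the Use Case maps company data to three outputs:

```python
from dataclasses import dataclass

@dataclass
class CompanyPerformancePrediction:
    """Hypothetical container for the three outputs named above."""
    default_probability: float         # probability of default
    discontinuity_probability: float   # probability of business discontinuity
    organisational_model_index: float  # adequacy of the organisational model
```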
Governance of the MPAI Ecosystem (MPAI-GME) specifies the roles and rules of Ecosystem players: MPAI, Implementers, MPAI Store, Performance Assessors, Users.
MPAI was established to develop AI-enabled data coding standards across industry domains and is keeping its promise. Time to join MPAI!
Image by starline on Freepik