Geneva, Switzerland – 19 July 2021. At its 10th General Assembly, the international, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards association continued the development of four standards, progressed the study of functional requirements for four projects and refined the definition of two use cases.
The latest addition is Mixed Reality Collaborative Spaces (MPAI-MCS) – where MPAI is studying the application of Artificial Intelligence to the creation of mixed-reality spaces populated by streamed objects such as avatars representing geographically distributed individuals, other objects and sensor data, and their descriptions. Some of the applications envisaged are education, biomedicine, science, gaming, manufacturing and remote conferencing.
Functional requirements are being developed for:
- Server-based Predictive Multiplayer Gaming (MPAI-SPG), which uses AI to train a network to compensate for data losses and detect false data in online multiplayer gaming.
- AI-Enhanced Video Coding (MPAI-EVC), which uses AI to improve the performance of existing data processing-based video coding tools.
- Connected Autonomous Vehicles (MPAI-CAV), which uses AI in Human-CAV Interaction, Environment Sensing, Autonomous Motion, CAV to Everything and Motion Actuation.
- Integrative Genomic/Sensor Analysis (MPAI-GSA), which uses AI to compress and understand data from combined genomic and other experiments.
The four standards are at an advanced stage of development:
- Compression and Understanding of Industrial Data (MPAI-CUI) covers the AI-based Company Performance Prediction instance; it enables prediction of a company's default probability and assessment of its organisational adequacy using the company's governance, financial and risk data.
- Multimodal Conversation (MPAI-MMC) covers three instances: audio-visual conversation with a machine impersonated by a synthesised voice and an animated face, request for information about a displayed object, and translation of a sentence using a synthetic voice that preserves the speech features of the human speaker.
- Context-based Audio Enhancement (MPAI-CAE) covers four instances: adding a desired emotion to an emotionless speech segment, preserving old audio tapes, improving the audio-conference experience, and removing unwanted sounds while keeping those relevant to a user on the go.
- The AI Framework standard (MPAI-AIF) enables the creation and automation of mixed Machine Learning (ML), Artificial Intelligence (AI) and Data Processing (DP) inference workflows, implemented as software, hardware, or mixed software and hardware.
MPAI develops data coding standards for applications that have AI as their core enabling technology. Any legal entity that supports the MPAI mission may join MPAI if it is able to contribute to the development of standards for the efficient use of data.