Geneva, Switzerland – 14 June 2023. Today, the Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) organisation, an international, non-profit, and unaffiliated body developing AI-based data coding standards, concluded its 33rd General Assembly (MPAI-33) and approved the Call for Technologies: MPAI Metaverse Model (MPAI-MMM) – Architecture. Responses are due by 10 July. Two online presentations of the Call will be held on 23 June at 08:00 and 15:00 UTC.
After publishing two Technical Reports on the Functionalities and Functionality Profiles of the MPAI Metaverse Model, MPAI is now launching an ambitious plan to develop a Technical Specification on MPAI Metaverse Model – Architecture, a project that no standards body has attempted before.
The first step of the plan is the publication of the Call for Technologies, as mandated by the MPAI standard development process. Note that the Call does not address data formats, only metaverse functionalities.
Three documents are attached to the Call. The first, Use Cases and Functional Requirements, includes a reference to some thirty metaverse use cases explored by MPAI, a set of Functionalities that the Architecture should provide, and the functional requirements of its key elements: Processes, Items, Actions, and Data Types.
The second is the Framework Licence, designed to facilitate timely access to the IP essential to implement the planned MPAI-MMM – Architecture standard. The third is a Template that respondents to the Call may wish to use for their responses.
Anybody may respond to the Call. However, non-members should join MPAI to participate in the development of the MPAI-MMM – Architecture standard.
MPAI is continuing its work plan comprising the development of the following Technical Specifications:
- The AI Framework (MPAI-AIF) V2 Technical Specification will enable an implementer to establish a secure AIF environment to execute AI Workflows (AIW) composed of AI Modules (AIM).
- The Avatar Representation and Animation (MPAI-ARA) V1 Technical Specification will support creation and animation of interoperable human-like avatar models able to understand and express a Personal Status.
- The Multimodal Conversation (MPAI-MMC) V2 Technical Specification will generalise the notion of Emotion by adding Cognitive State and Social Attitude and specify a new data type called Personal Status.
The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:
- AI Health (MPAI-AIH). Targets an architecture where smartphones store users’ health data processed using AI, and AI Models are updated via Federated Learning.
- Connected Autonomous Vehicles (MPAI-CAV). Targets the Human-CAV Interaction, Environment Sensing, Autonomous Motion, and Motion Actuation subsystems implemented as AI Workflows.
- End-to-End Video Coding (MPAI-EEV). Extends the video coding frontiers using AI-based End-to-End Video coding.
- AI-Enhanced Video Coding (MPAI-EVC). Improves existing video coding with AI tools for short-to-medium term applications.
- Server-based Predictive Multiplayer Gaming (MPAI-SPG). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
- XR Venues (MPAI-XRV). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases in which venues may be either real or virtual.
Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.
Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.