Geneva, Switzerland – 12 July 2023. Today, the international, non-profit, and unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) organisation, which develops AI-based data coding standards, concluded its 34th General Assembly (MPAI-34) and approved the Call for Technologies: Connected Autonomous Vehicle (MPAI-CAV) – Architecture. Two online presentations of the Call will be held on 26 July at 8:00 and 15:00 UTC. Responses are due by 15 August.

The goal of the MPAI-CAV standard is to promote the development of a CAV industry by specifying components that can be easily integrated into larger subsystems. To achieve this goal, MPAI intends to develop MPAI-CAV as a series of standards, each adding detail to enhance CAV component interoperability. The first issue, MPAI-CAV – Architecture, to be developed using the results of the Call, aims to partition CAVs into subsystems and to further partition those subsystems into components. Both subsystems and components are identified by their function and interfaces, i.e., the data exchanged between subsystems and components.

Three documents are attached to the Call. The first, Use Cases and Functional Requirements, includes an initial set of Functionalities that the Architecture should provide.

The second document is the Framework Licence, designed to facilitate timely access to the IP essential to implement the planned MPAI-CAV – Architecture standard. The third is a Template that respondents to the Call may wish to use for their responses.

Anybody may respond to the Call. However, non-members should join MPAI to participate in the development of the MPAI-CAV – Architecture standard.

MPAI is continuing its work plan comprising the development of the following Technical Specifications:

  1. The AI Framework (MPAI-AIF) V2 Technical Specification will enable an implementer to establish a secure AIF environment to execute AI Workflows (AIW) composed of AI Modules (AIM).
  2. The Avatar Representation and Animation (MPAI-ARA) V1 Technical Specification will support creation and animation of interoperable human-like avatar models able to understand and express a Personal Status.
  3. The Multimodal Conversation (MPAI-MMC) V2 Technical Specification will generalise the notion of Emotion by adding Cognitive State and Social Attitude and specify a new data type called Personal Status.
  4. The MPAI Metaverse Model (MPAI-MMM) – Architecture V1 Technical Specification will specify the Operation Model and its components Actions, Items, and Data Types.

The MPAI work plan also includes exploratory activities, some of which are close to becoming standard or technical report projects:

  1. AI Health (MPAI-AIH). Targets an architecture in which smartphones store users’ health data processed using AI, with AI Models updated via Federated Learning.
  2. End-to-End Video Coding (MPAI-EEV). Extends the video coding frontiers using AI-based End-to-End Video coding.
  3. AI-Enhanced Video Coding (MPAI-EVC). Improves existing video coding with AI tools for short-to-medium term applications.
  4. Server-based Predictive Multiplayer Gaming (MPAI-SPG). Uses AI to train neural networks that help an online gaming server compensate for data losses and detect false data.
  5. XR Venues (MPAI-XRV). Identifies common AI Modules used across various XR-enabled and AI-enhanced use cases where venues may be both real and virtual.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.