1      Introduction
2      MPAI-AIF
2.1       Version 1
2.2       Version 2
3      MPAI-AIH
4      MPAI-CAE
4.1       Version 1
4.2       Version 2
5      MPAI-CAV
6      MPAI-CUI
7      MPAI-EEV
8      MPAI-EVC
9      MPAI-GSA
10     MPAI-GME
11     MPAI-HMC
12     MPAI-MMC
12.1      Version 1
12.2      Version 2
13     MPAI-MMM
14     MPAI-NNW
15     MPAI-OSD
16     MPAI-PAF
17     MPAI-PRF
18     MPAI-SPG
19     MPAI-XRV

 

1          Introduction

MPAI’s standards development is based on projects that evolve through a workflow of 7+1 stages (numbered 0 to 7). The 7th stage is split into four specifications: Technical Specification, Reference Software Specification, Conformance Testing Specification, and Performance Assessment Specification. Technical Reports can also be developed to investigate new fields.

 

# Acr Name Description
0 IC Interest Collection Collection and harmonisation of the use cases proposed.
1 UC Use Cases Proposals of use cases, their description, and merger of compatible use cases.
2 FR Functional Requirements Identification of the functional requirements that the standard including the Use Case should satisfy.
3 CR Commercial Requirements Development and approval of the framework licence of the standard.
4 CfT Call for Technologies Preparation and publication of a document calling for technologies supporting the functional and commercial requirements.
5 SD Standard Development Development of the standard in a specific Development Committee (DC).
6 CC Community Comments When the standard has achieved sufficient maturity, it is published with a request for comments.
7 MS MPAI Standard The standard, comprising the following 4 documents, is approved by the General Assembly.
7.1 TS Technical Specification The normative specification needed to make a conforming implementation.
7.2 RS Reference Software The descriptive text and the software implementing the Technical Specification.
7.3 CT Conformance Testing The specification of the steps to be executed to test an implementation for conformance.
7.4 PA Performance Assessment The specification of the steps to be executed to assess an implementation for performance.

 

A project progresses from one stage to the next by resolution of the General Assembly.

The stages of the currently active MPAI projects (as of MPAI-43) are shown in Table 1.

Legend: TS: Technical Specification, RS: Reference Software, CT: Conformance Testing, PA: Performance Assessment; TR: Technical Report; V2: Version 2.

Table 1 – Snapshot of the MPAI work plan (MPAI-43)

# V Work area IC UC FR CR CfT SD CC TS RS CT PA TR
AIF 1.1 AI Framework X X X
AIF 2.0 AI Framework X
AIH AI Health Data  X
CAE 1.4 Context-based Audio Enhancement X X X
CAE 2.1 Context-based Audio Enhancement X X
CAV Connected Autonomous Vehicles  X
1.0 – Architecture
– Technologies
CUI 1.1 Compress. & Understanding of Industrial Data X X X X
EEV AI-based End-to-End Video Coding X
EVC AI-Enhanced Video Coding X
GME 1.1 Governance of the MPAI Ecosystem X
GSA Integrated Genomic/Sensor Analysis X
HMC 1.0 Human and Machine Communication X
MMC 1.2 Multimodal Conversation X X X
MMC 2.1 Multimodal Conversation X  X
MMM 1.0 – MPAI Metaverse Model X
1.0 – Functionalities
1.0 – Functionality Profiles X
1.1 – Architecture X
– Technologies
NNW 1.0 Neural Network Watermarking X  X
OSD 1.0 Object and Scene Description X  X  X
PAF 1.1 Avatar Representation & Animation  X
PRF AI Module Profiles X
SPG Server-based Predictive Multiplayer Gaming X
XRV XR Venues X X
– Live Theatrical Performance

 

2          MPAI-AIF

The MPAI approach to AI standards is based on the belief that breaking up large AI applications into smaller elements called AI Modules (AIMs), combining them into workflows called AI Workflows (AIWs), and having them exchange processed data with known semantics to the extent possible improves the explainability of AI applications and promotes a competitive market of components with standard interfaces.
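Purely as an illustration of this component-based idea (the class and method names below are hypothetical, not the normative MPAI-AIF API), an AIW can be thought of as AIMs with declared inputs and outputs exchanging named data:

```python
from typing import Dict, Any, List

class AIM:
    """Hypothetical AI Module: a unit with named input and output data."""
    def __init__(self, name: str, inputs: List[str], outputs: List[str]):
        self.name, self.inputs, self.outputs = name, inputs, outputs

    def process(self, data: Dict[str, Any]) -> Dict[str, Any]:
        raise NotImplementedError  # each concrete AIM implements its own function

class AIW:
    """Hypothetical AI Workflow: AIMs executed in order, exchanging named data."""
    def __init__(self, aims: List[AIM]):
        self.aims = aims

    def run(self, data: Dict[str, Any]) -> Dict[str, Any]:
        for aim in self.aims:
            # pass only the data items the AIM declares as inputs
            result = aim.process({k: data[k] for k in aim.inputs})
            data.update(result)  # outputs become available to downstream AIMs
        return data
```

The point of the sketch is that each AIM exposes only declared, named data, so modules from different vendors can be composed as long as their interfaces match.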

Technical Specification: AI Framework (MPAI-AIF) V2 enables dynamic configuration, initialisation, and control of mixed Artificial Intelligence – Machine Learning – Data Processing workflows in a standard environment called AI Framework (AIF).

2.1        Version 1

Figure 1 shows the MPAI-AIF V1 Reference Model.

Figure 1 – Reference model of the MPAI AI Framework (MPAI-AIF) V1

The MPAI-AIF Technical Specification V1 and Reference Software V1 have been approved and are available here.

2.2        Version 2

MPAI-AIF V1 assumed that the AI Framework was secure but did not provide support to developers wishing to execute an AI application in a secure environment. MPAI-AIF V2 responds to this requirement. As shown in Figure 2, the standard defines a Security Abstraction Layer (SAL). By accessing the SAL APIs, a developer can create the required level of security with the desired functionalities.

 

Figure 2 – Reference model of the MPAI AI Framework (MPAI-AIF) V2

The MPAI-AIF Technical Specification V2 has been approved and is available here. MPAI-AIF V2 includes V1 as Basic Profile. The Security Profile is a superset of the Basic Profile.

The Reference Software Specification V2 is being developed.

 

3          MPAI-AIH

Artificial Intelligence for Health data (MPAI-AIH) is an MPAI project aiming to specify the interfaces and the relevant data formats of a system called AI Health Platform (AIH Platform) where:

  1. End Users use handsets with an MPAI AI Framework (AIH Frontends) to acquire and process health data.
  2. An AIH Backend collects processed health data delivered by AIH Frontends with associated Smart Contracts specifying the rights granted by End Users.
  3. Smart Contracts are stored on a blockchain.
  4. Third-Party Users can process their own and End User-provided data based on the relevant Smart Contracts.
  5. The AIH Backend periodically collects the AI Models trained by the AIH Frontends while processing the health data, updates its own AI Model, and distributes it to the AIH Frontends (Federated Learning; a minimal sketch of this update follows the list).
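For illustration only, the following is a minimal sketch of the kind of federated update alluded to in item 5; the weighted-average aggregation rule and all names are assumptions, not part of the MPAI-AIH specification:

```python
import numpy as np
from typing import List, Dict

def federated_average(client_models: List[Dict[str, np.ndarray]],
                      client_weights: List[float]) -> Dict[str, np.ndarray]:
    """Aggregate parameters trained on AIH Frontends into an updated backend model
    using a weighted average (e.g., weights proportional to local data size)."""
    total = sum(client_weights)
    aggregated = {}
    for name in client_models[0]:
        aggregated[name] = sum(w * m[name]
                               for m, w in zip(client_models, client_weights)) / total
    return aggregated

# The AIH Backend would then redistribute the aggregated model to the AIH Frontends.
```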

This is depicted in Figure 3 (for simplicity the security part of the AI Framework is not included).

Figure 3 – MPAI-AIH Reference Model

MPAI-AIH is at the Standard Development stage. The collection of public documents is available here.

 

4          MPAI-CAE

Context-based Audio Enhancement (MPAI-CAE) uses AI and context information to act on the input audio content and improve the user experience of several audio-related applications (entertainment, communication, teleconferencing, gaming, post-production, restoration, etc.) in a variety of contexts such as the home, the car, on the go, and the studio.

4.1        Version 1

Figure 4 is the reference model of Emotion-Enhanced Speech, a Use Case developed for Version 1.

Figure 4 – An MPAI-CAE Use Case: Emotion-Enhanced Speech

The MPAI-CAE Technical Specification, Reference Software, and Conformance Testing have been approved and are available here.

4.2        Version 2

MPAI has developed the specification of the Audio Scene Description Composite AIM as part of the MPAI-CAE V2 standard.

Figure 5 – Audio Scene Description Composite AIM

MPAI-CAE V2.1 has been approved and is available here.

 

5          MPAI-CAV

Connected Autonomous Vehicles (MPAI-CAV) is an MPAI project addressing the Connected Autonomous Vehicle (CAV) domain and the 5 main operating instances of a CAV:

Figure 6 – The CAV subsystems

Technical Specification: Connected Autonomous Vehicle (MPAI-CAV) – Architecture specifies the Architecture of a Connected Autonomous Vehicle (CAV) based on a Reference Model comprising:

  1. A CAV broken down into Subsystems for each of which the following is specified:
    • The Functions
    • The input/output Data
    • The Topology of Components
  2. Each Subsystem broken down into Components, for each of which the following is specified:
    • The Functions
    • The input/output Data.

 

Figure 7 depicts the Human-CAV Interaction (HCI) Subsystem Reference Model.

Figure 7 – Reference Model of the Human-CAV Interaction Subsystem

Technical Specification: Connected Autonomous Vehicle (MPAI-CAV) – Architecture has already been published and is available here.

 

The next step is the development of the Technical Specification: Connected Autonomous Vehicle (MPAI-CAV) – Technologies. Functional Requirements are being developed that will be used to issue a Call for Technologies. The collection of public documents is available here.

 

6          MPAI-CUI

Compression and Understanding of Industrial Data (MPAI-CUI) aims to enable AI-based filtering and extraction of key information to predict company performance by applying Artificial Intelligence to governance, financial, and risk data. This is depicted in Figure 8.

Figure 8 – The MPAI-CUI Use Case

The set of specifications composing the MPAI-CUI standard is available here.

7          MPAI-EEV

There is consensus in the video coding research community that so-called End-to-End (E2E) video coding schemes can yield significantly higher performance than those targeted, e.g., by MPAI-EVC. AI-based End-to-End Video Coding (MPAI-EEV) intends to address this promising area.

MPAI has extended the OpenDVC model producing four versions of the EEV Reference Model (Figure 9). The latest – EEV0.4 – exceeds the performance of the MPEG-EVC standard.

Figure 9 – MPAI-EEV Reference Model
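As general background (not taken from the MPAI-EEV documents), learned codecs of the OpenDVC family are typically trained end-to-end by jointly minimising a rate-distortion objective of the form

$$ L = \lambda \, D(x, \hat{x}) + R $$

where $D$ is the distortion between the original frame $x$ and the reconstructed frame $\hat{x}$, $R$ is the estimated bit-rate of the transmitted latents, and $\lambda$ is a trade-off factor chosen per target bit-rate.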

MPAI is currently developing EEV0.5, which introduces Bi-directional Predicted Frames.

 

The collection of public documents is available here.

 

8          MPAI-EVC

AI-Enhanced Video Coding (MPAI-EVC) is a video compression standard that substantially enhances the performance of a traditional video codec by improving or replacing traditional tools with AI-based tools. Two approaches – Horizontal Hybrid and Vertical Hybrid – are envisaged. The Vertical Hybrid approach envisages an AVC/HEVC/EVC/VVC base layer plus a machine learning-based enhancement layer. This case is represented by Figure 10.

Figure 10 – A reference diagram for the Vertical Hybrid approach

The Horizontal Hybrid approach combines AI-based algorithms with a traditional image/video codec, trying to replace one block of the traditional scheme with a machine learning-based one. This case is described by Figure 11, where green circles represent tools that can be replaced or enhanced with their AI-based equivalents.

Figure 11 – A reference diagram for the Horizontal Hybrid approach
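As an illustration of the Horizontal Hybrid idea (an assumed example, not a tool defined by MPAI-EVC), a traditional in-loop or post-processing filter could be replaced by a small convolutional network that enhances the reconstructed frame:

```python
import torch
import torch.nn as nn

class CNNLoopFilter(nn.Module):
    """Hypothetical CNN replacing a traditional in-loop/post filter:
    it takes a reconstructed frame and predicts a residual correction."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, reconstructed: torch.Tensor) -> torch.Tensor:
        # residual learning: output = decoded frame + predicted correction
        return reconstructed + self.net(reconstructed)

# Training would minimise the distortion between filtered and original frames,
# e.g. nn.MSELoss()(filter(decoded_batch), original_batch).
```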

MPAI is engaged in the MPAI-EVC Evidence Project seeking to find evidence that AI-based technologies provide sufficient improvement to the Horizontal Hybrid approach. A second project on the Vertical Hybrid approach is being considered.

The collection of public documents is available here.

 

9          MPAI-GSA

Integrative Genomic/Sensor Analysis (MPAI-GSA) uses AI to understand and compress the result of high-throughput experiments combining genomic/proteomic and other data, e.g., from video, motion, location, weather, and medical sensors.

Figure 12 addresses the Smart Farming Use Case.

Figure 12 – An MPAI-GSA Use Case: Smart Farming

The collection of public documents is available here.

 

10     MPAI-GME

Technical Specification: Governance of the MPAI Ecosystem (MPAI-GME) lays down the foundations of the MPAI Ecosystem. MPAI develops and maintains the following technical documents:

  1. Technical Specification.
  2. Reference Software Specification.
  3. Conformance Testing.
  4. Performance Assessment.
  5. Technical Report.

 

An MPAI Standard is a collection of a variable number of the 5 document types.

 

Figure 13 depicts the operation of the MPAI ecosystem generated by MPAI Standards.

Figure 13 – The MPAI ecosystem operation

 

The MPAI-GME Technical Specification V1.1 has been approved and is available here.

 

11     MPAI-HMC

Human and Machine Communication (MPAI-HMC) leverages several MPAI Technical Specifications, including Multimodal Conversation, Context-based Audio Enhancement, Portable Avatar Format, and MPAI Metaverse Model. All of them deal with real and digital humans communicating in real or virtual environments; MPAI-HMC specifies the Human and Machine Communication Use Case in a single document. MPAI-HMC is designed to enable multi-faceted or general-purpose applications such as information kiosks, virtual assistants, chatbots, information services, and metaverse applications in multilingual contexts. In these contexts, Entities – humans present in a real space or represented in a virtual space as speaking avatars, or machines represented in a virtual space as speaking avatars, all acting in context – use text, speech, face, gesture, and the audio-visual scene in which they are embedded.

Figure 14 illustrates the Human and Machine communication settings targeted by MPAI-HMC. The term Machine followed by a number indicates an MPAI-HMC instance.

Figure 14 – Examples of MPAI-HMC communication

 

The MPAI-HMC Technical Specification V1.0 has been approved and is available here.

 

12     MPAI-MMC

Multimodal Conversation (MPAI-MMC) aims to enable human-machine conversation that emulates human-human conversation in completeness and intensity by using AI.

12.1    Version 1

The MPAI mission is to develop AI-enabled data coding standards. MPAI believes that its standards should enable humans to select machines whose internal operation they understand to some degree, rather than machines that are “black boxes” resulting from unknown training with unknown data. Thus, an implemented MPAI standard breaks up monolithic AI applications, yielding a set of interacting components with identified data whose semantics is known, as far as possible.

Technical Specification: Multimodal Conversation (MPAI-MMC) is an implementation of this vision for human-machine conversation. Five Use Cases have been developed for MPAI-MMC V1: Conversation with Emotion, Multimodal Question Answering (QA), and three Automatic Speech Translation Use Cases.

Figure 15 depicts the Reference Model of the Conversation with Emotion Use Case.

Figure 15 – An MPAI-MMC V1 Use Case: Conversation with Emotion

The MPAI-MMC Technical Specification V1.2 has been approved and is available here.

12.2    Version 2

Extending the role of emotion introduced in Version 1 of the standard, MPAI-MMC V2 introduces Personal Status, an internal status of humans that a machine needs to estimate and that it artificially creates for itself with the goal of improving its conversation with the human or even with another machine. Personal Status is applied to MPAI-MMC-specific Use Cases such as Conversation about a Scene, Virtual Secretary for Videoconference, and Human-Connected Autonomous Vehicle Interaction.

Several new Use Cases have been specified for Technical Specification: Multi-modal conversation V2 (MPAI-MMC) V2.1. One of them is Conversation About a Scene (CAS) of which Figure 16 is the reference model.

Figure 16 – An MPAI-MMC V2 Use Case: Conversation About a Scene

Figure 17 gives the Reference Model of a second use case: Virtual Secretary (used by the Avatar-Based Videoconference use case).

Figure 17 – Reference Model of the Virtual Secretary

MPAI-MMC V2.1 is at the Technical Specification stage and available from here.

 

13     MPAI-MMM

The MPAI Metaverse Model represents a system that captures data from the real world, processes it, and combines it with internally generated data to create virtual environments that users can interact with.

The MPAI Metaverse Model (MPAI-MMM) is an MPAI project targeting a series of deliverables for Metaverse Interoperability. Two MPAI Technical Reports – Functionalities and Functionality Profiles – have laid down the groundwork. Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Architecture V1.1 provides initial tools by specifying the Functional Requirements of Processes, Items, Actions, and Data Types that allow two or more metaverse instances to Interoperate, possibly via a Conversion Service, if they implement the Operation Model and produce Data whose Format complies with the Specification’s Functional Requirements.

Figure 18 depicts one aspect of the Specification where a Process in an M-Instance requests a Process in another M-Instance to perform an Action by relying on their Resolution Services.

Figure 18 – Resolution and Conversion Services
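Purely as an illustration (the structure and names below are assumptions, not the MPAI-MMM data formats), the inter-instance request of Figure 18 could be modelled as follows:

```python
from dataclasses import dataclass

@dataclass
class ActionRequest:
    """Hypothetical inter-M-Instance request: a source Process asks a
    destination Process to perform an Action on an Item."""
    source_process: str       # identifier of the requesting Process
    destination_process: str  # identifier known to the Resolution Service
    action: str               # name of the requested Action
    item_id: str              # the Item the Action applies to

def resolve_and_forward(request: ActionRequest, resolution_service: dict) -> str:
    """Look up the destination M-Instance endpoint via the Resolution Service
    and return the address to which the request should be forwarded."""
    return resolution_service[request.destination_process]
```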

 

MPAI-MMM – Architecture is at the Technical Specification level and available from here.

 

14     MPAI-NNW

Neural Network Watermarking (MPAI-NNW) is a standard whose purpose is to enable watermarking technology providers to qualify their products by providing the means to measure, for a given size of the watermarking payload, the ability of:

  1. The watermark inserter to inject a payload without deteriorating the performance of the neural network (NN).
  2. The watermark detector to recognise the presence of the inserted watermark when applied to:
    1. A watermarked network that has been modified (e.g., by transfer learning or pruning).
    2. An inference of the modified model (a minimal sketch of such a robustness measurement follows this list).
  3. The watermark decoder to successfully retrieve the payload when applied to:
    1. A watermarked network that has been modified (e.g., by transfer learning or pruning).
    2. An inference of the modified model.
  4. The watermark inserter to inject a payload at a measured computational cost on a given processing environment.
  5. The watermark detector/decoder to detect/decode a payload from a watermarked model or from any of its inferences at a low computational cost, e.g., execution time on a given processing environment.
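For illustration only (the procedure and names below are assumptions, not the normative MPAI-NNW evaluation methodology), the robustness of a watermark detector against model modification could be estimated as follows:

```python
import numpy as np

def detection_rate(detector, watermarked_models, modify, n_trials: int = 100) -> float:
    """Estimate how often the detector still finds the watermark after a
    modification such as pruning or fine-tuning is applied to each model.
    `detector(model) -> bool` and `modify(model) -> model` are assumed callables."""
    detected = 0
    for model in watermarked_models:
        for _ in range(n_trials):
            modified = modify(model)          # e.g. random magnitude pruning
            detected += int(detector(modified))
    return detected / (len(watermarked_models) * n_trials)

def prune_smallest(weights: np.ndarray, ratio: float = 0.3) -> np.ndarray:
    """Example modification: zero out the smallest-magnitude weights."""
    threshold = np.quantile(np.abs(weights), ratio)
    return np.where(np.abs(weights) < threshold, 0.0, weights)
```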

Figure 19 – MPAI-NNW implemented in MPAI-AIF

The MPAI-NNW Technical Specification and the Reference Software Specification have been published. Both are available here.

 

15     MPAI-OSD

Object and Scene Description (MPAI-OSD) is an MPAI project seeking to define a set of technologies for coordinated use in the many use cases targeted by MPAI projects and standards. Examples are Spatial Attitude and Point of View, Objects, Scene Descriptors, and Event Descriptors.

Technical Specification: Object and Scene Description (MPAI-OSD) is at the Technical Specification stage and is available from here.

The collection of public documents is available here.

 

16     MPAI-PAF

There is a long history of computer-created objects called “digital humans”, i.e., digital objects that can be rendered to show a human appearance. In most cases, the underlying assumption has been that creation, animation, and rendering are done in a closed environment. Such digital humans had little or no need for standards. However, in a communication context, and even more in a metaverse context, there are many cases where a digital human is not constrained within a closed environment. For instance, a client may send data to a remote client that should be able to unambiguously interpret and use the data to reproduce a digital human as intended by the transmitting client.

These new usage scenarios require forms of standardisation. Technical Specification: Portable Avatar Format (MPAI-PAF) is a first response to the need of a user wishing to enable their transmitting client to send data that a remote client can interpret to render a digital human whose body movements and facial expression represent the user’s own movements and expression.

Figure 20 is the system diagram of the Avatar-Based Videoconference Use Case enabled by MPAI-PAF.

Figure 20 – End-to-End block diagram of Avatar-Based Videoconference

MPAI-PAF is at the Technical Specification stage and available from here.

The collection of public documents is available here.

 

17     MPAI-PRF

Some AIMs receive more or fewer input data and produce more or fewer output data than an AIM with the same name used in other AIWs, even though they nominally perform the same function. Since it is not realistic to require that all AIMs be equipped with the additional logic needed to handle features they do not use, Technical Specification: AI Module Profiles (MPAI-PRF) provides a mechanism that unambiguously signals which characteristics of an AIM – called Attributes in the following – are supported by the AIM.

For instance, the Profile of a Natural Language Understanding (MMC-NLU) AIM that does not handle spatial information can be labelled in two equivalent ways; either form allows compact signalling matched to the number of Attributes supported by the AIM:

 

Removing unsupported Attributes: MMC-NLU-V2.1(ALL-AVG-OII)
Adding supported Attributes: MMC-NLU-V2.1(NUL+TXO+TXR)

 

Attributes, however, are not always sufficient to identify the capabilities of an AIM instance. For instance, an AIM instance of Personal Status Display (PAF-PSD) may support Personal Status, but only the Speech (PS-Speech) and Face (PS-Face) Personal Status Factors. This is illustrated by the following two examples:

 

Removing unsupported Attributes: PAF-PSD-V1.1(ALL@IPS#SPE#FCE)
Adding supported Attributes: PAF-PSD-V1.1(NUL+TXT+AVM@IPS#FCE#GST)

 

Here @ prefixed to IPS signals that the AIM supports Personal Status, but only for Speech and Face in the first case and for Face and Gesture in the second, represented by SPE, FCE, and GST, the codes of the PS-Speech, PS-Face, and PS-Gesture Sub-Attributes, respectively (the full list of Personal Status Sub-Attributes is provided by the specification). The second case may apply to a sign language-capable AIM.
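As an illustration only (the grammar below is inferred from the examples above and is not the normative MPAI-PRF syntax), a profile label of this kind could be split into its parts as follows:

```python
import re

def parse_profile(label: str):
    """Split a profile label such as 'PAF-PSD-V1.1(NUL+TXT+AVM@IPS#FCE#GST)'
    into the AIM identifier, its version, and the Attribute/Sub-Attribute codes.
    The pattern is an assumption based on the examples in this section."""
    m = re.match(r"(?P<aim>[A-Z]+-[A-Z]+)-V(?P<version>[\d.]+)\((?P<body>.+)\)", label)
    if not m:
        raise ValueError(f"Unrecognised profile label: {label}")
    codes = re.split(r"[+\-@#]", m.group("body"))  # codes separated by +, -, @ or #
    return m.group("aim"), m.group("version"), [c for c in codes if c]

# Example (hypothetical): parse_profile("PAF-PSD-V1.1(NUL+TXT+AVM@IPS#FCE#GST)")
# -> ("PAF-PSD", "1.1", ["NUL", "TXT", "AVM", "IPS", "FCE", "GST"])
```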

 

Technical Specification: AI Module Profiles (MPAI-PRF) is open to Community Comments. Anybody can submit comments on the draft by sending an email to the MPAI Secretariat by 2024/05/08T23:59.

 

18     MPAI-SPG

Server-based Predictive Multiplayer Gaming (MPAI-SPG) aims to minimise the audio-visual and gameplay discontinuities caused by high latency or packet losses during an online real-time game. In case information from a client is missing, the data collected from the clients involved in a particular game are fed to an AI-based system that predicts the moves of the client whose data are missing. The same technologies also respond to the need to detect which players are cheating.
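As an illustration only (the prediction model, game-state representation, and names are assumptions, not part of MPAI-SPG), the server-side fallback could look like this:

```python
from typing import Dict, Optional, Any

def next_game_state(game_state: Dict[str, Any],
                    client_inputs: Dict[str, Optional[dict]],
                    predictor) -> Dict[str, Any]:
    """Advance the authoritative game state by one tick. If a client's input
    is missing (high latency or packet loss), substitute a prediction derived
    from the current game state and the inputs of the other clients."""
    resolved = {}
    for client_id, received in client_inputs.items():
        if received is not None:
            resolved[client_id] = received
        else:
            # predictor is assumed to be an AI model trained on past game data
            resolved[client_id] = predictor.predict(client_id, game_state, client_inputs)
    return apply_inputs(game_state, resolved)

def apply_inputs(game_state, inputs):
    """Placeholder for the game engine's state-update step."""
    return {**game_state, "last_inputs": inputs}
```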

Figure 21 depicts the MPAI-SPG reference model including the cloud gaming model.

Figure 21 – The MPAI-SPG Use Case

The collection of public documents is available here.

 

19     MPAI-XRV

XR Venues (MPAI-XRV) – Live Theatrical Performance addresses Broadway theatres, musicals, dramas, operas, and other performing arts that increasingly use video scrims, backdrops, and projection mapping to create digital sets rather than constructing physical stage sets. This allows animated backdrops and reduces the cost of mounting shows. The use of immersion domes – especially LED volumes – promises to surround audiences with virtual environments that the live performers can inhabit and interact with.

MPAI-XRV has developed a reference model that describes the components of the Real-to-Virtual-to-Real scenario depicted in Figure 22.

Figure 22 – General Reference Model of the Real-to-Virtual-to-Real Interaction

 

The MPAI XR Venues (XRV) – Live Theatrical Stage Performance project, a use case of MPAI-XRV, intends to define AI Modules that facilitate setting up live multisensory immersive stage performances, which ordinarily require extensive on-site show-control staff to operate. With XRV it will be possible to have more direct, precise, yet spontaneous show implementation and control that achieves the show director’s vision while freeing staff from repetitive and technical tasks, letting them amplify their artistic and creative skills.

An XRV Live Theatrical Stage Performance can extend into the metaverse as a digital twin. In this case, elements of the Virtual Environment experience can be projected in the Real Environment and elements of the Real Environment experience can be rendered in the Virtual Environment (metaverse).

Figure 23 shows how the XRV system captures the Real (stage and audience) and Virtual (metaverse) Environments, AI-processes the captured data, and injects new components into the Real and Virtual Environments.

Figure 23 – Reference Model of MPAI-XRV – Live Theatrical Stage Performance

 

MPAI-XRV – Live Theatrical Stage Performance is at the Standard Development stage.

The collection of public documents is available here.