Moving Picture, Audio and Data Coding
by Artificial Intelligence

All posts

MPAI Metaverse Model: what is it? A look at the table of contents

We think that last week's publication of the MPAI Metaverse Model (MPAI-MMM) standard is an important step in making the metaverse a viable proposition, because it gives an implementer practical means to develop a metaverse instance (M-Instance) that interoperates with another similarly developed M-Instance.

This post has the modest ambition of describing the high-level content of the standard by looking at its Table of Contents.

Foreword deals with general matters, such as the AI Framework (MPAI-AIF) standard and the MPAI Store, but also with the recent notion of Qualifier introduced by the Data Types, Formats, and Attributes (MPAI-TFA) standard. A Qualifier enables a machine to communicate its technology choices to another machine, a vital component of interoperability (you can have a look at A new type of “data about data”).

Introduction includes a summary of the work carried out by MPAI in the MMM projects: two technical reports and two standards.

Scope formalises the meaning of metaverse interoperability.

Definitions defines some 150 terms used by the MMM standard.

References provides a set of normative and informative references.

Operation, despite being an informative section, lays down the main elements of MMM operation.

Processes, Actions, and Items are the steel, cement, and bricks of which the MMM building is composed. Processes perform Actions on Items (data) to achieve a goal and usually need the help of other Processes to reach it.
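To make the Process–Action–Item triad concrete, here is a minimal Python sketch. The class and field names are purely illustrative assumptions, not the normative MMM data model:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """An Item is data handled in an M-Instance (illustrative, not the normative schema)."""
    item_id: str
    data: dict = field(default_factory=dict)

@dataclass
class Process:
    """A Process performs Actions on Items, possibly with the help of other Processes."""
    process_id: str

    def act(self, action: str, item: Item) -> Item:
        # Record the Action in the Item's (hypothetical) history metadata.
        item.data.setdefault("history", []).append((self.process_id, action))
        return item

# A Process performs an "Author" Action on an Item.
author = Process("process:author")
asset = author.act("Author", Item("item:asset-1", {"kind": "3D model"}))
print(asset.data["history"])  # [('process:author', 'Author')]
```

The point of the sketch is only the shape of the model: data (Items) is passive, and all activity is expressed as Actions performed by Processes.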

Scripting Language (MMM Script) helps a Process communicate its needs and requests in an efficient way. It also provides a rigorous, human-readable method to describe use cases using Processes, Actions, and Items.

Profiles provide a simple way to achieve interoperability while avoiding possibly costly technologies that are useless for the application at hand.

Verification Use Cases is the moment of truth for MPAI-MMM. MMM Script is used to describe nine rather complex use cases covering widely different domains, showing that the MMM-specified technologies can support the identified use cases.

Even if the above does not convince you to read the MPAI-MMM standard from page 1 to the end (note that MPAI-MMM is published as a series of connected web pages :-), I hope you can dedicate 15 more minutes to reading Operation, where you will learn more about the internals of MPAI-MMM.


On its fourth anniversary MPAI publishes the first specification supporting metaverse interoperability

Geneva, Switzerland – 30th September 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards – has concluded its 48th General Assembly (MPAI-48) by approving for publication

  1. The combined MPAI Metaverse Model – Architecture (MMM-ARC) V1.2 and Technologies (MMM-TEC) V1.0 standards
  2. The set of Multimodal Conversation (MPAI-MMC) V2.2; Object and Scene Description (MPAI-OSD) V1.1; Portable Avatar Format (MPAI-PAF) V1.2; and Data Types, Formats, and Attributes (MPAI-TFA) V1.0 standards.
  3. The Open-Source Reference Implementation of the Television Media Analysis (OSD-TMA) V1.1 Use Case.

All new standards are released using the MPAI full web-based publication method providing the specification and, where available, the reference software, the conformance testing, and the performance assessment.

MPAI-48 also learned that four MPAI standards have been adopted as IEEE standards: two new (MPAI-MMM and MPAI-PAF) and two revised (MPAI-AIF and MPAI-MMC), bringing the total to eight.

The combined Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Architecture (MMM-ARC) V1.2 and Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Technologies (MMM-TEC) V1.0 specify five types of Processes operating in an M-Instance, thirty Actions that Processes can perform, and 65 Data Types with their Qualifiers that Processes can act on to achieve client and M-Instance interoperability.

Technical Specification: Multimodal Conversation (MPAI-MMC) V2.2 specifies 23 Data Types that enable more human-like and content-rich forms of human-machine conversation, applies them to eight use cases in different domains and specifies 23 AI Modules. MPAI-MMC reuses Data Types and AI Modules from other MPAI standards.

Technical Specification: Object and Scene Description (MPAI-OSD) V1.1 specifies 27 Data Types enabling the digital representation of spatial information of Audio and Visual Objects and Scenes, applies them to the Television Media Analysis (OSD-TMA) use case and specifies 15 AI Modules. MPAI-OSD reuses Data Types and AI Modules from other MPAI standards.

Technical Specification: Portable Avatar Format (MPAI-PAF) V1.2 specifies five Data Types enabling a receiving party to render a digital human as intended by the sending party, applies them to the Avatar-Based Videoconference Use Case, and specifies 13 AI Modules. MPAI-PAF reuses Data Types and AI Modules from other MPAI standards.

Technical Specification: Data Types, Formats, and Attributes (MPAI-TFA) V1.0 specifies Qualifiers – a Data Type containing Sub-Types, Formats, and Attributes – associated with “media” Data Types – currently Text, Speech, Audio, Visual, 3D Model, and Audio-Visual – that facilitate or enable the operation of an AI Module receiving a Data Type instance. MPAI-48 has published Version 1.1 of the standard with Data Qualifiers for the MPAI Metaverse Model.

The reference software implementation of the Television Media Analysis (OSD-TMA) V1.1 Use Case produces a description of a TV program that includes the audio and visual objects, the IDs of the speakers and the faces with their space and time information, and the text of the speaker utterances.

Online presentations of MPAI-MMC, MPAI-MMM, MPAI-OSD, MPAI-PAF, and the reference software implementation of OSD-TMA will be made according to the following timetable:

Title | Acronym | Date @ UTC | Registration link
Multimodal Conversation | MPAI-MMC | 15 @ 14 | https://tinyurl.com/22h6d437
MPAI Metaverse Model | MPAI-MMM | 18 @ 15 | https://tinyurl.com/242ahnuu
Object and Scene Description | MPAI-OSD | 16 @ 15 | https://tinyurl.com/278azb4q
Portable Avatar Format | MPAI-PAF | 17 @ 14 | https://tinyurl.com/2982tqoz
Television Media Analysis | OSD-TMA | 14 @ 15 | https://tinyurl.com/yc8xhy7h

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): building a community of MPAI-AIF-based implementers.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): developing the Audio Six Degrees of Freedom (CAE-6DF).
  4. Connected Autonomous Vehicle (MPAI-CAV): developing the new MPAI-CAV Technologies (CAV-TEC) part of the standard and updating the Architecture part.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): developing use cases and functional requirements for MPAI-CUI V2.0 supporting a wide range of corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): working on a Call for Technologies for up-sampling filter for video.
  8. Governance of the MPAI Ecosystem (MPAI-GME): working on version 2.0 of the Specification.
  9. Human and Machine Communication (MPAI-HMC): developing reference software.
  10. Multimodal Conversation (MPAI-MMC): developing technologies for more Natural-Language-based user interfaces capable of handling more complex questions.
  11. MPAI Metaverse Model (MPAI-MMM): extending the MPAI-MMM specs to support more applications.
  12. Neural Network Watermarking (MPAI-NNW): studying the use of fingerprinting as a technology for neural network traceability.
  13. Object and Scene Description (MPAI-OSD): studying applications requiring more space-time handling.
  14. Portable Avatar Format (MPAI-PAF): studying more applications using digital humans needing new technologies.
  15. AI Module Profiles (MPAI-PRF): specifying which features an AI Workflow or an AI Module needs to support.
  16. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing a technical report on mitigation of data loss.
  17. Data Types, Formats, and Attributes (MPAI-TFA): extending the standard to data types used by other MPAI standards (e.g., automotive and health).
  18. XR Venues (MPAI-XRV): developing the standard for improved development and execution of Live Theatrical Performances and studying the prospects of Collaborative Immersive Laboratories.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


A new type of “data about data”

A key element of the MPAI approach to AI-based Data Coding standards, depicted in Figure 1, is AI Modules (AIM) organised in AI Workflows (AIW) and executed in the AI Framework, which enables initialisation, dynamic configuration, and control of AIWs. AIMs communicate to other AIMs in the AIW the Data obtained by executing specific functions.

The effectiveness of the functions performed by the AIMs improves if they know more about the capabilities of the AIMs they are connected to and about the Data they receive, as demonstrated by natural language processing. An instance of the MPAI Natural Language Processing (MMC-NLU) AIM can produce the recognised text and Meaning using three levels of information:

  1. Just the input text.
  2. Also the object identifiers referenced in the text.
  3. Additionally, the object context in a relevant space.

The accuracy of the refined text and Meaning produced by an MMC-NLU AIM is expected to improve when moving from the first to the third case. The cases correspond to different levels of AIM capabilities.
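The three information levels can be sketched as a toy function. This is only an illustration of the idea, not the MMC-NLU interface:

```python
def refine_meaning(text, object_ids=None, context=None):
    """Toy illustration of the three information levels (not the MMC-NLU API)."""
    meaning = {"text": text, "level": 1}          # level 1: just the input text
    if object_ids:
        meaning["objects"] = object_ids           # level 2: also referenced object IDs
        meaning["level"] = 2
        if context is not None:
            meaning["context"] = context          # level 3: also the object context
            meaning["level"] = 3
    return meaning

# The richer the side information, the higher the level the AIM can exploit.
print(refine_meaning("move it there")["level"])                                   # 1
print(refine_meaning("move it there", ["obj:chair"])["level"])                    # 2
print(refine_meaning("move it there", ["obj:chair"], {"room": "lab"})["level"])   # 3
```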

Technical Specification: AI Module Profiles (MPAI-PRF) enables an AIM instance to signal its Attributes – such as input data, output data, and functionality – and Sub-Attributes – such as the languages supported by a Text and Speech Translation AIM – that uniquely characterise the AIM. Currently, MPAI-PRF defines the Attributes of eight AIMs, but Profiles for more AIMs are likely to be defined in the future.
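A hypothetical encoding of such an Attribute/Sub-Attribute declaration might look as follows; the field names are assumptions for illustration, not the MPAI-PRF schema:

```python
# Hypothetical declaration of an AIM profile: Attributes plus Sub-Attributes.
profile = {
    "aim": "TextAndSpeechTranslation",   # illustrative AIM name
    "attributes": {
        "input": ["Text", "Speech"],
        "output": ["Text", "Speech"],
        "functionality": ["translation"],
    },
    "sub_attributes": {"languages": ["en", "it", "ko"]},
}

def supports(profile, attribute, value):
    """Check whether a declared profile covers a required capability."""
    return value in profile["attributes"].get(attribute, [])

# A connecting AIM can test capabilities before wiring up a workflow.
print(supports(profile, "input", "Speech"))  # True
print(supports(profile, "input", "Video"))   # False
```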

The effectiveness of the functions performed by an AIM can also be enabled or enhanced if the AIM knows more about the characteristics of the Data received. Examples of characteristics include:

  • The CIE 1931 colour space of an instance of the Visual Data Type.
  • The MP3 format of a speech segment.
  • The WAV file format of an audio segment.
  • The gamma correction applied to the device that produced a video.
  • The ID of an object instance in an audio segment.
  • The Text conveyed by a speech segment.

Technical Specification: Data Types, Formats, and Attributes (MPAI-TFA) V1.0 specifies a new Data Type called Qualifier, a container that can be used to represent, for instance, that a Visual Data Type instance:

  • Uses a given colour space (Sub-Type)
  • Was produced by an AVC codec (Format).
  • Is described by Dublin Core Metadata (Attribute).

The current versions of MPAI Technical Specifications generally assume that most of the Media Objects exchanged by AIMs are composed of “Content” and “Qualifiers”.

Therefore, Qualifiers are a specialised type of metadata that support the operation of AIMs receiving data from other AIMs by conveying information on the Sub-Types, Formats, and Attributes of the Content. Although human-readable, Qualifiers are intended to be used only by AIMs.
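A hypothetical sketch of a Media Object carrying Content plus a Qualifier with the three parts listed above; the field names are illustrative, not the normative MPAI-TFA syntax:

```python
# Hypothetical shape of a Media Object: "Content" plus a "Qualifier" whose parts
# follow the Sub-Type / Format / Attribute breakdown (field names are assumptions).
visual_object = {
    "content": b"...encoded bitstream...",
    "qualifier": {
        "sub_type": {"colour_space": "CIE 1931"},
        "format": {"codec": "AVC"},
        "attribute": {"metadata": "Dublin Core"},
    },
}

def needs_transcoding(media, supported_codecs):
    """A receiving AIM can inspect the Qualifier instead of probing the bitstream."""
    return media["qualifier"]["format"]["codec"] not in supported_codecs

print(needs_transcoding(visual_object, {"HEVC"}))  # True: AVC is not supported
print(needs_transcoding(visual_object, {"AVC"}))   # False
```

The design point is that the receiving AIM learns the sender's technology choices from the Qualifier, without having to parse the Content itself.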

MPAI also provides a standard method to attach information to a Data Type instance, called Annotation and defined as Data attached to an Object or a Scene. Unlike a Qualifier, which describes intrinsic properties of a Data Type, an Annotation is spatially and temporally local and changeable.

Future versions of MPAI-TFA are likely, because the large variety of application needs will require the specification of Qualifiers for additional Data Types; versioning of Qualifiers is therefore a critical component of MPAI-TFA. MPAI-TFA users are invited to communicate to the MPAI Secretariat their needs for extension of existing Data Types and specification of additional ones.


MPAI propounds the development of Collaborative Immersive Laboratories (CIL)

Collaborative Immersive Laboratory (XRV-CIL) is a project designed to enable researchers in network-connected physical venues, equipped with devices that create an immersive virtual environment, to manipulate and visualise laboratory data with researchers located in different places while sharing a simultaneous immersive experience.

XR Venue (MPAI-XRV) is an MPAI project addressing a multiplicity of use cases enabled by Extended Reality (XR) and enhanced by Artificial Intelligence (AI) technologies. MPAI-XRV specifies design methods for AI Workflows and AI Modules that automate complex processes in a variety of application domains. Venue is used as a synonym for Real and Virtual Environment. CIL is one of the XRV projects.

One use case for CIL would be to work with medical data such as scans to discover patterns within cellular data to facilitate therapy identification as part of the following workflow:

  1. Start from a file (e.g., a LIF file for data from a confocal microscope) that contains slices of a 3D Object (+ time) produced by machines from different manufacturers, and enable real-time navigation of the 3D Object starting from the slices.
  2. Use an AI-trained filter to remove the noise, i.e., information found in the slices that is not part of the scanned object.
  3. Preserve the slices by applying specific processes, e.g., dehydration.
  4. Enhance some specific features of the object by using appropriate contrasting agents, e.g., monoclonal antibodies.
  5. Use the slices in sufficient number to train a Machine:
    1. To count the cells in a human tissue from different organs, different living bodies, and anatomical features presenting different health conditions.
    2. To identify the typology and functions of the cells caused by the influence of genomics and environment, i.e., phenotyping.
  6. Request the trained Machine to produce “inferences” used to count and identify the cells having specified features.
  7. Generate statistics of the inferences produced by the Machine.
  8. A human navigates the cleaned (noise-filtered) slices as an object and verifies whether the inferences of the Machine can be trusted.
  9. Plot a trajectory of possible outcomes across multiple decision paths, based on how the living body changes over time, to guide proactive decisions on habits and therapeutic interventions.
  10. After a certain time, redo steps 1 to 9.
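Steps 2 and 6 of the workflow above can be caricatured in a few lines of Python. The functions are trivial stand-ins for the AI-trained noise filter and the trained Machine, shown only to make the data flow concrete:

```python
# Hypothetical sketch of the slice-to-inference loop (all names illustrative).
def denoise(slices):
    # Stand-in for the AI-trained noise filter of step 2.
    return [s for s in slices if s != "noise"]

def count_cells(slices):
    # Stand-in for the trained Machine's cell-counting inference of step 6.
    return sum(1 for s in slices if s == "cell")

slices = ["cell", "noise", "cell", "cell"]   # toy stand-in for microscope slices
clean = denoise(slices)
inference = count_cells(clean)
print(inference)  # 3 cells counted after noise filtering
```

In the real workflow, a human would then navigate the cleaned slices (step 8) to decide whether inferences like this count can be trusted.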

For instance, Figure 1 shows a CT or MRI dataset being normalised and analysed, and the result rendered with a renderer that is common to the participating labs. Each lab may add annotations to the dataset or apply rendering controls that enhance appropriate parts of the rendered dataset.

Figure 1 – An example of data analysed and rendered in an XRV-CIL

Figure 1 represents a specific case of the full XRV-CIL project while Figure 2 represents the more general case.

Figure 2 – The multi-technology, multi-location XRV-CIL case

Let us assume that there are N geographically distributed labs providing datasets, acquired with different technologies at different times, related to a particular application domain (each lab may provide more than one dataset). Technology-specific AI Modules normalise the datasets. A Fusion AI Module, controlled by Fusion Parameters from each lab, provides M Fused Data (the number of Fused Data is independent of the number of input datasets).

Fused Data are processed by Analysis AI Modules driven by Analysis Parameters possibly coming from one or more labs. They produce Desired Results, which are then rendered specifically for each lab, either locally or in the cloud.
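The Figure 2 dataflow can be sketched as follows. All functions are toy stand-ins, chosen only to show that M (the number of Fused Data) is decoupled from N (the number of input datasets):

```python
# Toy sketch of the normalise -> fuse -> analyse pipeline (all functions illustrative).
def normalise(dataset):
    # Stand-in for a technology-specific normalisation AI Module.
    return sorted(dataset)

def fuse(datasets, fusion_params):
    # One Fused Data item per fusion parameter set: M depends on the parameters, not on N.
    return [[x for ds in datasets for x in ds if x >= p] for p in fusion_params]

def analyse(fused, threshold):
    # Stand-in for an Analysis AI Module producing one Desired Result per Fused Data.
    return [len([x for x in f if x > threshold]) for f in fused]

labs = [[3, 1, 2], [5, 4]]                      # N = 2 lab datasets
normalised = [normalise(d) for d in labs]
fused = fuse(normalised, fusion_params=[0, 4])  # M = 2 Fused Data
print(analyse(fused, threshold=2))              # [3, 2] Desired Results, one per Fused Data
```

Rendering, the last stage of Figure 2, would then present each Desired Result per lab, locally or in the cloud.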

The model of Figure 2 is applicable to various domains for scientific, industrial, and educational applications such as:

  1. Medical
  2. Anthropological
  3. Multi- and hyper-spectral Imaging
  4. Spectroscopy
  5. Chemistry
  6. Geology and Material Science
  7. Non-destructive testing
  8. Oceanography
  9. Astronomy

XRV-CIL promises to dramatically improve the way data is collaboratively acquired, processed, and shared among laboratories.

MPAI, the international, unaffiliated, non-profit organisation developing standards for AI-based data coding, might contribute to the areas of dataset normalisation, specification of input/output data and metadata of processing elements, and interaction protocols with rendered processing results. MPAI could also contribute to the identification of specific AI technologies to process datasets, e.g., the cell counting mentioned above.


MPAI publishes a new version of Context-based Audio Enhancement (MPAI-CAE) and a new standard for Data Qualifiers (MPAI-TFA)

Geneva, Switzerland – 21st August 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards – has concluded its 47th General Assembly (MPAI-47) by approving for publication the Context-based Audio Enhancement (MPAI-CAE) V2.2 standard and the new Data Types, Formats, and Attributes (MPAI-TFA) V1.0 standard for Community Comments. The new versions are released using the new full web-based publication method.

Technical Specification: Context-based Audio Enhancement (MPAI-CAE) V2.2 improves the user experience in different audio-related applications, such as entertainment, restoration, and communication, in a variety of contexts such as the home, the office, and the studio. V2.2 extends the capabilities of several data formats used across MPAI standards.

Technical Specification: Data Types, Formats, and Attributes (MPAI-TFA) V1.0 specifies Qualifiers – a Data Type containing Sub-Types, Formats, and Attributes – associated with “media” Data Types – currently Text, Speech, Audio, and Visual – that facilitate or enable the operation of an AI Module receiving a Data Type instance.

The capabilities of the standards will be presented online on 24 September at 16:00 UTC for MPAI-CAE V2.2 and on 27 August at 14:00 UTC for MPAI-TFA V1.0. To attend, please register at https://tinyurl.com/2wj8e4bn for MPAI-CAE V2.2 and at https://tinyurl.com/3p8j74st for MPAI-TFA V1.0.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): waiting for response to the Audio Six Degrees of Freedom (CAE-6DF).
  4. Connected Autonomous Vehicle (MPAI-CAV): developing the new MPAI-CAV Technologies (CAV-TEC) part of the standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): developing use cases and functional requirements for MPAI-CUI V2.0 supporting more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): working on a Call for Technologies.
  8. Governance of the MPAI Ecosystem (MPAI-GME): working on version 2.0 of the Specification.
  9. Human and Machine Communication (MPAI-HMC): developing reference software.
  10. Multimodal Conversation (MPAI-MMC): finalising V2.2 and developing Performance Assessment of some important AI Modules.
  11. MPAI Metaverse Model (MPAI-MMM): developing the new MPAI-MMM Technologies (MMM-TEC) part of the standard.
  12. Neural Network Watermarking (MPAI-NNW): developing reference software for enhanced applications.
  13. Object and Scene Description (MPAI-OSD): finalising V1.1 and developing reference software and conformance testing.
  14. Portable Avatar Format (MPAI-PAF): finalising V1.2 and developing reference software, conformance testing and new areas for digital humans.
  15. AI Module Profiles (MPAI-PRF): specifying which features an AI Workflow or an AI Module supports.
  16. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing a technical report on mitigation of data loss and cheating.
  17. Data Types, Formats, and Attributes (MPAI-TFA): extending the standard to data types used by other MPAI standards.
  18. XR Venues (MPAI-XRV): developing the standard enabling improved development and execution of Live Theatrical Performances.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


MPAI publishes Version 1.1 of the Human and Machine Communication standard

Geneva, Switzerland – 10 July 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 46th General Assembly (MPAI-46) by approving for publication the new Version 1.1 of the Human and Machine Communication standard.

Technical Specification: Human and Machine Communication (MPAI-HMC) V1.1 enables an Entity to hold a multimodal communication with another Entity possibly in a different Context. The standard is agnostic of the parties in a communication as an Entity can be a human in an audio-visual scene of a real space or a Machine in an Audio-Visual Scene of a Virtual Space. Humans and Machines can operate in different Contexts, e.g., language and culture. MPAI-HMC references a range of technologies specified in five MPAI Standards.

MPAI-HMC will be presented online on 22 July at 15 UTC (8 PDT, 11 EDT, 23 CST, 24 KST). To attend the presentation, register at https://us06web.zoom.us/meeting/register/tZEtde-orTwqE9x9sSkauN9CxKsLvbJrIeSF.

At previous meetings, MPAI published four draft standards for Community Comments: Context-based Audio Enhancement V2.2, Multimodal Conversation V2.2, Object and Scene Description V1.1, and Portable Avatar Format V1.2. Interested parties should check the mentioned links and make comments, as the deadline for submission has not yet been reached.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation for an extension to existing standard that includes support for more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  8. Human and Machine Communication (MPAI-HMC): developing reference software.
  9. Multimodal Conversation (MPAI-MMC): developing reference software and exploring new areas.
  10. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  11. Neural Network Watermarking (MPAI-NNW): developing reference software for enhanced applications.
  12. Portable Avatar Format (MPAI-PAF): developing reference software, conformance testing and new areas for digital humans.
  13. AI Module Profiles (MPAI-PRF): to specify which features an AI Module supports.
  14. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing a technical report on mitigation of data loss and cheating.
  15. XR Venues (MPAI-XRV): developing the standard enabling improved development and execution of Live Theatrical Performance.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter, and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


MPAI publishes new Standard, Reference Software, and Conformance Testing Specification

Geneva, Switzerland – 12 June 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 45th General Assembly (MPAI-45) by approving for publication the new AI Module Profiles, Neural Network Watermarking Reference Software, and the Multimodal Conversation Reference Software.

Technical Specification: AI Module Profiles (MPAI-PRF) V1.0 is an important addition to the MPAI architecture because it enables an AI Module to signal its capabilities in terms of input and output data and specific functionalities.

Reference Software Specification: Neural Network Watermarking (MPAI-NNW) V1.2 makes available to the community software implementing the functionalities of the Neural Network Watermarking Standard when implemented in an AI Framework and using limited capability Microcontroller Units.

Conformance Testing Specification: Multimodal Conversation (MPAI-MMC) V2.1 publishes methods and data sets to enable a developer or a user to ascertain the claims of an implementation to conform with the specification of the Conversation with Emotion, Multimodal Question Answering, and Unidirectional Speech Translation AI Workflows.

At its previous meeting, MPAI has published three Calls for Technologies on Six Degrees of Freedom Audio, Connected Autonomous Vehicle – Technologies, and MPAI Metaverse Model – Technologies. Interested parties should check the mentioned links for update as the deadline for submission has not been reached yet.

MPAI is happy to announce that the Institute of Electrical and Electronics Engineers has adopted the companion Connected Autonomous Vehicle – Architecture standard as IEEE standard 3307-2024.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation for an extension to existing standard that includes support for more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  8. Human and Machine Communication (MPAI-HMC): developing reference software.
  9. Multimodal Conversation (MPAI-MMC): developing reference software and exploring new areas.
  10. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  11. Neural Network Watermarking (MPAI-NNW): developing reference software for enhanced applications.
  12. Portable Avatar Format (MPAI-PAF): developing reference software, conformance testing and new areas for digital humans.
  13. AI Module Profiles (MPAI-PRF): to specify which features an AI Module supports.
  14. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing a technical report on mitigation of data loss and cheating.
  15. XR Venues (MPAI-XRV): developing the standard enabling improved development and execution of Live Theatrical Performance.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter, and follow MPAI on social media LinkedIn, Twitter, Facebook, Instagram, and YouTube.



A standard for autonomous vehicle componentisation

The 44th MPAI General Assembly has published three Calls for Technologies. The Connected Autonomous Vehicle – Technologies (CAV-TEC) Call requests parties having rights to technologies satisfying the CAV-TEC Use Cases and Functional Requirements and the CAV-TEC Framework Licence to respond to the Call, preferably using the CAV-TEC Template for Responses. An online presentation of this Call will be held on 2024/06/06 (Thursday) at 16 UTC. Please register if you wish to attend the presentation (recommended if you intend to respond).

MPAI kicked off the Connected Autonomous Vehicle (MPAI-CAV) project in the first days after its establishment. The project was particularly challenging, and only in September 2023 was MPAI ready to publish Version 1.0 of Technical Specification: Connected Autonomous Vehicle (MPAI-CAV) – Architecture (CAV-ARC). This specified a CAV as a system composed of Subsystems, for each of which functions, input/output data, and topology of components were specified. In turn, each Subsystem was broken down into Components whose functions and input/output data were specified. Each Subsystem was assumed to be implemented as an AI Workflow (AIW) made of Components implemented as AI Modules (AIM) executed in an AI Framework (AIF), as specified by the AI Framework (MPAI-AIF) standard.

This is illustrated in Figure 1, where a human staying outside of a CAV interacts with it via the Human-CAV Interaction Subsystem (HCI), requesting the Autonomous Motion Subsystem (AMS) to take the human to a destination. The AMS requests spatial information from the Environment Sensing Subsystem (ESS), decides a route (possibly after consulting with the HCI and the human), and starts the travel. The spatial information provided by the ESS is used to create a model of the external environment, possibly integrated with other environment models obtained from CAVs in range. Finally, the AMS has enough information to issue a command to the Motion Actuation Subsystem (MAS) to move the CAV to a desired place that can be close if the environment is “complex” or rather far if it is “simple”.

Figure 1 – The reference model of MPAI Connected Autonomous Vehicle
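The subsystem dialogue just described can be sketched as message passing between four objects. The class and method names are hypothetical stand-ins, not the CAV-ARC interfaces:

```python
# Hypothetical sketch of the HCI -> AMS -> ESS/MAS dialogue (names illustrative).
class ESS:
    def spatial_info(self):
        # Stand-in for the Environment Sensing Subsystem's spatial description.
        return {"obstacles": [], "complexity": "simple"}

class MAS:
    def move(self, target):
        # Stand-in for the Motion Actuation Subsystem executing a command.
        return f"moving to {target}"

class AMS:
    def __init__(self, ess, mas):
        self.ess, self.mas = ess, mas

    def travel(self, destination):
        env = self.ess.spatial_info()
        # A "simple" environment allows a farther target; a "complex" one a close waypoint.
        step = destination if env["complexity"] == "simple" else "nearby waypoint"
        return self.mas.move(step)

class HCI:
    def __init__(self, ams):
        self.ams = ams

    def request(self, destination):
        # The human's request enters via the HCI and is delegated to the AMS.
        return self.ams.travel(destination)

print(HCI(AMS(ESS(), MAS())).request("station"))  # moving to station
```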

CAV-ARC is a functional specification in the sense that it identifies subsystems and components and their functions, but not the precise functions of the data exchanged. This can be seen in Figure 2 where, say, the Road State data type is identified and its functions generally described but without a full specification of functional requirements.

Figure 2 – The Autonomous Motion Subsystem.

The purpose of the CAV-TEC Call is to identify and characterise all the data types required by the CAV reference model and stimulate specific technology proposals.

Because many components of the HCI are shared with other MPAI standards, in September 2023 MPAI published Multimodal Conversation (MPAI-MMC) V2.0, which includes the HCI specification, whose scope goes beyond the general CAV-ARC scope. Most of the CAV-ARC data types of the CAV-HCI reference model of Figure 3 are fully specified.

Figure 3 – The Human-CAV Interaction Subsystem

What is still missing – and is part of the Call – is the full specification of the messages exchanged by the HCI with the AMS and its peer HCIs in remote CAVs.

The Use Cases and Functional Requirements document attached to the Call contains an initial form of JSON syntax and semantics of all data types and requests comments on their appropriateness and proposals for data type formats and attributes.

It is interesting to note that MPAI assumes that a CAV generates a "private" metaverse used to plan its movements in the real environment. A CAV may request – and the requested CAV may decide to share – part of its private metaverse to facilitate understanding of the common real space(s) they traverse. MPAI investigations have shown that a CAV's private metaverse can be represented and shared using the same or slightly extended MPAI-MMM metaverse technologies.

This observation has been put into practice: part of Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Architecture (MMM-ARC) V1.1 is referenced by the CAV-TEC Call. It should be noted that the parallel MMM-TEC Call for Technologies seeks to enhance the current MMM-ARC specification by providing an initial form of JSON syntax and semantics of all data types and requesting comments on their appropriateness and proposals for data type formats and attributes.

Bringing the dream of autonomous vehicles to reality will be a major contribution to improving our lives and environment. Standards can greatly contribute to converting CAVs from siloed systems into systems made of standard components that are more reliable, explainable, and affordable.


Achieving metaverse interoperability

The 44th MPAI General Assembly has published three Calls for Technologies. One of them, MPAI Metaverse Model – Technologies (MMM-TEC), requests parties that have rights to technologies satisfying the MMM-TEC Use Cases and Functional Requirements and that accept the MMM-TEC Framework Licence to respond to the Call, preferably using the MMM-TEC Template for Responses. An online presentation of this Call will be held on 2024/05/31 at 15 UTC. Please register if you wish to attend the presentation (recommended if you intend to respond).

MPAI kicked off the MPAI Metaverse Model (MPAI-MMM) project some 30 months ago. The project has already produced two Technical Reports exploring the field, one on Functionalities and one on Functional Profiles. In September 2023, MPAI published Version 1.0 of Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Architecture (MMM-ARC). This specified the MMM Operation Model, composed of interacting Processes (specifically, Devices, Services, and Users representing humans) that exchange Items (data) and perform Actions on the Items. Two metaverse instances implementing the MMM Operation Model can interoperate – i.e., exchange and perform Actions on Items – if they satisfy the Functional Requirements specified by MMM-ARC, relying on Conversion Services to overcome possible technology incompatibilities.
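The Operation Model's core vocabulary – Processes performing Actions on Items – can be made concrete with a small sketch. This is an illustration only, with names chosen to mirror the text, not code from the MMM specification.

```python
# Illustrative model of the MMM Operation Model vocabulary (not MPAI code):
# Processes (Devices, Services, Users) perform Actions on Items (data).
from dataclasses import dataclass, field

@dataclass
class Item:
    """A piece of data exchanged between Processes in an M-Instance."""
    name: str
    payload: dict

@dataclass
class Process:
    """A Device, Service, or User acting in an M-Instance."""
    kind: str                                  # "Device" | "Service" | "User"
    log: list = field(default_factory=list)    # record of performed Actions

    def act(self, action: str, item: Item) -> str:
        self.log.append((action, item.name))
        return f"{self.kind} performed {action} on {item.name}"

# A User renders a Persona (the file name is a made-up example).
user = Process("User")
avatar = Item("Persona", {"model": "avatar.glb"})
print(user.act("Render", avatar))
```

Interoperability between two M-Instances then amounts to both sides agreeing on what an Action and an Item mean, which is exactly what the Functional Requirements pin down.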

This is illustrated in Figure 3, where three humans (green rectangles) stay outside an M-Instance and communicate with it via Devices located halfway between the real world (universe) and the virtual world (metaverse). human1 and human3 each have a Device connected to a User, while human2 has two Devices, each connected to its own User. The first User of human2 is rendered as two Personae (Avatars), and the User of human3 is not rendered (i.e., it is just a Process performing Actions in the M-Instance).

Figure 3 – MPAI-MMM Operation Model

The links in the Figure represent possible interactions between MMM Processes. While not represented here for simplicity, Processes in different Metaverse Instances (M-Instances) may also interact. While MMM-ARC provided an initial form of interoperability, the MMM-TEC Call for Technologies published on 15 May seeks to provide a stronger form of interoperability.

The Use Cases and Functional Requirements document attached to the Call contains an initial form of JSON syntax and semantics of Items and requests comments on their appropriateness and proposals for Item formats and attributes.
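The actual JSON syntax is defined in the document attached to the Call and is not reproduced here. As a purely hypothetical illustration of the idea, a JSON-serialised Item might pair its data with a Qualifier stating the technology choices, so a receiving Process (or a Conversion Service) knows how to handle it. Every field name below is invented for this sketch.

```python
# Hypothetical JSON-like Item (all field names are invented, not from MMM-TEC):
# the Qualifier communicates technology choices alongside the data itself.
import json

item = {
    "Header": {"ItemType": "Audio Object", "ItemID": "urn:example:item:0001"},
    "Qualifier": {"Format": "WAV", "SamplingRate": 48000},  # technology choices
    "Data": "…",  # payload, or a reference to it
}

text = json.dumps(item, indent=2)   # serialise for exchange between Processes
parsed = json.loads(text)           # the receiving side recovers the structure
print(parsed["Qualifier"]["Format"])
```

A Conversion Service could read the Qualifier of an incoming Item and transcode the Data when the receiving M-Instance made a different technology choice.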

It is interesting to note that MPAI assumes that a CAV generates a "private" metaverse used to plan its movements. A CAV may request – and the requested CAV may decide to share – part of its private metaverse to facilitate understanding of the common real space(s) they traverse. Investigations carried out by MPAI have shown that a CAV's private metaverse can be represented and shared using the same MPAI-MMM metaverse technologies.

This is the link to the next online presentation on the third MPAI Call for Technologies on the 6th of June at 16 UTC.


An MPAI standard for new dimensions of experience

What are the dimensions targeted by the new MPAI standard? Enabling humans to experience virtual replicas of a real-world audio scene from different perspectives while moving within it and orienting their heads. The standard will be called Six Degrees of Freedom Audio, and its acronym will be CAE-6DF.

As a rule, before developing a new standard, MPAI publishes a Call for Technologies that describes the purpose of the Call and what a respondent should do to have its submission accepted for consideration. The Call is complemented by two documents, one specifying the functional requirements and the other the commercial requirements that the planned standard should satisfy.

The 44th MPAI General Assembly has published three Calls for Technologies, one of them for Six Degrees of Freedom Audio. The standard will be developed by the Context-based Audio Enhancement Development Committee (CAE-DC). An online presentation of this Call will be held on 2024/05/28 (Tuesday) at 16 UTC. If you wish to attend the presentation (recommended if you intend to respond), please register.

State-of-the-art VR headsets provide high-quality realistic visual content by tracking both the user’s orientation and position in 3D space. This capability opens new opportunities for enhancing the degree of immersion in VR experiences. VR games have become increasingly immersive over the years based on these developments.

However, despite the success of synthetic virtual environments such as 3D first-person games, those that feature content dynamically captured from the real world are yet to be widely deployed. Recent developments, such as dynamic Neural Radiance Fields (NeRFs) and 4D Gaussian splatting, promise to give users the ability to be fully immersed in visual scenes populated by both static and dynamic entities.

Capturing audiovisual scenes with both static and dynamic entities promises a full immersion experience, but visual immersion alone is not sufficient without an equally convincing auditory immersion. CAE-6DF should enable users to experience an immersive theatre production through a VR headset, for example walking around actors and getting closer to different conversations, or a concert where a user can choose different seats to experience the performance with a 360° video associated with those viewing positions. Additionally, CAE-6DF should enable experiencing the acoustics of the concert hall from different perspectives.

The CAE-6DF Call seeks innovative technologies that enable and support such experiences, specifically technologies that efficiently represent content in scene-based or object-based formats (or a mixture of the two), process it with low latency, and respond quickly to user movements. It should be possible to render the audio scene over loudspeakers or headphones. These technologies should also consider audio-visual cross-modal effects to present a level of auditory immersion that complements the visual immersion provided by state-of-the-art volumetric environments.

Figure 1 depicts a reference model of the planned CAE-6DF standard. In the figure, a term with a lowercase initial letter denotes an entity of the real space, while a capital initial letter denotes an entity of a Virtual Space.

Figure 1 – real spaces and Virtual Spaces in CAE-6DF

On the left-hand side there are real audio spaces. In the middle there is a Virtual Space generated by a computing platform, which hosts digital representations of acoustical scenes and synthetic Audio Objects generated by the platform. On the right-hand side, rendering of arbitrary user-selected Points of View of the Audio Scene is performed in the real space in a perceptually veridical fashion.

The Use Cases and Functional Requirements document attached to the Call considers four use cases:

  1. Immersive Concert Experience (Music plus Video).
  2. Immersive Radio Drama (Speech plus Foley/Effects).
  3. Virtual lecture (Audio plus Video).
  4. Immersive Opera/Ballet/Dance/Theatre experience (Music, Drama with 360° Video/6DoF Visual).

From these, a set of Functional Requirements is derived.

  1. Audio experience and impact of visual conditions on the Audio experience:
    1. Audio-Visual Contract, i.e. alignment of audio scenes with visual scenes.
    2. Effects of locomotion on human audio-visual perception.
    3. Orientation response, i.e., turning toward a sound source of interest.
    4. Distance perception where visual and auditory experiences affect each other.
  2. Content profiles:
    1. Scene-based: the captured Audio Scene, for example using Ambisonics, is accurately reconstructed with a high degree of correspondence to the audio scene’s acoustic ambient characteristics.
    2. Object-based: the Audio Scene comprises Audio Objects and associated metadata to allow synthesising a perceptually veridical, but not necessarily physically accurate, representation of the captured Audio Scene.
    3. Mixed: a combination of scene-based and object-based profiles where Audio Objects can be overlaid or mixed with Scene-based Content.
  3. Rendering modalities:
    1. Loudspeaker-based, i.e., the content is rendered through at least two loudspeakers.
    2. Headphone-based, i.e., the content is rendered through headphones.
  4. Characteristics of rendering space when content is rendered through loudspeakers:
    1. Shape and dimensions: Not larger than the captured space.
    2. Acoustic ambient characteristics:
      1. Early decay time (EDT) lower than the captured space.
      2. Frequency mode density lower than the captured space.
      3. Echo density lower than the captured space.
      4. Reverberation time (T60) lower than the captured space.
      5. Energy decay curve characteristics same or lower than the captured space.
      6. Background noise less than 50dB(A) SPL.
  5. Characteristics of the rendering space when content is rendered through headphones that block its ambient acoustics:
    1. Shape and dimensions: Not larger than the captured space.
    2. Acoustic ambient characteristics: No constraints on the ambient characteristics as defined in point 4.2.
  6. User movement in the rendering space:
    1. May be the result of actual locomotion/orientation of the User as tracked by sensors.
    2. May be the result of virtual locomotion/orientation as actuated by controlling devices.
    3. The maximum latency of the audio system's response to user movement should be 20 ms or less (some applications may tolerate a higher latency).
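The loudspeaker rendering-space constraints in point 4 can be summarised as a simple check: the acoustic parameters of the rendering space must all be lower than those of the captured space, and the background noise must stay under 50 dB(A) SPL. The sketch below encodes that check; the data structure and all numeric values are invented for illustration and are not part of the Call.

```python
# Hypothetical checker for the loudspeaker rendering-space constraints of
# point 4 above. Field names and all numbers are illustrative, not normative.

# Invented acoustic parameters of a captured space (EDT and T60 in seconds,
# the density values as arbitrary relative measures).
CAPTURED = {"edt": 1.8, "mode_density": 0.9, "echo_density": 0.8, "t60": 2.1}

def rendering_space_ok(space: dict, captured: dict = CAPTURED) -> bool:
    # Points 4.2.1 - 4.2.4: each parameter must be lower than in the captured space.
    lower_than_captured = all(
        space[k] < captured[k]
        for k in ("edt", "mode_density", "echo_density", "t60"))
    # Point 4.2.6: background noise below 50 dB(A) SPL.
    quiet_enough = space["noise_dba"] < 50.0
    return lower_than_captured and quiet_enough

# A small, dry, quiet listening room (invented values) passes the check.
hall = {"edt": 0.6, "mode_density": 0.5, "echo_density": 0.4, "t60": 0.9,
        "noise_dba": 38.0}
print(rendering_space_ok(hall))
```

The shape-and-dimensions constraint (point 4.1) is omitted here because it compares geometry rather than scalar acoustic parameters.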

A comment about the "Commercial Requirements" mentioned above: the name is a misnomer because MPAI is not "selling" anything – indeed, MPAI standards are freely downloadable from the MPAI website. The formal name used by MPAI is Framework Licence: a document containing a set of guidelines that a submitter of a proposal commits to adopt when the standard is approved and a licence for the use of patented items is issued. The CAE-6DF Framework Licence is available.

Finally, to facilitate the work of those submitting a response, MPAI is providing a document called Template for Responses.

CAE-6DF will join the growing list of MPAI standards. Eleven standards have already been published – on application environment, audio, connected autonomous vehicles, company performance prediction, ecosystem governance, human and machine communication, object and scene description, and portable avatar format – and a new one on AI Module Profiles is about to be published. Reference software and conformance testing specifications are being published. The standards are revised and extended, and new versions are published when necessary. New standards are under development, such as online gaming, AI for health, and XR venues, and several projects in new areas, such as AI-based video coding, are being investigated.