Moving Picture, Audio and Data Coding
by Artificial Intelligence


Achieving metaverse interoperability

The 44th MPAI General Assembly has published three Calls for Technologies. The MPAI Metaverse Model – Technologies Call requests parties having rights to technologies satisfying the MMM-TEC Use Cases and Functional Requirements and the MMM-TEC Framework Licence to respond to the Call, preferably using the MMM-TEC Template for Responses. An online presentation of this Call will be held on 2024/05/31 (Friday) at 15 UTC. Please register if you wish to attend the presentation (recommended if you intend to respond).

MPAI kicked off the MPAI Metaverse Model (MPAI-MMM) project some 30 months ago. The project has already produced two Technical Reports exploring the field, one on Functionalities and one on Functional Profiles. In September 2023, MPAI published Version 1.0 of Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Architecture (MMM-ARC). This specified the MMM Operation Model, composed of interacting Processes (specifically, Devices, Services, and Users representing humans) that exchange Items (data) and perform Actions on the Items. Two metaverse instances implementing the MMM Operation Model can interoperate – i.e., exchange and perform Actions on Items – if they satisfy the MMM-ARC-specified Functional Requirements, enabling Conversion Services to overcome possible technology incompatibility.
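Purely to illustrate the Operation Model just described, and not as part of any MPAI specification, the following Python sketch pictures Processes performing Actions on Items, with a hypothetical Conversion Service bridging two M-Instances that use different native formats. All class, method, and format names are illustrative assumptions.

# Minimal, illustrative sketch of the MMM Operation Model described above.
# Class, method, and format names are assumptions, not MMM-TEC definitions.
from dataclasses import dataclass

@dataclass
class Item:
    """A piece of data exchanged between Processes."""
    kind: str            # e.g. "Portable Avatar", "Audio Object"
    payload: dict        # instance-specific content
    native_format: str   # e.g. "format-A" (hypothetical)

@dataclass
class Process:
    """A Device, Service, or User operating in (or at the border of) an M-Instance."""
    name: str
    role: str            # "Device" | "Service" | "User"

    def act(self, action: str, item: Item) -> Item:
        # A real Process would perform MMM Actions such as Author, Transmit, Render.
        print(f"{self.role} {self.name}: {action} {item.kind}")
        return item

class ConversionService(Process):
    """Bridges technology incompatibility between two M-Instances."""
    def convert(self, item: Item, target_format: str) -> Item:
        converted = Item(item.kind, dict(item.payload), target_format)
        print(f"Converted {item.kind}: {item.native_format} -> {target_format}")
        return converted

# Usage: a User in M-Instance A shares an Item with a Service in M-Instance B.
user = Process("human2-user1", "User")
service = Process("render-service", "Service")
bridge = ConversionService("bridge", "Service")
item = user.act("Author", Item("Portable Avatar", {"text": "hello"}, "format-A"))
service.act("Render", bridge.convert(item, "format-B"))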

This is illustrated in Figure 3, where three humans (green rectangles) stay outside an MMM instance and communicate with it via Devices located half-way between the real world (universe) and the virtual world (metaverse). human1 and human3 each have a Device connected to one User, while human2 has two Devices, each connected to its own User. The first User of human2 is rendered as two Personae (Avatars), and the User of human3 is not rendered (i.e., it is just a Process performing Actions in the MMM).

Figure 3 – MPAI-MMM Operation Model

The links in the Figure represent possible interactions between MMM Processes. While not represented here for simplicity, Processes in different Metaverse Instances (M-Instances) may also interact. While MMM-ARC provided an initial form of interoperability, the MMM-TEC Call for Technologies published on 15 May seeks to provide a stronger form of interoperability.

The Use Cases and Functional Requirements document attached to the Call contains an initial form of JSON syntax and semantics of Items, and requests comments on their appropriateness as well as proposals for Item formats and attributes.

It is interesting to note that MPAI assumes that a CAV generates a “private” metaverse used to plan its driving decisions. A CAV may request, and another CAV may decide, to share part of their private metaverses to facilitate understanding of the common real space(s) they traverse. Investigations carried out by MPAI have shown that a CAV’s private metaverse can be represented and shared using the same MPAI-MMM metaverse technologies.

This is the link to the next online presentation of the third MPAI Call for Technologies, to be held on the 6th of June at 16 UTC.


An MPAI standard for new dimensions of experience

What dimensions does the new MPAI standard target? It will enable humans to experience virtual replicas of a real-world audio scene from different perspectives while moving within it and orienting their heads. The standard will be called Six Degrees of Freedom Audio, with the acronym CAE-6DF.

As a rule, before developing a new standard, MPAI publishes a Call for Technologies that describes the purpose of the Call and what a respondent should do to have their submission accepted for consideration. The Call is complemented by two documents – one specifying the functional requirements and one the commercial requirements that the planned standard should satisfy.

The 44th MPAI General Assembly has published three Calls for Technologies, one of which is for Six Degrees of Freedom Audio. The standard will be developed by the Context-based Audio Enhancement Development Committee (CAE-DC). An online presentation of this Call will be held on 2024/05/28 (Tuesday) at 16 UTC. If you wish to attend the presentation (recommended if you intend to respond), please register.

State-of-the-art VR headsets provide high-quality realistic visual content by tracking both the user’s orientation and position in 3D space. This capability opens new opportunities for enhancing the degree of immersion in VR experiences. VR games have become increasingly immersive over the years based on these developments.

However, despite the success of synthetic virtual environments such as 3D first-person games, environments that feature content dynamically captured from the real world are yet to be widely deployed. Recent developments, such as dynamic Neural Radiance Fields (NeRFs) and 4D Gaussian splatting, promise to give users the ability to be fully immersed in visual scenes populated by both static and dynamic entities.

Capturing audiovisual scenes with both static and dynamic entities promises a full immersion experience, but visual immersion alone is not sufficient without an equally convincing auditory immersion. CAE-6DF should enable users to experience an immersive theatre production through a VR headset, for example walking around actors and getting closer to different conversations, or a concert where a user can choose different seats to experience the performance with a 360° video associated with those viewing positions. Additionally, CAE-6DF should enable experiencing the acoustics of the concert hall from different perspectives.

The CAE-6DF Call is seeking innovative technologies that enable and support such experiences. Specifically, it looks for technologies to efficiently represent content in scene-based or object-based formats, or a mixture of these, to process it with low latency, and to provide high responsiveness to user movements. It should be possible to render the audio scene over loudspeakers or headphones. These technologies should also consider audio-visual cross-modal effects to provide a level of auditory immersion that complements the visual immersion offered by state-of-the-art volumetric environments.

Figure 1 depicts a reference model of the planned CAE-6DF standard, where a lowercase initial letter of a term indicates an entity of the real space and a capital initial letter indicates an entity of a Virtual Space.

Figure 1 – real spaces and Virtual Spaces in CAE-6DF

On the left-hand side there are real audio spaces. In the middle there is a Virtual Space, generated by a computing platform, which hosts digital representations of acoustical scenes and synthetic Audio Objects generated by the platform. Rendering of arbitrary, user-selected Points of View of the Audio Scene is performed in the real space on the right-hand side in a perceptually veridical fashion.

The Use Cases and Functional Requirements document attached to the Call considers four use cases:

  1. Immersive Concert Experience (Music plus Video).
  2. Immersive Radio Drama (Speech plus Foley/Effects).
  3. Virtual lecture (Audio plus Video).
  4. Immersive Opera/Ballet/Dance/Theatre experience (Music, Drama with 360° Video/6DoF Visual).

From these, a set of Functional Requirements is derived.

  1. Audio experience and impact of visual conditions on the Audio experience:
    1. Audio-Visual Contract, i.e. alignment of audio scenes with visual scenes.
    2. Effects of locomotion on human audio-visual perception.
    3. Orientation response, i.e., turning toward a sound source of interest.
    4. Distance perception where visual and auditory experiences affect each other.
  2. Content profiles:
    1. Scene-based: the captured Audio Scene, for example using Ambisonics, is accurately reconstructed with a high degree of correspondence to the audio scene’s acoustic ambient characteristics.
    2. Object-based: the Audio Scene comprises Audio Objects and associated metadata to allow synthesising a perceptually veridical, but not necessarily physically accurate, representation of the captured Audio Scene.
    3. Mixed: a combination of scene-based and object-based profiles where Audio Objects can be overlaid or mixed with Scene-based Content.
  3. Rendering modalities:
    1. Loudspeaker-based, i.e., the content is rendered through at least two loudspeakers.
    2. Headphone-based, i.e., the content is rendered through headphones.
  4. Characteristics of rendering space when content is rendered through loudspeakers (see the sketch after this list):
    1. Shape and dimensions: Not larger than the captured space.
    2. Acoustic ambient characteristics:
      1. Early decay time (EDT) lower than the captured space.
      2. Frequency mode density lower than the captured space.
      3. Echo density lower than the captured space.
      4. Reverberation time (T60) lower than the captured space.
      5. Energy decay curve characteristics same or lower than the captured space.
      6. Background noise less than 50 dB(A) SPL.
  5. If the headphones block the ambient acoustical characteristics of the rendering space, the rendering space should have the following characteristics:
    1. Shape and dimensions: Not larger than the captured space.
    2. Acoustic ambient characteristics: No constraints on the ambient characteristics as defined in point 4.2.
  6. User movement in the rendering space:
    1. May be the result of actual locomotion/orientation of the User as tracked by sensors.
    2. May be the result of virtual locomotion/orientation as actuated by controlling devices.
    3. The maximum responsive latency of the audio system to user movement should be 20 ms or less (some applications may tolerate higher latency).
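To make requirements 4 and 6 above more concrete, here is a minimal Python sketch that checks whether a candidate loudspeaker rendering space and a motion-to-sound latency satisfy them. All parameter names and example values are illustrative assumptions, not figures taken from the Call.

# Illustrative check of the loudspeaker rendering-space constraints (requirement 4)
# and the latency constraint (requirement 6.3). Values are made up for the example.

CAPTURED = {"volume_m3": 12000.0, "edt_s": 1.8, "mode_density": 0.9,
            "echo_density": 0.7, "t60_s": 2.1}

RENDERING = {"volume_m3": 80.0, "edt_s": 0.4, "mode_density": 0.5,
             "echo_density": 0.3, "t60_s": 0.6, "background_noise_dba": 38.0}

def rendering_space_ok(captured: dict, rendering: dict) -> bool:
    """True if the rendering space satisfies the 'not larger / lower than captured' rules."""
    return all([
        rendering["volume_m3"] <= captured["volume_m3"],       # 4.1 shape and dimensions
        rendering["edt_s"] < captured["edt_s"],                 # 4.2.1 early decay time
        rendering["mode_density"] < captured["mode_density"],   # 4.2.2 mode density
        rendering["echo_density"] < captured["echo_density"],   # 4.2.3 echo density
        rendering["t60_s"] < captured["t60_s"],                 # 4.2.4 reverberation time
        rendering["background_noise_dba"] < 50.0,               # 4.2.6 background noise
        # (4.2.5 energy decay curve omitted for brevity)
    ])

def latency_ok(motion_to_sound_ms: float, budget_ms: float = 20.0) -> bool:
    """Requirement 6.3: the audio should respond to user movement within about 20 ms."""
    return motion_to_sound_ms <= budget_ms

print(rendering_space_ok(CAPTURED, RENDERING))  # True for these example values
print(latency_ok(14.5))                         # True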

A comment about the mentioned “Commercial Requirements”: this is a misnomer because MPAI is not “selling” anything; even the MPAI standards are freely downloadable from the MPAI web site. Indeed, the formal name used by MPAI is Framework Licence, a document that includes a set of guidelines that a submitter of a proposal commits to adopt when the standard is approved and a licence for the use of patented items is issued. The CAE-6DF Framework Licence is available.

Finally, to facilitate the work of those submitting a response, MPAI is providing a document called Template for Responses.

CAE-6DF will join the growing list of MPAI standards: eleven standards have already been published – on application environment, audio, connected autonomous vehicle, company performance prediction, ecosystem governance, human and machine communication, object and scene description, and portable avatar format – and a new one on AI Module Profiles is about to be published. Reference software and conformance testing specifications are being published. The standards are revised and extended, and new versions are published when necessary. New standards are under development, such as online gaming, AI for health, and XR venues, and several projects in new areas, such as AI-based video coding, are being investigated.

 


More members for the MPAI Standards Family

The latest – 15 May 2024 – MPAI General Assembly (MPAI-44) proved, if ever there was a need, that MPAI produces not only standards, but also promising new projects. MPAI-44 launched three new projects on six degrees of freedom audio (CAE-6DF), technologies for connected autonomous vehicle components (CAV-TEC), and technologies for the MPAI Metaverse Model (MMM-TEC).

The CAE-6DF Call is seeking innovative technologies that enable users to walk in a virtual space representing a remote real space and enjoy the same experience as if they were in the remote space. Visit the CAE-6DF page and register to attend the online presentation of the Call on the 28th of May, communicate your intention to respond to the Call by the 4th of June, and submit your response by the 16th of September.

The CAV-TEC Call requests technologies that build on the Reference Architecture of the Connected Autonomous Vehicle (CAV-ARC) to achieve a componentisation of connected autonomous vehicles. Visit the CAV-TEC page and register to attend the online presentation of the Call on the 6th of June, communicate your intention to respond to the Call by the 14th of June, and submit your response by the 5th of July.

The MMM-TEC Call requests technologies that build on the Reference Architecture of the MPAI Metaverse Model (MMM-ARC) and enable a metaverse instance to interoperate with other metaverse instances. Visit the MMM-TEC page and register to attend the online presentation of the Call on the 31st of May, communicate your intention to respond to the Call by the 7th of June, and submit your response by the 6th of July.

MPAI-44 has brought more results.

Four existing standards have been republished for Community Comments with significant new material that extends their current functionalities and supports the needs of the three new projects:

  • Object and Scene Description (MPAI-OSD) V1.1 adds a new Use Case for automatic audio-visual analysis of television programs and new functionalities for Visual and Audio-Visual Objects and Scenes.
  • The Context-based Audio Enhancement (MPAI-CAE) Use Cases (CAE-USC) V2.2 standard supports new functionalities for Audio Object and Scene Descriptors.
  • The Multimodal Conversation (MPAI-MMC) V2.2 standard introduces new AI Modules and new Data Formats to support the new MPAI-OSD Television Media Analysis Use Case.
  • The Portable Avatar Format (MPAI-PAF) V1.2 standard extends the specification of the Portable Avatar to support new functionality requested by the CAV-TEC and MMM-TEC Calls.

Everybody is welcome to review the draft standards (new versions of existing standards) and send comments to the MPAI Secretariat by 23:59 UTC of the relevant day. Comments will be considered when the standards are published in final form.

The standards mentioned above cover a significant share of the MPAI portfolio, but your navigation need not stop here. If you wish to delve into the other MPAI standards, you can go to their web pages, where you can read overviews and find links to relevant material. Each MPAI standard page contains a web version of the standard and other support material such as PowerPoint presentations and video recordings.


MPAI publishes a record three Calls for Technologies, four new Standards for Community Comments and one Reference Software Specification

Geneva, Switzerland – 15 May 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 44th General Assembly (MPAI-44) with an unprecedented display of productivity.

Three Calls for Technologies:

  • Context-based Audio Enhancement (MPAI-CAE): Six Degrees of Freedom (CAE-6DF) to develop an ambitious standard that will enable users to have the spatial experience of a remote audio environment by “walking into it”. Register for the online presentation on 2024/05/28 16:00 UTC.
  • Connected Autonomous Vehicle (MPAI-CAV) – Technologies to extend the existing MPAI-CAV – Architecture standard by supporting reference to specific technologies that are the target of the Call. Register for the online presentation on 2024/06/06 16:00 UTC.
  • MPAI Metaverse Model (MPAI-MMM) – Technologies to extend the existing MPAI-MMM – Architecture standard by supporting reference to specific technologies that are the target of the Call. Register for the online presentation on 2024/05/31 16:00 UTC.

Four new Standards published with a request for Community Comments:

  • Object and Scene Description (MPAI-OSD) V1.1 includes a new Use Case for automatic audio-visual analysis of television programs and new functionalities for Visual and Audio-Visual Objects and Scenes.
  • Context-based Audio Enhancement (MPAI-CAE) Use Cases (CAE-USC) V2.2 supports new functionalities in Audio Object and Audio Scene Descriptors.
  • Multimodal Conversation (MPAI-MMC) V2.2 introduces new AI Modules and new Data Formats to support the new MPAI-OSD Television Media Analysis Use Case.
  • Portable Avatar Format (MPAI-PAF) V1.2 extends the specification of the Portable Avatar to support new functionality requested by the MPAI-CAV and MPAI-MMM Calls.

One Reference Software Specification:

MPAI’s scope of activities is quite wide, as shown by its 11 already published standards, which include 18 Use Cases, some 75 AI Modules, and 65 Data Types shared across different standards. MPAI has succeeded in developing a common layer of technologies supporting a variety of application domains.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models that process health data, using federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation of an extension to the existing standard that supports more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  8. Human and Machine Communication (MPAI-HMC): developing reference software.
  9. Multimodal Conversation (MPAI-MMC): developing reference software and conformance testing and exploring new areas.
  10. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  11. Neural Network Watermarking (MPAI-NNW): developing reference software for enhanced applications.
  12. Portable Avatar Format (MPAI-PAF): developing reference software, conformance testing and new areas for digital humans.
  13. AI Module Profiles (MPAI-PRF): to specify which features an AI Module supports.
  14. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing technical report on mitigation of data loss and cheating.
  15. XR Venues (MPAI-XRV): developing the standard enabling improved development and execution of Live Theatrical Performance.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


A new brick for the MPAI architecture

At its 43rd General Assembly of 17 April 2024, MPAI approved the publication of the draft AI Module Profiles (MPAI-PRF) standard with a request for Community Comments. The scope of MPAI-PRF is to provide a solution to a problem that MPAI encounters more and more often: the AI Modules (AIM) it specifies in different standards have the same basic functionality but may have different features.

First, a few words about AIMs. MPAI develops application-oriented standards for applications that MPAI calls AI Workflows (AIW), which can be broken down into components called AI Modules (AIM). AIWs are specified by what they do (functions), by their input and output data, and by how their AIMs are interconnected (Topology). Similarly, AIMs are specified by what they do (functions) and by their input and output data. AIMs are Composite if they include interconnected AIMs, or Basic if their internal structure is unknown.

Let’s look at the Natural Language Understanding (MMC-NLU) AIM of Figure 1.

Figure 1 – The Natural Language Understanding (MMC-NLU) AIM

The NLU AIM’s basic function is to receive a Text Object – either directly from a keyboard or through an Automatic Speech Recognition (ASR) AIM, in which case it is called Recognised Text – and to produce a Text Object (called Refined Text when the input comes from an ASR AIM) and the Meaning of the text. The NLU AIM, however, can also receive “spatial information” about the Audio and/or Visual Objects in terms of their position and orientation in the Scene that the machine is processing. Obviously, this additional information helps the machine produce a response that is more attuned to the context.

This case shows that there is a need to unambiguously name these two functionally equivalent but very different instances of the same NLU AIM.

The notion of Profiles, originally developed by MPEG in the summer of 1992 for the MPEG-2 standard and then universally adopted in the media domain, comes in handy. An AIM Profile is a label that uniquely identifies the set of AIM Attributes of an AIM instance, where an Attribute is “input data, output data, or functionality that uniquely characterises an AIM instance”. In the case of the NLU AIM, the Attributes are Text Object (TXO), Recognised Text (TXR), Object Instance Identifier (OII), Audio-Visual Scene Geometry (AVG), and Meaning (TXD, Text Descriptors).

The draft AI Module Profiles (MPAI-PRF) standard offers two ways to signal the Attributes of an AIM: by listing those that are supported or by listing those that are not supported. Either can be used; typically the first (the list of supported Attributes) is used when it is shorter than the second (the list of unsupported Attributes), and vice versa. The Profile of an NLU AIM instance that does not handle spatial information can thus be labelled in two ways:

List of supported Attributes MMC-NLU-V2.1(ALL-AVG-OII)
List of unsupported Attributes MMC-NLU-V2.1(NUL+TXO+TXR)

V2.1 refers to the version of the Multimodal Conversation (MPAI-MMC) standard that specifies the NLU AIM. ALL signals that the Profile is expressed in “negative logic”, in the sense that the removed Attributes are listed: AVG (Audio-Visual Scene Geometry) and OII (Object Instance Identifier). NUL signals that the Profile is expressed in “positive logic”, in the sense that the added Attributes are listed: TXO (Text Object from a keyboard) and TXR (Recognised Text).
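Purely as an illustration of how such labels could be generated, the Python sketch below builds both forms and picks the shorter one, as suggested above; the Attribute codes come from the text, while the function and variable names are our own assumptions, not MPAI-PRF definitions.

# Illustrative construction of an AIM Profile label in the style of the examples above.
NLU_ATTRIBUTES = {"TXO", "TXR", "OII", "AVG", "TXD"}  # codes listed in the text

def profile_label(aim: str, version: str, all_attrs: set, supported: set) -> str:
    unsupported = all_attrs - supported
    negative = f"{aim}-{version}(ALL" + "".join(f"-{a}" for a in sorted(unsupported)) + ")"
    positive = f"{aim}-{version}(NUL" + "".join(f"+{a}" for a in sorted(supported)) + ")"
    # Use whichever expression is shorter, as suggested in the text above.
    return negative if len(negative) <= len(positive) else positive

# An NLU instance that does not handle spatial information (no AVG, no OII):
print(profile_label("MMC-NLU", "V2.1", NLU_ATTRIBUTES, {"TXO", "TXR", "TXD"}))
# -> MMC-NLU-V2.1(ALL-AVG-OII)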

The Profile story does not end here. Attributes are not always sufficient to identify the capabilities of an AIM instance. Let’s take the Entity Dialogue Processing (MMC-EDP) AIM of Figure 2, an AIM that uses different information sources derived from the information issued by an Entity – typically a human, but potentially also a machine – with which this machine is communicating.

Figure 2 – The Entity Dialogue Processing (MMC-EDP) AIM

The input data are Text Object and Meaning (the output of the NLU AIM), Audio or Visual Instance ID and Scene Geometry (already used by the NLU AIM), and Personal Status, a data type that represents the internal state of the Entity in terms of three Factors (Cognitive State, Emotion, and Social Attitude), each expressed in four Modalities (Text, Speech, Face, and Body).
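As a purely illustrative sketch of this structure (field names are assumptions, not the normative MPAI-MMC JSON syntax), a Personal Status can be pictured as a table of three Factors by four Modalities:

# Hypothetical sketch of the Personal Status structure described above.
from dataclasses import dataclass, field
from typing import Dict

FACTORS = ("CognitiveState", "Emotion", "SocialAttitude")
MODALITIES = ("Text", "Speech", "Face", "Body")

@dataclass
class PersonalStatus:
    # e.g. status["Emotion"]["Face"] = "cheerful"
    status: Dict[str, Dict[str, str]] = field(
        default_factory=lambda: {f: {} for f in FACTORS})

    def set(self, factor: str, modality: str, value: str) -> None:
        assert factor in FACTORS and modality in MODALITIES
        self.status[factor][modality] = value

ps = PersonalStatus()
ps.set("Emotion", "Speech", "calm")
ps.set("SocialAttitude", "Face", "attentive")
print(ps.status)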

The output of the EDP AIM is Text that can be fed to a regular Text-To-Speech AIM, but it can additionally include the machine’s Personal Status – obviously simulated by the machine, but of great value for the Personal Status Display (PAF-PSD) AIM depicted in Figure 3.

Figure 3 – The Personal Status Display AIM

This AIM uses the machine’s Text and Personal Status (IPS) to synthesise the machine as a speaking avatar using an Avatar Model (AVM). An instance of the PSD AIM may support the Personal Status, but only its Speech (PS-Speech, PSS) and Face (PS-Face, PSF) Factors, as in the case of a PSD AIM designed for sign language. This is formally represented by the following two expressions:

List of supported Attributes PAF-PSD-V1.1(ALL@IPS#PSS#PSF)
List of unsupported Attributes PAF-PSD-V1.1(NUL+TXO+AVM@IPS#PSF#PSG)

@IPS#PSS#PSF in the first expression indicates that the PSD AIM supports all Attributes, but the Personal Status only includes the Speech and Face Factors. In the second expression, +TXO+AVM indicates that the PSD AIM supports Text Object and Avatar Model, and @IPS#PSF#PSG indicates that the Personal Status Factors supported are Face (PSF) and Gesture (PSG).
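To show how the qualifier notation used above could be read mechanically, here is a minimal parsing sketch. It assumes only the grammar visible in the examples (a base of ALL or NUL, Attributes prefixed by + or -, and @Attribute#Factor qualifiers); the helper names are ours, not MPAI-PRF definitions.

# Illustrative parser for Profile labels such as "PAF-PSD-V1.1(ALL@IPS#PSS#PSF)".
import re

LABEL_RE = re.compile(r"^(?P<aim>[A-Z]+-[A-Z]+)-(?P<version>V[\d.]+)\((?P<body>.+)\)$")

def parse_profile(label: str) -> dict:
    m = LABEL_RE.match(label)
    if not m:
        raise ValueError(f"not a profile label: {label}")
    # Tokens: the base (ALL/NUL), +/- Attributes, and @Attribute#Factor qualifiers.
    tokens = re.findall(r"[+\-@][A-Z]+(?:#[A-Z]+)*|^[A-Z]+", m.group("body"))
    qualified = {t[1:].split("#")[0]: t[1:].split("#")[1:]
                 for t in tokens[1:] if t[0] == "@"}
    return {"aim": m.group("aim"), "version": m.group("version"),
            "base": tokens[0],
            "added": [t[1:] for t in tokens[1:] if t[0] == "+"],
            "removed": [t[1:] for t in tokens[1:] if t[0] == "-"],
            "qualified": qualified}

print(parse_profile("PAF-PSD-V1.1(ALL@IPS#PSS#PSF)"))
# {'aim': 'PAF-PSD', 'version': 'V1.1', 'base': 'ALL', 'added': [], 'removed': [],
#  'qualified': {'IPS': ['PSS', 'PSF']}}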

AI Module Profiles is another element of the AI application infrastructure that MPAI is building with its standards. Read the AI Module Profiles standard for an in-depth understanding. Anybody can submit comments to the draft by sending an email to the MPAI secretariat by 2024/05/08T23:59. MPAI will consider each comment received for possible inclusion in the final version of MPAI-PRF.


MPAI publishes the draft AI Module Profiles Standard with a request for Community Comments

Geneva, Switzerland – 17 April 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 43rd General Assembly (MPAI-43) approving the publication of the draft AI Module Profiles (MPAI-PRF) V1.0 Standard with a request for Community Comments.

AI Module Profiles (MPAI-PRF) V1.0 enables the signalling of AI Module Attributes – input data, output data, or functionality – that uniquely characterise an AIM instance. An AIM Profile is thus a label that uniquely identifies the set of AIM Attributes that are either supported or not supported by that AIM instance. Anybody can submit comments to the draft by sending an email to the MPAI secretariat by 2024/05/08T23:59.

MPAI also informs that the code, the presentation file, and the video recording of the V1.1 version of the Neural Network Watermarking (MPAI-NNW) Reference Software Specification, presented on the 16th of April, are now publicly available. The software enables a user to make queries that include a text and an image and obtain a watermarked vocal response that enables the issuer of the query to ascertain that the response is from the intended source. A second software component can be used to run watermarked AI-based applications on resource-constrained processing platforms without significant performance loss.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models that process health data, using federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation of an extension to the existing standard that supports more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  8. Human and Machine Communication (MPAI-HMC): developing reference software.
  9. Multimodal Conversation (MPAI-MMC): developing reference software and conformance testing and exploring new areas.
  10. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  11. Neural Network Watermarking (MPAI-NNW): reference software for enhanced applications.
  12. Portable Avatar Format (MPAI-PAF): reference software, conformance testing and new areas.
  13. Server-based Predictive Multiplayer Gaming (MPAI-SPG): technical report on mitigation of data loss and cheating.
  14. XR Venues (MPAI-XRV): development of the standard.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.

 


MPAI releases reference software leveraging AI Framework and Neural Network Watermarking for Generative AI applications

Geneva, Switzerland – 20 March 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 42nd General Assembly (MPAI-42) approving the release of Reference Software using Neural Network Watermarking for Generative AI applications.

The new V1.1 version of the Neural Network Watermarking (MPAI-NNW) Reference Software includes an implementation of the AI Framework (AIF) and of an AI Workflow enabling a user to make queries that include a text and an image and obtain a vocal response. This inference is watermarked to enable the issuer of the query to ascertain that the response they receive is from the intended source. The Software will be presented online on the 16th of April at 15 UTC. Register at https://us06web.zoom.us/meeting/register/tZ0udeutqT0vHdBh1DLiUxoRr59cUs7iQzzN.
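The Reference Software implements its own watermarking pipeline. Purely to illustrate the general idea of verifying that an audio response comes from the intended source, the sketch below embeds and detects a key-derived spread-spectrum watermark; this is a classic textbook technique, not the MPAI-NNW method, and all parameters are illustrative.

# Minimal spread-spectrum watermarking sketch (NOT the MPAI-NNW Reference Software).
import numpy as np

def watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    rng = np.random.default_rng(key)               # key shared by source and issuer
    carrier = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * carrier

def detect(audio: np.ndarray, key: int, threshold: float = 0.002) -> bool:
    rng = np.random.default_rng(key)
    carrier = rng.choice([-1.0, 1.0], size=audio.shape)
    correlation = float(np.mean(audio * carrier))  # high only if the carrier is present
    return correlation > threshold

rng = np.random.default_rng(0)
response = 0.1 * rng.standard_normal(48_000)       # 1 s of synthetic "vocal response"
marked = watermark(response, key=1234)
print(detect(marked, key=1234))                    # True: marked with the expected key
print(detect(response, key=1234))                  # False: unmarked audio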

Presentations and video recordings of all MPAI standards are available (ppt = PowerPoint file, YT = YouTube, nYT = WimTV):

AI Framework (MPAI-AIF) ppt YT nYT
Context-based Audio Enhancement (MPAI-CAE) ppt YT nYT
Connected Autonomous Vehicle (MPAI-CAV) – Architecture ppt YT nYT
Compression and Understanding of Industrial Data (MPAI-CUI) ppt YT nYT
Governance of the MPAI Ecosystem (MPAI-GME) ppt YT nYT
Human and Machine Communication (MPAI-HMC) ppt YT nYT
Multimodal Conversation (MPAI-MMC) ppt YT nYT
MPAI Metaverse Model (MPAI-MMM) – Architecture ppt YT nYT
Neural Network Watermarking (MPAI-NNW) ppt YT nYT
Object and Scene Description (MPAI-OSD) ppt YT nYT
Portable Avatar Format (MPAI-PAF) ppt YT nYT

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models that process health data, using federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation of an extension to the existing standard that supports more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  8. Human and Machine Communication (MPAI-HMC): developing reference software.
  9. Multimodal Conversation (MPAI-MMC): developing reference software and conformance testing and exploring new areas.
  10. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  11. Neural Network Watermarking (MPAI-NNW): reference software for enhanced applications.
  12. Portable Avatar Format (MPAI-PAF): reference software, conformance testing and new areas.
  13. Server-based Predictive Multiplayer Gaming (MPAI-SPG): technical report on mitigation of data loss and cheating.
  14. XR Venues (MPAI-XRV): development of the standard.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.

 

 


Recent MPAI standards – presentations and video recordings

In the last few months, MPAI has published eight new or updated MPAI standards. They were presented online during the week of 11-15 March 2024.

Here are the titles of the standards with links to the presentations and video recordings provided by two services. They are a good opportunity to stay abreast of the progress in MPAI.

rev MPAI Metaverse Model (MPAI-MMM) – Architecture ppt YT nYT
new Portable Avatar Format (MPAI-PAF) ppt YT nYT
new Human and Machine Communication (MPAI-HMC) ppt YT nYT
new Connected Autonomous Vehicle (MPAI-CAV) – Architecture ppt YT nYT
rev Context-based Audio Enhancement (MPAI-CAE) ppt YT nYT
new Object and Scene Description (MPAI-OSD) ppt YT nYT
rev Multimodal Conversation (MPAI-MMC) ppt YT nYT
rev AI Framework (MPAI-AIF) ppt YT nYT
MPAI presentation ppt YT nYT

MPAI publishes two standards: the new version of Context-based Audio Enhancement and the new Human and Machine Communication

Geneva, Switzerland – 21 February 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 41st General Assembly (MPAI-41) approving the publication of two standards and announcing the availability of all its standards in linked form on the web.

Context-based Audio Enhancement (MPAI-CAE) V2.1 extends the previously published Version 2.0 adding full online references to the specification of all AI Workflows, AI Modules, JSON Metadata, and Data Types used by the standard.

Human and Machine Communication (MPAI-HMC) V1.0 integrates a wide range of technologies from existing MPAI standards to enable new forms of communication between entities, i.e., humans present or represented in a real or virtual space, or machines represented in a virtual space as speaking avatars, acting in a context and using text, speech, face, gesture, and the audio-visual scene in which they are embedded.

In the week of 11-15 March, MPAI will present its recently published standards in a series of 40-minute online sessions. The presentations will illustrate the scope, the features, and the technologies of each standard and will be followed by open discussions. The new web-based access to all published MPAI standards will also be presented. All times are UTC.

AI Framework (MPAI-AIF): 11 March, 16:00 – Link
Context-based Audio Enhancement (MPAI-CAE): 12 March, 17:00 – Link
Connected Autonomous Vehicle (MPAI-CAV) – Architecture: 13 March, 15:00 – Link
Human and Machine Communication (MPAI-HMC): 13 March, 16:00 – Link
Multimodal Conversation (MPAI-MMC): 12 March, 14:00 – Link
MPAI Metaverse Model (MPAI-MMM) – Architecture: 15 March, 15:00 – Link
Portable Avatar Format (MPAI-PAF): 14 March, 14:00 – Link

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models that process health data, using federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation of an extension to the existing standard that supports more corporate risks.
  6. Human and Machine Communication (MPAI-HMC): developing reference software.
  7. Multimodal Conversation (MPAI-MMC): developing reference software and conformance testing, and exploring new areas.
  8. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  9. Neural Network Watermarking (MPAI-NNW): reference software for enhanced applications.
  10. Portable Avatar Format (MPAI-PAF): reference software, conformance testing and new areas.
  11. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  12. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  13. Server-based Predictive Multiplayer Gaming (MPAI-SPG): technical report on mitigation of data loss and cheating.
  14. XR Venues (MPAI-XRV): development of the standard.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.

 

 


Standards that innovate technology and standardisation

At its 40th General Assembly (MPAI-40), MPAI approved one draft standard, one new standard, and three extensions of existing standards. For an organisation that already has nine standards in its bag, this may not look like big news. There are two reasons, though, to consider this a remarkable moment in MPAI’s short but intense life.

The first reason is that the draft standard posted for Community Comments – Human and Machine Communication (MPAI-HMC) – does not specify new technologies but leverages technologies from existing MPAI standards: Context-based Audio Enhancement (MPAI-CAE), Multimodal Conversation (MPAI-MMC), the newly approved Object and Scene Description (MPAI-OSD), and Portable Avatar Format (MPAI-PAF).

If not new technologies, what does MPAI-HMC specify, then? To answer this question, let’s consider Figure 1.

Figure 1 – The MPAI-HMC communications model

The human labelled as #1 is part of a scene with audio and visual attributes and communicates with the Machine by transmitting speech information and the entire audio-visual scene, including him or herself. The Machine receives that information, processes it, and emits internally generated audio-visual scenes that include the Machine itself uttering vocal and displaying visual manifestations of its own internal state, generated to interact more naturally with the human. The human may also communicate with the Machine when other humans are in the scene with him or her, and the Machine can discern the individual humans and identify (i.e., give a name to) audio and visual objects. However, only one human at a time can communicate with the Machine.

The Machine need not capture the human in a real space: his or her digital representation can be rendered in a Virtual Space as a Digitised Human. The human may not be alone, but together with other Digitised Humans or with Virtual Humans, i.e., audio-visual representations of processes such as Machines. For this reason, we will use the word Entity to indicate both a human or their avatar and a Machine rendered as an avatar.

The Machine can also act as an interpreter between the Entities and Contexts labelled as #1 or #2 and #3 or #4. By Context we mean information surrounding an Entity that provides additional insight into the information communicated by the Entity. An example of Context is language and, more generally, culture.

Communication between #1 and #3 represents the case of a human in a Context communicating with a Machine, e.g., an information service, in another Context. In this case the Machine communicates with the human by sensing and actuating audio-visual information, but the communication between the Machine and #3 may use a different communication protocol. The payload used to communicate is the “Portable Avatar” defined as a Data Type specified by the MPAI-PAF standard representing an Avatar and its Context.

Communication between the human in #1 and the Machine is based on raw audio-visual communication, while communication between the Machine and Entity #3 is carried out using a Portable Avatar.
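As a purely hypothetical illustration of a Portable Avatar payload carrying an Avatar together with its Context (field names are assumptions; the normative JSON syntax is specified by MPAI-PAF):

# Hypothetical Portable Avatar payload; field names are illustrative only.
portable_avatar = {
    "AvatarID": "machine-03",
    "AvatarModel": "speaking-avatar-model-reference",
    "Text": "Good morning, how can I help you?",
    "PersonalStatus": {
        "Emotion": {"Speech": "friendly", "Face": "smiling"},
        "CognitiveState": {"Text": "confident"},
    },
    "Context": {
        "Language": "en",
        "SpaceTime": {"VirtualSpaceID": "lobby-01", "Time": "2024-02-21T10:00:00Z"},
    },
}

# Entity #3 would receive this payload and render the avatar in its own Context.
print(portable_avatar["Text"])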

Read a collection of usage scenarios.

The name of the standard is Human and Machine Communication (MPAI-HMC). It is published as a draft with a request for Community Comments, the last step before publication. Comments are due by 2024/02/19T23:59 UTC to secretariat@mpai.community.

To explain the second reason why the 40th General Assembly is a remarkable moment, we have to recall that most MPAI application standards are based on the notion of an AI Workflow (AIW) composed of interconnected AI Modules (AIM) executed in the AI Framework (AIF) specified by the MPAI-AIF standard. Four of the five documents are now published in a new format where the Use Cases, AI Modules, and Data Types chapters make reference to a common body of AIMs and Data Types.

Component-based software engineering aims to build software out of modular components. MPAI is implementing this notion in the world of standards.

See the links below and enjoy:

MPAI-HMC: https://mpai.community/standards/mpai-hmc/mpai-hmc-specification/

MPAI-MMC: https://mpai.community/standards/mpai-mmc/mpai-mmc-specification/

MPAI-OSD: https://mpai.community/standards/mpai-osd/mpai-osd-specification/

MPAI-PAF: https://mpai.community/standards/mpai-paf/mpai-paf-specification/