Moving Picture, Audio and Data Coding
by Artificial Intelligence


MPAI calls for Up-sampling Filter for Video applications technologies

MPAI-50 has published the Call for Technologies: Up-sampling Filter for Video applications (MPAI-UFV) V1.0, requesting parties having rights to technologies satisfying the Use Cases and Functional Requirements and the Framework Licence of the planned Technical Specification: Up-sampling Filter for Video applications (MPAI-UFV) V1.0 to respond to the Call, preferably using the Template for responses.

The goal of MPAI-UFV V1.0 is to develop a standard up-sampling filter that provides optimal performance when applied to a video to generate a video with a higher number of lines and pixels.

The submissions received will be assessed and, if found suitable, collaboratively improved and used in the development of the planned MPAI-UFV Technical Specification.

MPAI membership is not a prerequisite for responding to this Call for Technologies. However, if a submission (or part of one) is accepted for inclusion in the planned MPAI-UFV TS, the Proponent must join MPAI or lose the opportunity to have the accepted technologies included in the TS.

MPAI will select the most suitable technologies based on their technical merits. However, MPAI is not obligated to select a particular technology or any of the proposed technologies if those submitted are found to be inadequate.

Proponents shall mandatorily:

  1. Submit a complete description of the proposed up-sampling filter with a level of detail allowing an expert in the field to develop and implement the proposed filter.
  2. Upload the following software to the MPAI storage:
    1. Docker image that contains the encoding and decoding environments, encoder, decoder, and bitstreams.
    2. Up-sampling filter in source code (preferred) or executable form. Note that proponents of accepted proposals will be requested to provide the source code of the up-sampling filter.
    3. Python scripts to enable testers to carry out the Performance Test.
  3. Submit the following results:
    1. Tables of objective quality results obtained by the submitter with their proposed solution.
    2. The decoded Test Sequences.
    3. The up-sampling results for SD to HD and HD to 4K obtained with the proposed solution.
    4. The VMAF-BD Rate assessment, provided as a graph for each QP and a table with the minimum, maximum, and average values for each sequence.
    5. A Complexity assessment using MAC/pixel and the number of parameters of the submitted up-sampling filter. Use of SADL (6) is recommended (a minimal counting sketch is given after this list).
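As a purely illustrative aid for the two complexity figures requested above, the following sketch counts parameters and MAC/pixel for a plain convolutional up-sampling filter. The layer sizes and the pixel-shuffle assumption are hypothetical and do not describe any required or submitted design; SADL provides its own tooling for such measurements.

```python
# Hypothetical complexity estimate for an assumed convolutional up-sampling filter.
from dataclasses import dataclass

@dataclass
class Conv2D:
    in_ch: int   # input channels
    out_ch: int  # output channels
    k: int       # square kernel size

def num_params(layers):
    """Weights plus biases of every convolutional layer."""
    return sum(l.out_ch * l.in_ch * l.k * l.k + l.out_ch for l in layers)

def macs_per_output_pixel(layers, scale=2):
    """MACs per pixel of the up-sampled output frame, assuming every layer
    runs on the low-resolution grid and a final pixel-shuffle produces the
    (scale x scale) larger output."""
    macs_per_lr_pixel = sum(l.out_ch * l.in_ch * l.k * l.k for l in layers)
    return macs_per_lr_pixel / (scale * scale)

# Illustrative 3-layer filter whose last layer outputs scale^2 * 3 channels
# so that a pixel-shuffle can rearrange them into an RGB frame twice as large.
example = [Conv2D(3, 32, 3), Conv2D(32, 32, 3), Conv2D(32, 2 * 2 * 3, 3)]
print(num_params(example), "parameters")
print(macs_per_output_pixel(example, scale=2), "MAC/pixel")
```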

Submissions will be evaluated by an Evaluation Team created from:

  1. MPAI Member representatives in attendance.
  2. Non-MPAI Member representatives who are respondents to any of the received submissions.
  3. Non-respondent experts/non-MPAI Member representatives invited in a consulting capacity.
  4. No one from 1. or 2. will be denied membership in the Evaluation Team if they request it.

Proposals will be assessed using the following process:

  1. The objectively computed Quality Tests will use the Test Sequences provided to MPAI by an independent academic after the proposal deadline and distributed to Respondents.
  2. Each Respondent presents their proposal.
  3. Evaluation Team members ask questions.
  4. Evaluation Team organises the Tests.
  5. A volunteer member of the Evaluation Team executes the Docker image of a Respondent and computes the values obtained using the test set provided by the independent academic.
  6. The Objective Quality Evaluation will use the VMAF-BD Rate metric to compare the AVC, HEVC, and VVC coded and decoded sequences (QP values 22, 27, 32, 37, 42) up-sampled with the bicubic filter against the same coded and decoded sequences up-sampled with the proposed up-sampling algorithm (a sketch of the BD-rate computation is given after this list).
  7. Latency is the number of frames used by the up-sampling process. Note that the actual number of frames used to produce the response should be specified.
  8. The Complexity evaluation will use MAC/pixel and number of parameters.
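Step 6 asks for a VMAF-BD Rate comparison. As a purely illustrative sketch, assuming the commonly used Bjøntegaard cubic-fit formulation, the figure can be computed per sequence from the (bitrate, VMAF) pairs of the anchor (bicubic up-sampling) and of the proposed filter; the numbers in the example call are made up, and the Call and its annexes define the authoritative test procedure.

```python
# Minimal sketch (assumption: the usual Bjontegaard cubic-fit formulation)
# of computing a BD-rate figure from per-sequence (bitrate, VMAF) points.
import numpy as np

def bd_rate(rate_anchor, vmaf_anchor, rate_test, vmaf_test):
    """Average bitrate difference (%) of the test vs. the anchor at equal
    VMAF. Negative values mean the test needs less bitrate for the same quality."""
    # fit log-rate as a cubic polynomial of quality for both curves
    p_a = np.polyfit(vmaf_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(vmaf_test, np.log(rate_test), 3)
    # integrate both fits over the overlapping quality interval
    lo = max(min(vmaf_anchor), min(vmaf_test))
    hi = min(max(vmaf_anchor), max(vmaf_test))
    int_a, int_t = np.polyint(p_a), np.polyint(p_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0

# Illustrative numbers only (one point per QP 22/27/32/37/42).
print(bd_rate([8000, 4500, 2500, 1400, 800], [96, 93, 88, 80, 70],
              [8000, 4500, 2500, 1400, 800], [97, 94, 90, 83, 74]))
```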

The timeline of proposal submission is:

Step Date Time
MPAI-UFV Call for Technologies issued. 2024/11/20 17:00 UTC
MPAI-UFV Call for Technologies presented online. 2024/11/27 14:00 UTC
Notification of intention to submit a proposal. 2024/12/17 23:59 UTC
Response submission deadline. 2025/02/11 23:59 UTC
Start of response evaluation. 2025/02/18 (MPAI-53)

Those intending to propose should become familiar with the four documents mentioned above and posted here. An online presentation will be made on 2024/11/27 at 14:00 UTC. Register at https://bit.ly/3V0sdt1 to attend.


MPAI calls for “Up-sampling Filter for Video applications” technologies

Geneva, Switzerland – 21st November 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards – has concluded its 49th General Assembly (MPAI-49) by approving for publication:

  1. Call for Technologies: Up-sampling Filter for Video applications (MPAI-UFV)
  2. Technical Specification: Neural Network Traceability (MPAI-NNT) V1.0
  3. Technical Specification: Human and Machine Communication (MPAI-HMC) V2.0
  4. New versions of four standards with added Conformance Testing

Call for Technologies: Up-sampling Filter for Video applications (MPAI-UFV) invites any party able and wishing to contribute to the development of the planned MPAI-UFV Technical Specification to submit a response. MPAI-UFV is expected to leverage AI to provide higher video quality when up-sampling a video from Standard Definition to HDTV and from HDTV to 4K.

Technical Specification: Neural Network Traceability (MPAI-NNT) V1.0 specifies methods to evaluate the ability to trace a modified neural network back to its source, the computational cost of injecting, extracting, detecting, decoding, or matching data from a neural network, and the impact that inserted traceability data has on the performance of a neural network and its inference.

Technical Specification: Human and Machine Communication (MPAI-HMC) V2.0 enables advanced forms of communication between humans in a real space or represented in a Virtual Space, and Machines represented as humanoids in a Virtual Space or rendered as humanoids in a real space. This new version leverages MPAI-CAE V2.3, MPAI-MMC V2.3, MPAI-OSD V1.2, MPAI-PAF V1.3, and Technical Specification: Data Types, Formats, and Attributes (MPAI-TFA) V1.2 – just published as an MPAI standard.

The Call and the standards will be presented online according to the schedule reported below.

Type              Title (register for presentation)            Code  Date/Time (UTC)
Call for Tech.    Up-sampling Filter for Video applications    UFV   2024/11/27 14:00
New Standard      Neural Network Traceability                  NNT   2024/12/10 15:00
Revised Standard  Human and Machine Communication              HMC   2024/12/09 16:00

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): building a community of MPAI-AIF-based implementers.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): developing the Audio Six Degrees of Freedom (CAE-6DF) standard.
  4. Connected Autonomous Vehicle (MPAI-CAV): updating the MPAI-CAV Architecture part and developing the new MPAI-CAV Technologies (CAV-TEC) part of the standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): developing use cases and functional requirements for MPAI-CUI V2.0 supporting a wide range of corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): waiting for responses to the Call for Technologies for video up-sampling filter.
  8. Governance of the MPAI Ecosystem (MPAI-GME): working on version 2.0 of the Specification.
  9. Human and Machine Communication (MPAI-HMC): developing reference software.
  10. Multimodal Conversation (MPAI-MMC): Developing technologies for more Natural-Language-based user interfaces capable of handling more complex questions.
  11. MPAI Metaverse Model (MPAI-MMM): extending the MPAI-MMM specs to support more applications.
  12. Neural Network Watermarking (MPAI-NNW): studying the use of fingerprinting as a technology for neural network traceability.
  13. Object and Scene Description (MPAI-OSD): studying applications requiring more space-time handling.
  14. Portable Avatar Format (MPAI-PAF): studying more applications using digital humans needing new technologies.
  15. AI Module Profiles (MPAI-PRF): specifying which features an AI Workflow or one or more AI Modules need to support.
  16. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing technical report on mitigation of data loss.
  17. Data Types, Formats, and Attributes (MPAI-TFA): extending the standard to data types used by MPAI standards (e.g., automotive and health).
  18. XR Venues (MPAI-XRV): developing the standard for improved development and execution of Live Theatrical Performances and studying the prospects of Collaborative Immersive Laboratories.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


MPAI Metaverse Model: what is it? A look at the table of contents

We think that the publication of the MPAI Metaverse Model (MPAI-MMM) standard last week is an important step in making the metaverse a viable proposition because it provides practical means for an implementer to develop a metaverse instance (M-Instance) that interoperates with another similarly developed M-Instance.

This post has the moderate ambition of just describing the high-level content of the standard looking at the Table of Contents.

Foreword obviously deals with general matters, such as the AI Framework (MPAI-AIF) standard and the MPAI Store, but also with the recent notion of Qualifier introduced by the Data Types, Formats, and Attributes (MPAI-TFA) standard, which enables a machine to communicate its technology choices to another machine, a vital component of interoperability (you can have a look at A new type of “data about data”).

Introduction includes a summary of the work carried out by MPAI in the MMM projects: two technical reports and two standards.

Scope formalises the meaning of metaverse interoperability.

Definitions defines some 150 terms used by the MMM standard.

References provides a set of normative and informative references.

Operation, despite being an informative section, lays down the main elements of the MMM operation.

Processes, Actions, and Items are the steel, cement, and bricks of which the MMM building is composed. Processes perform Actions on Items (data) to achieve a goal and usually need the help of other Processes to reach it.

Scripting Language (MMM Script) helps a Process communicate its needs and requests in an efficient way. It also provides a rigorous, human-readable method to describe use cases using Processes, Actions, and Items.

Profiles provide a simple way to achieve interoperability while avoiding the need for technologies that may be costly and useless for the application.

Verification Use Cases is the moment of truth for MPAI-MMM. MMM Script is used to describe nine rather complex use cases covering widely different domains to show that the MMM-specified technologies can support the identified use cases.

Even if the above does not convince you to read the MPAI-MMM standard from page 1 to the end (MPAI-MMM is published as a series of connected web pages), I hope you can dedicate 15 more minutes to reading Operation, where you will learn more about the internals of MPAI-MMM.


On its fourth anniversary MPAI publishes the first specification supporting metaverse interoperability

Geneva, Switzerland – 30th September 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards – has concluded its 48th General Assembly (MPAI-48) by approving for publication

  1. The combined MPAI Metaverse Model – Architecture (MMM-ARC) V1.2 and Technologies (MMM-TEC) V1.0 standards
  2. The set of Multimodal Conversation (MPAI-MMC) V2.2; Object and Scene Description (MPAI-OSD) V1.1; Portable Avatar Format (MPAI-PAF) V1.2; and Data Types, Formats, and Attributes (MPAI-TFA) V1.0 standards.
  3. The Open-Source Reference Implementation of the Television Media Analysis (OSD-TMA) V1.1 Use case.

All new standards are released using the MPAI full web-based publication method providing the specification and, where available, the reference software, the conformance testing, and the performance assessment.

MPAI-48 also learned that four MPAI standards have been adopted as IEEE standards: two new (MPAI-MMM and MPAI-PAF) and two revised (MPAI-AIF and MPAI-MMC) bringing the total to eight.

The combined Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Architecture (MMM-ARC) V1.2 and Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Technologies (MMM-TEC) V1.0 specify five types of Processes operating in an M-Instance, thirty Actions that Processes can perform, and 65 Data Types and their Qualifiers that Processes can Act on to achieve client and M-Instance interoperability.

Technical Specification: Multimodal Conversation (MPAI-MMC) V2.2 specifies 23 Data Types that enable more human-like and content-rich forms of human-machine conversation, applies them to eight use cases in different domains and specifies 23 AI Modules. MPAI-MMC reuses Data Types and AI Modules from other MPAI standards.

Technical Specification: Object and Scene Description (MPAI-OSD) V1.1 specifies 27 Data Types enabling the digital representation of spatial information of Audio and Visual Objects and Scenes, applies them to the Television Media Analysis (OSD-TMA) use case and specifies 15 AI Modules. MPAI-OSD reuses Data Types and AI Modules from other MPAI standards.

Technical Specification: Portable Avatar Format (MPAI-PAF) V1.2 specifies five Data Types enabling a receiving party to render a digital human as intended by the sending party, applies them to the Avatar-Based Videoconference Use Case, and specifies 13 AI Modules. MPAI-PAF reuses Data Types and AI Modules from other MPAI standards.

Technical Specification: Data Types, Formats, and Attributes (MPAI-TFA) V1.0 specifies Qualifiers – a Data Type containing Sub-Types, Formats, and Attributes – associated to “media” Data Types – currently Text, Speech, Audio, Visual, 3D Model, and Audio-Visual – that facilitate/enable the operation of an AI Module receiving a Data Type instance. MPAI-48 has published Version 1.1 of the standard with Data Qualifiers for the MPAI Metaverse Model.

The reference software implementation of the Television Media Analysis (OSD-TMA) V1.1 Use Case produces a description of a TV program that includes the audio and visual objects, the IDs of the speakers and the faces with their space and time information, and the text of the speaker utterances.

Online presentations of MPAI-MMC, MPAI-MMM, MPAI-OSD, MPAI-PAF, and the reference software implementation of OSD-TMA will be made according to the following timetable:

Title                         Acronym    Date @ Hour (UTC)   Registration link
Multimodal Conversation       MPAI-MMC   15 @ 14             https://tinyurl.com/22h6d437
MPAI Metaverse Model          MPAI-MMM   18 @ 15             https://tinyurl.com/242ahnuu
Object and Scene Description  MPAI-OSD   16 @ 15             https://tinyurl.com/278azb4q
Portable Avatar Format        MPAI-PAF   17 @ 14             https://tinyurl.com/2982tqoz
Television Media Analysis     OSD-TMA    14 @ 15             https://tinyurl.com/yc8xhy7h

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): building a community of MPAI-AIF-based implementers.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): developing the Audio Six Degrees of Freedom (CAE-6DF).
  4. Connected Autonomous Vehicle (MPAI-CAV): developing the new MPAI-CAV Technologies (CAV-TEC) part of the standard and updating the Architecture part.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): developing use cases and functional requirements for MPAI-CUI V2.0 supporting a wide range of corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): working on a Call for Technologies for up-sampling filter for video.
  8. Governance of the MPAI Ecosystem (MPAI-GME): working on version 2.0 of the Specification.
  9. Human and Machine Communication (MPAI-HMC): developing reference software.
  10. Multimodal Conversation (MPAI-MMC): Developing technologies for more Natural-Language-based user interfaces capable of handling more complex questions.
  11. MPAI Metaverse Model (MPAI-MMM): extending the MPAI-MMM specs to support more applications.
  12. Neural Network Watermarking (MPAI-NNW): studying the use of fingerprinting as a technology for neural network traceability.
  13. Object and Scene Description (MPAI-OSD): studying applications requiring more space-time handling.
  14. Portable Avatar Format (MPAI-PAF): studying more applications using digital humans needing new technologies.
  15. AI Module Profiles (MPAI-PRF): specifying which features an AI Workflow or one or more AI Modules need to support.
  16. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing technical report on mitigation of data loss.
  17. Data Types, Formats, and Attributes (MPAI-TFA): extending the standard to data types used by MPAI standards (e.g., automotive and health).
  18. XR Venues (MPAI-XRV): developing the standard for improved development and execution of Live Theatrical Performances and studying the prospects of Collaborative Immersive Laboratories.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


A new type of “data about data”

AI Modules (AIM) organised in AI Workflows (AIW) and executed in the AI Framework (AIF), which enables initialisation, dynamic configuration, and control of AIWs, are a key element of the MPAI approach to AI-based Data Coding standards, as depicted in Figure 1. AIMs communicate to other AIMs in the AIW the Data obtained by executing their specific functions.

The effectiveness of the functions performed by the AIMs is improved if they know more about the capabilities of the AIMs they are connected to and about the Data they receive, as demonstrated by natural language processing. An instance of the MPAI Natural Language Understanding (MMC-NLU) AIM can produce the recognised text and Meaning using three levels of information:

  1. Just the input text.
  2. Also the object identifiers referenced in the text.
  3. Additionally, the object context in a relevant space.

The accuracy of the refined text and Meaning produced by an MMC-NLU AIM is expected to improve when moving from the first to the third case. The cases correspond to different levels of AIM capabilities.

Technical Specification: AI Module Profiles (MPAI-PRF) enables an AIM instance to signal its Attributes – such as input data, output data, and functionality – and Sub-Attributes – such as languages supported by a Text and Speech Translation AIM – that uniquely characterise the AIM. Currently, MPAI-PRF defines the Attributes of eight AIMs but Profiles for more AIMs are likely to be defined in the future.
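To make the idea of Attribute and Sub-Attribute signalling more concrete, here is a minimal, hypothetical sketch in Python; the field names and the supports() helper are assumptions made for illustration and do not reproduce the normative MPAI-PRF syntax.

```python
# Hypothetical capability signal of a Text and Speech Translation AIM;
# the structure is illustrative, not the normative MPAI-PRF format.
aim_profile = {
    "aim": "TextAndSpeechTranslation",
    "attributes": {
        "input_data": ["Text", "Speech"],
        "output_data": ["Text", "Speech"],
        "functionality": "Translation",
    },
    "sub_attributes": {
        "input_languages": ["en", "it", "ko"],
        "output_languages": ["en", "it", "ko"],
    },
}

def supports(profile: dict, language: str) -> bool:
    """Check whether a peer AIM can translate into the requested language."""
    return language in profile["sub_attributes"]["output_languages"]

assert supports(aim_profile, "it")
```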

The effectiveness of the functions performed by an AIM can also be enabled or enhanced if the AIM knows more about the characteristics of the Data received. Examples of characteristics include:

  • The CIE 1931 colour space of an instance of the Visual Data Type.
  • The MP3 format of a speech segment.
  • The WAV file format of an audio segment.
  • The gamma correction applied by the device that produced a video.
  • The ID of an object instance in an audio segment.
  • The Text conveyed by a speech segment.

Technical Specification: Data Types, Formats, and Attributes (MPAI-TFA) V1.0 specifies a new Data Type called Qualifier, a container that can be used to represent, for instance, that a Visual Data Type instance:

  • Uses a given colour space (Sub-Type)
  • Was produced by an AVC codec (Format).
  • Is described by Dublin Core Metadata (Attribute).
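As an illustration of the bullet points above, here is a minimal, hypothetical sketch of a Visual instance carrying a Qualifier; all field names are assumptions and do not reproduce the normative MPAI-TFA JSON syntax.

```python
# Hypothetical Visual object: opaque Content plus a Qualifier with the three
# kinds of information listed above (Sub-Type, Format, Attribute).
visual_object = {
    "content": "frame_0001.bin",  # payload exchanged between AIMs
    "qualifier": {
        "sub_type": {"colour_space": "CIE 1931"},
        "format": {"codec": "AVC"},
        "attribute": {"metadata": "Dublin Core", "title": "Sample clip"},
    },
}

def can_decode(obj: dict, supported_codecs: set) -> bool:
    """A receiving AIM inspects the Qualifier before touching the Content."""
    return obj["qualifier"]["format"]["codec"] in supported_codecs

assert can_decode(visual_object, {"AVC", "HEVC"})
```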

The current versions of MPAI Technical Specifications generally assume that most of the Media Objects exchanged by AIMs are composed of “Content” and “Qualifiers”.

Therefore, Qualifiers are a specialised type of metadata intended to support the operation of AIMs receiving data from other AIMs by conveying information on the Sub-Types, Formats, and Attributes related to the Content. Qualifiers are human-readable but intended to be used only by AIMs.

MPAI also provides a standard method to attach information to a Data Type instance, called Annotation and defined as Data attached to an Object or a Scene. As opposed to a Qualifier, which describes intrinsic properties of a Data Type, an Annotation is spatially and temporally local and changeable.

Future versions of MPAI-TFA will likely be published because of the large variety of application needs that will require the specification of Qualifiers for additional Data Types. MPAI-TFA users are invited to communicate their need for extension of existing and specification of additional Data Types in MPAI-TFA to the MPAI Secretariat. Therefore, versioning of Qualifiers is a critical component of MPAI-TFA.


MPAI propounds the development of Collaborative Immersive Laboratories (CIL)

Collaborative Immersive Laboratory (XRV-CIL) is a project designed to enable researchers in network-connected physical venues equipped with devices that create an immersive virtual environment to manipulate and visualise laboratory data with other researchers located at different places and having a simultaneous immersive experience.

XR Venue (MPAI-XRV) is an MPAI project addressing a multiplicity of use cases enabled by Extended Reality (XR) and enhanced by Artificial Intelligence (AI) technologies. MPAI-XRV specifies design methods for AI Workflows and AI Modules that automate complex processes in a variety of application domains. Venue is used as a synonym for Real and Virtual Environment. CIL is one of the XRV projects.

One use case for CIL would be to work with medical data such as scans to discover patterns within cellular data to facilitate therapy identification as part of the following workflow:

  1. Start from a file (e.g., a LIF file for data from a confocal microscope) that contains slices of a 3D Object (+ time) produced by machines from different manufacturers and enable real time navigation of the 3D object starting from slices.
  2. Use an AI-trained filter to remove the noise, i.e., information found in the slices that is not part of the scanned object.
  3. Preserve the slices by applying specific processes, e.g., dehydration.
  4. Enhance some specific features of the object by using appropriate contrasting agents, e.g., monoclonal antibodies.
  5. Use a sufficient number of slices to train a Machine:
    1. To count the cells in a human tissue from different organs, different living bodies, and anatomical features presenting different health conditions.
    2. To identify the typology and functions of the cells as influenced by genomics and environment, i.e., phenotyping.
  6. Request the trained Machine to produce “inferences” used to count and identify the cells having specified features.
  7. Generate statistics of the inferences produced by the Machine.
  8. A human navigates the cleaned (noise-filtered) slices as an object and verifies whether the inferences of the Machine can be trusted.
  9. A trajectory of possible outcomes can be plotted towards multiple decision paths for desired outcomes, based on how the living body changes over time, guiding proactive decisions on habits and therapeutic interventions.
  10. After a certain time, redo steps 1 to 9.

For instance, Figure 1 shows a CT or MRI dataset being normalised and analysed, and the result rendered with, e.g., a renderer that is common to the participating labs. Each Lab may enter annotations to the dataset or apply rendering controls that enhance appropriate parts of the rendered dataset.

Figure 1 – An example of data analysed and rendered in an XRV-CIL

Figure 1 represents a specific case of the full XRV-CIL project while Figure 2 represents the more general case.

Figure 2 – The multi-technology, multi-location XRV-CIL case

Let us assume that there are N geographically distributed labs providing datasets acquired with different technologies at different times and related to a particular application domain (each lab may provide more than one dataset). Technology-specific AI Modules normalise the datasets. A Fusion AI Module, controlled by Fusion Parameters from each lab, provides M Fused Data (the number of Fused Data is independent of the number of input datasets).

Fused Data are processed by Analysis AI Modules driven by Analysis Parameters possibly coming from one or more labs. They produce Desired Results which are then Rendered specifically for each Lab either locally or in the cloud.
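A minimal sketch of this dataflow, with all names and types assumed for illustration (MPAI-XRV does not define such an API), could look as follows.

```python
# Hypothetical sketch of the Figure 2 dataflow: N labs provide datasets,
# technology-specific modules normalise them, a Fusion module (driven by
# Fusion Parameters) produces M fused datasets, and Analysis modules turn
# those into results that each lab renders.
from typing import Callable

Dataset = dict    # e.g. {"tech": "MRI", "lab": "A", ...}
FusedData = dict
Result = dict

def normalise(ds: Dataset) -> Dataset:
    # placeholder for a technology-specific normalisation AI Module
    return {**ds, "normalised": True}

def fuse(datasets: list, fusion_params: dict) -> list:
    # placeholder Fusion AI Module; M outputs need not equal N inputs
    return [{"sources": [d["lab"] for d in datasets], "params": fusion_params}]

def analyse(fused: FusedData, analysis_params: dict) -> Result:
    # placeholder Analysis AI Module
    return {"fused": fused, "analysis": analysis_params}

def cil_pipeline(datasets, fusion_params, analysis_params,
                 render: Callable[[Result], None]) -> None:
    normalised = [normalise(d) for d in datasets]
    for fused in fuse(normalised, fusion_params):
        render(analyse(fused, analysis_params))

# Example: two labs, one shared rendering callback.
cil_pipeline(
    [{"tech": "MRI", "lab": "A"}, {"tech": "CT", "lab": "B"}],
    fusion_params={"registration": "rigid"},
    analysis_params={"task": "cell_count"},
    render=print,
)
```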

The model of Figure 2 is applicable to various domains for scientific, industrial, and educational applications such as:

  1. Medical
  2. Anthropological
  3. Multi- and hyper-spectral Imaging
  4. Spectroscopy
  5. Chemistry
  6. Geology and Material Science
  7. Non-destructive testing
  8. Oceanography
  9. Astronomy

XRV-CIL promises to dramatically improve the way data is collaboratively acquired, processed, and shared among laboratories.

MPAI, the international, unaffiliated, non-profit organisation developing standards for AI-based data coding, might contribute to the areas of dataset normalisation, specification of the input/output data and metadata of processing elements, and interaction protocols with rendered processing results. MPAI could also contribute to the identification of specific AI technologies to process datasets, e.g., the cell counting mentioned above.



MPAI publishes a new version of Context-based Audio Enhancement (MPAI-CAE) and a new standard for Data Qualifiers (MPAI-TFA)

Geneva, Switzerland – 21st August 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards – has concluded its 47th General Assembly (MPAI-47) by approving for publication the Context-based Audio Enhancement (MPAI-CAE) V2.2 standard and the new Data Types, Formats, and Attributes (MPAI-TFA) V1.0 standard for Community Comments. The new versions are released using the new full web-based publication method.

Technical Specification: Context-based Audio Enhancement (MPAI-CAE) V2.2 improves the user experience for different audio-related applications, such as entertainment, restoration, and communication, in a variety of different contexts such as in the home, in the office, and in the studio. V2.2 extends the capabilities of several data formats used across MPAI standards.

Technical Specification: Data Types, Formats, and Attributes (MPAI-TFA) V1.0 specifies Qualifiers – a Data Type containing Sub-Types, Formats, and Attributes – associated to “media” Data Types – currently Text, Speech, Audio, and Visual – that facilitate/enable the operation of an AI Module receiving a Data Type instance.

The capabilities of the standards will be presented online on September 24 at 16:00 UTC for MPAI-CAE V2.2 and on August 27 at 14:00 UTC for MPAI-TFA V1.0. To attend, please register at https://tinyurl.com/2wj8e4bn for MPAI-CAE V2.2 and at https://tinyurl.com/3p8j74st for MPAI-TFA V1.0.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): waiting for responses to the Audio Six Degrees of Freedom (CAE-6DF) Call for Technologies.
  4. Connected Autonomous Vehicle (MPAI-CAV): developing the new MPAI-CAV Technologies (CAV-TEC) part of the standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): developing use cases and functional requirements for MPAI-CUI V2.0 supporting more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): working on a Call for Technologies.
  8. Governance of the MPAI Ecosystem (MPAI-GME): working on version 2.0 of the Specification.
  9. Human and Machine Communication (MPAI-HMC): developing reference software.
  10. Multimodal Conversation (MPAI-MMC): finalising V2.2 and developing Performance Assessment of some important AI Modules.
  11. MPAI Metaverse Model (MPAI-MMM): developing the new MPAI-MMM Technologies (MMM-TEC) part of the standard.
  12. Neural Network Watermarking (MPAI-NNW): developing reference software for enhanced applications.
  13. Object and Scene Description (MPAI-OSD): finalising V1.1 and developing reference software and conformance testing.
  14. Portable Avatar Format (MPAI-PAF): finalising V1.2 and developing reference software, conformance testing and new areas for digital humans.
  15. AI Module Profiles (MPAI-PRF): specifying which features an AI Workflow or an AI Module supports.
  16. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing technical report on mitigation of data loss and cheating.
  17. Data Types, Formats, and Attributes (MPAI-TFA): extending the standard to data types used by other MPAI standards.
  18. XR Venues (MPAI-XRV): developing the standard enabling improved development and execution of Live Theatrical Performances.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


MPAI publishes Version 1.1 of the Human and Machine Communication standard

Geneva, Switzerland – 10 July 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 46th General Assembly (MPAI-46) by approving for publication the new Version 1.1 of the Human and Machine Communication standard.

Technical Specification: Human and Machine Communication (MPAI-HMC) V1.1 enables an Entity to hold a multimodal communication with another Entity possibly in a different Context. The standard is agnostic of the parties in a communication as an Entity can be a human in an audio-visual scene of a real space or a Machine in an Audio-Visual Scene of a Virtual Space. Humans and Machines can operate in different Contexts, e.g., language and culture. MPAI-HMC references a range of technologies specified in five MPAI Standards.

MPAI-HMC will be presented online on 22 July at 15 UTC (8 PDT, 11 EDT, 23 CST, 24 KST). To attend the presentation, register at https://us06web.zoom.us/meeting/register/tZEtde-orTwqE9x9sSkauN9CxKsLvbJrIeSF.

At previous meetings, MPAI published four draft standards for Community Comments: Context-based Audio Enhancement V2.2, Multimodal Conversation V2.2, Object and Scene Description V1.1, and Portable Avatar Format V1.2. Interested parties should check the mentioned links and make comments, as the deadline for submission has not been reached yet.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): developing the Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation for an extension to existing standard that includes support for more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  8. Human and Machine Communication (MPAI-HMC): developing reference software.
  9. Multimodal Conversation (MPAI-MMC): developing reference software and exploring new areas.
  10. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  11. Neural Network Watermarking (MPAI-NNW): developing reference software for enhanced applications.
  12. Portable Avatar Format (MPAI-PAF): developing reference software, conformance testing and new areas for digital humans.
  13. AI Module Profiles (MPAI-PRF): to specify which features an AI Module supports.
  14. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing technical report on mitigation of data loss and cheating.
  15. XR Venues (MPAI-XRV): developing the standard enabling improved development and execution of Live Theatrical Performance.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter, and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.


MPAI publishes new Standard, Reference Software, and Conformance Testing Specification

Geneva, Switzerland – 12 June 2024. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, and unaffiliated organisation developing AI-based data coding standards has concluded its 45th General Assembly (MPAI-45) by approving for publication the new AI Module Profiles Technical Specification, the Neural Network Watermarking Reference Software, and the Multimodal Conversation Conformance Testing Specification.

Technical Specification: AI Module Profiles (MPAI-PRF) V1.0 is an important addition to the MPAI architecture because it enables an AI Module to signal its capabilities in terms of input and output data and specific functionalities.

Reference Software Specification: Neural Network Watermarking (MPAI-NNW) V1.2 makes available to the community software implementing the functionalities of the Neural Network Watermarking Standard when implemented in an AI Framework and using limited capability Microcontroller Units.

Conformance Testing Specification: Multimodal Conversation (MPAI-MMC) V2.1 publishes methods and data sets to enable a developer or a user to ascertain the claims of an implementation to conform with the specification of the Conversation with Emotion, Multimodal Question Answering, and Unidirectional Speech Translation AI Workflows.

At its previous meeting, MPAI published three Calls for Technologies on Six Degrees of Freedom Audio, Connected Autonomous Vehicle – Technologies, and MPAI Metaverse Model – Technologies. Interested parties should check the mentioned links for updates, as the deadline for submission has not been reached yet.

MPAI is happy to announce that the Institute of Electrical and Electronics Engineers has adopted the companion Connected Autonomous Vehicle – Architecture standard as an IEEE standard identified as 3307-2024.

MPAI is continuing its work plan that involves the following activities:

  1. AI Framework (MPAI-AIF): developing open-source applications based on the AI Framework.
  2. AI for Health (MPAI-AIH): developing the specification of a system enabling clients to improve models processing health data and federated learning to share the training.
  3. Context-based Audio Enhancement (CAE-DC): preparing new projects.
  4. Connected Autonomous Vehicle (MPAI-CAV): developing the Functional Requirements of the data used by the MPAI-CAV – Architecture standard.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): preparation for an extension to existing standard that includes support for more corporate risks.
  6. End-to-End Video Coding (MPAI-EEV): video coding using AI-based End-to-End Video coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): video coding with AI tools added to existing tools.
  8. Human and Machine Communication (MPAI-HMC): developing reference software.
  9. Multimodal Conversation (MPAI-MMC): developing reference software and exploring new areas.
  10. MPAI Metaverse Model (MPAI-MMM): developing reference software specification and identifying metaverse technologies requiring standards.
  11. Neural Network Watermarking (MPAI-NNW): developing reference software for enhanced applications.
  12. Portable Avatar Format (MPAI-PAF): developing reference software, conformance testing and new areas for digital humans.
  13. AI Module Profiles (MPAI-PRF): to specify which features an AI Module supports.
  14. Server-based Predictive Multiplayer Gaming (MPAI-SPG): developing technical report on mitigation of data loss and cheating.
  15. XR Venues (MPAI-XRV): developing the standard enabling improved development and execution of Live Theatrical Performance.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter, and follow MPAI on social media LinkedIn, Twitter, Facebook, Instagram, and YouTube.


A standard for autonomous vehicle componentisation

The 44th MPAI General Assembly has published three Calls for Technologies. The Connected Autonomous Vehicle – Technologies (CAV-TEC) Call requests parties having rights to technologies satisfying the CAV-TEC Use Cases and Functional Requirements and the CAV-TEC Framework Licence to respond to the Call, preferably using the CAV-TEC Template for Responses. An online presentation of this Call will be held on 2024/06/06 (Thursday) at 16:00 UTC. Please register if you wish to attend the presentation (recommended if you intend to respond).

MPAI kicked off the Connected Autonomous Vehicle (MPAI-CAV) project in the first days after its establishment. The project was particularly challenging, and only in September 2023 was MPAI ready to publish Version 1.0 of Technical Specification: Connected Autonomous Vehicle (MPAI-CAV) – Architecture (CAV-ARC). This specified a CAV as a system composed of Subsystems, for each of which the functions, input/output data, and topology of components were specified. In its turn, each Subsystem was broken down into Components whose functions and input/output data were specified. Each Subsystem was assumed to be implemented as an AI Workflow (AIW) made of Components implemented as AI Modules (AIM) executed in an AI Framework (AIF), as specified by the AI Framework (MPAI-AIF) standard.

This is illustrated in Figure 1, where a human staying outside a CAV interacts with it via the Human-CAV Interaction Subsystem (HCI), requesting the Autonomous Motion Subsystem (AMS) to take the human to a destination. The AMS requests spatial information from the Environment Sensing Subsystem (ESS), decides a route (possibly after consulting with the HCI and the human), and starts the travel. The spatial information provided by the ESS is used to create a model of the external environment and is possibly integrated with other environment models obtained from CAVs in range. Finally, the AMS has enough information to issue a command to the Motion Actuation Subsystem (MAS) to move the CAV to the desired place, which can be close if the environment is “complex” or rather far if it is “simple”.

Figure 1 – The reference model of MPAI Connected Autonomous Vehicle
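For illustration only, the interaction described above can be sketched as follows; the class and method names are assumptions, since CAV-ARC specifies functions and data flows rather than an API.

```python
# Hypothetical sketch of the HCI -> AMS -> ESS/MAS interaction of Figure 1.
class EnvironmentSensingSubsystem:
    def spatial_information(self) -> dict:
        # placeholder for the sensed model of the external environment
        return {"obstacles": [], "complexity": "simple"}

class MotionActuationSubsystem:
    def execute(self, command: dict) -> None:
        print("moving towards", command["waypoint"], "on the way to", command["destination"])

class AutonomousMotionSubsystem:
    def __init__(self, ess, mas):
        self.ess, self.mas = ess, mas
    def travel_to(self, destination: str) -> None:
        env = self.ess.spatial_information()
        # the next waypoint is close if the environment is "complex", far if "simple"
        waypoint = "far waypoint" if env["complexity"] == "simple" else "near waypoint"
        self.mas.execute({"waypoint": waypoint, "destination": destination})

class HumanCAVInteraction:
    def __init__(self, ams):
        self.ams = ams
    def request(self, utterance: str) -> None:
        # in the real Subsystem this is multimodal understanding, not a plain string
        self.ams.travel_to(destination=utterance)

ams = AutonomousMotionSubsystem(EnvironmentSensingSubsystem(), MotionActuationSubsystem())
HumanCAVInteraction(ams).request("Central Station")
```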

CAV-ARC is a functional specification in the sense that it identifies subsystems and components and their functions, but not the precise functional requirements of the data exchanged. This can be seen in Figure 2 where, say, the Road State data type is identified and its functions generally described, but without a full specification of its functional requirements.

Figure 2 – The Autonomous Motion Subsystem.

The purpose of the CAV-TEC Call is to identify and characterise all the data types required by the CAV reference model and stimulate specific technology proposals.

Because many components of the HCI are shared with other MPAI standards, in September 2023 MPAI published Multimodal Conversation (MPAI-MMC) V2.0, which includes the HCI specification, whose scope goes beyond the general CAV-ARC scope. Most of the CAV-ARC data types of the CAV-HCI reference model of Figure 3 are fully specified.

Figure 3 – The Human-CAV Interaction Subsystem

What is still missing – and is part of the Call – is the full specification of the messages exchanged by the HCI with the AMS and its peer HCIs in remote CAVs.

The Use Cases and Functional Requirements document attached to the Call contains an initial form of JSON syntax and semantics of all data types and requests comments on their appropriateness and proposals for data type formats and attributes.
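As an example of the kind of comment the document invites, here is a purely hypothetical sketch of how a data type such as Road State might be expressed in a JSON-like form; every field name below is an assumption made for discussion and does not reproduce the draft syntax attached to the Call.

```python
# Hypothetical, illustrative structure only; not the CAV-TEC draft syntax.
road_state = {
    "dataType": "RoadState",
    "time": "2024-05-15T10:30:00Z",                      # assumed timestamp format
    "segment": {"id": "example-segment-001", "length_m": 120},
    "surface": {"condition": "wet", "friction_estimate": 0.55},
    "signage": [{"type": "speed_limit", "value_kmh": 50}],
}
```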

It is interesting to note that MPAI assumes that a CAV generates a “private” metaverse used to plan decisions to move the CAV in a real environment. A CAV may request – and the requested CAV may decide to share – part of their private metaverses to facilitate understanding of the common real space(s) they traverse. MPAI investigations have shown that a CAV’s private metaverse can be represented and shared by the same or slightly extended MPAI-MMM metaverse technologies.

This observation has been put into practice, and part of Technical Specification: MPAI Metaverse Model (MPAI-MMM) – Architecture (MMM-ARC) V1.1 is referenced by the CAV-TEC Call. It should be noted that the parallel MMM-TEC Call for Technologies seeks to enhance the current MMM-ARC specification by providing an initial form of JSON syntax and semantics of all data types and requesting comments on their appropriateness and proposals for data type formats and attributes.

Bringing to reality the dream of autonomous vehicles will be a major contribution to improving our life and environment. Standards can greatly contribute to the conversion of CAVs from siloed systems to systems made of standard components that are more reliable, explainable, and affordable.