Moving Picture, Audio and Data Coding
by Artificial Intelligence

The MPAI 2022 Calls for Technologies – Part 1 (AI Framework)

A foundational element of the MPAI architecture is the observation that monolithic AI applications have characteristics that make them undesirable. They are single-use, i.e., it is hard to reuse the technologies of one application in another, and they are opaque, i.e., it is hard to understand why a machine has produced a certain output when subjected to a certain input. The first characteristic makes it hard to build complex applications, because an implementer must possess know-how covering all features of the application; the second means that such applications are often “unexplainable”.

MPAI launched AI Framework (AIF), its first official standardisation activity in December 2020, less than 3 months after its establishment. AIF is a standard environment where it is possible to execute AI Workflows (AIW) composed of AI Modules (AIM). Both AIWs and AIMs are defined by their function and their interfaces. AIF is unconcerned by the technology used by an AIM but needs to know the topology of an AIW.

Ten months later (October 2021), the MPAI-AIF standard was approved. Its structure is represented in Figure 1.

Figure 1 – The MPAI-AIF Reference Model

MPAI’s AI Framework (MPAI-AIF) specifies the architecture, interfaces, protocols, and Application Programming Interfaces (API) of the AI Framework (AIF), an environment specially designed for execution of AI-based implementations, but also suitable for mixed AI and traditional data processing workflows.

The AIF, the AIW and the AIMs are represented by JSON Metadata. The User Agent and the AIMs call the Controller through a set of standard APIs. Likewise, the Controller calls standard APIs to interact with Communication (a service for inter-AIM communication), Global Storage (a service for AIMs to store data for access by other AIMs) and the MPAI Store (a service for downloading AIMs required by an AIW). Access represents access to application-specific data.

Through the JSON Metadata, an AIF with appropriate resources (specified in the AIF JSON Metadata) can execute an AIW requiring AIMs (specified in the AIW JSON Metadata) that can be downloaded from the MPAI Store.
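As a hedged illustration of how the JSON Metadata drives execution, the Python sketch below parses a hypothetical, non-normative AIW Metadata fragment and derives the list of AIMs the AIF would request from the MPAI Store; the field names are assumptions made here for illustration, not the schema defined by the standard.

```python
import json

# Hypothetical, non-normative AIW Metadata fragment (field names are assumptions;
# the normative JSON Metadata schema is defined by the MPAI-AIF standard).
AIW_METADATA = json.loads("""
{
  "AIW": {
    "name": "ConversationWithEmotion",
    "AIMs": [
      {"name": "SpeechRecognition",     "version": "1.0"},
      {"name": "LanguageUnderstanding", "version": "1.0"},
      {"name": "EmotionFusion",         "version": "1.0"}
    ],
    "Topology": [
      {"from": "SpeechRecognition.Text",            "to": "LanguageUnderstanding.Text"},
      {"from": "LanguageUnderstanding.TextEmotion", "to": "EmotionFusion.TextEmotion"}
    ]
  }
}
""")

def aims_to_download(aiw_metadata: dict) -> list:
    """Return the identifiers of the AIMs an AIF would request from the MPAI Store."""
    return [f'{aim["name"]}:{aim["version"]}' for aim in aiw_metadata["AIW"]["AIMs"]]

print(aims_to_download(AIW_METADATA))
# ['SpeechRecognition:1.0', 'LanguageUnderstanding:1.0', 'EmotionFusion:1.0']
```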

The MPAI-AIF standard has the following main features:

  1. Independence of the Operating System.
  2. Modular component-based architecture with specified interfaces.
  3. Encapsulation of component interfaces to abstract them from the development environment.
  4. Interface with the MPAI Store enabling access to validated components.
  5. Components can be implemented as software, hardware or mixed hardware-software.
  6. Components: execute in local and distributed Zero-Trust architectures, can interact with other implementations operating in proximity and support Machine Learning functionalities.

The MPAI-AIF standard achieves much of the original MPAI vision because AI applications:

  1. Need not be monolithic but can be composed of independently developed modules with standard interfaces
  2. Are more explainable
  3. Can be found in an open market.

Feature #6 above is a requirement, but MPAI-AIF V1 does not give an application developer practical means to ensure that the execution of the AIW takes place in a secure environment, because it does not specify any trusted service that an implementer can rely on. Version 2 of MPAI-AIF intends to provide exactly that: it identifies specific trusted services supporting the implementation of a Trusted Zone that meets a set of functional requirements and enables AIF Components to access trusted services via APIs (a hypothetical usage sketch follows the list below), such as:

  1. AIM Security Engine.
  2. Trusted AIM Model Services.
  3. Attestation Service.
  4. Trusted Communication Service.
  5. Trusted AIM Storage Service.
  6. Encryption Service.
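As announced above, here is a hypothetical usage sketch of such APIs: an AIM loads a Machine Learning model only after attestation, decryption, and validation succeed. Every interface name below is a placeholder assumption, since the actual Trusted Services APIs are exactly what the Call for Technologies solicits.

```python
from typing import Protocol

# All interfaces below are hypothetical placeholders; the normative Trusted
# Services APIs will be specified by MPAI-AIF V2 from the responses to the Call.

class TrustedServices(Protocol):
    def attest(self) -> bool: ...                       # Attestation Service
    def decrypt_model(self, blob: bytes) -> bytes: ...  # Encryption Service
    def validate_model(self, model: bytes) -> bool: ... # Trusted AIM Model Services

def load_model_securely(services: TrustedServices, encrypted_model: bytes) -> bytes:
    """Sketch of an AIM loading an ML model only inside an attested Trusted Zone."""
    if not services.attest():
        raise RuntimeError("execution environment failed attestation")
    model = services.decrypt_model(encrypted_model)
    if not services.validate_model(model):
        raise RuntimeError("model is not the expected one or rights were not acquired")
    return model
```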

Figure 2 represents the Reference Model of MPAI-AIF V2.

Figure 2 – Reference Models of MPAI-AIF V2

The AIF Components shall be able to call Trusted Services APIs after establishing the developer-specified security regime based on the following requirements:

  1. The AIF Components shall access high-level implementation-independent Trusted Services API to handle:
    1. Encryption Service.
    2. Attestation Service.
    3. Trusted Communication Service.
    4. Trusted AIM Storage Service including the following functionalities:
      1. AIM Storage Initialisation (secure and non-secure flash and RAM)
      2. AIM Storage Read/Write.
      3. AIM Storage release.
    5. Trusted AIM Model Services including the following functionalities:
      1. Secure and non-secure Machine Learning Model Storage.
      2. Machine Learning Model Update (i.e., full, or partial update of the weights of the Model).
      3. Machine Learning Model Validation (i.e., verification that the model is the one that is expected to be used and that the appropriate rights have been acquired).
    6. AIM Security Engine including the following functionalities:
      1. Machine Learning Model Encryption.
      2. Machine Learning Model Signature.
      3. Machine Learning Model Watermarking.
  2. The AIF Components shall be easily integrated with the above Services.
  3. The AIF Trusted Services shall be able to use hardware and OS security features already existing in the hardware and software of the environment in which the AIF is implemented.
  4. Application developers shall be able to select the application’s security either or both by:
    1. Level of security that includes a defined set of security features for each level, i.e., APIs are available to either select individual security services or to select one of the standard security levels available in the implementation.
    2. Developer-defined security, i.e., a combination of a developer-defined set of security features.
  5. The specification of the AIF V2 Metadata shall be an extension of the AIF V1 Metadata supporting security with either or both standardised levels and a developer-defined combination of security features.
  6. MPAI welcomes the submission of use cases and their respective threat models.

MPAI has rigorously followed its standard development process in producing the Use Cases and Functional Requirements summarised in this post. MPAI has additionally produced the Commercial Requirements (Framework Licence) and the text of the Call for Technologies.

Below are a few useful links for those wishing to know more about the MPAI-AIF V2 Call for Technologies and how to respond to it:

  1. The “About MPAI-AIF” web page provides some general information about MPAI-AIF.
  2. The MPAI-AIF V1 standard can be downloaded from here.
  3. The 1 min 20 sec video (YouTube and non-YouTube) concisely illustrates the MPAI-AIF V2 Call for Technologies.
  4. The slides and the video recording (YouTube and non-YouTube) of the 11 July online presentation give a complete overview of MPAI-AIF V2.

The MPAI secretariat shall receive the responses to the MPAI-AIF V2 Call for Technologies by 10 October 2022 at 23:39 UTC. For any need, please contact the MPAI secretariat.

 


Personal Status in human-machine conversation

MPAI has a Development Committee in the area of human-machine conversation (MMC-DC). In September 2021, MMC-DC produced its first standard, titled Multimodal Conversation (MPAI-MMC). That standard provides a standard way to represent Emotion with the following syntax:

{
  "$schema": "http://json-schema.org/draft-07/schema",
  "definitions": {
    "emotionType": {
      "type": "object",
      "properties": {
        "emotionDegree": {
          "enum": ["High", "Medium", "Low"]
        },
        "emotionName": {
          "type": "number"
        },
        "emotionSetName": {
          "type": "string"
        }
      }
    }
  },
  "type": "object",
  "properties": {
    "primary": {
      "$ref": "#/definitions/emotionType"
    },
    "secondary": {
      "$ref": "#/definitions/emotionType"
    }
  }
}

The semantics is given by:

| Name | Definition |
|---|---|
| emotionType | Specifies the Emotion that the input carries. |
| emotionDegree | Specifies the Degree of Emotion as one of “Low”, “Medium”, and “High”. |
| emotionName | Specifies the ID of an Emotion listed in Table 2. |
| emotionSetName | Specifies the name of the Emotion set which contains the Emotion. The Emotion set of Table 2 is used as a baseline, but other sets are possible. |
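For readers who want to experiment with the schema, the snippet below validates a sample Emotion instance against it with the widely used Python jsonschema package; using that package is a choice made here for illustration, not something mandated by MPAI-MMC.

```python
from jsonschema import validate  # pip install jsonschema

EMOTION_SCHEMA = {
    "$schema": "http://json-schema.org/draft-07/schema",
    "definitions": {
        "emotionType": {
            "type": "object",
            "properties": {
                "emotionDegree": {"enum": ["High", "Medium", "Low"]},
                "emotionName": {"type": "number"},
                "emotionSetName": {"type": "string"},
            },
        }
    },
    "type": "object",
    "properties": {
        "primary": {"$ref": "#/definitions/emotionType"},
        "secondary": {"$ref": "#/definitions/emotionType"},
    },
}

# Emotion ID 3 ("anger" in Table 2) of the baseline Basic Emotion Set, high degree.
sample = {
    "primary": {"emotionDegree": "High", "emotionName": 3, "emotionSetName": "Basic Emotion Set"}
}

validate(instance=sample, schema=EMOTION_SCHEMA)  # raises ValidationError if invalid
print("sample Emotion is valid")
```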

Table 1 gives some examples of the MPAI standardised three-level Basic Emotion Set.

Table 1 – Basic Emotion Set

| EMOTION CATEGORIES | GENERAL ADJECTIVAL | SPECIFIC ADJECTIVAL |
|---|---|---|
| ANGER | angry | furious, irritated, frustrated |
| APPROVAL, DISAPPROVAL | admiring/approving, disapproving, indifferent | awed, contemptuous |
| AROUSAL | aroused/excited/energetic | cheerful, playful, lethargic, sleepy |
| ATTENTION | attentive | expectant/anticipating, thoughtful, distracted/absent-minded, vigilant, hopeful/optimistic |
| BELIEF | credulous | sceptical |
| CALMNESS | calm | peaceful/serene, resigned |
The semantics of some elements in Table 1 is provided by Table 2.

Table 2 – Semantics of the Basic Emotion Set

| ID | Emotion | Meaning |
|---|---|---|
| 1 | admiring/approving | emotion due to perception that others’ actions or results are valuable |
| 2 | amused | positive emotion combined with interest (cognitive) |
| 3 | anger | emotion due to perception of physical or emotional damage or threat |
| 4 | anxious/uneasy | low or medium degree of fear, often continuing rather than instant |
| 5 | aroused/excited/energetic | cognitive state of alertness and energy |
| 6 | arrogant | emotion communicating social dominance |
| 7 | astounded | high degree of surprise |
| 8 | attentive | cognitive state of paying attention |
| 9 | awed | approval combined with incomprehension or fear |
| 10 | bewildered/puzzled | high degree of incomprehension |
| 11 | bored | not interested |
| 12 | calm | relative lack of emotion |

In July 2022, MPAI issued a Call for Technologies to extend the MPAI-MMC standard. One of the technologies requested is Personal Status, defined as “the ensemble of information internal to a person, including Emotion, Cognitive State, and Attitude”. The three components are defined as follows:

| Name | Definition |
|---|---|
| Attitude | An element of the internal status related to the way a human or avatar intends to position themselves vis-à-vis the Environment or subsets of it, e.g., “Respectful”, “Confrontational”, “Soothing”. |
| Cognitive State | An element of the internal status reflecting the way a human or avatar understands the Environment, such as “Confused”, “Dubious”, “Convinced”. |
| Emotion | An element of the internal status resulting from the interaction of a human or avatar with the Environment or subsets of it, such as “Angry”, “Sad”, “Determined”. |

The Personal Status is conveyed by one or more Modalities, currently, Text, Speech, Face and Gesture.

Respondents to the call are requested to propose the following (an illustrative sketch follows the list):

  1. A Personal Status format capable of describing the evolution of Personal Status over time.
  2. A Fused Personal Status format supporting the requirements to:
    1. Include the Emotion, Cognitive Status, and Attitude making up a Personal Status.
    2. Retain information on the measured values of the different factors in a Personal Status conveyed by the different Modalities.
    3. Describe the evolution of Personal Status over time.

A Personal Status standard can be used as a standard component in human-machine conversation. One such component is Personal Status Extraction, depicted in Figure 2.

Figure 2 – Personal Status Extraction

Another component is Personal Status Display depicted in Figure 3.

Figure 3 – Personal Status Display

 


MPAI 101

The 19th of July 2022 was the second anniversary of the launch of the MPAI idea. After two years of existence, it is useful to have a summary of MPAI’s vision, mission, processes, achievements, plans, and the sister organisation MPAI Store. Those in a hurry can have a look at a 2 min video about MPAI (YouTube and non-YouTube).

Vision. The MPAI idea was driven by the impact digital media standards had on the media industry. While traditionally not very inclined to adopt “official” standards, that industry has seen relentless development in the last 1/3 of a century since digital media standards came to the fore and the industry began adopting them.

The state of Artificial Intelligence today is like the state of digital media some 1/3 of a century ago. Many players hold many technologies, but none has the power alone to create a level playing field where different players can deploy interoperable products, services, and applications.

Mission. The international, non-profit, and unaffiliated MPAI organisation develops standards for AI-based data coding and seeks to play the role of enabling that level playing field. 1/3 of a century ago the blocking factor was the high amount of data generated by the digitisation of analogue media. Today this remains an issue, but Artificial Intelligence can also be applied to all sorts of data when it is convenient to transform it from one format into another format.

Processes. Developing standards is a challenging business because standards are often based on sophisticated technologies that result from large research investments and have the potential to be used by millions of people. MPAI takes the following approach:

  1. Anybody should be allowed to propose standards and contribute to the definition of their functional requirements.
  2. Before the development of a standard starts users should know as many details of functional and commercial requirements as legally possible.
  3. Investments that have produced good research results should be remunerated.
  4. Once approved, the terms and conditions for using a standard should be known in a timely and simple fashion.

MPAI is developing its standards using a process that accommodates such requirements:

  1. Anybody can propose standards, attend online meetings, and develop functional requirements.
  2. MPAI Principal Members develop and approve the Framework Licence of a standard. Unlike Fair, Reasonable and Non-Discriminatory (FRAND) declarations, the Framework Licence includes terms and conditions without values (dollars, percentages, rates, dates, etc.) and a declaration that:
    1. The licence will be issued before commercial implementations are available on the market.
    2. The total cost will be in line with the total cost of the licenses for similar data coding technologies.
    3. The market value of the specific standardised technology will be considered.
  3. MPAI issues Calls for Technologies requesting proposals satisfying functional and commercial requirements.
  4. Anybody can respond to a Call and participate in the integration of technologies for a standard on the condition of membership in MPAI and acceptance of the Framework Licence for proposals submitted.

Achievements. MPAI has developed 4 Technical Specifications and 1 full standard, i.e., the complete set of Technical Specification, Reference Software, Conformance Testing and Performance Assessment:

  1. AI Framework (MPAI-AIF) enables the creation of environments (AIF) that execute AI Workflows (AIW) composed of basic components called AI Modules (AIM). It is a foundational MPAI standard on which other MPAI application standards are built.
  2. Context-based Audio Enhancement (MPAI-CAE) uses AI to improve the user experience for audio-related entertainment, teleconferencing, restoration, and other applications in contexts such as in the home, in the car, on the go, in the studio, etc.
  3. Compression and Understanding of Industrial Data (MPAI-CUI) uses AI to handle financial data for such purposes as assessing adequacy of governance and predicting the default and business discontinuity probabilities of a company.
  4. Multimodal Conversation (MPAI-MMC) uses AI to enable conversation between humans and machines emulating human-human conversation in completeness and intensity.

MPAI has also developed Governance of the MPAI Ecosystem (MPAI-GME), a foundational standard laying down the rules that govern the submission of and access to MPAI standard implementations with attributes of Reliability, Robustness, Replicability, and Fairness, available from the MPAI Store.

Plans. MPAI is engaged in 3 projects which have just reached the Call for Technologies stage and aim at:

  1. Providing the AI Framework standard with a security infrastructure so that AIF V2 components can access security services. Please have a look at the 1 min 20 sec video about the MPAI-AIF V2 Call for Technologies (YouTube and non-YouTube); the slides presented at the online meeting on 2022/07/11; the video recording of that 11 July online presentation (YouTube and non-YouTube); and the Call for Technologies, Use Cases and Functional Requirements, and Framework Licence.
  2. Extending the Multimodal Conversation standard. Please have a look at the 2 min video (YouTube and non-YouTube) illustrating MPAI-MMC V2; the slides presented at the online meeting on 2022/07/12; the video recording of that 12 July online presentation (YouTube and non-YouTube); and the Call for Technologies, Use Cases and Functional Requirements, and Framework Licence. MPAI-MMC V2 calls for a range of technologies, such as:
    1. Extraction of Personal Status, a set of internal characteristics from a person or avatar, currently Emotion, Cognitive State, and Attitude, conveyed by Modalities: Text, Speech, Face, and Gesture.
    2. Generation of a speaking avatar from Text and Personal Status, typically generated by a machine conversing with a human.
    3. Audio-Visual Scene Description to describe the structured composition of the audio-visual objects in a scene.
    4. Avatar Model to describe a static avatar from the waist up displaying movements in face and gesture.
    5. Avatar Descriptors to represent the instantaneous alterations of the face, head, arms, hands, and fingers of an Avatar Model.
    6. Extraction of Speech and Face Descriptors for remote authentication.
  3. Developing the Neural Network Watermarking (MPAI-NNW) standard, providing the means to measure the performance of a neural network watermarking technology. Please have a look at the 1 min 30 sec video (YouTube and non-YouTube) illustrating MPAI-NNW; the slides presented at the online meeting on 2022/07/12; the video recording of that 12 July online presentation (YouTube and non-YouTube); and the Call for Technologies, Use Cases and Functional Requirements, and Framework Licence.

MPAI is also engaged in several other projects which have not reached the Call for Technologies stage:

  1. AI Health (MPAI-AIH): addresses users equipped with an AIF-enabled smartphone who collect, process, and license health data to a central service which satisfies data processing requests from third parties in line with the data licence. Improved neural network models are shared and improved via federated learning.
  2. Avatar Representation and Animation (MPAI-ARA): addresses the extraction of visual human features to animate a speaking avatar which accurately reproduces the features and the movements of a human.
  3. Connected Autonomous Vehicle (MPAI-CAV): addresses the AI Modules and AI Workflows of a CAV, i.e., a system capable of moving autonomously based on the analysis of the data produced by a range of sensors exploring the environment and the information transmitted by other sources in range.
  4. AI-based End-to-End Video Coding (MPAI-EEV): seeks to reduce the number of bits required to represent 2D video by exploiting AI-based end-to-end data coding technologies without being constrained by how data coding has traditionally been used for video coding.
  5. AI-Enhanced Video Coding (MPAI-EVC): aims at substantially enhancing the performance of a traditional video codec (MPEG-5 EVC) by improving or replacing traditional tools with AI-based tools.
  6. Integrative Genomic/Sensor Analysis (MPAI-GSA): aims at understanding and compressing the result of high-throughput experiments combining genomic/proteomic and other data, e.g., from video, motion, location, weather, and medical sensors.
  7. Mixed-Reality Collaborative Spaces (MPAI-MCS): addresses virtual spaces where humans and avatars collaborate to achieve common goals, such as Conversation About a Scene (CAS) and Avatar-Based Videoconference (ABV). These are two use cases enabled by MPAI-MMC V2.
  8. Visual Object and Scene Description (MPAI-OSD): addresses use cases sharing the goal of describing visual objects and locating them in space. Scene description includes the description of objects, their attributes in a scene and their semantic description.
  9. Server-based Predictive Multiplayer Gaming (MPAI-SPG): aims to mitigate the gameplay discontinuities caused by high latency or packet losses in online and cloud gaming applications and to detect game players who are getting an unfair advantage by manipulating the data generated by their game client.
  10. XR Venues (MPAI-XRV) addresses use cases enabled by AR/VR/MR (XR) and enhanced by Artificial Intelligence technologies. Examples are eSports, Experiential retail/shopping, and Immersive art experiences.

MPAI Store: Standards are about interoperability, but what is MPAI Interoperability? MPAI defines it as the ability to replace an Implementation of an AI Workflow or an AI Module with a functionally equivalent and conforming Implementation. MPAI defines 3 Interoperability Levels of an AIW executed in an AIF:

Level 1 – The AIW is implementer-specific and satisfies the MPAI-AIF Standard.

Level 2 – The AIW is specified by an MPAI Application Standard.

Level 3 – The AIW is specified by an MPAI Application Standard and validated by a Performance Assessor.
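A hedged sketch of how a client of the MPAI Store might reason about these Levels follows; the metadata fields are invented here for illustration and are not part of the Governance specification.

```python
def interoperability_level(impl: dict) -> int:
    """Map an implementation's (hypothetical) metadata to an MPAI Interoperability Level."""
    if not impl.get("conforms_to_mpai_aif"):
        return 0  # outside the MPAI Interoperability Levels
    if not impl.get("aiw_application_standard"):
        return 1  # implementer-specific AIW executed in a conforming AIF
    if not impl.get("performance_assessed"):
        return 2  # AIW specified by an MPAI Application Standard
    return 3      # additionally validated by a Performance Assessor

print(interoperability_level({"conforms_to_mpai_aif": True,
                              "aiw_application_standard": "MPAI-MMC"}))  # -> 2
```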

Implementations should be labelled so as not to confuse users. The Governance of the MPAI Ecosystem assigns this task to the MPAI Store, a not-for-profit organisation that verifies the security of implementations, tests the claimed conformance to an MPAI technical specification, records the result of a Performance Assessor, and makes the implementation available for download. The MPAI Store also manages a reputation system recording reviews of MPAI implementations.

MPAI offers Users access to the promised benefits of AI with a guarantee of increased transparency, trust and reliability as the Interoperability Level of an Implementation moves from Level 1 to 3.


The second round of MPAI standardisation begins

On 19 July 2020 – two years ago – the wild idea of an organisation dedicated to the development of AI-based data coding standards was made public. What has happened in these two years?

  1. MPAI was established in September 2020.
  2. Four Calls for Technologies were published in December 2020, and January-February 2021.
  3. The corresponding four Technical Specifications were published in September-November-December 2021:
    1. AI Framework (MPAI-AIF, a standard environment to execute AI workflows composed of AI Modules),
    2. Compression and Understanding of Industrial Data (MPAI-CUI, standard AI-based financial data processing technologies and their application to Company Performance Prediction).
    3. Multimodal Conversation (MPAI-MMC, standard AI-based human-machine conversation technologies and their application to 5 use cases),
    4. Context-based Audio Enhancement (MPAI-CAE, standard AI-based audio experience-enhancement technologies and their application to 4 use cases)
  4. Completion of the set of specifications composing an MPAI standard, namely: Reference Software, Conformance Testing and Performance Assessment in addition to Technical Specification. So far this has been partly achieved.
  5. IEEE adoption without modification of the Technical Specifications. The first MPAI technical specification converted to an IEEE standard is expected to be approved in the second half of September 2022.
  6. Publication of three Calls for Technologies and associated Functional and Commercial Requirements for data formats and technologies:
    1. The extended AI Framework standard (MPAI-AIF V2) will retain the functionalities specified by Version 1 and will enable the components of the Framework to access security functionalities.
    2. The extended Multimodal Conversation standard (MPAI-MMC V2) will enable a variety of new use cases such as separation and location of audio-visual objects in a scene (e.g., human beings, their voices and generic objects); the ability of a party in metaverse1 to import an environmental setting and a group of avatars from metaverse2; representation and interpretation of the visual features of a human to extract information about their internal state (e.g., emotion) or to accurately reproduce the human as an avatar.
    3. The Neural Network Watermarking standard (MPAI-NNW) will provide the means to assess if the insertion of a watermark deteriorates the performance of a neural network; how well a watermark detector can detect the presence of a watermark and a watermark decoder can retrieve the payload; and how to quantify the computational cost to inject, detect, and decode a payload.
  7. Finally, MPAI has decided to establish the MPAI Store. This is the place where implementations of MPAI technical specifications will be submitted, validated, tested, and made available for download.

A short life with many results. Much more to accomplish.


MPAI calls for technologies supporting three new standards

Geneva, Switzerland – 19 July 2022. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 22nd General Assembly. Among the outcomes is the publication of three Calls for Technologies supporting the Use Cases and Functional Requirements identified for extensions of two existing standards – AI Framework and Multimodal Conversation – and for a new standard – Neural Network Watermarking.

Each of the three Calls is accompanied by two documents. The first document identifies the Use Cases whose implementation the standard is intended to enable and the Functional Requirements that the proposed data formats and associated technologies are expected to support.

The extended AI Framework standard (MPAI-AIF V2) will retain the functionalities specified by Version 1 and will enable the components of the Framework to access security functionalities.

The extended Multimodal Conversation (MPAI-MMC V2) will specify a variety of new technologies such as separation and location of audio-visual objects in a scene (e.g., human beings, their voices and generic objects); the ability of a party in metaverse1 to import an environmental setting and a group of avatars from metaverse2; representation and interpretation of the visual features of a human to extract information about their internal state (e.g., emotion) or to accurately reproduce the human as an avatar.

Neural Network Watermarking (MPAI-NNW) will provide the means to assess if the insertion of a watermark deteriorates the performance of a neural network; how well a watermark detector can detect the presence of a watermark and a watermark decoder can retrieve the payload; and how to quantify the computational cost to inject, detect, and decode a payload.

The second document accompanying a Call for Technologies is the Framework Licence for the standard that will be developed from the technologies submitted in response to the Call. The Framework Licence is a licence without critical data such as cost, dates, rates etc.

The document packages of the Calls can be found on the MPAI website.

Those intending to respond to the Calls should do so by submitting their responses to the MPAI secretariat by 23:39 UTC on 10 October 2022.

MPAI develops data coding standards for applications that have AI as the core enabling technology. Any legal entity supporting the MPAI mission may join MPAI, if able to contribute to the development of standards for the efficient use of data.

So far, MPAI has developed 5 standards (not italic in the list below), is currently engaged in extending 2 approved standards (underlined) and is developing another 10 standards (italic).

| Name of standard | Acronym | Brief description |
|---|---|---|
| AI Framework | MPAI-AIF | Specifies an infrastructure enabling the execution of implementations and access to the MPAI Store. |
| Context-based Audio Enhancement | MPAI-CAE | Improves the user experience of audio-related applications in a variety of contexts. |
| Compression and Understanding of Industrial Data | MPAI-CUI | Predicts the company’s performance from governance, financial, and risk data. |
| Governance of the MPAI Ecosystem | MPAI-GME | Establishes the rules governing the submission of and access to interoperable implementations. |
| Multimodal Conversation | MPAI-MMC | Enables human-machine conversation emulating human-human conversation. |
| Avatar Representation and Animation | MPAI-ARA | Specifies descriptors of avatars impersonating real humans. |
| Connected Autonomous Vehicles | MPAI-CAV | Specifies components for Environment Sensing, Autonomous Motion, and Motion Actuation. |
| End-to-End Video Coding | MPAI-EEV | Explores the promising area of AI-based “end-to-end” video coding for longer-term applications. |
| AI-Enhanced Video Coding | MPAI-EVC | Improves existing video coding with AI tools for short-to-medium term applications. |
| Integrative Genomic/Sensor Analysis | MPAI-GSA | Compresses the data of high-throughput experiments combining genomic/proteomic and other data. |
| Mixed-reality Collaborative Spaces | MPAI-MCS | Supports collaboration of humans represented by avatars in virtual-reality spaces. |
| Neural Network Watermarking | MPAI-NNW | Measures the impact of adding ownership and licensing information to models and inferences. |
| Visual Object and Scene Description | MPAI-OSD | Describes objects and their attributes in a scene. |
| Server-based Predictive Multiplayer Gaming | MPAI-SPG | Trains a network to compensate data losses and detects false data in online multiplayer gaming. |
| XR Venues | MPAI-XRV | Addresses XR-enabled and AI-enhanced use cases where venues may be both real and virtual. |

Please visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.

Most importantly: please join MPAI, share the fun, build the future.

 

 


What is new in MPAI Multimodal Conversation

The MPAI project called Multimodal Conversation (MPAI-MMC), one of the earliest MPAI projects, has the ambitious goal of using AI to enable forms of conversation between humans and machines that emulate the conversation between humans in completeness and intensity. An important element to achieving this goal is the leveraging of all modalities used by a human when talking to another human: speech, but also text, face, and gesture.

In the Conversation with Emotion use case standardised in Version 1 (V1) of MPAI-MMC, the machine activates different modules, which MPAI calls AI Modules (AIM), that produce data in response to the data generated by a human:

| AI Module | Produces | What data | From what data |
|---|---|---|---|
| Speech Recognition (Emotion) | Extracts | Text, Human speech emotion | Speech |
| Language Understanding | Produces | Refined text | Recognised text |
| Language Understanding | Extracts | Meaning, Text emotion | Recognised text |
| Video Analysis | Extracts | Face emotion | Face Object |
| Emotion Fusion | Produces | Fused emotion | Text Emotion, Speech Emotion, Face Emotion |
| Dialogue Processing | Produces | Machine text, Machine emotion | Meaning, Refined Text, Fused Emotion |
| Speech Synthesis (Emotion) | Produces | Machine speech with Emotion | Text, Emotion |
| Lips Animation | Produces | Machine Face with Emotion | Speech, Emotion |

This is graphically depicted in Figure 1 where the green blocks correspond to the AIMs.

Figure 1 – Conversation with Emotion (V1)
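A minimal sketch of how these AIMs chain together is given below; the function names mirror the table, while the bodies are left as stubs because each AIM is an independently developed implementation (only the wiring is shown, so the module is importable but not executable end-to-end).

```python
# Stubs mirroring the Conversation with Emotion (V1) workflow; each AIM is an
# independently developed module, so only the wiring is sketched here.

def speech_recognition(speech): ...                       # -> text, speech_emotion
def language_understanding(text): ...                     # -> refined_text, meaning, text_emotion
def video_analysis(face_object): ...                      # -> face_emotion
def emotion_fusion(text_e, speech_e, face_e): ...         # -> fused_emotion
def dialogue_processing(meaning, text, fused_e): ...      # -> machine_text, machine_emotion
def speech_synthesis(machine_text, machine_emotion): ...  # -> machine_speech
def lips_animation(machine_speech, machine_emotion): ...  # -> machine_face

def conversation_with_emotion(speech, face_object):
    text, speech_emotion = speech_recognition(speech)
    refined_text, meaning, text_emotion = language_understanding(text)
    face_emotion = video_analysis(face_object)
    fused_emotion = emotion_fusion(text_emotion, speech_emotion, face_emotion)
    machine_text, machine_emotion = dialogue_processing(meaning, refined_text, fused_emotion)
    machine_speech = speech_synthesis(machine_text, machine_emotion)
    machine_face = lips_animation(machine_speech, machine_emotion)
    return machine_speech, machine_face
```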

Multimodal Conversation Version 2 (V2), for which a Call for Technologies is planned to be issued on 19 July 2022, intends to improve MPAI-MMC V1 by extending the notion of Emotion with the notion of Personal Status. This is the ensemble of personal information that includes Emotion, Cognitive State, and Attitude. The former two – Emotion and Cognitive State – result from the interaction with the environment, while the last – Attitude – is the stance that will be taken for new interactions based on the achieved Emotion and Cognitive State.

Figure 2 shows the composite AI Module introduced in MPAI-MMC V2: Personal Status Extraction (PSE). This contains specific AIMs that describe the individual text, speech, face and gesture modalities and interpret descriptors. PSE plays a fundamental role in the human-machine conversation as we will see soon.

Figure 2 – Personal Status Extraction

A second fundamental component – Personal Status Display (PSD) – is depicted in Figure 3. Its role is to enable the machine to manifest itself to the party it is conversing with. The manifestation is driven by the words generated by the machine and by the Personal Status it intends to attach to its speech, face, and gesture.

Figure 3 – Personal Status Display

Is there a reason why the word “party” has been used in lieu of “human”? Yes, there is. The Personal Status Display can be used to manifest a machine to a human, but potentially also to another avatar. The same can be said of Personal Status Extraction, which can extract the Personal Status of a human, but could do that on an avatar as well. MPAI-MMC V2 has examples of both.

Figure 4 shows how we can leverage the Personal Status Extraction and Personal Status Display AIMs to enhance the performance of Conversation with Emotion – pardon – Conversation with Personal Status.

Figure 4 – Conversation with Personal Status V2.0

In Figure 4, speech recognition extracts the text from speech. Language Understanding and Question and Dialogue Processing can do a better job because they have access to the Personal Status. Finally, the Personal Status Display is a re-usable component that generates a speaking avatar from text and the Personal Status conveyed by the three speech, face, and gesture modalities.

Figure 4 assumes that the outside world provides clean speech, face and gesture. Most often, unfortunately, this is not the case. There is rarely a single speech source and, even when there is just one, it is embedded in all sorts of surrounding sounds. The same can be said of face and gesture. There may be more than one person, and extracting the face, or the head, arms, hands, and fingers making up the gesture of a human, is anything but simple. Figure 5 introduces two critical components: Audio Scene Description (ASD) and Visual Scene Description (VSD).

Figure 5 – Conversation with Personal Status and Audio-Visual Scene Description

The task of Audio-Visual Scene Description (AVSD) can be described as “digitally describe a portion of the world with a level of clarity and precision achievable by a human”. Expressed in this form, the goal can be unattainable with today’s technology, because describing “any” scene is too general a task. On the other hand, it can also be insufficient for some purposes, because the world can often be described using sensors a human does not have.

The scope of Multimodal Conversation V2, however, is currently limited to 3 use cases:

  1. A human has a conversation with a machine about the objects in a room.
  2. A group of humans has a conversation with a Connected Autonomous Vehicle (CAV) outside and inside it (in the cabin).
  3. Groups of humans have a videoconference where humans are individually represented by avatars having a high similarity with the humans they represent.

VSD should provide a description of the visual scene as composed of visual objects classified as human and generic objects. The human object should be decomposable into face, head, arm, hand, and finger objects and should have position and velocity information. ASD should provide a description of the speech sources as audio objects with their position and velocity.
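The sketch below shows one possible, non-normative shape for such descriptions, limited to the properties named above (classification, decomposition, position, and velocity); units and coordinate conventions are assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

# Non-normative sketch of the audio and visual scene descriptions discussed above.
Vec3 = Tuple[float, float, float]  # assumed: metres and metres/second in scene coordinates

@dataclass
class AudioObject:
    source_id: str
    position: Vec3
    velocity: Vec3
    is_speech: bool = True

@dataclass
class VisualObject:
    object_id: str
    kind: str                      # "human" or "generic"
    position: Vec3
    velocity: Vec3
    parts: Optional[Dict[str, "VisualObject"]] = None  # face, head, arm, hand, finger
```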

The first use case is well represented by Figure 6.

Figure 6 – Conversation About a Scene

The machine sees the human as a human object. The Object Identification AIM uses the Gesture Descriptors to understand where the human’s finger points. If there is an object at that position, the Object Identification AIM uses the Physical Object Descriptors to assign an ID to the object. The machine also feeds the Face Object and the Human Object into the Personal Status Extraction AIM to understand the human’s Emotion, Cognitive State and Attitude, in order to enable the Question and Dialogue Processing AIM to fine-tune its answer.
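A toy example of the geometric part of that step follows: given a pointing direction derived from the Gesture Descriptors and the positions of candidate objects, pick the object closest to the pointing ray. This is only one conceivable way the Object Identification AIM could be realised, not what the standard will prescribe.

```python
import numpy as np

def identify_pointed_object(finger_pos, finger_dir, objects):
    """Toy object identification: return the id of the object whose position lies
    closest to the pointing ray (finger_pos + t * finger_dir, t >= 0)."""
    finger_dir = finger_dir / np.linalg.norm(finger_dir)
    best_id, best_dist = None, np.inf
    for object_id, pos in objects.items():
        to_obj = pos - finger_pos
        t = float(np.dot(to_obj, finger_dir))
        if t < 0:                 # object is behind the pointing hand
            continue
        dist = float(np.linalg.norm(to_obj - t * finger_dir))
        if dist < best_dist:
            best_id, best_dist = object_id, dist
    return best_id

objects = {"vase": np.array([1.0, 0.0, 2.0]), "lamp": np.array([-1.0, 0.5, 3.0])}
print(identify_pointed_object(np.array([0.0, 0.0, 0.0]),
                              np.array([0.5, 0.0, 1.0]), objects))  # -> vase
```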

Is this all we have to say about Multimodal Conversation V2.0? Well, no, this is the beginning. So, stay tuned for more news or, better, attend the MPAI-MMC V2 online presentation on Tuesday 12 July 2022 at 14 UTC. Please register here to attend.


An introduction to MPAI Multimodal Conversation V2

The MPAI project called Multimodal Conversation (MPAI-MMC) has the ambitious goal of using AI to enable forms of human-machine conversation that emulate human-human conversation in completeness and intensity. This means that MMC will leverage all modalities that a human uses when talking to another human: of course speech, but also text, face and gesture.

In the Conversation with Emotion use case of MMC V1 the machine activates different modules (in italic) to produce data (underlined) in response to a human:

  1. Speech Recognition (Emotion) extracts text and speech emotion.
  2. Language Understanding produces refined text, and extracts meaning and text emotion.
  3. Video Analysis extracts face emotion.
  4. Emotion Fusion fuses the 3 emotions into fused emotion.
  5. Dialogue Processing produces machine text and machine emotion.
  6. Speech Synthesis (Emotion) produces speech with machine emotion.
  7. Lips Animation produces machine face (an avatar) with facial emotion and lips in sync with speech.

This is depicted in Figure 1.

Multimodal Conversation Version 2 (V2) intends to substantially improve MPAI-MMC V1 by adding Cognitive State and Attitude to Emotion. The combination of the three is called Personal Status, the ensemble of information internal to a person. Emotion and Cognitive State are the result of an interaction with the environment, while Attitude is the stance for new interactions.

Figure 1 shows one component – Personal Status Extraction (PSE) – identified for MPAI-MMC V2. PSE, a Composite AIM containing other specific AIMs that describe modalities and interpret descriptors, plays a fundamental role in human-machine conversation.

Figure 1 – Personal Status Extraction

A second fundamental component – Personal Status Display – is depicted in Figure 2.

Figure 2 – Personal Status Display

 


Functional requirements for 3 new standards published 

 Geneva, Switzerland – 22 June 2022. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 21st General Assembly. Among the outcomes is the approval of three Use Cases and Functional Requirements documents for AI Framework V2, Multimodal Conversation V2 and Neural Network Watermarking V1.

This milestone is important because MPAI Principal Members intending to participate in the development of the standards can develop the Framework Licences of the three planned standards. The Framework Licence has been devised by MPAI to facilitate the practical availability of approved standards (see here for an example). It is a licence without critical data such as cost, dates, rates etc. MPAI is now drafting the Calls for Technologies for the 3 standards and plans to adopt and publish them on 2022/07/19, the 2nd anniversary of the launch of the MPAI project.

AI Framework (MPAI-AIF) V1 specifies an infrastructure enabling the execution of implementations and access to the MPAI Store. V2 will add security support to the framework and is the next step following today’s release of the MPAI-AIF V1 Reference Software.

Multimodal Conversation (MPAI-MMC) V1 enables human-machine conversation emulating human-human conversation. V2 will specify technologies supporting 5 new use cases:

  1. Personal Status Extraction: provides an estimate of the Personal Status (PS) – of a human or an avatar – conveyed by Text, Speech, Face, and Gesture. PS is the ensemble of information internal to a person, including Emotion, Cognitive State, and Attitude.
  2. Personal Status Display: generates an avatar from Text and PS that utters speech with the intended PS while the face and gesture show the intended PS.
  3. Conversation About a Scene: a human holds a conversation with a machine about objects in a scene. While conversing, the human points their fingers to indicate their interest in a particular object. The machine is helped by the understanding of the human’s PS.
  4. Human-Connected Autonomous Vehicle (CAV) Interaction: a group of humans converse with a CAV which understands the utterances and the PSs of the humans it converses with and manifests itself as the output of a Personal Status Display.
  5. Avatar-Based Videoconference: avatars representing humans with a high degree of accuracy participate in a videoconference. A virtual secretary (VS) represented as an avatar displaying PS creates an online summary of the meeting with a quality enhanced by the virtual secretary’s ability to understand the PS of the avatar it converses with.

Neural Network Watermarking (MPAI-NNW): will provide the means to measure, for a given size of the watermarking payload, the ability of 1) the watermark inserter to inject a payload without deteriorating the NN performance, 2) the watermark detector to recognise the presence and the watermark decoder to successfully retrieve the payload of the inserted watermark, 3) the watermark inserter to inject a payload and the watermark detector/decoder to detect/decode a payload from a watermarked model or from any of its inferences at a measured computational cost.
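The kind of measurement MPAI-NNW targets can be sketched as follows: compare task performance before and after watermark insertion, check detection and payload decoding, and time each operation. All the callables below are placeholders supplied by the experimenter, since the watermarking technologies themselves are exactly what the Call requests.

```python
import time

def evaluate_watermarking(model, test_set, metric, insert, detect, decode, payload):
    """Sketch of the three MPAI-NNW measurement aspects: performance impact,
    detectability/decodability, and computational cost. All callables (metric,
    insert, detect, decode) are placeholders supplied by the experimenter."""
    report = {}

    baseline = metric(model, test_set)                 # task performance before insertion
    t0 = time.perf_counter()
    wm_model = insert(model, payload)
    report["insertion_seconds"] = time.perf_counter() - t0
    report["performance_drop"] = baseline - metric(wm_model, test_set)

    t0 = time.perf_counter()
    report["watermark_detected"] = detect(wm_model)
    report["detection_seconds"] = time.perf_counter() - t0

    t0 = time.perf_counter()
    report["payload_recovered"] = decode(wm_model) == payload
    report["decoding_seconds"] = time.perf_counter() - t0
    return report
```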

MPAI will hold four online presentations of the documents on the following dates:

| Title | Acronym | Day of July | Time | Note |
|---|---|---|---|---|
| AI Framework V2 | MPAI-AIF | 11 | 15:00 UTC | Register |
| Multimodal Conversation V2 | MPAI-MMC | 07 | 14:00 UTC | Register |
| Multimodal Conversation V2 | MPAI-MMC | 12 | 14:00 UTC | Register |
| Neural Network Watermarking | MPAI-NNW | 12 | 15:00 UTC | Register |

MPAI-MMC will be presented in two sessions because of the number and scope of the use cases and of the supporting technologies.

Those intending to attend a presentation event are invited to register at the link above.

MPAI develops data coding standards for applications that have AI as the core enabling technology. Any legal entity supporting the MPAI mission may join MPAI, if able to contribute to the development of standards for the efficient use of data.

So far, MPAI has developed 5 standards (normal font in the list below), is currently engaged in extending two approved standards (underlined) and is developing another 9 standards (italic).

| Name of standard | Acronym | Brief description |
|---|---|---|
| AI Framework | MPAI-AIF | Specifies an infrastructure enabling the execution of implementations and access to the MPAI Store. |
| Context-based Audio Enhancement | MPAI-CAE | Improves the user experience of audio-related applications in a variety of contexts. |
| Compression and Understanding of Industrial Data | MPAI-CUI | Predicts the company’s performance from governance, financial, and risk data. |
| Governance of the MPAI Ecosystem | MPAI-GME | Establishes the rules governing the submission of and access to interoperable implementations. |
| Multimodal Conversation | MPAI-MMC | Enables human-machine conversation emulating human-human conversation. |
| Server-based Predictive Multiplayer Gaming | MPAI-SPG | Trains a network to compensate data losses and detects false data in online multiplayer gaming. |
| AI-Enhanced Video Coding | MPAI-EVC | Improves existing video coding with AI tools for short-to-medium term applications. |
| End-to-End Video Coding | MPAI-EEV | Explores the promising area of AI-based “end-to-end” video coding for longer-term applications. |
| Connected Autonomous Vehicles | MPAI-CAV | Specifies components for Environment Sensing, Autonomous Motion, and Motion Actuation. |
| Avatar Representation and Animation | MPAI-ARA | Specifies descriptors of avatars impersonating real humans. |
| Neural Network Watermarking | MPAI-NNW | Measures the impact of adding ownership and licensing information to models and inferences. |
| Integrative Genomic/Sensor Analysis | MPAI-GSA | Compresses the data of high-throughput experiments combining genomic/proteomic and other data. |
| Mixed-reality Collaborative Spaces | MPAI-MCS | Supports collaboration of humans represented by avatars in virtual-reality spaces. |
| Visual Object and Scene Description | MPAI-OSD | Describes objects and their attributes in a scene. |

Visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.

Most importantly: join MPAI, share the fun, build the future.

 

 


MPAI wants to do it again

On the 30th of September 2021, on the first anniversary of its incorporation, MPAI approved Version 1 of its Multimodal Conversation standard (MPAI-MMC). The standard included 5 use cases: Conversation with Emotion, Multimodal Question Answering and three Automatic Speech Translation use cases. Three months later, MPAI approved Version 1 of Context-based Audio Enhancement (MPAI-CAE). The standard included 4 use cases: Emotion-Enhanced Speech, Audio Recording Preservation, Speech Restoration System and Enhanced Audioconference Experience.

A lot more has happened in MPAI beyond – and even before the approval of – these two standards, and now MPAI is ready to launch a new project that includes 5 use cases:

  1. Personal Status Extraction (PSE).
  2. Personal Status-driven Avatar (PSA).
  3. Conversation About a Scene (CAS).
  4. Human-CAV (Connected Autonomous Vehicle) Interaction (HCI).
  5. Avatar-Based Videoconference (ABV).

This article will give a brief introduction to the 5 use cases.

  1. Personal Status Extraction (PSE). Personal Status is a set of internal characteristics of a person, currently, Emotion, Cognitive State, and Attitude. Emotion and Cognitive State result from the interaction of a human with the Environment. Cognitive State is more rational (e.g., “Confused”, “Dubious”, “Convinced”). Emotion is less rational (e.g., “Angry”, “Sad”, “Determined”). Attitude is the stance that a human takes when s/he has reached an Emotion and Cognitive State (e.g., “Confrontational”, “Respectful”, “Soothing”). The PSE use case is about how Personal Status can be extracted from its Manifestations: Text, Speech, Face and Gesture.
  2. Personal Status-driven Avatar (PSA). In Conversation with Emotion (MPAI-MMC V1) a machine was represented by an avatar whose speech and face displayed an emotion congruent with the emotion displayed by a human the machine is conversing with. The PSA use case is about the interaction of a machine with humans in different use cases. The machine is represented by an avatar whose text, speech, face, and gesture display a Personal Status congruent with the Personal Status manifested by the human the machine is conversing with.
  3. Conversation About a Scene (CAS): A human and a machine converse about the objects in a room with little or no noise. The human uses a finger to indicate their interest in a particular object. The machine understands the Personal Status shown by the human in their speech, face, and gesture, e.g., the human’s satisfaction because the machine understands their question. The machine manifests itself as the head-and-shoulders of an avatar whose face and gesture (head) convey the machine’s Personal Status resulting from the conversation in a way that is congruent with the speech it utters.
  4. Human-CAV (Connected Autonomous Vehicle) Interaction (HCI): a group of humans converse with a Connected Autonomous Vehicle (CAV) on a domain-specific subject (travel by car). The conversation can be held both outside of the CAV when the CAV recognises the humans to let them into the CAV or inside when the humans are sitting in the cabin. The two Environments are assumed to be noisy. The machine understands the Speech, and the human’s Personal Status shown on their Text, Speech, Face, and Gesture. The machine appears as the head and shoulders of an avatar whose Text, Speech, Face, and Gesture (Head) convey a Personal Status congruent with the Speech it utters.
  5. Avatar-Based Videoconference (ABV). Avatars representing geographically distributed humans participate in a videoconference reproducing the movements of the upper part of the human participants (from the waist up) with a high degree of accuracy. Some locations may have more than one participant. A special participant in the Virtual Environment where the Videoconference is held can be the Virtual Secretary. This is an entity displayed as an avatar not representing a human participant whose role is to: 1) make and visually share a summary of what other avatars say; 2) receive comments on the summary; 3) process the vocal and textual comments taking into account the avatars’ Personal Status showing in their text, speech, face, and gesture; 4) edit the summary accordingly; and 5) display the summary. A human participant or the meeting manager composes the avatars’ meeting room and assigns each avatar’s position and speech as they see fit.

These use cases imply a wide range of technologies (more than 40). While the requirements for these technologies and the full description of the use cases are planned to be approved at the next General Assembly (22 June), MPAI is preparing the Framework Licence and the Call for Technologies. The latter two are planned to be approved at the next-to-next General Assembly on 19 July. MPAI gives respondents about 3 months to complete their submissions.

More information about the MPAI process and the Framework Licence is available on the MPAI website.


MPAI for affordable Artificial Intelligence

After a series of ups and downs that lasted about sixty years, the set of technologies that go by the name of Artificial Intelligence (AI) has powerfully entered the design, production and strategy realities of many companies. Although it would not be easy – and would perhaps be an ineffective use of time – to argue against those who claim that AI is neither Artificial nor Intelligent, the term AI is sufficiently useful and indicative that it has found wide use in both discourse and practice.

To characterise AI, it is useful to compare it with the antecedent technology called Data Processing (DP). When handling a data source, a DP expert would work out the characteristics of the data, e.g., the values or, rather, the transformations of the data capable of extracting its most representative quantities. A good example was Digital Signal Processing (DSP), well represented by those agglomerates of sophisticated algorithms that go by the name of audio and video compression standards.

In all these cases we find wonderful examples of how human ingenuity has been able to dig into enormous masses of data for years and discover the peculiarities of audio and video signals one by one, to give them a more efficient representation, i.e., one that requires fewer bits to represent the same or nearly the same data.

AI presents itself as a radical alternative to what DP fans have done so far. Instead of employing humans to dig into the data to find hidden relationships, machines are trained to search for and find these hidden relationships. In other words, instead of training humans to find relationships, train humans to train the machine to find those relationships.

The machines intended for this purpose consist of a network of variously connected nodes. Drawing on the obvious parallel of the brain, the nodes are called neurons and the network is therefore called a neural network. In the training phase, the machine is presented with many – maybe millions of – examples and, thanks to an internal logic, the connections are corrected backwards so that the next time – hopefully – the result is better tuned.
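A minimal numerical illustration of that backwards correction is gradient descent on a single weight: fit y = w·x to examples by repeatedly nudging the weight in the direction that reduces the error. Real neural networks do the same, via backpropagation, on millions of weights.

```python
import numpy as np

# Minimal illustration of training: learn the rule behind the examples (here w = 3)
# by repeatedly correcting the connection weight "backwards" from the error.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1000)
y = 3.0 * x + rng.normal(0.0, 0.05, 1000)   # training examples with a little noise

w, learning_rate = 0.0, 0.1
for _ in range(200):
    error = w * x - y                       # how far the current predictions are off
    gradient = 2.0 * np.mean(error * x)     # direction in which the error grows
    w -= learning_rate * gradient           # correct the weight backwards
print(round(w, 2))                          # ~3.0
```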

Intuitively, it could be said that the more complex the universe of data that the machine must “learn”, the more complex the network must be. This is not necessarily true: the machine has been built to understand the internal relationships of the data, and what appears complex to us at first sight may be underlain by a rule, or a set of relatively simple rules, that the machine can “understand”.

Training a neural network can be expensive. The first cost element is the large amount of data needed to train the network; training can be supervised (a human tells the machine how well it fared) or unsupervised (the machine works this out by itself). The second cost element is the large amount of computation needed at each iteration to change the weights, i.e., the importance of the connections between neurons. The third cost element is access to the IT infrastructure that carries out the training. Finally, if the trained neural network is used to offer a service, there is the cost of accessing potentially substantial computing resources every time the machine produces an inference, that is, processes data to provide an answer.

On 19 July 2020, the idea of establishing a non-profit organisation with the mission of developing standards for data coding using mainly AI techniques was launched. One hundred days later the organisation was formed in Geneva under the name of MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence.

Why should we need an organisation for data coding standards using AI? The answer is simple and can be formulated as follows: MPEG standards – based on DP – have enormously accelerated, and actually promoted, the evolution and dissemination of audio-visual products, services and applications. It is, therefore, reasonable to expect that MPAI standards – based on AI – will accelerate the evolution and diffusion of products, services, and applications for the data economy. Yes, because even audio-visual sources in the end produce – and for MPEG always have produced – data.

One of the first objectives that MPAI set for itself was the pure and simple lowering of the development and operating costs of AI applications. How can a standard achieve this?

The answer starts a bit far away, that is, from the human brain. We know that the human brain is made up of connected neurons. However, the connections of the approximately 100 billion neurons are not homogeneously distributed, because the brain is made up of many neuronal “aggregations” whose function research in the field is gradually coming to understand. So, rather than neurons connecting with parts of the brain, we are talking about neurons that have many interconnections with other neurons within an aggregation, while it is the aggregation itself that passes the results of its processing to other aggregations. For example, the visual cortex – the part of the brain processing visual information, located in the occipital lobe and part of the visual pathway – has a layered structure with 6 interconnected layers, the 4th of which is further subdivided into 4 sublayers.

Whatever its motivations, one of the first standards approved by the MPAI General Assembly (in November 2021, 14 months after MPAI was established) was AI Framework (MPAI-AIF), a standard that specifies the architecture and constituent components of an environment able to implement AI systems consisting of AI Modules (AIM) organised in AI Workflows (AIW), as shown in Figure 1.

Figure 1 – Reference model of MPAI-AIF

The main requirements that have guided the development of the MPAI-AIF standard specifying this environment are:

  1. Independence from the operating system.
  2. Modularity of components.
  3. Interfaces that encapsulate components abstracted from the development environment.
  4. Wide range of implementation technologies: software (Microcontrollers to High-Performance Computing systems), hardware, and hardware-software.
  5. AIW execution in local and distributed Zero-Trust environments.
  6. AIF interaction with other AIFs operating in the vicinity (e.g., swarms of drones).
  7. Direct support for Machine Learning functions.
  8. Interface with MPAI Store to access validated components.

Controller performs the following functions:

  1. Offers basic functionality, e.g., scheduling, and communication between AIM and other AIF components.
  2. Manages resources according to the instructions given by the user.
  3. Is linked to all AIM/AIW in a given AIF.
  4. Activates/suspends/resumes/deactivates AIWs based on user or other inputs.
  5. Exposes three APIs:
    1. AIM APIs allow AIM/AIW to communicate with it (register, communicate and access the rest of the AIF environment).
    2. User APIs allow user or other controllers to perform high-level tasks (e.g., turn the controller on/off, provide input to the AIW via the controller).
    3. Controller-to-controller APIs allow a controller to interact with another controller.
  6. Accesses the MPAI Store APIs to communicate to the Store.
  7. Supports the execution of one or more AIWs locally or on multiple platforms.
  8. Communicates with other controllers running on separate agents, requiring one or more controllers in proximity to open remote ports.

Communication connects an output port of one AIM with an input port of another AIM using events or channels. It has the following characteristics:

  1. Activated jointly with the controller.
  2. Persistence is not required.
  3. Channels are Unicast – physical or logical.
  4. Messages have high or normal priority and are communicated via channels or events.

AI Module (AIM) receives data, performs a well-defined function and produces data. It has the following features:

  1. Communicates with other components via ports or events.
  2. Can incorporate other AIMs within it.
  3. Can register and log out dynamically.
  4. Can run locally or on different platforms, e.g., in the cloud or on swarms of drones, and communicate with a remote controller.

AI Workflow (AIW) is a structured aggregation of AIMs receiving and processing data according to a function determined by a use case and producing the required data.

Shared Storage stores data making it available to other AIMs.

AIM Storage stores the data of individual AIMs.

User Agent interfaces the user with an AIF via the controller.

Access offers access to static or slow-varying data that are required by the AIM, such as domain knowledge data, data models, etc.

MPAI Store stores and makes implementations available to users.
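To picture how the components listed above fit together, the hedged sketch below imagines a User Agent asking the Controller to set up and start an AIW; every name is a placeholder, since the normative APIs are those specified in the MPAI-AIF standard itself.

```python
from typing import Protocol

# Placeholder interfaces loosely mirroring the components described above;
# the normative APIs are those of the MPAI-AIF specification.

class Controller(Protocol):
    def register_aim(self, aim_name: str) -> None: ...                # AIM API
    def connect(self, output_port: str, input_port: str) -> None: ... # Communication
    def start_aiw(self, aiw_name: str) -> None: ...                   # User API

def run_use_case(controller: Controller) -> None:
    """User Agent sketch: register the AIMs of a use case, wire their ports and
    ask the Controller to activate the AIW."""
    for aim in ("SpeechRecognition", "LanguageUnderstanding", "SpeechSynthesis"):
        controller.register_aim(aim)
    controller.connect("SpeechRecognition.Text", "LanguageUnderstanding.Text")
    controller.start_aiw("ConversationWithEmotion")
```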

MPAI-AIF is an MPAI standard that can be freely downloaded from the MPAI website. An open-source implementation of MPAI-AIF will be available shortly.

MPAI-AIF is important because it lays the foundation on which other MPAI application standards can be implemented. So, it can be said that the description given above does not mark the conclusion of MPAI-AIF, but only the beginning. In fact, work is underway to provide MPAI-AIF with security support. The reference model is an extension of the model in Figure 1.

Figure 2 – Reference model of MPAI-AIF with security support

MPAI will shortly publish a Call for Technologies. In particular, the Call will request API proposals to access Trusted Services and Crypto Services.

We started by extolling the advantages of AI and complaining about the high costs of using the technology. How can MPAI-AIF lower costs and increase the benefits of AI? The answer lies in these expected developments:

  1. AIM implementers will be able to offer their AIMs on an open and competitive market.
  2. Application developers will be able to find the AIMs they need in the open and competitive market.
  3. Consumers will enjoy a wide selection of the best AI applications produced by competing application developers based on competing technologies.
  4. The demand for technologies enabling new and better-performing AIMs will fuel innovation.
  5. Society will be able to lift the veil of opacity behind which many of today’s monolithic AI-based applications hide.

MPAI develops data coding standards for applications that have AI as the core enabling technology. Any legal entity supporting the MPAI mission may join MPAI, if able to contribute to the development of standards for the efficient use of data.

Visit the MPAI website, contact the MPAI secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.

Most importantly: join MPAI – share the fun – build the future.