Multimodal Conversation

Multi-modal conversation (MPAI-MMC) aims to enable human-machine conversation that emulates human-human conversation in completeness and intensity by using AI.



Clarifications of the Call for Use Cases and Functional Requirements

MPAI-5 has approved the MPAI-MMC Use Cases and Functional Requirements as an attachment to the Call for Technologies N173. However, MMC-DC has identified some issues that are worth clarifying. This clarification is posted on the MPAI web site and will be communicated to those who have informed the Secretariat of their intention to respond.

General issue

MPAI understands that the scope of both N151 and N153 is very broad. Therefore it reiterates the point made in N152 and N154 that:

Completeness of a proposal for a Use Case is a merit because reviewers can assess that the components are integrated. However, submissions will be judged on the merit of what is proposed. An excellent submission on a single technology may be considered instead of a submission that is complete but relies on less performing technologies.

Multimodal Question Answering (Use case #2 in N153)

MPAI welcomes submissions that propose a standard set of “types of question intention” and the means to indicate the language used in the Query Format.

MPAI welcomes proposals of a concept format for Reply, in addition to or instead of a text format.

If Respondents elect not to address this point, the assessment of the rest of their submission will not be affected.

References

  1. MPAI-MMC Use Cases and Functional Requirements, MPAI N153; https://mpai.community/standards/mpai-mmc/#UCFR
  2. MPAI-MMC Call for Technologies, MPAI N154; https://mpai.community/standards/mpai-mmc/#Technologies
  3. MPAI-MMC Framework Licence, MPAI N173; https://mpai.community/standards/mpai-mmc/#Licence

Use Cases and Functional Requirements

This document is also available in MS Word format: MPAI-MMC Use Cases and Functional Requirements

1       Introduction

2       The MPAI AI Framework (MPAI-AIF)

3       Use Cases

3.1       Conversation with emotion (CWE)

3.2       Multimodal Question Answering (MQA)

3.3       Personalised Automatic Speech Translation (PST)

4       Functional Requirements

4.1       Introduction

4.2       Conversation with Emotion

4.2.1       Implementation architecture

4.2.2       AI Modules

4.2.3       I/O interfaces of AI Modules

4.2.4       Technologies and Functional Requirements

4.3       Multimodal Question Answering

4.3.1       Implementation Architecture

4.3.2       AI Modules

4.3.3       I/O interfaces of AI Modules

4.3.4       Technologies and Functional Requirements

4.4       Personalized Automatic Speech Translation

4.4.1       Implementation Architecture

4.4.2       AI Modules

4.4.3       I/O interfaces of AI Modules

4.4.4       Technologies and Functional Requirements

5       Potential common technologies

6       Terminology

7       References

1        Introduction

Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) is an international association with the mission to develop AI-enabled data coding standards. Research has shown that data coding with AI-based technologies is more efficient than with existing technologies.

The MPAI approach to developing AI data coding standards is based on the definition of standard interfaces of AI Modules (AIM). AIMs operate on input data having a standard format to provide output data having a standard format. AIMs can be combined and executed in an MPAI-specified AI-Framework called MPAI-AIF. The MPAI-AIF standard is being developed based on the responses to the Call for MPAI-AIF Technologies (N100) [2] satisfying the MPAI-AIF Use Cases and Functional Requirements (N74) [1].

While AIMs must expose standard interfaces to be able to operate in an MPAI AI Framework, the technologies used to implement them may influence their performance. MPAI believes that competing developers striving to provide better-performing proprietary and interoperable AIMs will promote horizontal markets of AI solutions that build on and further promote AI innovation.

This document is a collection of Use Cases and Functional Requirements for the MPAI Multimodal Conversation (MPAI-MMC) application area. The MPAI-MMC Use Cases enable human-machine conversation that emulates human-human conversation in completeness and intensity. Currently, MPAI has identified three Use Cases falling in the Multimodal Conversation area:

  1. Conversation with emotion (CWE)
  2. Multimodal Question Answering (MQA)
  3. Personalized Automatic Speech Translation (PST)

This document is to be read in conjunction with the document MPAI-MMC Call for Technologies (CfT) (N154) [5] as it provides the functional requirements of all the technologies that have been identified as required to implement the current MPAI-MMC Use Cases and Functional Requirements. Respondents to the MPAI-MMC CfT should make sure that their responses are aligned with the functional requirements expressed in this document.

In the future, MPAI may issue other Calls for Technologies falling in the scope of MPAI-MMC to support identified Use Cases.

It should also be noted that some technologies identified in this document are the same as, similar to, or related to technologies required to implement some of the Use Cases of the companion document MPAI-CAE Use Cases and Functional Requirements (N151) [3]. Readers are advised that familiarity with the content of that companion document is a prerequisite for a proper understanding of this document.

This document is structured in 7 chapters, including this Introduction.

Chapter 2 briefly introduces the AI Framework Reference Model and its six Components.
Chapter 3 briefly introduces the three Use Cases.
Chapter 4 presents the three MPAI-MMC Use Cases with the following structure:

1.     Reference architecture

2.     AI Modules

3.     I/O data of AI Modules

4.     Technologies and Functional Requirements

Chapter 5 identifies the technologies likely to be common across MPAI-MMC and MPAI-CAE, a companion standard project whose Call for Technologies is issued simultaneously with MPAI-MMC's.
Chapter 6 gives a basic list of relevant terms and their definitions.
Chapter 7 gives suggested references.

For the reader’s convenience, the meaning of the acronyms of this document is given in Table 1.

Table 1 – Acronyms used in this document

Acronym Meaning
AI Artificial Intelligence
AIF AI Framework
AIM AI Module
CfT Call for Technologies
CWE Conversation with emotion
DP Data Processing
KB Knowledge Base
ML Machine Learning
MQA Multimodal Question Answering
PST Personalized Automatic Speech Translation

2        The MPAI AI Framework (MPAI-AIF)

Most MPAI applications considered so far can be implemented as a set of AIMs – AI, ML and even traditional Data Processing (DP)-based units with standard interfaces assembled in suitable topologies to achieve the specific goal of an application and executed in an MPAI-defined AI Framework. MPAI is making all efforts to identify processing modules that are re-usable and upgradable without necessarily changing their inside logic. MPAI plans to complete the development of a 1st generation AI Framework called MPAI-AIF in July 2021.

The MPAI-AIF Architecture is given by Figure 1.

Figure 1 – The MPAI-AIF Architecture

MPAI-AIF is made up of 6 Components:

  1. Management and Control manages and controls the AIMs, so that they execute in the correct order and at the time when they are needed.
  2. Execution is the environment in which combinations of AIMs operate. It receives external inputs and produces the requested outputs, both of which are Use Case specific, activates the AIMs, exposes interfaces with Management and Control and interfaces with Communication, Storage and Access.
  3. AI Modules (AIM) are the basic processing elements receiving processing specific inputs and producing processing specific outputs.
  4. Communication is the basic infrastructure used to connect possibly remote Components and AIMs. It can be implemented, e.g., by means of a service bus.
  5. Storage encompasses traditional storage and is used, e.g., to store the inputs and outputs of the individual AIMs, intermediary results, data from the AIM states and data shared by AIMs.
  6. Access represents the access to static or slowly changing data that are required by the application such as domain knowledge data, data models, etc.
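As an illustration of how the six Components above interact, the following sketch (purely illustrative; the class names are assumptions, not MPAI-AIF normative interfaces) shows AIMs with standard-format inputs and outputs executed in order under Management and Control, with a dictionary standing in for Storage:

```python
# Illustrative sketch only: hypothetical names, not MPAI-AIF normative interfaces.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class AIM(ABC):
    """An AI Module: receives standard-format inputs, produces standard-format outputs."""

    @abstractmethod
    def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        ...


class ManagementAndControl:
    """Executes AIMs in the correct order, using a shared dictionary in place of Storage."""

    def __init__(self, aims: List[AIM]):
        self.aims = aims

    def execute(self, external_inputs: Dict[str, Any]) -> Dict[str, Any]:
        storage: Dict[str, Any] = dict(external_inputs)
        for aim in self.aims:                     # order assumed already resolved by the topology
            storage.update(aim.process(storage))  # each AIM reads and writes shared data
        return storage
```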

3        Use Cases

3.1       Conversation with emotion (CWE)

When people talk, they use multiple modalities. Emotion is one of the key features to understand the meaning of the utterances made by the speaker. Therefore, a conversation system with the capability to recognize emotion can better understand the user and produce a better reply.

This MPAI-MMC Use Case handles conversation with emotion. It is a human-machine conversation system in which the computer can recognize emotion in the user's speech and/or text, also using video information of the user's face, to produce a reply.

Emotion is recognised in the following way and reflected on the speech production side. First, a set of emotion-related cues is extracted from text, speech and video. Then, each recognition module (for text, speech and video) recognises emotion independently. The emotion recognition module determines the final emotion from these individual emotions and transfers it to the dialog processing module. The dialog processing module produces the reply based on the final emotion and on the meaning obtained from the text and video analysis. Finally, the speech synthesis module produces speech from the reply text.
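The following sketch illustrates how the final emotion could be determined from per-modality emotions and passed to dialog processing. It is an assumption for illustration only: the AIM names follow Table 2, but the majority-vote fusion rule and the toy dialog processing are placeholders, not MPAI-specified behaviour.

```python
# Illustrative sketch: fusion rule and dialog processing are placeholders.
from collections import Counter
from typing import Dict, List, Tuple


def fuse_emotions(modality_emotions: List[Tuple[str, float]]) -> str:
    """Emotion recognition AIM: choose the final Emotion from (emotion, confidence) pairs."""
    votes = Counter(emotion for emotion, _ in modality_emotions)
    emotion, count = votes.most_common(1)[0]
    if count > 1:
        return emotion                                        # at least two modalities agree
    return max(modality_emotions, key=lambda p: p[1])[0]      # otherwise trust the most confident


def dialog_processing(meaning: str, emotion: str) -> str:
    """Placeholder Dialog processing AIM; a real one would use the Dialog KB or a neural model."""
    return f"I see that you feel {emotion}. Tell me more about: {meaning}"


def conversation_with_emotion(text_emotion, speech_emotion, video_emotion, meaning) -> Dict[str, str]:
    final_emotion = fuse_emotions([text_emotion, speech_emotion, video_emotion])
    return {"reply": dialog_processing(meaning, final_emotion), "emotion": final_emotion}


print(conversation_with_emotion(("happiness", 0.6), ("happiness", 0.8), ("neutral", 0.4), "the trip"))
```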

3.2      Multimodal Question Answering (MQA)

Question Answering (QA) systems answer a user's question presented in natural language. Current QA systems only deal with cases where the input is in text or speech form. However, more attention is being paid these days to cases where mixed inputs, such as speech together with an image, are presented to the system. For example, a user asks: “Where can I buy this tool?” while showing a picture of the tool. In that case, the QA system should process the question text along with the image and find the answer to the question.

The question and image are recognised and analysed in the following way, and the answer is produced as output speech. The meaning of the question is recognised from the text or speech input. The image is analysed to find the object name, which is sent to the language understanding module. The language understanding module then generates the integrated meaning of the multimodal inputs. The intention analysis module determines the intention of the question and sends it to the QA module. The QA module produces the answer based on the intention of the question and the meaning from the language understanding module. Finally, the speech synthesis module produces speech from the answer text.
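A minimal sketch of this data flow is given below. It is an assumption for illustration only; the tiny intention classifier and the answer lookup are stand-ins for the Question analysis, QA and Online dictionary components.

```python
# Illustrative sketch of the MQA flow; all logic is a placeholder.
from typing import Dict


def multimodal_question_answering(question_text: str, object_name: str) -> Dict[str, str]:
    # Language understanding: integrate the question with the object name from Image analysis
    meaning = {"text": question_text, "object": object_name}
    # Question (intention) analysis: a toy classifier based on the question word only
    first_word = question_text.strip().lower().split()[0]
    intention = {"what": "definition", "where": "location", "who": "person"}.get(first_word, "other")
    # Question Answering: a real AIM would query the Online dictionary with meaning and intention
    answer = f"An answer about '{meaning['object']}' for a '{intention}' question would be retrieved here."
    return {"intention": intention, "answer": answer}


print(multimodal_question_answering("Where can I buy this tool?", "torque wrench"))
```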

3.3      Personalised Automatic Speech Translation (PST)

Automatic speech translation technology recognizes speech uttered in one language, converts it into another language through automatic translation, and outputs the result either as text subtitles or as synthesized speech that preserves the speaker's vocal features. Recently, as interest in speech synthesis as a key component of automatic interpretation has grown, research has concentrated on personalized speech synthesis, i.e., technology that outputs the target language, after speech recognition and automatic translation, as synthesized speech resembling the tone (or utterance style) of the original speaker.

An automatic interpretation system that generates synthetic speech with characteristics similar to the original speaker's voice includes a speech recognition module that produces text from the original speech signal and extracts characteristic information such as pitch, vocal intensity, speech speed and vocal tract characteristics of the original speech. The text produced by the speech recognition module then goes through an automatic translation module that generates the synthesis-target translation, and a speech synthesis module that generates synthetic speech resembling the original speaker using the extracted characteristic information.
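The chain can be summarised by the sketch below. It is illustrative only; the AIM functions are stubbed and the data types are assumptions loosely mirroring Table 6 and Table 7.

```python
# Illustrative PST chain with stubbed AIMs; nothing here is normative.
from dataclasses import dataclass
from typing import List


@dataclass
class SpeechFeatures:
    """Speaker-dependent characteristics to be preserved in the synthesised translation."""
    pitch_hz: float
    intensity_db: float
    speed_wpm: float
    vocal_tract_mfcc: List[float]


def speech_recognition(speech: bytes) -> str:
    return "annyeonghaseyo"                                    # stub


def speech_feature_extraction(speech: bytes) -> SpeechFeatures:
    return SpeechFeatures(180.0, 62.0, 140.0, [0.0] * 13)      # stub


def translation(text: str, target_language: str) -> str:
    return "hello"                                             # stub


def speech_synthesis(text: str, features: SpeechFeatures) -> bytes:
    return f"<speech text='{text}' pitch='{features.pitch_hz}'/>".encode()  # stub


def personalized_speech_translation(source_speech: bytes) -> bytes:
    text = speech_recognition(source_speech)
    features = speech_feature_extraction(source_speech)
    translated = translation(text, target_language="en")
    return speech_synthesis(translated, features)              # the speaker's character is reused here
```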

4        Functional Requirements

4.1       Introduction

The Functional Requirements developed in this document refer to the individual technologies identified as necessary to implement Use Cases belonging to MPAI-MMC application areas using AIMs operating in an MPAI-AIF AI Framework. The Functional Requirements developed adhere to the following guidelines:

  1. AIMs are defined to allow implementations by multiple technologies (AI, ML, DP)
  2. DP-based AIMs need interfaces, e.g., to a Knowledge Base. AI-based AIMs will typically require a learning process; however, support for this process is not included in this document. MPAI may develop further requirements covering that process in a future document.
  3. AIMs can be aggregated into larger AIMs. Some data flows of aggregated AIMs may no longer be exposed.
  4. AIMs may be influenced by the companion MPAI-CAE Use Cases and Functional Requirements [3] as some technologies needed by some MPAI-MMC AIMs share a significant number of functional requirements.
  5. Current AIMs do not feed information back to AIMs upstream. Respondents to the MPAI-MMC Call for Technologies [5] are welcome to motivate such feedback data flows and propose associated requirements.

The Functional Requirements described in the following sections are the result of a dedicated effort by MPAI experts over many meetings where different AIM partitionings have been proposed, discussed and revised. MPAI is aware that alternative partitionings or alternative I/O data to/from AIMs are possible, and those reading this document for the purpose of submitting a response to the MPAI-MMC Call for Technologies (N154) [5] are welcome to propose in their submissions alternative partitionings or alternative I/O data. However, they are required to justify the proposed new partitioning and to determine the functional requirements of the relevant technologies. The evaluation team will study the proposed alternative arrangement and may decide to accept all or part of it.

4.2       Conversation with Emotion

4.2.1      Implementation architecture

Possible architectures of this Use Case are given by Figure 2 and Figure 3. The two figures differ in the use of legacy DP technology vs AI technology:

  1. In Figure 2 some AIMs need a Knowledge Base to perform their tasks.
  2. In Figure 3 Knowledge Bases may not be required as the relevant information is embedded in neural networks that are part of an AIM.

Intermediate arrangements with only some Knowledge Bases are also possible, but not represented in a figure.

Figure 2 – Conversation with emotion (using legacy DP technologies)

Figure 3 – Conversation with emotion (fully AI-based)

4.2.2      AI Modules

The AI Modules of Conversation with Emotion are given in Table 2.

Table 2 – AI Modules of Conversation with Emotion

AIM | Function
Language understanding | Analyses natural language in a text format to produce its meaning and the emotion included in the text
Speech Recognition | Analyses the voice input and generates text output and the emotion carried by it
Video analysis | Analyses the video and recognises the emotion it carries
Emotion recognition | Determines the final emotion from multi-source emotions
Dialog processing | Analyses the user's Meaning and produces the Reply based on the meaning and emotion implied by the user's text
Speech synthesis | Produces speech from the Reply (the input text)
Face animation | Produces an animated face consistent with the machine-generated Reply
Emotion KB (text) | Contains words/phrases with associated emotion. Language understanding queries Emotion KB (text) to obtain the emotion associated with a text
Emotion KB (speech) | Contains features extracted from speech recordings of different speakers reading/reciting the same corpus of texts with an agreed set of emotions and without emotion, for a set of languages and for different genders. Speech recognition queries Emotion KB (speech) to obtain emotions corresponding to the features provided as input
Emotion KB (video) | Contains features extracted from video recordings of different people speaking with an agreed set of emotions and without emotion, for different genders. Video analysis queries Emotion KB (video) to obtain emotions corresponding to the features provided as input
Dialog KB | Contains sentences with associated dialogue acts. Dialog processing queries Dialog KB to obtain dialogue acts with associated sentences

4.2.3      I/O interfaces of AI Modules

The I/O data of AIMs used in Conversation with Emotion are given in Table 3.

Table 3 – I/O data of Conversation with Emotion AIMs

AIM | Input Data | Output Data
Video analysis | Video | Emotion, Meaning, Time stamp
Speech recognition | Input Speech, Response from Emotion KB (Speech) | Text, Emotion, Query to Emotion KB (Speech)
Language understanding | Input Text, Recognised Text, Response from Emotion KB (Text) | Text, Emotion, Meaning, Query to Emotion KB (Text)
Emotion recognition | Emotion (from text), Emotion (from speech), Emotion (from image) | Final Emotion
Dialog processing | Text, Meaning (from Language understanding and Video analysis), Final emotion, Response from Dialog KB | Reply (Text), Animation, Query to Dialog KB
Speech synthesis | Reply | Speech
Face animation | Animation parameters | Video
Emotion KB (text) | Query | Response
Emotion KB (speech) | Query | Response
Emotion KB (video) | Query | Response
Dialog KB | Query | Response

4.2.4      Technologies and Functional Requirements

4.2.4.1     Text

Text should be encoded according to ISO/IEC 10646, Information technology – Universal Coded Character Set (UCS) to support most languages in use [6].

To Respondents

Respondents are invited to comment on this choice.

4.2.4.2     Digital Speech

Speech is sampled at a frequency between 8 kHz and 96 kHz and digitally represented between 16 bits/sample and 24 bits/sample (both linear).

To Respondents

Respondents are invited to comment on these two choices.
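A trivial check of the two constraints above (a sketch only, not a conformance test) could read:

```python
# Checks only the sampling rate and bit depth ranges stated above.
def is_valid_digital_speech(sampling_rate_hz: int, bits_per_sample: int) -> bool:
    return 8_000 <= sampling_rate_hz <= 96_000 and 16 <= bits_per_sample <= 24


assert is_valid_digital_speech(48_000, 16)
assert not is_valid_digital_speech(4_000, 16)
```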

4.2.4.3     Digital Video

Digital video has the following features.

  1. Pixel shape: square
  2. Bit depth: 8-10 bits/pixel
  3. Aspect ratio: 4/3 and 16/9
  4. 640 < # of horizontal pixels < 1920
  5. 480 < # of vertical pixels < 1080
  6. Frame frequency 50-120 Hz
  7. Scanning: progressive
  8. Colorimetry: ITU-R BT709 and BT2020
  9. Colour format: RGB and YUV
  10. Compression: uncompressed, if compressed AVC, HEVC

 To Respondents

Respondents are invited to comment on these choices.
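For illustration, the features listed above can be captured as a small configuration record with a conformance check. This is a sketch only; the bounds are read as inclusive and chroma subsampling is ignored.

```python
# Sketch of the Digital Video constraints of this Use Case; not a normative checker.
from dataclasses import dataclass


@dataclass
class VideoFormat:
    width: int
    height: int
    bit_depth: int
    frame_rate_hz: float
    aspect_ratio: str      # "4:3" or "16:9"
    colorimetry: str       # "BT709" or "BT2020"
    compression: str       # "uncompressed", "AVC" or "HEVC"

    def is_conformant(self) -> bool:
        return (640 <= self.width <= 1920 and 480 <= self.height <= 1080
                and 8 <= self.bit_depth <= 10
                and 50 <= self.frame_rate_hz <= 120
                and self.aspect_ratio in ("4:3", "16:9")
                and self.colorimetry in ("BT709", "BT2020")
                and self.compression in ("uncompressed", "AVC", "HEVC"))


print(VideoFormat(1280, 720, 10, 60, "16:9", "BT709", "AVC").is_conformant())  # True
```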

4.2.4.4     Emotion

By Emotion we mean an attribute that indicates an emotion out of a finite set of Emotions.

Emotion is extracted from text, speech and video and digitally represented as an Emotion attribute.

The most basic emotions are described by the set “anger, disgust, fear, happiness, sadness, and surprise” [7], or “joy versus sadness, anger versus fear, trust versus disgust, and surprise versus anticipation” [8]. One of these sets can be taken as “universal” in the sense that its emotions are common across all cultures. An Emotion may have different Grades [9,10].

 To Respondents

Respondents are invited to propose:

  1. A minimal set of Emotions whose semantics are shared across cultures.
  2. A set of Grades that can be associated to Emotions.
  3. A digital representation of Emotions and their Grades [11].

This CfT does not specifically address culture-specific Emotions. However, the proposed digital representation of Emotions and their Grades should either be capable of accommodating culture-specific Emotions or be extensible to support them.
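One possible digital representation, shown only as a sketch and loosely inspired by W3C EmotionML [11], pairs an emotion label from an extensible vocabulary with a Grade and an optional culture tag; the field names and the basic vocabulary below are assumptions, not the set being called for.

```python
import json
from typing import Optional

# A hypothetical basic vocabulary; respondents are asked to propose the actual minimal set.
BASIC_EMOTIONS = {"anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"}


def encode_emotion(label: str, grade: float, vocabulary: str = "mpai-basic",
                   culture: Optional[str] = None) -> str:
    """Serialise an Emotion with its Grade in [0, 1]; other vocabularies allow extensions."""
    if vocabulary == "mpai-basic" and label not in BASIC_EMOTIONS:
        raise ValueError(f"{label!r} is not in the basic set")
    if not 0.0 <= grade <= 1.0:
        raise ValueError("grade must be in [0, 1]")
    return json.dumps({"vocabulary": vocabulary, "emotion": label, "grade": grade, "culture": culture})


print(encode_emotion("happiness", 0.8))
```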

4.2.4.5     Emotion KB (speech) query format

Emotion KB (speech) contains features extracted from speech recordings of different speakers reading/reciting the same corpus of texts with an agreed set of emotions and without emotion, for a set of languages and for different genders.

The Emotion KB (speech) is queried with a list of speech features. The Emotion KB responds with the emotions of the speech.

Speech features are extracted from the input speech and are used to determine the Emotion of the input speech.

Examples of features that have information about emotion are:

  1. Features to detect the arousal level of emotions: sequences of short-time prosody acoustic features (features estimated on a frame basis), e.g., short-term speech energy [15].
  2. Features related to the pitch signal (i.e., the glottal waveform) that depends on the tension of the vocal folds and the subglottal air pressure. Two parameters related to the pitch signal can be considered: pitch frequency and glottal air velocity. E.g., high velocity indicates a speech emotion like happiness. Low velocity is in harsher styles such as anger [17].
  3. The shape of the vocal tract is modified by the emotional states. The formants (characterized by a center frequency and a bandwidth) could be a representation of the vocal tract resonances. Features related to the number of harmonics due to the non-linear airflow in the vocal tract. E.g., in the emotional state of anger, the fast air flow causes additional excitation signals other than the pitch. Teager Energy Operator-based (TEO) features could be an example of measure of the harmonics and cross-harmonics in the spectrum [18].

An example of such features could be the Mel-frequency cepstrum (MFC) [19].

To Respondents

Respondents are requested to propose an Emotion KB (speech) query format that satisfies the following requirements:

  1. Capable of querying by specific speech features
  2. Speech features should be:
    1. Suitable for extraction of Emotion information from natural speech containing emotion.
    2. Extensible, i.e., capable of including additional speech features.

When assessing proposed Speech features, MPAI may resort to objective testing.

Note: An AI-based implementation may not need Emotion KB (Speech).
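Purely as an illustration of what such a query format might look like (the JSON field names and the MFCC-only feature set are assumptions, not the format being called for), a query/response pair could be sketched as:

```python
import json
from typing import Dict, List


def build_speech_emotion_query(mfcc_frames: List[List[float]], language: str, gender: str) -> str:
    """Query Emotion KB (speech) with frame-level features; more feature types could be added."""
    return json.dumps({
        "features": {"type": "MFCC", "frames": mfcc_frames},   # extensible: pitch, energy, TEO, ...
        "language": language,
        "gender": gender,
    })


def parse_speech_emotion_response(payload: str) -> Dict[str, float]:
    """Expected response: emotions with confidences, e.g. {"anger": 0.7, "neutral": 0.2}."""
    return json.loads(payload)["emotions"]
```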

4.2.4.6     Emotion KB (text) query format

Emotion KB (text) contains text features extracted from a text corpus with an agreed set of Emotions, for a set of languages and for different genders.

The Emotion KB (text) is queried with a list of Text features. Text features considered are:

  1. grammatical features, e.g., parts of speech.
  2. named entities, places, people, organisations.
  3. semantic features, e.g., roles, such as agent [21].

The Emotion KB (text) responds by giving Emotions correlated with the text features provided as input.

To Respondents

Respondents are requested to propose an Emotion KB (text) query format that satisfies the fol­lowing requirements:

  1. Capable of querying by specific Text features.
  2. Text features should be:
    1. Suitable for extraction of Emotion information from natural language text containing Emotion.
    2. Extensible, i.e., capable of including additional text features.

When assessing the proposed Text features, MPAI may resort to objective testing.

Note: An AI-based implementation may not need Emotion KB (Text).

4.2.4.7     Emotion KB (video) query format

Emotion KB (video) contains features extracted from the video recordings of different speakers reading/reciting the same corpus of texts with and without an agreed set of emotions and meanings, for different genders.

Emotion KB (video) is queried with a list of Video features. Emotion KB responds with the associated Emotion, its Grade, and Meaning.

To Respondents

Respondents are requested to propose an Emotion KB (video) query format that satisfies the following requirements:

  1. Capable of querying by specific Video features.
  2. Video features should be:
    1. Suitable for extraction of emotion information from a video con­taining the face of a human expressing emotion.
    2. Extensible, i.e., capable of including additional Video features.

When assessing proposed Video features, MPAI may resort to objective testing.

Note: An AI-based implementation may not need Emotion KB (video).

4.2.4.8     Meaning

Meaning is information extracted from the input text, speech and video such as question, statement, exclamation, expression of doubt, request, invitation [18].

To Respondents

Respondents are requested to propose an extensible list of meanings and their digital representations satisfying the following requirements:

  1. The meaning extracted from the input text shall have a structure that includes grammatical information and semantic information.
  2. The digital representation of meaning shall allow for the addition of new features to be used in different applications.
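A sketch of a Meaning structure with a grammatical layer and a semantic layer, as the two requirements above ask for, is given below; the field names and values are illustrative assumptions only.

```python
# Hypothetical Meaning record for "Where can I buy this tool?"
meaning_example = {
    "utterance_type": "question",                  # question, statement, request, ...
    "grammar": {
        "tokens": ["where", "can", "I", "buy", "this", "tool"],
        "pos": ["ADV", "AUX", "PRON", "VERB", "DET", "NOUN"],
    },
    "semantics": {
        "predicate": "buy",
        "roles": {"agent": "I", "theme": "tool"},  # extensible with new features per application
    },
}
```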

4.2.4.9     Dialog KB query format

Dialog KB contains sentence features with associated dialogue acts. Dialog processing AIM queries Dialog KB to obtain dialogue acts with associated sentence features.

The Dialog KB is queried with sentence features. The sentence features considered are:

  1. Sentences analysed by the language understanding AIM.
  2. Sentence structures.
  3. Sentences with semantic features for the words composing sentences, e.g., roles, such as agent [21].

The Dialog KB responds by giving dialog acts correlated with the sentence provided as input.

To Respondents

Respondents are requested to propose a Dialog KB query format that satisfies the following requirements:

  1. Capable of querying by specific sentence features.
  2. Sentence features should be:
    1. Suitable for extraction of sentence structures and meaning.
    2. Extensible, i.e., capable of including additional sentence features.

When assessing the proposed Sentence features, MPAI may resort to objective testing.

Note: An AI-based implementation may not need Dialog KB.

4.2.4.10  Input to speech synthesis (Reply)

Respondents should propose suitable technology for driving the speech synthesiser. Note that “Text with emotion” and “Concept with emotion” are both candidates for consideration.

To Respondents

Text with emotion

A standard format for text with Emotions attached to different portions of the text. An example of how emotion could be added to text is offered by emoticons.

Text should be encoded according to ISO/IEC 10646, Information technology – Universal Coded Character Set (UCS) to support most languages in use.

Respondents are requested to comment on the choice of the character set and to propose a solution for emotion added to a text satisfying the following requirements:

  1. It should include a scheme for annotating text with emotion, either with emotion expressed as text or with additional characters.
  2. It should include an extensible emotion annotation representation scheme for basic emotions.
  3. The emotion annotation representation scheme should be language independent.

Concept with emotion

Respondents are requested to propose a digital representation of concepts that makes it possible to go straight from meaning and emotion to a “concept to speech” synthesiser, as, e.g., in [28].
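As an illustration of the “Text with emotion” option, spans of UCS text could carry an Emotion and a Grade; the structure below is an assumption for illustration, not a proposed format.

```python
# Hypothetical "Text with emotion" for a Reply; character offsets delimit each annotated span.
reply_with_emotion = {
    "text": "That is wonderful news! Let me check the details.",
    "annotations": [
        {"start": 0, "end": 23, "emotion": "happiness", "grade": 0.9},
        {"start": 24, "end": 49, "emotion": "neutral", "grade": 1.0},
    ],
}
```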

4.2.4.11  Input to face animation

A face can be animated using the same parameters used to synthesise speech.

To Respondents

Respondents are requested to propose the same types of data format as for speech, or to propose and justify a different data format.

4.3       Multimodal Question Answering

4.3.1      Implementation Architecture

Possible architectures of this Use Case are given by Figure 4 and Figure 5. In the former case some AIMs need a Knowledge Base to perform their tasks. In the latter case Knowledge Bases may not be required as the relevant information is embedded in neural networks that are part of an AIM. Intermediate arrangements where only some Knowledge Bases are used are also possible but not represented by a figure.

Figure 4 – Multimodal Question Answering (using legacy DP technologies)

Figure 5 – Multimodal Question Answering (fully AI-based)

4.3.2      AI Modules

The AI Modules of Multimodal Question Answering are given in Table 4.

Table 4 – AI Modules of Multimodal Question Answering

AIM | Function
Language understanding | Analyses natural language expressed as text using a language model to produce the meaning of the text
Speech Recognition | Analyses the voice input and generates text output
Speech synthesis | Converts input text to speech
Image analysis | Analyses the image and produces the name of the object in focus
Question analysis | Analyses the meaning of the sentence and determines the Intention
Question Answering | Analyses the user's question and produces a reply based on the user's Intention
Intention KB | Responds to queries using a question ontology to provide the features of the question
Image KB | Responds to Image analysis queries providing the name of the object in the image
Online dictionary | Allows the Question Answering AIM to find answers to the question

4.3.3      I/O interfaces of AI Modules

The I/O data of AIMs used in Multimodal Question Answering are given in Table 5.

Table 5 – I/O data of Multimodal Question Answering AIMs

AIM | Input Data | Output Data
Speech Recognition | Digital Speech | Text
Image analysis | Image, Image KB response | Image KB query, Text
Language understanding | Text (recognised speech), Text (object name from Image analysis) | Meaning
Question analysis | Meaning, Intention KB response | Intention, Intention KB query
QA | Meaning, Text, Intention, Online dictionary response | Text, Online dictionary query
Speech synthesis | Text | Digital speech
Intention KB | Query | Response
Image KB | Query | Response
Online dictionary | Query | Response
Dialog KB | Query | Response

4.3.4      Technologies and Functional Requirements

4.3.4.1     Text

Text should be encoded according to ISO/IEC 10646, Information technology – Universal Coded Character Set (UCS) to support most languages in use [6].

To Respondents

Respondents are invited to comment on this choice.

4.3.4.2     Digital Speech

Multimodal QA (MQA) requires that speech be sampled at a frequency between 8 kHz and 96 kHz and digitally represented between 16 bits/sample and 24 bits/sample (linear).

To Respondents

Respondents are invited to comment on these two choices.

4.3.4.3     Digital Image

A Digital image is an uncompressed or a JPEG compressed picture [23].

To Respondents

Respondents are invited to comment on this choice.

4.3.4.4     Image KB query format

Image KB contains feature vectors extracted from different images of those objects intended to be used in this Use Case [29].

The Image KB is queried with a vector of image features extracted from the input image representing an object [21]. The Image KB responds by giving the identifier of the object.

To Respondents

Respondents are requested to propose an Image KB query format that satisfies the following requirements:

  1. Capable of querying by specific Image features.
  2. Image features should be:
    1. Suitable for querying the Image KB.
    2. Extensible to include additional image features and additional object types.

When assessing proposed Image features, MPAI may resort to objective testing.

An AI-Based implementation may not need Image KB.
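For illustration only, a query by feature vector could be served by a nearest-neighbour lookup over the stored vectors; the vector size, the distance metric and the object identifiers below are assumptions.

```python
import numpy as np


class ImageKB:
    """Toy Image KB: stores one feature vector per known object and answers queries by distance."""

    def __init__(self, vectors: np.ndarray, object_ids: list):
        self.vectors = vectors              # shape (n_objects, feature_dim)
        self.object_ids = object_ids

    def query(self, image_features: np.ndarray) -> str:
        distances = np.linalg.norm(self.vectors - image_features, axis=1)
        return self.object_ids[int(np.argmin(distances))]


kb = ImageKB(np.random.rand(3, 128), ["hammer", "screwdriver", "torque_wrench"])
print(kb.query(np.random.rand(128)))
```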

4.3.4.5     Object identifier

The object must be uniquely identified.

To Respondents

Respondents are requested to propose a universally applicable object classification scheme.

4.3.4.6     Meaning

Meaning is information extracted from the input text such as question, statement, exclamation, expression of doubt, request, invitation [18].

To Respondents

Respondents are requested to propose an extensible list of meanings and their digital representations satisfying the following requirements:

  1. The meaning extracted from the input text shall have a structure that includes grammatical information and semantic information.
  2. The digital representation of meaning shall allow for the addition of new features to be used in different applications.

4.3.4.7     Intention KB query format

Intention KB contains question patterns, extracted from user questions, that denote intention types; the Intention is the result of question analysis.

For instance, what, where, from where, for whom, by whom, how… [22].

The Intention KB is queried by giving text as input. Intention KB responds with the type of question intention.

To Respondents

Respondents are requested to propose an Intention KB query format satisfying the following requirements:

  1. Capable of querying by questions with meaning provided by the Language Understanding AIM.
  2. Extensible, i.e., capable of including additional intention features.

Respondents are requested to propose an extensible classification of Intentions and their digital representations satisfying the following requirements:

  1. The intention of the question shall be represented as including question types, question focus and question topics.
  2. The digital representation of intention shall be extensible, i.e., allow for the addition of new features to be used in different applications.

An AI-Based implementation may not need Intention KB.
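A sketch of an Intention record covering the three elements required above (question type, question focus and question topics) follows; the tiny classifier and the values are illustrative assumptions only.

```python
def analyse_intention(question: str) -> dict:
    """Toy Question analysis: derive question type, focus and topics from the surface form."""
    q = question.lower().strip()
    question_type = next((w for w in ("where", "what", "who", "when", "how") if q.startswith(w)), "other")
    return {
        "question_type": question_type,
        "focus": "place_of_purchase" if question_type == "where" and "buy" in q else "unspecified",
        "topics": ["shopping"] if "buy" in q else [],
    }


print(analyse_intention("Where can I buy this tool?"))
```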

4.3.4.8     Online dictionary query format

Online dictionary contains structured data that include topics and related information in the form of summaries, table of contents and natural language text [23].

The Online dictionary is queried by giving text as input. The Online dictionary responds with paragraphs in which answers highly correlated with the user's question can be found.

To Respondents

Respondents are requested to propose an Online dictionary KB query format satisfying the following requirements:

  1. Capable of querying by text as keywords.
  2. Extensible, i.e., capable of including additional text features.

4.4       Personalized Automatic Speech Translation

4.4.1      Implementation Architecture

The AI Modules of a personalized automatic speech translator are configured as in Figure 6.

Figure 6 – Personalized Automatic Speech Translation

4.4.2      AI Modules

The AI Modules of Personalized Automatic Speech Translation are given in Table 6.

Table 6 – AI Modules of Personalized Automatic Speech Translation

AIM | Function
Speech Recognition | Converts Speech into Text
Translation | Translates the user's text input in the source language to the target language
Speech feature extraction | Extracts Speech features specific to the speaker, such as tone, intonation, intensity, pitch, emotion or speed, from the input speech
Speech synthesis | Produces Speech from the text resulting from translation, with the speech features extracted from the speaker of the source language

4.4.3      I/O interfaces of AI Modules

The I/O data of AIMs used in Personalized Automatic Speech Translation are given in Table 7.

Table 7 – I/O data of Personalized Automatic Speech Translation AIMs

AIM | Input Data | Output Data
Speech Recognition | Digital Speech | Text
Translation | Text, Speech | Translation result
Speech feature extraction | Digital speech | Speech features
Speech synthesis | Translation result, Speech features | Digital speech

4.4.4      Technologies and Functional Requirements

4.4.4.1     Text

Text should be encoded according to ISO/IEC 10646, Information technology – Universal Coded Character Set (UCS) to support most languages in use [6].

To Respondents

Respondents are invited to comment on this choice.

4.4.4.2     Digital Speech

Speech should be sampled at a frequency between 8 kHz and 96 kHz and digitally represented between 16 bits/sample and 24 bits/sample (both linear).

To Respondents

Respondents are invited to comment on these two choices.

4.4.4.3     Speech features

Speech features such as tone, intonation, intensity, pitch, emotion or speed are used to encode the characteristics of the speaker's voice.

The following features should be included in the speech features to describe the speaker’s voice: pitch, prosodic structures per intonation phrase, vocal intensity, speed of the utterance per word/sentence/intonation phrase, vocal tract characteristics of the speaker of the source language, and additional speech features associated with hidden variables. The vocal tract characteristics can be expressed as characteristic parameters of Mel-frequency cepstral coefficient (MFCC) and glottal wave.
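As one possible illustration (an assumption; it relies on the open-source librosa library and covers only some of the features listed above), MFCC-based vocal tract parameters, pitch and intensity could be extracted as follows:

```python
import numpy as np
import librosa


def extract_speaker_features(wav_path: str) -> dict:
    """Extract a few speaker-dependent features; speech rate would need a transcript or a VAD."""
    y, sr = librosa.load(wav_path, sr=None)
    return {
        "mfcc_mean": librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1).tolist(),
        "pitch_hz_median": float(np.nanmedian(librosa.yin(y, fmin=60, fmax=400, sr=sr))),
        "intensity_rms": float(np.sqrt(np.mean(y ** 2))),
    }
```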

To Respondents

Respondents are requested to propose a set of speech features that shall be suitable for

  1. Extracting voice characteristic information from natural speech containing personal features.
  2. Producing synthesized speech reflecting the original user’s voice characteristics.

When assessing proposed Speech features, MPAI may resort to subjective/objective testing.

4.4.4.4     Language identification

ISO 639 – Codes for the Representation of Names of Languages — Part 1: Alpha-2 Code.

To Respondents

Respondents are requested to comment on this choice.

4.4.4.5     Translation results

Respondents should propose suitable technology for driving the speech synthesiser. “Text to speech” and “concept to speech” are both considered.

To Respondents

Text to speech

Text should be encoded according to ISO/IEC 10646, Information technology – Universal Coded Character Set (UCS) to support most languages in use.

Respondents are requested to comment on the choice of character set.

Concept to speech

Respondents are requested to propose digital representation of concept that enables to go straight from translation result to “concept to speech synthesiser”, as, e.g., in [28].

5        Potential common technologies

Table 8 introduces the MPAI-CAE and MPAI-MMC acronyms.

Table 8 – Acronyms of MPAI-CAE and MPAI-MMC Use Cases

Acronym App. Area Use Case
EES MPAI-CAE Emotion-Enhanced Speech
ARP MPAI-CAE Audio Recording Preservation
EAE MPAI-CAE Enhanced Audioconference Experience
AOG MPAI-CAE Audio-on-the-go
CWE MPAI-MMC Conversation with emotion
MQA MPAI-MMC Multimodal Question Answering
PST MPAI-MMC Personalized Automatic Speech Translation

Table 9 gives all MPAI-CAE and MPAI-MMC technologies in alphabetical order.

Please note the following acronyms

KB Knowledge Base
QF Query Format

Table 9 – Alphabetically ordered MPAI-CAE and MPAI-MMC technologies

Notes: UC = Use Case; UCFR = Use Cases and Functional Requirements document number; Section = section of the above document; Technology = name of technology.

UC UCFR Section Technology
EAE N151 4.4.4.4 Delivery
AOG N151 4.5.4.7 Delivery
CWE N153 4.2.4.9 Dialog KB query format
ARP N151 4.3.4.1 Digital Audio
AOG N151 4.5.4.1 Digital Audio
ARP N151 4.3.4.3 Digital Image
MQA N153 4.3.4.3 Digital Image
EES N151 4.2.4.1 Digital Speech
EAE N151 4.4.4.1 Digital Speech
CWE N153 4.2.4.2 Digital Speech
MQA N153 4.3.4.2 Digital Speech
PST N153 4.4.4.2 Digital Speech
ARP N151 4.3.4.2 Digital Video
CWE N153 4.2.4.3 Digital Video
EES N151 4.2.4.2 Emotion
CWE N153 4.2.4.4 Emotion
EES N151 4.2.4.4 Emotion descriptors
CWE N153 4.2.4.5 Emotion KB (speech) query format
CWE N153 4.2.4.6 Emotion KB (text) query format
CWE N153 4.2.4.7 Emotion KB (video) query format
EES N151 4.2.4.3 Emotion KB query format
MQA N153 4.3.4.4 Image KB query format
CWE N153 4.2.4.11 Input to face animation
CWE N153 4.2.4.10 Input to speech synthesis
MQA N153 4.3.4.7 Intention KB query format
PST N153 4.4.4.4 Language identification
CWE N153 4.2.4.8 Meaning
MQA N153 4.3.4.6 Meaning
EAE N151 4.4.4.2 Microphone geometry information
AOG N151 4.5.4.2 Microphone geometry information
MQA N153 4.3.4.5 Object identifier
MQA N153 4.3.4.8 Online dictionary query format
EAE N151 4.4.4.3 Output device acoustic model metadata KB query format
ARP N151 4.3.4.6 Packager
AOG N151 4.5.4.3 Sound array
AOG N151 4.5.4.4 Sound categorisation KB query format
AOG N151 4.5.4.5 Sounds categorisation
PST N153 4.4.4.3 Speech features
ARP N151 4.3.4.4 Tape irregularity KB query format
ARP N151 4.3.4.5 Text
CWE N153 4.2.4.1 Text
MQA N153 4.3.4.1 Text
PST N153 4.4.4.1 Text
PST N153 4.4.4.5 Translation results
AOG N151 4.5.4.6 User Hearing Profiles KB query format

The following technologies are shared or shareable across Use Cases:

  1. Delivery
  2. Digital speech
  3. Digital audio
  4. Digital image
  5. Digital video
  6. Emotion
  7. Meaning
  8. Microphone geometry information
  9. Text

Image features apply to different visual objects in MPAI-CAE and MPAI-MMC.

The Speech features in Use Cases of both standards are different. However, respondents may consider the possibility of proposing a unified set of Speech features, e.g., as proposed in [30].

6        Terminology

Table 10 – MPAI-MMC terms

Term | Definition
Access | Static or slowly changing data that are required by an application such as domain knowledge data, data models, etc.
AI Framework (AIF) | The environment where AIM-based workflows are executed
AI Module (AIM) | The basic processing elements receiving processing specific inputs and producing processing specific outputs
Communication | The infrastructure that connects the Components of an AIF
Dialog processing | An AIM that produces a reply based on the input speech/text
Digital Speech | Digitised speech as specified by MPAI
Emotion | An attribute that indicates an emotion out of a finite set of Emotions
Emotion Grade | The intensity of an Emotion
Emotion Recognition | An AIM that decides the final Emotion out of Emotions from different sources
Emotion KB (text) | A dataset of Text features with corresponding emotions
Emotion KB (speech) | A dataset of Speech features with corresponding emotions
Emotion KB (video) | A dataset of Video features with corresponding emotions
Emotion KB query format | The format used to interrogate a KB to find the relevant emotion
Execution | The environment in which AIM workflows are executed. It receives external inputs and produces the requested outputs, both of which are application specific
Image analysis | An AIM that extracts Image features
Image KB | A dataset of Image features with corresponding object identifiers
Intention | The result of question analysis that denotes information on the input question
Intention KB | A question classification providing the features of a question
Language Understanding | An AIM that analyses natural language as Text to produce its meaning and the emotion included in the text
Management and Control | Manages and controls the AIMs in the AIF, so that they execute in the correct order and at the time when they are needed
Meaning | Information extracted from the input text such as syntactic and semantic information
Online Dictionary | A dataset that includes topics and related information in the form of summaries, tables of contents and natural language text
Question Analysis | An AIM that analyses the meaning of a question sentence and determines its Intention
Question Answering | An AIM that analyses the user's question and produces a reply based on the user's Intention
Speech features | Features used to extract Emotion from Digital Speech
Speech feature extraction | An AIM that extracts Speech features from Digital Speech
Speech Recognition | An AIM that converts Digital Speech to Text
Speech Synthesis | An AIM that converts Text or concepts to Digital Speech
Storage | Storage used to, e.g., store the inputs and outputs of the individual AIMs, data from the AIM's state and intermediary results, and data shared by AIMs
Text | A collection of characters drawn from a finite alphabet
Translation | An AIM that converts Text in a source language to Text in a target language

7        References

  1. MPAI-AIF Use Cases and Functional Requirements, N74; https://mpai.community/standards/mpai-aif/#Requirements
  2. MPAI-AIF Call for Technologies, N100; https://mpai.community/standards/mpai-aif/#Technologies
  3. MPAI-CAE Use Cases and Functional Requirements, N151; https://mpai.community/standards/mpai-cae/#Requirements
  4. MPAI-CAE Call for Technologies, N152; https://mpai.community/standards/mpai-cae/#Technologies
  5. MPAI-MMC Call for Technologies, N154; https://mpai.community/standards/mpai-mmc/#Technologies
  6. ISO/IEC 10646:2003 Information Technology — Universal Multiple-Octet Coded Character Set (UCS)
  7. Ekman, P. (1999). Basic Emotions. In T. Dalgleish and T. Power (Eds.) The Handbook of Cognition and Emotion pp. 45–60. Sussex, U.K.: John Wiley & Sons, Ltd.
  8. Plutchik R., Emotion: a psychoevolutionary synthesis, New York Harper and Row, 1980
  9. Russell, James (1980). “A circumplex model of affect”. Journal of Personality and Social Psychology. 39 (6): 1161–1178. doi:10.1037/h0077714
  10. Cahn, J. E., The Generation of Affect in Synthesized Speech, Journal of the American Voice I/O Society, 8, July 1990, p. 1-19
  11. https://www.w3.org/TR/2014/REC-emotionml-20140522/
  12. Burkhardt, F., & Sendlmeier, W. F., Verification of Acoustical Correlates of Emotional Speech using Formant-Synthesis, ISCA Workshop on Speech & Emotion, Northern Ireland 2000, p. 151-156.
  13. Scherer, K. R., Ladd, D. R., & Silverman, K., Vocal cues to speaker affect: Testing two models, Journal of the Acoustic Society of America, 76(5), 1984, p. 1346-1356
  14. Kasuya, H., Maekawa, K., & Kiritani, S., Joint Estimation of Voice Source and Vocal Tract Parameters as Applied to the Study of Voice Source Dynamics, ICPhS 99, p. 2505-2512
  15. Mozziconacci, S. J. L., Speech Variability and Emotion: Production and Perception, PhD Thesis, Technical University Eindhoven, 1998
  16. Burkhardt, F., & Sendlmeier, W. F., Verification of Acoustical Correlates of Emotional Speech using Formant-Synthesis, ISCA Workshop on Speech & Emotion, Northern Ireland 2000, p. 151-156.
  17. Cahn, J. E., The Generation of Affect in Synthesized Speech, Journal of the American Voice I/O Society, 8, July 1990, p. 1-19
  18. Hamed Beyramienanlou, Nasser Lotfivand, “An Efficient Teager Energy Operator-Based Automated QRS Complex Detection”, Journal of Healthcare Engineering, vol. 2018, Article ID 8360475, 11 pages, 2018. https://doi.org/10.1155/2018/8360475
  19. Davis S B. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. Acoust. Speech Signal Process. 1980, 28(4):65-74
  20. Moataz El Ayadi, Mohamed S. Kamel, Fakhri Karray, Survey on speech emotion recognition: Features, classification schemes, and databases, Pattern Recognition, Elsevier, 44 (2011) 572–587; Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), Reykjavik, Iceland, pp. 3501–3504, May 2014.
  21. Mohamed Zakaria Kurdi (2017). Natural Language Processing and Computational Linguistics: semantics, discourse, and applications, Volume 2. ISTE-Wiley.
  22. Semaan, P. (2012). Natural Language Generation: An Overview. Journal of Computer Science & Research (JCSCR)-ISSN, 50-57
  23. Hudson, Graham; Léger, Alain; Niss, Birger; Sebestyén, István; Vaaben, Jørgen (31 August 2018). “JPEG-1 standard 25 years: past, present, and future reasons for a success”. Journal of Electronic Imaging. 27 (4)
  24. Hobbs, Jerry R.; Walker, Donald E.; Amsler, Robert A. (1982). “Natural language access to structured text”. Proceedings of the 9th conference on Computational linguistics. 1. pp. 127–32.
  25. M. Petrou, C. Petrou, Image Processing: The Fundamentals, Wiley, 2010
  26. Suman Kalyan Maity, Aman Kharb, Animesh Mukherjee, Language Use Matters: Analysis of the Linguistic Structure of Question Texts Can Characterize Answerability in Quora, ICWSM 2017
  27. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, Akiko Aizawa, Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of Reasoning Steps, COLING 2020
  28. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.433.7322&rep=rep1&type=pdf
  29. Mohamed Elgendy, Deep Learning for Vision Systems, Manning Publication, 2020
  30. Problem Agnostic Speech Encoder; https://github.com/santi-pdp/pase


Framework Licence

This document is also available in MS Word format: MPAI-MMC Framework Licence

1        Coverage

The MPAI Multimodal Conversation (MPAI-MMC) standard as it will be defined in document Nxyz of Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI).

MPAI-MMC specifies the input and output interfaces of the AIMs defined for the 3 Use Cases in N153, satisfying the requirements in N153.

2        Definitions

Term | Definition
Data | Any digital representation of a real or computer-generated entity, such as moving pictures, audio, point cloud, computer graphics, sensor and actuator. Data includes, but is not restricted to, media, manufacturing, automotive, health and generic data.
Development Rights | License to use MPAI-MMC Essential IPRs to develop Implementations
Enterprise | Any commercial entity that develops or implements the MPAI-MMC standard
Essential IPR | Any Proprietary Rights (such as patents) without which it is not possible on technical (but not commercial) grounds to make, sell, lease, otherwise dispose of, repair, use or operate Implementations without infringing those Proprietary Rights
Framework License | A document, developed in compliance with the generally accepted principles of competition law, which contains the conditions of use of the License without the values, e.g., currency, percent, dates, etc.
Implementation | A hardware and/or software reification of the MPAI-MMC standard serving the needs of a professional or consumer user directly or through a service
Implementation Rights | License to reify the MPAI-MMC standard
License | This Framework License to which values, e.g., currency, percent, dates, etc., related to a specific Intellectual Property will be added. In this Framework License, the word License will be used in the singular; however, multiple Licenses from different IPR holders may be issued
Profile | A particular subset of the technologies used in the MPAI-MMC standard and, where applicable, the classes, subsets, options and parameters relevant to that subset

3        Conditions of use of the License

  1. The License will be in compliance with generally accepted principles of competition law and the MPAI Statutes
  2. The License will cover all of Licensor’s claims to Essential IPR practiced by a Licensee of the MPAI-MMC standard.
  3. The License will cover Development Rights and Implementation Rights
  4. The License for Development and Implementation Rights, to the extent it is developed and implemented only for the purpose of evaluation or demo solutions or for technical trials, will be free of charge
  5. The License will apply to a baseline MPAI-MMC profile and to other profiles containing additional technologies
  6. Access to Essential IPRs of the MPAI-MMC standard will be granted in a non-discriminatory fashion.
  7. The scope of the License will be subject to legal, bias, ethical and moral limitations
  8. Royalties will apply to Implementations that are based on the MPAI-MMC standard
  9. Royalties will apply on a worldwide basis
  10. Royalties will apply to any Implementation, with the exclusion of the type of implementations specified in clause 4
  11. An MPAI-MMC Implementation may use other IPR to extend the MPAI-MMC Implementation or to provide additional functionalities
  12. The License may be granted free of charge for particular uses if so decided by the licensors
  13. A license free of charge for limited time and a limited amount of forfeited royalties will be granted on request
  14. A preference will be expressed on the entity that should administer the patent pool of holders of Patents Essential to the MPAI-MMC standard
  15. The total cost of the Licenses issued by IPR holders will be in line with the total cost of the Licenses for similar technologies standardised in the context of Standard Development Organisations
  16. The total cost of the Licenses will take into account the value on the market of the AI Framework technology Standardised by MPAI.


Call for Technologies

This document is also available in MS Word format MPAI-MMC Call for Technologies

1       Introduction

2       How to submit a response

3       Evaluation Criteria and Procedure

4       Expected development timeline

5       References

Annex A: Information Form

Annex B: Evaluation Sheet

Annex C: Requirements check list

Annex D: Technologies that may require specific testing

Annex E: Mandatory text in responses

1        Introduction

Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) is an international non-profit organisation with the mission to develop standards for Artificial Intelligence (AI) enabled digital data coding and for technologies that facilitate integration of data coding components into ICT systems. With the mechanism of Framework Licences, MPAI seeks to attach clear IPR licensing frameworks to its standards.

MPAI has found that the application area called “Multimodal Conversation” is particularly relevant for MPAI standardisation because using AI to enable human-machine conversation that emulates human-human conversation in completeness and intensity can substantially improve the user experience of a variety of applications in which humans converse with machines, ask questions about objects they show, or have their speech translated while preserving their vocal features.

Therefore, MPAI intends to develop a standard – to be called MPAI-MMC – that will provide standard technologies to implement the three Use Cases identified so far:

  1. Conversation with emotion
  2. Multimodal Question Answering
  3. Personalized Automatic Speech Translation

This document is a Call for Technologies (CfT) for technologies that:

  1. Satisfy the MPAI-MMC Functional Requirements of N153 [6], and
  2. Are released according to the MPAI-MMC Framework Licence (N173) [9], if selected by MPAI for inclusion in the MPAI-MMC standard.

The standard will be developed with the following guidelines:

  1. To satisfy the MPAI-MMC Functional Requirements (N153) [6]. In the future, MPAI may decide to extend MPAI-MMC to support other Use Cases as a part of the MPAI-MMC standard or as a future extension of it.
  2. To use, where feasible and desirable, the same basic tech­nol­ogies required by the companion document MPAI-CAE Use Cases and Functional Requir­ements [3].
  3. To be suitable for implementation as AI Modules (AIM) conforming to the emerging MPAI AI Framework (MPAI-AIF) standard, being developed based on the responses to the MPAI-AIF Call for Technologies (N100) [2] satisfying the MPAI-AIF Functional Requirements (N74) [1].

MPAI has decided to base its application standards on the AIM and AIF notions whose functional requirements have been identified in [1] rather than follow the approach of defining end-to-end systems. It has done so because:

  1. AIMs allow the reduction of large problems to sets of smaller problems.
  2. AIMs can be independently developed and made available to an open competitive market.
  3. An application developer can build a sophisticated and complex MPAI system with potentially limited knowledge of all the technologies required by the system.
  4. An MPAI system has a high level of inherent explainability.
  5. MPAI systems allow for competitive comparisons of functionally equivalent AIMs.

Respondents should be aware that:

  1. The Use Cases that make up MPAI-MMC and the AIM internals will be non-normative.
  2. The input and output interfaces of the AIMs, whose requirements have been derived to support the Use Cases, will be normative.

Therefore, the scope of this Call for Technologies is restricted to technologies required to implement the input and output interfaces of the AIMs identified in N153 [6].

However, MPAI invites comments on any technology or architectural component identified in N153, specifically,

  1. Additions or removals of input/output signals to the identified AIMs with justification of the changes and identification of data formats required by the new input/output signals.
  2. Possible alternative partitioning of the AIMs implementing the example cases providing:
    1. Arguments in support of the proposed partitioning
    2. Detailed specifications of the input and output data of the proposed new AIMs
  3. New Use Cases fully described as in N153.

All parties who believe they have relevant technologies satisfying all or most of the requirements of one or more than one Use Case described in N153 are invited to submit proposals for consid­eration by MPAI. MPAI membership is not a prerequisite for responding to this CfT. However, proponents should be aware that, if their proposal or part thereof is accepted for inclusion in the MPAI-MMC standard, they shall immediately join MPAI, or their accepted technologies will be discarded.

MPAI will select the most suitable technologies based on their technical merits for inclusion in MPAI-MMC. However, MPAI is not obligated, by virtue of this CfT, to select a particular technology or to select any technology if those submitted are found inadequate.

Submissions are due on 2021/04/12T23:59 UTC and should be sent to the MPAI secretariat (secretariat@mpai.community). The secretariat will acknowledge receipt of the submission via email. Submissions will be reviewed according to the schedule that the 7th MPAI General Assembly (MPAI-7) will define at its online meeting on 2021/04/14. For details on how submitters who are not MPAI members can attend the said review please contact the MPAI secretariat (secretariat@mpai.community).

2        How to submit a response

Those planning to respond to this CfT:

  1. Are advised that online events will be held on 2021/02/24 and 2021/03/10 to present the MPAI-MMC CfT and respond to questions. Logistic information on these events will be posted on the MPAI web site.
  2. Are requested to communicate their intention to respond to this CfT with an initial version of the form of Annex A to the MPAI secretariat (secretariat@mpai.community) by 2021/03/16. A potential submitter making a communication using the said form is not required to actually make a submission. A submission will be accepted even if the submitter did not communicate their intention to submit a response by the said date.
  3. Are advised to visit regularly the https://mpai.community/how-to-join/calls-for-technologies/ web site where relevant information will be posted.

Responses to this MPAI-MMC CfT shall/may include:

Table 1 – Mandatory and optional elements of a response

Item Status
Detailed documentation describing the proposed technologies mandatory
The final version of Annex A mandatory
The text of Annex B duly filled out with the table indicating which requirements identified in MPAI N153 [7] are satisfied. If not all the requirements of a Use Case are satisfied, this should be explained. mandatory
Comments on the completeness and appropriateness of the MPAI-MMC functional requirements and any motivated suggestion to amend and/or extend those requirements. optional
A preliminary demonstration, with a detailed document describing it. optional
Any other additional relevant information that may help evaluate the submission, such as additional use cases. optional
The text of Annex E. mandatory

Respondents are invited to take advantage of the check list of Annex C before submitting their response and filling out Annex A.

Respondents are mandatorily requested to present their submission at a teleconference meeting that will be duly announced to submitters by the MPAI Secretariat. If no presenter of a submission attends the meeting, the submission will be discarded.

Respondents are advised that, upon acceptance by MPAI of their submission in whole or in part for further evaluation, MPAI will require that:

  • A working implementation, including source code – for use in the development of the MPAI-MMC Reference Software and later publication as an MPAI standard – be made available before the technology is accepted for inclusion in the MPAI-MMC standard. Software may be written in programming languages that can be compiled or interpreted and in hardware description languages.
  • The working implementation be suitable for operation in the MPAI AI Framework (MPAI-AIF).
  • A non-MPAI member immediately join MPAI. If the non-MPAI member elects not to do so, their submission will be discarded. Directions on how to join MPAI can be found online.

Further information on MPAI can be obtained from the MPAI website.

3        Evaluation Criteria and Procedure

Proposals will be assessed using the following process:

  1. An Evaluation Panel is created from:
    1. All MMC-DC members attending.
    2. Non-MPAI members who are respondents.
    3. Non-respondent, non-MPAI-member experts invited in a consulting capacity.
  2. No one from categories 1.1–1.2 will be denied membership in the Evaluation Panel.
  3. Respondents present their proposals.
  4. Evaluation Panel members ask questions.
  5. If required, subjective and/or objective tests are carried out:
    1. Define the required tests.
    2. Carry out the tests.
    3. Produce a report.
  6. If required, at least two reviewers will be appointed to review and report on specific points in a proposal.
  7. Evaluation panel members fill out Annex B for each proposal.
  8. Respondents respond to evaluations.
  9. Proposal evaluation report is produced.

4        Expected development timeline

Timeline of the CfT, deadlines and response evaluation:

Table 2 – Dates and deadlines

Step Date
Call for Technologies 2021/02/17
CfT introduction conference call 1 2021/02/24T14:00 UTC
CfT introduction conference call 2 2021/03/10T15:00 UTC
Notification of intention to submit proposal 2021/03/16T23:59 UTC
Submission deadline 2021/04/12T23:59 UTC
Evaluation of responses will start 2021/04/14 (MPAI-7)

Evaluation to be carried out during 2-hour sessions according to the calendar agreed at MPAI-7.

5        References

  1. MPAI-AIF Use Cases and Functional Requirements, MPAI N74; https://mpai.community/standards/mpai-aif/
  2. MPAI-AIF Call for Technologies, MPAI N100; https://mpai.community/standards/mpai-aif/#Technologies
  3. MPAI-AIF Framework Licence, MPAI N171; https://mpai.community/standards/mpai-aif/#Licence
  4. MPAI-CAE Use Cases and Functional Requirements, MPAI N151; https://mpai.community/standards/mpai-cae/#UCFR
  5. MPAI-CAE Call for Technologies, MPAI N152; https://mpai.community/standards/mpai-cae/#Technologies
  6. MPAI-CAE Framework Licence, MPAI N171; https://mpai.community/standards/mpai-cae/#Licence
  7. MPAI-MMC Use Cases and Functional Requirements, MPAI N153; https://mpai.community/standards/mpai-mmc/#UCFR
  8. MPAI-MMC Call for Technologies, MPAI N154; https://mpai.community/standards/mpai-mmc/#Technologies
  9. MPAI-MMC Framework Licence, MPAI N173; https://mpai.community/standards/mpai-mmc/#Licence

Annex A: Information Form

This information form is to be filled in by a Respondent to the MPAI-MMC CfT

  1. Title of the proposal
  2. Organisation: company name, position, e-mail of contact person
  3. What are the main functionalities of your proposal?
  4. Does your proposal provide or describe a formal specification and APIs?
  5. Will you provide a demonstration to show how your proposal meets the evaluation criteria?

Annex B: Evaluation Sheet

NB: This evaluation sheet will be filled out by members of the Evaluation Team.

Proposal title:

Main Functionalities:

Response summary: (a few lines)

Comments on Relevance to the CfT (Requirements):

Comments on possible MPAI-MMC profiles[1]

Evaluation table:

Table 3 – Assessment of submission features

Note 1: The semantics of the Submission features are provided in Table 4.
Note 2: Evaluation elements indicate the elements used by the evaluator in assessing the submission.
Note 3: Final Assessment indicates the overall assessment based on the Evaluation elements.

 

Submission features Evaluation elements Final Assessment
Completeness of description

Understandability

Extensibility

Use of Standard Technology

Efficiency

Test cases

Maturity of reference implementation

Relative complexity

Support of MPAI use cases

Support of non-MPAI use cases

Content of the criteria table cells:

Evaluation facts should mention:

  • Not supported / partially supported / fully supported.
  • What supported these facts: submission/presentation/demo.
  • The summary of the facts themselves, e.g., very good in one way, but weak in another.

Final assessment should mention:

  • Possibilities to improve or add to the proposal, e.g., any missing or weak features.
  • How sure the evaluators are, i.e., evidence shown, very likely, very hard to tell, etc.
  • Global evaluation (Not Applicable/ –/ – / + / ++)

 New Use Cases/Requirements Identified:

(please describe)

Evaluation summary:

  • Main strong points, qualitatively:
  •  Main weak points, qualitatively:
  • Overall evaluation: (0/1/2/3/4/5)

0: could not be evaluated

1: proposal is not relevant

2: proposal is relevant, but requires significantly more work

3: proposal is relevant, but with a few changes

4: proposal has some very good points, so it is a good candidate for the standard

5: proposal is superior in its category, very strongly recommended for inclusion in the standard

Additional remarks: (points of importance not covered above.)

The submission features in Table 3 are explained in the following Table 4.

Table 4 – Explanation of submission features

Submission features Criteria
Completeness of description Evaluators should

1.     Compare the list of requirements (Annex C of the CfT) with the submission.

2.     Check whether respondents have described in sufficient detail to which parts of the requirements their proposal refers.

NB1: Completeness of a proposal for a Use Case is a merit because reviewers can assess that the components are integrated.

NB2: Submissions will be judged for the merit of what is proposed. A submission on a single technology that is excellent may be considered instead of a submission that is complete but has a less performing technology.

Understandability Evaluators should identify items that are demonstrably unclear (inconsistencies, sentences with dubious meaning, etc.).
Extensibility Evaluators should check if respondent has proposed extensions to the Use Cases.

NB: Extensibility is the capability of the proposed solution to support use cases that are not supported by current requirements.

Use of Standard Technology Evaluators should check if new technologies are proposed where widely adopted technologies exist. If this is the case, the merit of the new technology shall be proved.
Efficiency Evaluators should assess power consumption, computational speed, computational complexity.
Test cases Evaluators should report whether a proposal contains suggestions for testing the technologies proposed
Maturity of reference implementation Evaluators should assess the maturity of the proposal.

Note 1: Maturity is measured by completeness, i.e., whether all the necessary information and the appropriate parts of the HW/SW implementation of the submission have been disclosed.

Note 2: If there are parts of the implementation that are not disclosed but demonstrated, they will be considered if and only if such components are replicable.

Relative complexity Evaluators should identify issues that would make it difficult to implement the proposal compared to the state of the art.
Support of MPAI-MMC use cases Evaluators should check how many use cases are supported in the submission
Support of non MPAI-MMC use cases Evaluators should check whether the technologies proposed can demonstrably be used in other significantly different use cases.

Annex C: Requirements check list

Please note the following acronyms

KB Knowledge Base
QF Query Format

Table 5 – List of technologies identified in MPAI-MMC N153 [7]

Note: The numbers in the first column refer to the section numbers of N153 [7].

Technologies by Use Cases Response
Conversation with Emotion
4.2.4.1 Text Y/N
4.2.4.2 Digital Speech Y/N
4.2.4.3 Digital Video Y/N
4.2.4.4 Emotion Y/N
4.2.4.5 Emotion KB (speech) query format Y/N
4.2.4.6 Emotion KB (text) query format Y/N
4.2.4.7 Emotion KB (video) query format Y/N
4.2.4.8 Meaning Y/N
4.2.4.9 Dialog KB query format Y/N
4.2.4.10 Input to speech synthesis (Reply) Y/N
4.2.4.11 Input to face animation Y/N
Multimodal Question Answering
4.3.4.1 Text Y/N
4.3.4.2 Digital Speech Y/N
4.3.4.3 Digital Image Y/N
4.3.4.4 Image KB query format Y/N
4.3.4.5 Object identifier Y/N
4.3.4.6 Meaning Y/N
4.3.4.7 Intention KB query format Y/N
4.3.4.8 Online dictionary query format Y/N
Personalized Automatic Speech Translation
4.4.4.1 Text Y/N
4.4.4.2 Digital Speech Y/N
4.4.4.3 Speech features Y/N
4.4.4.4 Language identification Y/N
4.4.4.5 Translation results Y/N

Respondents should consult the equivalent list in N152 [5].

Annex D: Technologies that may require specific testing

Conversation with Emotion – Speech features
Conversation with Emotion – Text features
Conversation with Emotion – Video features
Multimodal Question Answering – Image features
Personalised Automatic Speech Translation – Speech features

 Additional technologies may be identified during the evaluation phase.

Annex E: Mandatory text in responses

A response to this MPAI-MMC CfT shall mandatorily include the following text

<Company/Member> submits this technical document in response to MPAI Call for Technologies for MPAI project MPAI-MMC (N153).

<Company/Member> explicitly agrees to the steps of the MPAI standards development process defined in Annex 1 to the MPAI Statutes (N80), in particular <Company/Member> declares that <Company/Member> or its successors will make available the terms of the Licence related to its Essential Patents according to the Framework Licence of MPAI-MMC (N173), alone or jointly with other IPR holders after the approval of the MPAI-MMC Technical Specification by the General Assembly and in no event after commercial implementations of the MPAI-MMC Technical Specification become available on the market.

In case the respondent is a non-MPAI member, the submission shall mandatorily include the following text

If (a part of) this submission is identified for inclusion in a specification, <Company>  understands that  <Company> will be requested to immediately join MPAI and that, if  <Company> elects not to join MPAI, this submission will be discarded.

Subsequent technical contributions shall mandatorily include this text

<Member> submits this document to the MPAI-MMC Development Committee (MMC-DC) as a contribution to the development of the MPAI-MMC Technical Specification.

<Member> explicitly agrees to the steps of the MPAI standards development process defined in Annex 1 to the MPAI Statutes (N80), in particular <Member> declares that <Member> or its successors will make available the terms of the Licence related to its Essential Patents according to the MPAI-MMC Framework Licence (N173), alone or jointly with other IPR holders after the approval of the MPAI-MMC Technical Specification by the General Assembly and in no event after commercial implementations of the MPAI-MMC Technical Specification become available on the market.

[1] A profile of a standard is a particular subset of the technologies used in the standard and, where applicable, the classes, subsets, options and parameters relevant for that subset.



Template for responses to the Call for Technologies

This document is also available in MS Word format Template for responses to the MPAI-MMC Call for Technologies

Abstract

This document is provided as a help to those who intend to submit responses to the MPAI-MMC Call for Technologies. Text in red (as in this sentence) provides guidance to submitters and should not be included in a submission. Text in green shall be mandatorily included in a submission. If a submission does not include the green text, the submission will be rejected.

If the submission is in multiple files, each file shall include the green statement.

Text in white is the text suggested to respondents for use in a submission.

1        Introduction

This document is submitted by <organisation name> (if an MPAI Member) and/or by <organisation name>, a <company, university etc.> registered in … (if a non-MPAI member) in response to the MPAI-MMC Call for Technologies issued by Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) on 2021/02/17 as MPAI document N154.

In the opinion of the submitter, this document proposes technologies that satisfy the requirements of MPAI-MMC Use Cases & Functional Requirements issued by MPAI on 2021/02/17 as MPAI document N153.

Possible additions

This document also contains comments on the requirements as requested by N153.

This document also contains proposed technologies that satisfy additional requirements as allowed by N153.

<Company and/or Member> explicitly agrees to the steps of the MPAI standards development process defined in Annex 1 to the MPAI Statutes (N80), in particular <Company and/or Member> declares that <Company and/or Member> or its successors will make available the terms of the Licence related to its Essential Patents according to the MPAI-MMC Framework Licence (N173), alone or jointly with other IPR holders after the approval of the MPAI-MMC Technical Specification by the MPAI General Assembly and in no event after commercial implementations of the MPAI-MMC Technical Specification become available on the market.

<Company and/or Member> acknowledges the following points:

  1. MPAI is not obligated, by virtue of this CfT, to select a particular technology or to select any technology, if those submitted are found inadequate.
  2. MPAI may decide to use the same technology for functionalities also requested in the MPAI-CAE Call for Technologies (N152) and the associated MPAI-CAE Functional Requirements (N151).
  3. A representative of <Company and/or Member> shall present this submission at an MMC-DC meeting communicated by the MPAI Secretariat (secretariat@mpai.community). If no representative of <Company and/or Member> attends the meeting and presents the submission, this submission will be discarded.
  4. <Company and/or Member> shall make available a working implementation, including source code – for use in the development of the MPAI-MMC Reference Software and eventual publication by MPAI as a normative standard – before the technology submitted is accepted for the MPAI-MMC standard.
  5. The software submitted may be written in programming languages that can be compiled or interpreted and in hardware description languages, upon acceptance by MPAI for further evaluation of their submission in whole or in part.
  6. <Company> shall immediately join MPAI upon acceptance by MPAI for further evaluation of this submission in whole or in part.
  7. If <Company> does not join MPAI, this submission shall be discarded.

2        Information about the submission

This information corresponds to Annex A of N154. It is included here for the submitter’s convenience.

  1. Title of the proposal
  2. Organisation: company name, position, e-mail of contact person
  3. What are the main functionalities of your proposal?
  4. Does your proposal provide or describe a formal specification and APIs?
  5. Will you provide a demonstration to show how your proposal meets the evaluation criteria?

3        Comments on/extensions to requirements (if any)

 

4        Overview of Requirements supported by submission

Please answer Y or N. Details on the specific answers can be provided in the submission.

Technologies by Use Cases Response
Conversation with Emotion
4.2.4.1 Text Y/N
4.2.4.2 Digital Speech Y/N
4.2.4.3 Digital Video Y/N
4.2.4.4 Emotion Y/N
4.2.4.5 Emotion KB (speech) query format Y/N
4.2.4.6 Emotion KB (text) query format Y/N
4.2.4.7 Emotion KB (video) query format Y/N
4.2.4.8 Meaning Y/N
4.2.4.9 Dialog KB query format Y/N
4.2.4.10 Input to speech synthesis (Reply) Y/N
4.2.4.11 Input to face animation Y/N
Multimodal Question Answering
4.3.4.1 Text Y/N
4.3.4.2 Digital Speech Y/N
4.3.4.3 Digital Image Y/N
4.3.4.4 Image KB query format Y/N
4.3.4.5 Object identifier Y/N
4.3.4.6 Meaning Y/N
4.3.4.7 Intention KB query format Y/N
4.3.4.8 Online dictionary query format Y/N
Personalized Automatic Speech Translation
4.4.4.1 Text Y/N
4.4.4.2 Digital Speech Y/N
4.4.4.3 Speech features Y/N
4.4.4.4 Language identification Y/N
4.4.4.5 Translation results Y/N

5        New Proposed requirements (if any)

1. Y/N
2. Y/N
3. Y/N

6        Detailed description of submission

6.1       Proposal chapter #1

6.2       Proposal chapter #2

….

7        Conclusions

 



MPAI Application Note #6

Multi-Modal Conversation (MPAI-MMC)

Proponent: Miran Choi (ETRI)

Description: Owing to recent advances in AI technologies, natural language processing has come to be widely used in various applications. One useful application is the conversational partner, which provides the user with information, entertains, chats and answers questions through a speech interface. However, an application should include more than just a speech interface to provide a better service to the user. For example, an emotion recognizer and a gesture interpreter are needed for better multi-modal interfaces.

Multi-modal conversation (MPAI-MMC) aims to enable human-machine conversation that emulates human-human conversation in completeness and intensity by using AI.

The interaction of AI processing modules implied by a multi-modal conversation system would look approximately as presented in Figure 1, where one can see a language understanding module, a speech recognition module, an image analysis module, a dialog processing module and a speech synthesis module (a non-normative sketch of one possible wiring of these modules is given below).

Figure 1 – Multi-Modal Conversation (emotion-focused)

Comments: The processing modules of the MPAI-MMC instance of Figure 1 would be operated in the MPAI-AIF framework.
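
Purely as a non-normative illustration of how the modules of Figure 1 could be combined, the Python sketch below chains hypothetical speech recognition, emotion recognition, language understanding, dialog processing and speech synthesis components; all names, data shapes and signatures are assumptions of the sketch and not part of MPAI-AIF or of this Application Note.

# Non-normative sketch: hypothetical module interfaces for the emotion-focused
# conversation pipeline of Figure 1. Names and signatures are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Utterance:
    audio: bytes          # digital speech from the microphone
    video_frame: bytes    # optional video of the user's face

@dataclass
class Analysis:
    text: str             # recognised text
    emotion: str          # e.g. "neutral", "happy", "tired" (assumed label set)
    meaning: dict         # structured result of language understanding

def converse(utterance: Utterance,
             recognise_speech: Callable[[bytes], str],
             recognise_emotion: Callable[[Utterance], str],
             understand: Callable[[str], dict],
             decide_reply: Callable[[Analysis], str],
             synthesise_speech: Callable[[str, str], bytes]) -> bytes:
    # Run one turn of an emotion-aware conversation.
    text = recognise_speech(utterance.audio)
    analysis = Analysis(text=text,
                        emotion=recognise_emotion(utterance),
                        meaning=understand(text))
    reply_text = decide_reply(analysis)                      # dialog processing
    return synthesise_speech(reply_text, analysis.emotion)   # emotion-aware synthesis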

Examples

Examples of MMC are conversations between a human user and a computer/robot, as in the following list. The input from the user can be voice, text, image or a combination of these. Taking the emotion of the human user into account, MMC will output responses as text, speech or music, depending on the user’s needs.

  • Chats: “I am bored. What should I do now?” – “You look tired. Why don’t you take a walk?”
  • Question Answering: “Who is the famous artist in Barcelona?” – “Do you mean Gaudi?”
  • Information Request: “What’s the weather today?” – “It is a little cloudy and cold.”
  • Action Request: “Play some classical music, please” – “OK. Do you like Brahms?”

Processing modules involved in MMC:

A preliminary list of processing modules is given below:

  1. Fusion of multi-modal input information
  2. Natural language understanding
  3. Natural language generation
  4. Speech recognition
  5. Speech synthesis
  6. Emotion recognition
  7. Intention understanding
  8. Image analysis
  9. Knowledge fusion from different sources such as speech, facial expression, gestures, etc
  10. Dialog processing
  11. Question Answering
  12. Machine Reading Comprehension (MRC)
  13. Speech Synthesis

Requirements:

These are the initial functional requirements; the full set will be developed in the Functional Requirements (FR) phase.

  1. The standard shall specify the following natural input signals:
  • Sound signals from a microphone
  • Text from a keyboard or keypad
  • Images from a camera
  2. The standard shall specify a user profile format (e.g. gender, age, specific needs, etc.); see the non-normative sketch after this list.
  3. The standard shall support emotion-based dialog processing that uses the emotion resulting from emotion recognition as input and decides the replies based on the user’s intention as output.
  4. The standard should provide means to carry emotion and user preferences in the speech synthesis processing module.
  5. Processing modules should be agnostic to the AI, ML or DP technology used: the standard should be general enough to avoid limitations in terms of algorithmic structure, storage and communication and to allow full interoperability with other processing modules.
  6. The standard should provide support for the storage of, and access to:
  • Unprocessed data in speech, text or image form
  • Processed data in the form of annotations (semantic labelling). Such annotations can be produced as the result of primary analysis of the unprocessed data or come from external sources such as a knowledge base.
  • Meta-data (such as collection date and place; classification data)
  • Structured data produced from the raw data.
  7. The standard should also provide support for:
  • The combination into a general analysis workflow of a number of computational blocks that access processed, and possibly unprocessed, data such as input channels, and produce output as a sequence of vectors in a space of arbitrary dimension.
  • The possibility of defining and implementing a novel processing block from scratch in terms of either some source code or a proprietary binary codec
  • A number of pre-defined blocks that implement well-known analysis methods (such as NN-based methods).
  • The parallel and sequential combination of processing modules that comprise different services.
  • Real-time processing of the conversation between the user and the robot/computer.
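
To make requirements 2, 3 and 6 above more concrete, the sketch below shows one possible, purely hypothetical shape for a user profile and for an annotated data record, expressed as Python literals; the normative formats are exactly what the Functional Requirements phase and the Call for Technologies are meant to define.

# Hypothetical, non-normative data shapes; the normative formats are to be
# defined in the FR phase and from the responses to the Call.
user_profile = {
    "gender": "female",
    "age": 67,
    "specific_needs": ["hearing impaired"],   # could drive accessibility behaviour
    "preferred_language": "en",
}

annotated_record = {
    "raw": {"type": "speech", "uri": "file://session-0001/utterance-12.wav"},
    "annotations": [                          # semantic labelling of the raw data
        {"label": "emotion", "value": "tired", "source": "emotion recognition"},
        {"label": "intention", "value": "request-suggestion", "source": "language understanding"},
    ],
    "metadata": {"collected": "2021-02-17T10:30:00Z", "place": "home"},
}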

 Object of standard: Interfaces of processing components utilized in multimodal communication.

  • Input interfaces: how to deal with inputs in different formats
  • Processing component interfaces: interfaces between a set of updatable and extensible processing modules
  • Delivery protocol interfaces: Interfaces of the processed data signal to a variety of delivery protocols
  • Framework: the glue keeping the pieces together => mapping to MPAI-AIF

Benefits:

  1. Decisively improve communication between humans and machines and the user experience
  2. Reuse of processing components for different applications
  3. Create a horizontal market of multimodal conversational components
  4. Make market more competitive

 Bottlenecks:

Some processing units should be improved because end-to-end processing has lower performance than modular approaches. Therefore, the standard should be able to cover traditional methods as well as hybrid approaches.

 Social aspects:

Enhanced user interfaces will provide accessibility for people with disabilities. MMC can also be used in caregiving services for the elderly and for patients.

Success criteria:

  • How easily MMC can be extended to different services by combining several processing modules.
  • The performance of multi-modality compared to uni-modality in the user interface.
  • Interconnection and integration among different processing modules.