Technical Specification: Multimodal Conversation (MPAI-MMC) V2.3, in the following also called MPAI-MMC V2.3 or simply MPAI-MMC, specifies:
- Data Types for use by MPAI-MMC V2.3 and other MPAI Technical Specifications.
- AI Modules enabling analysis of text, speech, and other non-verbal components used in human-machine and machine-machine conversation applications.
- AI Workflows implementing Use Cases that use AI Modules and Data Types from MPAI-MMC and other MPAI Technical Specifications to provide recognised applications in the Multimodal Conversation domain.
The Use Cases included in this Technical Specification are:
- Answer to Multimodal Question (MMC-AMQ), providing a text or speech answer to a question posed as text or speech together with an image.
- Conversation About a Scene (MMC-CAS), where a human converses with a machine while pointing at objects scattered in a room and displaying their Personal Status in speech, face, and gesture, and the machine responds displaying its Personal Status in speech, face, and gesture.
- Conversation with Personal Status (MMC-CPS), enabling conversation and question answering with a machine able to extract the inner state of the entity it is conversing with and showing itself as a speaking digital human able to express a Personal Status. By adding minor components to this general Use Case, or removing them from it, five Use Cases are spawned:
- Conversation with Emotion (MMC-CWE), enabling audio-visual conversation with a machine impersonated by a synthetic voice and an animated face.
- Human-Connected Autonomous Vehicle Interaction (MMC-HCI), where humans, after having been identified by the machine through their speech and face in outdoor and indoor conditions, converse with the machine displaying their Personal Status, while the machine responds displaying its Personal Status in speech, face, and gesture.
- Multimodal Question Answering (MQA), enabling requests for information about a displayed object.
- Text and Speech Translation (MMC-TST) supporting a variety of text and speech translation applications where users can specify whether speech or text is used as input and, if it is speech, whether their speech features are preserved in the interpreted speech.
- Virtual Meeting Secretary (MMC-VSV), where an avatar not representing a human in a virtual avatar-based video conference extracts Personal Status from Text, Speech, Face, and Gesture, displays a summary of what the other avatars say, and receives and acts on comments.
The Composite AI Module specified by MPAI-MMC V2.3 is Personal Status Extraction (MMC-PSE), which estimates the Personal Status conveyed by the Text, Speech, Face, and Gesture of an Entity, i.e., a real or digital human.
Note that:
- Each AI Workflow implementing a Use Case normatively defines:
- The Functions of the AIW and of its AIMs.
- The Connections between and among the AIMs.
- The Semantics and the Formats of the input and output data of the AIW and its AIMs.
- Each Composite AIM normatively defines:
- The Functions of the Composite AIM and of its AIMs.
- The Connections between and among the AIMs.
- The Semantics and the Formats of the input and output data of the Composite AIM and its AIMs.
The word normatively implies that an Implementation claiming Conformance to:
- An AIW, shall:
- Perform the AIW function specified in the appropriate Section of Chapter 5.
- Ensure that all AIMs, their topology, and their connections conform with the AIW Architecture specified in the appropriate Section of Chapter 5.
- Ensure that the AIW and AIM input and output data have the Formats specified in the appropriate Sections of Chapter 7.
- An AIM, shall:
- Perform the functions specified in the appropriate Section of Chapter 5 or 6.
- Receive and produce the data specified in the appropriate Section of Chapter 7.
- A data Format, shall ensure that the data have the Format specified in Chapter 7.
Implementers of this Technical Specification should note that:
- The Reference Software of this Technical Specification may be used to develop Implementations.
- The Conformance Testing specification may be used to test the conformity of an Implementation to this Standard.
- The level of Performance of an Implementation may be assessed based on the Performance Assessment specification of this Standard.
Users should consider Notices and Disclaimers.
MPAI-MMC V2.3 has been developed by the MPAI Multimodal Conversation Development Committee (MM-DC). MPAI expects to produce future MPAI-MMC Versions extending the scope of the Use Cases and/or adding new Use Cases supported by existing or new AI Modules and Data Types within the scope of Multimodal Conversation.