Multimodal Conversation (MPAI-MMC) specifies:
- Data Formats for analysis of text, speech, and other non-verbal components as used in human-machine and machine-machine conversation applications.
- Use Cases implemented in the AI Framework using Data Formats from MPAI-MMC and other MPAI standards and providing recognised applications in the Multimodal Conversation domain.
This Technical Specification includes the following Use Cases:
- “Conversation with Personal Status” (CPS), enabling conversation and question answering with a machine that can extract the inner state of the entity it is conversing with and that presents itself as a speaking digital human able to express a Personal Status. By adding minor components to, or removing them from, this general Use Case, five further Use Cases are derived:
- “Conversation About a Scene” (CAS), where a human, pointing at objects scattered in a room and displaying Personal Status in their speech, face, and gestures, converses with a machine that responds displaying its Personal Status in speech, face, and gesture.
- “Virtual Secretary for Videoconference” (VSV), where an avatar that does not represent a human in a virtual avatar-based videoconference extracts Personal Status from Text, Speech, Face, and Gesture, displays a summary of what the other avatars say, and receives and acts on comments.
- “Human-Connected Autonomous Vehicle Interaction” (HCI), where humans, after having been identified by the machine through their speech and face in outdoor and indoor conditions, converse with the machine displaying their Personal Status, while the machine responds displaying its Personal Status in speech, face, and gesture.
- “Conversation with Emotion” (CWE), supporting audio-visual conversation with a machine impersonated by a synthetic voice and an animated face.
- “Multimodal Question Answering” (MQA), supporting requests for information about a displayed object.
- Three Use Cases supporting text and speech translation applications. In each Use Case, users can specify whether speech or text is used as input and, if it is speech, whether their speech features are preserved in the interpreted speech:
- “Unidirectional Speech Translation” (UST).
- “Bidirectional Speech Translation” (BST).
- “One-to-Many Speech Translation” (MST).
- The “Personal Status Extraction” Composite AIM, which estimates the Personal Status – conveyed by Text, Speech, Face, and Gesture – of an Entity, i.e., a real or digital human.
Note that:
- Each Use Case normatively defines:
- The Functions of the AIW implementing it and of the AIMs.
- The Connections between and among the AIMs.
- The Semantics and the Formats of the input and output data of the AIW and the AIMs.
- Each Composite AIM normatively defines:
- The Functions of the Composite AIM and of its AIMs.
- The Connections between and among the AIMs.
- The Semantics and the Formats of the input and output data of the Composite AIM and the AIMs.
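As an illustration of these normative elements, the topology of an AIW – its AIMs and the Connections among them – can be sketched as a small graph of components with declared input and output data. The AIM names, port names, and data types below are hypothetical, chosen only to mirror the kind of workflow this specification describes; they are not the normative MPAI definitions.

```python
from dataclasses import dataclass, field

@dataclass
class AIM:
    # An AIM declares the data types it receives and produces.
    name: str
    inputs: list[str]
    outputs: list[str]

@dataclass
class AIW:
    # An AIW is a set of AIMs plus the Connections among them.
    aims: dict[str, AIM] = field(default_factory=dict)
    # Each Connection: (source AIM, output data, destination AIM, input data).
    connections: list[tuple[str, str, str, str]] = field(default_factory=list)

    def add(self, aim: AIM) -> None:
        self.aims[aim.name] = aim

    def connect(self, src: str, out: str, dst: str, inp: str) -> None:
        # Reject Connections that the AIMs' declared data do not support.
        if out not in self.aims[src].outputs:
            raise ValueError(f"{src} has no output {out!r}")
        if inp not in self.aims[dst].inputs:
            raise ValueError(f"{dst} has no input {inp!r}")
        self.connections.append((src, out, dst, inp))

# Hypothetical two-AIM fragment of a conversation workflow.
w = AIW()
w.add(AIM("SpeechRecognition", inputs=["Speech"], outputs=["Text"]))
w.add(AIM("PersonalStatusExtraction", inputs=["Text"], outputs=["PersonalStatus"]))
w.connect("SpeechRecognition", "Text", "PersonalStatusExtraction", "Text")
```

In this reading, Conformance checking amounts to verifying that an Implementation's AIMs, their topology, and their data match such a normatively specified graph.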
The word “normatively” implies that an Implementation claiming Conformance to:
- An AIW, shall:
- Perform the AIW function specified in the appropriate Section of Chapter 5.
- Use AIMs whose topology and connections conform with the AIW Architecture specified in the appropriate Section of Chapter 5.
- Receive and produce AIW and AIM input and output data having the formats specified in the appropriate Sections of Chapter 7.
- An AIM, shall:
- Perform the functions specified by the appropriate Section of Chapter 5 or 6.
- Receive and produce the data specified in the appropriate Section of Chapter 7.
- A Data Format, shall ensure that the data has the format specified in Chapter 7.
Users of this Technical Specification should note that:
- This Technical Specification defines Interoperability Levels but does not mandate any.
- Implementers decide the Interoperability Level their Implementation satisfies.
- Implementers can use the Reference Software of this Technical Specification to develop their Implementations.
- The Conformance Testing specification can be used to test the conformance of an Implementation to this Standard.
- Performance Assessors can assess the level of Performance of an Implementation based on the Performance Assessment specification of this Standard.
- Implementers and Users should consider Annex 2 – Notices and Disclaimers.
The current Version of MPAI-MMC has been developed by the MPAI Multimodal Conversation Development Committee (MM-DC). MPAI expects to produce future MPAI-MMC Versions extending the scope of the existing Use Cases and/or adding new Use Cases within the Multimodal Conversation domain.