
HMC-CEC uses six groups of capability classes to process a Communication Item (an illustrative sketch follows the list):

  1. Receives Communication Items from a Machine or Audio-Visual Scenes from a real space.
  2. Extracts Personal Status from the Modalities (Text, Speech, Face, or Gesture) in the received Communication Item.
  3. Understands the Communication Item from the Modalities and the extracted Personal Status, with or without use of the spatial information embedded in the Communication Item.
  4. Translates using the set of Modalities available to the Machine.
  5. Generates a Response.
  6. Renders the Response using the available Modalities.
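
For illustration only, the following Python sketch shows one possible way to arrange the six capability classes as successive processing steps; the class, method, and type names are hypothetical and are not defined by MPAI-HMC.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CommunicationItem:
    """Hypothetical container for the Modalities of a Communication Item."""
    text: Optional[str] = None
    speech: Optional[bytes] = None
    face: Optional[bytes] = None
    gesture: Optional[bytes] = None
    spatial_info: Optional[dict] = None


class HmcCecPipeline:
    """Illustrative ordering of the six HMC-CEC capability classes."""

    def receive(self, source) -> CommunicationItem:
        """Receive a Communication Item from a Machine, or an
        Audio-Visual Scene captured from a real space."""
        ...

    def extract_personal_status(self, item: CommunicationItem) -> dict:
        """Extract Personal Status from the Text, Speech, Face, and
        Gesture Modalities present in the received Communication Item."""
        ...

    def understand(self, item: CommunicationItem, personal_status: dict) -> dict:
        """Understand the Communication Item from its Modalities and the
        extracted Personal Status, optionally using embedded spatial information."""
        ...

    def translate(self, meaning: dict) -> dict:
        """Translate using the set of Modalities available to the Machine."""
        ...

    def generate(self, meaning: dict) -> CommunicationItem:
        """Generate a Response."""
        ...

    def render(self, response: CommunicationItem) -> None:
        """Render the Response using the available Modalities."""
        ...
```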

Table 1 defines the Attributes and Sub-Attributes of the HMC-CEC Profiles. A Sub-Attribute is expressed with three characters, where the first two represent the medium and the third character, O, represents Object:

  1. The Audio-Visual Scene has the Text (TXO), Speech (SPO), Audio (AUO), Visual (VIO), and Portable Avatar (PAF) Sub-Attributes.
  2. Personal Status, Understanding, Translation, and Display Response have the Text (TXO), Speech (SPO), Face (FCO), and Gesture (GSO) Sub-Attributes.

The SPC Sub-Attribute of Understanding represents Spatial Information (SPaCe), i.e., the capability of an HMC-CEC implementation to also use Spatial Information to understand a Communication Item.

Table 1 – Attribute and Sub-Attribute Codes of HMC-CEC.

Attributes           Codes   Sub-Attribute Codes
Audio-Visual Scene   AVS     TXO  SPO  AUO  VIO  PAF
Personal Status      EPS     TXO  SPO  FCO  GSO
Understanding        UND     TXO  SPO  FCO  GSO  SPC
Translation          TRN     TXO  SPO  FCO  GSO
Display Response     RES     TXO  SPO  FCO  GSO
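
Table 1 can be read as a mapping from Attribute codes to their admissible Sub-Attribute codes. The Python dictionary below is only a restatement of Table 1 for convenience; it is not part of the normative specification.

```python
# Attribute codes of Table 1 mapped to their Sub-Attribute codes.
HMC_CEC_SUB_ATTRIBUTES = {
    "AVS": ["TXO", "SPO", "AUO", "VIO", "PAF"],  # Audio-Visual Scene
    "EPS": ["TXO", "SPO", "FCO", "GSO"],         # Personal Status
    "UND": ["TXO", "SPO", "FCO", "GSO", "SPC"],  # Understanding
    "TRN": ["TXO", "SPO", "FCO", "GSO"],         # Translation
    "RES": ["TXO", "SPO", "FCO", "GSO"],         # Display Response
}


def is_valid(attribute: str, sub_attribute: str) -> bool:
    """Check whether a Sub-Attribute code is admissible for an Attribute code."""
    return sub_attribute in HMC_CEC_SUB_ATTRIBUTES.get(attribute, [])
```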

The JSON file provides the formal specification of the MPAI-HMC Profiles.
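
As an indication only, the fragment below sketches what a profile expressed with the codes of Table 1 could look like; the field names and structure are assumptions for illustration and may differ from the actual JSON specification of the MPAI-HMC Profiles.

```python
import json

# Hypothetical profile description using the Attribute and Sub-Attribute codes
# of Table 1; the field names are illustrative, not the normative schema.
profile = json.loads("""
{
  "Profile": "ExampleProfile",
  "Attributes": {
    "AVS": ["TXO", "SPO"],
    "EPS": ["TXO", "SPO"],
    "UND": ["TXO", "SPO", "SPC"],
    "TRN": ["TXO", "SPO"],
    "RES": ["TXO", "SPO"]
  }
}
""")

print(profile["Attributes"]["UND"])  # ['TXO', 'SPO', 'SPC']
```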
