1     Version

V2.1

2     Functions

Multimodal Emotion Fusion (MMC-MEF):

  1. Receives:
    1. Emotion (Text)
    2. Emotion (Speech)
    3. Emotion (Face)
  2. Produces the input Entity’s Emotion by fusing the three input Emotions (see the sketch after this list).
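
The standard does not mandate a particular fusion algorithm. The following Python sketch illustrates one possible late-fusion approach, assuming each modality delivers a categorical emotion label with a confidence score; the ModalEmotion type, the weights, and the label set are illustrative assumptions, not part of this specification.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ModalEmotion:
    label: str         # e.g. "joy", "anger" (illustrative label set)
    confidence: float  # estimator confidence in [0, 1]

# Illustrative per-modality weights; not part of the specification.
WEIGHTS = {"text": 1.0, "speech": 1.0, "face": 1.0}

def fuse_emotions(text: ModalEmotion, speech: ModalEmotion,
                  face: ModalEmotion) -> str:
    """Late fusion: weighted vote over the three modal emotion labels."""
    scores = defaultdict(float)
    for modality, emotion in (("text", text), ("speech", speech), ("face", face)):
        scores[emotion.label] += WEIGHTS[modality] * emotion.confidence
    return max(scores, key=scores.get)

# Two modalities agree on "joy", so it wins the weighted vote.
print(fuse_emotions(ModalEmotion("joy", 0.8),
                    ModalEmotion("joy", 0.6),
                    ModalEmotion("neutral", 0.9)))  # -> joy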

3     Reference Architecture

Figure 1 depicts the Reference Architecture of the Multimodal Emotion Fusion AIM.

Figure 1 – The Multimodal Emotion Fusion AIM

4     I/O Data

Table 1 specifies the Input and Output Data of the Multimodal Emotion Fusion AIM.

Table 1 – I/O Data of the Multimodal Emotion Fusion AIM

Input data        From                       Comment
Emotion (Text)    PS-Text Interpretation     Emotion in Text
Emotion (Speech)  PS-Speech Interpretation   Emotion in Speech
Emotion (Face)    PS-Face Interpretation     Emotion in Face

Output data       To                         Description
Input Emotion     Downstream AIM             The Emotion estimated by fusing all inputs
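
For illustration only, the interface of Table 1 can be read as the following typed signature; all identifiers below are assumptions for readability, not normative names from this specification.

from dataclasses import dataclass
from typing import Protocol

@dataclass
class Emotion:
    """Placeholder for the MPAI Emotion data type."""
    label: str

class MultimodalEmotionFusion(Protocol):
    """I/O contract of Table 1: three modal Emotions in, one fused Emotion out."""
    def __call__(self,
                 emotion_text: Emotion,    # from PS-Text Interpretation
                 emotion_speech: Emotion,  # from PS-Speech Interpretation
                 emotion_face: Emotion     # from PS-Face Interpretation
                 ) -> Emotion:             # Input Emotion, to the downstream AIM
        ...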

5     SubAIMs

No SubAIMs.

6     JSON Metadata

https://schemas.mpai.community/MMC/V2.1/AIMs/MultimodalEmotionFusion.json
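
A metadata instance of a concrete implementation can be checked against this schema. A minimal sketch using the third-party requests and jsonschema packages; the instance file name is illustrative.

import json

import requests
from jsonschema import validate, ValidationError

SCHEMA_URL = "https://schemas.mpai.community/MMC/V2.1/AIMs/MultimodalEmotionFusion.json"

def check_metadata(instance_path: str) -> None:
    """Fetch the AIM JSON schema and validate a metadata instance against it."""
    schema = requests.get(SCHEMA_URL, timeout=10).json()
    with open(instance_path, encoding="utf-8") as f:
        instance = json.load(f)
    try:
        validate(instance=instance, schema=schema)
        print("Metadata conforms to the schema.")
    except ValidationError as err:
        print(f"Metadata does not conform: {err.message}")

# Illustrative file name; replace with an actual metadata instance.
check_metadata("my_mef_metadata.json")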