1. Definition
2. Functional Requirements
3. Syntax
4. Semantics

1. Definition

Performance Status is the focused status data obtained by observing the individual and collective behaviour of Performers in the Virtual Environment and by interpreting the Descriptors relevant to the current Cue Point, including behaviours, Emotion, Cognitive State, and Social Attitude, as needed by:

  1. The Action Generation AIM, to service the current Cue Point.
  2. The Cue Point Identification AIM, to trigger the next Cue Point.

The Performance Status AIM generates status data from each Descriptor component (audio, video, …) for use by the Cue Point Identification and Action Generation AIMs.
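The data flow just described can be sketched as follows. This is an illustrative assumption, not normative text: the `PerformanceStatus` record, the function name `performance_status_aim`, and all field names are hypothetical placeholders for the status data the Definition names.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceStatus:
    # Hypothetical fields mirroring the status data listed in the Definition.
    cue_point_id: str                 # the Cue Point currently being serviced
    emotion: str                      # e.g., "excited"
    cognitive_state: str              # e.g., "engaged"
    social_attitude: str              # e.g., "cooperative"
    behaviours: list = field(default_factory=list)

def performance_status_aim(descriptors: dict, current_cue_point: str) -> PerformanceStatus:
    """Sketch: fold per-component Descriptors (audio, video, ...) into one
    status record tied to the current Cue Point, for downstream AIMs."""
    merged = {}
    for component, data in descriptors.items():   # component: "audio", "video", ...
        merged.update(data)
    return PerformanceStatus(
        cue_point_id=current_cue_point,
        emotion=merged.get("emotion", "neutral"),
        cognitive_state=merged.get("cognitive_state", "unknown"),
        social_attitude=merged.get("social_attitude", "unknown"),
        behaviours=merged.get("behaviours", []),
    )

status = performance_status_aim(
    {"audio": {"emotion": "excited", "behaviours": ["clapping"]},
     "video": {"social_attitude": "cooperative"}},
    current_cue_point="cue-07",
)
```

A real AIM would of course apply interpretation models per component rather than merging dictionaries; the sketch only shows the fan-in of Descriptor components into a single status record.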

2. Functional Requirements

Components from the Virtual Environment (data describing all Objects and Avatars in the Virtual Environment):

  1. Avatar Descriptors (may be driven by performers or algorithms)
    1. Spatial position of Avatar
      1. Per area/zone
      2. Ground/air
    2. Descriptors of Avatar motion (driven by the Script, which determines which Avatars are of interest)
      1. Position and Orientation (Spatial Attitude, see MPAI-OSD)
      2. Face and Gestures (See MPAI-PAF)
      3. Dancing/stationary
      4. Social clustering
      5. Gaze direction
    3. Avatar audio activity
      1. Performer doing a certain activity, e.g., Laughing, Clapping, Booing, Shouting, Singing
      2. Intensity of the Performer’s activity
      3. Particular phrase/text uttered
  2. Object Descriptors – all non-Avatar Objects and associated parameters, including:
    1. Meshes with materials (terrain, foliage, etc.).
    2. Light sources and their parameters.
    3. Particles and their parameters.
    4. Effects such as fog, volumetric rays, fire, etc.
    5. Position and Orientation (Spatial Attitude) of all meshes, lights, particles, and effects.
    6. Other data (events, triggers, scene changes, etc.)
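The Avatar and Object Descriptors enumerated above can be sketched as data structures. All type and field names below are illustrative assumptions chosen to mirror the list items, not normative syntax; `SpatialAttitude` stands in for the Position and Orientation structure defined in MPAI-OSD.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialAttitude:
    # Position and Orientation (see MPAI-OSD); field layout is illustrative.
    position: tuple      # (x, y, z)
    orientation: tuple   # (yaw, pitch, roll)

@dataclass
class AvatarDescriptors:
    zone: str                          # spatial position per area/zone
    airborne: bool                     # ground/air
    spatial_attitude: SpatialAttitude  # Position and Orientation
    gaze_direction: tuple
    dancing: bool                      # dancing/stationary
    social_cluster: int                # id of the cluster the Avatar belongs to
    audio_activity: str                # e.g., "laughing", "clapping", "singing"
    audio_intensity: float             # intensity of the Performer's activity
    uttered_text: str                  # particular phrase/text uttered

@dataclass
class ObjectDescriptors:
    kind: str                          # "mesh", "light", "particles", "effect", ...
    spatial_attitude: SpatialAttitude
    parameters: dict = field(default_factory=dict)  # materials, light/particle params

avatar = AvatarDescriptors(
    zone="front-stage", airborne=False,
    spatial_attitude=SpatialAttitude((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)),
    gaze_direction=(1.0, 0.0, 0.0), dancing=True, social_cluster=2,
    audio_activity="singing", audio_intensity=0.8, uttered_text="encore",
)
light = ObjectDescriptors(
    kind="light",
    spatial_attitude=SpatialAttitude((2.0, 5.0, 0.0), (0.0, -45.0, 0.0)),
    parameters={"colour": "warm-white", "lumens": 1200},
)
```

Face and Gesture Descriptors (MPAI-PAF) and event/trigger data would extend these records; they are omitted here to keep the sketch minimal.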

3. Syntax

4. Semantics