1. Definition
Descriptors based on performance data describe the Virtual Environment (objects and avatars) in a form suitable for interpretation (e.g., Face and Gestures), and additionally serve to determine cue points.
2. Functional Requirements
- Avatar Descriptors (may be driven by performers or algorithms):
  - Spatial position of Avatar:
    - Per area/zone
    - Ground/air
  - Descriptors of Avatar motion (driven by the Script determining which Avatars are of interest):
    - Position and Orientation (Spatial Attitude, see MPAI-OSD)
    - Face and Gestures (see MPAI-PAF)
    - Dancing/stationary
    - Social clustering
    - Gaze direction (head and eye tracking)
  - Avatar audio activity:
    - Speaking
    - Laughing
    - Clapping
    - Booing
    - Shouting
    - Singing
  - Text uttered by Avatar
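The avatar-related descriptors listed above can be sketched as a simple data structure. This is an illustrative, non-normative sketch: the class names, field names, and units are assumptions, not the Syntax defined by this Technical Specification.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple

class AudioActivity(Enum):
    # Illustrative enumeration of the audio activities listed above.
    SPEAKING = "speaking"
    LAUGHING = "laughing"
    CLAPPING = "clapping"
    BOOING = "booing"
    SHOUTING = "shouting"
    SINGING = "singing"

@dataclass
class SpatialAttitude:
    # Position and Orientation (see MPAI-OSD); representation is an assumption.
    position: Tuple[float, float, float]     # (x, y, z) in scene units
    orientation: Tuple[float, float, float]  # e.g. (yaw, pitch, roll) in degrees

@dataclass
class AvatarDescriptors:
    # Hypothetical container for the avatar descriptors enumerated above.
    avatar_id: str
    spatial_attitude: SpatialAttitude
    zone: Optional[str] = None                  # per area/zone
    airborne: bool = False                      # ground/air
    dancing: bool = False                       # dancing/stationary
    cluster_id: Optional[str] = None            # social clustering
    gaze_direction: Optional[Tuple[float, float, float]] = None  # head/eye tracking
    audio_activities: List[AudioActivity] = field(default_factory=list)
    uttered_text: Optional[str] = None          # text uttered by the Avatar

# Example instance for a singing avatar on stage.
a = AvatarDescriptors(
    avatar_id="avatar-01",
    spatial_attitude=SpatialAttitude((1.0, 0.0, 2.5), (90.0, 0.0, 0.0)),
    zone="stage",
    audio_activities=[AudioActivity.SINGING],
)
```

A structure like this makes explicit which fields a Script would query when deciding which Avatars are of interest.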
- Object Descriptors – all non-avatar objects and associated parameters, including:
  - Meshes with materials (terrain, foliage, etc.)
  - Light sources and their parameters
  - Particles and their parameters
  - Effects such as fog, volumetric rays, fire, etc.
  - Position and Orientation (Spatial Attitude) of all meshes, lights, particles, and effects
  - Other data (events, triggers, scene changes, etc.)
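The object descriptors above can likewise be sketched as a uniform record for meshes, lights, particles, and effects, each carrying its Spatial Attitude and type-specific parameters. All names and fields here are illustrative assumptions, not the normative Syntax.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class SceneObject:
    # Hypothetical record for one non-avatar object; field names are assumptions.
    object_id: str
    kind: str                                # "mesh", "light", "particles", or "effect"
    position: Tuple[float, float, float]     # Spatial Attitude (see MPAI-OSD)
    orientation: Tuple[float, float, float]
    material: Optional[str] = None           # for meshes (terrain, foliage, ...)
    parameters: Dict[str, float] = field(default_factory=dict)  # light/particle/effect parameters

@dataclass
class ObjectDescriptors:
    objects: List[SceneObject] = field(default_factory=list)
    events: List[str] = field(default_factory=list)  # other data: events, triggers, scene changes

# Example: a terrain mesh plus a key light, and one scene-change event.
scene = ObjectDescriptors(
    objects=[
        SceneObject("terrain-0", "mesh", (0.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                    material="grass"),
        SceneObject("key-light", "light", (0.0, 100.0, 0.0), (0.0, -90.0, 0.0),
                    parameters={"intensity": 1.2, "color_temperature_k": 5600}),
    ],
    events=["scene_change:act2"],
)
```

Keeping one record shape for every object kind lets position/orientation updates be applied uniformly, while the `parameters` map absorbs kind-specific settings.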
- Biometric descriptors of the performer represented in the Virtual Environment:
  - Heart rate and heart rate variability (HRV)
  - Brain state from EEG data (delta, theta, alpha, beta, gamma bands)
  - Galvanic Skin Response (Electrodermal Activity)
  - Myoelectric intensity per electrode site
  - Skin temperature
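The biometric descriptors above can be grouped into one record per performer. This is a sketch under stated assumptions: the specific HRV measure (RMSSD), units, and field names are illustrative choices, not defined by this Technical Specification.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class BiometricDescriptors:
    # Illustrative sketch of the biometric descriptors above; units are assumptions.
    heart_rate_bpm: float
    hrv_rmssd_ms: float                       # one common HRV measure (assumed here)
    eeg_band_power: Dict[str, float] = field(default_factory=dict)  # delta..gamma bands
    gsr_microsiemens: float = 0.0             # Galvanic Skin Response / Electrodermal Activity
    emg_by_site: Dict[str, float] = field(default_factory=dict)     # myoelectric intensity per electrode site
    skin_temperature_c: float = 33.0

# Example instance with normalized EEG band powers.
b = BiometricDescriptors(
    heart_rate_bpm=72.0,
    hrv_rmssd_ms=45.0,
    eeg_band_power={"delta": 0.2, "theta": 0.15, "alpha": 0.4,
                    "beta": 0.2, "gamma": 0.05},
    emg_by_site={"forearm_left": 0.3},
)
```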
3. Syntax
4. Semantics