1. Definition
Descriptors based on the data of Participants from the Real Environment (objects and avatars), provided in a form suitable for interpretation (e.g., Face and Gestures) and additionally usable to determine cue points.
2. Functional Requirements
- Visual behaviour
  - The room may be split into two or more sections.
  - Participants are captured by one or more video cameras.
  - Calibration process to register the video images with the seating chart.
  - Descriptors extracted on a per-seat or per-section basis:
    - Seat occupancy
      - Per seat
      - Per section
    - Participant motion
      - Hands waving left to right/right to left
      - Participants standing or sitting
      - Participants clapping
      - Participants not moving
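As an illustration only, a per-seat visual descriptor and its per-section aggregation could be sketched as below. The field names, the motion categories as an enumeration, and the occupancy summary are assumptions for this sketch, not the normative Syntax of Section 3.

```python
from dataclasses import dataclass
from enum import Enum

class Motion(Enum):
    # Illustrative motion categories taken from the requirements list above.
    WAVING = "waving"
    STANDING = "standing"
    SITTING = "sitting"
    CLAPPING = "clapping"
    STILL = "still"

@dataclass
class SeatVisualDescriptor:
    seat_id: str      # seat identifier from the registered seating chart
    section: str      # section of the room the seat belongs to
    occupied: bool    # seat occupancy, as detected from the video
    motion: Motion    # dominant motion observed for the Participant

def section_summary(descriptors):
    """Aggregate per-seat descriptors into (occupied, total) counts per section."""
    summary = {}
    for d in descriptors:
        occupied, total = summary.get(d.section, (0, 0))
        summary[d.section] = (occupied + int(d.occupied), total + 1)
    return summary
```

The same per-seat record thus serves both granularities named above: it is reported per seat as-is, and reduced per section by the aggregation.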
- Audio reaction
  - The room may be split into two or more sections.
  - Participants are captured by one or more microphones.
  - Calibration process to register the active areas of the microphones with the seating chart.
  - Descriptors on a per-seat or per-section basis:
    - Seat occupancy (information derived from the video)
    - Audio activity
    - Text uttered by a Participant
    - Participant’s audio activity:
      - Speaking
      - Laughing
      - Clapping
      - Booing
      - Shouting
      - Singing
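A per-seat audio descriptor covering the items above could be sketched as follows; the field names, the activity labels as plain strings, and the majority-vote reduction to a per-section activity are illustrative assumptions, not the normative Syntax.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

# Activity labels taken from the requirements list above.
ACTIVITIES = {"speaking", "laughing", "clapping", "booing", "shouting", "singing"}

@dataclass
class SeatAudioDescriptor:
    seat_id: str               # seat identifier from the registered seating chart
    section: str               # section covered by the microphone's active area
    activity: str              # one of ACTIVITIES
    text: Optional[str] = None # text uttered by the Participant, if recognised

def dominant_activity(descriptors, section):
    """Reduce per-seat activities to the most frequent activity in a section."""
    votes = Counter(d.activity for d in descriptors if d.section == section)
    return votes.most_common(1)[0][0] if votes else None
```

Note that seat occupancy is deliberately absent here: per the list above, it comes from the video pipeline and would be joined with the audio descriptors by seat identifier.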
- Choice per seat (mobile app, web interface, controller)
  - Voting
  - Controller
    - Spatial motion
      - Relative/absolute
      - Value (Spatial Attitude)
    - Actuator type:
      - Knob, slider, button
      - Actuator values
- Text, including the text string, source, and destination.
- Spatial motion
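The controller and text items above could be carried in records like the following sketch. The three-component Spatial Attitude value, the field names, and the closed sets of modes and actuator types are assumptions made for illustration; the full Spatial Attitude of the Syntax may carry more components (e.g., orientation).

```python
from dataclasses import dataclass
from typing import Literal, Tuple

@dataclass
class ControllerEvent:
    motion_mode: Literal["relative", "absolute"]      # spatial motion reference
    spatial_attitude: Tuple[float, float, float]      # value (position only, in this sketch)
    actuator_type: Literal["knob", "slider", "button"]
    actuator_value: float                             # current value of the actuator

@dataclass
class TextMessage:
    text: str         # text string entered by the Participant
    source: str       # e.g., a seat identifier
    destination: str  # intended recipient of the text
```

A button press might then be reported as `ControllerEvent("relative", (0.0, 0.0, 0.0), "button", 1.0)`, while votes entered through the mobile app or web interface need no spatial component at all.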
- Biometric descriptors of Real Environment Participants:
  - Heart rate and heart rate variability (HRV).
  - Brain state from EEG data (delta, theta, alpha, beta, and gamma bands).
  - Galvanic Skin Response (Electrodermal Activity).
  - Myoelectric intensity per electrode site.
  - Skin temperature.
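HRV is commonly derived from inter-beat (RR) intervals; below is a minimal sketch of one standard time-domain measure, RMSSD, together with conventional EEG band limits for reference. The choice of RMSSD and the exact band cut-offs (which vary somewhat across sources) are assumptions of this sketch, not requirements of this document.

```python
import math

# Conventional EEG band limits in Hz (approximate; exact cut-offs vary by source).
EEG_BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 100.0),
}

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms),
    a common time-domain HRV measure. Requires at least two intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

Heart rate itself follows directly from the same RR intervals (60000 / mean RR in ms), so a single inter-beat stream can populate both of the cardiac items above.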
3. Syntax
4. Semantics