1. Definition
Descriptors that are based on RE Data In from Participants and Controllers in the Real Environment and have a form that is suitable for Interpretation (e.g., Position and Orientation, Face and Gestures, Controller Data). The theatre may be split into two or more sections. Participants may be captured by one or more video cameras and one or more microphones. A calibration process is needed to align the video images with the seating chart and to register the active area of the microphones with the seating chart.
2. Functional Requirements
- Seat occupation
- Participant Description
  - Spatial Attitude
  - Descriptors (Speech, Face, Body)
  - Activity (moving/not moving)
  - Audio Level
- Choice per seat (mobile app, web interface, controller)
- Voting
- Controller
  - Spatial Attitude
  - Actuator type (knob, slider, button)
  - Values of actuator
  - Text
- Biometric descriptors of the Real Environment participant
  - Heart rate and heart rate variability (HRV)
  - Brain state from EEG data (delta, theta, alpha, beta, gamma)
  - Galvanic Skin Response (Electrodermal Activity)
  - Myoelectric intensity per electrode site
  - Skin temperature
3. Syntax
https://schemas.mpai.community/XRV1/V1.0/data/REParticipantDescriptors.json
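The JSON Schema at the URL above is the normative syntax. As a non-normative illustration only, a minimal instance can be sketched in Python (field names follow the Semantics section below; all values and the exact nesting are hypothetical placeholders, not defined by this text):

```python
import json

# Illustrative, non-normative RE Participant Descriptors instance.
# Field names follow the Semantics table; all values are hypothetical.
descriptors = {
    "Header": "XRV-RTD-V1.0",
    "SeatOccupation": {"A12": True, "A13": False},
    "ParticipantDescriptions": [
        {
            "ParticipantID": "P-0042",
            "SpatialAttitude": {"Position": [1.0, 2.0, 0.0]},
            "Descriptors": {"VisualActivity": 1, "AudioActivity": 0},
        }
    ],
    "GroupDescriptions": [],
}

# Serialise for transmission; the instance round-trips through JSON.
print(json.dumps(descriptors, indent=2))
```

Whether seat occupation is keyed by seat label as shown, or carried as an array per the Seat Arrangement Data Type, is determined by the schema, not by this sketch.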
4. Semantics
| Label | Size | Description |
| --- | --- | --- |
| Header | N1 Bytes | RE Participant Descriptors Header |
| – Standard-REParticipantDescriptors | 9 Bytes | The characters “XRV-RTD-V” |
| – Version | N2 Bytes | Major version – 1 or 2 characters |
| – Dot-separator | 1 Byte | The character “.” |
| – Subversion | N3 Bytes | Minor version – 1 or 2 characters |
| SpaceTime | N6 Bytes | Space-Time info of RE Participant Descriptors |
| SeatOccupation | N7 Bytes | According to the Seat Arrangement Data Type |
| ParticipantDescriptions[] | N8 Bytes | Collection of Participant Descriptors |
| – ParticipantDescription | N9 Bytes | Set of description data |
| – – ParticipantID | N10 Bytes | ID of Participant |
| – – SpatialAttitude | N11 Bytes | Of Participant |
| – – Descriptors | N12 Bytes | Set of Descriptors |
| – – – FaceDescriptors | N13 Bytes | Of Participant |
| – – – BodyDescriptors | N14 Bytes | Of Participant |
| – – – UtteredSpeech | N15 Bytes | Of Participant |
| – – – VisualActivity | 1 bit | Moving/not moving |
| – – – AudioActivity | 1 bit | Making sound/not making sound |
| – – – BiometricData | N23 Bytes | Of Participant |
| – – Controller | N17 Bytes | ID of Controller being described |
| – – – ActuatorType | N18 Bytes | One of: knob, slider, button |
| – – – Value | N19 Bytes | Value of actuation |
| – – – Text | N20 Bytes | Text generated |
| – – – Time | N21 Bytes | Time this data refers to |
| – – – SpatialAttitude | N22 Bytes | Spatial Attitude of Controller |
| – – ControllerData | N24 Bytes | From Participant |
| – – AppData | N25 Bytes | From Participant |
| GroupDescriptions[] | N26 Bytes | Collection of Participant Group Descriptors |
| – GroupDescription | N27 Bytes | Set of description data |
| – – GroupID | N28 Bytes | ID of Participant Group |
| – – Descriptors | N29 Bytes | Set of Descriptors |
| – – – MotionDescriptors | N30 Bytes | Motion Vector Field of Participant Group |
| – – – VisualActivity | N31 Bytes | Moving/not moving |
| – – – AudioActivity | N32 Bytes | Making sound/not making sound |
| DescrMetadata | N33 Bytes | Descriptive Metadata |
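From the Header fields above, a decoder can recover the major and minor version by checking the 9-character standard string and splitting the remainder on the dot separator. A minimal sketch (the function name and error handling are assumptions, not part of the specification):

```python
def parse_header(header: str):
    """Parse an RE Participant Descriptors header string.

    Expected form, per the Semantics table: the 9 characters
    "XRV-RTD-V", a 1- or 2-character major version, the character
    ".", and a 1- or 2-character minor version.
    """
    prefix = "XRV-RTD-V"
    if not header.startswith(prefix):
        raise ValueError("not an XRV-RTD header")
    major, sep, minor = header[len(prefix):].partition(".")
    if sep != "." or not major or not minor:
        raise ValueError("malformed version field")
    return int(major), int(minor)

print(parse_header("XRV-RTD-V1.0"))  # (1, 0)
```

The remaining fields (SpaceTime onward) are variable-length (N6 Bytes and so on), so their parsing depends on the encodings of the referenced Data Types and is not sketched here.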