1. Definition 2. Functional Requirements 3. Syntax 4. Semantics

1. Definition

Descriptors that are based on RE Data In from Participants and Controllers in the Real Environment and have a form suitable for Interpretation (e.g., Position and Orientation, Face and Gestures, Controller Data). The theatre may be split into two or more sections. Participants may be captured by one or more video cameras and one or more microphones. A calibration process is needed to align the video images with the seating chart and to register the active area of the microphones with the seating chart.
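The camera-to-seating-chart calibration above can be sketched as a planar homography from image pixels to chart coordinates, followed by quantization to a seat index. This is a minimal sketch, not the normative procedure; the homography values, seat pitch, and function names are all assumptions.

```python
# Minimal sketch: map a camera pixel to seating-chart coordinates via a
# planar 3x3 homography H. H and the seat grid here are hypothetical;
# a real deployment would estimate H during the calibration step.

def apply_homography(H, x, y):
    """Project image point (x, y) into seating-chart coordinates."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

def seat_for_point(u, v, seat_pitch=1.0):
    """Quantize chart coordinates to a (row, column) seat index."""
    return int(v // seat_pitch), int(u // seat_pitch)

# Identity homography: image coordinates already equal chart coordinates.
H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = apply_homography(H, 2.4, 3.7)
print(seat_for_point(u, v))  # (3, 2)
```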

2. Functional Requirements

  1. Seat occupation
  2. Participant Description
    1. Spatial Attitude
    2. Descriptors (Speech, Face, Body)
    3. Activity (moving/not moving)
    4. Audio Level
  3. Choice per seat (mobile app, web interface, controller)
    1. Voting
    2. Controller
      1. Spatial Attitude
      2. Actuator type (knob, slider, button)
      3. Values of actuator
      4. Text
  4. Biometric descriptors of the Real Environment Participant
    1. Heart rate and heart rate variability (HRV)
    2. Brain state from EEG data (delta, theta, alpha, beta, gamma)
    3. Galvanic Skin Response (Electrodermal Activity)
    4. Myoelectric intensity per electrode site
    5. Skin temperature
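The functional requirements above can be collected into a per-Participant record. The sketch below is illustrative only: the class and field names are assumptions made for readability, not normative XRV syntax (which is defined in the Syntax section).

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative container mirroring the Participant Description requirements;
# names and types are assumptions, not part of the XRV specification.

@dataclass
class BiometricDescriptors:
    heart_rate_bpm: Optional[float] = None
    hrv_ms: Optional[float] = None
    eeg_band_power: dict = field(default_factory=dict)  # delta..gamma bands
    gsr_microsiemens: Optional[float] = None            # electrodermal activity
    emg_by_site: dict = field(default_factory=dict)     # myoelectric intensity
    skin_temperature_c: Optional[float] = None

@dataclass
class ParticipantDescription:
    participant_id: str
    spatial_attitude: tuple        # position and orientation
    is_moving: bool = False        # Activity (moving / not moving)
    audio_level_db: float = 0.0
    biometrics: BiometricDescriptors = field(default_factory=BiometricDescriptors)

p = ParticipantDescription("P-001", spatial_attitude=(0.0, 0.0, 0.0))
print(p.is_moving)  # False
```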

3. Syntax

https://schemas.mpai.community/XRV1/V1.0/data/REParticipantDescriptors.json
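The JSON Schema at the URL above is normative. For orientation only, the snippet below builds a hypothetical instance and round-trips it through the JSON codec; the field names are guesses based on the Semantics table and have not been checked against the schema.

```python
import json

# Hypothetical instance for illustration only; the normative structure is
# the JSON Schema referenced in the Syntax section. Field names are guesses.

instance = {
    "Header": "XRV-RTD-V1.0",
    "SeatOccupation": [[1, 1, 0], [0, 1, 1]],   # 1 = seat occupied
    "ParticipantDescriptions": [
        {"ParticipantID": "P-001", "VisualActivity": 1, "AudioActivity": 0}
    ],
}

# Round-trip to confirm the instance is well-formed JSON.
decoded = json.loads(json.dumps(instance))
print(decoded["ParticipantDescriptions"][0]["ParticipantID"])  # P-001
```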

4. Semantics

Label Size Description
Header N1 Bytes RE Participant Descriptors Header
– Standard-REParticipantDescriptors 9 Bytes The characters “XRV-RTD-V”
– Version N2 Bytes Major version – 1 or 2 characters
– Dot-separator 1 Byte The character “.”
– Subversion N3 Bytes Minor version – 1 or 2 characters
SpaceTime N6 Bytes Space-Time info of RE Participant Descriptors
SeatOccupation N7 Bytes According to the Seat Arrangement Data Type.
ParticipantDescriptions[] N8 Bytes Collection of Participant Descriptors
– ParticipantDescription N9 Bytes Set of description data.
  – ParticipantID N10 Bytes ID of Participant
  – SpatialAttitude N11 Bytes Of Participant
  – Descriptors N12 Bytes Set of Descriptors
    – FaceDescriptors N13 Bytes Of Participant
    – BodyDescriptors N14 Bytes Of Participant
    – UtteredSpeech N15 Bytes Of Participant
    – VisualActivity 1 bit Moving/not moving
    – AudioActivity 1 bit Making sound/not making sound
    – BiometricData N23 Bytes Of Participant
    – Controller N17 Bytes ID of Controller being described.
      – ActuatorType N18 Bytes one of: knob, slider, button
      – Value N19 Bytes Value of actuation.
      – Text N20 Bytes Text generated.
      – Time N21 Bytes Time this data refers to.
      – SpatialAttitude N22 Bytes Spatial Attitude of Controller.
      – ControllerData N24 Bytes From Participant
    – AppData N25 Bytes From Participant
GroupDescriptions[] N26 Bytes Collection of Participant Group Descriptors.
– GroupDescription N27 Bytes Set of description data.
  – GroupID N28 Bytes ID of Participant Group
  – Descriptors N29 Bytes Set of Descriptors
    – MotionDescriptors N30 Bytes Motion Vector Field of Participant Group.
    – VisualActivity N31 Bytes Moving/not moving
    – AudioActivity N32 Bytes Making sound/not making sound
DescrMetadata N33 Bytes Descriptive Metadata
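The Header fields in the table above (the fixed 9-character prefix "XRV-RTD-V", a 1-2 character major version, the "." separator, and a 1-2 character minor version) can be parsed as sketched below, assuming the version characters are decimal digits.

```python
import re

# Parse the Header defined in the Semantics table: "XRV-RTD-V" + major
# version (1-2 chars) + "." + minor version (1-2 chars). Assumes the
# version characters are decimal digits.

HEADER_RE = re.compile(r"^XRV-RTD-V(\d{1,2})\.(\d{1,2})$")

def parse_header(header: str):
    """Return (major, minor) or raise ValueError on a malformed header."""
    m = HEADER_RE.match(header)
    if m is None:
        raise ValueError(f"not a valid RE Participant Descriptors header: {header!r}")
    return int(m.group(1)), int(m.group(2))

print(parse_header("XRV-RTD-V1.0"))  # (1, 0)
```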