1. Definition
The focused status data obtained by observing the individual and collective behaviour of Participants in the Real Environment and by interpreting the descriptors relevant to the current Cue Point, including behaviour, Emotion, Cognitive State, and Social Attitude, as needed by:
- The Action Generation AIM to service the current Cue Point.
- The Cue Point Identification AIM to trigger the next Cue Point.
Participant Status is derived from Components of the Real Environment as data describing the relevant Participants.
2. Functional Requirements
- Gesture descriptors:
  - Raising arms
  - Waving
  - Jumping
  - Pointing to a direction
  - Dancing
- Visual activity:
  - Hands waving left to right/right to left
  - Participants standing or sitting
  - Participants clapping
- Social clustering (see the sketch after this list):
  - Coordinates of cluster centroids.
  - Variances along the three principal axes.
  - Percentage of total Participants in each cluster.
  - Identity of individual Participant within each cluster.
  - Distance of individual Participant from the centroid.
- Objects in field of vision and gaze direction:
  - List of objects/performers, either present or represented in the Real Environment, or their components, that are being observed (granularity of the target is set by the Script).
  - Percent of Participants observing a particular object/performer/component.
- Participant audio activity:
  - Number of Participants doing a certain activity per area/zone:
    - Laughing, Clapping, Booing, Shouting, Singing.
    - Uttering a Text.
  - Intensity of the activity per avatar per area/zone.
  - Number of Participants uttering a particular phrase/text per area/zone.
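The Social clustering descriptors above lend themselves to a direct computation from Participant positions. The following is a minimal, non-normative TypeScript sketch; identifiers such as `ParticipantSample` and `ClusterStats` are illustrative assumptions, not terms from this specification. The variances along the three principal axes are the eigenvalues of the covariance matrix accumulated here, which a numerical library can extract.

```typescript
// Illustrative sketch only: computes the Social Clustering descriptors
// (centroid, covariance, cluster percentage, per-Participant distance)
// for one cluster of Participants. All identifiers are hypothetical.

type Vec3 = [number, number, number];

interface ParticipantSample {
  participantID: string;
  position: Vec3; // Participant position in the Real Environment
}

interface ClusterStats {
  centroid: Vec3;                  // coordinates of the cluster centroid
  covariance: number[][];          // 3x3 covariance matrix; its eigenvalues are
                                   // the variances along the three principal axes
  percentageOfTotal: number;       // percentage of total Participants in this cluster
  distanceFromCentroid: Map<string, number>; // per-Participant distance from centroid
}

function computeClusterStats(
  cluster: ParticipantSample[],
  totalParticipants: number,
): ClusterStats {
  const n = cluster.length; // assumed > 0
  // Centroid: mean of the Participant positions.
  const centroid: Vec3 = [0, 0, 0];
  for (const p of cluster) {
    for (let k = 0; k < 3; k++) centroid[k] += p.position[k] / n;
  }
  // Covariance of positions about the centroid. Diagonalising this matrix
  // (e.g. with a numerical library) yields the principal-axis variances.
  const covariance = [
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0],
  ];
  for (const p of cluster) {
    const d = [0, 1, 2].map((k) => p.position[k] - centroid[k]);
    for (let i = 0; i < 3; i++)
      for (let j = 0; j < 3; j++) covariance[i][j] += (d[i] * d[j]) / n;
  }
  // Euclidean distance of each individual Participant from the centroid.
  const distanceFromCentroid = new Map<string, number>();
  for (const p of cluster) {
    distanceFromCentroid.set(
      p.participantID,
      Math.hypot(
        p.position[0] - centroid[0],
        p.position[1] - centroid[1],
        p.position[2] - centroid[2],
      ),
    );
  }
  return {
    centroid,
    covariance,
    percentageOfTotal: (100 * n) / totalParticipants,
    distanceFromCentroid,
  };
}
```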
3. Syntax
4. Semantics
| Label | Size | Description |
| --- | --- | --- |
| Header | N1 Bytes | Header |
| – Standard | 9 Bytes | The characters “XRV-RTS-V” |
| – Version | N2 Bytes | Major version – 1 or 2 characters |
| – Dot-separator | 1 Byte | The character “.” |
| – Subversion | N3 Bytes | Minor version – 1 or 2 characters |
| MInstanceID | N4 Bytes | Identifier of M-Instance. |
| ParticipantStatusID | N5 Bytes | Identifier of Participant Status. |
| SpaceTime | N6 Bytes | Space-Time info of Participant Status. |
| GestureAttributes | N7 Bytes | oneOf: Raising arms, Waving, Jumping, Pointing to a direction, Dancing. |
| VisualActivity | N8 Bytes | Visual Activity related data. |
| – HandsWaving | N9 Bytes | Hands waving left to right/right to left. |
| – StandOrSit | N10 Bytes | Participants standing or sitting. |
| – Clapping | N11 Bytes | Participants clapping. |
| SocialClustering[] | N12 Bytes | Social Clustering related data. |
| – ClusterID | N13 Bytes | ID of Cluster. |
| – Participants[] | N14 Bytes | Data of Participants in Cluster. |
| – ParticipantID | N15 Bytes | ID of Participant. |
| – ParticipantPosition | N16 Bytes | Position of Participant. |
| – NoOfClusterParticipants | N17 Bytes | Number of Participants in Cluster. |
| – ClusterCentroidCoord | N18 Bytes | Coordinates of Cluster centroid. |
| GazeDirection | N19 Bytes | Data related to Participants’ gaze. |
| – Objects[] | N20 Bytes | Objects present/represented in the RE, or their components, that are being observed. |
| – ObjectID | N21 Bytes | ID of individual Object. |
| – Entities[] | N22 Bytes | Entities present/represented in the RE, or their components, that are being observed. |
| – EntityID | N23 Bytes | ID of individual Entity. |
| – Targets[] | N24 Bytes | Objects/performers/components being observed. |
| – TargetID | N25 Bytes | Specific object/performer/component being observed. |
| – Location | N26 Bytes | Location that includes the Target. |
| – Percentage | N27 Bytes | Percent of Participants observing TargetID. |
| AudioAttributes | N28 Bytes | oneOf: Speaking, Laughing, Clapping, Booing, Shouting, Singing. |
| UtteredSpeech | N29 Bytes | Speech uttered by Participants. |
| ActivityByLocation[] | N30 Bytes | Audio activity data per area/zone. |
| – LocationID | N31 Bytes | ID of the area/zone. |
| – ParticipantIntensity | N32 Bytes | Intensity of the activity per Participant in the area/zone. |
| – Text | N33 Bytes | Text/phrase uttered by Participants in the area/zone. |
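The structure implied by the Semantics table can be summarised in code. The following is a minimal, non-normative TypeScript sketch: the field names mirror the Labels above, but every type is an assumption, and the byte-oriented Header (Standard, Version, Dot-separator, Subversion) is collapsed into plain string fields rather than the serialised byte layout.

```typescript
// Illustrative sketch only: one possible in-memory representation of the
// Participant Status structure described by the Semantics table. Types are
// assumptions, not the normative serialised format.

interface ParticipantStatus {
  header: {
    standard: "XRV-RTS-V"; // the 9-character standard identifier
    version: string;       // major version, 1 or 2 characters
    subversion: string;    // minor version, 1 or 2 characters
  };
  mInstanceID: string;          // identifier of the M-Instance
  participantStatusID: string;  // identifier of this Participant Status
  spaceTime: string;            // Space-Time info of the Participant Status
  gestureAttributes:
    | "Raising arms" | "Waving" | "Jumping"
    | "Pointing to a direction" | "Dancing";
  visualActivity: {
    handsWaving: boolean;       // left to right / right to left
    standOrSit: "standing" | "sitting";
    clapping: boolean;
  };
  socialClustering: Array<{
    clusterID: string;
    participants: Array<{
      participantID: string;
      participantPosition: [number, number, number];
    }>;
    noOfClusterParticipants: number;
    clusterCentroidCoord: [number, number, number];
  }>;
  gazeDirection: {
    objects: Array<{ objectID: string }>;   // Objects being observed
    entities: Array<{ entityID: string }>;  // Entities being observed
    targets: Array<{
      targetID: string;    // specific object/performer/component observed
      location: string;    // Location that includes the Target
      percentage: number;  // percent of Participants observing TargetID
    }>;
  };
  audioAttributes:
    | "Speaking" | "Laughing" | "Clapping"
    | "Booing" | "Shouting" | "Singing";
  utteredSpeech: string; // speech uttered by Participants
  activityByLocation: Array<{
    locationID: string;           // ID of the area/zone
    participantIntensity: number; // intensity of the activity per Participant
    text: string;                 // text/phrase uttered in the area/zone
  }>;
}
```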