1. Functions
2. Reference Model
3. Input/Output Data
4. Functions of AI Modules
5. Input/Output Data of AI Modules
6. AIW, AIMs, and JSON Metadata

1. Functions

Data from both the Real and Virtual Environments of a Live Theatrical Performance specified by XRV-LTP (see Figure 1) include audio, video, volumetric or motion capture (mocap), and avatar data from performers, participants, and operators, signals from control surfaces, and more, as defined by the Real Environment (RE) and Virtual Environment (VE) Venue Specifications. This Input Data is processed and converted into the actionable commands required by the Real Environment (the “stage”) and the Virtual Environment (the “metaverse”), according to their respective Venue Specifications, to enable the multisensory experiences defined by the Script in both Environments.
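As a purely illustrative, non-normative sketch, the Input Data entering the AIW could be organized along the following lines (all stream names below are invented for illustration; the actual formats are defined by the Venue Specifications):

  {
    "REDataIn": {
      "audio": ["front-of-house-mix", "ambience-mics"],
      "video": ["stage-cam-1", "stage-cam-2"],
      "mocap": ["performer-1", "performer-2"],
      "controlSurfaces": ["lighting-console", "dj-console"]
    },
    "VEDataIn": {
      "avatars": ["participant-avatars"],
      "volumetric": ["virtual-stage-capture"]
    }
  }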

2. Reference Model

Figure 1 depicts the Reference Model of the Live Theatrical Stage Performance (XRV-LTP) AIW.

Figure 1 – Reference Model of Live Theatrical Stage Performance (XRV-LTP) AIW

This is the flow of operation of the XRV-LTP AIW Reference Model (a connection-map sketch follows the list):

  1. A range of sensors and controllers collects Data from both the Real and Virtual Environments, including audio, video, volumetric or motion capture (mocap), and avatar data from performers, signals from control surfaces, and more.
  2. Environment Description extracts features from performers and objects, participants, and operators, conditions the data, and provides them as Performance Descriptors (describing the behaviour of performers and objects on the stage or in the metaverse), Participant Descriptors (describing the audience’s behaviour), Operator Descriptors (Data from the Show Control computer or control surface, and from consoles for audio, DJ, VJ, lighting, and FX, typically commanded by operators), and Show Data (real-time data streams, such as MoCap and Volumetric data, used for RE and VE Experience Generation).
  3. The Performance, Participant, and Operator Status Interpretation AIMs determine the components of the Performance, Participant, and Operator Descriptors that are relevant to the current Cue Point provided by the Action Descriptor Generation AIM.
  4. Action Descriptor Generation uses Performance Status, Participant Status, and Operator Status to direct actions in both the Real and Virtual Environments via RE and VE Action Descriptors, and provides the current Cue Point to the three Interpretation AIMs.
  5. VE Experience Generation and RE Experience Generation use Show Data from Environment Description and convert RE and VE Action Descriptors into the actionable commands required by the Real and Virtual Environments, according to their Venue Specifications, to enable multisensory experience generation in both Environments.
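The wiring implied by steps 1 to 5 can be summarized as a connection map. The sketch below is a minimal, non-normative illustration (the object layout is an assumption; the normative topology is given by the JSON metadata referenced in Section 6):

  {
    "AIW": "XRV-LTP",
    "Connections": [
      ["Environment Description", "Performance Status Interpretation", "RE/VE Performance Descriptors"],
      ["Environment Description", "Participant Status Interpretation", "RE/VE Participant Descriptors"],
      ["Environment Description", "Operator Command Interpretation", "Operator Descriptors"],
      ["Environment Description", "RE/VE Experience Generation", "RE/VE Show Data"],
      ["Performance Status Interpretation", "Action Descriptor Generation", "RE/VE Performance Status"],
      ["Participant Status Interpretation", "Action Descriptor Generation", "RE/VE Participant Status"],
      ["Operator Command Interpretation", "Action Descriptor Generation", "Operator Status"],
      ["Action Descriptor Generation", "Interpretation AIMs", "Cue Point Status"],
      ["Action Descriptor Generation", "RE/VE Experience Generation", "RE/VE Action Descriptors"]
    ]
  }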

3. Input/Output Data

Table 1 specifies the Input and Output Data of the XRV-LTP AIW.

Table 1 – I/O Data of MPAI-XRV – Live Theatrical Stage Performance

Input | Description
RE Data In | Input data such as App Data, Audio/VJ/DJ, Audio-Visual, Biometric Data, Controller, Lidar, Lighting/FX, MoCap, Sensor Data, Show Control, Skeleton/Mesh, and Volumetric data.
RE Venue Specification | An input to the Environment Description and RE Experience Generation AIMs defining the protocols, data formats, and command structures of the specific Real Environment Venue, including the number, type, and placement of lighting fixtures, special effects, and sound and video reproduction resources.
VE Data In | Input data such as App Data, Audio/VJ/DJ, Audio-Visual, Biometric Data, Controller, Lidar, Lighting/FX, MoCap, Sensor Data, Show Control, Skeleton/Mesh, and Volumetric data.
VE Venue Specification | An input to the Environment Description and VE Experience Generation AIMs defining the protocols, data formats, and command structures of the specific Virtual Environment Venue, including all actionable elements relevant to the Script: the number, type, and placement of lights, effects, avatars, objects, animation scripts, and sound and video reproduction resources.

Output | Description
VE Data Out | Parameters controlling 3D geometry, shading, lighting, materials, cameras, physics, and all A/V experiential elements, including audio, video, and capture cameras/microphones. The actual format used is specified by the VE Venue Specification.
VE Commands | Commands controlling 3D geometry, shading, lighting, materials, cameras, physics, and all A/V experiential elements, including audio, video, and capture cameras/microphones. The relevant commands are specified by the VE Venue Specification.
RE Data Out | Parameters controlling all A/V experiential elements, including lighting, rigging, FX, audio, video, and cameras/microphones. The actual format used is specified by the RE Venue Specification.
RE Commands | Commands controlling all A/V experiential elements, including lighting, rigging, FX, audio, video, and cameras/microphones. The relevant commands are specified by the RE Venue Specification.
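To illustrate what a Venue Specification might enumerate, consider the hypothetical fragment below (all field names and values are invented; actual protocols, formats, and resource inventories are venue-specific):

  {
    "venue": "Example Real Environment Venue",
    "lightingFixtures": [
      { "id": "spot-01", "type": "moving-head", "position": [4.0, 6.5, 3.2] },
      { "id": "wash-02", "type": "LED-wash", "position": [-4.0, 6.5, 3.2] }
    ],
    "specialEffects": [ { "id": "fog-01", "type": "fog-machine" } ],
    "audio": { "reproduction": "stereo-plus-subs", "protocol": "OSC" },
    "video": { "screens": 2, "protocol": "NDI" }
  }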

4. Functions of AI Modules

Table 2 specifies the Functions of the AI Modules.

Table 2 – Functions of AI Modules

AI Module | Function
Environment Description | Processes RE Data In and VE Data In, and converts them into Performance Descriptors, Participant Descriptors, Operator Descriptors, and RE/VE Show Data using the RE Venue Specification and the VE Venue Specification.
Performance Status Interpretation | Interprets the Performance Descriptors to produce the Performance Status used to locate the Performance Environment in the current Cue Point Status of the Script.
Participant Status Interpretation | Converts Participant Descriptors into time-dependent statuses that include Sentiment, Expression of choice, and Emergent behaviour.
Operator Command Interpretation | Interprets Operator Descriptors and generates the Operator Status.
Action Descriptor Generation | Uses Performance Status, Participant Status, and Operator Status to produce the Cue Point Status and the Action Descriptors that describe the Actions necessary to create the complete experience in accordance with the Script and to express all aspects of the experience.
VE Experience Generation | Processes VE Action Descriptors and produces the actionable commands required by the Virtual Environment.
RE Experience Generation | Processes RE Action Descriptors and produces the actionable commands required by the Real Environment.
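As an example, a Participant Status carrying the three components named above might be serialized as follows (a hypothetical sketch; field names and value scales are assumptions, not normative):

  {
    "time": "00:42:13.500",
    "environment": "RE",
    "sentiment": { "valence": 0.8, "arousal": 0.6 },
    "expressionOfChoice": { "option": "B", "share": 0.63 },
    "emergentBehaviour": "synchronized-clapping"
  }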

5. Input/Output Data of AI Modules

Table 3 specifies the Input and Output Data of the AI Modules.

Table 3 – Input/Output Data of AI Modules

AI Module | Receives | Produces
Environment Description | VE Data In; VE Venue Specification; RE Data In; RE Venue Specification | RE Show Data; VE Show Data; RE Performance Descriptors; RE Participant Descriptors; VE Performance Descriptors; VE Participant Descriptors; Operator Descriptors
Performance Status Interpretation | RE Performance Descriptors; VE Performance Descriptors; Cue Point Status | RE Performance Status; VE Performance Status
Participant Status Interpretation | RE Participant Descriptors; VE Participant Descriptors; Cue Point Status | RE Participant Status; VE Participant Status
Operator Command Interpretation | Operator Descriptors; Cue Point Status | Operator Status
Action Descriptor Generation | RE Performance Status; RE Participant Status; VE Performance Status; VE Participant Status; Operator Status | Cue Point Status; RE Action Descriptors; VE Action Descriptors
VE Experience Generation | VE Show Data; VE Action Descriptors; VE Venue Specification | VE Commands; VE Data Out
RE Experience Generation | RE Show Data; RE Action Descriptors; RE Venue Specification | RE Commands; RE Data Out
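Expressed as port lists, a row of Table 3 maps naturally onto a machine-readable form. The sketch below shows the Operator Command Interpretation row (non-normative; the key names are assumptions, and the normative form is the AIM's JSON metadata referenced in Section 6):

  {
    "AIM": "XRV-OCI",
    "Receives": ["Operator Descriptors", "Cue Point Status"],
    "Produces": ["Operator Status"]
  }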

6. AIW, AIMs, and JSON Metadata

Table 4 provides the links to the AIW and AIM specifications and to the JSON syntaxes. The AIW column contains the AIW and the AIMs column contains its AIMs.

Table 4 – AIW, AIMs, and JSON Metadata

AIW | AIMs | Name | JSON
XRV-LTP | | Live Theatrical Stage Performance | X
 | XRV-END | Environment Description | X
 | XRV-PFI | Performance Status Interpretation | X
 | XRV-PTI | Participant Status Interpretation | X
 | XRV-OCI | Operator Command Interpretation | X
 | XRV-ADG | Action Descriptor Generation | X
 | XRV-VEG | VE Experience Generation | X
 | XRV-REG | RE Experience Generation | X
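To give a feel for the JSON metadata linked from Table 4, the skeleton below sketches how an AIM entry might be declared. It is purely illustrative: the field names are assumptions modeled on typical AIF-style metadata, and the normative syntax is the one linked in the table:

  {
    "Identifier": {
      "Specification": {
        "Standard": "MPAI-XRV",
        "AIW": "XRV-LTP",
        "AIM": "XRV-ADG",
        "Version": "1"
      }
    },
    "Description": "Action Descriptor Generation",
    "InputPorts": ["RE Performance Status", "VE Performance Status", "RE Participant Status", "VE Participant Status", "Operator Status"],
    "OutputPorts": ["Cue Point Status", "RE Action Descriptors", "VE Action Descriptors"]
  }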