
(Informative)

1      Introduction
2      Use Case Description Language
3      Virtual Lecture
3.1  Description
3.2  MMM-Script representation
3.3  Actions and Items
4      Virtual Meeting
4.1  Description
4.2  MMM-Script representation
4.3  Actions and Items
5      Hybrid working
5.1  Description
5.2  MMM-Script representation
5.3  Actions, Items, and Data Types
6      eSports Tournament
6.1  Description
6.2  MMM-Script representation
6.3  Actions, Items, and Data Types
7      Virtual performance
7.1  Description
7.2  MMM-Script representation
7.3  Actions and Items
8      AR Tourist Guide
8.1  Description
8.2  MMM-Script representation
8.3  Actions and Items
9      Virtual Dance
9.1  Description
9.2  MMM-Script representation
9.3  Actions and Items
10    Virtual Car Showroom
10.1  Description
10.2  MMM-Script representation
10.3  Actions and Items
11    Drive a Connected Autonomous Vehicle
11.1  Description
11.2  MMM-Script representation
11.3  Actions, Items, and Data Types

1       Introduction

This Informative Chapter collects diverse Metaverse Use Cases where Users request to perform Actions on different types of Items. The goal is to verify that the Metaverse elements of this Technical Specification do indeed support a range of representative Use Cases, thus confirming the validity of the Technical Specification.

Note that, unless stated otherwise, a sentence like “A student attends a lecture held by a teacher in a classroom created by a school manager” is to be read as “A User representing a student attends a virtual lecture in a virtual classroom Authored by a User representing a school manager and MM-Embedded at an M-Location”.

2       Use Case Description Language

Metaverse Use Cases involve a plurality of Processes – Users, Devices, Services, Apps – performing Actions on a variety of Items, or requesting other Processes to perform them. In a Use Case:

  1. Processes (e.g., Users) are sequentially identified by one subscript.
  2. Items Acted on by a Process are identified by the subscript of the Process performing the Action on the Item followed by a sequential number.
  3. Objects, Scenes, Events, and Personae are prefixed by S, A, V, AV, or AVH to indicate their Speech, Audio, Visual, Audio-Visual, or Audio-Visual-Haptic nature, respectively.
  4. The Locations where the Actions take place are similarly identified by the subscript of the Process performing an Action at the Location followed by a sequential number.
  5. If the Actions are performed at different M-Instances, all Processes, Items, and Locations are prefixed by a sequential capital letter.

For instance:

  1. Useri MM-Embeds Personaj at M-Locationi.k.
  2. Useri MU-Embeds Itemj at U-Locationi.k.
  3. UserA.i MM-Sends MessageA.i.j to UserB.k.

All Use Cases assume that Actions are performed in an M-Instance. Actions performed in the Universe are specifically noted.
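These conventions can be encoded programmatically. The following Python sketch is illustrative only; the class MMMIdentifier and its fields are assumptions made for this example and are not part of this Technical Specification:

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class MMMIdentifier:
    """Illustrative encoding of the identifier conventions above."""
    kind: str                          # e.g., "User", "Persona", "MLoc"
    process_index: int                 # subscript of the performing Process
    item_index: Optional[int] = None   # sequential number for Items/Locations
    instance: Optional[str] = None     # capital-letter M-Instance prefix, if any
    media: Optional[str] = None        # "S", "A", "V", "AV", or "AVH" prefix

    def __str__(self) -> str:
        prefix = self.media or ""
        inst = f"{self.instance}." if self.instance else ""
        idx = str(self.process_index)
        if self.item_index is not None:
            idx += f".{self.item_index}"
        return f"{prefix}{self.kind}{inst}{idx}"

# "AVHPersona2.1": the first Persona Item acted on by Process 2
print(MMMIdentifier("Persona", 2, 1, media="AVH"))   # AVHPersona2.1
# "UserB.2": Process 2 in M-Instance B
print(MMMIdentifier("User", 2, instance="B"))        # UserB.2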

The following conventions are used throughout:

MLoc: M-Location.
SA: Spatial Attitude.
ULoc: U-Location.
Persona(AV): a Persona whose rendering activates audio-visual perception.
Object(AVH): an Object whose rendering activates audio-visual-haptic perception.

3       Virtual Lecture

3.1      Description

  1. School Manager
    • Authors and embeds a virtual classroom in an M-Instance.
    • Pays teacher.
  2. Teacher
    • Is at home.
    • Embeds a persona of theirs from home close to the classroom’s desk.
    • Embeds and animates a 3D Object in the lecture.
    • Eventually leaves the classroom and returns home.
  3. Student
    • Is at home.
    • Pays the lecture fee, which includes the right to make a copy of the Audio-Visual Event.
    • Embeds a persona of theirs in the classroom.
    • Approaches the teacher’s desk to feel the 3D Object with haptic gloves.
    • Stores the lecture’s Audio-Visual Event.
    • Leaves the classroom and returns home.

3.2      MMM-Script representation

Declarations

Declare User1 // School manager //
AVHObject1.1 // Classroom //
MLoc1.1 // Metaverse Location //
Value1.1 // Lecture consideration //
Declare Service1 // Author Service //
Declare User2 // Teacher’s User //
human2 // Teacher //
AVHPersona2.1 // Teacher’s Persona //
MLoc2.1 // Place of classroom desk //
MLoc2.2 // Place close to object being experimented //
AVHObject2.1 // Object being experimented //
Declare User3 // Student //
AVHPersona3.1 // Student’s Persona //
MLoc3.1 // Student’s home //
MLoc3.2 // Classroom seat //
MLoc3.3 // Location close to Object being experimented //
Value3.1 // Lecture fees //
AVHEvent3.1 // Lecture //

Operation

ProcessA Action Item Secondary Item or Process
User1 Authors AVHObject1.1 By Service1 With DataMdata At Service1
MM-Embeds AVHObject1.1 From Service1 At MLoc1.1
User2 Tracks human2 At MLoc2.1 With SA
MM-Embeds AVHPersona2.1 At MLoc2.2 With SA
  MM-Disables AVHPersona2.1 At MLoc2.1
  MM-Embeds AVHObject2.1 At MLoc2.2 With SA
User3 Tracks AVHPersona3.1 At MLoc3.1 With SA
Transacts Value3.1 To User1
MM-Embeds AVHPersona3.1 At MLoc3.2 With SA
MM-Embeds AVHPersona3.1 At MLoc3.3 With SA
MM-Sends AVHEvent3.1 To URI
MM-Disables AVHPersona3.1 At MLoc3.3
MM-Embeds AVHPersona3.1 At MLoc3.1 With SA
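The Embed/Disable pairs in the Operation above can be checked mechanically. Below is a minimal Python sketch; MInstanceState and its methods are assumptions made for this example, not normative MMM interfaces:

from collections import defaultdict

class MInstanceState:
    """Toy tracker of where each Item is currently MM-Embedded."""
    def __init__(self):
        self.placements = defaultdict(set)   # Item -> set of M-Locations

    def mm_embed(self, item: str, mloc: str) -> None:
        self.placements[item].add(mloc)

    def mm_disable(self, item: str, mloc: str) -> None:
        # A row like "MM-Disables AVHPersona3.1 At MLoc3.3" is only
        # consistent if the Item is currently embedded at that M-Location.
        if mloc not in self.placements[item]:
            raise ValueError(f"{item} is not embedded at {mloc}")
        self.placements[item].remove(mloc)

state = MInstanceState()
state.mm_embed("AVHPersona3.1", "MLoc3.2")    # student takes a classroom seat
state.mm_embed("AVHPersona3.1", "MLoc3.3")    # moves close to the 3D Object
state.mm_disable("AVHPersona3.1", "MLoc3.3")  # leaves the classroom
state.mm_embed("AVHPersona3.1", "MLoc3.1")    # returns home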

3.3      Actions and Items

Table 6 gives the list of Actions, Items, and Data Types used by the Virtual Lecture Use Case. The Table also gives the Actions implied by the Track Composite Action (MM-Embed, UM-Animate, MM-Send, MU-Render, UM-Capture, MU-Send, and Identify). The list of these Actions will not be repeated in the subsequent tables.

Table 6 – Virtual Lecture Actions, Items, and Data Types

Actions: Author, Identify, MM-Disable, MM-Embed, MM-Send, MU-Render, MU-Send, UM-Animate, UM-Capture, UM-Send, Track, Transact.

Items and Data Types: Coordinates, Currency, Experience, AVHObject, M-Location, Orientation, Persona(AVH), Position, Spatial Attitude, U-Location, Value.

4       Virtual Meeting

4.1      Description

A meeting manager

  1. Authors a meeting room.
  2. Deploys a Process acting as a Virtual Meeting Secretary tasked with producing a summary of the conversations, including participants’ Personal Statuses.
  3. The Summary is displayed in the meeting room for participants to comment on.

A participant

  1. Attends a meeting held in the room.
  2. Pays to get a translation of the sentences uttered in languages unknown to the participant.
  3. Stores the Event.

4.2      MMM-Script representation

Declarations

Declare User1 // Meeting manager //
AVObject1.1 // Meeting room //
MLoc1.1 // Meeting location //
AVPersona1.1 // Virtual Meeting Secretary //
Stream1.1 // Stream animating AVPersona1.1 //
MLoc1.2 // Place assigned to Virtual Meeting Secretary //
Summary1.1 // Meeting Summary //
MLoc1.3 // Place for Summary display //
Declare Service1 // Authoring Service //
Declare Process1 // Animates Virtual Meeting Secretary //
Declare User2 // Meeting participant #1 //
human2 // human participant #1 //
AVPersona2.1 // Participant #1’s Persona //
MLoc2.1 // Participant#1’s home //
MLoc2.2 // Place assigned to Participant#1 //
AVHObject2.1 // Presentation //
MLoc2.3 // Place assigned for presentation display //
Event2.1 // Meeting’s recording //
Address1 // Storage (for recording) //
Declare User3 // Meeting participant #2 //
AVPersona3.1 // Participant #2’s Persona //
MLoc3.1 // Participant#2’s home //
MLoc3.2 // Place assigned to Participant#2 //
SObject3.1 // Speech Object to be Interpreted //
Value3.1 // Payment for interpretation //
Declare Service2 // Interpretation Service //

Operation

ProcessA Action Item Secondary Item or Process
User1 Authors AVObject1.1 By Service1 With DataMdata At Service1
MM-Embeds AVObject1.1 From Service1 At MLoc1.1
Process1 MM-Animates AVPersona1.1 At MLoc1.2 With Stream1.1 With SA
MM-Embeds Summary1.1 At MLoc1.3
User2 Tracks human2 At MLoc2.1 With SA
MM-Embeds AVPersona2.1 At MLoc2.2 With SA
  MM-Disables AVPersona2.1 At MLoc2.1
User3 Tracks AVPersona3.1 At MLoc3.1 With SA
Transacts Value3.1 To User1
MM-Embeds AVPersona3.1 At MLoc3.2 With SA
Interprets SObject3.1 By Service2 At User3
MM-Sends Event3.1 To URI
MM-Disables AVPersona3.1 From MLoc3.2
MM-Embeds AVPersona3.1 At MLoc3.1 With SA
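How the Virtual Meeting Secretary could assemble the Summary from utterances and Personal Statuses is sketched below in Python; the data shapes and status labels are assumptions made for this example, not normative formats:

from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    speaker: str          # e.g., "AVPersona2.1"
    text: str
    personal_status: str  # e.g., "engaged", "doubtful"

def summarize(utterances: List[Utterance]) -> str:
    """Toy Summary1.1 builder: one line per utterance with Personal Status."""
    return "\n".join(f"{u.speaker} ({u.personal_status}): {u.text}"
                     for u in utterances)

minutes = [
    Utterance("AVPersona2.1", "Let's review the budget.", "engaged"),
    Utterance("AVPersona3.1", "Ich habe eine Frage.", "doubtful"),
]
# The resulting Summary is MM-Embedded at MLoc1.3 for participants to comment on.
print(summarize(minutes))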

4.3      Actions and Items

Table 7 gives the list of Actions and Items used by the Virtual Meeting Use Case. For simplicity, the Actions implied by the Track Action have not been added to the Table.

Table 7 – Virtual Meeting Actions, Items, and Data Types.

Actions: Author, Interpret, MM-Animate, MM-Disable, MM-Embed, MM-Send, Track.

Items and Data Types: Coordinates, AVPersona, AVObject, Event, Orientation, Position, Spatial Attitude, Summary.

5       Hybrid working

5.1      Description

A company applies a mixed in-presence and remote working policy.

  1. Workers attend the Company
    1. Physically (R-Workers).
    2. Virtually (V-Workers).
  2. All Workers
    • Are Authenticated.
    • Are also present in the Virtual office if physically present.
    • Communicate by sharing AV messages (Communication of R-Workers’ Personae is also mapped to the M-Environment).
    • Participate in Virtual meetings where a whiteboard is placed.

5.2      MMM-Script representation

Declarations

Declare User1 // Company manager //
AVObject1.1 // Office //
MLoc1.1 // Office Location //
AVPersona1.1 // Office Gatekeeper //
MLoc1.2 // Place for Gatekeeper //
Declare Process1 //Animates Office Gatekeeper //
Declare User2 // R-Worker //
AVPersona2.1 // R-Worker’s Persona (R-Persona) //
MLoc2.1 // Home (R-Worker) //
MLoc2.2 // Place of R-Worker’s Office desk //
MLoc2.3 // Place in meeting room //
AVObject2.1 // Whiteboard //
MLoc2.4 // Place for Whiteboard //
Declare Process2 // Animates Whiteboard //
Declare User3 // V-Worker #1 //
AVPersona3.1 // V-Worker’s Persona (V-Persona) //
MLoc3.1 // V-Worker’s home //
MLoc3.2 // Place for V-Worker’s desk //
SObject3.1 // Speech Object//
MLoc3.3 // Place close to R-Worker’s virtual desk //
MLoc3.4 // Place#7 in meeting room //

Operation

ProcessA Action Item Secondary Item or Process
User1 MM-Embeds AVObject1.1 At MLoc1.1 With SA
MM-Embeds AVPersona1.1 At MLoc1.2 With SA
Process1 MM-Animates AVPersona1.1
User2 Tracks AVPersona2.1 At MLoc2.1 With SA
User1 Authenticates human2 At User1
User3 Tracks AVPersona3.1 At MLoc3.1 With SA
MM-Embeds AVPersona3.1 At MLoc3.2 With SA
MM-Sends SObject3.1 To User2
MM-Embeds AVPersona3.1 At MLoc3.3 With SA
MM-Disables AVPersona3.1 From MLoc3.2
MM-Embeds AVPersona3.1 At MLoc3.4 With SA
MM-Disables AVPersona3.1 From MLoc3.3
User2 MM-Embeds AVPersona2.1 At MLoc2.2 With SA
MM-Disables AVPersona2.1 From MLoc2.1
MM-Embeds AVObject2.1 At MLoc2.4 With SA
MM-Disables AVPersona2.1 From MLoc2.2
User3 MM-Embeds AVPersona3.1 At MLoc3.1 With SA
MM-Disables AVPersona3.1 From MLoc3.4
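The policy that every Authenticated Worker, physical or virtual, is also represented in the Virtual Office can be sketched as simple presence bookkeeping; the Python class and method names below are illustrative assumptions:

class Office:
    """Toy presence model for the mixed in-presence/remote policy."""
    def __init__(self):
        self.present = {}             # worker id -> "R" (physical) or "V" (virtual)
        self.virtual_office = set()   # Personae embedded in the Virtual Office

    def check_in(self, worker: str, mode: str) -> None:
        assert mode in ("R", "V")
        self.present[worker] = mode
        # Both R- and V-Workers get a Persona MM-Embedded in the Virtual Office.
        self.virtual_office.add(f"AVPersona-{worker}")

office = Office()
office.check_in("User2", "R")   # R-Worker at their physical desk
office.check_in("User3", "V")   # V-Worker from home
print(sorted(office.virtual_office))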

5.3      Actions, Items, and Data Types

Table 8 – Hybrid Working Actions and Items

Actions: MM-Animate, MM-Disable, MM-Embed, MM-Send, Track.

Items: AObject, AVHObject, AVPersona, Coordinates, M-Location, Orientation, Position, Spatial Attitude.

6       eSports Tournament

6.1      Description

  1. Site manager
    • Develops a game landscape.
    • Makes it available to a game manager.
  2. Game manager
    • Deploys autonomous characters.
    • Places virtual cameras and microphones in the landscape.
  3. Captured AV from the game landscape is displayed on a dome screen and streamed online.

6.2      MMM-Script representation

Declarations

Declare User1 // Site Manager //
AVHObject1 // Game landscape //
MLoc1.1 // Game Location //
Declare Service1 // Author Service //
Declare User2 // Game manager //
Value1 // Game Location Renting Fees //
Personae2.i //Autonomous characters //
M-Loc2.i // Places in Game landscape //
Scene2.1 //Game’s Scene //
Declare Userj // Players //
Personaej.1 // Players’ characters //
M-Locj.1 // Location in Game landscape //
Declare Processi // Animates i-th Autonomous character //
Device1 // Microphone/Camera control //
Declare Service2 // Operates Microphone/Camera control //
Declare Device2 //Dome screen //
Declare Devicek // Online Device of human //

Operation

ProcessA Action Item Secondary Item or Process
User1 Authors AVHObject1 By Service1 With Data At Service1
MM-Embeds AVHObject1 From Service1 At MLoc1.1 With SA
User2 Transacts Value1 To User1
MM-Embeds Personae2.i At M-Loc2.i With SA
MM-Animates Personae2.i
Userj Tracks Personaej.1 At M-Locj.1 With SA
User2 Calls Service2 To Device1
User2 MU-Renders Scene2.1 At Device2
MU-Renders Scene2.1 At Devicek
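The last two rows render the same Scene on the dome screen and on the online Devices. A minimal fan-out sketch follows (Python; the function and device names are illustrative assumptions):

from typing import Callable, List

def mu_render(scene: str, devices: List[str],
              send: Callable[[str, str], None]) -> None:
    """Toy MU-Render fan-out: one captured Scene, many rendering Devices."""
    for device in devices:
        send(scene, device)

targets = ["Device2 (dome screen)"] + [f"Device{k} (online)" for k in range(3, 6)]
mu_render("Scene2.1", targets, lambda s, d: print(f"{s} -> {d}"))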

6.3      Actions, Items, and Data Types

Table 9 – eSports Tournament Actions, Items, and Data Types.

Actions: Author, MM-Animate, MM-Embed, MU-Render, Track, Transact.

Items and Data Types: AVHObject, AVHPersona, AVHScene, Coordinates, Currency, M-Location, Orientation, Position, Spatial Attitude, U-Location, Value.

7       Virtual performance

7.1      Description

  1. Impresario:
    • Acquires Rights to parcel.
    • Authors Auditorium.
    • Embeds Auditorium on Parcel.
  2. Participant
    • Buys a ticket for an event with the right to stay close to the performance stage for 5 minutes.
    • Utters a private speech to another participant.
  3. Impresario:
    • Collects participants’ preferences.
    • Interprets participants’ mood (Participants Status).
    • Generates special effects based on preferences and Participants Status.

7.2      MMM-Script representation

Declarations

Declare User1 // Impresario //
  Value1.1 // Payment for Land Parcel //
  AVObject1.1 // Auditorium //
  Value1.2 // Payment for Auditorium authoring //
  M-Loc1.1 // Parcel //
  AObject1.i // SFX //
  M-Loc1.i // SFX Places in Auditorium //
  Value1.3 // Consideration for Performance //
PersonalStatus1.i // Status of i-th event participant //
Declare Service1 // Content Authoring //
Declare Service2 // Preference Collection //
Declare Service3 // Parcel Service //
Declare User2 // Performer //
  AVPersona2.1 // Performer’s Persona //
  M-Loc2.1 // Performer’s home //
  M-Loc2.2 // Stage in Auditorium //
Declare User3 // Participant #1 //
  AVPersona3.1 // Participant#1’s Persona //
  M-Loc3.1 // Home //
  M-Loc3.2 // Seat#1 in Auditorium //
  Scene3.1 // Scene of Stage //
SObject3.1 // Speech Object //
  Value3.1 // Ticket#1 to event //
Declare User4 // Participant#2 //
  AVPersona4.1 // Participant#2’s Persona //
  M-Loc4.1 // Participant#2’s Home //
  M-Loc4.2 // Seat#2 in Auditorium //
  Value4.1 // Ticket#2 to event //
Declare User5 // Land Parcel owner //

Operation

ProcessA Action Item Secondary Item or Process
User1 Transacts Value1.1 To Service3
Authors AVObject1.1 By Service1 With Data At Service1
Transacts Value1.2 To Service1
MM-Embeds AVObject1.1 From Service1 At M-Loc1.1 With SA
Calls Service2 At Service2
User2 Tracks AVPersona2.1 At M-Loc2.1 With SA
MM-Embeds AVPersona2.1 At M-Loc2.2 With SA
MM-Disables AVPersona2.1 From M-Loc2.1
User3 Tracks AVPersona3.1 At M-Loc3.1 With SA
Transacts Value3.1 To User1
MM-Embeds AVPersona3.1 At M-Loc3.2 With SA
MM-Disables AVPersona3.1 From M-Loc3.1
User4 Tracks AVPersona4.1 At M-Loc4.1 With SA
Transacts Value4.1 To User1
MM-Embeds AVPersona4.1 At M-Loc4.2 With SA
MM-Disables AVPersona4.1 From M-Loc4.1
User3 MM-Sends SObject3.1 To User4
Calls Service2 At Service2
MM-Sends Scene3.1 To User3
User1 Calls Service2 At User1
Interprets PersonalStatus1.i At User1
MM-Embeds AObject1.i At M-Loc1.i With SA
Transacts Value1.3 To User2
User2 MM-Embeds AVPersona2.1 At M-Loc2.1 With SA
MM-Disables AVPersona2.1 From M-Loc2.2
User3 MM-Embeds AVPersona3.1 At M-Loc3.1 With SA
MM-Disables AVPersona3.1 From M-Loc3.2
User4 MM-Embeds AVPersona4.1 At M-Loc4.1 With SA
MM-Disables AVPersona4.1 From M-Loc4.2

7.3      Actions and Items

Table 10 – Virtual Performance Actions and Items.

Actions: Author, Interpret, MM-Disable, MM-Embed, MM-Send, Track, Transact.

Items: AObject, AVObject, AVPersona, Coordinates, Currency, M-Location, Orientation, Participants Status, Position, Spatial Attitude, Value.

8       AR Tourist Guide

8.1      Description

In this Use Case human3 (AR Tourist Guide Service Provider) engages the following humans:

  1. human1 to cause their User1 to buy a virtual parcel and develop a virtual landscape suitable for a tourist application.
  2. human2 to cause their User2 to develop scenes and autonomous agents for the different places of the landscape.
  3. human4 to create an App that alerts the holder of a smartphone running the App when the holder reaches selected U-Locations.
  4. human5, holding a smartphone with the App, to perceive Entities and interact with Personae MM-Embedded at M-Locations and MM-Animated.

8.2      MMM-Script representation

Declarations

Declare User1 // Virtual Land developer//
  MLoc1.1 // Land Parcel //
  AVObject1.1 // Landscape //
  Value1.1 // Payment for Land Parcel //
Declare Service1 // Authoring Service //
Declare User2 // Object developer //
  AVObject2.i // Objects for landscape //
  MLoc2.i // Where Objects are placed //
  Value2.1  // Payment for AVObjects2.i //
Declare User3 // Tourist application developer //
Value3.1 // Payment for populated landscape //
  Persona3.i // Personae to be MM-Animated //
MLoc3.i // MLocs corresponding to ULoc3.i //
ULoc3.i // ULocs where the App reacts //
Declare human4 // Software developer //
Map4.1 // Universe Metaverse Map for mobile app //
  Value4.1 // Payment for Map and App //
Declare human5 // Holds the Device running human4’s App //
User5 // User of human5 //
Declare Device1 // human5’s smartphone //
Declare App1 // Installed on Device1 //
  Message1.1 // From App1 to User5 //
Declare User6 // Land Parcel Rights holder //

Operation

ProcessA Action Item Secondary Item or Process
User1 Transacts Value1.1 To User6
Authors AVObject1.1 At Service1
MM-Embeds AVObject1.1 At MLoc1.1 With SA
User2 Transacts Value2.1 To User1
Authors AVObject2.i At Service1
MM-Embeds AVObject2.i From Service1 At MLoc2.i With SA
User3 Transacts Value3.1 To User2
Authors Personae3.i At Service1
MM-Embeds Personae3.i From Service1 At MLoc3.i With SA
MM-Animates Personae3.i At MLoc3.i
human4 develops Map4.1
develops App1
sells Map4.1 and App1 To human3
human5 arrives At ULoc3.i
App1 MM-Sends Message1.1 To User5
User3 MM-Animates Persona3.i At MLoc3.i
User5 MU-Renders Persona3.i At ULoc3.i
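The App’s behaviour, reacting when human5 reaches a ULoc3.i that the Map associates with an MLoc3.i, amounts to a geofence check. A sketch follows (Python; the coordinates, radius, and Map encoding are illustrative assumptions, not the normative Map format):

import math

# Hypothetical Universe-Metaverse Map: ULoc coordinates -> corresponding MLoc
MAP4_1 = {("ULoc3.1", 45.0703, 7.6869): "MLoc3.1",
          ("ULoc3.2", 45.0628, 7.6781): "MLoc3.2"}

def check_arrival(lat: float, lon: float, radius_m: float = 50.0):
    """Return the (ULoc, MLoc) pair to activate when the Device enters a mapped ULoc."""
    for (uloc, u_lat, u_lon), mloc in MAP4_1.items():
        # Equirectangular approximation is adequate at city scale.
        dx = (lon - u_lon) * 111_320 * math.cos(math.radians(u_lat))
        dy = (lat - u_lat) * 111_320
        if math.hypot(dx, dy) <= radius_m:
            return uloc, mloc   # App1 MM-Sends Message1.1; Persona3.i is then animated
    return None

print(check_arrival(45.0703, 7.6869))   # ('ULoc3.1', 'MLoc3.1')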

8.3      Actions and Items

Table 11 – AR Tourist Guide Actions and Items.

Actions: Author, MM-Animate, MM-Embed, MM-Send, MU-Render, Transact.

Items: Coordinates, Currency, Map, Message, M-Location, Object(AV), Object(V), Orientation, Persona, Position, Service, Spatial Attitude, U-Location, Value.

9       Virtual Dance

9.1      Description

This Use Case envisages that:

  1. Dance teacher places their virtual secretary Persona animated by an autonomous agent in the dance school.
  2. Student #1:
    • Shows up at school.
    • Greets the secretary.
  3. Virtual secretary reciprocates greetings.
  4. Dance teacher:
    • Places a haptic Persona of theirs in the dance school.
    • Dances with student #1.
  5. Student #2:
    • Is at home.
    • Shows up at school.
  6. Teacher:
    • Places their haptic Persona close to student #2.
    • Places another haptic Persona of theirs close to student #1, replacing the first one.
    • Animates the new haptic Persona with an autonomous agent that dances with student #1.
    • Dances with student #2.

9.2      MMM-Script representation

Declarations

Declare User1 // Dance teacher //
  AVHPersona1.1 // Dancing persona#1 //
  MLoc1.1 // Place#1 (Teacher’s Office) //
  AVHPersona1.2 // School Secretary //
  MLoc1.2 // Place#2 (Dancing School) //
AVHPersona1.3 // Dancing persona#2 //
  MLoc1.3 // Place#3 (dancing area) //
  SObject1.1 // Speech Object (Greetings) //
  MLoc1.4 // Place#4 (dancing area) //
Declare User2 // Dance student #1 //
  AVHPersona2.1 // Student#1’s Persona //
  MLoc2.1 // Student#1’s home //
  MLoc2.2 // Place#5 in dancing area //
Declare User3 // Dance Student #2 //
  AVHPersona3.1 // Student’s Persona //
  MLoc3.1 // Dance Student#2’s home //
  MLoc3.2 // Place#6 in dancing area //

Operation

ProcessA Action Item Secondary Item or Process
User1 Tracks AVHPersona1.1 At MLoc1.1 With SA
MM-Embeds AVHPersona1.2 At MLoc1.2 With SA
MM-Animates AVHPersona1.2
User2 Tracks AVHPersona2.1 At MLoc2.1 With SA
MM-Embeds AVHPersona2.1 At MLoc2.2 With SA
MM-Disables AVHPersona2.1 From MLoc2.1
User1 MM-Embeds AVHPersona1.1 At MLoc1.3 With SA
User3 Tracks AVHPersona3.1 At MLoc3.1 With SA
MM-Embeds AVHPersona3.1 At MLoc3.2 With SA
MM-Disables AVHPersona3.1 From MLoc3.1
User1 Tracks AVHPersona1.1 At MLoc1.4 With SA
MM-Disables AVHPersona1.1 From MLoc1.3
MM-Embeds AVHPersona1.3 At MLoc1.3 With SA
MM-Animates AVHPersona1.3
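The hand-over in the last rows, from a Persona Tracked by the teacher to one MM-Animated by an autonomous agent, can be sketched as switching the animation source (Python; the class and attribute names are illustrative assumptions):

class Persona:
    """Toy model of a Persona's animation source."""
    def __init__(self, name: str):
        self.name = name
        self.source = None    # "track" (driven by the human) or "agent"

    def track(self):          # animated by the human's captured motion
        self.source = "track"

    def mm_animate(self):     # driven by an autonomous agent
        self.source = "agent"

p11, p13 = Persona("AVHPersona1.1"), Persona("AVHPersona1.3")
p11.track()                   # teacher dances with student #1
# Student #2 arrives: the teacher moves and the agent takes over with student #1.
p11.track()                   # now tracked at MLoc1.4, close to student #2
p13.mm_animate()              # autonomous agent dances with student #1
print(p11.source, p13.source) # track agent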

9.3      Actions and Items

Table 12 – Virtual Dance Actions and Items.

Actions: MM-Animate, MM-Disable, MM-Embed, MM-Send, Track.

Items: AObject, AVHPersona, AVPersona, M-Location, Orientation, Position, Spatial Attitude.

10   Virtual Car Showroom

10.1   Description

This Use Case envisages that:

  1. A car dealer MM-Embeds an MM-Animated Persona in the car showroom (as attendant).
  2. A customer:
    • MM-Embeds its Persona in the car showroom.
    • Greets the showroom attendant.
  3. The Showroom attendant reciprocates the greeting.
  4. The dealer:
    • UM-Animates the attendant.
    • Converses with the customer.
    • Embeds a 3D AVH model of a car.
  5. The customer
    • Has a virtual test drive.
    • Buys the car.
    • Returns home.

10.2   MMM-Script representation

Declarations

Declare User1 // Car dealer //
  AVPersona1.1 // Car dealer’s Persona //
  MLoc1.1 // Place#1 (Car dealer’s Office) //
  AVPersona1.2 // Showroom attendant //
  MLoc1.2 // Place#2 (in Showroom) //
  AObject1.1 // Greetings //
  MLoc1.3 // Place#3 (in Showroom) //
  AVHModel1.1 // 3D Model of car //
Declare User2 // Customer //
  AVPersona2.1 // Customer’s Persona //
  MLoc2.1 // Customer’s home //
  MLoc2.2 // Place#4 (in Showroom) //
  AVHPersona2.1 // User2’s Persona for test driving //
  MLoc2.3 // Place#5 (in virtual car) //
  Value2.1 // Payment for car //
ULoc2.1 // U-Place#1 (U-Location of Customer) //

Operation

ProcessA Action Item Secondary Item or Process
User1 Tracks AVPersona1.1 At MLoc1.1 With SA
MM-Embeds AVPersona1.2 At MLoc1.2 With SA
MM-Animates AVPersona1.2
User2 Tracks AVPersona2.1 At MLoc2.1 With SA
MM-Embeds AVPersona2.1 At MLoc2.2 With SA
MM-Disables AVPersona2.1 From MLoc2.1
User1 MM-Sends AObject1.1 To User2
MM-Embeds AVPersona1.1 At MLoc1.3 With SA
MM-Disables AVPersona1.1 From MLoc1.1
MM-Embeds AVHModel1.1 At MLoc2.3 With SA
MM-Animates AVHModel1.1
User2 MM-Embeds AVPersona2.1 At MLoc2.3 With SA
MM-Disables AVPersona2.1 From MLoc2.2
Transacts Value2.1 To User1
MM-Embeds AVPersona2.1 At MLoc2.1 With SA
MM-Disables AVPersona2.1 From MLoc2.3
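The “Transacts Value2.1 To User1” row is a Value transfer between Users. A toy ledger sketch follows (Python; the amounts and account handling are illustrative assumptions, not a normative Transact implementation):

class Ledger:
    """Toy account book for the Transact Action."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def transact(self, value: int, frm: str, to: str) -> None:
        # Move Value from one User's account to another's.
        if self.balances[frm] < value:
            raise ValueError("insufficient Value")
        self.balances[frm] -= value
        self.balances[to] += value

ledger = Ledger({"User1": 0, "User2": 30_000})
ledger.transact(25_000, "User2", "User1")   # Value2.1: payment for the car
print(ledger.balances)                      # {'User1': 25000, 'User2': 5000}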

10.3   Actions and Items

Table 13 – Virtual Car Showroom Actions and Items

Actions: MM-Animate, MM-Disable, MM-Embed, MM-Send, Track, Transact.

Items: Currency, AObject, Orientation, AVPersona, AVHPersona, Position, AVHScene, Spatial Attitude, Value.

11   Drive a Connected Autonomous Vehicle

11.1   Description

This Use Case considers some of the steps made by a human having rights to an implementation of Technical Specification: Connected Autonomous Vehicle (MPAI-CAV) – Architecture and Technologies. The Use Case assumes that there are two CAVs, CAVA and CAVB, and that the CAVA rights holder (acting through UserA.1) wants to see the CAVB Environment in the CAVB M-Instance and have their Persona join CAVB’s cabin.

The human CAVA rights holder Registers with CAVA to access the CAV-created M-Instance by providing:

  1. The requested subset of their Personal Profile.
  2. Two User Processes required to operate a CAV:
    • UserA.1 to operate the Human-CAV Interaction Subsystem.
    • UserA.2 to operate the Autonomous Motion Subsystem.
  3. UserA.1’s PersonaA.1.1 (representing the human CAV rights-holder).

The workflow then progresses as follows (a sketch of the HCI-AMS message exchange follows the list):

  1. UserA.1
    • Authenticates the human’s voice.
    • Interprets driving instructions from human.
    • Communicates driving instructions to UserA.2.
  2. UserA.2
    • Gets information about CAVA.
    • Gets travel options from Route Planner.
    • Communicates travel options to UserA.1.
  3. UserA.1
    • Produces Speech Object with travel options.
  4. human
    • utters selected option to UserA.1.
  5. UserA.1
    • Interprets driving instructions from human.
    • Communicates driving instructions to UserA.2.
  6. UserA.2
    • Gets the Basic Environment Representation from its ESS.
    • Authenticates its peer UserB.2 in CAVB.
    • Gets elements of CAVB‘s Full Environment Representation from UserB.2 in CAVB.
    • Produces Full Environment Representation.
    • Sends a command to the Ego CAV’s Motion Actuation Subsystem.
  7. UserA.1
    • Authenticates its peer UserB.1 in CAVB.
    • Watches CAVB’s Environment.
    • MM-Animates PersonaA.1.1 in CAVB‘s cabin.
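A minimal sketch of the HCI-AMS request-response exchange in steps 1 to 6 (Python; the message classes are illustrative stand-ins, not the normative MPAI-CAV data formats):

from dataclasses import dataclass
from typing import List

@dataclass
class HCIAMSMessage:          # e.g., HCI-AMSMessageA.1.1: travel request
    destination: str

@dataclass
class AMSHCIMessage:          # e.g., AMS-HCIMessageA.1.2: travel options
    route_options: List[str]

class AMS:
    """Toy Autonomous Motion Subsystem (UserA.2)."""
    def handle(self, msg: HCIAMSMessage) -> AMSHCIMessage:
        # A real AMS would query the Route Planner with the current Scene.
        return AMSHCIMessage([f"{msg.destination} via highway",
                              f"{msg.destination} via scenic road"])

hci_request = HCIAMSMessage("Piazza Castello")   # from the interpreted speech
options = AMS().handle(hci_request)              # UserA.2 responds
print(options.route_options)                     # uttered back to the human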

11.2   MMM-Script representation

Declarations

Declare  humanA.1 // CAVA’s rights holder //
Declare  DeviceA.1 // Audiovisual sensor and actuator //
Declare  UserA.1 // CAVA’s HCI //
1.      SceneA.1.1 // Scene at ULocA.1.1 //
2.      DataMdataA.1.1 // Data and Metadata of the scene captured by DeviceA.1 //
3.      Object(AV)A.1.1 // AV Object used to Authenticate humanA.1 //
4.      Object(A)A.1.1 // Speech Object #1 requesting Routes //
5.      HCI-AMSMessageA.1.1 // Travel request to UserA.2 //
6.      ULocA.2.1 // Place where CAVA is located //
7.      MLocA.2.1 // M-Location corresponding to ULocA.2.1 //
8.      SceneA.2.1 // Scene at MLocA.2.1 //
9.      AMS-HCIMessageA.1.2 // Travel response to UserA.1 //
10.  Object(A)A.1.2 // Speech Object #2 selecting Route //
11.  HCI-AMSMessageA.1.3 // Travel selection to UserA.2 //
12.  EgoRemoteHCIMessageA.1.1 // Request to MM-Embed Avatar //
Declare  Route PlannerA.1 // CAV Process //
Declare  Path PlannerA.1 // CAV Process //
Declare  Motion PlannerA.1 // CAV Process //
Declare  Obstacle AvoiderA.1 // CAV Process //
Declare  Command IssuerA.1 // CAV Process //
Declare  UserA.2 // CAVA’s AMS //
1.      SceneA.2.1 // CAVA’s Environment //
2.      EgoRemoteAMSMessageA.1.1 // Request Environment Descriptors //
Declare UserB.2 // CAVB’s AMS //
1.      EgoRemoteAMSMessageA.1.2 // Environment Descriptors //
2.      SceneB.2.1 // CAVB’s scene in ULocB.1.1 – CAVB’s Environment//
Declare  UserB.1 // CAVB’s HCI //
1.      SceneB.1.1 // CAVB’s scene in ULocB.1.2 – cabin//
2.      MLocB.1.1 // M-Location corresponding to ULocB.1.1 //

Operation

ProcessA Action Item Secondary Item or Process
humanA.1 Registers With CAVA
UserA.1 UM-Captures SceneA.1.1 At DeviceA.1
UM-Sends DataMdataA.1.1 From DeviceA.1 To UserA.1
Identifies SceneA.1.1 At UserA.1
Authenticates Object(AV)A.1.1 At UserA.1
Interprets Object(A)A.1.1 At UserA.1
MM-Sends HCI-AMSMessageA.1.1 To UserA.2
UserA.2 MM-Sends SceneA.2.1 To Route PlannerA.1
MM-Sends AMS-HCIMessageA.1.2 To UserA.1
UserA.1 Interprets Object(A)A.1.2 At UserA.1
MM-Sends HCI-AMSMessageA.1.3 To UserA.2
UserA.2 Authenticates UserB.2 At UserA.2
MM-Sends EgoRemoteAMSMessageA.1.1 To UserB.2
UserB.2 MM-Sends EgoRemoteAMSMessageA.1.2 To UserA.2
UserA.2 MM-Sends SceneA.2.2 To Path PlannerA.1
Path PlannerA.1 MM-Sends PathA.2.1 To Motion PlannerA.1
Motion PlannerA.1 MM-Sends TrajectoryA.2.1 To Obstacle AvoiderA.1
Obstacle AvoiderA.1 MM-Sends TrajectoryA.2.1 To Command IssuerA.1
Command IssuerA.1 MM-Sends AMS-MASMessageA.2.1 To MASA
MASA MM-Sends AMS-MASMessageA.2.2 To Command IssuerA.1
UserA.1 Authenticates UserB.1 At UserA.1
MM-Sends EgoRemoteHCIMessageA.1.1 To UserB.1
UserB.1 MM-Sends EgoRemoteHCIMessageA.1.2 To UserA.1

11.3   Actions, Items, and Data Types

Note: The Table includes the MPAI-CAV-specific Items; additional application-dependent Items and Processes may be needed.

Table 14 – Drive a Connected Autonomous Vehicle Actions, Items, and Data Types.

Actions: Authenticate, Interpret, MM-Embed, MM-Send, MU-Render, Register, Request, Track, UM-Render.

Items: AMS-HCI Message, AMS-MAS Message, Environment Representation, Ego-Remote HCI Message, Ego-Remote AMS Message, M-Location, Audio Object, Audio-Visual Object, Path, Persona, Route, Scene, Trajectory.

Data Types: Spatial Attitude, Coordinates, Orientation, Position.
