1      Introduction

2      Use Case Description Language

3      Virtual Lecture

3.1  Description

3.2  MMM-Script representation

3.3  Actions, Items, and Data Types

4      Virtual Meeting

4.1  Description

4.2  MMM-Script representation

4.3  Actions, Items, and Data Types

5      Hybrid Working

5.1  Description

5.2  MMM-Script representation

5.3  Actions, Items, and Data Types

6      eSports Tournament

6.1  Description

6.2  MMM-Script representation

6.3  Actions, Items, and Data Types

7      Virtual Performance

7.1  Description

7.2  MMM-Script representation

7.3  Actions, Items, and Data Types

8      AR Tourist Guide

8.1  Description

8.2  MMM-Script representation

8.3  Actions, Items, and Data Types

9      Virtual Dance

9.1  Description

9.2  MMM-Script representation

9.3  Actions, Items, and Data Types

10    Virtual Car Showroom

10.1     Description

10.2     MMM-Script representation

10.3     Actions, Items, and Data Types

11    Drive a Connected Autonomous Vehicle

11.1     Description

11.2     MMM-Script representation

11.3     Actions, Items, and Data Types

1       Introduction

This Informative Chapter collects diverse Metaverse Use Cases where Users perform, or request other Processes to perform, Actions on different types of Items. The goal of this Chapter is to show that the Metaverse elements of this Technical Specification do indeed support a range of representative Use Cases.

Note that, unless stated otherwise, a sentence like “A student attends a lecture held by a teacher in a classroom created by a school manager” means that “a User representing a student attends a virtual lecture in a virtual classroom Authored by a User representing a school manager and MM-Embedded at an M-Location”.

2       Use Case Description Language

Metaverse Use Cases involve a plurality of Processes – Users, Devices, Services, Apps – performing Actions on a variety of Items, or requesting other Processes to perform them.

In a Use Case:

  1. Processes (e.g., Users) are sequentially identified by one subscript.
  2. Items Acted on by a Process are identified by the subscript of the Process performing an Action on the Item followed by a sequential number.
  3. The Locations where the Actions take place are similarly identified by the subscript of the Process performing an Action at the Location followed by a sequential number.
  4. If the Actions are performed at different M-Instances, all Processes, Items, and Locations are prefixed by a sequential capital letter.

For instance:

  1. Useri MM-Embeds Personai.j at M-Locationi.k.
  2. Useri MU-Renders Entityi.j at U-Locationi.k.
  3. UserA.i MM-Sends ObjectA.i.j to UserB.k.

All Use Cases assume that Actions are performed in an M-Instance. When they are performed in the Universe, this is specifically mentioned.
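
The naming convention can be made concrete with a small data model. The sketch below is illustrative only – the class and field names are assumptions, not part of this Technical Specification – and shows how a reference such as UserA.1 or ObjectA.1.2 decomposes into M-Instance prefix, Process subscript, and Item number.

```python
# Illustrative sketch (not normative): a data model for MMM-Script references.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Ref:
    """A reference to a Process, Item, or Location.

    kind:     e.g. "User", "Persona", "Object", "MLoc"
    process:  subscript of the Process performing (or owning) the element
    item:     sequential number of the Item/Location Acted on by that Process;
              None when the reference names the Process itself
    instance: M-Instance prefix ("A", "B", ...), used only when the Use Case
              spans several M-Instances
    """
    kind: str
    process: int
    item: Optional[int] = None
    instance: Optional[str] = None

    def __str__(self) -> str:
        prefix = f"{self.instance}." if self.instance else ""
        suffix = f"{self.process}.{self.item}" if self.item else f"{self.process}"
        return f"{self.kind}{prefix}{suffix}"

# Example 3 above, "UserA.i MM-Sends ObjectA.i.j to UserB.k", with i=1, j=2, k=3:
sender = Ref("User", process=1, instance="A")
sent = Ref("Object", process=1, item=2, instance="A")
receiver = Ref("User", process=3, instance="B")
print(sender, "MM-Sends", sent, "To", receiver)  # UserA.1 MM-Sends ObjectA.1.2 To UserB.3
```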

The following abbreviations are used throughout:

MLoc M-Location
SA Spatial Attitude
ULoc U-Location

Note:   Persona(AV) is a Persona that can be audio-visually perceived; Object(AVH) is an Object that can be audio-visual-haptically perceived.

3       Virtual Lecture

3.1      Description

A student attends a lecture held by a teacher in a classroom created by a school manager:

  1. School Manager
    • Authors and embeds a virtual classroom.
    • Pays the teacher.
  2. Teacher
    • Is at home.
    • Embeds a persona of theirs from home at the classroom’s desk.
    • Embeds and animates a 3D Object.
    • Leaves the classroom.
  3. Student
    • Is at home.
    • Pays to attend a lecture and make a copy of their Experience.
    • Embeds a persona of theirs in the classroom.
    • Approaches the teacher’s desk to feel the 3D Object with haptic gloves.
    • Stores their lecture Experience.
    • Leaves the classroom and returns home.

3.2      MMM-Script representation

Declare: User1 // School manager //

  1. Object(V)1 //Classroom //
  2. MLoc1 // Place#1 (Classroom location) //
  3. Value1 // Lecture consideration //

Declare: User2 // Teacher //

  1. Persona(AV)1 // Teacher’s Persona //
  2. MLoc1 // Teacher’s home //
  3. MLoc2 // Place#2 (Classroom desk) //
  4. MLoc3 // Place#3 (Experiment Object) //
  5. Object(AVH)1 // Experiment Object //

Declare: User3 // Student //

  1. Persona(AV)1 // Student’s Persona //
  2. MLoc1 // Student’s home //
  3. MLoc2 // Classroom seat //
  4. Value1 // Lecture fees //
  5. Experience1 // Lecture Experience //
  6. Address1 // Address of Experience storage //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| Manager | Authors | Classroom | By AuthorService With Data At AuthorService |
| | MM-Embeds | Classroom | From AuthorService At Place#1 |
| Teacher | Tracks | Teacher | At Teacher’s home With SA |
| | Tracks | Teacher | At Place#2 With SA |
| | MM-Disables | Teacher | At Teacher’s home |
| | MM-Embeds | Exper. Object | At Place#3 |
| Student | Tracks | Student | At Student’s home With SA |
| | Transacts | Lecture fees | To School manager |
| | Tracks | Student | At Classroom seat With SA |
| | MM-Disables | Student | At Student’s home |
| Teacher | MM-Animates | Exper. Object | At Place#3 |
| Student | MM-Sends | Exper. Object | To Student |
| | MU-Sends | Lecture Exper. | To Address of Experience storage |
| Manager | Transacts | Lecture cons. | To Teacher |
| Teacher | MM-Disables | Teacher | At Place#1 |
| | MM-Enables | Teacher | At Teacher’s home |
| Student | MM-Disables | Student | At Place#2 |
| | MM-Enables | Student | At Student’s home |

3.3      Actions, Items, and Data Types

Table 6 gives the list of Actions, Items, and Data Types used by the Virtual Lecture Use Case. The Table also includes the Actions implied by the Track Composite Action (MM-Embed, MM-Animate, MM-Send, MU-Render, UM-Capture, MU-Send, and Identify); these implied Actions are not repeated in the following tables.
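
As an illustration of how the Actions column of Table 6 follows from the script in 3.2, the sketch below (an assumption, not a normative definition) expands an Action set by the Actions that Track implies.

```python
# Illustrative sketch (not normative): expand the Track Composite Action into
# the atomic Actions it implies, per the note above.
TRACK_IMPLIES = {
    "MM-Embed", "MM-Animate", "MM-Send", "MU-Render",
    "UM-Capture", "MU-Send", "Identify",
}

def expand_actions(actions: set[str]) -> set[str]:
    """Return the Action set with the Track Composite Action expanded."""
    expanded = set(actions)
    if "Track" in expanded:
        expanded |= TRACK_IMPLIES
    return expanded

# Some explicit Actions of the Virtual Lecture script plus the implied ones:
print(sorted(expand_actions({"Author", "Track", "Transact", "MM-Disable"})))
```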

Table 6 – Virtual Lecture Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| Author | Experience | Amount |
| Identify | M-Location | Coordinates |
| MM-Animate | Object(AVH) | Currency |
| MM-Disable | Object(V) | Spatial Attitude |
| MM-Embed | Persona(AV) | Value |
| MM-Send | U-Location | Orientation |
| MU-Render | Value | Position |
| MU-Send | | |
| UM-Capture | | |
| UM-Send | | |
| Track | | |
| Transact | | |

4       Virtual Meeting

4.1      Description

A meeting manager:

  1. Authors a meeting room.
  2. Deploys a Virtual Meeting Secretary tasked to produce a summary of the conversations, enriched with information about participants’ Personal Statuses (a sketch of this enrichment follows the lists below).

A participant:

  1. Attends a meeting held in the room.
  2. Gets a translation of sentences uttered in languages other than their own.
  3. Makes a presentation using a 3D model.
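
The Secretary’s enrichment step can be sketched as follows; the Utterance and PersonalStatus shapes are illustrative assumptions, since the Use Case does not prescribe a summary format.

```python
# Illustrative sketch (not normative): a Virtual Meeting Secretary enriching
# its summary of the conversations with participants' Personal Statuses.
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # e.g. "Participant#1"
    text: str      # recognised (and, where needed, translated) speech

@dataclass
class PersonalStatus:
    speaker: str
    emotion: str   # hypothetical label, e.g. "engaged", "skeptical"

def summarise(utterances: list[Utterance], statuses: list[PersonalStatus]) -> str:
    """One summary line per Utterance, tagged with the speaker's Personal Status."""
    status_of = {s.speaker: s.emotion for s in statuses}
    return "\n".join(
        f"[{u.speaker}, {status_of.get(u.speaker, 'unknown')}] {u.text}"
        for u in utterances
    )

print(summarise(
    [Utterance("Participant#2", "I propose we extend the deadline.")],
    [PersonalStatus("Participant#2", "engaged")],
))
```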

4.2      MMM-Script representation

Declare: User1 // Meeting manager //

  1. Object(V)1 // Meeting room //
  2. MLoc1 // Meeting location //
  3. Persona(AV)1 // Virtual Meeting Secretary //
  4. MLoc2 // Place#1 in room (for Virtual Meeting Secretary) //
  5. Summary1 // Meeting Summary //
  6. MLoc3 // Place#2 (for Summary display) //

Declare: User2 // Meeting participant #1 //

  1. Persona(AV)1 // Participant #1’s Persona //
  2. MLoc1 // Participant#1’s home //
  3. MLoc2 // Place#3 in room (for Participant#1) //
  4. Object(AVH)1 // Presentation //
  5. MLoc3 // Place#4 (For presentation display) //
  6. Event1 // Meeting’s recording //
  7. Address1 // Storage (for recording) //

Declare: Process1 // Animation Process //

Declare: User3 // Meeting participant #2 //

  1. Persona(AV)1 // Participant #2’s Persona //
  2. MLoc1 // Place#5 in room (for Participant#2) //
  3. Object(A)1 // Speech Object#1 (P#2) //
  4. Object(A)2 // Speech Object#2 (P#2) //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| Manager | MM-Embeds | Meeting room | At Meeting room location With SA |
| | MM-Embeds | Persona1.1 | At Place#1 With SA |
| | MM-Animates | Persona1.1 | |
| Participant#1 | Tracks | Persona2.1 (AV) | At Participant#1’s home With SA |
| | Tracks | Persona2.1 (AV) | At Place#3 With SA |
| | MM-Disables | Persona2.1 (AV) | From Participant#1’s home |
| Participant#2 | Tracks | Persona3.1 (AV) | At Participant#2’s home With SA |
| | Tracks | Persona3.1 (AV) | At Place#5 With SA |
| | MM-Disables | Persona3.1 (AV) | From Participant#2’s home |
| Participant#1 | Authenticates | Participant#2 | At Participant#1 |
| | Interprets | Speech Object#1 | At Participant#1 |
| | MM-Embeds | Presentation | At Place#4 With SA |
| | MM-Animates | Presentation | |
| Virtual Secretary | Interprets | Speech Object#2 | At Meeting manager |
| | MM-Embeds | Summary | At Place#2 With SA |
| Manager | MM-Disables | Persona1.1 | From Place#1 |
| Participant#1 | MU-Sends | Recording | To Storage |
| | MM-Embeds | Persona2.1 (AV) | At Participant#1’s home With SA |
| | MM-Disables | Persona2.1 (AV) | From Place#3 |
| Participant#2 | MM-Embeds | Persona3.1 (AV) | At Participant#2’s home With SA |
| | MM-Disables | Persona3.1 (AV) | From Place#5 |

4.3      Actions, Items, and Data Types

Table 7 gives the list of Actions, Items, and Data Types used by the Virtual Meeting Use Case. For simplicity, the Actions implied by the Track Action have not been added to the Table.

Table 7 – Virtual Meeting Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| Authenticate | Event | Coordinates |
| Interpret | Object(AV) | Orientation |
| MM-Animate | Object(V) | Position |
| MM-Disable | Persona(AV) | Spatial Attitude |
| MM-Embed | Summary | |
| MM-Send | | |
| Track | | |

5       Hybrid Working

5.1      Description

A company applies a mixed in-presence and remote working policy.

  1. Some Workers (R-Workers) attend Company physically.
  2. Some Workers (V-Workers) attend Company virtually.
  3. All Workers
    • Are Authenticated.
    • Are present in the Virtual office.
    • Communicate by sharing AV messages (Communication of R-Workers’ Personae is also mapped to the M-Environment).
    • Participate in Virtual meetings.

5.2      MMM-Script representation

Declare: User1 // Company manager //

  1. Object(V)1 // Office //
  2. MLoc1 // Office Location //
  3. Persona(AV)1 // Office Gatekeeper //
  4. MLoc2 // Place#1 (for Gatekeeper) //

Declare: Process1 //Animates Office Gatekeeper //

Declare: User2 // R-Worker //

  1. Persona(AV)1 //R-Worker’s Persona (R-Persona) //
  2. MLoc1 // Home (R-Worker) //
  3. MLoc2 // Place#2 (R-Worker’s Office desk) //
  4. MLoc3 // Place#3 (in meeting room) //
  5. Object(AVH)1 // Whiteboard //
  6. MLoc4 // Place#4 (for Whiteboard) //

Declare: Process2 // Animates Whiteboard //

Declare: User3 // V-Worker #1 //

  1. Persona(AV)1 // V-Worker’s Persona (V-Persona) //
  2. MLoc1 // V-Worker’s home //
  3. MLoc2 // Place#5 (V-Worker’s desk) //
  4. Object(A)1 // Speech Object //
  5. MLoc3 // Place#6 (close to R-Worker’s desk) //
  6. MLoc4 // Place#7 (in meeting room) //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| Manager | MM-Embeds | Office | At Office Location With SA |
| | MM-Embeds | Gatekeeper | At Place#1 With SA |
| | MM-Animates | Gatekeeper | |
| human2 | enters | Company | |
| R-Worker | Tracks | R-Persona | At Place#2 With SA |
| Gatekeeper | Authenticates | R-Persona | At Gatekeeper |
| V-Worker | Tracks | V-Persona | At home With SA |
| | MM-Embeds | V-Persona | At Place#5 With SA |
| | MM-Sends | Speech Object | To R-Worker |
| | MM-Embeds | V-Persona | At Place#6 With SA |
| | MM-Disables | V-Persona | From Place#5 |
| | MM-Embeds | V-Persona | At Place#7 With SA |
| | MM-Disables | V-Persona | From Place#6 |
| R-Worker | MM-Embeds | R-Persona | At Place#3 With SA |
| | MM-Disables | R-Persona | From Place#2 |
| | MM-Embeds | Whiteboard | At Place#4 With SA |
| | MM-Animates | Whiteboard | |
| | MM-Disables | R-Persona | From Place#3 |
| V-Worker | MM-Embeds | V-Persona | At home With SA |
| | MM-Disables | V-Persona | From Place#7 |

5.3      Actions, Items, and Data Types

Table 8 – Hybrid Working Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| Authenticate | Object(V) | Coordinates |
| MM-Animate | M-Location | Orientation |
| MM-Disable | Object(A) | Position |
| MM-Embed | Object(AVH) | Spatial Attitude |
| MM-Send | Persona(AV) | |
| Track | | |

6       eSports Tournament

6.1      Description

  1. Site manager
    • Develops a game landscape.
    • Makes it available to a game manager.
  2. Game manager
    • Deploys autonomous characters.
    • Places virtual cameras and microphones in the landscape.
  3. Captured AV from the game landscape is displayed on a dome screen and streamed online (see the fan-out sketch below).
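
A minimal sketch of this fan-out, assuming a hypothetical render() stand-in for MU-Render: the Scene captured by the virtual cameras and microphones is rendered once at the dome screen and once at each online Device.

```python
# Illustrative sketch (not normative): MU-Rendering one captured Scene at
# several Devices (the dome screen plus each online Device).
from dataclasses import dataclass

@dataclass
class Scene:
    frame_id: int
    payload: bytes  # encoded AV data from the virtual cameras/microphones

class Device:
    def __init__(self, name: str) -> None:
        self.name = name

    def render(self, scene: Scene) -> None:
        # Stand-in for an actual MU-Render at this Device.
        print(f"MU-Render frame {scene.frame_id} at {self.name}")

def mu_render_fanout(scene: Scene, devices: list[Device]) -> None:
    for device in devices:
        device.render(scene)

mu_render_fanout(Scene(1, b"..."), [Device("Dome screen"), Device("Online Device#1")])
```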

6.2      MMM-Script representation

Declare: User1 // Site Manager //

  1. Object(AVH)1 // Game landscape //
  2. MLoc1 // Game Location //

Declare: Service1 // Author Service //

Declare: User2 // Game manager //

  1. Value1 // Game Location Renting Fees //
  2. Personaei //Autonomous characters //
  3. M-Loci // Places in Game landscape //
  4. Scene1 //Game’s Scene //

Declare: Userj // Players //

  1. Personae1 //Players’ characters //
  2. M-Loc1 // Location in Game landscape //

Declare: Process2.i // Animates Autonomous character //

Declare: Service2 // Microphone/Camera control //

Declare: Device1 //Dome screen //

Declare: Devicek // Online Device of human //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| Site Mgr | Authors | Game landscape | By AuthorService With Data At AuthorService |
| | MM-Embeds | Game landscape | From AuthorService At Game Location With SA |
| Game Mgr | Transacts | Rental Fees | To Site Manager |
| | MM-Embeds | Auton. characters | At Places in Game landscape With SA |
| | MM-Animates | Auton. characters | |
| Player | Tracks | Players’ characters | At Places in Game landscape With SA |
| Dev. ctrl | Controls | Camera/mike | |
| Game Mgr | MU-Renders | Game’s Scene | At Dome screen |
| | MU-Renders | Game’s Scene | At Online devices |

6.3      Actions, Items, and Data Types

Table 9 – eSports Tournament Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| Author | Object(AVH) | Amount |
| MM-Animate | Persona(AVH) | Coordinates |
| MM-Embed | Scene(AVH) | Currency |
| MU-Render | M-Location | Orientation |
| Track | U-Location | Position |
| Transact | Value | Spatial Attitude |

7       Virtual Performance

7.1      Description

  1. Impresario:
    • Acquires Rights to parcel.
    • Authors Auditorium.
    • Embeds Auditorium on Parcel.
  2. Participant
    • Buys a ticket for an event with the right to stay close to the performance stage for 5 minutes.
    • Utters a private speech to another participant.
  3. Impresario:
    • Collects participants’ preferences.
    • Interprets participants’ mood (Participants Status).
    • Generates special effects based on preferences and Participants Status (a possible selection rule is sketched below).
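
A possible selection rule for the last step, shown as a sketch with an assumed mood-to-SFX table; the actual mapping is left open by the Use Case.

```python
# Illustrative sketch (not normative): picking an SFX from collected
# preferences and the interpreted Participants Status. The mood-to-SFX
# table below is an assumption made for the example.
from collections import Counter

SFX_FOR_MOOD = {
    "excited": "crowd-lift cue",
    "calm": "ambient pad",
}

def pick_sfx(preferences: list[str], moods: list[str]) -> str:
    """Majority preference and dominant mood select the SFX to MM-Embed."""
    preferred = Counter(preferences).most_common(1)[0][0]
    mood = Counter(moods).most_common(1)[0][0]
    sfx = SFX_FOR_MOOD.get(mood, "default cue")
    return f"Object(A) '{sfx}' themed for '{preferred}'"

print(pick_sfx(["rock", "rock", "jazz"], ["excited", "excited", "calm"]))
```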

7.2      MMM-Script representation

Declare: User1 // Impresario //

  1. Value1 // Payment for Land Parcel //
  2. Object(V)1 // Auditorium //
  3. Value2 // Payment for Auditorium authoring //
  4. Object(A)i // SFX //
  5. M-Locationi // SFX Places on Auditorium //
  6. Value3 // Consideration for Performance //
  7. Participants Status1 // Status of event participants //

Declare: Service1 // Content Authoring //

Declare: Service2 // Preference Collection //

Declare: User2 // Performer //

  1. Persona1 // Performer’s Persona //
  2. M-Loc1 // Performer’s home //
  3. M-Loc2 // Stage in Auditorium //

Declare: User3 // Participant #1//

  1. Persona1 // Participant#1’s Persona //
  2. M-Loc1 // Home //
  3. M-Loc2 // Seat#1 in Auditorium //
  4. Scene1 // Scene of Stage //
  5. Object(A)1 // Audio Object //
  6. Value1 // Ticket#1 to event //

Declare: User4 // Participant#2 //

  1. Persona1 // Participant#2’s Persona //
  2. M-Loc1 // Participant#2’s Home //
  3. M-Loc2 // Seat#2 in Auditorium //
  4. Value1 // Ticket#2 to event //

Declare: User5 // Land Parcel owner //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| Impresario | Transacts | Parcel payment | To Land Parcel owner |
| | Authors | Auditorium | By Authoring Service With Data At Authoring Service |
| | Transacts | Authoring Fees | To Authoring Service |
| | MM-Embeds | Auditorium | From Authoring Service At Parcel With SA |
| | Calls | Preference Service | At Preference Service |
| Performer | Tracks | Performer’s Persona | At Performer’s home With SA |
| | Tracks | Performer’s Persona | At Stage With SA |
| | MM-Disables | Performer’s Persona | From home |
| Participant#1 | Tracks | P#1’s Persona | At home With SA |
| | Transacts | Event’s Ticket#1 | To Impresario |
| | Tracks | P#1’s Persona | At Seat#1 With SA |
| | MM-Disables | P#1’s Persona | From home |
| Participant#2 | Tracks | P#2’s Persona | At home With SA |
| | Transacts | Event’s Ticket#2 | To Impresario |
| | MM-Embeds | P#2’s Persona | At Seat#2 With SA |
| | MM-Disables | P#2’s Persona | From home |
| Participant#1 | MM-Sends | Audio Object | To Participant#2 |
| | Calls | Preference Service | At Preference Service |
| | MM-Sends | Scene of Stage | To Participant#1 |
| Impresario | Calls | Preference Service | At Impresario |
| | Interprets | Participants Status1.1 | At Impresario |
| | MM-Embeds | SFXs | At Auditorium Places With SA |
| | Transacts | Performance Consid. | To Performer |
| Performer | MM-Embeds | Performer’s Persona | At Home With SA |
| | MM-Disables | Performer’s Persona | From Stage |
| Participant#1 | MM-Embeds | P#1’s Persona | At Home With SA |
| | MM-Disables | P#1’s Persona | From Seat#1 |
| Participant#2 | MM-Embeds | P#2’s Persona | At Home With SA |
| | MM-Disables | P#2’s Persona | From Seat#2 |

7.3      Actions, Items, and Data Types

Table 10 – Virtual Performance Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| Author | Object(A) | Amount |
| Interpret | Object(AV) | Coordinates |
| MM-Disable | Persona(AV) | Currency |
| MM-Embed | M-Location | Orientation |
| MM-Send | Value | Participants Status |
| Track | | Position |
| Transact | | Spatial Attitude |

8       AR Tourist Guide

8.1      Description

In this Use Case, human3 (AR Tourist Guide Service Provider) engages the following humans:

  1. human1 to cause their User1 to buy a virtual parcel and develop a virtual landscape suitable for a tourist application.
  2. human2 to cause their User2 to develop scenes and autonomous agents for the different places of the landscape.
  3. human4 to create an App that alerts the holder of a smartphone running the App when the smartphone approaches a mapped U-Location (a sketch of this lookup follows the list).
  4. human5 holding a smartphone with the App to perceive Entities and interact with Personae MM-Embedded at M-Locations and MM-Animated.
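
The alert of item 3 relies on the ULoc-MLoc Map. The sketch below assumes a simple planar distance model and hypothetical Map entries; a real App would use geodetic coordinates and the Map format agreed with human3.

```python
# Illustrative sketch (not normative): the App checks the Device's U-Location
# against the ULoc-MLoc Map and alerts when a mapped M-Location is in range.
import math

ULOC_MLOC_MAP = {       # hypothetical Map entries: ULoc (x, y) -> M-Location id
    (10.0, 20.0): "MLoc3.1",
    (55.0, 80.0): "MLoc3.2",
}
ALERT_RADIUS_M = 15.0

def nearby_mlocations(device_uloc: tuple[float, float]) -> list[str]:
    """M-Locations whose mapped U-Location lies within the alert radius."""
    return [
        mloc for uloc, mloc in ULOC_MLOC_MAP.items()
        if math.dist(device_uloc, uloc) <= ALERT_RADIUS_M
    ]

# human5 walks to (12, 25): the App would MM-Send a Message about MLoc3.1.
print(nearby_mlocations((12.0, 25.0)))
```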

8.2      MMM-Script representation

Declare: User1 // Virtual Land developer//

  1. MLoc1 // Land Parcel //
  2. Object(V)1 // Landscape //
  3. Value1 // Payment for Land Parcel //

Declare: Service1 // Authoring Service //

Declare: User2 // Object developer //

  1. Object(AV)i // Objects for landscape //
  2. MLoci // corresponding to U-Locations //
  3. Value1 // Payment for Objects(AV)2.i //

Declare: User3 // Tourist application developer //

  1. Personak // Persona to be MM-Animated //
  2. MLock // corresponding to U-Locations //

Declare: human4 // Software developer //

  1. Map // ULoc-MLoc map for mobile app //
  2. Value1 // Payment for Map and App//

Declare: human5 // human holding Device running human4’s App //

Declare: Device1 // Held by human5 //

  1. ULoc1 // U-Location of Device1 //

Declare: App1 //Installed on Device1 //

  1. Message1 // From App1 to Device1 //

Declare: User6 // Land Parcel Rights holder //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| User1 | Transacts | Payment for Land Parcel | To Parcel Rights Holder |
| | Authors | Tourist Landscape | By Authoring Service At Authoring Service |
| | MM-Embeds | Landscape | At Parcel With SA |
| | Transacts | Payment for Objects | To Object developer |
| User2 | Authors | Objects for landscape | At Authoring Service |
| | MM-Embeds | Objects for landscape | From Authoring Service At Landscape With SA |
| | Transacts | Payment for App | To Tourist application developer |
| human4 | develops | ULoc-MLoc Map | |
| | develops | App | |
| | sells | Map and App | To human3 |
| App developer | MM-Embeds | Personae | At MLocs corresponding to ULocs With SA |
| | MM-Animates | Personae | |
| human5 | comes | | To U-Location |
| App | MM-Sends | Message | To Device |
| Device | MM-Sends | Message | To App developer |
| App developer | MM-Animates | Persona | At M-Location |
| | MU-Renders | Animated Persona | At U-Location |

8.3      Actions, Items, and Data Types

Table 11 – AR Tourist Guide Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| Author | Object(AV) | Amount |
| MM-Animate | Object(V) | Coordinates |
| MM-Embed | Map | Currency |
| MM-Send | Message | Orientation |
| MU-Render | M-Location | Position |
| Transact | Persona | Spatial Attitude |
| | Service | |
| | U-Location | |
| | Value | |

9       Virtual Dance

9.1      Description

This Use Case envisages that:

  1. Dance teacher places their virtual secretary Persona animated by an autonomous agent in the dance school.
  2. Student #1:
    • Shows up at school.
    • Greets the secretary.
  3. Virtual secretary reciprocates greetings.
  4. Dance teacher:
    • Places a haptic Persona of theirs in the dance school.
    • Dances with student #1.
  5. Student #2:
    • Is at home.
    • Shows up at school.
  6. Teacher:
    • Places their haptic Persona close to student #2.
    • Places another haptic Persona of theirs close to student #1, replacing the previous one.
    • Animates the new haptic Persona with an autonomous agent dancing with student #1.
    • Dances with student #2.

9.2      MMM-Script representation

Declare: User1 // Dance teacher //

  1. Persona(AVH)1 // Dancing persona#1 //
  2. MLoc1 // Place#1 (Teacher’s Office) //
  3. Persona(AVH)2 // School Secretary //
  4. MLoc2 // Place#2 (Dancing School) //
  5. Persona(AVH)3 // Dancing persona#2 //
  6. MLoc3 // Place#3 (dancing area) //
  7. Object(A)1 // Speech Object#2 (Greetings) //
  8. MLoc4 // Place#4 (dancing area) //

Declare: User2 // Dance student #1 //

  1. Persona(AVH)1 // Student#1’s Persona //
  2. MLoc1 // Student#1’s home //
  3. MLoc2 // Place#5 (in dancing area) //

Declare: User3 // Dance Student #2 //

  1. Persona(AVH)1 // Student#2’s Persona //
  2. MLoc1 // Dance Student#2’s home //
  3. MLoc2 // Place#6 (in dancing area) //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| Teacher | Tracks | Persona#1 | At Home With SA |
| | Tracks | Persona#1 | At Place#1 With SA |
| | MM-Embeds | Persona#2 | At Place#2 With SA |
| | MM-Animates | Persona#2 | |
| Student#1 | Tracks | Student#1’s Persona | At Student#1’s Home With SA |
| | MM-Embeds | Student#1’s Persona | At Place#5 With SA |
| | MM-Disables | Student#1’s Persona | From Home |
| Teacher | Tracks | Teacher’s Persona#1 | At Place#3 With SA |
| Student#2 | Tracks | Student#2’s Persona | At Student#2’s Home With SA |
| | Tracks | Student#2’s Persona | At Place#6 With SA |
| | MM-Disables | Student#2’s Persona | From Student#2’s Home |
| Teacher | Tracks | Teacher’s Persona#1 | At Place#4 With SA |
| | MM-Disables | Teacher’s Persona#1 | From Place#3 |
| | MM-Embeds | Teacher’s Persona#3 | At Place#3 With SA |

9.3      Actions, Items, and Data Types

Table 12 – Virtual Dance Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| MM-Animate | M-Location | Orientation |
| MM-Disable | Object(A) | Position |
| MM-Embed | Persona(AV) | Spatial Attitude |
| MM-Send | Persona(AVH) | |
| Track | | |

10   Virtual Car Showroom

10.1   Description

This Use Case envisages that:

  1. A car dealer MM-Embeds an MM-Animated Persona in the car showroom (as attendant).
  2. A customer:
    • MM-Embeds their Persona in the car showroom.
    • Greets the showroom attendant.
  3. The Showroom attendant reciprocates the greeting.
  4. The dealer:
    • UM-Animates the attendant.
    • Converses with the customer.
    • Embeds a 3D AVH model of a car.
  5. The customer:
    • Has a virtual test drive.
    • Buys the car.
    • Returns home.

10.2   MMM-Script representation

Declare: User1 // Car dealer //

  1. Persona(AV)1 // Car dealer’s Persona //
  2. MLoc1 // Place#1 (Car dealer’s Office) //
  3. Persona(AV)2 // Showroom attendant //
  4. MLoc2 // Place#2 (in Showroom) //
  5. Object(A)1 // Greetings //
  6. MLoc3 // Place#3 (in Showroom) //
  7. Model(AVH)1 // 3D Model of car //

Declare: User2 // Customer //

  1. Persona(AV)1 // Customer’s Persona //
  2. MLoc1 // Customer’s home //
  3. MLoc2 // Place#4 (in Showroom) //
  4. Persona(AVH)1 // Customer’s Persona for test driving //
  5. MLoc3 // Place#5 (in virtual car) //
  6. Value1 // Payment for car //
  7. ULoc1 // U-Place#1 (U-Location of Customer) //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| Car dealer | Tracks | Dealer’s Persona#1 | At Place#1 With SA |
| | MM-Embeds | Dealer’s Persona#2 | At Place#2 With SA |
| | MM-Animates | Dealer’s Persona#2 | |
| Customer | Tracks | Customer’s Persona | At Home With SA |
| | Tracks | Customer’s Persona | At Place#4 With SA |
| | MM-Disables | Customer’s Persona | From Home |
| Car dealer | MM-Sends | Speech Object | To Customer |
| | MM-Embeds | Dealer’s Persona | At Place#3 With SA |
| | MM-Embeds | Car Model | At Place#5 With SA |
| | MM-Animates | Car Model | |
| Customer | Tracks | Customer’s Persona | At Place#5 With SA |
| | MM-Disables | Customer’s Persona | From Place#4 |
| | MU-Renders | Car Model | At U-Place#1 |
| | Transacts | Payment for car | To Dealer |
| | MM-Disables | Customer’s Persona | From Place#5 |
| | Tracks | Customer’s Persona | At Home With SA |

10.3   Actions, Items, and Data Types

Table 13 – Virtual Car Showroom Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| MM-Animate | Object(A) | Amount |
| MM-Disable | Persona(AV) | Currency |
| MM-Embed | Persona(AVH) | Orientation |
| MM-Send | Scene(AVH) | Position |
| Track | Value | Spatial Attitude |
| Transact | | |
| UM-Animate | | |

11   Drive a Connected Autonomous Vehicle

11.1   Description

This Use Case considers some of the steps made by a human having rights to an implementation of Technical Specification: Connected Autonomous Vehicle (MPAI-CAV) – Architecture [6]. Chapter 7 of Annex 1 – MPAI Basic provides a high-level summary of the specification.

A CAV rights holder Registers with the CAV to access the CAV-created M-Instance by providing:

  1. The requested subset of their Personal profile.
  2. Two User Processes required to operate a CAV:
    • User1 to operate the Human-CAV Interaction Subsystem.
    • User2 to operate the Autonomous Motion Subsystem.
  3. User1’s Personae.

For simplicity, the Use Case assumes that there are two CAVs, CAVA and CAVB, and that the CAVA rights holder (UserA.1) wants to see the CAVB Environment in the CAVB M-Instance:

  1. User1
    • Authenticates the human’s voice.
    • Interprets driving instructions from human.
    • Communicates driving instructions to User2.
  2. User2
    • Gets information about CAVA.
    • Gets travel options from Route Planner.
    • Communicates travel options to User1.
  3. User1
    • Produces Speech Object with travel options.
  4. human utters selected option to User1.
  5. User1
    • Interprets driving instructions from human.
    • Communicates driving instructions to User2.
  6. User2
    • Gets the Basic Environment Representation from its ESS.
    • Authenticates its peer User2.
    • Gets elements of the Basic Environment Representation from User2.
    • Produces Full Environment Representation.
    • Sends a command to the Ego CAV’s Motion Actuation Subsystem (this command/response flow is sketched after the list).
  7. User1
    • Authenticates its peer User2.
    • Watches CAVB’s Environment.
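
Before the script, the command/response flow between the two Users can be sketched as below; the class names and option strings are illustrative assumptions, not the normative MPAI-CAV message formats.

```python
# Illustrative sketch (not normative): the HCI (User1) / AMS (User2) exchange
# described above, with assumed message shapes and option strings.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HCIAMSCommand:            # HCI -> AMS
    kind: str                   # "travel-request" or "travel-selection"
    payload: str

@dataclass
class AMSHCIResponse:           # AMS -> HCI
    options: list[str]          # travel options obtained from the Route Planner

class AMS:
    def handle(self, cmd: HCIAMSCommand) -> Optional[AMSHCIResponse]:
        if cmd.kind == "travel-request":
            # Stand-in for querying the Route Planner.
            return AMSHCIResponse(options=["route via A", "route via B"])
        if cmd.kind == "travel-selection":
            # Stand-in for issuing an AMS-MASCommand to the
            # Motion Actuation Subsystem.
            print(f"AMS-MASCommand: drive '{cmd.payload}'")
        return None

ams = AMS()
response = ams.handle(HCIAMSCommand("travel-request", "go downtown"))
ams.handle(HCIAMSCommand("travel-selection", response.options[0]))
```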

11.2   MMM-Script representation

Declare: humanA // CAVA’s rights holder //

Declare: UserA.1 // CAVA’s HCI //

  1. ULoc1.1 // Place where CAVA is located //
  2. MLoc1.1 // M-Location corresponding to ULocA.1.1 //
  3. scene1.1 // Scene at ULocA.1.1 //
  4. DataMdata1.1 // Data and Metadata of scene captured by Device1 //
  5. Scene1.1 // Scene of MLocA.1.1 //
  6. Object(A)1.1 // Speech Object #1 //
  7. HCI-AMSCommand1.1 // Travel request to User2 //
  8. Object(A)1.2 // Speech Object #2 //
  9. HCI-AMSCommand1.2 // Travel selection to User2 //

Declare: Device1 // Audiovisual sensor and actuator //

Declare: Route PlannerA.1 // CAV Process //

Declare: Path PlannerA.1 // CAV Process //

Declare: Motion PlannerA.1 // CAV Process //

Declare: Obstacle AvoiderA.1 // CAV Process //

Declare: Command IssuerA.1 // CAV Process //

Declare: UserA.2 // CAVA’s AMS //

  1. AMS-HCIResponse2.1 // Route selection //
  2. Scene2.1 // CAVA’s Environment //

Declare: UserB.2 // CAVB’s AMS //

  1. Scene2.1 // CAVB’s scene in ULocA.1.1 //

Declare: UserB.1 // CAVB’s HCI //

| Who | ActsOn | What | Secondary object |
|---|---|---|---|
| humanA | Registers | | With CAVA |
| UserA.1 | UM-Captures | scene | At Device1 |
| | UM-Sends | DataMdata | From Device1 To UserA.1 |
| | Identifies | SceneA.1.1 | At UserA.1 |
| | Authenticates | Object(A)A.1.1 | At UserA.1 |
| | Interprets | Object(A)A.1.1 | At UserA.1 |
| | MM-Sends | HCI-AMSCommandA.1.1 | To UserA.2 |
| UserA.2 | MM-Sends | ESS’s SceneA.2.1 | To Route Planner |
| | MM-Sends | AMS-HCIResponseA.2.1 | To UserA.1 |
| UserA.1 | Interprets | Object(A)A.1.2 | At UserA.1 |
| | MM-Sends | HCI-AMSCommandA.1.2 | To UserA.2 |
| UserA.2 | Authenticates | UserB.2 | At UserA.2 |
| | MM-Sends | ESS’s SceneA.2.2 | To UserA.2 |
| | MM-Sends | PathA.2.1 | To Motion Planner |
| Motion Planner | MM-Sends | TrajectoryA.2.1 | To Obstacle Avoider |
| Obstacle Avoider | MM-Sends | TrajectoryA.2.1 | To Command Issuer |
| Command Issuer | MM-Sends | AMS-MASCommandA.2.1 | To Motion Actuation Subsystem |
| MAS | MM-Sends | MAS-AMSResponseA.2.1 | To Command Issuer |
| UserA.1 | Authenticates | UserB.2 | At UserA.1 |
| | MM-Sends | SceneB.2.1 | To UserA.1 |

11.3   Actions, Items, and Data Types

Note: The MPAI-CAV-specific Items are included.

Table 14 – Drive a Connected Autonomous Vehicle Actions, Items, and Data Types

| Actions | Items | Data Types |
|---|---|---|
| Authenticate | AMS-HCIResponse | Spatial Attitude |
| Interpret | AMS-MASCommand | Coordinates |
| MM-Embed | Environment Representation | Orientation |
| MM-Send | HCI-AMSCommand | Position |
| MU-Render | MAS-AMSResponse | |
| Register | M-Location | |
| Request | Object(A) | |
| Track | Path | |
| UM-Render | Persona | |
| | Route | |
| | Scene | |
| | Trajectory | |