In a previous article, we described the Architecture of the MPAI Connected Autonomous Vehicle (CAV). The CAV’s Environment Sensing Subsystem (ESS) captures data about the environment with a variety of sensors and produces the Basic Environment Representation (BER), which is passed to the Autonomous Motion Subsystem (AMS). The AMS exchanges (subsets of) the BER with other CAVs in range and uses the received information to produce the Full Environment Representation (FER). The AMS can then issue commands to the Motion Actuation Subsystem (MAS) to move the CAV toward its destination.
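
As a rough illustration of this data flow, here is a minimal Python sketch of the ESS–AMS–MAS pipeline. All class and method names (e.g., EnvironmentSensingSubsystem, produce_ber) are assumptions made for illustration only, not identifiers defined by the MPAI-CAV standard.

```python
# Illustrative sketch of the CAV data flow described above.
# Class and method names are assumptions, not taken from the MPAI-CAV standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BasicEnvironmentRepresentation:
    """BER: what one CAV perceives with its own sensors."""
    objects: List[str] = field(default_factory=list)


@dataclass
class FullEnvironmentRepresentation:
    """FER: the CAV's BER fused with BER subsets received from CAVs in range."""
    objects: List[str] = field(default_factory=list)


class EnvironmentSensingSubsystem:
    def produce_ber(self) -> BasicEnvironmentRepresentation:
        # A real ESS would fuse camera, lidar, radar, GNSS, etc.
        return BasicEnvironmentRepresentation(objects=["road", "pedestrian"])


class AutonomousMotionSubsystem:
    def produce_fer(self, own_ber, remote_bers) -> FullEnvironmentRepresentation:
        # Merge the CAV's own BER with (subsets of) BERs from other CAVs in range.
        merged = list(own_ber.objects)
        for ber in remote_bers:
            merged.extend(ber.objects)
        return FullEnvironmentRepresentation(objects=merged)

    def issue_commands(self, fer) -> List[str]:
        # Derive motion commands for the Motion Actuation Subsystem from the FER.
        return ["accelerate", "keep_lane"]


class MotionActuationSubsystem:
    def execute(self, commands: List[str]) -> None:
        for command in commands:
            print(f"MAS executing: {command}")


if __name__ == "__main__":
    ess, ams, mas = EnvironmentSensingSubsystem(), AutonomousMotionSubsystem(), MotionActuationSubsystem()
    ber = ess.produce_ber()
    remote = [BasicEnvironmentRepresentation(objects=["cyclist"])]  # from another CAV in range
    fer = ams.produce_fer(ber, remote)
    mas.execute(ams.issue_commands(fer))
```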

In a previous article, we also described the Architecture of the MPAI Metaverse Model (MPAI-MMM), where a Metaverse Instance (M-Instance) is defined as a set of Processes providing some or all of the following functions (terms beginning with a lowercase letter are in the Universe, while terms beginning with a capital letter are in an M-Instance); a minimal code sketch of these functions follows the list:

  1. To sense data from U-Locations.
  2. To process the sensed data and produce Data.
  3. To produce one or more M-Environments populated by Objects that can be either digitised or virtual, the latter with or without autonomy.
  4. To process Objects from the M-Instance or potentially from other M-Instances to affect U-Locations (in the Universe) and/or M-Locations (in this or other M-Instances) using Objects in ways that are:
    • Consistent with the goals set for the M-Instance.
    • Effected within the capabilities of the M-Instance.
    • Complying with the Rules set for the M-Instance and applicable laws.
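
To make the four functions above concrete, here is a minimal sketch of an M-Instance in the same illustrative style; names such as MInstance, sense, or act are assumptions for illustration, not normative MPAI-MMM identifiers.

```python
# Minimal, illustrative sketch of the four M-Instance functions listed above.
# Names are assumptions, not identifiers from the MPAI-MMM standard.
from dataclasses import dataclass
from typing import List


@dataclass
class Object:
    name: str
    virtual: bool           # True if virtual, False if digitised from the Universe
    autonomous: bool = False


@dataclass
class MEnvironment:
    objects: List[Object]


class MInstance:
    def __init__(self, rules: List[str]):
        self.rules = rules                       # Actions allowed by the M-Instance's Rules
        self.environments: List[MEnvironment] = []

    def sense(self, u_location: str) -> dict:
        # 1. Sense data from a U-Location (placeholder payload).
        return {"u_location": u_location, "raw": "sensor data"}

    def process(self, sensed: dict) -> Object:
        # 2. Process the sensed data and produce Data (here, a digitised Object).
        return Object(name=f"digitised@{sensed['u_location']}", virtual=False)

    def populate(self, objects: List[Object]) -> MEnvironment:
        # 3. Produce an M-Environment populated by digitised and/or virtual Objects.
        env = MEnvironment(objects=objects)
        self.environments.append(env)
        return env

    def act(self, obj: Object, target: str, action: str) -> bool:
        # 4. Use Objects to affect U-Locations and/or M-Locations,
        #    subject to the Rules set for the M-Instance.
        if action not in self.rules:
            return False
        print(f"{obj.name} performs '{action}' on {target}")
        return True


if __name__ == "__main__":
    mi = MInstance(rules=["move"])
    obj = mi.process(mi.sense("street-corner"))
    mi.populate([obj, Object(name="virtual sign", virtual=True)])
    mi.act(obj, target="M-LocationX", action="move")
```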

At first glance, a CAV’s BER and FER bear a lot of similarities to the M-Instance of the MMM Architecture, as we can see from the comparison in Table 1.


Table 1 – Comparison between M-Instance and CAV

| M-Instance | CAV |
| --- | --- |
| An M-Instance is a set of Processes providing some or all of the following functions: | A CAV is a set of Processes (Subsystems and AI Modules) providing the following functions: |
| 1. To sense data from U-Locations. | 1. To sense data from the environment. |
| 2. To process the sensed data and produce Data. | 2. To process the sensed data and produce Data processable by the CAV, in particular BERs. |
| 3. To produce one or more M-Environments populated by Objects that can be either digitised or virtual, the latter with or without autonomy. | 3. To produce one M-Instance populated by Objects. |
| 4. To process Objects from the M-Instance or potentially from other M-Instances to affect U- and/or M-Environments using Objects in ways that are: | 4. To process (subsets of) BERs from the CAV’s M-Instance and potentially from other CAVs’ M-Instances in ways that are: |
| 4.1. Consistent with the goals set for the M-Instance. | 4.1. Consistent with the goals set to the CAVs to reach a destination. |
| 4.2. Effected within the capabilities of the M-Instance. | 4.2. Effected within the CAV’s capabilities (processing but also physical). |
| 4.3. Complying with the Rules set for the M-Instance and applicable laws. | 4.3. Complying with the Rules (law and traffic regulations). |

We need to look at these “similarities” in more detail. Before proceeding, let’s recall three assumptions at the basis of MPAI-MMM – Architecture, sketched in code after the list:

  1. User is a type of Process that represents and acts on behalf of a human. A human may have more than one User in an M-Instance.
  2. Persona is a rendered User.
  3. User may have or acquire the Rights to perform an Action, e.g., to authenticate another User.
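
Here is a minimal sketch of these three notions, under the assumption that Rights can be modelled as a plain set of allowed Actions; all names are illustrative, not normative.

```python
# Illustrative sketch of User, Persona, and Rights as described above.
# Field and method names are assumptions, not MPAI-MMM identifiers.
from dataclasses import dataclass, field
from typing import Set


@dataclass
class User:
    """A Process that represents and acts on behalf of a human."""
    user_id: str
    human: str                                     # the human this User acts on behalf of
    rights: Set[str] = field(default_factory=set)  # Actions the User may perform

    def may(self, action: str) -> bool:
        return action in self.rights

    def authenticate(self, peer: "User") -> bool:
        # Only possible if this User holds the corresponding Right.
        return self.may("authenticate")


@dataclass
class Persona:
    """A rendered User."""
    user: User
    appearance: str = "default avatar"


# humanA has two Users in the same M-Instance, one per M-Environment.
user_a1 = User("UserA.1", human="humanA", rights={"authenticate", "render"})
user_a2 = User("UserA.2", human="humanA", rights={"authenticate"})
persona_a1 = Persona(user=user_a1, appearance="humanA's avatar")
```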

To do that, let’s consider the simple case of two CAVs: CAVA and CAVB respectively owned by humanA and humanB, where humanA is friend to humanB. humanA has two Users: UserA.1 who represents humanA in the Human-CAV Interaction (HCI) Subsystem (or M-EnvironmentA.1) and UserA.2 who represents humanA in the Autonomous Motion Subsystem (or M-EnvironmentA.2). Similarly, for humanB.

humanA wants to see the landscape seen by humanB in their CAVB.

This is a simplified description of the workflow (a fuller workflow is given in the MPAI-CAV – Architecture standard); a code sketch follows the list:

  1. humanA requests UserA.1 (HCI) to take them to a destination.
  2. UserA.1 requests UserA.2 (AMS) to take CAVA to the destination.
  3. UserA.2:
    • Gets the BER from CAVA’s ESS (or M-EnvironmentA.3).
    • Computes the Route to Destination.
    • Issues a series of Commands to the MAS.
    • Authenticates its peer UserB.2.
    • Gets a subset of the BER from UserB.2.
    • Produces CAVA’s FER.
  4. UserA.1:
    • Authenticates its peer UserB.1.
    • Renders humanA’s Persona in CAVB (e.g., using advanced 3D rendering technologies).
    • Converses with humanB.
    • Watches CAVB’s M-Location corresponding to the environment currently traversed by CAVB.
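
Under the same illustrative naming as in the previous sketches, the workflow could be compressed as follows; every class, method, and data value here is a placeholder assumption, not part of the MPAI-CAV or MPAI-MMM standards.

```python
# Hypothetical, compressed sketch of the two-CAV workflow described above.
# All names and data values are placeholders, not standard identifiers.

class User:
    """Minimal stand-in for an MMM User acting inside one CAV Subsystem."""
    def __init__(self, name: str, subsystem: str):
        self.name, self.subsystem = name, subsystem

    def authenticate(self, peer: "User") -> bool:
        print(f"{self.name} authenticates its peer {peer.name}")
        return True


# humanA's Users in CAVA and humanB's Users in CAVB
user_a1, user_a2 = User("UserA.1", "HCI"), User("UserA.2", "AMS")
user_b1, user_b2 = User("UserB.1", "HCI"), User("UserB.2", "AMS")

# 1-2. humanA -> UserA.1 (HCI) -> UserA.2 (AMS): take CAVA to the destination.
print("UserA.1 asks UserA.2 to take CAVA to the destination")

# 3. UserA.2: get CAVA's BER, compute the Route, command the MAS,
#    authenticate UserB.2, get a subset of CAVB's BER, produce CAVA's FER.
ber_a = {"objects": ["road", "pedestrian"]}        # from CAVA's ESS
print("UserA.2 computes the Route and issues Commands to the MAS")
if user_a2.authenticate(user_b2):
    ber_b_subset = {"objects": ["landscape"]}      # received from UserB.2
    fer_a = {"objects": ber_a["objects"] + ber_b_subset["objects"]}
    print("UserA.2 produces CAVA's FER:", fer_a)

# 4. UserA.1: authenticate UserB.1, render humanA's Persona in CAVB,
#    converse with humanB, watch the M-Location CAVB is traversing.
if user_a1.authenticate(user_b1):
    print("UserA.1 renders humanA's Persona in CAVB and converses with humanB")
    print("UserA.1 watches the M-Location matching the environment CAVB traverses")
```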

This example is a first demonstration that an M-Instance produced by a CAV implementing the MPAI-CAV – Architecture standard is compatible with the MPAI-MMM – Architecture standard.