The MPAI Metaverse Model (MPAI-MMM) – Technologies (MMM-TEC) specification is based on an innovative approach. Just as the real world (Universe) contains animate and inanimate things, an MPAI Metaverse (M-Instance) contains Processes and Items. Processes can animate Items (things) in the metaverse but can also act as a bridge between metaverse and universe. For convenience, MMM-TEC defines four classes of Processes: Apps, Devices, Services, and Users.

The most interesting one is probably the User, defined as the “representative” of a human, where representation means that the human is responsible for what their Users do in the metaverse. The representation function can be very strict, with the human driving everything their Users do, or very loose, with the User acting as a fully autonomous agent (still under the human’s responsibility). As the User is a Process, it cannot be “perceived” except through what it does, but it can render itself in a perceptible form, called a Persona, which may visually appear as a humanoid. A human can have more than one User, and a User can be rendered with more than one Persona.

Humans can do interesting things in the world, but what interesting things can they do in the metaverse? MMM-TEC answers this question by offering a range of 28 basic Actions, called Process Actions. An important one is Register. By Registering, a human gets the Rights to import (via the UM-Send Action) and deploy (via the Execute Action) Users, and to render (e.g., by MM-Adding) Personae. UM-Send means sending things from the universe to the metaverse; MM-Add means placing an Avatar, which can then be animated (MM-Animate) with a stream or rendered (MU-Actuate) in the universe.
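The Register → UM-Send → Execute → MM-Add sequence can be sketched as a toy model. All class and method names below are our own illustrative assumptions; MMM-TEC defines the Process Actions, not a concrete programming API.

```python
# Toy sketch of the Register -> UM-Send -> Execute -> MM-Add sequence.
# Names are illustrative assumptions, not part of the MMM-TEC standard.

class MInstance:
    def __init__(self):
        self.registered = set()   # humans that performed Register
        self.users = {}           # User -> state ("imported" or "deployed")
        self.scene = []           # Personae MM-Added to the scene

    def register(self, human):
        # Register: the human obtains the Rights to import and deploy Users.
        self.registered.add(human)

    def um_send(self, human, user):
        # UM-Send: import a User from the universe into the metaverse.
        if human not in self.registered:
            raise PermissionError("the human must Register first")
        self.users[user] = "imported"

    def execute(self, user):
        # Execute: deploy the imported User so it can act in the M-Instance.
        assert self.users[user] == "imported"
        self.users[user] = "deployed"

    def mm_add(self, user, persona):
        # MM-Add: place a Persona in the scene on behalf of a deployed User.
        if self.users.get(user) == "deployed":
            self.scene.append(persona)

mi = MInstance()
mi.register("alice")
mi.um_send("alice", "alice-user-1")
mi.execute("alice-user-1")
mi.mm_add("alice-user-1", "alice-persona-A")
```

An attempt to UM-Send without a prior Register raises `PermissionError`, mirroring the fact that Registration is what confers the corresponding Rights.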

Universe and metaverse are connected, but they should be mutually “protected”. One example of what this means: data from the universe cannot simply be imported into the metaverse; it is first captured (UM-Capture), then identified (Identify), i.e., converted into an Item, and finally acted upon, e.g., used to animate an avatar. Also, a User is not entitled to do just anything anywhere in the metaverse, because its operation is governed by three basic notions: Rights, expressing the fact that a User (in general, a Process) may perform a certain Process Action; Rules, expressing the fact that a Process may, may not, or must perform a Process Action; and P-Capabilities, expressing the fact that the Process can perform certain Process Actions.

What if a Process wants to perform a Process Action, has the Rights to perform it, and its performance complies with the Rules, but it cannot, i.e., it does not know how to perform it? MMM-TEC makes use of a notion from the philosophy of language called the Speech Act, which is expressed by an individual and contains both information and action. For instance, User MU-Actuates Persona At M-Location At U-Location With Spatial Attitude means that the User renders, at a U-Location in the universe and with a certain Position and Orientation, the Persona that is placed at an M-Location in the metaverse. If the User can, i.e., it has the P-Capabilities to MU-Actuate the Persona, for instance because it is connected to the universe via an appropriate device, and may, i.e., it has the Rights to MU-Actuate, and the planned Process Action complies with the Rules, then the Process Action is performed. However, if the User does not have the necessary P-Capabilities or does not have the Rights to MU-Actuate the Persona, it can ask an Import-Export Service to do this on its behalf. Possibly, the Service will request that a Transaction be made in order to perform the requested Process Action.
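The Speech Act above, together with the can/may/complies gate and the fallback to an Import-Export Service, could be modelled roughly as follows. All field and function names are assumptions for illustration, not the standard’s API.

```python
# Hypothetical encoding of "User MU-Actuates Persona At M-Location At
# U-Location With Spatial Attitude" and the can/may/complies gate.
from dataclasses import dataclass

@dataclass
class SpeechAct:
    actor: str               # the User issuing the Process Action
    action: str              # e.g., "MU-Actuate"
    item: str                # e.g., the Persona to be actuated
    m_location: str          # where the Item is placed in the M-Instance
    u_location: str          # where it is rendered in the universe
    spatial_attitude: tuple  # Position and Orientation at the U-Location

def perform(act, p_capabilities, rights, rules, import_export_service=None):
    complies = rules.get(act.action) != "May not"   # the Rules allow it
    can = act.action in p_capabilities              # the Process knows how
    may = act.action in rights                      # the Process holds Rights
    if can and may and complies:
        return f"{act.actor} performs {act.action}"
    if complies and import_export_service:
        # Lacking P-Capabilities or Rights, delegate the Process Action,
        # possibly against a Transaction requested by the Service.
        return f"{import_export_service} performs {act.action} for {act.actor}"
    return "refused"
```

Note that the Rules gate is never delegated away: even the Import-Export Service path is taken only when the planned Process Action complies with the Rules.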

As a last point, we should describe how MMM-TEC represents Rights and Rules. MMM-TEC states that Rights are, in general, a collection of Process Actions that the Process may perform. Each of them is preceded by Internal, Acquired, or Granted to indicate whether the Rights were obtained at the time of Registration, were acquired (e.g., by a Transaction), or were granted (and possibly later withdrawn) by another Process. Similarly, Rules are expressed by Process Actions, each of which is preceded by May, May not, or Must.
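A minimal sketch of this representation, assuming Python enumerations for the two sets of qualifiers (the standard defines the qualifiers themselves, not this encoding):

```python
# Sketch of Rights and Rules as Process Actions annotated with the
# Internal/Acquired/Granted and May/May not/Must qualifiers described above.
from enum import Enum

class RightSource(Enum):
    INTERNAL = "Internal"   # obtained at the time of Registration
    ACQUIRED = "Acquired"   # acquired, e.g., by a Transaction
    GRANTED = "Granted"     # granted (and possibly withdrawn) by another Process

class RuleKind(Enum):
    MAY = "May"
    MAY_NOT = "May not"
    MUST = "Must"

# Illustrative holdings of one Process
rights = {"MU-Actuate": RightSource.GRANTED, "MM-Add": RightSource.INTERNAL}
rules = {"MU-Actuate": RuleKind.MAY, "UM-Send": RuleKind.MAY_NOT}

def allowed(action):
    # The Process may act if it holds Rights and no Rule forbids the action.
    return action in rights and rules.get(action, RuleKind.MAY) is not RuleKind.MAY_NOT
```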

We could add many more details to give a complete description of MMM-TEC’s potential. You can directly access the standard here, but now we want to address some of the innovations introduced by MMM-TEC V2.1.

The first is the set of new capabilities provided by the Property Change Process Action. We said that we can MM-Add a Persona and then MM-Animate it. But what if we are preparing a theatre performance and do not want “to be seen” while rehearsing? Property Change can set the Perceptibility Status of an Item, but it can also change:

  • The properties of a visual Item: its size, mass, material (i.e., whether the object is material or immaterial), gravity (whether or not it is subject to gravity), and texture map.
  • The audio characteristics of an object: Reflectivity, Reverberation, Diffusion, and Absorption.
  • The properties of a light source: Type (Point, Directional, Spotlight, Area), colour, and intensity.
  • The properties of an audio source: Diffuseness, Directional Patterns, Shape, and Size.
  • The Personal Status (i.e., emotion) of an avatar.
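A Property Change request touching a few of the properties listed above might look like the following JSON-like sketch. The key names and value ranges are illustrative assumptions, not the standard’s data format.

```python
# Hypothetical Property Change payload: hide a Persona while rehearsing and
# adjust a few visual, acoustic, and Personal Status properties.
property_change = {
    "action": "Property Change",
    "item": "rehearsal-persona",
    "changes": {
        "perceptibility": "imperceptible",  # rehearse without being seen
        "visual": {
            "size": 1.0,        # scale factor
            "mass": 70.0,       # kg
            "material": True,   # the object is material (not immaterial)
            "gravity": True,    # subject to gravity
        },
        "acoustic": {"reflectivity": 0.2, "reverberation": 0.5},
        "personal_status": {"emotion": "focused"},
    },
}
```

When the rehearsal ends, a second Property Change setting `"perceptibility"` back to a perceptible value would make the Persona visible again.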

Another important set of functionalities is provided by significant extensions of how a Process in the metaverse can affect the universe. MMM-TEC V2.1 allows a User to MU-Actuate at a U-Location an Item MM-Added at an M-Location. How can this Process Action be performed? We assume that the M-Instance is connected to a special Device that can perform the following in the universe:

  • Pick an existing object.
  • Drive a 3D printer that produces the analogue version of the Item.
  • Render a 2D or a 3D media object.

MMM-TEC V2.1 calls R-Item any physical object in the universe, including the object produced by a 3D printer and the 2D or 3D media object rendered. It also defines the following additional Process Actions:

  • MU-Add an R-Item: to place an R-Item (a physical object) somewhere in the universe with a Spatial Attitude.
  • MU-Animate an R-Item: to animate, e.g., a robot, with a stream.
  • MU-Move an R-Item from a U-Location to another U-Location along a Trajectory.

MMM-TEC is rigorous in defining how Process Actions can be performed in an M-Instance, but what about the universe? Do we want Processes to perform actions in the universe in an uncontrolled way?

The answer is clear: the M-Instance does not control the Universe through some supernatural force but through Devices whose operation is conditional on the Rights and P-Capabilities held by the Device to perform the desired Process Actions in the universe. The Process Actions beginning with “MU-” include the Rights of a Device to act on the universe.
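The conditional actuation described above can be sketched as a toy Device that refuses any MU- Process Action for which it lacks the corresponding Rights or P-Capabilities. Class and method names are illustrative assumptions.

```python
# Toy Device: the MU- Process Actions on R-Items succeed only if the Device
# holds both the Rights and the P-Capabilities to perform them.
class Device:
    def __init__(self, rights, p_capabilities):
        self.rights = set(rights)
        self.p_capabilities = set(p_capabilities)
        self.log = []

    def _perform(self, action, detail):
        if action in self.rights and action in self.p_capabilities:
            self.log.append(f"{action}: {detail}")
            return True
        return False  # uncontrolled actuation of the universe is refused

    def mu_add(self, r_item, u_location, spatial_attitude):
        # Place an R-Item at a U-Location with a Spatial Attitude.
        return self._perform("MU-Add", f"{r_item} at {u_location}")

    def mu_animate(self, r_item, stream):
        # Animate an R-Item, e.g., a robot, with a stream.
        return self._perform("MU-Animate", f"{r_item} with {stream}")

    def mu_move(self, r_item, src, dst, trajectory):
        # Move an R-Item from one U-Location to another along a Trajectory.
        return self._perform("MU-Move", f"{r_item} {src} -> {dst}")
```

A Device holding, say, only the MU-Add Rights will place objects but refuse to animate or move them, which is exactly the containment the M-Instance relies on.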

V2.1 adds several new use cases to the long list of V2.0. One of these is called “Emergency in Industrial Metaverse”:

  1. An M-Location includes the Digital Twin of a real factory (R-Factory), where regular operation is separated from the emergency operation described by the use case.
  2. An “emergency” User in the Digital Twin (V-Factory):
    1. Has the Rights to actuate and animate an “emergency” robot in the R-Factory.
    2. Can be rendered as a Persona having the appearance of the corresponding robot.
  3. In case of an emergency, the User:
    1. Activates an alarm in the R-Factory.
    2. Actuates its “emergency” robot (Analogue Twin) in the R-Factory.
    3. Animates the robot to solve the problem.
    4. Renders its Persona so that humans can see what is happening in the R-Factory.
  4. When the emergency is resolved, the robot is moved to its repository.
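The emergency steps above can be read as an ordered list of Process Actions. The mapping of the steps to MU-Add, MU-Animate, MU-Actuate, and MU-Move is our plausible reading of the use case, and all identifiers are invented.

```python
# The emergency use case sketched as a sequence of (actor, action, target)
# tuples. Purely illustrative; the mapping to MU- actions is an assumption.
def emergency_sequence(user="emergency-user", robot="emergency-robot"):
    return [
        (user, "Activate alarm", "R-Factory"),
        (user, "MU-Add", robot),                      # actuate the Analogue Twin
        (user, "MU-Animate", robot),                  # animate it to solve the problem
        (user, "MU-Actuate", f"{user}-persona"),      # render the Persona for humans
        (user, "MU-Move", f"{robot} -> repository"),  # emergency resolved
    ]
```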

You are invited to register to attend the online presentation on 12 September at 15:00 UTC and to provide your comments to the MPAI Secretariat by 23:59 UTC on 2025/09/28.