1. Function
2. Reference Model
3. Input/Output Data
4. SubAIMs
5. JSON Metadata
6. Profiles
7. Reference Software
8. Conformance Testing
9. Performance Assessment

1. Function

The A‑User Control (AUC) AIM:

  1. Serves as the central coordinator for Action execution, AIM orchestration, and system traceability.
  2. Governs the lifecycle of the A-User.
  3. Orchestrates the A-User interaction with
    1. The human User.
    2. The M-Instance.
    3. The M-Instance’s Processes and Items.
  4. Sends Directive messages to AIMs to implement Instructions within the Rights the A‑User holds and the Rules applicable to the M‑Location.
  5. Tracks execution of Directives using Status messages received from A-User AIMs.

The resulting control flow ensures that the A-User operates predictably, transparently, and in alignment with human Commands and any A-User Instructions, thus supporting lifecycle integrity and enabling trust through auditable orchestration.
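The Directive/Status control flow described above can be sketched as a simple tracker: A-User Control issues Directives to AIMs, updates each Directive's state from the Status messages it receives back, and can report which Directives are still outstanding. This is a minimal illustrative sketch; the class, field, and state names are assumptions, not normative.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class DirectiveState(Enum):
    """Assumed lifecycle states of a Directive as tracked by A-User Control."""
    ISSUED = auto()
    COMPLETED = auto()
    FAILED = auto()


@dataclass
class Directive:
    directive_id: str
    target_aim: str          # e.g. "AudioSpatialReasoning" (hypothetical name)
    payload: dict
    state: DirectiveState = DirectiveState.ISSUED


@dataclass
class StatusMessage:
    directive_id: str
    source_aim: str
    state: DirectiveState
    detail: dict = field(default_factory=dict)


class AUserControl:
    """Minimal tracker: issue Directives, update their state from Status messages."""

    def __init__(self):
        self._directives = {}

    def issue(self, directive):
        self._directives[directive.directive_id] = directive

    def on_status(self, status):
        # Match the incoming Status to the Directive it reports on.
        d = self._directives.get(status.directive_id)
        if d is not None:
            d.state = status.state

    def pending(self):
        # Directives not yet completed or failed support auditable orchestration.
        return [d.directive_id for d in self._directives.values()
                if d.state not in (DirectiveState.COMPLETED, DirectiveState.FAILED)]
```

Keeping every issued Directive in the tracker, rather than discarding completed ones, is what makes the orchestration auditable after the fact.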

2. Reference Model

A-User Control:

  1. Triggers the Context Capture AIM to perceive the current context: a User in an M-Location.
  2. Understands the scene by sending Directives to Audio Spatial Reasoning, Visual Spatial Reasoning, and Domain Access.
  3. Prompts the Prompt Creation and Basic Knowledge AIMs.
  4. Controls the queries made by Basic Knowledge to Prompt Creation, Domain Access, and User State Refinement.
  5. Triggers Basic Knowledge to request the A-User Entity State (Personality Alignment).
  6. Issues Formation Directives to the A‑User Formation AIM to produce the speaking Avatar (Persona), which will subsequently be instantiated by the A-User Control in the M-Instance.

Figure 1 gives the input/output data of A-User Control (PGM-AUC).

Figure 1 – Reference Model of A-User Control (PGM-AUC) AIM
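The six steps above imply an ordering of the Directives that A-User Control issues in one cycle. The following sketch makes that ordering explicit; the AIM identifiers are hypothetical shorthand, and the Directive names are taken from Table 1 below.

```python
# Assumed ordering of one orchestration cycle, following steps 1-6 above.
ORCHESTRATION_SEQUENCE = [
    ("ContextCapture",         "Context Capture Directive"),        # step 1: perceive
    ("AudioSpatialReasoning",  "Audio Action Directive"),           # step 2: understand scene
    ("VisualSpatialReasoning", "Visual Action Directive"),          # step 2
    ("DomainAccess",           "DA Action Directive"),              # step 2
    ("PromptCreation",         "Prompt Creation Directive"),        # step 3: prompt
    ("BasicKnowledge",         "BK Query Directive"),               # steps 3-5: knowledge
    ("PersonalityAlignment",   "Personality Alignment Directive"),  # step 5
    ("AUserFormation",         "Formation Directive"),              # step 6: form the Avatar
]


def run_cycle(send):
    """Issue each Directive in order via a caller-supplied send(aim, directive)."""
    for aim, directive in ORCHESTRATION_SEQUENCE:
        send(aim, directive)
```

In practice A-User Control would issue steps conditionally on the Status messages returned; the fixed list is only meant to show which AIM each Directive targets.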

The A-User Control AIM operates by implementing one of eight Instructions:

  1. Perception and Capture Control: Configure the perceptual system for sensing the human in their Universe Location, the User in their M-Instance Location, and the M-Location.
  2. Goal Acquisition: Capture and validate multimodal expressions of the human and/or the User to extract the Raw Goal Expression.
  3. Prompting and Knowledge Queries: Prompt to clarify and ground all steps after Raw Goal Expression has been determined.
  4. Goal Interpretation and Intent: Normalise, ground, and interpret the human and/or User goal using Domain and User information.
  5. Policy, Rights and Feasibility: Check the interpreted Goal against M-Instance Rules, User Entity State constraints and domain feasibility.
  6. Plan Construction and Execution: Construct an action plan and orchestrate its execution in the metaverse.
  7. Conflict Management and Escalation: Detect inconsistencies or conflicts and resolve or escalate them as necessary.
  8. Avatar Formation and Rendering: Produce the final natural‑language speech and avatar‑based multimodal output.
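The eight Instructions can be represented as a closed enumeration with a dispatch point, which is how an implementation might route each incoming Instruction to its handler. This is a sketch under assumed naming; the identifiers are derived from the list above, not from the normative schema.

```python
from enum import Enum


class Instruction(Enum):
    """The eight Instructions of the A-User Control AIM (identifiers assumed)."""
    PERCEPTION_AND_CAPTURE_CONTROL = 1
    GOAL_ACQUISITION = 2
    PROMPTING_AND_KNOWLEDGE_QUERIES = 3
    GOAL_INTERPRETATION_AND_INTENT = 4
    POLICY_RIGHTS_AND_FEASIBILITY = 5
    PLAN_CONSTRUCTION_AND_EXECUTION = 6
    CONFLICT_MANAGEMENT_AND_ESCALATION = 7
    AVATAR_FORMATION_AND_RENDERING = 8


def handle(instruction):
    # Placeholder dispatch: a real implementation would route each
    # Instruction to the AIMs listed in the Reference Model.
    return f"executing {instruction.name.replace('_', ' ').title()}"
```

A closed enum makes the "one of eight" constraint explicit: any value outside the list is rejected at construction time rather than silently ignored.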

 

3. Input/Output Data

The A‑User Control (PGM-AUC) AIM exchanges specific data types with the other A-User AIMs. For example, an Audio Action Directive is sent to Audio Spatial Reasoning and a Formation Status is received from A‑User Formation.

Table 1 gives Input and Output Data of A-User Control (PGM-AUC) AIM. See below for Mapping to Unified Schema.

Table 1 – Input and Output Data of A-User Control (PGM-AUC) AIM

Input – Description
Human Command – From a human in the real world.
Process Action Response – From a Process that has received a Process Action Request.
Context Capture Status – Scene-level context and User presence.
Audio Action Status – Audio spatial feasibility, occlusion, reachability flags, etc.
Visual Action Status – Visual spatial constraints, scene anchoring, etc.
Prompt Plan Status – Prompt readiness, alignment status, semantic goal framing, etc.
BK Response Trace – Enriched response metadata and traceability, etc.
DA Action Status – Execution feasibility and constraint validation, etc.
User State Status – Current engagement, affective tone, override flags.
Personality Alignment Status – Expressive alignment, persona framing, modulation constraints, etc.
Formation Status – Avatar formation success, avatar state, expressive output status.
Output – Description
Action – Performed by A-User on the M-Instance.
Process Action Request – Request made by A-User to an M-Instance Process.
Context Capture Directive – Instructions for perceptual acquisition.
Audio Action Directive – Audio-related actions and sequences.
Visual Action Directive – Visual-related actions and sequences.
Prompt Creation Directive – Prompt generation or refinement.
BK Query Directive – Request for knowledge retrieval or response shaping.
DA Action Directive – Request for domain execution.
User State Directive – Request to modulate User State based on interaction feedback.
Personality Alignment Directive – Request for expressive modulation or Personality reconfiguration.
Formation Directive – Request for avatar formation, spatial output, expressive delivery, etc.
Human Command Status – A-User Control response to Human Command.
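To illustrate how one Directive/Status pair from Table 1 might travel between AIMs, the sketch below builds a Formation Directive and the Formation Status answering it as JSON-serialisable messages. All field names here are assumptions for illustration only; the normative structure is defined by the AUserControl.json schema referenced in Section 5.

```python
import json

# Hypothetical wire format for one output/input pair of the PGM-AUC AIM.
formation_directive = {
    "messageType": "Formation Directive",
    "source": "PGM-AUC",
    "target": "A-User Formation",
    "payload": {"request": "avatar formation", "modalities": ["speech", "gesture"]},
}

formation_status = {
    "messageType": "Formation Status",
    "source": "A-User Formation",
    "target": "PGM-AUC",
    "payload": {"formationSuccess": True, "avatarState": "speaking"},
}

# Serialise the Directive as it would be sent to the A-User Formation AIM.
wire = json.dumps(formation_directive)
```

Note the symmetry: each Directive sent to an AIM has a corresponding Status received back, which is what allows A-User Control to track execution as described in Section 1.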

4. SubAIMs

No SubAIMs.

5. JSON Metadata

https://schemas.mpai.community/PGM1/V1.0/data/AUserControl.json

6. Profiles

No Profiles.

7. Reference Software

8. Conformance Testing

9. Performance Assessment