

MPAI, Moving Picture, Audio, and Data Coding by Artificial Intelligence – the international, unaffiliated, non-profit organisation developing standards for AI-based data coding – is publishing a Call for Technologies related to the architecture of a Connected Autonomous Vehicle and the data exchanged by its components.

MPAI intends to develop a Technical Specification for the architecture of a Connected Autonomous Vehicle (CAV), to be called Technical Specification – Connected Autonomous Vehicle – Architecture. MPAI defines a CAV as a system that:

  1. Moves in an environment like the one depicted in Figure 1.

Figure 1 – An environment of CAV operation

  2. Has the capability to autonomously reach a target destination by:
    • Understanding human utterances, e.g., the human’s request to be taken to a certain location.
    • Planning a Route.
    • Sensing the external Environment and building Representations of it.
    • Exchanging such Representations and other Data with other CAVs and CAV-aware entities, such as Roadside Units and Traffic Lights.
    • Making decisions about how to execute the Route.
    • Acting on the CAV motion actuation to implement the decisions.
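One of the capabilities above is the exchange of Environment Representations with other CAVs and CAV-aware entities. A minimal sketch of such an exchange follows, assuming a JSON payload and illustrative field names (`sender`, `objects`); the actual MPAI-CAV data formats are not yet specified:

```python
import json

def encode_ber(sender_id: str, objects: list) -> str:
    # Serialise a hypothetical Basic Environment Representation payload
    # for transmission to another CAV or a Roadside Unit.
    return json.dumps({"sender": sender_id, "objects": objects})

def decode_ber(payload: str) -> dict:
    # Reconstruct the payload on the receiving side.
    return json.loads(payload)

msg = encode_ber("CAV-42", [{"type": "pedestrian", "distance_m": 12.0}])
received = decode_ber(msg)
```

The round trip above only illustrates that the exchanged Representation must have an agreed serialisation; the Call for Technologies solicits proposals for the real formats.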

The CAV architecture is composed of four Subsystems depicted in Figure 2.

  1. Human-CAV Interaction (HCI).
  2. Environment Sensing Subsystem (ESS).
  3. Autonomous Motion Subsystem (AMS).
  4. Motion Actuation Subsystem (MAS).

Figure 2 – The CAV Subsystems

MPAI does not intend to include the mechanical parts of a CAV in the planned Technical Specification: Connected Autonomous Vehicle – Architecture. MPAI only intends to refer to the interfaces of the Motion Actuation Subsystem with such mechanical parts.

The functions of the Subsystems are summarily described in Table 1 and specified in Chapters 4 to 7.

Table 1 – The Functions of the MPAI-CAV Subsystems

Subsystem name Function
Human-CAV Interaction (HCI)
  1. Recognises the humans having rights to the CAV.
  2. Receives and passes to the AMS instructions about the target destination.
  3. Interacts with humans by assuming the shape of an avatar.
  4. Activates other Subsystems as required by humans.
  5. Provides the Full Environment Representation received from the AMS for passengers to use.
Environment Sensing Subsystem (ESS)
  1. Acquires and processes information from the Environment.
  2. Produces the Basic Environment Representation.
  3. Sends the Basic Environment Representation to the AMS.
Autonomous Motion Subsystem (AMS)
  1. Computes the Route to destination based on information received from the HCI.
  2. Receives the Basic Environment Representations from the ESS and from other CAVs in range.
  3. Creates the Full Environment Representation.
  4. Issues commands to the MAS to drive the CAV to the intended destination.
Motion Actuation Subsystem (MAS)
  1. Sends its Spatial Attitude and other Environment information to the ESS.
  2. Receives/actuates motion commands in the Environment.
  3. Sends feedback to the AMS.
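The subsystem functions of Table 1 can be sketched as a minimal message flow. All class and data names below (`BasicEnvironmentRepresentation`, `FullEnvironmentRepresentation`, the obstacle strings) are illustrative placeholders, not normative MPAI-CAV data formats:

```python
from dataclasses import dataclass, field

@dataclass
class BasicEnvironmentRepresentation:
    # Placeholder for the ESS output; the real format is to be specified.
    obstacles: list = field(default_factory=list)

@dataclass
class FullEnvironmentRepresentation:
    # Placeholder for the AMS fusion result.
    merged_obstacles: list = field(default_factory=list)

class EnvironmentSensingSubsystem:
    def sense(self) -> BasicEnvironmentRepresentation:
        # Acquires and processes information from the Environment.
        return BasicEnvironmentRepresentation(obstacles=["pedestrian@12m"])

class AutonomousMotionSubsystem:
    def fuse(self, own: BasicEnvironmentRepresentation,
             remote: list) -> FullEnvironmentRepresentation:
        # Merges the local BER with the BERs received from CAVs in range.
        merged = list(own.obstacles)
        for ber in remote:
            merged.extend(ber.obstacles)
        return FullEnvironmentRepresentation(merged_obstacles=merged)

class MotionActuationSubsystem:
    def actuate(self, command: str) -> str:
        # Executes a motion command and returns feedback to the AMS.
        return f"executed: {command}"

ess = EnvironmentSensingSubsystem()
ams = AutonomousMotionSubsystem()
mas = MotionActuationSubsystem()
fer = ams.fuse(ess.sense(),
               [BasicEnvironmentRepresentation(obstacles=["cyclist@30m"])])
feedback = mas.actuate("slow_down")
```

The point of the sketch is the direction of the flows: ESS produces the BER, AMS fuses local and remote BERs into the FER and commands the MAS, and the MAS reports feedback upstream.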

The following high-level workflow illustrates a CAV operation example and the role of CAV Subsystems:

  1. A human with appropriate credentials requests the CAV, via Human-CAV Interaction, to take the human to a given Pose.
  2. Human-CAV Interaction authenticates the human, interprets the request, communicates with the HCIs of other CAVs on matters that directly impact the human passengers, and passes commands to the Autonomous Motion Subsystem. The human may subsequently amend or correct their instructions.
  3. Autonomous Motion Subsystem:
    1. Requests Environment Sensing Subsystem to provide the current Pose.
    2. Computes the Route and may offer options to authenticated humans.
  4. Environment Sensing Subsystem computes and sends the Basic Environment Representation to the Autonomous Motion Subsystem.
  5. Autonomous Motion Subsystem:
    1. Receives the Basic Environment Representation from the Environment Sensing Subsystem.
    2. Exchanges the Basic Environment Representation with other CAVs and computes the Full Environment Representation.
    3. Makes decisions on how best to move the CAV toward the destination, e.g., by avoiding a car suddenly appearing on the horizon.
    4. Issues appropriate commands to the Motion Actuation Subsystem.
  6. While the CAV moves, the humans in the cabin may:
    1. Interact and hold conversations with other humans on board and with the Human-CAV Interaction Subsystem.
    2. Issue commands.
    3. Request the Full Environment Representation to render the Environment.
    4. Interact with (humans in) other CAVs.
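The workflow above can be condensed into a linear sketch. All names below (`cav_trip`, the credential check, the waypoint strings) are illustrative, not part of the MPAI-CAV specification:

```python
def cav_trip(requested_pose: str, credentials: str) -> list:
    """Sketch of one CAV trip, logging which Subsystem acts at each step."""
    log = []
    # Steps 1-2: HCI authenticates the human and interprets the request.
    if credentials != "valid":
        raise PermissionError("human not authenticated by HCI")
    log.append(f"HCI: destination request -> {requested_pose}")
    # Step 3: AMS obtains the current Pose (via ESS) and computes the Route.
    current_pose = "depot"  # hypothetical value supplied by the ESS
    route = [current_pose, "junction", requested_pose]
    log.append(f"AMS: route {route}")
    # Steps 4-5: ESS sends the BER; AMS fuses local and remote BERs into
    # the FER and decides how to move.
    log.append("AMS: FER built from local and remote BERs")
    # Steps 5.4 and 6: AMS issues commands to the MAS while humans may keep
    # interacting via the HCI.
    for waypoint in route[1:]:
        log.append(f"MAS: move to {waypoint}")
    return log

steps = cav_trip("station", "valid")
```

Each log entry names the Subsystem responsible for the step, mirroring the division of labour in the numbered workflow.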

MPAI assumes that each of the four Subsystems of a CAV is an implementation of MPAI Technical Specification: AI Framework (MPAI-AIF) V2 [2]. An AI Framework (AIF) V2 executes an AI Workflow composed of AI Modules in a secure environment. Annex 3 – Chapter 1 provides a concise description of the AI Framework.

Each of Chapters 4 to 7 addresses one Subsystem (corresponding to an AI Workflow as described in Annex 3 – Chapter 1), providing the following:

  1. The Function of the Subsystem.
  2. The input/output data of the Subsystem.
  3. The topology of the Components (AI Modules) of the Subsystem.
  4. For each AI Module of the Subsystem:
    • The Function.
    • The input/output data.
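The per-Subsystem description items above map naturally onto a small metadata record. The field names and the example values below are hypothetical, chosen only to illustrate the structure each chapter follows:

```python
from dataclasses import dataclass

@dataclass
class AIModule:
    # One Component of a Subsystem: its Function and its I/O data.
    name: str
    function: str
    inputs: list
    outputs: list

@dataclass
class Subsystem:
    # One Subsystem chapter: Function, I/O data, and Component topology.
    name: str
    function: str
    inputs: list
    outputs: list
    topology: list  # ordered AI Modules forming the AI Workflow

ess = Subsystem(
    name="Environment Sensing Subsystem",
    function="Acquires Environment information and produces the BER",
    inputs=["sensor data", "MAS Spatial Attitude"],
    outputs=["Basic Environment Representation"],
    topology=[AIModule("Lidar Processing",
                       "Detects objects from lidar scans",
                       ["lidar scan"], ["object list"])],
)
```

Reading each of Chapters 4 to 7 amounts to filling in such a record for one Subsystem.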

A fifth Chapter describes the elements of the Communication Device that enables a CAV to communicate with other CAVs.

Note that this document:

  1. Does not make any assumption regarding the Location carrying out the processing required by Subsystems or AI Modules.
  2. Assumes that information processing, collection, and storage are performed according to the laws of the Location.

This Technical Report has been developed by the Connected Autonomous Vehicles group of the Requirements Standing Committee. MPAI may publish further versions of this Technical Report and intends to publish a Technical Specification in which the AIM and AIW I/O Data Formats will be fully specified.

