1 Functions
The Connected Autonomous Operation (CAV-CAO) AIM:
- Converses with humans by understanding their utterances, e.g., “take me home” or “show me the environment you see”.
- Senses the environment where it is located or traverses.
- Plans a Route enabling the CAV to reach a requested destination.
- Builds digital representations of the environment.
- Exchanges elements of the environment representation with other CAVs and CAV-aware entities.
- Makes decisions about how to execute the Route.
- Actuates the motion of a CAV to implement the decisions.
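The functions listed above can be sketched as a single interface. The class and method names below are illustrative assumptions for exposition only, not identifiers defined by this Technical Specification.

```python
# Illustrative sketch of the CAV-CAO functions listed above.
# All names and signatures are assumptions, not normative identifiers.

class ConnectedAutonomousOperation:
    def converse(self, utterance: str) -> str:
        """Respond to a human utterance, e.g. 'take me home'."""
        return f"Acknowledged: {utterance}"

    def sense_environment(self) -> dict:
        """Collect sensor readings (Audio, Visual, LiDAR, RADAR, ...)."""
        return {"audio": [], "visual": [], "lidar": [], "radar": []}

    def plan_route(self, destination: str) -> list:
        """Plan a Route enabling the CAV to reach the destination."""
        return ["current_position", destination]

    def build_representation(self, readings: dict) -> dict:
        """Build a digital representation of the environment."""
        return {"objects": [], "sources": sorted(readings)}

    def exchange(self, representation: dict, peers: list) -> int:
        """Share environment-representation elements with other CAVs;
        returns the number of peers the elements were sent to."""
        return len(peers)

    def decide_and_actuate(self, route: list) -> str:
        """Decide how to execute the Route and actuate the motion."""
        return f"moving along {len(route)} waypoints"
```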
The Connected Autonomous Operation (CAV-CAO) AIM receives and produces the following data:
| Receives | Text Object | Text data from User |
| | Audio Object | Environment Audio |
| | Visual Object | Environment Visual |
| | LiDAR Object | LiDAR Response |
| | RADAR Object | RADAR Response |
| | Offline Map Object | From Autonomous Motion Subsystem |
| | Ultrasound Object | Ultrasound Response |
| | GNSS Object | GNSS data |
| | Point of View | User-selected Point of View |
| | Spatial Data | Spatial Data from sensors |
| | Weather Data | Weather Data from sensors |
| | Ego-Remote HCI Message | Message received from Remote HCI |
| | Ego-Remote AMS Message | Message received from Remote AMS |
| | Brake Response | Brake Response sent to MAS |
| | Motor Response | Motor Response sent to MAS |
| | Wheel Response | Wheel Response sent to MAS |
| Produces | Text Object | Text data to User |
| | Speech Object | Speech to User |
| | Audio Object | Audio to User/Environment |
| | Visual Object | Visual to User |
| | LiDAR Object | LiDAR from Ego CAV |
| | RADAR Object | RADAR from Ego CAV |
| | Ultrasound Object | Ultrasound from Ego CAV |
| | AMS Data | AMS Data to external application |
| | Ego-Remote HCI Message | Message sent to Remote HCI |
| | Ego-Remote AMS Message | Message sent to Remote AMS |
| | Brake Command | MAS Command sent to Brakes |
| | Motor Command | MAS Command sent to Motors |
| | Wheel Command | MAS Command sent to Wheels |
2 Reference Model
Figure 1 depicts the Reference Model of the Connected Autonomous Operation (CAV-CAO) AIM.

Figure 1 – Reference Model of the Connected Autonomous Operation (CAV-CAO) AIM
3 I/O Data
Table 1 specifies the Input and Output Data of the Connected Autonomous Operation (CAV-CAO) AIM.
Table 1 – I/O Data of the Connected Autonomous Operation (CAV-CAO) AIM
| Input Data | Description |
| Text Object | Text data from User. |
| Audio Object | Environment Audio. |
| Visual Object | Environment Visual. |
| LiDAR Object | LiDAR Data generated by the CAV and received from the Environment. |
| RADAR Object | RADAR Data generated by the CAV and received from the Environment. |
| Offline Map Object | Offline Map Data from the Autonomous Motion Subsystem. |
| Ultrasound Object | Ultrasound Data generated by the CAV and received from the Environment. |
| GNSS Object | GNSS data. |
| Point of View | User-selected Point of View. |
| Spatial Data | Spatial Data from sensors. |
| Weather Data | Weather Data from sensors. |
| Ego-Remote HCI Message | Message from Remote HCI. |
| Ego-Remote AMS Message | Message from Remote AMS. |
| Brake Response | Brake Response sent to MAS. |
| Motor Response | Motor Response sent to MAS. |
| Wheel Response | Wheel Response sent to MAS. |
| Output Data | Description |
| Text Object | Text data to User. |
| Speech Object | Speech to User. |
| Audio Object | Audio to User/Environment. |
| Visual Object | Visual to User. |
| LiDAR Object | LiDAR Data generated by the CAV. |
| RADAR Object | RADAR Data generated by the CAV. |
| Ultrasound Object | Ultrasound Data generated by the CAV. |
| AMS Data | AMS Data to external application. |
| Ego-Remote HCI Message | Message to Remote HCI. |
| Ego-Remote AMS Message | Message to Remote AMS. |
| Brake Command | MAS Command sent to Brakes. |
| Motor Command | MAS Command sent to Motors. |
| Wheel Command | MAS Command sent to Wheels. |
4 SubAIMs
4.1 Reference Model
The Connected Autonomous Operation (CAV-CAO) AIM is a Composite AIM depicted in Figure 2.

Figure 2 – Reference Model of Connected Autonomous Operation (CAV-CAO) Composite AIM
4.2 Operation
The operation of a CAV unfolds according to the following workflow, which gives a representative description of the functions performed.
Table 2 – High-level CAV operation
| Entity | Action |
| Human | Requests the HCI to take them to a destination. |
| HCI | 1. Authenticates the human(s). 2. Interprets their request. 3. Issues commands to the AMS. |
| AMS | Requests the ESS to provide the current Point of View. |
| ESS | Computes and sends the Basic Environment Descriptors (BED) to the AMS. |
| AMS | Computes and sends Route(s) to the HCI. |
| HCI | Sends travel options to the Human. |
| Human | 1. May integrate/correct their instructions. 2. Issues commands to the HCI. |
| HCI | Communicates the Route selection to the AMS. |
| AMS | 1. Sends the BED to the AMSs of other CAVs. 2. Computes the Full Environment Descriptors (FED). 3. Decides the best motion to reach the destination. 4. Issues appropriate commands to the MAS. |
| MAS | 1. Executes the Command. 2. Sends a response to the AMS. |
| Human | 1. Interacts and holds conversations with other humans on board via the HCI. 2. Issues commands to the HCI. 3. Requests the HCI to render the FED. 4. Navigates the FED. 5. Interacts with humans in other CAVs. |
| HCI | Communicates with the HCIs of Remote CAVs on matters related to human passengers. |
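The message sequence of the workflow above can be traced as follows. The entity and message names follow the table; the function shape and message strings are illustrative assumptions.

```python
# Illustrative trace of the high-level CAV operation workflow.
# Each tuple is (sender, receiver, message); the message strings
# are assumptions for exposition, not normative data formats.

def cav_trip(destination: str) -> list:
    trace = []
    trace.append(("Human", "HCI", f"take me to {destination}"))    # request
    trace.append(("HCI", "AMS", "interpreted request"))            # authenticate + interpret
    trace.append(("AMS", "ESS", "current Point of View?"))         # query environment
    trace.append(("ESS", "AMS", "Basic Environment Descriptors"))  # BED
    trace.append(("AMS", "HCI", "candidate Route(s)"))
    trace.append(("HCI", "Human", "travel options"))
    trace.append(("Human", "HCI", "Route selection"))
    trace.append(("HCI", "AMS", "selected Route"))
    trace.append(("AMS", "MAS", "motion command"))                 # FED -> decision -> command
    trace.append(("MAS", "AMS", "response"))                       # command executed
    return trace
```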
4.3 Functions of AI Modules
Table 3 describes the high-level functions of all CAV AI Workflows.
Table 3 – Functions of CAV AI Workflows
| AIW | Function |
| Human-CAV Interaction | Recognises human owner/renter, responds to humans’ commands and queries, converses with humans, manifests itself as a perceptible entity, exchanges information with the Autonomous Motion Subsystem in response to humans’ requests, and communicates with other CAVs or CAV-Aware entities. |
| Environment Sensing Subsystem | Senses the environment’s Electromagnetic and Acoustic information, receives the Ego CAV’s Spatial Attitude and Weather Data from the MAS, requests location-specific Data from Offline Map(s), produces the best estimate of the Ego CAV Spatial Attitude and sensor-specific Scene Descriptors and Alerts for the AMS, produces the Basic Environment Descriptors (BED) and passes them to the HCI and AMS, and requests/receives elements of the Full Environment Descriptors (FED) to/from Remote AMSs. |
| Autonomous Motion Subsystem | Converses with the HCI (and, through it, with humans) to provide a Route, requests and provides FED subsets to/from selected Remote CAVs, produces the FED, generates Paths and Trajectories, checks Trajectory implementation considering Alerts from the ESS’s technology-specific Scene Descriptors, issues commands to and processes responses from the MAS, and stores the Data received/produced in the AMS Memory. |
| Motion Actuation Subsystem | Transmits Weather Data and the Spatial Data-based Spatial Attitude of the CAV to the ESS, receives AMS-MAS Messages from the AMS, translates each AMS-MAS Message into Brake, Motor, and Wheel Commands, and packages and sends the Brake, Motor, and Wheel Responses from its Brakes, Motors, and Wheels to the AMS. |
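The MAS role described above — translating an AMS-MAS Message into Brake, Motor, and Wheel Commands and packaging the corresponding Responses — can be sketched as follows. The field names and value ranges are assumptions for exposition, not the normative AMS-MAS Message format.

```python
# Illustrative sketch of the Motion Actuation Subsystem behaviour.
# The dataclass fields are assumptions, not the normative format.
from dataclasses import dataclass

@dataclass
class AmsMasMessage:
    brake_level: float   # assumed range 0.0 (none) .. 1.0 (full)
    motor_torque: float  # requested torque, arbitrary units
    wheel_angle: float   # steering angle in degrees

def translate(msg: AmsMasMessage) -> dict:
    """Translate one AMS-MAS Message into subsystem-specific Commands."""
    return {
        "Brake Command": {"level": msg.brake_level},
        "Motor Command": {"torque": msg.motor_torque},
        "Wheel Command": {"angle": msg.wheel_angle},
    }

def package_responses(commands: dict) -> dict:
    """Package Brake, Motor, and Wheel Responses for return to the AMS."""
    return {name.replace("Command", "Response"): {"status": "executed", **fields}
            for name, fields in commands.items()}
```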
4.4 I/O Data of AI Modules
Table 4 gives the AI Workflows of the Connected Autonomous Vehicle depicted in Figure 1.
Table 4 – AI Workflows of Connected Autonomous Vehicle
4.5 AIMs and JSON Metadata
The AIMs composing the Connected Autonomous Operation (CAV-CAO) Composite AIM are given in Table 5:
Table 5 – AIMs composing the Connected Autonomous Operation (CAV-CAO) Composite AIM
| AIM | Name | JSON |
| CAV-CAO | Connected Autonomous Operation | Link |
| MMC-HCI | Human-CAV Interaction | Link |
| CAV-ESS | Environment Sensing Subsystem | Link |
| CAV-AMS | Autonomous Motion Subsystem | Link |
| CAV-MAS | Motion Actuation Subsystem | Link |
5 JSON Metadata
https://schemas.mpai.community/CAV2/V1.1/AIMs/ConnectedAutonomousOperation.json
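A client implementation would retrieve and parse the JSON Metadata published at the URL above. The snippet below is a minimal sketch of such parsing; only the URL is normative — the metadata fields shown ("Identifier", "Ports") are assumptions for exposition, not the actual schema contents.

```python
# Minimal sketch of loading AIM JSON Metadata. The SCHEMA_URL is the
# one given above; the metadata fields used here are assumptions,
# not the normative schema.
import json

SCHEMA_URL = ("https://schemas.mpai.community/CAV2/V1.1/AIMs/"
              "ConnectedAutonomousOperation.json")

# A hypothetical metadata fragment a CAV-CAO implementation might carry.
metadata_text = json.dumps({
    "Identifier": "CAV-CAO",
    "Ports": ["Text Object", "Audio Object", "LiDAR Object"],
})

def load_metadata(text: str) -> dict:
    """Parse metadata text and perform a minimal sanity check."""
    data = json.loads(text)
    if not isinstance(data, dict):
        raise ValueError("metadata must be a JSON object")
    return data
```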
6 Profiles
No Profiles