Connected Autonomous Vehicles

A standard addressing the 3 components of a Connected Autonomous Vehicle: 1) Autonomous Motion, 2) Human-to-CAV interaction, and 3) CAV-to-environment interaction.



Use Cases and Functional Requirements

Contents

1       Introduction
2       The MPAI approach to standardisation
3       Use Cases
3.1       Human-to-CAV interaction
3.1.1       Reference architecture
3.1.2       Input and output data
3.1.3       AI Modules
3.2       Autonomous Motion
3.2.1       Reference architecture
3.2.2       Input and output data
3.2.3       AI Modules
3.3       CAV-to-environment interaction
3.3.1       Reference architecture
3.3.2       Input and output data
3.3.3       AI Modules
4       Technologies and Functional Requirements
4.1       Introduction
4.2       Human-CAV Interaction
4.3       Autonomous Motion
4.3.1       Summary of CAV Autonomous Motion data
4.3.2       Environment sensor data
4.3.3       Onboard device data
4.3.4       User input data
4.3.5       Offline map
4.3.6       State
4.3.7       Goal
4.3.8       Route
4.3.9       Occupancy Grid Map
4.3.10     Online map
4.3.11     Traffic signals
4.3.12     Traffic rules
4.3.13     Pose
4.3.14     Velocity
4.3.15     World representation
4.3.16     Path
4.3.17     Trajectory
4.3.18     Output data
4.4       CAV-environment interaction
4.4.1       CAV identity
4.4.2       Attitude-Path-Trajectory
4.4.3       Spatial attributes
4.4.4       World representation
4.4.5       Distance
4.4.6       Events
5       References
Annex 1 – Terminology
Annex 2 – ETSI Technical Report

1        Introduction

Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) is an international association with the mission to develop AI-enabled data coding standards. Research has shown that data coding with AI-based technologies is generally more efficient than with existing technologies. Compression is a notable example of data coding, as is feature-based description.

The MPAI approach to developing AI data coding standards is based on the definition of standard interfaces of AI Modules (AIM). AIMs operate on input and output data with standard formats. AIMs can be combined and executed in an MPAI-specified AI Framework according to the emerging MPAI-AIF standard, which is being developed based on the responses to the MPAI-AIF Call for Technologies.

By exposing standard interfaces, AIMs are able to operate in an MPAI AI Framework. However, their performance may differ depending on the technologies used to implement them. Therefore, MPAI believes that competing developers striving to provide more performant proprietary, yet interoperable, AIMs will naturally create horizontal markets of AI solutions that build on and further promote AI innovation.

This document, titled Connected Autonomous Vehicles (MPAI-CAV), contains the Use Cases of the three CAV subsystems (Human-CAV Interaction, Autonomous Motion and CAV-to-Environment Interaction) and the associated Functional Requirements.

It should be noted that the Use Cases will be non-normative. The internals of the AIMs will also be non-normative. However, the input and output interfaces of the AIMs, whose requirements have been derived to support the Use Cases, will be normative.

This document includes this Introduction and:

Chapter 2: outlines the MPAI approach to standardisation

Chapter 3: describes the Use Cases giving for each the reference architecture, the input and output data and the AI Modules

Chapter 4: provides the requirements for all technologies identified

The Terms are defined in Annex 1.

2        The MPAI approach to standardisation

MPAI standards target components and systems enabled by data coding technologies, especially, but not necessarily, using AI. MPAI subdivides an Implementation of an MPAI-specified Use Case into functional components called AI Modules (AIM). AIMs and AI systems implementing a Use Case are both called Implementations.

MPAI assumes Implementations use Artificial Intelligence (AI) or Machine Learning (ML) or traditional Data Processing (DP) or a combination of these. The implementation technologies can be hardware or software or mixed hardware and software.

An AI system implementing a Use Case is an aggregation of interconnected AIMs executed inside an AI Framework (AIF). MPAI is developing such an AI Framework standard (MPAI-AIF) and plans to release it in July 2021.

The 2 basic elements of the MPAI standardisation are represented in Figure 1 and Figure 2.

Figure 1 – The MPAI AI Module (AIM)

Figure 2 – The MPAI AI Framework (AIF)

Figure 1 shows a video coming from a camera shooting a human face. The function of this AIM (green block) is to detect the emotion on the face and the meaning of the sentence the human is uttering. The AIM can be implemented with a neural network or with DP technologies. In the latter case, the AIM accesses a knowledge base external to the AIM.

The MPAI approach to developing AI data coding standards is based on the definition of standard interfaces of AI Modules (AIM) combined and executed in an MPAI-specified AI-Framework (MPAI-AIF). AIMs operate on input data with standard formats and produce output data with standard formats. MPAI is silent on how an AIM produces output data from input data, with the constraint that an MPAI-standardised AIM must execute the normatively specified function.

By exposing standard interfaces, AIMs can interoperate in the MPAI AI Framework. However, their performance may differ depending on the technologies used to implement them.

MPAI believes that competing developers striving to provide more performant proprietary, yet interoperable, AIMs will naturally create horizontal markets of AI solutions that build on and further promote AI innovation.
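
As an illustration only, the following Python sketch renders the AIM concept described above in software: the function and the input/output interfaces are declared, while the internals remain free. All names are hypothetical; MPAI-AIF will specify the actual APIs.

    from abc import ABC, abstractmethod
    from typing import Any, Dict

    class AIM(ABC):
        """An AI Module: normative function and I/O interfaces, non-normative internals."""

        @abstractmethod
        def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
            """Map standard-format inputs to standard-format outputs."""

    class EmotionMeaningAIM(AIM):
        """The AIM of Figure 1: face video and speech in, emotion and meaning out."""

        def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
            # The internals are non-normative: a neural network could sit here,
            # or legacy DP backed by an external knowledge base.
            emotion = "neutral"  # placeholder for emotion detection on inputs["video"]
            meaning = ""         # placeholder for meaning extraction from inputs["speech"]
            return {"emotion": emotion, "meaning": meaning}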

Each Use Case normatively defines:

A user of the standard can normatively reference one of the following three:

3        Use Cases

The MPAI-CAV use cases relate to the 3 main subsystems in a Connected Autonomous Vehicle. This chapter develops and describes three Reference Models:

3.1       Human-to-CAV interaction

3.1.1      Reference architecture

Humans and CAVs interact in several ways:

The CAV collects data generated by humans inside the vehicle for possible action. In general, such data are anonymised if meant for later use, e.g., statistical use. Any data specific to a human shall be deleted at the end of the trip.

Figure 3 is the reference model of Human-CAV interaction. A combination of Conversation with Emotion and Multimodal QA covers Human-CAV interaction needs.

Figure 3 – Human-CAV interaction Reference Model

Depending on the technology used (legacy or AI), the AIMs in Figure 3 may need to access external Knowledge Bases to perform their functions.

3.1.2      Input and output data

Input Speech
Input Video
Output Synthetic speech
Output Animated video

3.1.3      AI Modules

The AI Modules of Human-CAV interaction are given in Table 1.

Table 1 – AI Modules of Human-CAV interaction

AIM Function
Speech recognition Analyse the voice input and generate text output
Video Analysis 1 Produces the name of the object in focus
Video Analysis 2 Extracts emotion from human face
Language understanding Analyses natural language expressed as text using a language model to produce the meaning of the text
Emotion recognition Fuses Speech and Video emotions
Question analysis Analyses the meaning of the sentence and determines the Intention
Question & Dialog processing Analyses user’s meaning and/or question and produces a reply based on user’s Intention
Speech synthesis Converts input text to speech
Question Answering Analyses user’s question and produces a reply based on user’s Intention
Intention KB Responds to queries using a question ontology to provide the features of the question
Image KB Responds to Image analysis’s queries providing the object name in the image
Online dictionary Allows Question Answering AIM to find answers to the question
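
As a non-normative illustration, the Python sketch below chains four of the Table 1 AIMs into one conversational turn. The wiring and all return values are assumptions made for readability, not the normative topology of Figure 3.

    # Hypothetical composition of the speech chain of Table 1. Each function
    # stands for an AIM; the bodies are placeholders for non-normative internals.
    def speech_recognition(speech: bytes) -> str:
        return "stop at the next charging station"  # placeholder transcript

    def language_understanding(text: str) -> dict:
        return {"intention": "navigate", "target": "charging station"}  # placeholder meaning

    def question_dialog_processing(meaning: dict) -> str:
        return f"Heading to the nearest {meaning['target']}."  # reply based on the Intention

    def speech_synthesis(text: str) -> bytes:
        return text.encode()  # placeholder waveform

    def human_cav_turn(speech: bytes) -> bytes:
        """One interaction turn: recognition -> understanding -> dialog -> synthesis."""
        text = speech_recognition(speech)
        meaning = language_understanding(text)
        reply = question_dialog_processing(meaning)
        return speech_synthesis(reply)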

3.2       Autonomous Motion

3.2.1      Reference architecture

When properly instructed, the Autonomous Motion subsystem executes the instructions: go to a pose, change target pose, and park. It does that by:

The Autonomous Motion subsystem should be designed in such a way that different levels of autonomy, e.g., those indicated by SAE International [1], are possible depending on the amount and level of available functionalities.

The MPAI-CAV Autonomous Motion reference model is given in Figure 4.

Figure 4 – MPAI-CAV Autonomous Motion Reference Model

With the exception of the Route Planner, the AIMs located at the bottom of Figure 4 typically process high-speed data received from the physical environment or from devices inside the CAV (e.g., gyroscope). These signal sources are represented as white boxes. The AIMs at the top typically operate on lower-speed data already processed by the AIMs at the bottom. The order of the AIMs from left to right roughly corresponds to the sequential order in which the AIMs take action after receiving an instruction.

3.2.2      Input and output data

Input:

1.     Captured by sensors:

1.1.  Global Navigation Satellite System (GNSS)

1.2.  Light Detection and Ranging (LIDAR)

1.3.  Radio Detection and Ranging (RADAR)

1.4.  Cameras (2D and 3D)

1.5.  Ultrasound

1.6.  Microphones

1.7.  Wheel encoder

2.     Onboard devices:

2.1.  Inertial Measurement Unit (IMU)

2.2.  Odometer, etc.

3.     Structured:

3.1.  Other CAVs

3.2.  Static transmitters

3.3.  Offline maps

3.4.  Entertainment

Output:

1. Steering wheel actuation
2. Throttle actuation
3. Brake actuation

Notes:
  • Road wheel-related sensors include tyre pressure (to be aware of the CAV’s reaction to commands).
  • The Inertial Measurement Unit (IMU) contains an accelerometer and a gyroscope.
  • Offline maps are created using satellite data or onboard sensor data, collected over multiple passes or crowd-sourced from a fleet of cars, then annotated and curated.

3.2.3      AI Modules

The AI Modules of Autonomous Motion are given in Table 2.

Table 2 – AI Modules of Autonomous Motion

AIM Function
Route Planner computes a Route, i.e., a sequence of Way Points, from the current CAV State to the Final Goal (see Table 4)
Vehicle Localiser estimates the current CAV State in the Offline Maps
Occupancy Grid Map Creator represents the environment as a grid structure of binary values
Environment Recorder processes and records a subset of data
Online Map Creator creates a map with geometrical and topological properties
Moving Objects Tracker detects and tracks position and velocity of moving obstacles in the environment surrounding the CAV
Traffic Signal Recogniser detects and recognises signs to enable the CAV to correctly decide in conformance with the traffic rules
World Representation Creator creates an internal representation of the environment
Path Planner generates a set of Paths, considering 1) the current Route, 2) the CAV State, 3) the World Representation, and 4) the traffic rules
Behaviour Selector sets a Goal to be reached with a Driving Behaviour, avoiding collisions with static and moving objects within the decision horizon time frame
Motion Planner defines a Trajectory from the current CAV State to the current Goal, following the Behaviour Selector’s Path as closely as possible while satisfying the CAV’s kinematic and dynamic constraints and passengers’ comfort
Obstacle Avoider defines a new Trajectory that avoids obstacles
Command and Control makes the car execute the Trajectory as well as the environment allows
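
The left-to-right order of Figure 4 suggests a simplified control loop. The Python sketch below is one assumed way of chaining the Table 2 AIMs into a single cycle; real implementations run the AIMs at different rates and in parallel, and every function body is a placeholder.

    # Hypothetical single cycle of the Autonomous Motion subsystem. stub() makes
    # a placeholder AIM that ignores its inputs and returns the name of its output.
    def stub(name: str):
        return lambda *inputs: name

    vehicle_localiser = stub("State")
    route_planner = stub("Route")
    occupancy_grid_map_creator = stub("Occupancy Grid Map")
    online_map_creator = stub("Online Map")
    moving_objects_tracker = stub("Moving objects' poses and velocities")
    traffic_signal_recogniser = stub("Traffic signals and rules")
    world_representation_creator = stub("World Representation")
    path_planner = stub("Set of Paths")
    behaviour_selector = stub("Path")
    motion_planner = stub("Trajectory")
    obstacle_avoider = stub("Collision-free Trajectory")
    command_and_control = stub("Actuator commands")

    def autonomous_motion_cycle(sensors, offline_maps, final_goal):
        state = vehicle_localiser(sensors, offline_maps)
        route = route_planner(state, final_goal)
        ogm = occupancy_grid_map_creator(sensors, state)
        online_map = online_map_creator(state, offline_maps, ogm)
        moving = moving_objects_tracker(state, online_map)
        signals = traffic_signal_recogniser(sensors, online_map)
        world = world_representation_creator(state, signals, ogm, moving)
        paths = path_planner(route, state, world)
        path = behaviour_selector(paths, state, world)
        trajectory = motion_planner(path, state)
        trajectory = obstacle_avoider(trajectory, world)
        return command_and_control(trajectory)  # steering, throttle, brake actuation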

3.2.3.1     Vehicle Localiser

  1. Purpose: to estimate the current CAV State in the Offline Maps.
  2. Input:
  3. Output:
  4. Notes:

3.2.3.2     Route Planner

TBD

3.2.3.3     Occupancy Grid Map Creator

TBD

3.2.3.4     Environment Recorder

TBD

3.2.3.5     Online Map Creator

TBD

3.2.3.6     Moving Objects Tracker

TBD

3.2.3.7     Traffic Signal Recogniser

TBD

3.2.3.8     World Representation Creator

TBD

3.2.3.9     Path Planner

TBD

3.2.3.10  Behaviour Selector

TBD

3.2.3.11  Motion Planner

TBD

3.2.3.12  Obstacle Avoider

TBD

3.2.3.13  Command and Control

TBD

3.3       CAV-to-environment interaction

3.3.1      Reference architecture

Figure 5 depicts the environment applicable to MPAI-CAV.

Figure 5 – The MPAI-CAV Environment

CAVs can communicate via radio with other CAVs and other information sources.

CAVs can improve their perception capabilities by exchanging information about what they sense with other entities:

The following categories of vehicular communication are part of the literature or industry effort:

V2V Vehicle-to-Vehicle communication between vehicles to exchange information about the speed and position of surrounding vehicles
V2I Vehicle-to-Infrastructure communication between vehicles and road infrastructure.
V2X Vehicle-to-Everything communication between a vehicle and any entity that may affect, or may be affected by, the vehicle
V2R Vehicle-to-Roadside communication between a vehicle and Road Side Units (RSUs).
V2P Vehicle-to-Pedestrian communications between a vehicle and (multiple) pedestrian device(s) and to other vulnerable road users, e.g., cyclists, in close proximity
V2S Vehicle-to-Sensors communication between a vehicle and its onboard sensors
V2D Vehicle-to-Device communication between a vehicle and any electronic device that may be connected to the vehicle itself
V2G Vehicle-to-Grid communication with the power grid to sell demand response services by either returning electricity to the grid or by throttling their charging rate
V2N Vehicle-to-Network broadcast and unicast communications between vehicles and the V2X management system and also the V2X AS (Application Server)
V2C Vehicle-to-Cloud communication with data centers and other devices connected to the internet

Technologies exist that support at least some aspects of the communication types of the table:

3.3.2      Input and output data

3.3.2.1     CAVs within range

MPAI is developing a payload, different from the ETSI CPM (see Annex 2), as indicated in Table 3. The payload relies on a common volumetric world model. CAVs communicate with other CAVs in broadcast mode.

Table 3 – MPAI-CAV Interaction with Environment data

  Data type Description
V CAV identity Digital equivalent of plate number, including CAV model
V Attitude-Path-Trajectory See definitions
O Spatial attributes Position, velocity, acceleration, bounding box and semantics of objects in the environment
V World representation CAV’s world representation. (original or after fusion?)
V Distance Estimated distance between the CAV and all other CAVs.
E Events E.g., Works, Traffic jams, Number of cars at a traffic light etc.

The typical size of a LIDAR scan is of the order of 17 Mpoints; at roughly 32 bytes per point, this amounts to ~550 MB.
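
One possible container for the Table 3 payload is sketched below in Python. Field names, types and units are illustrative assumptions, since the actual format is still to be defined by MPAI.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class SpatialAttributes:
        """Per-object data (rows marked O in Table 3); units are an assumption."""
        position: Tuple[float, float, float]      # metres, in the common world model
        velocity: Tuple[float, float, float]      # m/s
        acceleration: Tuple[float, float, float]  # m/s^2
        bounding_box: Tuple[float, float, float]  # width, length, height in metres
        semantics: str                            # e.g., "pedestrian", "bicycle"

    @dataclass
    class CAVBroadcastMessage:
        """One broadcast payload carrying the Table 3 data types."""
        cav_identity: str                # digital equivalent of plate number + CAV model
        attitude_path_trajectory: bytes  # encoded per the Annex 1 definitions
        objects: List[SpatialAttributes] = field(default_factory=list)
        world_representation: bytes = b""  # original or after fusion (open issue)
        distances: Dict[str, float] = field(default_factory=dict)  # per other CAV identity
        events: List[str] = field(default_factory=list)  # e.g., "traffic jam"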

3.3.2.2     Other vehicles (not CAVs)

Other vehicles can be scooters, motorcycles, bicycles and other non-CAV vehicles.

They transmit their position as derived from GPS?

3.3.2.3     Pedestrians

Their smartphones can transmit their coordinates as available from GPS.

3.3.2.4     Fixed equipment

Fixed equipment includes traffic lights, bus stops and Road Side Units.

Traffic lights can transmit TBD

Road side transmitters can transmit TBD

3.3.3      AI Modules

TBD

4        Technologies and Functional Requirements

4.1       Introduction

The Functional Requirements refer to the individual technologies identified as necessary to implement Use Cases belonging to a given MPAI-CAV application area using AIMs operating in an MPAI-AIF AI Framework. The Functional Requirements developed adhere to the following guidelines:

4.2       Human-CAV Interaction

4.3       Autonomous Motion

4.3.1      Summary of CAV Autonomous Motion data

Table 4 gives, for each AIM (1st column), the input data (2nd column), the AIM from which they come (3rd column), and the output data (4th column).

Table 4 – MPAI-CAV Autonomous Motion data

CAV AIM | Input | From | Output
Route Planner | State | Vehicle Localiser | Route, Estimated time
Vehicle Localiser | Sensor data | Input Data | State
 | Odometry | Onboard devices |
 | Offline Maps | Input Data |
 | Sensor Data | Other CAVs |
 | Final Goal | User |
OGM Creator | Various Data | Input Data | Occupancy Grid Map
Environment Recorder | State | Vehicle Localiser |
 | OGM | OGM Creator |
 | Data (TBD) | Other CAVs |
Online Map Creator | State | Vehicle Localiser | Online Map
 | Offline Maps | Input Data |
 | Occupancy Grid Map | OGM Creator |
 | Various Data | Other CAVs |
Traffic Signal Recogniser | State | Vehicle Localiser | Traffic signals, Traffic rules
 | Sensor data | Input Data |
 | Offline Maps | Input Data |
 | Online Map | Online Map Creator |
 | Various Data | Other CAVs |
Moving Objects Tracker | State | Vehicle Localiser | Moving objects’ poses and velocities
 | Online Map | Online Map Creator |
 | Various Data | Other CAVs |
World Representation Creator | State | Vehicle Localiser | World Representation
 | Array of traffic signals | Traffic Signal Recogniser |
 | Static object poses | OGM Creator |
 | Moving objects’ poses and velocities | Moving Objects Tracker |
Path Planner | Route | Route Planner | Set of Paths
 | State | Input Data |
 | Traffic Rules | Traffic Signal Recogniser |
Behaviour Selector | Pose | Vehicle Localiser | Path
 | Poses & velocities of moving objects | Moving Objects Tracker |
Motion Planner | Path | Behaviour Selector | Trajectory
Obstacle Avoider | Trajectory | Motion Planner | Trajectory
Command and Control | Trajectory | Obstacle Avoider | Actuation of steering wheel, throttle and brakes
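
The Annex 1 definitions suggest concrete types for the data flowing through Table 4. The Python rendering below is one assumed interpretation, useful to check that the table’s producers and consumers agree on the data.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Pose:
        """Annex 1: p = (x, y, θ), 2D coordinates plus orientation in the Offline Maps."""
        x: float
        y: float
        theta: float

    @dataclass
    class State:
        """Pose plus linear/angular velocity and acceleration at a given time."""
        pose: Pose
        linear_velocity: float
        angular_velocity: float
        acceleration: float

    Path = List[Pose]  # Annex 1: a Path is a sequence of CAV Poses

    @dataclass
    class Command:
        """Annex 1 Trajectory element: desired velocity, steering angle, duration."""
        velocity: float
        steering_angle: float
        duration: float

    Trajectory = List[Command]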

4.3.2      Environment sensor data

4.3.2.1     Global Navigation Satellite System (GNSS)

TBD

4.3.2.2     Light Detection and Ranging (LIDAR)

TBD

4.3.2.3     Radio Detection and Ranging (RADAR)

TBD

4.3.2.4     Cameras (2D and 3D)

TBD

4.3.2.5     Ultrasound

TBD

4.3.2.6     Microphones

TBD

4.3.2.7     Wheel encoder

TBD

4.3.3      Onboard device data

4.3.3.1     Odometer

TBD

4.3.3.2     Accelerometer

TBD

4.3.3.3     Road wheel sensor

TBD

4.3.4      User input data

TBD

4.3.5      Offline map

TBD

4.3.6      State

TBD

4.3.7      Goal

TBD

4.3.8      Route

TBD

4.3.9      Occupancy Grid Map

TBD

4.3.10   Online map

TBD

4.3.11   Traffic signals

TBD

4.3.12   Traffic rules

TBD

4.3.13   Pose

TBD

4.3.14   Velocity

TBD

4.3.15   World representation

TBD

4.3.16   Path

TBD

4.3.17   Trajectory

TBD

4.3.18   Output data

4.3.18.1  Steering wheel actuation

TBD

4.3.18.2  Throttle actuation

TBD

4.3.18.3  Brake actuation

TBD

4.4       CAV-environment interaction

4.4.1      CAV identity

TBD

4.4.2      Attitude-Path-Trajectory

TBD

4.4.3      Spatial attributes

TBD

4.4.4      World representation

TBD

4.4.5      Distance

TBD

4.4.6      Events

TBD

5      References

Annex 1 – Terminology

 

Term Acron. Definition
Advanced Driver Assistance System ADAS Electronic systems that assist drivers in driving and parking functions
Aggregate Programming AP (Paradigm) prescribes that each AIM M periodically and asynchronously evaluates a program P (the same for all devices) by performing the following steps
AI Framework AIF
AI Module AIM A computational entity with a defined (and fair and ethical) purpose, local or networked, single or multi-processor, that exposes a set of MPAI interfaces that can be implemented as HW signals, SW APIs or protocols. Whatever is inside an AIM is not relevant. It can be connectionless or connection-oriented.
Collective Awareness CA Periodic exchange of status information between ITS-Ss (ETSI)
Collective Perception CP Sharing the perceived environment of an ITS-S based on perception sensors (ETSI)
Collective Perception Message CPM Enables a CAV to share information about detected objects with other CAVs (ETSI)
Collective Perception Service CPS Enables CAVs to share information about other road users and obstacles that were detected by its perception sensors (ETSI).
Cooperative Awareness Message CAM Messages exchanged in the ITS network between ITS-Ss to create and maintain awareness of each other and to support cooperative performance of vehicles using the road network (ETSI)
Command and Control CAC The AIM converting AOD’s decisions into actual commands and controls.
Communication The infrastructure that connects the Components of an AIF and distributed AIMs
Component An element of the AIF Reference Model
Connected and Autonomous Vehicle CAV A vehicle capable of reaching an assigned target by planning a route and acting on the CAV after sensing and interpreting the environment, possibly exchanging information with other CAVs.
Computational Field CF A distributed data structure that associates a value to each AIM. Each value is stored in the corresponding AIM, which can therefore read it
Decision horizon The estimated time between the current State and the Goal
Driving behaviour A collection of behaviours, such as lane keeping, intersection handling, traffic light handling, etc.
Execution Component where AIM workflows are executed. It receives external inputs and produces the requested outputs, both of which are application-specific
Goal g = (p, v), the pair of a Pose p and its associated velocity v.
Inertial Measurement Unit IMU Inertial positioning devices such as accelerometer, gyroscope, odometer
Machine Learning ML
Management and Control MAC The Component that manages and controls the AIMs in the AIF, so that they execute in the correct order and at the time when they are needed
Occupancy Grid Map OGM A representation of the environment as evenly spaced grids of 1/0 (presence/absence) representing an obstacle at that location computed using sensor data and CAV’s State.
Offline Map An offline-created map of a location with annotation
Online map An online-created map merging the Offline Maps and the Occupancy Grid Map computed online using sensors’ data and the current CAV State.

Path P = {p_1, p_2, …, p_|P|}, a sequence of CAV Poses p_i = (x_i, y_i, θ_i) in the Offline Maps.
Pose p = (x, y, θ), the 2D coordinates of the CAV in the Offline Maps together with its orientation θ
Remission Grid Map A grid map of reflectance intensity distribution of the environment measured by a LIDAR scanner
Route A sequence of Way Points
State The set of: pose, linear and angular velocity, acceleration, etc., characterising the CAV at a given time
Storage A Component used, e.g., to store inputs and outputs of the individual AIMs, data from the AIM’s state and intermediary results, shared data among AIMs, etc.
Trajectory A sequence of commands c_t = (v_t, φ_t, Δt), where v_t is the desired velocity at time t, φ_t is the desired steering angle at time t, and Δt is the duration of the command. Other definitions of Trajectory exist.
Way Point WP A point given as a coordinate pair (x, y) in an Offline Map

Annex 2 – ETSI Technical Report

ETSI specifies the Collective Perception Service (CPS) in its Technical Report [6]. The CPS includes the format and generation rules of the Collective Perception Message (CPM).

The CPM message format is given in Table 5 (H = header, C = container, M = mandatory, O = optional).

Table 5 –  ETSI Collective Perception Message format

PDU header H M protocol version, message ID and Station ID.
Management C M transmitter type (e.g., vehicle or RSU) and position.
Station Data C O transmitter heading, velocity, or acceleration etc.
Sensor Information C O
Perceived Object C O A CPM can report up to 128 detected objects
Free Space Addendum C O free space areas/volume within the sensor detection areas

Every 0.1 s, a CPM is generated if one of the 3 conditions is satisfied.
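
A schematic rendering of this rule is sketched below in Python; the three ETSI generation conditions are passed in as opaque predicates, since they are not reproduced in this document.

    import time
    from typing import Callable, List

    def cpm_generation_loop(conditions: List[Callable[[], bool]],
                            build_cpm: Callable[[], bytes],
                            send: Callable[[bytes], None]) -> None:
        """Check the ETSI generation conditions every 0.1 s and emit a CPM
        whenever at least one of them is satisfied."""
        while True:
            if any(condition() for condition in conditions):
                send(build_cpm())
            time.sleep(0.1)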

ETSI makes use of a common coordinate system. A vehicle can communicate its absolute coordinates and its roll, pitch and yaw (Attitude).

Different CPM generation rules have been investigated [9].



MPAI Application Note #9 – MPAI-CAV – Connected Autonomous Vehicles

Proponents: Giorgio Audrito (University of Turin), Leonardo Chiariglione (CEDEO), Gérard Chollet, Miran Choi (ETRI), Ferruccio Damiani (University of Turin), Gianluca Torta (University of Turin)

Description: This use case addresses the Connected Autonomous Vehicle (CAV) domain and the 3 main operating instances of a CAV:

  1. Autonomous Motion, i.e., the operation of the portion of a CAV that enables its autonomous motion
  2. Human-to-CAV interaction, i.e., the operation of the portion of a CAV that responds to humans’ commands and queries and senses humans’ activities
  3. CAV-to-environment interaction, i.e., the operation of the portion of a CAV that communicates with other CAVs and sources of information.

Comments:

Significant research and experimentation has been carried out in the domain addressed by this Application Note. However,

  1. While there is a high level of knowledge and result sharing about the algorithms studied and experimented with, e.g., in the several challenges, and there is rough commonality in the Autonomous Motion reference models, no attempt has been made to formalise such a reference model and identify the (classes of) data types flowing in and out of the CAV subsystems.
  2. There has been no significant effort to identify and classify human commands and queries to CAVs and the level of passenger activity in the CAV passenger compartment.
  3. While there are significant studies and even a standard addressing CAV-to-CAV interaction, the communication payload considered is not directly connected with the use and relevance of the data that flow inside the CAV.

Examples:

A preliminary study carried out by the MPAI-CAV Requirements group has identified the following subsystems (AIMs, in the MPAI language):

  1. Vehicle Localiser
  2. Route Planner
  3. Occupancy Grid Map Creator
  4. Environment Mapper
  5. Moving Objects Tracker
  6. Traffic Signalisation Detector
  7. World Representation Creator
  8. Path Planner
  9. Behavior Selector
  10. Motion Planner
  11. Obstacle Avoider
  12. Command and Control

A first identification of input/output data has already been achieved.

Similar work is under way for the Human-to-CAV interaction.

Initial work to identify the CAV-to-environment interaction is under way.

Object of standard:

  1. Reference models for the 3 CAV components: 1) Autonomous Motion, 2) Human-to-CAV interaction and 3) CAV-to-environment interaction
  2. Functionalities of AIMs of 1) Autonomous Motion and formats of data between AIMs
  3. Functionalities of AIMs of 2) Human-to-CAV interaction and formats of data between AIMs, taking into account other MPAI projects
  4. Messages and data formats of CAV-to-environment interaction.

Benefits: The standard would help

  1. development and maturation of technologies required for high performance Autonomous Motion AIMs.
  2. create synergies between CAV-specific and wider use human-machine interaction.
  3. develop CAV-to-environment protocols that are focused on the actual needs of CAVs.

Bottlenecks: actual experimentation will require large amounts of data available from market players.

Social aspects: availability of superior technologies, especially in the Autonomous Motion component, will accelerate the development of a much-needed application.

Success criteria: the progress of technology triggered by the MPAI Reference Models.

References:

[1] MPAI N242: MPAI-CAV Reference Models

[2] ETSI TR 103 562 V2.1.1 (2019-12), Analysis of the Collective Perception Service (CPS); Release 2
Release 2