6       Functional Requirements
6.1       Human-CAV Interaction
6.1.1       I/O Data summary
6.1.2       Audio
6.1.3       Verbal Interaction
6.1.4       Concept Expression (Face)
6.1.5       Concept Expression (Speech)
6.1.6       Emotion
6.1.7       Face identity
6.1.8       Face Objects
6.1.9       Full World Representation
6.1.10     Full World Representation Viewer commands
6.1.11     Intention
6.1.12     Meaning
6.1.13     Object Identifier
6.1.14     Speaker Identity
6.1.15     Text
6.1.16     Video
6.2       Environment Sensing Subsystem
6.2.1       I/O Data summary
6.2.2       Microphone Array Audio
6.2.3       Audio Objects
6.2.4       Basic World Representation
6.2.5       GNSS Coordinates
6.2.6       GNSS Data
6.2.7       Lidar Data
6.2.8       Moving Object Tracker Data
6.2.9       Offline Maps
6.2.10     Radar Data
6.2.11     State
6.2.12     Traffic Signalisation
6.2.13     Ultrasound Data
6.2.14     Video Camera data
6.2.15     Visual Objects and Scene (Camera)
6.2.16     Visual Objects and Scene (Lidar)
6.2.17     Visual Objects and Scene (Radar)
6.2.18     Ultrasound Objects and Scene (Ultrasound)
6.3       CAV to Everything
6.3.1       Summary of CAV to Everything data
6.3.2       Basic World Representation
6.3.3       CAV Identifier
6.3.4       Events
6.3.5       Full World Representation
6.3.6       Path
6.3.7       State
6.3.8       Trajectory
6.4       Autonomous Motion Subsystem
6.4.1       Summary of Autonomous Motion Subsystem data
6.4.2       Basic World Representation
6.4.3       Command/Response
6.4.4       Full World Representation
6.4.5       Goal
6.4.6       Offline map
6.4.7       Path
6.4.8       Pose
6.4.9       Route
6.4.10     State
6.4.11     Traffic rules
6.4.12     Traffic Signals
6.4.13     Trajectory
6.4.14     Velocity
6.5       Motion Actuation Subsystem
6.5.1       Summary of Motion Actuation Subsystem data
6.5.2       Accelerometer data
6.5.3       Brakes Command
6.5.4       Brakes Feedback
6.5.5       Command from AMS
6.5.6       Feedback to AMS
6.5.7       Motion Data
6.5.8       Odometer Data
6.5.9       Other Environment Data
6.5.10     Road Wheel Direction Command
6.5.11     Road Wheel Direction Feedback
6.5.12     Road Wheel Motor Command
6.5.13     Road Wheel Motor Feedback
6.5.14     Speedometer

6        Functional Requirements

The Functional Requirements developed in this document refer to the individual technologies identified as necessary to implement the MPAI-CAV Use Cases using AIMs operating in an MPAI AI Framework (AIF). They adhere to the following guidelines:

MPAI has issued Calls for Technologies for the MPAI-MMC [3] and MPAI-CAE [4] standards and has acquired a set of first-generation technologies related to some of the data types listed below. MPAI is ready to consider new technologies related to the data Formats requested in this Chapter if:

  • They support new requirements and/or enhance existing capabilities.
  • The need to support such new or enhanced capabilities is documented.

6.1       Human-CAV Interaction

6.1.1      I/O Data summary

For each AIM (1st column), Table 12 gives the input data (2nd column) and the output data (3rd column).

Table 12 – I/O data of Human-CAV Interaction AIMs

AIM | Input Data | Output Data
Speech Separation | Input Audio | Separated Speech
Internal AV Scene | Input Video | Face Objects
Speaker Recognition | Separated Speech | Speaker ID
Speech Recognition | Separated Speech | Emotion (Speech), Text (Speech)
Object and Gesture Analysis | Input Video | Object ID, Emotion (Gesture), Meaning (Gesture)
Face Analysis | Face Objects | Emotion (Face), Meaning (Face)
Face Identification | Face Objects | Face ID
Full World Representation Viewer | Full World Representation, Viewer Command | FWRV Audio, FWRV Video
Emotion Fusion | Emotion (Speech), Emotion (Face), Emotion (Gesture) | Fused Emotion
Language Understanding | Text (Speech), Input Text, Object ID | Text (Language Understanding), Meaning (Text)
Question analysis | Meaning (Text), Meaning (Gesture), Meaning (Face) | Fused Meaning, Intention
Question and dialogue processing | Input Text, Speaker ID, Fused Emotion, Text (Speech), Fused Meaning, Intention, Face ID, Face Objects, Command/Request | Feedback/Response, Concept (Speech), Output Text, Concept (Face)
Speech synthesis | Concept (Speech) | Output Speech
Face animation | Concept (Face) | Output Video

6.1.2      Audio

Monochannel Audio is the digital representation of an analogue audio signal sampled at a frequency between 8 and 192 kHz, with 8 to 32 bits/sample, and linear or companded quantisation.
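
For illustration only, the constraints above can be captured in a small descriptor. The following Python sketch uses hypothetical class and field names that are not part of any MPAI specification:

    from dataclasses import dataclass

    @dataclass
    class MonochannelAudio:                  # hypothetical descriptor
        sample_rate_hz: int                  # 8,000 to 192,000 Hz
        bits_per_sample: int                 # 8 to 32
        quantisation: str                    # "linear" or "companded"

        def validate(self) -> None:
            # Checks the ranges stated in the definition above.
            if not 8_000 <= self.sample_rate_hz <= 192_000:
                raise ValueError("sampling frequency outside 8-192 kHz")
            if not 8 <= self.bits_per_sample <= 32:
                raise ValueError("bits/sample outside 8-32")
            if self.quantisation not in ("linear", "companded"):
                raise ValueError("quantisation must be linear or companded")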

To respondents

Respondents are invited to comment on these definitions and/or provide specific restrictions suitable for CAV-HCI.

6.1.3      Verbal Interaction

Some commands given to the Autonomous Motion Subsystem are:

  1. Go to a Waypoint.
  2. How long will it take to get there?
  3. Park close to a Waypoint.
  4. Drive faster.
  5. Drive slowly.
  6. Display Full World Representation.

Some of the responses of the Autonomous Motion Subsystem are:

  1. Enumeration of possible routes with major features of each route.
  2. Enumeration of possible parking places with major features of each place.
  3. Announcement of obstacles preventing the expeditious accomplishment of the Command.
  4. Announcement that the desired Waypoint has been reached.
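
Purely as an illustration, one such Command and one such Response could be serialised as below; the message and field names are hypothetical, not an MPAI-specified format:

    import json

    # Hypothetical "Go to a Waypoint" command and a possible
    # "enumeration of routes" response.
    command = {
        "type": "GO_TO_WAYPOINT",
        "waypoint": {"lat": 45.0703, "lon": 7.6869},
    }
    response = {
        "type": "ROUTE_ENUMERATION",
        "routes": [
            {"id": 1, "eta_minutes": 25, "features": ["highway", "toll"]},
            {"id": 2, "eta_minutes": 32, "features": ["scenic"]},
        ],
    }
    print(json.dumps(command))
    print(json.dumps(response))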

To respondents

Respondents are requested to propose a coded representation of the above Commands and Responses, coordinated with the requirements of the Autonomous Motion Subsystem. Proposals of coded representations of additional commands and responses are welcome.

6.1.4      Concept Expression (Face)

MPAI-MMC [3] specifies a Lips Animation format.

To Respondents

In this Call, MPAI is looking for a technology that can animate the head and face of the avatar in order to represent:

  1. Motion of head when speaking.
  2. Motion of face muscles and eyeballs.
  3. Turning of gaze to a particular person.
  4. Emotion of the associated spoken sentence.
  5. Meaning of the associated spoken sentence.

6.1.5      Concept Expression (Speech)

MPAI-MMC [3] specifies Text With Emotion as the Reply (Speech) format.

To Respondents

Respondents are requested to propose a “Concept to Speech” format with the following requirements:

  1. Capability to represent Emotions varying in time in the synthesised Speech.
  2. Capability to represent Meanings varying in time in the CAV reply.
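
A minimal sketch of a payload meeting these two requirements, with hypothetical field names, might segment the reply over time and attach an Emotion and a Meaning to each segment:

    import json

    # Hypothetical "Concept to Speech" payload; emotion and meaning
    # vary per time segment of the synthesised reply.
    concept_speech = {
        "text": "We will arrive in ten minutes.",
        "segments": [
            {"start_s": 0.0, "end_s": 1.2, "emotion": "neutral",
             "meaning": "statement"},
            {"start_s": 1.2, "end_s": 2.5, "emotion": "reassuring",
             "meaning": "time estimate"},
        ],
    }
    print(json.dumps(concept_speech, indent=2))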

6.1.6      Emotion

MPAI-MMC [3] specifies an extensible 3-level Basic Emotion Set.

To respondents

Respondents are requested to comment on the suitability of the technology standardised in [3] for the purpose of supporting human dialogue with a CAV. In case this is considered unsuitable, respondents are requested to motivate their assessment and provide an extension of the MPAI Basic Emotion Set or a new solution.

6.1.7      Face identity

The Face Identity shall be able to represent the identity of a limited number of faces.

To respondents

Respondents are requested to propose a face identification system suitable for a limited number of faces.

Proposals for a face identification method usable in the context of a company renting CAVs to customers are welcome.

6.1.8      Face Objects

In order for the HCI Subsystem to have a full understanding of what is happening in the passenger cabin (e.g., to have a more natural audio-visual interaction with the passengers, to record what happens in the cabin, etc.), the HCI Subsystem needs to represent the data acquired from the cabin. The current uses are:

  1. To extract the face of a passenger for the purpose of extracting Emotion and Identity.
  2. To determine the exact location of a passenger in the cabin in order to animate the CAV’s Avatar Face in such a way that the Avatar gazes into the eyes of the passenger it is talking to.

To respondents

Respondents are invited to propose a Face Objects format satisfying the above requirements to be used as input to Face Analysis, Face Identification and Question and Dialogue Processing.

6.1.9      Full World Representation

The Full World Representation requirements are developed in the context of Autonomous Motion Subsystem requirements.

To respondents

Respondents are invited to comment.

6.1.10   Full World Representation Viewer commands

The requirements of FWR interaction will be developed once the FWR requirements are defined.

To respondents

Respondents are invited to comment.

6.1.11   Intention

MPAI-MMC [3] specifies a digital representation format for Intention.

To respondents

Respondents are requested to comment on the suitability of the technology standardised in [3] for CAV purposes.

6.1.12   Meaning

MPAI-MMC [3] specifies a digital representation format for Meaning.

To respondents

Respondents are requested to comment on the suitability of the technology standardised in [3] for CAV purposes.

6.1.13   Object Identifier

MPAI-MMC [3] specifies a digital representation format for Object Identifier to be used to identify objects held in the hand of a person.

To respondents

Respondents are requested to comment on the suitability of the technology standardised in [3] for CAV purposes.

6.1.14   Speaker Identity

The current Speaker Identity requirements demand the ability to identify a limited number of Speakers.

To respondents

Respondents are requested to propose a Speaker Identification method suitable for a limited number of speakers.

Proposals for a Speaker Identification method usable in the context of a company renting CAVs to customers are welcome.

6.1.15   Text

MPAI-MMC [3] specifies ISO/IEC 10646, Information technology – Universal Coded Character Set (UCS) [5] as digital Text representation to support most languages in use.

To respondents

Respondents are invited to comment on this choice.

6.1.16   Video

Video is intended for use in the passenger cabin. MPAI-MMC [3] specifies Video as:

  1. Pixel shape: square
  2. Bit depth: 8 or 10 bits/pixel
  3. Aspect ratio: 4/3 or 16/9
  4. 640 < # of horizontal pixels < 1920
  5. 480 < # of vertical pixels < 1080
  6. Frame frequency 50-120 Hz
  7. Scanning: progressive
  8. Colorimetry: ITU-R BT.709 or BT.2020
  9. Colour format: RGB or YUV
  10. Compression, either:
    1. Uncompressed;
    2. Compressed according to one of the following standards: MPEG-4 AVC [6], MPEG-H HEVC [7], MPEG-5 EVC [8]
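
As a sketch only, a conformance check over these constraints could look as follows; the function name is hypothetical, and the pixel-count bounds are kept strict, as written in items 4 and 5 (they may be intended as inclusive):

    def conforms(h_pixels: int, v_pixels: int, bit_depth: int,
                 frame_rate_hz: float, compression: str) -> bool:
        # Follows items 2, 4, 5, 6 and 10 of the list above.
        return (640 < h_pixels < 1920
                and 480 < v_pixels < 1080
                and bit_depth in (8, 10)
                and 50 <= frame_rate_hz <= 120
                and compression in ("uncompressed", "AVC", "HEVC", "EVC"))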

To respondents

Respondents are invited to comment on MPAI’s choice for 2D Video.

Respondents are also requested to propose a data format for an array of cameras having video+depth as the baseline format or other 3D Video data formats.

6.2       Environment Sensing Subsystem

6.2.1      I/O Data summary

For each AIM (1st column), Table 13 gives the input data (2nd column) and the output data (3rd column). The following subsections give the requirements of the data formats in columns 2 and 3.

Table 13 – Environment Sensing Subsystem data

AIM or Subsystem | Input | Output
Vehicle Localiser | GNSS Coordinates, Pose-Velocity-Acceleration, Offline Maps | State
Environment Recorder | State, Basic World Representation, Other Environment Data |
GNSS Coordinate Data Extractor | GNSS data | Global coordinates
Radar Data Processor | Radar data | Visual Objects and Scene
Lidar Data Processor | Lidar data | Visual Objects and Scene
Ultrasound Data Processor | Ultrasound data | Visual Objects and Scene
Camera Data Processor | Camera data | Visual Objects and Scene
Microphone Sound Data Processor | Microphone data | Sound Objects and Scene
Traffic Signalisation Detector | Visual Objects and Scene | Traffic signals, Traffic rules
Moving Objects Tracker | Visual Objects and Scene | Moving objects’ states
Basic World Representation Fusion | State, Offline maps, Visual Objects and Scenes, Static and moving objects, Traffic signals | Basic World Representation

6.2.2      Microphone Array Audio

Microphones are used to capture the external sound, e.g., for noise suppression inside the passenger cabin, but also to add the sound dimension to the Full World Representation by using the Audio of the Environment captured by the external Microphone Array.

MPAI-CAE specifies Interleaved Multichannel Audio [4].

To Respondents

Respondents are requested to comment on the usability of the specified technology for MPAI-CAV and/or propose an Audio Array Format suitable to create a 3D sound field representation of the Environment to be added to the Basic World Representation and used inside the passenger cabin, e.g., to cancel Environment noise.

6.2.3      Audio Objects

The sound field of the Environment is captured by the external Microphone Array, and Audio Objects are extracted and added to the Basic World Representation and, eventually, to the Full World Representation after information has been received from other CAVs in range.

To Respondents

Respondents are requested to propose an Audio Objects Format that provides information about audio objects identified in the Environment with their semantics and the degree of accuracy with which objects have been represented.

6.2.4      Basic World Representation

Data from different information sources, e.g., the CAV’s Environment sensors and Offline maps, are combined into one comprehensive Basic World Representation (BWR) [31]. The BWR ensures that all CAV functions base their decisions on the same knowledge base, making system operation consistent.

The requirements of the BWR are:

  1. All perceived objects that impact the path decision process in the Decision Horizon Time shall be represented in the BWR.
  2. Each object in the BWR shall be described by:
    1. Its ID.
    2. Its State.
    3. Its physical characteristics, e.g., static or dynamic.
    4. Its bounding box (as a minimum) and its full shape if known.
    5. Its semantics (e.g., other CAVs or other objects).
    6. An accuracy estimate.
  3. The ground (roads etc.) shall be described with all traffic signalisations, including road and lane geometry, topology, and lane-specific traffic rules.
  4. The BWR shall be able to scale as the level of structuredness of the Environment increases.
  5. The BWR shall have a scalable representation that allows fast access to critical data.
  6. The BWR shall include the Audio Objects of 6.2.3.
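
By way of illustration, requirement 2 could be rendered as the record below; the class and field names are hypothetical and not an MPAI format:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class BWRObject:                    # one perceived object in the BWR
        object_id: str                  # 2.1: its ID
        state: "State"                  # 2.2: its State (see 6.2.11)
        is_static: bool                 # 2.3: physical characteristics
        bounding_box: list              # 2.4: vertices of the bounding box
        full_shape: Optional[bytes]     # 2.4: full shape, if known
        semantics: str                  # 2.5: e.g., "CAV", "pedestrian"
        accuracy: float                 # 2.6: accuracy estimate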

To Respondents

Respondents are requested to propose a Basic World Representation data format satisfying the requirements. Proposals with justified extended requirements will be considered.

6.2.5      GNSS Coordinates

To Respondents

Respondents are requested to provide a format for the coordinates and the accuracy of the data.

6.2.6      GNSS Data

Global Navigation Satellite Systems (GNSS) provide spatial information with different accuracies. GNSS can only be relied on when reception conditions are above a certain level. This excludes GNSS in tunnels or urban canyons.

Some data formats are:

  1. GPS Exchange Format (GPX) defines an XML schema providing a common GPS data format that can be used to describe waypoints, tracks, and routes.
  2. World Geodetic System (WGS) includes the definition of the coordinate system’s fundamental and derived constants, the ellipsoidal (normal) Earth Gravitational Model (EGM), a description of the associated World Magnetic Model (WMM), and a current list of local datum transformations.
  3. International GNSS Service (IGS) SSR is a format used to disseminate real-time products to support the IGS (igs.org) Real-Time Service. The messages support multi-GNSS and include corrections for orbits, clocks, DCBs, phase-biases and ionospheric delays. Extensions are planned to also cover satellite attitude, phase centre offsets and variations and group delay variations.
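
As an illustration of item 1, a GPX waypoint can be read with standard XML tooling. The snippet below is a minimal sketch with made-up coordinates:

    import xml.etree.ElementTree as ET

    GPX = """<gpx xmlns="http://www.topografix.com/GPX/1/1"
                  version="1.1" creator="demo">
      <wpt lat="45.0703" lon="7.6869"><name>Torino</name></wpt>
    </gpx>"""

    root = ET.fromstring(GPX)
    ns = {"g": "http://www.topografix.com/GPX/1/1"}
    for wpt in root.findall("g:wpt", ns):
        # Waypoint coordinates are carried as attributes.
        print(wpt.attrib["lat"], wpt.attrib["lon"])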

To Respondents

Respondents are requested to propose a single GNSS data format capable of representing the features of all GNSS types.

6.2.7      Lidar Data

Radio Detection and Ranging (RADAR), LiDAR and Ultrasound are active sensors based on “time-of-flight”, i.e., they measure distance and speed based on the time it takes for a signal to hit an object and be reflected back.

Unlike Radar, however, LiDAR operates in the µm range (ultraviolet, visible, or near-infrared light). It sends an electromagnetic signal and receives the reflected signal back. These are the features of a typical eye-safe LiDAR:

  1. Has a frequency of ~200 THz and a wavelength of ~1.5 µm (the visible range is 0.4 to 0.75 µm).
  2. Measures the range in each pixel (also called voxels).
  3. Pixel grayscale is measured by the intensity variation of the reflected light.
  4. The colour of an object can be measured by using more than one wavelength.
  5. Velocity can be measured using the Doppler shift in frequency due to motion, or by measuring the position at different times.
  6. Micro-motion can be measured using the Doppler shift measured with a coherent LiDAR.
  7. Produces 100 kpoints/frame or 1.35 Mbytes: 32*3 bits (coordinates) + 16 bits (reflectance); see the sketch after this list. Today 200 kpoints/frame are reasonable.
  8. Angular resolution is 0.1º and the vertical field is 40º.
  9. A Lidar scan captured at 25 fps generates 270 Mbit/s, i.e., 33.75 Mbytes/s.
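
The arithmetic of items 7 and 9 can be checked as follows (small rounding differences come from the Mbyte convention used in the list):

    points_per_frame = 100_000
    bits_per_point = 32 * 3 + 16                  # x, y, z + reflectance
    frame_mbytes = points_per_frame * bits_per_point / 8 / 2**20
    print(round(frame_mbytes, 2))                 # ~1.34, item 7's ~1.35 Mbytes
    print(round(frame_mbytes * 25 * 8))           # ~267, item 9's ~270 Mbit/s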

The LAS (LASer) format is a binary file format for LiDAR point cloud data specified by the American Society for Photogrammetry and Remote Sensing (ASPRS) [23].

Pcap is a well-established data format for Lidar scans [24, 25, 26]. Other formats are listed in [28]; E57 is one of them.

To Respondents

Respondents are invited to provide a LiDAR data format that facilitates identification, tracking and digital representation of objects to produce Visual Objects and Scene (Lidar) as required by 6.2.16.

6.2.8      Moving Object Tracker Data

The Moving Object Tracker receives the Visual Objects and Scene data from the different sources (Lidar, Radar, Cameras, Ultrasound, Environment Sound) and provides a list of Visual Objects where each Object has the following associated data:

  1. Spatial coordinates.
  2. Bounding Boxes.
  3. Coordinates of the vertices of the Bounding Boxes.
  4. Velocity and Acceleration.
  5. Accuracy of the data provided.
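
A minimal sketch of such an Object record, with hypothetical names, is:

    from dataclasses import dataclass

    @dataclass
    class TrackedObject:          # illustrative, not an MPAI format
        position: tuple           # spatial coordinates (x, y, z)
        bbox_vertices: list       # coordinates of the Bounding Box vertices
        velocity: tuple
        acceleration: tuple
        accuracy: float           # accuracy of the data provided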

To Respondents

Respondents are requested to propose a format for the Objects and their list that integrates easily with the Basic World Representation format.

6.2.9      Offline Maps

An Offline Map (also called HD map or 3D map) is a roadmap with cm-level accuracy and high environmental fidelity, reporting the positions of pedestrian crossings, traffic lights/signs, barriers, etc. at the time the Offline Map was created.

Worth noting are:

  1. Navigation Data Standards [30] calls itself “The worldwide standard for map data in automotive eco-systems”. Their NDS specification covers data model, storage format, interfaces, and protocols.
  2. SharedStreets [34] Referencing System is a global non-proprietary system for describing streets.

To Respondents

Respondents are requested to propose an Offline Map Format. The Format should support different levels of conformance.

6.2.10   Radar Data

Radar operates in the mm range. It can easily detect vehicles (CAVs and trucks) because they typically reflect Radar signals well, while smaller objects with lower reflectance, e.g., pedestrians and motorcycles, are harder to detect. In a busy environment, the reflections of big vehicles can overwhelm those of a motorcycle, and a child next to a vehicle can go undetected, while a metal can may produce an image out of proportion to its size.

The main features of Radar are:

  1. Measures distance.
  2. Is independent of the environment.
  3. Low-resolution images (objects are detected, not classified).
  4. Short-range radar operates in the 25 GHz band; distance is computed.
  5. Long-range radar operates in the 76-77 GHz band; it detects objects and measures speed at distances ≤ 250 m. Typical ranges of long-range radar (LRR) systems are 80-200 m. The antenna is small because the wavelength is ~3.5-4 mm. Atmospheric absorption limits interference with other systems. A multitask 94 GHz pulse Doppler radar has 25 cm radial and 1.5-degree angular resolution.

Radar sensors build a representation of the environment based on the observation of complex, scattered radio waves, from which information about an object’s distance and velocity can be derived.

Known Radar data formats include [27]:

  1. OPERA BUFR format (Paulitsch et al., 2010).
  2. hdf5 formats (Michelson et al., 2011).
  3. NetCDF files generated by the commercial EDGE software.
  4. hdf5 files generated by the commercial GAMIC software.
  5. German Weather Services quantitative local scan format (DX).
  6. Quantitative composite format (RADOLAN, see German Weather Service, 2004).

To Respondents

Respondents are invited to propose a format of Radar images that facilitates identification, tracking and representation of objects to produce Visual Objects and Scene (Radar) as required by 6.2.17.

6.2.11   State

State is the set of the following CAV attributes at a given time:

  1. Pose, Velocity and Acceleration.
  2. Orientation, Angular Velocity and Angular Acceleration.
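
A minimal sketch of a State record carrying these attributes, with hypothetical field names, is:

    from dataclasses import dataclass

    @dataclass
    class State:                        # illustrative, not an MPAI format
        time: float                     # time at which the attributes hold
        pose: tuple                     # position (x, y, z)
        velocity: tuple
        acceleration: tuple
        orientation: tuple              # e.g., roll, pitch, yaw
        angular_velocity: tuple
        angular_acceleration: tuple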

To Respondents

Respondents are requested to propose a State Format suitable for use in CAVs.

6.2.12   Traffic Signalisation

Traffic Signalisation types are:

  1. Traffic signs
  2. Road signs
  3. Placement signs
  4. Acoustic signs
  5. Traffic lights

To Respondents

Respondents are requested to propose a set of Traffic Signalisation Descriptors.

6.2.13   Ultrasound Data

These are the main features of Ultrasound:

  1. Operates at 20 kHz.
  2. Is independent of environment.
  3. Images have low resolution.
  4. Works on a limited range (≤ 10 m).

To Respondents

The Ultrasound File Format initiative has defined the Ultrasound File Format (UFF) [22].

Respondents are invited to propose an Ultrasound Format that facilitates identification, tracking and representation of objects to produce Ultrasound Objects and Scene (Ultrasound) as required by 6.2.18.

6.2.14   Video Camera data

The expected output is a scene representation used by Moving Object Tracker, Traffic Signalisation Recogniser and Basic World Representation.

To Respondents

Respondents are invited to provide a data Format for RGB-D cameras.

6.2.15   Visual Objects and Scene (Camera)

The expected output is a scene representation used by Moving Object Tracker, Traffic Signalisation Recogniser and Basic World Representation.

To Respondents

Respondents are invited to provide a Format for scenes captured by cameras. The format should be sufficiently generic that it can be used for, or adapted for use in, scenes captured by Radar, Lidar and Ultrasound devices.

6.2.16   Visual Objects and Scene (Lidar)

The expected output is a scene representation used by Moving Object Tracker, Traffic Signalisation Recogniser and Basic World Representation.

To Respondents

Respondents are invited to provide a Format for scenes captured by Lidars. The format should be sufficiently generic that it can be used for, or adapted for use in, scenes captured by Radar, Video and Ultrasound devices.

6.2.17   Visual Objects and Scene (Radar)

The expected output is a scene representation used by Moving Object Tracker, Traffic Signalisation Recogniser and Basic World Representation.

To Respondents

Respondents are invited to provide a Format for scenes captured by Radars. The format should be sufficiently generic that it can be used for, or adapted for use in, scenes captured by Lidar, Video and Ultrasound devices.

6.2.18   Ultrasound Objects and Scene (Ultrasound)

The expected output is a scene representation used by Moving Object Tracker, Traffic Signalisation Recogniser and Basic World Representation.

To Respondents

Respondents are invited to provide a Format for scenes captured by Ultrasound. The format should be sufficiently generic that it can be used for, or adapted for use in, scenes captured by Lidar, Radar and Video devices.

6.3       CAV to Everything

6.3.1      Summary of CAV to Everything data

Table 14 gives, for each AIM (1st column), the input data (2nd column) and the output data (3rd column).

Table 14 – CAV to Everything data

CAV AIM | Input | Output
General Data Communication | CAV identity and model | CAV identity and model
General Data Communication | State-Path-Trajectory | State-Path-Trajectory
General Data Communication | Basic World Representation | Basic World Representation
General Data Communication | Full World Representation | Full World Representation
General Data Communication | Messages | Messages
CAV Proxy | Data from General Data Communication, Data from AIMs | Data to General Data Communication, Data to AIMs
AMS | Basic World Representation | Basic World Representation
HCI | Data from General Data Communication | Data to General Data Communication

6.3.2      Basic World Representation

As in Environment Sensing Subsystem.

To Respondents

No response requested here. Comments welcome.

6.3.3      CAV Identifier

The CAV identification system should carry the following information:

  1. Country where the CAV has been registered.
  2. Registration number in the country.
  3. CAV manufacturer identifier.
  4. CAV model identifier.
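
A toy encoding carrying the four items above could concatenate fixed fields; the field layout is hypothetical:

    def make_cav_id(country: str, registration: str,
                    manufacturer: str, model: str) -> str:
        # Joins the four items of the list above into one identifier.
        return "-".join([country.upper(), registration, manufacturer, model])

    cav_id = make_cav_id("IT", "AB123CD", "ACME", "EV9")
    print(cav_id)                        # IT-AB123CD-ACME-EV9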

To Respondents

MPAI requests proposals for a universal CAV identification system. Justified proposals for the inclusion of additional data in the CAV Identifier are welcome.

6.3.4      Events

Events are used to provide a CAV with information that is useful for its travel.

Examples are:

  1. Road blocked at waypoint x,y,z
  2. Traffic jam at waypoint x,y,z
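
A possible coded Event message, sketched with hypothetical field names, is:

    import json

    event = {
        "type": "ROAD_BLOCKED",                   # or "TRAFFIC_JAM", ...
        "waypoint": {"x": 10.0, "y": 4.2, "z": 0.0},
        "time": "2023-05-01T12:00:00Z",           # time of observation
    }
    print(json.dumps(event))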

To Respondents

MPAI requests proposals for events, their semantics and coded representation.

6.3.5      Full World Representation

Defined in Autonomous Motion Subsystem.

To Respondents

No response requested here. Comments welcome.

6.3.6      Path

Defined in Autonomous Motion Subsystem.

To Respondents

No response requested here. Comments welcome.

6.3.7      State

Defined in Autonomous Motion Subsystem.

To Respondents

No response requested here. Comments welcome.

6.3.8      Trajectory

Defined in Autonomous Motion Subsystem.

To Respondents

No response requested here. Comments welcome.

6.4       Autonomous Motion Subsystem

6.4.1      Summary of Autonomous Motion Subsystem data

Table 15 gives, for each AIM (1st column), the input data (2nd column) and the output data (3rd column).

Table 15 – CAV Autonomous Motion Subsystem data

CAV/AIM | Input | Output
Route Planner | Pose, Destination | Route, Estimated time
Full World Representation Fusion | State, Offline Maps, Basic World Representations, Other Environment Data | Full World Representation
Path Planner | State, Route, Traffic Rules | Set of Paths
Behaviour Selector | State, Route, Full World Representation | Path
Motion Planner | Path | Trajectory
Obstacle Avoider | Full World Representation, Trajectory | Trajectory
Command | Feedback | Command

6.4.2      Basic World Representation

Defined in Environment Sensing Subsystem.

To Respondents

No response requested here. Comments welcome.

6.4.3      Command/Response

Defined in the Human-CAV Interaction Subsystem.

To Respondents

No response requested here. Comments welcome.

6.4.4      Full World Representation

The elements of the FWR are:

  1. Appropriate portion of the offline map.
  2. Physics of the environment: weather, temperature, air pressure, ice and water on the road.
  3. For each object: ID, position, velocity, acceleration, bounding box (more than a box, if available), semantics, flags (e.g., warning).
  4. For CAVs, the Path and bounding box or the shape of the body, if available.
  5. Road structure.
  6. Local traffic signalisation.
  7. Scalable representation such that the following is possible:
    1. Fast access to different data depending on the AIM that needs access.
    2. Deliberative and reactive actions.
  8. The estimated accuracy of each data element based on the CAV’s measurements and the Basic World Representations received from CAVs in range.

6.4.5      Goal

A particular State.

To Respondents

No response requested. Comments welcome.

6.4.6      Offline map

Defined in Environment Sensing Subsystem.

To Respondents

No response requested here. Comments welcome.

6.4.7      Path

A sequence of Poses in the Offline Map.

To Respondents

No response requested here. Comments welcome.

6.4.8      Pose

The position of an object, including the CAV, in the Environment.

To Respondents

A format to represent Pose is requested.

6.4.9      Route

A sequence of Waypoints.

To Respondents

A Route Format compatible with a proposed Offline Map Format is requested.

6.4.10   State

Defined in Environment Sensing Subsystem.

To Respondents

No response requested here. Comments welcome.

6.4.11   Traffic rules

Traffic rules should be digitally represented so that a Route can be realised [33].

To Respondents

MPAI requests a digital representation of traffic rules satisfying the following requirements:

  1. Produce the traffic rules from a given set of traffic signals.
  2. Produce the traffic signals from the traffic rules.

A Traffic Ontology is a possible solution.
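
A toy sketch of such a bidirectional signal-rule mapping, with hypothetical identifiers, is:

    # Maps each traffic signal to the rule it implies, and back;
    # a real Traffic Ontology would be far richer.
    signal_to_rule = {
        "SPEED_LIMIT_50": "max_speed_kmh == 50",
        "NO_OVERTAKING": "overtaking_allowed == False",
    }
    rule_to_signal = {rule: sig for sig, rule in signal_to_rule.items()}
    print(rule_to_signal["max_speed_kmh == 50"])   # SPEED_LIMIT_50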

6.4.12   Traffic Signals

A format to represent the traffic signals on and around a road.

To Respondents

MPAI requests a Traffic Signals Format capable of representing:

  1. All required traffic signalisations.
  2. The specific local version of the traffic signalisation.
  3. The coordinates of the traffic signals.

6.4.13   Trajectory

The Path and the States that allow a CAV to start from a State and reach another State in a given amount of time without violating Traffic Rules or compromising passengers’ comfort.

To Respondents

A digital representation of Trajectory is requested.

6.4.14   Velocity

Defined in Environment Sensing Subsystem.

To Respondents

No response requested here. Comments welcome.

6.5       Motion Actuation Subsystem

6.5.1      Summary of Motion Actuation Subsystem data

Table 16 gives, for each AIM (1st column), the input data (2nd column) and the output data (3rd column).

Table 16 – Motion Actuation Subsystem data

CAV/AIM | Input | Output
Command Interpreter | Command from AMS, Road Wheel Motor Feedback, Road Wheel Direction Feedback, Brakes Feedback | Feedback to AMS, Road Wheel Motor Command, Road Wheel Direction Command, Brakes Command
Pose, Velocity, Acceleration Data Generation | Accelerometer, Speedometer, Odometer | Motion Data

6.5.2      Accelerometer data

An accelerometer is an electronic sensor that measures the acceleration forces acting on a CAV. An accelerometer measures proper acceleration, i.e., the acceleration of a body in its own instantaneous rest frame, not to be confused with coordinate acceleration, i.e., acceleration in a fixed coordinate system. Therefore, an accelerometer at rest on the surface of the Earth measures an acceleration straight upwards of g ≈ 9.81 m/s², while an accelerometer in free fall (falling toward the centre of the Earth at ≈ 9.81 m/s²) measures zero.

To Respondents

Respondents are requested to propose a single Accelerometer data format.

6.5.3      Brakes Command

The result of the interpretation of AMS Command to Brakes.

To Respondents

Respondents are requested to propose a set of command messages.

6.5.4      Brakes Feedback

The feedback of Brakes to Command Interpreter.

To Respondents

Respondents are requested to propose a set of feedback messages.

6.5.5      Command from AMS

The Command issued by the AMS.

To Respondents

Respondents are requested to propose a set of high-level command messages.

6.5.6      Feedback to AMS

The Feedback of the Command Interpreter summarising the individual Feedbacks.

To Respondents

Respondents are requested to propose a set of high-level feedback messages.

6.5.7      Motion Data

To Respondents

Respondents are requested to propose a Motion Data Format bearing in mind that Motion Data will be used to create the CAV State by adding GNSS information.

6.5.8      Odometer Data

An odometer computes the distance travelled as the number of wheel rotations times the tire circumference (π × tire diameter), counted from the start up to the point being considered.
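
The computation, as a one-line sketch:

    import math

    def distance_travelled_m(rotations: float, tire_diameter_m: float) -> float:
        # distance = number of wheel rotations x circumference (pi x diameter)
        return rotations * math.pi * tire_diameter_m

    print(distance_travelled_m(1000, 0.65))   # ~2042 m for a 0.65 m tire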

To Respondents

Respondents are requested to propose a single Odometer Data Format.

6.5.9      Other Environment Data

The set of Environment data such as temperature, air pressure, humidity etc.

To Respondents

Respondents are requested to propose a set of Environment Data Formats.

6.5.10   Road Wheel Direction Command

The result of the interpretation of AMS Command to Road Wheel Direction.

To Respondents

Respondents are requested to propose a set of Road Wheel Direction Commands.

6.5.11   Road Wheel Direction Feedback

The feedback of Road Wheel Direction to Command Interpreter.

To Respondents

Respondents are requested to propose a set of Road Wheel Direction Feedbacks.

6.5.12   Road Wheel Motor Command

The result of the interpretation of AMS Command to Road Wheel Motor.

To Respondents

Respondents are requested to propose a set of Road Wheel Motor Commands.

6.5.13   Road Wheel Motor Feedback

The feedback of Road Wheel Motor to Command Interpreter.

To Respondents

Respondents are requested to propose a set of Road Wheel Motor Feedbacks.

6.5.14   Speedometer

An electronic sensor that measures the instantaneous speed of a CAV.

To Respondents

Respondents are requested to propose a single Speedometer data format.