Table 3 and Table 16 define the CAV-specific Terms and the general MPAI Terms, respectively.
Table 3 – Terms and Definitions
Term | Definition |
Accelerometer Data | Data related to the acceleration forces acting on a CAV, as produced by the CAV’s accelerometer sensor. |
Alert | Elements in an Environment Representation that should be treated with priority by the Obstacle Avoider AIM. |
AMS-HCI Response | The Response generated by the Autonomous Motion Subsystem during and after the execution of an HCI-AMS Command. |
AMS-MAS Command | A Command issued by the AMS to the Motion Actuation Subsystem instructing it to change the Ego CAV’s Spatial Attitude SA_A at time t_A to Spatial Attitude SA_B at time t_B so as to drive the CAV to reach a Goal. |
Audio | Digital representation of an analogue audio signal sampled at a frequency between 8 kHz and 192 kHz, with between 8 and 32 bits/sample, and non-linear or linear quantisation. |
Audio Data | The serialised output of a microphone array capturing the target Environment, used to create the Audio Scene Description that incorporates Environment audio information into the Basic and Full Environment Representations. |
Audio Object | Digital Representation of Audio information with its metadata. |
Audio Scene | The Audio Objects of an Environment with Spatial Object metadata. |
Audio Scene Descriptors | Descriptors enabling the description of the outdoor and indoor sound field in terms of individually Identified Audio Objects with a Spatial Attitude. |
Audio-Visual Object | Coded representation of Audio-Visual information with its metadata. An Audio-Visual Object can be a combination of Audio, Visual, and other Audio-Visual Objects. |
Audio-Visual Scene (AV Scene) | The Audio-Visual Objects of an Environment with Object Spatial Attitude. |
Audio-Visual Scene Descriptors | Descriptors enabling the description of the outdoor and indoor Audio-Visual Scene in terms of Audio-Visual Objects having a common time-base, associating co-located audio and visual objects if both are available, and supporting the physical displacement and interrelation (e.g., occlusion) of Audio and Visual Objects over time. |
Avatar | An animated 3D object representing a real or fictitious person in a virtual space rendered to a physical space. |
Avatar Model | The Model of a human that a user selects to impersonate the CAV’s HCI as rendered by the Personal Status Display AIM. |
Basic Environment Representation (BER) | A Digital Representation of the Environment that integrates the Ego CAV’s Spatial Attitude, the Scene Descriptions produced by the available Environment Sensing Technologies, the Road Topology, and Other Environment Data. |
Body Descriptors | Descriptors representing the motion and conveying information on the Personal Status of the body of a human or an avatar. |
Brakes Command | The Command issued to the Brakes resulting from the interpretation of the AMS-MAS Command. |
Brakes Response | The Response of the Brakes to the AMS Command Interpreter. |
Camera Data | Serialised data provided by a variety of sensor configurations operating in the visible frequency range. |
Camera Scene Descriptors | Descriptors produced by the Camera Scene Description AIM using Camera Data and previous Basic Environment Representations. |
CAV-Aware entity | A physical entity possessing some of the sensing and communication capabilities of a CAV without being a CAV, e.g., a Roadside Unit or a Traffic Light. |
CAV Centre | The point of the CAV selected as the origin, represented by coordinates (0,0,0). |
CAV Identifier | A code uniquely identifying a CAV carrying information, such as Country where the CAV has been registered, Registration number in that country, CAV manufacturer identifier, CAV model identifier. |
Cognitive State | An element of the internal status of a human or avatar reflecting their understanding of the Environment, such as “Confused” or “Dubious” or “Convinced”. |
Connected Autonomous Vehicle | A vehicle able to autonomously reach a Pose by:
1. Understanding human utterances in the Human-CAV Interaction Subsystem (HCI). 2. Planning a Route (AMS). 3. Sensing and building a Representation of the external Environment (ESS). 4. Exchanging such Representations and other Data with other CAVs and CAV-aware entities (AMS). 5. Making decisions about how to execute the Route (AMS). 6. Acting on the MAS. |
Data | Information in digital form. |
Data Format | The standard digital representation of Data. |
Decision Horizon | The time within which a decision is assumed to be implemented. |
Descriptor | Coded representation of a feature of text, audio, speech, or visual information. |
Digital Representation | Data corresponding to a physical entity. |
Ego CAV | The object, in the representation of an Environment, that the CAV recognises as itself. |
Emotion | An element of the internal status of a human or avatar resulting from their interaction with the Environment or subsets of it, such as “Angry”, and “Sad”. |
Environment | The portion of the external world of current interest to the CAV. |
Environment Representation | A digital representation of the Environment produced by an Environment Sensing Technology of the CAV. |
Environment Sensing Technology (EST) | One of the technologies used to sense the Environment by the Environment Sensing Subsystem, e.g., RADAR, LiDAR, Video, Ultrasound, and Audio. The Offline Map is also considered an EST. |
Face | The portion of a 2D or 3D digital representation corresponding to the face of a human. |
Face Descriptors | Descriptors representing the motion and conveying information on the Personal Status of the face of a human or an avatar. |
Face ID | The Identifier of a human belonging to a group of humans inferred from analysing the face of the human. |
Factor | One of Cognitive State, Emotion, and Social Attitude. |
Full Environment Representation (FER) | A digital representation of the Environment using the Basic Environment Representations of the Ego CAV and of other CAVs in range or of Roadside Units. |
Full Environment Representation Audio | The Audio output of the Full Environment Representation Viewer. |
Full Environment Representation Commands | Commands issued by a CAV passenger to the HCI to enable navigation in the Full Environment Representation, e.g., select a Point of View, zoom in/out, control sound level. |
Full Environment Representation Visual | The Visual output of the Full Environment Representation Viewer. |
Gesture | A movement of the body or part of it, such as the head, arm, hand, and finger, often a complement to a vocal utterance. |
Global Navigation Satellite System (GNSS) | One of the systems providing global navigation information such as GPS, Galileo, Glonass, BeiDou, Quasi Zenith Satellite System (QZSS) and Indian Regional Navigation Satellite System (IRNSS). |
Goal | The Spatial Attitude planned to be reached at the end of a Decision Horizon. |
HCI-AMS Command | High-level instructions issued by the HCI instructing the AMS to reach a final destination, or messages that the HCI has received from the HCIs of other CAVs. |
Identifier | A Label that is uniquely associated with a human, an avatar, or an object. |
Inertial Measurement Unit | An inertial positioning device, e.g., odometer, accelerometer, speedometer, gyroscope etc. |
Instance ID | The Identifier of an Instance of a class of Objects and of the Group of Objects the Instance belongs to. |
LiDAR Data | Serialised data provided by a LiDAR sensor, an active time-of-flight sensor operating in the µm range – ultraviolet, visible, or near infrared light (900 to 1550 nm). |
LiDAR Scene Descriptors | Descriptors produced by the LiDAR Scene Description AIM using LiDAR Data and previous Basic Environment Representations. |
Machine Avatar | The rendered face and body of an Avatar produced by the Personal Status Display. |
Machine Speech | The rendered synthetic speech generated by the Personal Status Display. |
Map Scene Descriptors | Descriptors produced by the Map Scene Description AIM using Offline Map Data and previous Basic Environment Representations. |
MAS-AMS Response | The Response of the AMS Command Interpreter integrating the Responses from the Brakes, Wheel Directions, and Wheel Motors. The MAS-AMS Responses contain the value of the Spatial Attitude at an intermediate Pose with the corresponding Time. |
Meaning | Information extracted from an input text such as syntactic and semantic information. |
Microphone Array Geometry | Data describing the arrangement of the microphones in an array and the sensing characteristics of the microphone(s) used (e.g., cardioid), sampling frequency, number of bits/sample, etc. |
Modality | One of Text, Speech, Face, or Gesture. |
Model | A Data Format representing an object with its features ready to be animated. |
Object | A data structure representing an object sensed by an EST and produced by an EST-specific Scene Description. Elements characterising an Object are (see the illustrative sketch after this table):
1. Timestamp. 2. Identifier of the Scene Description AIM that has generated the Object. 3. Alerts. 4. Spatial Attitude of the Object and its estimated accuracy measured from the CAV Centre. 5. Bounding box. 6. Object type (2D, 2.5D, or 3D). |
Object ID | The Identifier uniquely associated with a particular class of Objects, e.g., hammer, screwdriver, etc. |
Odometer Data | The distance from the start up to the current Pose measured by the number of wheel rotations times the tire circumference (π × tire diameter). |
Offline Map | A previously created digital map of an Environment and associated metadata. |
Offline Map Data | Data provided by an Offline Map in response to a given set of coordinate values. |
Orientation | The set of the three angles (roll, pitch, yaw) indicating the rotation of a CAV around its principal axis (x), its y axis (at an angle of 90˚ counterclockwise, right-to-left, with the x axis), and its z axis (perpendicular to and out of the ground). See Figure 5. |
Other Environment Data | Additional Data acquired by the Motion Actuation Subsystem and complementing the spatial data such as weather, temperature, air pressure, humidity, ice and water on the road, wind, fog etc. |
Path | A sequence of Poses, each expressed as (x_i, y_i, z_i, α_i, β_i, γ_i). |
Personal Status | The ensemble of information internal to a person expressed by 3 Factors (Cognitive State, Emotion, Social Attitude) conveyed by one or more Modalities (Text, Speech, Face, and Gesture Modalities). |
Pose | Position and Orientation of the CAV. |
Position | The current coordinates of a CAV as obtained from the CAV’s sensors. |
RADAR Data | Serialised data provided by a RADAR sensor, an active time-of-flight sensor operating in the 24-81 GHz range. |
RADAR Scene Descriptors | Descriptors produced by the RADAR Scene Description AIM using RADAR Data and previous Basic Environment Representations. |
Refined Text | Text resulting from the refinement of Text produced by a Speech Recognition AIM by the Language Understanding AIM. |
Road State | Data about the state of the road the CAV is traversing, such as detours and road conditions, inferred by the AMS from internally available information or received from an external source via a communication channel. |
Road Topology | A data structure containing the Position of the Road Signs (Traffic Poles, Road Signs, Traffic Lights) and a Taxonomy-based semantics of the Road Signs. |
Roadside Unit | A wireless communicating device located on the roadside providing information to CAVs in range. |
Route | A sequence of Waypoints. |
Scene Description | The organised collection of Descriptors that enable an object-based description of a scene. |
Scene Description Format | The combination of EST-specific 2D, 2.5D, or 3D Scene Descriptors used by an EST Scene Description in an EST-specific time window. |
Scene Descriptors | The individual attributes of the coded representation of the objects in a scene, including their location. |
Shape | The digital representation of the volume occupied by a CAV. |
Social Attitude | An element of the internal status of a human or avatar related to the way they intend to position themselves vis-à-vis the Environment or subsets of it, e.g., “Confrontational”, “Respectful”. |
Spatial Attitude | A CAV’s Position and Orientation, and their velocities and accelerations, at a given time. |
Speaker ID | The Identifier of a human belonging to a group of humans inferred from analysing the speech of the human. |
Speech | Digital representation of analogue speech sampled at a frequency between 8 kHz and 96 kHz with 8, 16, or 24 bits/sample, and non-linear or linear quantisation. |
Speech Model | The collection of Speech Descriptors characteristic of a speaker used to generate the synthetic speech of the Personal Status Display. |
Speedometer Data | The speed of the CAV as measured by the electronic sensor that measures the instantaneous speed of the CAV. |
Subsystem | One of the 4 components making up the CAV. |
Text | A series of characters drawn from a finite alphabet represented using a Character Set. |
Traffic Rules | The digital representation of the traffic rules applying to an Environment, as extracted from the local Traffic Signals. |
Traffic Signals | The digital representations of the traffic signals on a road and around it, their Spatial Attitudes, and the semantics of the traffic signals. |
Trajectory | A sequence of Spatial Attitudes (s_1, s_2, …, s_i) and the expected time each Spatial Attitude will be reached. |
Ultrasound Data | Serialised data provided by an ultrasonic sensor, an active time-of-flight sensor typically operating in the 40 kHz to 250 kHz range, measuring the distance between objects within close range. |
Ultrasound Scene Descriptors | Descriptors produced by the Ultrasound Scene Description AIM using Ultrasound Data and previous Basic Environment Representations. |
Video | Data generated by a camera. |
Viewpoint | The Spatial Attitude of a user looking at the Environment. |
Visual Object | Coded representation of Visual information with its metadata. |
Visual Scene | The Visual Objects of an Environment with Spatial Object metadata. |
Visual Scene Descriptors | Descriptors enabling the description of the outdoor and indoor visual scene in terms of individually Identified Visual Objects with a Spatial Attitude. |
Waypoint | A point on an Offline Map. |
Wheel Direction Command | The Command issued to the Wheel Direction resulting from the interpretation of the AMS-MAS Command. |
Wheel Direction Feedback | The Response of the Wheel Direction to the AMS Command Interpreter. |
Wheel Motor Command | The Command issued to the Wheel Motor resulting from the interpretation of the AMS-MAS Command. |
Wheel Motor Response | The Response of the Wheel Motor to the AMS Command Interpreter. |
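Several of the terms above (Pose, Spatial Attitude, Path, Trajectory, and Object) describe closely related data structures. The following Python sketch illustrates one possible, non-normative arrangement of these structures; all names, types, and fields are illustrative assumptions and not part of this Technical Specification, whose normative Data Formats are defined elsewhere.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Pose:
    # Position (x, y, z) and Orientation (roll, pitch, yaw) measured from the CAV Centre.
    x: float
    y: float
    z: float
    roll: float   # rotation around the principal (x) axis
    pitch: float  # rotation around the y axis
    yaw: float    # rotation around the z axis

@dataclass
class SpatialAttitude:
    # A Pose together with its velocities and accelerations at a given time.
    pose: Pose
    velocity: Tuple[float, ...]      # time derivatives of the six Pose components
    acceleration: Tuple[float, ...]  # second time derivatives of the six Pose components
    time: float

# A Path is a sequence of Poses; a Trajectory is a sequence of Spatial Attitudes,
# each carrying the expected time at which it will be reached.
Path = List[Pose]
Trajectory = List[SpatialAttitude]

@dataclass
class SceneObject:
    # Elements characterising an Object produced by an EST-specific Scene Description AIM.
    timestamp: float
    scene_description_id: str                            # AIM that generated the Object
    alerts: List[str] = field(default_factory=list)
    spatial_attitude: Optional[SpatialAttitude] = None   # measured from the CAV Centre
    spatial_attitude_accuracy: Optional[float] = None
    bounding_box: Optional[Tuple[float, ...]] = None
    object_type: str = "3D"                              # one of "2D", "2.5D", "3D"

In this sketch a Path is a plain sequence of Poses, while a Trajectory attaches an expected time to each Spatial Attitude, mirroring the definitions given in the table above.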