
Component standards versus monolithic standards
MPAI has recently published two major standards with a request for Community Comments. In MPAI lingo this means that the standards are mature, but MPAI asks the Community to review the drafts before final publication.
The two standards are Connected Autonomous Vehicle – Technologies (CAV-TEC) V1.0 and MPAI Metaverse Model – Technologies (MMM-TEC) V2.0. They are not brand-new MPAI standards, as both have already been published in earlier versions, but the new versions represent significant improvements.
You may ask: “Why is this topic handled in a single paper when we are talking about two standards that have – apparently – so little in common?” If you ask this question, you may want to continue reading and discover an essential aspect of MPAI standardisation: a standard for an application domain is (almost) never a monolith; it is usually made of components shared with standards of other, often unrelated, domains.
Let’s first take a quick look at the two standards.
Connected Autonomous Vehicle (CAV-TEC) V1.0 is a standard for the ICT (Information and Communication Technology) part of a vehicle that can move itself in the physical world to reach a destination. The standard assumes that a CAV is composed of four functionally separated but interconnected subsystems (a rough data-flow sketch follows the list):
- The Human-CAV Interaction (HCI): enables a human to establish a dialogue with the CAV, issuing a variety of commands – such as moving to a destination – or holding conversations in which the human, and the CAV, can express their internal status, e.g., emotion, whether real or fictitious.
- The Environment Sensing Subsystem (ESS): leverages the on-board sensors to create the Basic Scene Descriptors (BSD), the most accurate digital representation of the external environment possible with the available sensors.
- The Autonomous Motion Subsystem (AMS): receives the BSD and improves its accuracy by exchanging portions of its Environment Descriptors with CAVs in range. It then analyses the situation and issues commands to implement the route decided by the human.
- The Motion Actuation Subsystem (MAS): converts a general request to move the CAV, possibly by just a few metres, into specific commands to brakes, motors, and wheels, and reports on the execution of those commands.
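The division of labour among the four subsystems can be visualised with a small, purely illustrative Python sketch. The class and method names below are invented for this article and are not the normative interfaces of CAV-TEC V1.0; the sketch only mirrors the data flow described above (the ESS produces the BSD, the AMS uses it to command the MAS, the HCI takes the human’s request).

```python
# Hypothetical sketch of the CAV-TEC data flow among the four subsystems.
# Names and fields are illustrative only, not the normative CAV-TEC interfaces.
from dataclasses import dataclass, field


@dataclass
class BasicSceneDescriptors:
    """Digital representation of the environment produced by the ESS."""
    objects: list[str] = field(default_factory=list)


@dataclass
class EnvironmentSensingSubsystem:
    def sense(self) -> BasicSceneDescriptors:
        # A real CAV would fuse camera, lidar, radar, GNSS, ... data here.
        return BasicSceneDescriptors(objects=["pedestrian", "traffic-light"])


@dataclass
class MotionActuationSubsystem:
    def execute(self, displacement_m: float) -> str:
        # Converts a short-range motion request into brake/motor/wheel commands.
        return f"moved {displacement_m} m"


@dataclass
class AutonomousMotionSubsystem:
    mas: MotionActuationSubsystem

    def step(self, bsd: BasicSceneDescriptors, destination: str) -> str:
        # Analyse the (possibly CAV-to-CAV enriched) scene and command the MAS.
        if "pedestrian" in bsd.objects:
            return self.mas.execute(0.0) + f" (waiting, destination: {destination})"
        return self.mas.execute(2.5) + f" (towards {destination})"


@dataclass
class HumanCAVInteraction:
    ams: AutonomousMotionSubsystem
    ess: EnvironmentSensingSubsystem

    def request_trip(self, destination: str) -> str:
        # A human command triggers sensing, motion planning, and actuation.
        return self.ams.step(self.ess.sense(), destination)


if __name__ == "__main__":
    hci = HumanCAVInteraction(
        ams=AutonomousMotionSubsystem(mas=MotionActuationSubsystem()),
        ess=EnvironmentSensingSubsystem(),
    )
    print(hci.request_trip("Piazza Castello"))
```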
The four subsystems are implemented as AI Workflows per the AI Framework standard. Each AI Workflow includes several AI Modules (AIMs) that exchange data specified by CAV-TEC V1.0 or by other MPAI standards. The AIMs and data types of the subsystems come from several standards (a small illustration follows the list):
- HCI: most are specified not by CAV-TEC but by Multimodal Conversation (MPAI-MMC) V2.3; the rest are specified by Object and Scene Description (MPAI-OSD) V1.3, Portable Avatar Format (MPAI-PAF) V1.4, Data Types, Formats, and Attributes (MPAI-TFA) V1.3, and CAV-TEC itself.
- ESS, AMS, and MAS: are specified by CAV-TEC, MPAI-OSD, MPAI-PAF, and MPAI-TFA.
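To make the reuse concrete, here is a small, hypothetical illustration of an HCI AI Workflow assembled from AIMs specified by different standards. The AIM names are placeholders chosen for readability, not the actual AIM lists of CAV-TEC V1.0 or MPAI-MMC V2.3.

```python
# Illustrative (non-normative) sketch of how a CAV subsystem's AI Workflow can
# be assembled from AI Modules specified by different MPAI standards.
from dataclasses import dataclass


@dataclass(frozen=True)
class AIModule:
    name: str
    specified_by: str  # the MPAI standard that defines this AIM


@dataclass
class AIWorkflow:
    subsystem: str
    modules: list[AIModule]

    def provenance(self) -> dict[str, int]:
        """Count how many AIMs each standard contributes to this workflow."""
        counts: dict[str, int] = {}
        for m in self.modules:
            counts[m.specified_by] = counts.get(m.specified_by, 0) + 1
        return counts


# Placeholder AIM names; the real HCI workflow is defined by the standards.
hci_workflow = AIWorkflow(
    subsystem="HCI",
    modules=[
        AIModule("SpeechRecognition", "MPAI-MMC V2.3"),
        AIModule("EntityDialogueProcessing", "MPAI-MMC V2.3"),
        AIModule("VisualSceneDescription", "MPAI-OSD V1.3"),
        AIModule("PortableAvatarRendering", "MPAI-PAF V1.4"),
        AIModule("HCI-SpecificControl", "CAV-TEC V1.0"),
    ],
)

print(hci_workflow.provenance())
# {'MPAI-MMC V2.3': 2, 'MPAI-OSD V1.3': 1, 'MPAI-PAF V1.4': 1, 'CAV-TEC V1.0': 1}
```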
A concise description of the operation of an implementation of CAV-TEC is available here.
MPAI Metaverse Model (MMM-TEC) V2.0 specifies a virtual space composed of processes operating on an ICT platform that execute, or request other processes to execute, actions on items, i.e., MMM-TEC V2.0-specified data types. Processes may be rendered as avatars.
MMM-TEC does not use AIMs, only data types. It defines 27 actions and a language that enables processes to communicate using speech acts. Many of these data types are specified by MPAI-MMC, MPAI-OSD, and MPAI-PAF.
Examples of actions are MM-Embed, applied to an avatar or an object to place it somewhere in the metaverse; UM-Capture, applied to media information in the physical world acquired for use in the metaverse; Identify, applied to captured data to convert it into an item with an identifier; MM-Anim, applied to an item, e.g., an avatar, to animate it; and MU-Render, applied to an item in the metaverse to render it in the universe. A concise description of the operation of an implementation of MMM-TEC is available here.
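As a rough illustration of this model, the sketch below shows a process executing a few MMM-TEC actions on items. The action names (Identify, MM-Embed, MM-Anim, MU-Render) come from the text above; the message fields, the dispatch logic, and the Python types are assumptions made only for this example.

```python
# Hypothetical sketch of an MMM-TEC process executing actions on items.
# Action names follow the text above; everything else is illustrative.
from dataclasses import dataclass, field
import itertools


@dataclass
class Item:
    """An MMM-TEC item: a data instance with an identifier."""
    item_id: str
    kind: str                      # e.g. "avatar", "object"
    position: tuple | None = None  # where the item is MM-Embedded, if anywhere


@dataclass
class Process:
    """A metaverse process that executes actions on items."""
    name: str
    items: dict[str, Item] = field(default_factory=dict)
    _ids = itertools.count(1)      # shared counter for illustrative identifiers

    def request(self, action: str, **args) -> Item | str:
        # A tiny dispatcher standing in for the MMM-TEC action protocol.
        if action == "Identify":
            item = Item(item_id=f"item-{next(self._ids)}", kind=args["kind"])
            self.items[item.item_id] = item
            return item
        if action == "MM-Embed":
            self.items[args["item_id"]].position = args["position"]
            return f"{args['item_id']} embedded at {args['position']}"
        if action == "MM-Anim":
            return f"{args['item_id']} animated with {args['animation']}"
        if action == "MU-Render":
            return f"{args['item_id']} rendered in the universe"
        raise ValueError(f"unsupported action: {action}")


p = Process(name="avatar-manager")
avatar = p.request("Identify", kind="avatar")          # captured data -> item
print(p.request("MM-Embed", item_id=avatar.item_id, position=(1.0, 2.0, 0.0)))
print(p.request("MM-Anim", item_id=avatar.item_id, animation="wave"))
print(p.request("MU-Render", item_id=avatar.item_id))
```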
As already said, both standards draw a sizeable part of their data types (and CAV-TEC of its AIMs) from three other standards: MPAI-OSD V1.3, MPAI-PAF V1.4, and MPAI-TFA V1.3. Some of these AIMs and data types are new in the versions just mentioned; they were developed in response to the needs of CAV-TEC and MMM-TEC.
Why not develop them directly in CAV-TEC and MMM-TEC, then? Because each of the three standards addresses a specific area of standardisation (objects and scenes, 3D graphics, and qualifiers) that is also required by many other MPAI standards. Therefore, MPAI-OSD V1.3, MPAI-PAF V1.4, and MPAI-TFA V1.3 are also published with a request for Community Comments.
Anybody is invited to send comments on any of the five standards to the MPAI Secretariat by April 13 at 23:59 UTC.