The MPAI Manifesto (N130), approved at MPAI’s 4th General Assembly, identifies the main features of MPAI’s mission, built on the principles laid down in the MPAI Statutes (N80).
The last paragraph of the Manifesto reads:
Finally, although it is a technical body, MPAI is aware of the revolutionary impact AI will have on the future of human society. MPAI pledges to address ethical questions raised by its technical work with the involvement of high-profile external thinkers. The initial significant step is to enable the understanding of the inner working of complex AI systems.
The paragraph stresses that an Artificial Intelligence (AI) system – a system that uses AI to achieve goals that often could not be achieved, or could only be achieved poorly, with traditional technologies – should be tested for performance before it is accepted by a user. This is important because, unlike other data processing-based standards, the performance of an AI system often depends on how the system has been trained. Ethical requirements denote a range of non-technical requirements that AI systems shall satisfy beyond those of technical performance.
Many researchers are investigating how the attribute “ethical” can be attached to AI systems. This is an area that may or may not yield practical results if an AI system is treated as a black box.
This document contains the following chapters:
- describes the MPAI approach and the normative elements of an MPAI standard.
- introduces the notions of Conformance and Performance.
- defines the MPAI ecosystem and its components.
- introduces Identifiers as ecosystem enablers.
- defines Performance.
- specifies how and by whom Performance will be Tested.
- acknowledges the current state of development of the subject being investigated.
and two Annexes:
- formalises the terminology used in this document.
- outlines a possible structure of Identifiers.
MPAI standards target components and systems enabled by data coding technologies, especially, but not necessarily, using AI. MPAI subdivides an Implementation of an MPAI-specified Use Case into functional components called AI Modules (AIM). Both AI systems implementing a Use Case and AIMs standardised by an MPAI standard are called Implementations.
MPAI assumes Implementations use Artificial Intelligence (AI) or Machine Learning (ML) or traditional Data Processing (DP) or a combination of these. The implementation technologies can be hardware or software or mixed hardware and software.
An AI system implementing a Use Case is an aggregation of interconnected AIMs executed inside an AI Framework (AIF). MPAI is developing such an AI Framework standard (MPAI-AIF) and plans to release it in July 2021.
The two basic elements of MPAI standardisation are represented in Figure 1 and Figure 2.
Figure 1 – The MPAI AI Module (AIM)
Figure 2 – The MPAI AI Framework (AIF)
Figure 1 shows an input video, from a camera shooting a human face, entering the green block. The function of the AIM is to detect the emotion on the face and the meaning of the sentence the human is uttering. The AIM can be implemented with a neural network or with DP technologies. In the latter case, the AIM will probably access a knowledge base external to the AIM.
MPAI standards typically include several Use Cases.
An implementation of an AI system according to the MPAI-AIF standard is depicted Figure 2. The input data enter the Execution area of the AIF where the workflow is executed under the supervision of Management and Control. AIMs communicate via the AIF’s Communication and Storage infrastructure and may access static or slowly changing data sources (e.g., those of Figure 1). The result of workflow execution is provided as output data.
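The execution model described above can be illustrated with a minimal sketch. The class and attribute names below (`AIM`, `AIF`, `execute`, `storage`) are illustrative assumptions, not part of the MPAI-AIF specification; the topology is simplified to a linear chain.

```python
from typing import Any, Callable, Dict, List

class AIM:
    """A hypothetical AI Module: a named processing function."""
    def __init__(self, name: str, fn: Callable[[Any], Any]):
        self.name = name
        self.fn = fn

    def __call__(self, data: Any) -> Any:
        return self.fn(data)

class AIF:
    """A minimal sketch of an AI Framework: Management and Control
    runs the workflow; a shared store stands in for the
    Communication and Storage infrastructure."""
    def __init__(self, aims: List[AIM]):
        self.aims = aims                    # topology: a simple chain
        self.storage: Dict[str, Any] = {}   # Communication and Storage

    def execute(self, input_data: Any) -> Any:
        data = input_data
        for aim in self.aims:               # supervised execution
            data = aim(data)
            self.storage[aim.name] = data   # intermediate results
        return data                         # output data of the workflow

# usage: two toy AIMs chained into a workflow
double = AIM("double", lambda x: x * 2)
inc = AIM("inc", lambda x: x + 1)
aif = AIF([double, inc])
result = aif.execute(3)
```

The point of the sketch is the separation of concerns: the AIMs carry only their processing functions, while routing and storage belong to the framework.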
The hierarchy of MPAI standardisation is
- Data format: any type of static (time independent) or dynamic (time dependent) data that is used in an AI system.
- AI Module: a subsystem that is characterised by
- the function performed by the AIM
- the data entering and leaving the AIM as standardised in point 1.
- Use Case: an AI system that implements an MPAI-specified Use Case characterised by
- the function performed by the AI system
- the data entering and leaving the AI system as standardised in point 1.
- the topology and connection of the AIMs in the AI system.
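The three-level hierarchy can be sketched as data structures. This is a reading aid, not an MPAI data model; the class and field names are assumptions chosen to mirror the list above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DataFormat:
    """Level 1: a static (time-independent) or dynamic
    (time-dependent) data type used in an AI system."""
    name: str
    time_dependent: bool

@dataclass
class AIModule:
    """Level 2: an AIM, characterised by its function and by the
    standardised data entering and leaving it."""
    function: str
    inputs: List[DataFormat]
    outputs: List[DataFormat]

@dataclass
class UseCase:
    """Level 3: an AI system, characterised by its function, its
    AIMs, and the topology connecting them."""
    function: str
    aims: List[AIModule]
    connections: List[Tuple[str, str]] = field(default_factory=list)

# usage: a toy Use Case with one AIM
video = DataFormat("Video", time_dependent=True)
emotion = DataFormat("Emotion", time_dependent=True)
detector = AIModule("Emotion detection", inputs=[video], outputs=[emotion])
use_case = UseCase("Conversation with Emotion", aims=[detector],
                   connections=[("camera", "Emotion detection")])
```

Note how each level only references the one below it, matching the "as standardised in point 1" cross-references in the list.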
MPAI defines two types of adherence to a standard:
- Conformance of an implementation. This means that the implementation has been tested and declared to be technically correct. Conformance may be claimed for:
- the Data format
- the AIM
- the Use Case.
- Performance of an implementation. This means that the implementation (“Implementation”) has been tested and declared to be ethical. Performance may be claimed for:
- the AIM
- the Use Case.
The MPAI ecosystem is constituted by Implementers developing Implementations, Users utilising them, and the Implementations themselves.
The three terms are defined as follows:
- Implementers offer Implementations
- Users utilise Implementations
- Implementations are components, devices, applications and services realised according to one MPAI standard.
Implementers (professional market) make components (i.e., AIMs) and offer them to other Implementers who make devices or applications or services using acquired and internally-developed AIMs.
Users (consumer market) utilise Implementations (devices, applications or services). Because of the peculiarity of AI systems, Users should have a level of guarantee of the performance of Implementations.
An Implementation developed according to an MPAI Use Case, e.g., Conversation with Emotion, runs a workflow that implements interconnected AIMs. The standard containing the Use Case, i.e., Multimodal Communication, specifies the input and output data of all AIMs of the Use Case, the topology of the AIMs and their connections.
The table below introduces the three key elements – AIF, Use Case and AIMs – that enable the creation of a governable ecosystem.
|AIF||is specified by MPAI with its AIMs, their connections and topology. The execution of a workflow should be protected, e.g., the AIF should not:
1. maliciously peek into the data that are exchanged by the AIMs,
2. have other external connections with malicious purposes, such as stealing data.
If the workflow is implemented according to the standard, the AIMs must be connected according to the topology specified in the standard and they cannot have external data sinks that are not in the standard.|
|Use Case||is specified by MPAI. The User should be able to know that an Implementation s/he is utilising implements an identified Use Case that is being run in the AIF.|
|AIMs||are specified by MPAI. A User should be able to know that the AIMs used to implement the Use Case are Implementations of the AIMs specified by the Use Case that is being run.|
It might be useful to consider the following additional element.
|AI tool||is a tool, e.g., a neural network, that is used by an AIM. A User need not be granted visibility into the inside of the neural networks used by the AIMs. However, the User might wish to know which version of the neural network is currently being used in an AIM and contributes to producing the output data of the AIF.|
The typical infrastructure enabling an ecosystem such as the one described in the preceding section is based on identifiers. The MPAI infrastructure is based on Identifiers:
- The AIF made by Implementer F has an Identifier from which it is possible to know:
- the identity of the Implementer
- the version of the MPAI-AIF standard and the Profile
- the version of the Implementation of the AIF
- The Use Case is identified by
- the MPAI standard
- the Profile of the standard
- the Version of the standard
- The AIM made by Implementer M has an Identifier from which it is possible to know:
- the identity of the Implementer
- the version of the MPAI standard and the Profile
- the version of the Implementation
- Given the assumptions, an Implementer of an AIM may decide to assign an Identifier to an AI tool, on condition that it is unique to the AI tool.
Note that Users do NOT require Identification.
The AIF implementing the Use Case needs identifiers, e.g., in the case of a distributed system. Such private identifiers are internal to the AIF and need not be exposed outside of the AIF.
The public Identifiers help recognise an entity as part of a system. They are not intended as a constraint but rather as a freely chosen tool that allows an Implementation to be recognised as a member of the MPAI ecosystem. The following examples illustrate the case:
- An Implementer can use an AIF or another environment to implement a Use Case. In the latter case the AIF is not part of the MPAI ecosystem.
- An Implementer can implement Use Cases that may or may not be MPAI-defined. In the latter case, even if the AIF is part of the ecosystem, and some standard AIMs may be used, the Implementation is not part of the MPAI ecosystem.
- An Implementer can use standard AIMs, non-standard AIMs or AIM-unrelated technologies. In the second and third cases, the Implementation is not part of the MPAI ecosystem.
- An Implementer can decide to attach Identifiers, identifiers or no identifiers at all to the components and to the AI system. In the second and third cases, the implementation is not part of the MPAI ecosystem.
- An Implementer can assign its own internal identifiers to the components that implement the Use Case in a workflow.
MPAI will take measures to prevent implementers from attaching Identifiers to entities that do not belong to the MPAI ecosystem.
Therefore, the use of Identifiers is optional, and an Implementer can:
- Use the MPAI specified technologies with private identifiers or without any identifier. Users will know that what they are using is not part of the MPAI ecosystem.
- Attach Identifiers to make their Implementations part of the MPAI ecosystem. Users will know that what they are using is part of the MPAI ecosystem.
It should be mentioned that IPR holders may decide to develop Framework Licences and issue licences that are conditional on implementations being Implementations.
An Implementation should have the attribute of Performance. MPAI uses the word Performance to indicate:
- Reliability: the Implementation performs as specified by the standard, profile and version the Implementation refers to, e.g., within the application scope, stated limitations, and for the period of time specified by the Implementer.
- Robustness: the ability of the Implementation to cope with data outside of the stated application scope with an estimated degree of confidence.
- Fairness: the training set and/or network is open to testing for bias and unanticipated results so that the extent of applicability of the system can be assessed.
- Replicability: the Performance of an Implementation as Tested by an entity can be replicated, within an agreed level, by another entity.
The four definitions are meant to apply to data outside of the training set.
The governance of the ecosystem is ensured by a process that mandatorily executes Conformance Tests on all implementations, preserving the necessary credibility that an implementation is technically correct.
Performance Tests on an implementation may be required only by certain users – end users and companies – that wish to declare that an implementation is an Implementation because its Performance is above a certain threshold. The result of Performance Testing is not necessarily binary but may have levels. The definition of Performance is specific to an application domain and defined in the context of that domain.
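The four Performance attributes and the threshold idea can be made concrete with a small sketch. The record type, the 0.0-1.0 levels, and the rule that every attribute must clear the threshold are all illustrative assumptions; MPAI has not specified how levels are scored or combined.

```python
from dataclasses import dataclass

@dataclass
class PerformanceReport:
    """Hypothetical result of a Performance Test. Levels run from
    0.0 to 1.0 rather than pass/fail, reflecting that Performance
    Testing results may have levels."""
    reliability: float    # performs as specified by standard/profile/version
    robustness: float     # copes with data outside the stated scope
    fairness: float       # training set/network open to bias testing
    replicability: float  # results reproducible by another Testing Entity

    def meets(self, threshold: float) -> bool:
        # Assumed rule: an implementation qualifies only if every
        # attribute clears the domain-specific threshold.
        return min(self.reliability, self.robustness,
                   self.fairness, self.replicability) >= threshold

# usage: a report that clears a low threshold but not a higher one
report = PerformanceReport(reliability=0.9, robustness=0.8,
                           fairness=0.7, replicability=0.95)
```

Taking the minimum over the four attributes encodes the idea that a single weak attribute (here, fairness) is enough to keep an implementation below the bar.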
An MPAI standard always requires Conformance Testing. This may be integral to the standard or in a separate standard.
These are the elements of the process of Performance Testing:
- Each MPAI standard shall contain the following Performance Testing elements in a separate standard, to avoid the impression that Performance Testing of an implementation is mandatory. The separate standard:
 a. defines the standard-specific Performance
 b. specifies the Testing process
 c. identifies the Means – procedures, tools, data sets, etc. – to be used
 d. identifies the information that a Testing Entity shall provide in support of their results.
- MPAI either defines and provides, or approves and indicates, the Means used to Test the Performance of an Implementation.
- The Testing Entities may be individual Implementers or Testing Laboratories appointed by MPAI; they are the sole holders of MPAI-granted name spaces used to assign Identifiers to successfully Tested Implementations.
- MPAI may not be a Testing Entity.
- If the Testing Entity is not the Implementer, the Entity shall have the right to request the information identified in 1.d from the Implementer.
- Appointment of a Testing Entity applies to a particular domain; it is permanent but may be revoked.
Annex 2 presents initial ideas of the structure of Identifiers.
Individual Users are free to assess the Performance of an Implementation and post the result, e.g., to a reputation system.
In some cases MPAI will be able to specify Performance Testing Means for Implementations. However, MPAI cannot guarantee that any standard AIM or Use Case can be Tested for Performance.
Two possibilities to remove or limit the impact of the limitation described in the previous paragraph are:
- making it mandatory that an AIM be testable at the time it is proposed
- introducing two classes of AIMs: Testable and unTestable (at a given point in time).
|AI Framework||The environment where AIM-based workflows are executed.|
|AI Module||The basic processing elements receiving processing-specific inputs and producing processing-specific outputs.|
|AI system||A system that uses AI to achieve goals that often could not be achieved or could only be poorly achieved with traditional technologies.|
|Applicability||The function of a Use Case or of an AIM as defined by the relevant standard.|
|Component||An AIM or a tool (e.g., a neural network) used by the AIM.|
|Ethical requirement||A non-technical requirement that an AI system shall satisfy beyond those of technical conformance.|
|Explainability||The ability to trace the output of an Implementation back to the inputs that have produced it.|
|Fairness||The attribute of an Implementation whose extent of applicability can be assessed by making the training set and/or network open to testing for bias and unanticipated results.|
|Identifier||A name that identifies any of the following:
1. an Implementer
2. (a Component of) an Implementation
3. a standard, its profiles and its versions.|
|Implementation||An implementation of a Use Case or an AIM whose Performance has been Tested to be above a level defined in the relevant Standard.|
|MPAI ecosystem||The ensemble of Implementers developing Implementations, Users utilising them, and the Implementations themselves.|
|Normativity||An applicable set of attributes of a technology or a set of technologies specified by an MPAI standard.|
|Performance||The attribute of an Implementation of being Reliable, Robust, Fair and Replicable.|
|Performance Testing||The assessment of the level of Performance of an Implementation.|
|Profile||A particular subset of the technologies that are used in a Use Case standard and, where applicable, the classes, other subsets, options and parameters relevant to that subset.|
|Registration Entity||An entity that assigns Identifiers.|
|Reliability||The attribute of an Implementation that performs as specified by the standard, profile and version the Implementation refers to, e.g., within the application scope, stated limitations, and for the period of time specified by the Implementer.|
|Replicability||The attribute of an Implementation whose Performance, as Tested by a Testing Entity, can be replicated, within an agreed level, by another Testing Entity.|
|Robustness||The attribute of an Implementation that copes with data outside of the stated application scope with an estimated degree of confidence.|
|Standard||A set of Use Cases belonging to an application domain normatively specified by MPAI along with the AIMs required to Implement the Use Cases. MPAI may develop other types of standards.|
|Testing Entity||An Implementer or a Testing Laboratory authorised by MPAI to Test the Performance of an Implementation in a given domain.|
|Testing Laboratory||A laboratory accredited by MPAI to Test Implementations for Performance.|
|Testing Means||Elements such as tools, procedures, data sets, etc., developed or approved by MPAI to be used when Testing the Performance of an Implementation.|
|Use Case||A particular instance of the application domain covered by a Standard identified as Normative.|
|Version||A revision or extension of a Standard.|
Initial ideas for the Identifier structure
MPAI Identifiers could have the following structure:
|Implementation no./version||LengthTI (Tested Implementations) or LengthUT (unTested Implementations)||Implementer|
|Standard, profile and version||LengthSPV||MPAI|
Workflow identifiers need not have an MPAI-defined structure.
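A length-prefixed layout like the one tabled above might be realised as follows. The field names, the two-digit decimal length prefixes, and the example values are all assumptions for illustration; the actual field set and encoding remain to be defined by MPAI.

```python
from typing import List

def build_identifier(implementer: str, standard_spv: str,
                     impl_version: str) -> str:
    """Assemble a hypothetical MPAI Identifier as length-prefixed
    fields: each field is preceded by its 2-digit decimal length,
    so a parser can walk the string without reserved delimiters.
    (A 2-digit prefix caps each field at 99 characters.)"""
    fields = [implementer, standard_spv, impl_version]
    return "".join(f"{len(f):02d}{f}" for f in fields)

def parse_identifier(ident: str) -> List[str]:
    """Recover the fields by reading each length prefix in turn."""
    fields, i = [], 0
    while i < len(ident):
        n = int(ident[i:i + 2])          # 2-digit length prefix
        fields.append(ident[i + 2:i + 2 + n])
        i += 2 + n
    return fields

# usage: round-trip a made-up identifier
ident = build_identifier("ACME", "MPAI-AIF/1/Main", "v1.0")
```

Length-prefixing is one common way to keep fields self-delimiting when their values may contain arbitrary characters; fixed-width fields or a reserved separator would be equally plausible readings of the table.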