Moving Picture, Audio and Data Coding
by Artificial Intelligence

A vision for AI-based data coding standards

Use of technologies based on Artificial Intelligence (AI) is extending to more and more applications, yielding one of the fastest-growing markets in the data analysis and service sector.

However, industry must overcome hurdles for stakeholders to fully exploit this historic opportunity: the current framework-based development model, which makes application redeployment difficult, and monolithic and opaque AI applications, which generate mistrust in users.

MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – believes that universally accessible standards can have the same positive effects on AI as digital media standards and has identified data coding as the area where standards can foster development of AI technologies, promote use of AI applications and contribute to the solution of existing problems.

MPAI defines data coding as the transformation of data from a given representation to an equivalent one more suited to a specific application. Examples are compression and semantics extraction.

MPAI considers the AI Module (AIM) and its interfaces as the AI building block. The syntax and semantics of the interfaces determine what an AIM should perform, not how. AIMs can be implemented in hardware or software, with AI, Machine Learning or legacy Data Processing.

MPAI’s AI Framework enabling creation, execution, composition and update of AIM-based workflows (MPAI-AIF) is the cornerstone of MPAI standardisation, because it enables building high-complexity AI solutions by interconnecting multi-vendor AIMs trained for specific tasks, operating in the standard AI Framework and exchanging data in standard formats.
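
To make the AIM idea concrete, here is a minimal sketch, in Python, of how an AIM could be modelled as a component whose standard input/output interfaces matter while its internals do not. All class and method names below are hypothetical illustrations, not part of any MPAI specification.

    # Hypothetical sketch: an AIM is defined by the syntax and semantics of its
    # interfaces, not by how it performs its task internally.
    from abc import ABC, abstractmethod
    from typing import Any, Dict


    class AIM(ABC):
        """An AI Module: a building block exposing declared input/output interfaces."""

        input_interface: Dict[str, type] = {}    # names and types of consumed data
        output_interface: Dict[str, type] = {}   # names and types of produced data

        @abstractmethod
        def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
            """Transform inputs into outputs; the implementation may be a neural
            network, legacy data processing, hardware or software."""


    class SpeechRecognitionAIM(AIM):
        """One vendor's implementation; any AIM exposing the same interfaces
        could replace it without touching the rest of the workflow."""

        input_interface = {"speech": bytes}
        output_interface = {"text": str}

        def process(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
            # ... run a speech recognition model here (omitted) ...
            return {"text": "<recognised text>"}

An MPAI-AIF implementation would then interconnect such modules, possibly from different vendors, inside the standard AI Framework.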

MPAI standards will address many of the problems mentioned above and benefit various actors:

  • Technology providers will be able to offer their conforming AIMs to an open market
  • Application developers will find on the open market the AIMs their applications need
  • Innovation will be fuelled by the demand for novel and better-performing AIMs
  • Consumers will be offered a wider choice of better AI applications by a competitive market
  • Society will be able to lift the veil of opacity from large, monolithic AI-based applications.

Focusing on AI-based data coding will also allow MPAI to take advantage of the results of emerging and future research in representation learning, transfer learning, edge AI, and reproducibility of performance.

MPAI is mindful of the IPR-related problems which have accompanied high-tech standardisation. Unlike standards developed by other bodies, which are based on vague and contention-prone Fair, Reasonable and Non-Discriminatory (FRAND) declarations, MPAI standards are based on Framework Licences, where IPR holders set out IPR guidelines in advance.

Finally, although it is a technical body, MPAI is aware of the revolutionary impact AI will have on the future of human society. MPAI pledges to address ethical questions raised by its technical work with the involvement of high-profile external thinkers. The initial significant step is to enable the understanding of the inner workings of complex AI systems.


FRAND forever? Or are other business models possible?

What is the Framework License (FWL)?

In the context of the development of new technology standards, Intellectual Property Rights (IP Rights) are the engine that ensures and sustains technology innovation. The FWL intends to move past FRAND assurances, which have not reduced friction between innovators and implementers.

In fact, the implementation of FRAND assurances has given rise to a diversity of interpretations, including different decisions taken by the courts. So much so that a recent judgment of the UK Supreme Court affirms a comprehensive principle of the meaning of FRAND: “This is a single, composite obligation, not three distinct obligations that the license terms should be fair, and separately, reasonable, and separately, non-discriminatory”.

So, the FRAND assurance, made during the standardization process of a new technology, has become a “headache” not only for the courts, but also for those who have to operate on the basis of what happened during the standardization process.

This is one reason why a new international and unaffiliated standards association called MPAI (Moving Picture, Audio and Data Coding by Artificial Intelligence) has been established. MPAI has adopted a new management model for the IP Rights associated with the work done within a standardization body. This new industrial property management model is called the Framework License (FWL). The model intends to overcome the uncertainties generated by the FRAND declaration, because guidelines on how the future licenses relating to Standard Essential Patents (SEPs) should be applied are established at the outset of the standardization work.

With these more precise guidelines already decided in the course of the standardization process, MPAI plans to help both the holders of Standard Essential Patents (SEPs) and the implementers of the newly standardized technologies find an agreement for the use of SEPs, avoiding the frictions we have sometimes seen.

As a consequence of this standardization work, a Call for Technologies supporting the MPAI AI Framework (AIF) standard was recently issued, along with the AIF Framework License covering its potentially essential IPRs.

The technical goal of MPAI-AIF is to enable the set-up and execution of mixed processing and inference workflows made of Machine Learning, Artificial Intelligence and legacy Data Processing components called AI Modules (AIMs).

The MPAI AI Framework standard will facilitate integration of AI and legacy data processing components through standard interfaces and methods. MPAI experts have already validated MPAI’s innovative approach in a sample microcontroller-based implementation that is synergistic with MPAI-AIF standard development.

The Framework License

Access to the standard will be granted in a non-discriminatory fashion in compliance with the generally accepted principles of competition law and agreed upon before a standard is developed.

MPAI has replaced FRAND assurances with FWLs, defined as the set of voluntary terms to use in a license, without monetary values. FWLs are developed by a committee of MPAI members (the IPR Support Advisory Committee) who are experts in the field of IP.

Practically, the FWL is the business model for remunerating the IPRs in the standard, but it does not bear values: no $, no %, no dates, etc. At most, the FWL may provide that in individual cases there is a cap on the royalties to be paid, or an initial grace period during which no royalties are paid to foster the adoption of the technology by the market, and so on. Furthermore, the FWL states that the total cost of the licenses issued by IPR holders will be in line with the total cost of the licenses for similar standardized technologies and will take into account the value on the market of the specific standardized technology.
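
As a purely illustrative aid, the sketch below shows how a royalty cap and a grace period of the kind mentioned above could interact once an actual license adds the values; every number and parameter here is invented for the example and appears in no Framework License.

    # Illustrative only: the values below are invented; the FWL itself contains none.
    def royalties_due(units_sold: int, unit_rate: float, year: int,
                      grace_end_year: int, annual_cap: float) -> float:
        """Royalties for one year under a grace period and an annual cap."""
        if year <= grace_end_year:        # grace period: nothing is due
            return 0.0
        raw = units_sold * unit_rate      # per-unit royalty
        return min(raw, annual_cap)       # annual cap on total royalties

    # Example: 2,000,000 units at 0.05 per unit, cap of 50,000, grace period to 2022.
    print(royalties_due(2_000_000, 0.05, 2022, 2022, 50_000.0))  # -> 0.0 (grace period)
    print(royalties_due(2_000_000, 0.05, 2023, 2022, 50_000.0))  # -> 50000.0 (capped)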

Only when the future standards developed by MPAI are adopted by the market and the FWLs operate as guidelines for licensing the technologies compliant with the standard will it be possible to really understand whether the FWL is useful in helping to close the gap between licensors and implementers. At that point, we might simply put the current FRAND declaration concept in the attic.

The full text of the FWL associated with the MPAI-AIF standard can be found at this link.

The guidelines for the licenses that will follow the AIF FWL are listed below:

Conditions of use of the License

  1. The License will be in compliance with generally accepted principles of competition law and the MPAI Statutes
  2. The License will cover all of Licensor’s claims to Essential IPR practiced by a Licensee of the MPAI-AIF standard.
  3. The License will cover Development Rights and Implementation Rights
  4. The License will apply to a baseline MPAI-AIF profile and to other profiles containing additional technologies
  5. Access to Essential IPRs of the MPAI-AIF standard will be granted in a non-discriminatory fashion.
  6. The scope of the License will be subject to legal, bias, ethical and moral limitations
  7. Royalties will apply to Implementations that are based on the MPAI-AIF standard
  8. Royalties will not be based on the computational time nor on the number of API calls
  9. Royalties will apply on a worldwide basis
  10. Royalties will apply to any Implementation
  11. An MPAI-AIF Implementation may use other IPR to extend the MPAI-AIF Implementation or to provide additional functionalities
  12. The License may be granted free of charge for particular uses if so decided by the licensors
  13. The Licenses will provide:
    1. a threshold below which a License will be granted free of charge and/or
    2. a grace period during which a License will be granted free of charge and/or
    3. an annual in-compliance royalty cap applying to total royalties due on worldwide revenues for a single Enterprise
  14. A preference will be expressed on the entity that should administer the patent pool of holders of Patents Essential to the MPAI-AIF standard
  15. The total cost of the Licenses issued by IPR holders will be in line with the total cost of the Licenses for similar technologies standardized in the context of Standard Development Organizations
  16. The total cost of the Licenses will take into account the value on the market of the AI Framework technology standardized by MPAI.

By Roberto Dini, member


What is the state of MPAI work?

Introduction

This article responds to the question: where is MPAI today, on the first day of 2021, three months after its foundation, with respect to its mission and plans?

Converting a mission into a work plan

Looking back, the MPAI mission “Moving Picture, Audio and Data Coding by Artificial Intelligence” looked very attractive, but the task of converting that nice-looking mission into a work plan was daunting. Is there anything to standardise in Artificial Intelligence (AI)? Thousands of companies use AI but do not need standards. Isn’t it the case that AI signals the end of media and data coding standardisation?

The first answer is that we should first agree on a definition of standard. One is “the agreement reached by a group of individuals who recognise the advantage of all doing certain things in an agreed way”. There is, however, an older definition of standard that says “the agreement that permits large production runs of component parts that are readily fitted to other parts without adjustment”.

Everybody knows that implementing an MPEG audio or video codec means following a minutely prescribed procedure implied by definition #1. But what about an MPAI “codec”?

In the AI world, a neural network does the job it has been designed for and the network designer does not have to share with anyone else how his neural network works. This is true for the “simple” AI applications, like using AI to recognise a particular object, and for some of the large-scale AI applications that major OTTs run on the cloud.

The application scope of AI is expanding, however, and application developers do not necessarily have the know-how, capability or resources to develop all the pieces needed to make a complete AI application. Even if they wanted to, they could very well end up with an inferior solution, because they would have to spread their resources across multiple technologies instead of concentrating on those they know best and acquiring the others from the market.

MPAI has adopted the definition of standard as “the agreement that permits large production runs of component parts that are readily fitted to other parts without adjustment”. Therefore, MPAI standards target components, not systems; not the inside of the components, but the outside. The goal is, indeed, to assure users of the standard that the components will be “readily fitted to other parts without adjustment”.

The MPAI definition of standard appeared in an old version of the Encyclopaedia Britannica. Probably the definition was inspired decades earlier, at the dawn of industrial standards spearheaded by the British Standards Institution, the first modern industry standards association, when drilling, reaming and threading were all the rage in the industry of the time.

Drilling, reaming and threading in AI

AI has nothing to do with drilling, reaming and threading (actually, it could, but that is not a story for today). However, MPAI addresses the problem of standards in the same way a car manufacturer addresses the problem of procuring nuts and bolts.

Let us consider an example AI problem: a system that allows a machine to have a more meaningful dialogue with a human than is possible today. Today, with speech recognition and synthesis technologies, it is already possible to have a meaningful human-machine dialogue. However, if you are offering a service and you happen to deal with an angry customer, it is highly desirable for the machine to understand the customer’s state of mind, i.e. her “emotion”, and reconfigure the machine’s answers appropriately, lest the customer get angrier. At yet another level of complexity, if your customer is having an audio-visual conversation with the machine, it would be useful for the machine to extract the person’s emotions from her facial traits.

Sure, some companies can offer complete systems, full of neural networks designed to do the job. There is a problem, though: what control do you, as a user, have over the way AI is used in this big black box? The answer is, unfortunately, none, and this is one of the problems of mass use of AI, where millions and, in the future, billions of people will deal with machines that show levels of intelligence without knowing how that (artificial) intelligence has been programmed before being injected into a machine or a service.

Solving this problem is not part of MPAI’s mission, nor can MPAI offer a full solution. However, MPAI standards can offer a path that may lead to a less uncontrolled deployment of AI. This is exemplified by Figure 1 below.

Figure 1 – Human-machine conversation with emotion

Each of the six modules in the figure can be a neural network that has been trained to do a particular job. If the interfaces of the “Speech recognition” module, i.e. the AI equivalent of “mechanical threading”, are respected, the module can be replaced by another having the same interfaces. Eventually, you can have a system with the same functionality but, possibly, different performance. Individual modules can be tested in appropriate testing environments to assess how well each module does the job it claims to do.
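
A toy sketch of this replaceability, with hypothetical module names and interfaces (nothing here is normative): the workflow only depends on a module’s interface, so any conforming implementation can be slotted in.

    # Toy sketch: two speech recognition modules expose the same interface
    # ("speech" in, "text" out), so either can be used in the workflow.
    from typing import Any, Callable, Dict

    AIMFunc = Callable[[Dict[str, Any]], Dict[str, Any]]

    def vendor_a_speech_recognition(inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"text": "<text recognised by vendor A>"}

    def vendor_b_speech_recognition(inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"text": "<text recognised by vendor B>"}

    def emotion_recognition(inputs: Dict[str, Any]) -> Dict[str, Any]:
        return {"emotion": "neutral"}

    def run_workflow(speech_recognition: AIMFunc, speech: bytes) -> Dict[str, Any]:
        """The workflow depends only on the modules' interfaces, not their internals."""
        text = speech_recognition({"speech": speech})["text"]
        emotion = emotion_recognition({"speech": speech})["emotion"]
        return {"text": text, "emotion": emotion}

    # Swapping one conforming implementation for another changes performance,
    # not functionality.
    print(run_workflow(vendor_a_speech_recognition, b"..."))
    print(run_workflow(vendor_b_speech_recognition, b"..."))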

It is useful to compare this approach with the way we understand the human brain to operate. Our brain is not just an undifferentiated network of 100 billion variously connected neurons. It is a system of “modules” whose nature and functions have been researched for more than a century. Each “module” is made of smaller components. All “modules” and their connections are implemented with the same technology: interconnected neurons.

Figure 2, courtesy of Prof. Wen Gao of Pengcheng Lab, Shenzhen, Guangdong, China, shows the processing steps of an image in the human brain until the content of the image is “understood” and the “push a button” action is ordered.

Figure 2 – The path from the retina to finger actuation in a human

A module of the figure is the Lateral Geniculate Nucleus (LGN). This connects the optic nerve to the occipital lobe. The LGN has 6 layers – a kind of sub-module – each of which performs distinct functions. Likewise for the other modules crossed by the path.

Independent modules need an environment

We do not know what “entity” in the human brain controls the thousands of processes that take place in it, but we know that, without an infrastructure governing the operation, we cannot make the modules of Figure 1 operate and produce the desired results.

The environment where “AI modules” operate is clearly a target for a standard and MPAI has already defined the functional requirements for what it calls AI Framework, depicted in Figure 3. A Call for Technologies has been launched and submissions are due 2021/02/15.

Figure 3 – The MPAI AI Framework model (MPAI-AIF)

The inputs at the left-hand side correspond to the visual information from the retina in Figure 2, and the outputs correspond to the activation of the muscle. One AI Module (AIM) could correspond to the LGN and another to the V1 visual cortex; Storage could correspond to short-term memory, Access to long-term memory and Communication to the structure of axons connecting the 100 billion neurons. The AI Framework model also considers the possibility of having distributed instances of AI Frameworks, something for which we have no correspondence, unless we believe in the possibility of a human hypnotising another human and controlling their actions 😉.

The other element of the AI Framework that has no correspondence with the human brain – until proven otherwise, I mean – is the Management and Control component. In MPAI this clearly plays a very important role, as demonstrated by the MPAI-AIF Functional Requirements.

Implementing human-machine conversation with emotion

Figure 1 is a variant of an MPAI Use Case called Conversation with Emotion, one of the 7 Use Cases that have reached the Commercial Requirements stage in MPAI. An implementation using the AI Framework can be depicted as in Figure 4.

Figure 4 – A fully AI-based implementation of human-machine conversation with emotion

If the six AIMs are implemented according to the emerging MPAI-AIF standard, then they can be individually obtained from an open “AIM market” and added to, or replaced in, the configuration of Figure 4. Of course, a machine capable of having a conversation with a human can be implemented in many ways. However, a non-standard system must be designed and implemented in all its components, and users have less visibility of how the machine works.

One could ask: why should AI Modules be “AI”? Why can’t they simply be Modules, i.e. implemented with legacy data processing technologies? Indeed, data processing in this and other fields has a decades-long history. While AI technologies are fast maturing, some implementers may wish to re-use some legacy Modules they have in store.

The AI Framework is open to this possibility and Figure 5 shows how this can be implemented. AI Modules contain the necessary intelligence in the neural networks inside the AIM, while legacy modules typically need Access to an external Knowledge Base.

Figure 5 – A mixed AI-legacy implementation of human-machine conversation with emotion
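
A minimal sketch of the distinction suggested by Figure 5, with invented names: an AI-based module carries its knowledge inside the trained model, while a legacy module obtains it from an external Knowledge Base through the Access component.

    # Hypothetical sketch: legacy modules rely on Access to an external Knowledge
    # Base, whereas AI modules embed the equivalent knowledge in their networks.
    from typing import Dict


    class Access:
        """Stand-in for the AI Framework's Access component (external, slowly changing data)."""
        def __init__(self, knowledge_base: Dict[str, str]):
            self._kb = knowledge_base

        def lookup(self, key: str) -> str:
            return self._kb.get(key, "unknown")


    class LegacyLanguageUnderstanding:
        """Legacy Data Processing module: needs Access to a Knowledge Base."""
        def __init__(self, access: Access):
            self.access = access

        def meaning(self, text: str) -> str:
            return self.access.lookup(text.lower())


    class AILanguageUnderstanding:
        """AI-based module: the 'knowledge' lives inside the trained network."""
        def meaning(self, text: str) -> str:
            # ... inference with an embedded model (omitted) ...
            return "<meaning inferred by the model>"


    kb_access = Access({"hello": "greeting"})
    print(LegacyLanguageUnderstanding(kb_access).meaning("Hello"))  # -> greeting
    print(AILanguageUnderstanding().meaning("Hello"))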

Conclusions

This article has described how MPAI is implementing its mission of developing standards in the Moving Picture, Audio and Data Coding by Artificial Intelligence domain. The method described blends the need to have a common reference (the “agreement” mentioned above) with the need to leave ample room for competition between actual implementations of MPAI standards.

The subdivision of a possibly complex AI system into elementary blocks – AI Modules – not only promotes the establishment of a competitive market of AI Modules, but also gives users an insight into how the components of the AI system operate, hence giving humans back more control over AI systems. It also lowers the threshold to the introduction of AI, spreading its benefits to a larger number of people.


An introduction to the MPAI-AIF Call for Technologies

On 2020/12/21 MPAI held a teleconference to illustrate the MPAI-AIF Call for Technologies (CfT) and the associated Framework Licence (FWL). This article summarises the main points illustrated at the teleconference: why and who MPAI is, the MPAI-AIF Functional Requirements, the MPAI-AIF Call for Technologies and the MPAI-AIF Framework Licence.

Miran Choi, an MPAI Director and Chair of the Communication Advisory Committee, recalled the reasons that led to the establishment of MPAI.

Over the past 3 decades, media compression standards have allowed manufacturing and services to boom. However, the technology momentum is progressively slowing, while AI technologies are taking centre stage by offering more capabilities than traditional technologies, by being applicable to data other than audio/video and by being supported by a global research effort. In addition, industry has recently suffered from the inadequacy of the Fair, Reasonable and Non-Discriminatory (FRAND) model to deal with the tectonic changes of technology-intensive standards.

Miran then summarised the main characteristics of MPAI: a non-profit, unaffiliated and international association that develops

  • Standards for
    • AI-enabled data coding
    • Technologies that facilitate integration of data coding components in ICT systems and
  • Associated clear IPR licensing frameworks.

MPAI is the only standards organisation that has set AI as the key enabling technology for data coding standards. MPAI members come from industry, research and academia of 15 countries, representing a broad spectrum of technologies and applications.

The development of standards must obey rules of openness and due process, and MPAI has a rigorous process to develop standards in 6 steps:

  1. Use cases – Collect/aggregate use cases in cohesive projects applicable across industries
  2. Functional Requirements – Identify the functional requirements the standard should satisfy
  3. Commercial Requirements – Develop and approve the framework licence of the standard
  4. Call for Technologies – Publish a call for technologies supporting the functional and commercial requirements
  5. Standard development – Develop the standard in an especially established Development Committee (DC)
  6. MPAI standard – Complete the standard and obtain declarations from all Members

The transitions from one stage to the next are approved by the General Assembly.

The MPAI-AIF standard project is at the Call for Technologies stage.

Andrea Basso, Chair of the MPAI-AIF Development Committee (AIF-DC) in charge of the development of the MPAI-AIF standard, introduced the motivations and functional requirements of the MPAI-AIF standard.

MPAI has developed several Use Cases for disparate applications, coming to the conclusion that they can all be implemented with a combination of AI-based modules concurring to the achievement of the intended result. The history of media standards has shown the benefits of standardisation. Therefore, to avoid the danger of incompatible implementations of modules put on the market, where costs multiply at all levels and mass adoption of AI technologies is delayed, MPAI seeks to standardise AI Modules (AIMs) with standard interfaces, combined and executed within an MPAI-specified AI Framework. AIMs with standard interfaces will reduce overall design costs and increase component reusability, create favourable conditions leading to horizontal markets of competing implementations, and promote adoption and incite progress of AI technologies.

AIMs need an environment where they can be combined and executed. This is what MPAI-AIF – where AIF stands for AI Framework – is about. The AI Framework is depicted in the figure.

The AI Framework has 6 components: Management and Control, Execution, AI Modules, Communication, Storage and Access.

The MPAI-AIF functional requirements are listed below; a sketch of a possible single-AIM life-cycle interface follows the list:

  1. Possibility to establish general Machine Learning and/or Data Processing life cycles
    1. for single AIMs to
      1. instantiate-configure-remove
      2. dump/retrieve internal state
      3. start-suspend-stop
      4. train-retrain-update
      5. enforce resource limits
      6. implement auto-configuration/reconfiguration of ML-based computational models
    2. for multiple AIMs to
      1. initialise the overall computational model
      2. instantiate-remove-configure AIMs
      3. manually, automatically, dynamically and adaptively configure interfaces with Com­ponents
      4. one- and two-way signalling for computational workflow initialisation and control of combinations of AIMs
  2. Application-scenario dependent hierarchical execution of workflows
  3. Topology of networked AIMs that can be synchronised according to a given time base and full ML life cycles
  4. Supervised, unsupervised and reinforcement-based learning paradigms
  5. Computational graphs, such as Directed Acyclic Graph (DAG) as a minimum
  6. Initialisation of signalling patterns, communication and security policies between AIMs
  7. Protocols to specify storage access time, retention, read/write throughput etc.
  8. Storage of Components’ data
  9. Access to
    1. Static or slowly changing data with standard formats
    2. Data with proprietary formats
  10. Possibility to implement AI Frameworks featuring
    1. Asynchronous and time-based synchronous operation depending on application
    2. Dynamic update of the ML models with seamless or minimal impact on its operation
    3. Time-sharing operation of ML-based AIMs shall enable use of the same ML-based AIM in multiple concurrent applications
    4. AIMs which are aggregations of AIMs exposing new interfaces
    5. Workflows that are a mixture of AI/ML-based and DP technology-based AIMs.
    6. Scalability of complexity and performance to cope with different scenarios, e.g. from small MCUs to complex distributed systems
  11. Possibility to create MPAI-AIF profiles
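
As announced above, here is a sketch of what the single-AIM life-cycle operations of requirement 1 might look like in code; the class, states and method names are illustrative assumptions, not part of MPAI-AIF.

    # Hypothetical sketch of single-AIM life-cycle operations (requirement 1 above).
    from enum import Enum, auto
    from typing import Any, Dict


    class AIMState(Enum):
        INSTANTIATED = auto()
        RUNNING = auto()
        SUSPENDED = auto()
        STOPPED = auto()


    class ManagedAIM:
        """An AIM as seen by Management and Control."""

        def __init__(self, config: Dict[str, Any]):          # instantiate-configure
            self.config = config
            self.state = AIMState.INSTANTIATED
            self._internal_state: Dict[str, Any] = {}

        def start(self) -> None:
            self.state = AIMState.RUNNING                     # start

        def suspend(self) -> None:
            self.state = AIMState.SUSPENDED                   # suspend

        def stop(self) -> None:
            self.state = AIMState.STOPPED                     # stop (removal would follow)

        def dump_state(self) -> Dict[str, Any]:               # dump/retrieve internal state
            return dict(self._internal_state)

        def retrain(self, data: Any) -> None:                 # train-retrain-update
            pass  # update of the embedded model omitted

        def enforce_resource_limits(self, max_memory_mb: int) -> None:
            self.config["max_memory_mb"] = max_memory_mb      # enforce resource limits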

Panos Kudumakis, an MPAI member, explained the MPAI-AIF Call for Technologies:

  1. Who can submit
    1. All parties, including non-members, who believe they have relevant technologies
    2. Responses are submitted to the secretariat, who acknowledges them via email
    3. Technologies submitted must
      1. Support the requirements of N74
      2. Be released according to the MPAI-AIF Framework Licence (N101) – if selected by MPAI for inclusion in MPAI-AIF
    4. MPAI will select the most suitable technologies on the basis of their technical merits for inclusion in MPAI-AIF.
    5. MPAI is not obligated to select a particular technology or to select any technology if those submitted are found inadequate.
  2. A submission shall contain
    1. Detailed documentation describing the proposed technologies.
    2. Annex A: Information Form (contact info, proposal summary).
    3. Annex B: Evaluation Sheet, to be taken into consideration for self-evaluation (quantitative & qualitative) of the submission; it will be filled out during the peer-to-peer evaluation phase.
    4. Annex C: Requirements Check List (N74), to be duly filled out, indicating (using a table) which of the identified requirements are satisfied. If a requirement is not satisfied, the submission shall indicate the reason.
    5. Annex D: Mandatory text in responses
  3. A submission may contain
    1. Comments on the completeness and appropriateness of the MPAI-AIF requirements and any motivated suggestion to extend those requirements.
    2. A preliminary demonstration, with a detailed document describing it.
    3. Any other additional relevant information that may help evaluate the submission, such as additional use cases.
  4. Assessment
    1. Respondents must present their submission, otherwise the proposal is discarded.
    2. If submission is accepted in whole or in part, submitter shall make available a working implementation, including source code (for use in MPAI-AIF Reference Software) before the technology is accepted for the MPAI-AIF standard.
    3. Software can be written in compiled or interpreted programming languages and in hardware description languages.
    4. A submitter who is not an MPAI member shall immediately join MPAI, otherwise the submission is discarded.
    5. An assessment guidelines form to aid the peer-to-peer evaluation phase is being finalised.
  5. Calendar
    1. Call for Technologies 16 Dec (MPAI-3)
    2. Presentation Conference Calls 21 Dec/07 Jan
    3. Notification of intention to submit 15 Jan
    4. Assessment form 20 Jan (MPAI-4)
    5. Submission deadline 15 Feb
    6. Calendar of evaluation of responses 17 Feb (MPAI-5)
    7. Approval of MPAI-AIF standard 19 July (estimate)

Davide Ferri, MPAI Director and Chair of AIF-FWL, the committee that developed the MPAI-AIF Framework Licence (FWL), explained that the FWL covers the MPAI-AIF technology, which specifies a generic execution environment, possibly integrating Machine Learning, Artificial Intelligence and legacy Data Processing components, implementing application areas such as

  1. Context-based Audio Enhancement (MPAI-CAE)
  2. Integrative Genomic/Sensor Analysis (MPAI-GSA)
  3. AI-Enhanced Video Coding (MPAI-EVC)
  4. Server-based Predictive Multiplayer Gaming (MPAI-SPG)
  5. Multi-Modal Conversation (MPAI-MMC)
  6. Compression and Understanding of Industrial data (MPAI-CUI)

These six application areas are expected to become MPAI standards.

The FWL includes a set of definitions that are omitted here, in particular the definition of Licence, namely the Framework Licence to which the values, e.g. currency, percentages, dates etc., related to a specific Intellectual Property will be added.

The FWL is expressed in concise form as below

  1. The Licence will:
    1. be in compliance with generally accepted principles of competition law and the MPAI Statutes
    2. cover all of Licensor’s claims to Essential IPR practiced by a Licensee of the MPAI-AIF standard
    3. cover Development Rights and Implementation Rights
    4. apply to a baseline MPAI-AIF profile and to other profiles containing additional technologies
  2. The Licence will grant access to Essential IPRs of the MPAI-AIF standard in a non-discriminatory fashion.
  3. The scope of the Licence will be subject to legal, bias, ethical and moral limitations
  4. Royalties will:
    1. apply to Implementations that are based on the MPAI-AIF standard
    2. not be based on the computational time nor on the number of API calls
    3. apply on a worldwide basis
    4. apply to any Implementation
  5. An MPAI-AIF Implementation may use other IPR to extend the MPAI-AIF Implementation or to provide additional functionalities
  6. The Licence may be granted free of charge for particular uses if so decided by the licensors
  7. The Licences will specify
    1. a threshold below which a Licence will be granted free of charge and/or
    2. a grace period during which a Licence will be granted free of charge and/or
    3. an annual in-compliance royalty cap applying to total royalties due on worldwide rev­enues for a single Enterprise
  8. A preference will be expressed on the entity that should administer the patent pool of holders of Patents Essential to the MPAI-AIF standard
  9. The total cost of the Licences issued by IPR holders will be in line with the total cost of the licences for similar technologies standardised in the context of Standard Development Organisations
  10. The total cost of the Licences will take into account the value on the market of the AI Framework technology Standardised by MPAI.

Miran then recalled how easily legal entities, or individuals representing a technical department of a university, that support the MPAI mission and are able to contribute to the development of MPAI standards can join MPAI. They should:

  1. Choose one of the two classes of membership (until 2021/12/31):
    1. Principal Members, with the right to vote (2400 €)
    2. Associate Members, without the right to vote (480 €)
  2. Send to secretariat@mpai.community:
    1. a signed copy of Template for MPAI Membership applications
    2. a signed copy of the MPAI Statutes. Each page should be signed and initialled
    3. a copy of the bank transfer

MPAI issues a Call for Technologies supporting its AI Framework standard

Geneva, Switzerland – 16 December 2020. Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI), an international unaffiliated standards association, has approved a Call for Technologies (CfT) for publication at its 3rd General Assembly MPAI-3. The CfT concerns technologies for MPAI-AIF, acronym of the MPAI AI Framework standard.

The goal of MPAI-AIF is to enable the set-up and execution of mixed processing and inference workflows made of Machine Learning, Artificial Intelligence and legacy Data Processing components called AI Modules (AIMs).

The MPAI AI Framework standard will facilitate integration of AI and legacy data processing components through standard interfaces and methods. MPAI experts have already validated MPAI’s innovative approach in a sample microcontroller-based implementation that is synergistic with MPAI-AIF standard development.

In line with its statutes, MPAI has developed the Framework Licence associated with the MPAI-AIF standard. Responses to the CfT shall be in line with the requirements laid down in the CfT and shall be supported by a statement that the respondent will licence their technologies, if adopted in the standard, according to the framework licence.

MPAI is also working on a range of standards for AIM input/output interfaces used in several application areas. Two candidate standards have completed the definition of Functional Requirements and have been promoted to the Commercial Requirements stage.

The two candidates are

  1. MPAI-CAE – Context-based Audio Enhancement uses AI to improve the user experience for a variety of uses such as entertainment, communication, teleconferencing, gaming, post-production, restoration etc. in the contexts of the home, the car, on-the-go, the studio etc., allowing a dynamically optimised user experience.
  2. MPAI-MMC – Multi-Modal Conversation uses AI to enable human-machine conversation that emulates human-human conversation in completeness and intensity.

MPAI adopts a light approach to AIM standardisation: different implementors can produce AIMs of different performance while still exposing the same standard interfaces. MPAI AIMs with different features from a variety of sources will promote horizontal markets of AI solutions that tap into and further promote AI innovation.

The MPAI web site provides more information about other MPAI standards: MPAI-CUI uses AI to compress and understand industrial data, MPAI-EVC to improve the performance of existing video codecs, MPAI-GSA to understand and compress the result of combining genomic experiments with data produced by related devices, e.g. video, motion, location, weather and medical sensors, and MPAI-SPG to improve the user experience of online multiplayer games.

MPAI develops data coding standards for applications that have AI as core enabling technology. Any legal entity that supports the MPAI mission may join MPAI if it is able to contribute to the development of standards for the efficient use of Data.

Visit the MPAI home page and contact the MPAI secretariat for specific information.



MPAI commences development of the Framework Licence for the MPAI AI Framework

Geneva, Switzerland – 18 November 2020. The Geneva-based international Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) has concluded its second General Assembly making a major step toward the development of its first standard called MPAI AI Framework, acronym MPAI-AIF.

MPAI-AIF has been designed to enable creation and automation of mixed processing and inference workflows made of Machine Learning, Artificial Intelligence and traditional Data Processing components.

MPAI wishes to give as much information as possible to users of its standards. After approving the Functional Requirements, MPAI is now developing the Commercial Requirements, to be embodied in the MPAI-AIF Framework Licence. This will collect the set of conditions of use of the eventual licence(s), without values, e.g. currency, percentage, dates etc.

An optimal implementation of the MPAI use cases requires a coordinated combination of processing modules. MPAI has assessed that, by standardising the interfaces of Processing Modules, to be executed in the MPAI AI Framework, horizontal markets of competing standard implementations of processing modules will emerge.

The MPAI-AIF standard, which MPAI plans to deliver in July 2021, will reduce cost, promote adoption and incite progress of AI technologies; if instead the market develops incompatible implementations, costs will multiply and adoption of AI technologies will be delayed.

MPAI-AIF is the first of a series of standards MPAI has in its development pipeline. The following three work areas, promoted to Functional Requirements stage, will build on top of MPAI-AIF:

  1. MPAI-CAE – Context-based Audio Enhancement uses AI to improve the user experience for a variety of uses such as entertainment, communication, teleconferencing, gaming, post-production, restoration etc. in the contexts of the home, the car, on-the-go, the studio etc., allowing a dynamically optimised user experience.
  2. MPAI-GSA – Integrative Genomic/Sensor Analysis uses AI to understand and compress the results of high-throughput experiments combining genomic/proteomic and other data – for instance from video, motion, location, weather, medical sensors. The target use cases range from personalised medicine to smart farming.
  3. MPAI-MMC – Multi-Modal Conversation uses AI to enable human-machine conversation that emulates human-human conversation in completeness and intensity.

The MPAI web site provides more information about other MPAI standards: MPAI-EVC uses AI to improve the performance of existing video codecs, MPAI-SPG to improve the user experience of online multiplayer games and MPAI-CUI to compress and understand industrial data.

MPAI seeks the involvement of companies who can benefit from international data coding standards and calls for proposals of standards. In a unique arrangement for a standards organisation, MPAI gives even non-members the opportunity to accompany a proposal through the definition of its goals and the development of its functional requirements. More details here.

MPAI develops data coding standards for a range of applications with Artificial Intelligence (AI) as its core enabling technology. Any legal entity that supports the MPAI mission may join MPAI if it is able to contribute to the development of Technical Specifications for the efficient use of Data.

Visit the MPAI home page and contact the MPAI secretariat for specific information.



A new way to develop useful standards

Communication standards, at least so far, are handled in an odd way. They are meant to serve the needs of millions, if not billions, of people, yet the decisions about what the standards should do are left in the hands of people who, no matter how many, are not billions, not millions, not even thousands.

This is the end point of the unilateral approach adopted by inventors starting, one can say, from Gutenberg’s movable type and continuing with Niépce-Daguerre’s photography, Morse’s telegraph, Bell-Meucci’s telephone, Marconi’s radio and tens more.

In retrospect, those were “easy” times because each invention satisfied a basic need. Today, the situation is quite different: basic needs are more than satisfied (at least for a significant part of human beings), while the “other needs” can hardly be addressed by the mentioned unilateral approach to technology use. This is even more true today, when we are dealing with a technology – Artificial Intelligence – that will likely be the most pervasive technology ever seen.

This is the reason why MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – likes to call itself, as its domain extension says, a “community”. Indeed, MPAI opens its doors to those who have a need or wish to propose the development of a new standard. As the “MPAI doors” are virtual, because all MPAI activities are carried out online, access to MPAI is all the more immediate.

This MPAI openness should not be taken as a mere “suggestion box”, because MPAI does more than just ask for ideas.

To understand how MPAI is “a community”, I need to explain the MPAI process to develop standards, depicted in Figure 1.

Figure 1 – The MPAI standard development stages

Let’s start from the bottom of the process. Members as well as non-members submit proposals. These are collected and harmonised; some proposals get merged with other similar proposals, and some get split because the harmonisation process so demands. The goal is to identify proposals of standards that reflect the proponents’ wishes while making sense in terms of specification and use across different environments. Non-members can fully participate in this process on a par with members. The result of this process is the definition of a homogeneous area of work called a “Use Case”; possibly more than one Use Case is identified. Each Use Case is described in an Application Note.

The 1st stage of the process entails a full characterisation of the Use Case and the description of the work program that will produce the Functional Requirements.

The 2nd stage is the actual development of the Functional Requirements of the area of work represented by the Use Case.

The “MPAI openness” is represented by the fact that anybody may participate in the three stages of Interest Collection, Use Case and Functional Requirements. There is an exception, though: when a Member makes a proposal that they wish to be exposed to members only.

The next stage is Commercial Requirements. A standard is like any other supply contract. You describe what you supply with its characteristics (Functional Requirements) and at what conditions (Commercial Requirements).

It should be noted that, from this stage on, non-members are not allowed to participate (but they can become members at any time), because their role of proposing and describing what a standard should do is over. Antitrust law does not permit sellers (technology providers) and buyers (users of the standard) to sit together and agree on values such as numbers, percentages or dates, but it does permit sellers to indicate the conditions of use, without values. Therefore, the embodiment of the Commercial Requirements, i.e. the Framework Licence, will refrain from adding such details.

Once both Requirements are available, MPAI is in a position to draft the Call for Technologies (stage 4), review the proposals and develop the standard (stage 5). This is where the role of Associate Members in MPAI ends: only Principal Members may vote to approve the standard, hence trigger its publication. But an Associate Member may become a Principal Member at any time.


A new channel between industry and standards

The challenges of MPAI standardisation

Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) is a standards organisation with the mission to develop data coding standards that have Artificial Intelligence as its core enabling technology.

MPAI faces two main challenges in achieving its mission. The first comes from MPAI’s definition of “data”: any digital representation of a real or computer-generated entity. As any living being or human organisation generates data, and data are more and more pervasive, the scope of MPAI standards is very broad. The second comes from MPAI’s definition of data coding as the transformation of data from one representation into another representation that is more convenient. As convenience is in the eye of the beholder, the number of transformations, hence of standards, is potentially large.

Unlike audio and video coding, whose needs were clear from day one and whose application domains have incrementally extended over the years, data coding is a much more articulated domain, where the simple one-dimensional (compression) world of audio and video coding morphs into a world where the dimensions are potentially many.

This is not a late discovery. The challenges were clear when the MPAI Statutes were drafted. The standard development workflow described in Annex 1 to the Statutes envisages the involvement of any interested party in the process of identifying projects of standards. This is very different from what happens, e.g., in ISO, where the definition of a standard happens in watertight compartments and only committee members or National Bodies can propose standard projects. The logic behind this is that standard projects are weapons in the hands of those who control the committees, and you do not want to give up your weapons easily.

The MPAI standardisation process

MPAI has no weapons because its mission is to serve the industry. MPAI implements its mission as follows:

  1. MPAI solicits proposals of new projects in the form of use cases from anybody, collects and harmonises use cases in a structured proposal of a project and defines a comprehensive use case that is likely to be usable across industries.
  2. Then MPAI develops functional requirements. Note that in the first two stages of the MPAI standards workflow – Use Cases (UC) and Functional Requirements (FR) – participation in the relevant meetings is open to anybody (strictly speaking, however, the member who has proposed a use case may request that only MPAI members participate in the UC and FR stages).
  3. The following stages are: Commercial Requirements, Call for Technologies, Standard development and MPAI standard.

The AI Framework case

In the spirit of collecting inputs from anybody, I will report the case of the AI Framework, which is likely to become the first MPAI standard. If you have any comments, please send an email to leonardo@chiariglione.org.

MPAI has built and analysed six use cases where Artificial Intelligence (AI) technologies can offer significant benefits compared to traditional technologies. The use cases cover widely different application areas:

  1. improving the audio experience when audio is consumed in non-ideal conditions
  2. processing DNA information jointly with consequent physical effects on living organisms
  3. replacing components of a traditional video codec with AI-based components
  4. making up for missing information from online gaming clients
  5. multimodal conversation, and
  6. compression and understanding of industrial data.

Even though use cases are disparate, each one of them can be implemented as a combination of processing modules performing functions that concur to achieving the intended result.

MPAI has assessed that leaving it to the market to develop individual implementations would multiply costs and delay adoption of AI technologies, while a suitable level of standardisation can reduce overall design costs and increase component reusability. Eventually a horizontal market may emerge where proprietary and competing implementations of components exposing standard interfaces will reduce cost and promote adoption and progress of AI technologies.

The MPAI-AIF standard

MPAI has determined that a standard for a processing framework satisfying the requirements derived from the six use cases will achieve the goal. MPAI calls the future standard AI Framework (MPAI-AIF). As AI is a fast-moving field, MPAI expects that MPAI-AIF will be extended as new use cases bring new requirements and new technologies reach maturity.

To avoid the deadlock experienced in other high-technology fields, MPAI will develop a Framework Licence (FWL) associated with the defined MPAI-AIF Requirements. The FWL – essentially the business model that SEP holders will apply to monetise their Intellectual Property (IP), but without values such as the amount or percentage of royalties or the dates due – will act as the Commercial Requirements for the standard and provide a clear IPR licensing framework.

MPAI-AIF enables the creation and automation of mixed ML-AI-DP processing and inference workflows at scale for the use cases mentioned above. The key components of the framework should address different modalities of operation (AI, ML and DP), data pipeline jungles and computing resource allocation, including the constrained hardware scenarios of edge AI devices.

The MPAI-AIF reference model

The reference diagram of MPAI-AIF is given by the following figure

Figure 1 – Normative MPAI-AIF Architecture

  1. Management and Control: acts on PMs so that they execute in the correct order and at the time when they are needed, handling both simple orchestration tasks (i.e. a script) and much more complex tasks with a topology of networked PMs that need to be synchronised according to a given time base (a toy orchestration sketch is given after this list).
  2. Execution: is the environment where PMs operate. It is interfaced with M&C and with Communication and Storage. It receives external inputs and produces the requested outputs both of which are application specific.
  3. Processing Modules (PM): are composed of a processing element (ML or traditional Data Processor), an interface to Communication and Storage, and input and output interfaces (processing specific)
  4. Communication: is required in several cases and can be implemented accordingly, e.g. by means of a service bus.
  5. Storage: stores the inputs and outputs of the individual PMs, data from the PMs’ states and intermediate results, data shared among PMs, and information used by M&C and its procedures
  6. Access: represents the access to static or slowly changing data required by the application such as domain knowledge data, data models, etc.
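
As mentioned in the Management and Control item above, here is a toy, non-normative sketch of how M&C could execute a Directed Acyclic Graph of PMs, with Storage used as the shared data area; all names are invented for illustration.

    # Toy sketch: Management and Control executes PMs in dependency order,
    # reading and writing their data through a shared Storage dictionary.
    from typing import Any, Callable, Dict, List

    PM = Callable[[Dict[str, Any]], Dict[str, Any]]

    def run_dag(pms: Dict[str, PM],
                deps: Dict[str, List[str]],
                external_inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Execute each PM once all the PMs it depends on have produced output."""
        storage: Dict[str, Any] = {"inputs": external_inputs}   # the Storage component
        done: set = set()
        while len(done) < len(pms):
            for name, pm in pms.items():
                if name not in done and all(d in done for d in deps[name]):
                    storage[name] = pm(storage)                  # outputs kept in Storage
                    done.add(name)
        return storage

    pms = {
        "speech_recognition": lambda storage: {"text": "<text>"},
        "emotion_recognition": lambda storage: {"emotion": "calm"},
        "dialogue": lambda storage: {"reply": "<reply>"},
    }
    deps = {
        "speech_recognition": [],
        "emotion_recognition": [],
        "dialogue": ["speech_recognition", "emotion_recognition"],
    }
    print(run_dag(pms, deps, {"speech": b"..."}))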

Requirements

Component requirements

  1. The MPAI-AIF standard shall include specifications of 6 Components
    1. Management and Control
    2. Execution
    3. Processing Modules (PM)
    4. Communication
    5. Storage
    6. Access
  2. Management and Control shall enable operations on the life cycle of
    1. Single PMs: instantiation/removal, reconfiguration, dump/retrieve internal state, start-suspend-stop, train-retrain-update, enforce resource limits
    2. Combinations of PMs: initialisation of the computational model, instructions (e.g. manual or automatic) to computational nodes to communicate between themselves and with output and storage, auto-configuration based on machine learning reconfiguration of computational models, instantiation-removal-reconfiguration of PMs.
  3. Management and Control shall support
    1. An architecture that allows hierarchical execution of workflows, i.e. computational graphs of PMs, possibly structured in hierarchies, for the identified application scenarios
    2. Supervised and unsupervised learning, and reinforcement-based learning paradigms
    3. Directed Acyclic Graph (DAG) topology of PMs
  4. Execution shall
    1. Support distributed deployment of PMs in the cloud and at the edge
    2. Be scalable in complexity and performance to cope with different scenarios, e.g. from small MCUs to complex distributed systems
  5. PMs shall support protocols for
    1. Autoconfiguration (e.g. peer-to-peer)
    2. Manual configuration
    3. Advertising and Discovery
  6. PMs
    1. May be a mixture of AI/ML or DP technologies
    2. Shall be directly connected to the ML life cycle without interruption
  7. Communication shall enable direct and mediated interconnections of PMs
  8. Storage shall support protocols to specify application-dependent non-functional requirements such as access time, retention and read/write throughput (an illustrative declaration is sketched after this list)
  9. Access shall support access to static or slowly changing data of standard formats. Access to private data should also be possible
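
As noted in item 8 above, here is a purely illustrative way a PM could declare its non-functional Storage requirements; the names and figures are assumptions made up for the example.

    # Illustrative only: a hypothetical declaration of the Storage requirements
    # (access time, retention, read/write throughput) mentioned in item 8.
    from dataclasses import dataclass


    @dataclass
    class StoragePolicy:
        max_access_time_ms: float   # worst-case access time the PM can tolerate
        retention_days: int         # how long stored data must be kept
        min_read_mbps: float        # required read throughput
        min_write_mbps: float       # required write throughput


    # A latency-sensitive edge PM might request:
    edge_policy = StoragePolicy(max_access_time_ms=5.0, retention_days=1,
                                min_read_mbps=50.0, min_write_mbps=10.0)
    print(edge_policy)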

Systems requirements

The following requirements are not intended to be applied to the MPAI-AIF standard, but should be used for assessing technologies

  1. Management and Control shall support time-based synchronous and asynchronous operation depending on application
  2. Execution shall allow seamless or minimal-impact operation of its PMs while algorithms or ML models are updated because of new training or retraining (see the sketch after this list)
  3. Task sharing for ML-based PMs shall be supported
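
As referenced in requirement 2 above, here is a minimal sketch of how an updated ML model could be swapped into a running PM with minimal impact, using an atomic reference swap; everything here is an illustrative assumption, not a prescribed mechanism.

    # Hypothetical sketch: a PM whose model can be replaced while requests keep
    # being served (seamless or minimal-impact update).
    import threading
    from typing import Any, Callable

    Model = Callable[[Any], Any]


    class HotSwappablePM:
        """A PM that serves inferences while its model is being updated."""

        def __init__(self, model: Model):
            self._model = model
            self._lock = threading.Lock()

        def infer(self, x: Any) -> Any:
            with self._lock:                 # readers see either the old or the new model
                model = self._model
            return model(x)

        def update_model(self, new_model: Model) -> None:
            with self._lock:                 # atomic swap: no request is dropped
                self._model = new_model


    pm = HotSwappablePM(lambda x: f"old model output for {x}")
    print(pm.infer("sample"))
    pm.update_model(lambda x: f"retrained model output for {x}")
    print(pm.infer("sample"))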

General requirements

  1. The MPAI-AIF standard may include profiles for specific (sets of) requirements

For comments on MPAI requirements send an email to leonardo@chiariglione.org


MPAI launches 6 standard projects on audio, genomics, video, AI framework, multiuser online gaming and multimodal conversation

Geneva, Switzerland – 21 October 2020. The Geneva-based international Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) has concluded its first operational General Assembly adopting 6 areas of work, due to become standardisation projects.

MPAI-CAE – Context-based Audio Enhancement is an area that uses AI to improve the user experience for a variety of uses such as entertainment, communication, teleconferencing, gaming, post-production, restoration etc. in such contexts as in the home, in the car, on-the-go, in the studio etc. allowing a dynamically optimized user experience.

MPAI-GSA – Integrative Genomic/Sensor Analysis is an area that uses AI to understand and compress the results of high-throughput experiments combining genomic/proteomic and other data – for instance from video, motion, location, weather, medical sensors. The target use cases range from personalised medicine to smart farming.

MPAI-SPG – Server-based Predictive Multiplayer Gaming uses AI to minimise the audio-visual and gameplay disruptions during an online real-time game caused by missing information at the server or at the client because of high latency and packet losses.

MPAI-EVC – AI-Enhanced Video Coding plans on using AI to further reduce the bitrate required to store and transmit video information for a variety of consumer and professional applications. One user of the MPAI-EVC standard is likely to be MPAI-SPG for improved compression and higher quality of cloud-gaming content.

MPAI-MMC – Multi-Modal Conversation aims to use AI to enable human-machine conversation that emulates human-human conversation in completeness and intensity.

MPAI-AIF – Artificial Intelligence Framework is an area based on the notion of a framework populated by AI-based or traditional Processing Modules. As this is a foundational standard on which other planned MPAI standards, such as MPAI-CAE, MPAI-GSA and MPAI-MMC, will be built, MPAI intends to move at an accelerated pace: Functional Requirements ready in November 2020, Commercial Requirements ready in December 2020 and Call for Technologies issued in January 2021. The MPAI-AIF standard is planned to be ready before the summer holidays in 2021.

You can find more information about MPAI standards.

MPAI covers its Commercial Requirements needs with Framework Licences (FWL). These are the set of conditions of use of a license of a specific MPAI standard without the values, e.g. currency, percentages, dates, etc. MPAI expects that FWLs will accelerate the practical use of its standards.

MPAI develops data coding standards for a range of applications with Artificial Intelligence (AI) as its core enabling technology. Any legal entity that supports the MPAI mission may join MPAI if it is able to contribute to the development of Technical Specifications for the efficient use of Data.

Visit the MPAI home page and contact the MPAI secretariat for specific information.


What is MPAI going to do?

Moving Picture, Audio and Data Coding by Artificial Intelligence – MPAI – has been devised as an international non-profit organisation with the mission to pass the baton from old-style compression to new AI-based compression. This will take compression performance to new levels, extend the benefits of compression to all industries beset by huge amounts of data and give them the possibility not only to save costs thanks to compression, but also to get more out of their data.

Now that MPAI has been officially constituted on Wednesday 30 September 2020 (see Press Release), what will MPAI do?

This is a reasonable question to ask, but a better question would be: what has MPAI been doing? This is because, some 2 months before its actual establishment, a group of highly motivated experts had already developed a number of use cases, aggregated in areas where MPAI standards can make the difference.

Thanks to the efforts of many, MPAI already has its road mapped out, with several activities at different levels of maturity. The list below gives the more mature areas among the many that have been explored (see the list of use cases). The list order is a personal assessment of maturity.

  1. Context-based Audio Enhancement (MPAI-CAE) is the most mature area. By using AI, MPAI-CAE can improve the user experience in a variety of instances such as entertainment, communication, teleconferencing, gaming, post-production, restoration etc. in a variety of contexts such as the home, the car, on-the-go, the studio etc.
  2. Integrative AI-based analysis of multi-source genomic/sensor experiments aims to define a framework where free and commercial AI-based processing components made available in a horizontal market can be combined to make application-specific “processing apps”.
  3. Multi-modal conversation aims to define an AI-based framework of processing components such as fusion of multimodal input, natural language understanding and generation, speech recognition and synthesis, emotion recognition, intention understanding, gesture recognition and knowledge fusion.
  4. Compression and understanding of financial data aims to enable AI-based filtering and extraction of key information from the flow of data that companies receive from the outside, generate inside or issue because of regulatory compliance.
  5. Server-based Predictive Distributed Multiplayer Online Gaming aims to minimise the visual discontinuities experienced by game players by feeding the data collected from the clients involved in a particular game to an AI-based system that can predict each individual participant’s moves in case that information is missing.
  6. AI-Enhanced Traditional Video Coding aims to develop a video compression standard that will substantially enhance the performance of an existing video codec by enhancing or replacing traditional tools with AI-based tools.

MPAI signals a discontinuity with the past not only in the technology it uses to address known industry needs, but also in the way it overcomes the limitations of the Fair, Reasonable and Non-Discriminatory (FRAND) licensing declarations, a burning issue for many standard developing organisations and their industries. MPAI plans to develop and make known, for each MPAI standard, a “framework licence”, i.e. the business model, without values, dates and percentages, that standard essential patent holders intend to use to monetise their patents adopted in the standard.

Companies, academic institutions and individuals representing departments of academic institutions may apply for MPAI membership, provided that they can contribute to the development of technical specifications for the efficient use of data.

The MPAI website provides additional information. To join MPAI, please contact the secretariat.