Geneva, Switzerland – 23 March 2022. Today the international, non-profit, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards developing organisation has concluded its 18th General Assembly. Among the outcomes is the publication of a Call for Patent Pool Administrators for two of its approved Technical Specifications.
The MPAI process of standard development prescribes that Active Principal Members, i.e., those intending to participate in the development of a Technical Specification, adopt a Framework Licence before initiating the development. All those contributing to the work are requested to accept the Framework Licence. If they are not Members, they are requested to join MPAI. Once a Technical Specification is approved, MPAI identifies patent holders and facilitates the creation of a patent pool.
Patent holders of Context-based Audio Enhancement (MPAI-CAE) and Multimodal Conversation (MPAI-MMC) have agreed to issue a Call for Patent Pool Administrators and have asked MPAI to publish the call on its website. The Patent Holders expect to work with the selected Entity to facilitate a licensing program that responds to the requirements of the licensees while ensuring the commercial viability of the program. In the future, the coverage of the patent pool may be extended to new versions of MPAI-CAE and MPAI-MMC and/or to other MPAI standards.
Parties interested in being selected as the Entity are requested to communicate their interest to the MPAI Secretariat no later than 1 May 2022 and to provide appropriate qualification material. The Secretariat will forward the received material to the Patent Holders.
While Version 1 of MPAI-CAE and MPAI-MMC progresses toward practical deployment, work is ongoing to develop the Use Cases and Functional Requirements of MPAI-CAE and MPAI-MMC V2. These will extend the V1 technologies to support new use cases:
- Conversation about a Scene (CAS), enabling a human to hold a conversation with a machine about the objects in a scene.
- Human to Connected Autonomous Vehicle Interaction (HCI), enabling humans to have rich interaction, including question answering and conversation, with a Connected Autonomous Vehicle (CAV).
- Mixed-reality Collaborative Spaces (MCS), enabling humans to develop collaborative activities in a Mixed-Reality space via their avatars.
MPAI develops data coding standards for applications that have AI as the core enabling technology. Any legal entity supporting the MPAI mission may join MPAI, if able to contribute to the development of standards for the efficient use of data.
MPAI is currently engaged in extending some of the already approved standards and in developing nine other standards (shown in italics in the list below).
| Name of standard | Acronym | Brief description |
| --- | --- | --- |
| AI Framework | MPAI-AIF | Specifies an infrastructure enabling the execution of implementations and access to the MPAI Store. |
| Context-based Audio Enhancement | MPAI-CAE | Improves the user experience of audio-related applications in a variety of contexts. |
| Compression and Understanding of Industrial Data | MPAI-CUI | Predicts company performance from governance, financial, and risk data. |
| Governance of the MPAI Ecosystem | MPAI-GME | Establishes the rules governing the submission of and access to interoperable implementations. |
| Multimodal Conversation | MPAI-MMC | Enables human-machine conversation emulating human-human conversation. |
| *Server-based Predictive Multiplayer Gaming* | MPAI-SPG | Trains a network to compensate for data losses and to detect false data in online multiplayer gaming. |
| *AI-Enhanced Video Coding* | MPAI-EVC | Improves existing video coding with AI tools for short-to-medium term applications. |
| *End-to-End Video Coding* | MPAI-EEV | Explores the promising area of AI-based “end-to-end” video coding for longer-term applications. |
| *Connected Autonomous Vehicles* | MPAI-CAV | Specifies components for Environment Sensing, Autonomous Motion, and Motion Actuation. |
| *Avatar Representation and Animation* | MPAI-ARA | Specifies descriptors of avatars impersonating real humans. |
| *Neural Network Watermarking* | MPAI-NNW | Measures the impact of adding ownership and licensing information in models and inferences. |
| *Integrative Genomic/Sensor Analysis* | MPAI-GSA | Compresses data from high-throughput experiments combining genomic/proteomic and other data. |
| *Mixed-reality Collaborative Spaces* | MPAI-MCS | Supports collaboration of humans represented by avatars in virtual-reality spaces called Ambients. |
| *Visual Object and Scene Description* | MPAI-OSD | Describes objects and their attributes in a scene and the semantic description of the objects. |
Visit the MPAI website, contact the MPAI Secretariat for specific information, subscribe to the MPAI Newsletter, and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.
Most importantly: join MPAI, share the fun, build the future.