Geneva, Switzerland – 11th June 2025. MPAI – Moving Picture, Audio and Data Coding by Artificial Intelligence – the international, non-profit, unaffiliated organisation developing AI-based data coding standards – has concluded its 57th General Assembly (MPAI-57) with the publication of the Call for Neural Network Traceability Technologies and three supporting documents.

The Call for Technologies: Neural Network Watermarking (MPAI-NNW) – Technologies (NNW-TEC) requests Neural Network Traceability Technologies that make it possible:

  1. To verify that the data provided by an Actor and received by another Actor is not compromised, i.e., that it can still be used for its intended scope.
  2. To identify the Actors providing and receiving the data.
  3. To evaluate the quality of solutions that implement points 1 and 2 above using the proposed Neural Network Traceability Technologies.

An Actor is a process producing, providing, processing, or consuming information.
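
For readers who prefer a concrete picture, the sketch below illustrates the three requested capabilities as a minimal Python interface. It is purely illustrative: the names Actor, TraceabilityTechnology, verify_integrity, identify_actors, and evaluate are assumptions made here and do not appear in any MPAI document; responders to the Call define their own technologies and evaluation methods.

    # Hypothetical sketch only: class and function names are invented for
    # illustration and are not part of any MPAI specification.
    from dataclasses import dataclass
    from typing import Protocol


    @dataclass
    class Actor:
        """A process producing, providing, processing, or consuming information."""
        actor_id: str


    class TraceabilityTechnology(Protocol):
        def verify_integrity(self, data: bytes) -> bool:
            """Point 1: check that the data received from another Actor is not
            compromised, i.e. that it can still be used for its intended scope."""
            ...

        def identify_actors(self, data: bytes) -> tuple[Actor, Actor]:
            """Point 2: recover the Actors providing and receiving the data."""
            ...


    def evaluate(tech: TraceabilityTechnology,
                 samples: list[tuple[bytes, bool]]) -> float:
        """Point 3 (toy metric): fraction of samples for which the technology's
        integrity verdict matches the known ground truth."""
        if not samples:
            return 0.0
        correct = sum(tech.verify_integrity(data) == expected
                      for data, expected in samples)
        return correct / len(samples)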

An online presentation of the Call will be held on 1 July 2025 at 15:00 UTC. Please register at https://bit.ly/4mW6AWX to attend.

Responses to the Call are due to the MPAI Secretariat by 23:59 UTC on 27 September 2025.

MPAI is continuing its work plan, which involves the following activities:

  1. AI Framework (MPAI-AIF): developing a new MPAI-AIF specification that facilitates the creation of new workflows using available AIMs (AI Modules).
  2. AI for Health (MPAI-AIH): developing the specification of a system receiving and processing licensed AI Health Data and enabling clients to improve health-processing models via federated learning.
  3. Context-based Audio Enhancement (CAE-DC): developing the Audio Six Degrees of Freedom (CAE-6DF) and Audio Object Scene Rendering (CAE-AOR) specifications.
  4. Connected Autonomous Vehicle (MPAI-CAV): investigating extensions of the current CAV-TEC specification.
  5. Compression and Understanding of Industrial Data (MPAI-CUI): developing the Company Performance Prediction V2.0 specification.
  6. End-to-End Video Coding (MPAI-EEV): exploring the potential of AI-based End-to-End Video Coding.
  7. AI-Enhanced Video Coding (MPAI-EVC): developing an optimised Up-sampling Filter for Video applications (EVC-UFV) standard.
  8. Governance of the MPAI Ecosystem (MPAI-GME): working on version 2.0 of the Specification.
  9. Human and Machine Communication (MPAI-HMC): developing reference software and performance assessment.
  10. Multimodal Conversation (MPAI-MMC): developing the notion of Perceptive and Agentive AI (PAAI) capable of handling more complex questions.
  11. MPAI Metaverse Model (MPAI-MMM): extending the capabilities of the MMM-TEC specification to support more applications.
  12. Neural Network Watermarking (MPAI-NNW): issuing the Call for Neural Network Traceability Technologies described above.
  13. Object and Scene Description (MPAI-OSD): extending the capabilities of the MPAI-OSD V1.3 to support more applications.
  14. Portable Avatar Format (MPAI-PAF): extending the capabilities of the MPAI-PAF V1.4 to support more applications.
  15. AI Module Profiles (MPAI-PRF): extending the scope of the current version of AI Module Profiles.
  16. Server-based Predictive Multiplayer Gaming (MPAI-SPG): exploring new standard opportunities in the domain.
  17. Data Types, Formats, and Attributes (MPAI-TFA): extending the standard to data types used by MPAI standards (e.g., automotive, health, and metaverse).
  18. XR Venues (MPAI-XRV): developing the standard for improved development and execution of Live Theatrical Performances.

Legal entities and representatives of academic departments supporting the MPAI mission and able to contribute to the development of standards for the efficient use of data can become MPAI members.

Please visit the MPAI website, contact the MPAI Secretariat for specific information, subscribe to the MPAI Newsletter and follow MPAI on social media: LinkedIn, Twitter, Facebook, Instagram, and YouTube.