Highlights
- Public online presentation of Neural Network Traceability Technologies V1.0
- An introduction to Neural Network Traceability Technologies V1.0
- MPAI workplan for the first half of 2026
- Meetings in the February – March 2026 cycle
Public online presentation of Neural Network Traceability Technologies V1.0
The 65th General Assembly (MPAI-65) has approved the publication of Version 1.0 of the Neural Network Traceability Technologies standard with a Request for Community Comments.
Register to attend the public online presentation of the NNW-TEC V1.0 standard to be held on 10 March 2026 at 15 UTC. Comments on the draft standard should be sent to the MPAI Secretariat by 13 April 2026. The next news item provides a short introduction to NNW-TEC V1.0.
An introduction to Neural Network Traceability Technologies V1.0
During the last decade, Neural Networks have been deployed in an increasing variety of domains. The production of Neural Networks has become costly, in terms of both resources (GPUs, CPUs, memory) and time. Moreover, there is an increasing need for a certified quality of service among users of Neural Network–based services.
NN Traceability offers solutions to satisfy both needs, ensuring that a deployed Neural Network is traceable and untampered.
Inherited from the multimedia realm, watermarking comprises a family of methodological and application tools that allow the imperceptible and persistent insertion of some metadata (payload) into an original NN model. Subsequently, detecting/decoding this metadata from the model itself or from any of its inferences provides the means to trace the source and to verify the authenticity.
An additional traceability technology is fingerprinting, which relates to a family of methodological and application tools that allow extracting some salient information from the original NN model (a fingerprint) and subsequently identifying that model based on the extracted information.
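For illustration only (this sketch is not part of any MPAI specification, and all names and parameters are invented), a minimal white-box watermarking scheme can embed a payload into a weight matrix by nudging secret projections of the weights toward the payload bits, and later decode the bits from the signs of those projections:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(weights, payload_bits, key, strength):
    """One-shot white-box embedding: push each secret projection of the
    flattened weights toward the sign encoding the corresponding bit."""
    target = 2.0 * np.asarray(payload_bits) - 1.0   # map {0,1} -> {-1,+1}
    delta = strength * key.T @ target / key.shape[1]
    return (weights.flatten() + delta).reshape(weights.shape)

def decode(weights, key):
    """Recover the payload bits from the signs of the secret projections."""
    return (key @ weights.flatten() > 0).astype(int)

# Toy layer, 16-bit payload, and a secret random projection matrix (the key)
W = rng.normal(scale=0.05, size=(64, 32))
bits = rng.integers(0, 2, size=16)
K = rng.normal(size=(16, W.size))

W_marked = embed(W, bits, K, strength=15.0)
recovered = decode(W_marked, K)
```

Here the secret projection matrix `K` plays the role of the watermarking key: without it, the payload is statistically hidden among the weights; with it, decoding reduces to reading signs.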
Therefore, MPAI has found the application area called “Neural Network Watermarking” to be relevant for MPAI standardization, as there is a need both for Neural Network Traceability technologies and for assessing the performance of such technologies.
MPAI standards for Neural Network Traceability
In response to these needs, MPAI has established the Neural Network Watermarking Development Committee (NNW-DC). The DC has developed the Technical Specification: Neural Network Traceability (MPAI-NNT) V1.0 that specifies methods to evaluate the following aspects of Active (Watermarking) and Passive (Fingerprinting) Neural Network Traceability Methods:
- The ability of a Neural Network Traceability Detector/Decoder to detect/decode/match Traceability Data when the traced Neural Network has been modified,
- The computational cost of injecting, extracting, detecting, decoding, or matching Traceability Data,
- Specifically for active tracing methods, the impact of inserted Traceability Data on the performance of a neural network and on its inference.
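By way of example (function names, thresholds, and the decoded bit pattern are hypothetical, not taken from MPAI-NNT), a robustness check in the spirit of the first two bullets could apply a typical modification such as magnitude pruning and then report the bit error rate of the decoded payload:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    w = weights.copy()
    k = int(sparsity * w.size)
    if k > 0:
        threshold = np.partition(np.abs(w).flatten(), k - 1)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

def bit_error_rate(decoded, payload):
    """Fraction of payload bits recovered incorrectly after the modification."""
    decoded = np.asarray(decoded)
    payload = np.asarray(payload)
    return float(np.mean(decoded != payload))

# Hypothetical decoder output after pruning a watermarked model
payload = np.array([1, 0, 1, 1, 0, 0, 1, 0])
decoded = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # one bit flipped by pruning
print(bit_error_rate(decoded, payload))  # 0.125
```

The same harness extends naturally to the other two aspects: timing the embed/decode calls gives a (rough) computational-cost figure, and comparing model accuracy before and after embedding gives the inference-impact figure.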
New MPAI Standard for Neural Network Traceability
At its 65th General Assembly held on 18 February, MPAI released a new standard for community comments: Technical Specification – Neural Network Watermarking (MPAI-NNW) – Technologies.
Scope of the New Standard
The new standard:
- Specifies a general procedure to characterise Neural Network Traceability technologies that make it possible:
  - To verify that the data provided by an Actor and transported to another Actor is not compromised, i.e., that any modifications still allow the data to be used for the intended scope.
  - To identify the Actors providing and receiving the data.
- Uses the MPAI-NNT Technical Specification [2] to evaluate the properties of Neural Network Traceability technologies that were developed based on the general procedure and applied for specific NNs or used in specific application domains.
NNW-TEC Technical Specification Versions are snapshots capturing the evolution of the general procedure and of the performance of implementations.
Practical benefits
NN Traceability Technologies enable tracking of identities of some Actors and the Modifications to the NN effected by them. Typically, a Neural Network service involves the following Actors:
- Architect: designs the architecture of the model
- Trainer: trains the model for a purpose
- Tracker: provides the tracking technology
- Distributor: distributes the trained model with the tracking technology
- Generic user: any user intended by the Distributor
- Attacker: any user, whether or not intended by the Distributor, who may apply a modification to the Neural Network subjected to the Traceability Technology.
Examples of typical Modifications applied to Neural Networks are finetuning, pruning, or quantising.
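These three Modifications can be mimicked on a raw weight tensor in a few lines (an illustrative sketch; the perturbation scale, pruning threshold, and bit width are arbitrary illustrative choices, not values from the standard):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(4, 4))

# Fine-tuning: a small gradient-like perturbation of the weights
W_finetuned = W + 0.01 * rng.normal(size=W.shape)

# Pruning: zero the weights below a magnitude threshold
W_pruned = np.where(np.abs(W) < 0.05, 0.0, W)

# Quantisation: round weights to 8-bit integers, then dequantise
scale = np.abs(W).max() / 127.0
W_quantised = np.round(W / scale).astype(np.int8) * scale
```

A traceability technology is useful only to the extent that its payload or fingerprint survives such transformations, which is why MPAI-NNT makes them central to its evaluation procedure.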
The MPAI workplan for the first half of 2026
The MPAI machine is running at full capacity and has an aggressive plan for the development and publication of new and revised standards covering the diverse industry areas where AI-based data coding technologies can effectively be applied.
Company Performance Prediction (CUI-CPP) V2.0, a significantly beefed-up version of the standard published in 2021, enables assessment of the performance of a company in a specified prediction horizon based on data concerning its governance, its finances, and its risks. CUI-CPP V2.0 was published with a request for Community Comments on 17 December 2025 and the final text approved by MPAI-65 on 18 February.
Health Secure Platform (AIH-HSP) V1.0 was published with a request for Community Comments by MPAI-64 on 21 January. The standard specifies a system enabling End Users to license their data to a Back End for processing by the Back End and by the Third-Party Users named in the licence, e.g., hospitals and universities. The Back End also collects neural networks trained by end user devices and distributes new neural networks updated by Federated Learning. MPAI-66 (18 March) is expected to approve the final version of the standard.
MPAI-64 also approved final publication of Audio Use Cases (CAE-USC) V2.4. The new version consolidates the four use cases and the AI Workflows, AI Modules, and Data Types by incorporating the new structure adopted by other MPAI standards such as MPAI-MMC V2.4 and MPAI-OSD V1.4.
MPAI-65 has approved publication of Neural Network Traceability Technologies (NNW-TEC) V1.0 with a Request for Community Comments. The standard assesses Imperceptibility, Robustness, and Computational Cost of specific Neural Network Traceability technologies capable of tracking the identities of some Actors and the Neural Network modifications effected by them. The final version of NNW-TEC V1.0 is expected to be approved on 15 April.
MPAI-66 (18 March) is expected to approve publication of new versions of two standards with Requests for Community Comments. The first is AI Framework (MPAI-AIF) V3.0. The second is MPAI Metaverse Model Technologies (MMM-TEC) V2.2; this new version will include a new set of technologies enabling the realisation of virtual economies in MMM-TEC-conforming M-Instances, with the support of two independent and interoperable reference software implementations – one based on Unity and one on Open Simulator. The final versions of both standards are expected to be approved on 13 May.
Audio will be the main player at MPAI-67 (15 April), when MPAI plans to publish the Requests for Community Comments of two new Audio standards – Audio Six Degrees of Freedom (CAE-6DF) V1.0 and Audio Object Rendering (CAE-AOR) V1.0. CAE-6DF will specify the digital representation of a sound field, captured with microphone arrays (at least 3), each with at least 4 microphones, to enable a decoder to reproduce any point of the sound field from the listener’s Position and Orientation. The standard will include reference software generated by AI. CAE-AOR will specify an AI Workflow enabling a user to render a real or synthetic Audio Object, changing the listener’s position/orientation and remixing, re-positioning, substituting, or adding motion to audio objects, or modifying the acoustic environment. The final versions of both standards are expected to be approved on 10 June.
The following General Assembly, MPAI-68 (13 May), is expected to publish new versions of four standards with Requests for Community Comments: Object and Scene Description (MPAI-OSD) V1.5, Portable Avatar Format (MPAI-PAF) V1.6, Data Types, Formats, and Attributes (MPAI-TFA) V1.5, and Connected Autonomous Vehicle Technologies (CAV-TEC) V1.1. The new versions of the first three standards introduce new Data Types required by several application-oriented standards (mostly MMM-TEC, PGM-AUA, and XRV-LTP). The new version of the fourth will be a major overhaul of CAV-TEC V1.0, as it will use the new MPAI-AIF V3.0 Data Types to specify a security framework for CAV-TEC. The final versions are expected to be published on 8 July.
MPAI-69 (10 June) is expected to produce four standards published with Requests for Community Comments. The first is Connected Autonomous Vehicle Technologies (CAV-TEC) V1.1, the first MPAI standard to adopt the new approach to security. It will be followed by Autonomous User Architecture (PGM-AUA) V1.0, part of Pursuing Goals in the Metaverse, and by Multimodal Conversation (MPAI-MMC) V2.5. PGM-AUA will be the first known attempt at specifying an Autonomous Agent architecture operating in a virtual space. MPAI-MMC will introduce new AI Modules and Data Types supporting conversation between two entities, either two Autonomous Agents or an Autonomous Agent and a human. The fourth standard – Live Theatrical Performance in XR Venues (XRV-LTP) V1.0 – will be the result of a major effort to use AI-based data coding standards to facilitate rapid mounting of shows in real/virtual venues, enabling direct, precise, yet spontaneous show implementation and control to achieve the show director’s vision. The final versions of the last three standards are expected to be approved on 19 August.
| Standard | V | Acronym | ComCom | Final |
|---|---|---|---|---|
| CUI Company Performance Prediction | 2.0 | CUI-CPP | 25/12/17 | 26/02/18 |
| AIH Health Secure Platform | 1.0 | AIH-HSP | 26/01/21 | 26/03/18 |
| Audio Use Cases | 2.4 | CAE-USC | – | 26/01/21 |
| NN Traceability Technologies | 1.0 | NNW-TEC | 26/02/18 | 26/04/15 |
| MPAI Metaverse Model – Technologies | 2.2 | MMM-TEC | 26/03/18 | 26/05/13 |
| Audio Six Degrees of Freedom | 1.0 | CAE-6DF | 26/04/15 | 26/06/10 |
| Audio Object Rendering | 1.0 | CAE-AOR | 26/04/15 | 26/06/10 |
| AI Framework (Controller Access from Remote) | 3.0 | MPAI-AIF | 26/03/18 | 26/05/13 |
| Object and Scene Description | 1.5 | MPAI-OSD | 26/05/13 | 26/07/08 |
| Portable Avatar Format | 1.6 | MPAI-PAF | 26/05/13 | 26/07/08 |
| Data Types, Formats, and Attributes | 1.5 | MPAI-TFA | 26/05/13 | 26/07/08 |
| PGM Autonomous User Architecture | 1.0 | PGM-AUA | 26/06/10 | 26/08/19 |
| Multimodal Conversation | 2.5 | MPAI-MMC | 26/06/10 | 26/08/19 |
| XRV Live Theatrical Performance | 1.0 | XRV-LTP | 26/06/10 | 26/08/19 |
Meetings in the February – March 2026 cycle
| Group name | 23-27 Feb | 02-06 Mar | 09-13 Mar | 16-20 Mar | Time (UTC) |
|---|---|---|---|---|---|
| AI Framework | 23 | 2 | 9 | 16 | 16 |
| AI-based End-to-End Video Coding | 25 | | 11 | | 15 |
| AI-Enhanced Video Coding | | | 11 | | 14 |
| Artificial Intelligence for Health | 27 | | 13 | | 15 |
| | | | | 16 | 16 |
| Communication | | | 12 | | 14 |
| | 26 | | | | 17 |
| Compression & Understanding of Industrial Data | | | 11 | 16 | 10 |
| Connected Autonomous Vehicle | 26 | 5 | 12 | 19 | 15 |
| Context-based Audio enhancement | 24 | 3 | 10 | 17 | 17 |
| Industry and Standards | 27 | | 13 | | 17 |
| MPAI Metaverse Model | 26 | 5 | 12 | 19 | 16 |
| Multimodal Conversation | | 3 | 10 | 17 | 14 |
| | 24 | | | | 16 |
| Neural Network Watermarking | 24 | 3 | 10 | 17 | 15 |
| Portable Avatar Format | 27 | | 9 | | |
| XR Venues | 24 | 3 | 10 | 17 | 18 |
| General Assembly (MPAI-66) | | | | 18 | 15 |