Capitalised Terms in EVC-UFV V1.0 have the meaning defined in Table 1.

A dash “–” preceding a Term in Table 1 indicates that the Term is read together with the closest preceding undashed Term, according to the font:

  1. Normal font: the dashed Term is read after the preceding undashed Term. For example, “Risk” and “– Assessment” yield “Risk Assessment”.
  2. Italic font: the dashed Term is read before the preceding undashed Term. For example, “Descriptor” and “– Financial” yield “Financial Descriptor”.

All MPAI-specified Terms are defined online.

Table 1 – EVC-UFV Terms

| Term | Definition |
|------|------------|
| Block | |
| – Residual | A Block composed of concatenated DRLM Modules, where each Module is followed by a Concatenation Layer and a Convolutional Layer (illustrated in a sketch after Table 1). |
| Data Augmentation | A technique that increases the size of the training dataset with new training examples obtained by altering some features of the original training data. |
| Densely Residual Laplacian | |
| – Module (DRLM) | A set of RUs, where each RU is followed by a Concatenation Layer. |
| – Network | |
| Dependency Graph (DepGraph) | A framework that simplifies the Structured Pruning of neural networks (illustrated in a sketch after Table 1). |
| Dilation | |
| Fine Tuning | |
| Frame | |
| – Video | |
| Inference | |
| Laplacian Attention Unit (LC) | A set of Convolutional Layers with a square filter size and a Dilation greater than or equal to the filter size. |
| Layer | |
| – Concatenation | |
| – Convolutional | |
| Model | |
| – Deep Learning | |
| – Machine Learning | |
| – Pre-trained | |
| Parameter | |
| Patch | |
| Patience | |
| Pre-training | |
| Pruning | The process of removing less important Parameters (such as weights or neurons) from a neural network to reduce its size and computational requirements while retaining the model performance. |
| – Growing Regularization | A technique that forces the model to drive a whole dimension towards low values before Pruning is applied, so that the removed dimension is already close to zero and its removal does not impact the model result (illustrated in a sketch after Table 1). |
| – Learning-Based | A set of Pruning techniques that require some form of learning in order to be implemented. |
| – Recovery | A method that retrains a pruned neural network to regain any lost accuracy. |
| – Structured | A method that removes entire components, such as neurons, filters, or channels, resulting in a smaller dense model architecture. |
| – Unstructured | A method that removes single redundant neurons, creating a sparse model representation that does not compute faster on common hardware (illustrated in a sketch after Table 1). |
| Rectified Linear Unit (ReLU) | |
| Residual Unit (RU) | A set of alternating ReLU and Convolutional Layers. |
| Resolution | |
| Saliency Value | |
| Sampling | |
| – Down- | |
| – Up- | |
| Super Resolution | |
| Training | |
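
For readers who find a concrete structure easier to follow, the architecture Terms in Table 1 (Residual Unit, DRLM, and Residual Block) can be illustrated with a minimal PyTorch-style sketch. The class names, channel counts, unit counts, and the use of 1×1 Convolutional Layers to realise the Concatenation Layers are illustrative assumptions, not normative parts of EVC-UFV V1.0.

```python
# Illustrative sketch only: channel counts, unit counts, and the 1x1 fusion
# convolutions are assumptions, not the normative EVC-UFV architecture.
import torch
import torch.nn as nn


class ResidualUnit(nn.Module):
    """Residual Unit (RU): a set of alternating ReLU and Convolutional Layers."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection


class DRLM(nn.Module):
    """Densely Residual Laplacian Module: RUs, each followed by a Concatenation Layer."""

    def __init__(self, channels: int, num_units: int = 3):
        super().__init__()
        self.units = nn.ModuleList([ResidualUnit(channels) for _ in range(num_units)])
        # 1x1 convolutions fuse the growing concatenated feature stack back to `channels`.
        self.concat_fuse = nn.ModuleList(
            [nn.Conv2d(channels * (i + 2), channels, kernel_size=1) for i in range(num_units)]
        )

    def forward(self, x):
        features, out = [x], x
        for unit, fuse in zip(self.units, self.concat_fuse):
            out = unit(out)
            features.append(out)
            out = fuse(torch.cat(features, dim=1))  # Concatenation Layer
        return out


class ResidualBlock(nn.Module):
    """Residual Block: concatenated DRLMs, each followed by a Concatenation and Convolutional Layer."""

    def __init__(self, channels: int, num_modules: int = 3):
        super().__init__()
        self.drlms = nn.ModuleList([DRLM(channels) for _ in range(num_modules)])
        self.concat_conv = nn.ModuleList(
            [nn.Conv2d(channels * (i + 2), channels, kernel_size=1) for i in range(num_modules)]
        )

    def forward(self, x):
        features, out = [x], x
        for drlm, conv in zip(self.drlms, self.concat_conv):
            out = drlm(out)
            features.append(out)
            out = conv(torch.cat(features, dim=1))
        return x + out


# Example: y = ResidualBlock(channels=32)(torch.randn(1, 32, 64, 64))
```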
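The Dependency Graph (DepGraph) Term can be illustrated with a minimal usage sketch, assuming the open-source Torch-Pruning library, where DepGraph is implemented, and a torchvision ResNet-18 as a stand-in model. The layer choice and channel indices are arbitrary, and the exact API may differ across library versions.

```python
# Minimal DepGraph sketch, assuming the Torch-Pruning library
# (pip install torch-pruning); model, layer, and indices are illustrative.
import torch
import torch_pruning as tp
from torchvision.models import resnet18

model = resnet18()
example_inputs = torch.randn(1, 3, 224, 224)

# Build the dependency graph by tracing the model once.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# Group every layer whose shape depends on removing these output channels of conv1.
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[2, 6, 9])

# Prune the whole group consistently: this is how DepGraph simplifies
# Structured Pruning across interdependent layers.
if DG.check_pruning_group(group):
    group.prune()
```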
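Growing Regularization, as defined in Table 1, can be sketched as a training-time penalty whose coefficient grows over time on the dimensions selected for removal. The filter-selection rule, schedule, coefficient values, and toy loss below are illustrative assumptions.

```python
# Minimal sketch of Growing Regularization; selection rule, schedule,
# and the stand-in task loss are illustrative assumptions.
import torch
import torch.nn as nn

conv = nn.Conv2d(8, 16, kernel_size=3, padding=1)
opt = torch.optim.SGD(conv.parameters(), lr=0.01)

# Select the dimensions to be removed, here the 4 filters with smallest L1 norm.
prune_idx = conv.weight.detach().flatten(1).norm(p=1, dim=1).argsort()[:4]

coeff, grow_every, increment = 0.0, 10, 1e-4
for step in range(100):
    x = torch.randn(2, 8, 16, 16)
    task_loss = conv(x).pow(2).mean()  # stand-in for the real task loss
    # L2 penalty restricted to the filters scheduled for removal.
    reg = coeff * conv.weight[prune_idx].pow(2).sum()
    loss = task_loss + reg
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (step + 1) % grow_every == 0:
        coeff += increment  # the regularization grows over time

# The selected filters are now driven towards zero, so removing them
# (Structured Pruning) has little impact on the model result.
```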
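Finally, the contrast between Structured and Unstructured Pruning, and the role of Recovery, can be shown with PyTorch's torch.nn.utils.prune utilities; the layer sizes and pruning amounts below are arbitrary.

```python
# Unstructured vs Structured Pruning with torch.nn.utils.prune;
# layer sizes and pruning amounts are arbitrary.
import torch.nn as nn
import torch.nn.utils.prune as prune

# Unstructured: zero out individual weights, giving a sparse representation
# that does not compute faster on common hardware.
sparse_conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
prune.l1_unstructured(sparse_conv, name="weight", amount=0.3)

# Structured: mask whole filters (dim=0); physically removing them yields
# a smaller dense model architecture.
dense_conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
prune.ln_structured(dense_conv, name="weight", amount=0.25, n=2, dim=0)

# Make the pruning permanent; Recovery then retrains (fine-tunes) the
# pruned network to regain any lost accuracy.
prune.remove(sparse_conv, "weight")
prune.remove(dense_conv, "weight")
```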