
Capitalised Terms in EVC-UFV V1.0 have the meaning defined in Table 1.

A dash “–” preceding a Term in Table 1 indicates one of the following readings, depending on the font of the Term:

  1. Normal font: the dashed Term is read after the undashed Term preceding it in the table. For example, “Risk” followed by “– Assessment” yields “Risk Assessment”.
  2. Italic font: the dashed Term is read before the undashed Term preceding it in the table. For example, “Descriptor” followed by “– Financial” yields “Financial Descriptor”.

All MPAI-specified Terms are defined online.

Table 1 – Terms defined and/or used by EVC-UFV

Term Definition
Activation Function A mathematical function determining whether a neuron should be activated, based on the input to the neuron.
Block A fundamental component or module within a Neural Network architecture.
Channel A single slice of data along the depth of a tensor. For example, in an image, each component of the colour space is a single Channel.
Data Augmentation A technique that increases the training dataset with new training examples obtained by altering some features of the original training dataset.
Densely Residual Laplacian
– Module (DRLM) A set of RUs where each RU is followed by a Concatenation Layer.
– Network A Deep Learning Model that combines dense connections, residual learning, and Laplacian Pyramids to enhance image restoration tasks such as super-resolution and denoising.
Dilation A technique for expanding a convolutional kernel by inserting holes or gaps between its elements.
Dependency Graph (DepGraph) A framework that simplifies the Structured Pruning of Neural Networks.
Epoch One complete cycle through all the Training data when training a Machine Learning Model.
Feature Map The output of a Convolutional Layer.
Fine Tuning The process of re-training, on a new dataset B, a Model previously trained on a dataset A.
Importance The arithmetic mean of all Parameters of a Channel (illustrated in the first sketch after Table 1).
Inference The process of running a Model on an input to produce an output. 
Initial Number of Parameters The number of Parameters of the unpruned Model.
Input/Output Decomposition The process of breaking down complex input data into simpler, more manageable components or features, and then using these to generate meaningful outputs.
Dependency Graph A graph representing the dependencies between any input and output decomposition.
Laplacian
– Attention Unit (LC) A set of Convolutional Layers with a square filter size and a Dilation that is greater than or equal to the filter size.
– Pyramid A multi-scale representation of an image obtained by repeatedly Down-sampling it and retaining, at each level, the difference between that level and the Up-sampled version of the next, coarser level.
Layer A set of parameters at a particular depth in a Neural Network.
– Concatenation A Layer that combines the outputs of multiple Layers into a single tensor.
– Convolutional A Layer of a Neural Network Model that applies a convolutional filter to the input.
Learning
– Deep A type of Machine Learning that uses artificial Neural Networks with many Layers to learn patterns from data.
– Machine A class of algorithms that enable computers to learn from data and thus make predictions, called Inferences, from new data.
– Rate A value determining the step size at each iteration toward a minimum of the Loss Function.
– Sparsity A learning strategy to detect the most relevant features of a Model, among all the Model’s features, for a particular learning task.
Loss Function A mathematical function that measures the distance between the output of a Machine Learning Model and the actual value.
Maximum Pruning Ratio The highest percentage of a Neural Network’s Parameters (weights, Neurons, or connections) that can be removed without causing a significant performance drop.
Model
– Deep Learning An algorithm implemented with a multi-Layer Neural Network.
– Machine Learning An algorithm able to identify patterns or make predictions on datasets not experienced before.
– Pre-trained A Model that has been trained on a Dataset possibly different from the one on which the Model is to be used.
– Recovery Phase A training procedure applied after Pruning to recover part of the performance lost during the application of the Pruning algorithm.
Neural Network Also Artificial Neural Network, a set of interconnected data processing nodes whose connections are affected by Weights.
Neuron A data processing node in a Neural Network.
Patch A square subset of a frame, whose size is often a multiple of 2 (e.g., 8×8, 16×16, 32×32).
Parameter The multiplier of an input to a Neural Network Neuron, learned via Training.
Performance Criterion The percentage ratio between the performance of the Pruned Model and that of the unpruned Model that is considered acceptable.
Pre-training A phase of Neural Network Model Training where a Model is trained on an often-generic dataset, allowing it to learn a more generic representation of the task.
Pruning The process of removing less important Parameters (such as weights or Neurons) from a Neural Network to reduce its size and computational requirements while retaining as much of the Model’s performance as possible.
– Group A group of decompositions that depend on each other and are therefore subject to joint Pruning.
– Target The percentage of the Model Parameters, computed with reference to the Initial Number of Parameters, to be pruned.
– Growing Regularisation A technique that forces the Model to drive a whole dimension toward low values before Pruning is applied, so that the removed dimension is already close to zero and its removal does not impact the Model’s result.
– Learning-Based A set of Pruning techniques that require variations of learning in order to be implemented.
– Recovery A method that involves retraining a pruned Neural Network to regain any lost accuracy.
– Structured A method that removes entire components, such as Neurons, filters, or Channels, resulting in a smaller dense Model architecture.
– Unstructured A method that removes single redundant Neurons, creating a sparse Model representation that does not, however, compute faster on common hardware (Structured and Unstructured Pruning are contrasted in the second sketch after Table 1).
Rectified Linear Unit (ReLU) An Activation Function whose output is the input if it is positive and zero otherwise. 
Residual
– Block A Block composed of concatenated DRLM modules where each module is followed by a Concatenation Layer and a Convolutional Layer.
– Function A function that provides the difference between the input and the desired output of a Layer or a stack of Layers.
– Neural Network (ResNet) A Neural Network whose Layers learn Residual Functions with reference to the inputs to each Layer.
– Unit (RU) A set of alternating ReLU and Convolutional Layers.
Resolution
– Visual The dimension in pixels, expressed as width × height (e.g., 1920×1080), indicating how many pixels make up an image or a Video Frame.
Saliency Value A value representing the ability of an image or a Video Frame to grab the attention of a human.
Sampling
– Down- The process of reducing the Visual Resolution. 
– Up- The process of increasing the Visual Resolution. 
Super Resolution The technique enabling the generation of high-Resolution Visual Data from low-Resolution Visual Data.
Training The process of letting a Model experience examples of inputs that the Trained Model might experience or outputs that the Trained Model should produce, or both.
– Set The dataset used to train a Model.
Validation The process of evaluating a Trained Model on a dataset (called Validation Set) that the Model has not experienced during training.
– Score The error of a Model on the Validation Set.
– Set The dataset used to check the performance of a Model, e.g., to know when to stop the Training.
Video Frame An image drawn from the sequence of images composing a video.
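
Several Terms in Table 1 are quantitative (Importance, Pruning Target, Performance Criterion, ReLU). The following non-normative Python sketch shows one possible reading of those definitions; it assumes NumPy and a convolutional weight tensor of shape (out_channels, in_channels, k, k), and all function names are hypothetical rather than part of EVC-UFV.

```python
# Non-normative illustration of quantitative Terms in Table 1.
# Assumptions: NumPy; weights shaped (out_channels, in_channels, k, k).
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    """Rectified Linear Unit: the input if positive, zero otherwise."""
    return np.maximum(0.0, x)

def channel_importance(weights: np.ndarray) -> np.ndarray:
    """Importance: arithmetic mean of all Parameters of each output Channel."""
    return weights.reshape(weights.shape[0], -1).mean(axis=1)

def pruning_target(initial_params: int, remaining_params: int) -> float:
    """Pruning Target: percentage of Parameters removed, computed with
    reference to the Initial Number of Parameters."""
    return 100.0 * (initial_params - remaining_params) / initial_params

def performance_criterion(pruned_score: float, unpruned_score: float) -> float:
    """Performance Criterion: percentage ratio between the performance of
    the Pruned Model and that of the unpruned Model."""
    return 100.0 * pruned_score / unpruned_score

# Hypothetical Convolutional Layer: 64 output Channels, 32 inputs, 3x3 kernels.
w = np.random.randn(64, 32, 3, 3)
print(channel_importance(w).shape)            # (64,): one Importance per Channel
print(pruning_target(1_000_000, 600_000))     # 40.0: 40% of Parameters pruned
print(performance_criterion(34.2, 35.0))      # ~97.7: performance retained (%)
```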
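
The contrast between Structured and Unstructured Pruning can be sketched in the same hypothetical setting; again, the code below is illustrative only and does not describe a normative EVC-UFV algorithm.

```python
# Non-normative contrast of Structured vs. Unstructured Pruning,
# reusing the hypothetical weight tensor shape from the sketch above.
import numpy as np

def structured_prune(weights: np.ndarray, keep: np.ndarray) -> np.ndarray:
    """Structured Pruning: remove entire output Channels, yielding a
    smaller but still dense tensor."""
    return weights[keep]  # keep: boolean mask over output channels

def unstructured_prune(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Unstructured Pruning: zero out single Parameters, yielding a sparse
    tensor of the original shape (not faster on common hardware)."""
    out = weights.copy()
    out[np.abs(out) < threshold] = 0.0
    return out

w = np.random.randn(64, 32, 3, 3)
# Importance-based mask: keep Channels whose mean Parameter magnitude is large.
keep = np.abs(w.reshape(64, -1).mean(axis=1)) >= 0.01
print(structured_prune(w, keep).shape)           # (K, 32, 3, 3) with K <= 64
print((unstructured_prune(w, 0.5) == 0).mean())  # fraction of zeroed Parameters
```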

 
