Optimization Methods for Neural Networks Pruning

Artificial Neural Networks are powerful learning models that achieve extraordinary performance on many complex tasks. However, state-of-the-art architectures often require a huge number of parameters, making them hard to deploy on limited hardware resources.

This thesis is devoted to the application to neural networks of modern optimization algorithms designed to induce sparsity (a low number of nonzero variables) by introducing ℓ1-norm or ℓ0-pseudonorm terms into the training problem.
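
As a rough illustration of the ℓ1 side of this idea (a minimal sketch in PyTorch, not the specific algorithms studied in the thesis; the model, data shapes, and the penalty weight lam are illustrative placeholders), one can add the sum of absolute weight values to the training loss, which pushes the optimizer toward solutions where many weights are exactly or nearly zero:

```python
# Minimal sketch: l1-regularized training step (illustrative, not the
# thesis's method). Model architecture and lam are placeholder choices.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
lam = 1e-4  # regularization strength (illustrative value)

def training_step(x, y):
    optimizer.zero_grad()
    data_loss = criterion(model(x), y)
    # l1 penalty: sum of absolute values of all trainable parameters
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    (data_loss + lam * l1_penalty).backward()
    optimizer.step()
```

An ℓ0 term, by contrast, directly counts the nonzero variables; it is nonsmooth and combinatorial in nature, which is why specialized optimization algorithms of the kind the thesis studies are needed rather than plain gradient descent.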
