Quaresmi seminar this Thursday 12 May 2022: "Spectral Tools for Training and Analysing Neural Networks"

This Thursday 12 May, for the last Quaresmi seminar of this academic year, we have the pleasure of welcoming Lorenzo Giambagli, a researcher at the University of Florence currently visiting the University of Namur, who will give a talk entitled "Spectral tools for training and analysing neural networks". Here is the abstract:

Deep Feedforward Neural Networks (FFNNs) play a central role in the Machine Learning field. They are usually trained in the space of nodes, by adjusting the weights of existing links via suitable optimization protocols. Recently, a radically new approach has been proposed (1): by anchoring the learning process in reciprocal space, the targets of the optimization become the eigenvectors and eigenvalues of the transfer operators between layers.
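
For concreteness, here is a minimal sketch, not the authors' code, of such a spectral parametrization for a single fully connected layer. It assumes the simplified form in which the effective weights read w_ji = (λ_i − λ_j) φ_ji, with the eigenvalues λ and the eigenvector entries φ as trainable parameters; the class and attribute names are illustrative.

```python
import torch
import torch.nn as nn

class SpectralLinear(nn.Module):
    """Linear layer parametrized in reciprocal (spectral) space.

    Instead of training the weight matrix directly, we train eigenvalues
    attached to the input and output nodes (lam_in, lam_out) and the
    eigenvector entries (phi) of the transfer operator between layers;
    the effective weights follow as w_ji = (lam_in_i - lam_out_j) * phi_ji.
    """

    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.lam_in = nn.Parameter(torch.randn(n_in))    # input-node eigenvalues
        self.lam_out = nn.Parameter(torch.randn(n_out))  # output-node eigenvalues
        self.phi = nn.Parameter(0.01 * torch.randn(n_out, n_in))  # eigenvector entries

    def weight(self) -> torch.Tensor:
        # Rebuild the direct-space weights from the spectral parameters.
        return (self.lam_in.unsqueeze(0) - self.lam_out.unsqueeze(1)) * self.phi

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight().t()
```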

Shifting the focus to these fundamental mathematical structures has allowed us to understand their pivotal role in training and analysing NNs. Indeed, when seeking a small subset of trainable parameters capable of carrying out the training procedure, the eigenvalues are what to look for (2). Choosing them as trainable parameters lets the optimizer adjust several weights in parallel, namely those singled out by the corresponding eigenvector, which in turn makes their interpretation after training possible.
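
In that spirit, a small trainable subset is obtained by freezing the eigenvector entries and optimizing the eigenvalues alone; continuing the hypothetical SpectralLinear sketch above:

```python
layer = SpectralLinear(784, 100)
layer.phi.requires_grad_(False)  # freeze the eigenvector entries

# Only n_in + n_out eigenvalues remain trainable, instead of n_in * n_out weights.
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```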

First, the magnitude of the eigenvalues after training has been shown, both empirically and heuristically, to be a proxy for their relevance in the optimization process. Indeed, a precise correspondence between nodes and eigenvalues can be established, leading to a novel pruning procedure: nodes associated with low-magnitude eigenvalues can be removed, yielding a fast and easily implemented network-compression algorithm (3).
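
Under that correspondence, the pruning rule admits a very short sketch, again built on the hypothetical SpectralLinear above rather than on the authors' implementation: rank the output nodes by the magnitude of their eigenvalue and keep only the top ones.

```python
def spectral_prune(layer: SpectralLinear, keep: int) -> SpectralLinear:
    """Keep the `keep` output nodes with the largest |eigenvalue|."""
    idx = torch.argsort(layer.lam_out.abs(), descending=True)[:keep]
    pruned = SpectralLinear(layer.lam_in.numel(), keep)
    with torch.no_grad():
        pruned.lam_in.copy_(layer.lam_in)
        pruned.lam_out.copy_(layer.lam_out[idx])
        pruned.phi.copy_(layer.phi[idx])
    return pruned
```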

Second, by accounting for the eigenvalues in the optimization process, it is possible to dynamically train sparse networks (2). Sparsity constraints in direct space imply that certain weights are filtered out by a mask, so that their gradient is zero throughout training. Working in reciprocal space, however, allows masked weights to still be modified, thanks to the non-local effect of the eigenvalues. This approach yields sparse networks whose topology is not fixed to the starting one, resulting in a much more efficient training.
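
A toy illustration of this effect, hedged in the same way as the sketches above: if the sparsity mask is recomputed from the current weights at every step, gradients reach the spectral parameters only through the surviving entries, yet each eigenvalue also shifts the masked weights, so the set of surviving links, i.e. the topology, can evolve during training.

```python
class DynamicSparseSpectralLinear(SpectralLinear):
    def __init__(self, n_in: int, n_out: int, density: float = 0.1):
        super().__init__(n_in, n_out)
        self.k = max(1, int(density * n_in * n_out))  # number of links kept

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight()
        with torch.no_grad():
            # Keep the k largest-magnitude weights; the mask is not fixed.
            thresh = w.abs().flatten().topk(self.k).values[-1]
            mask = (w.abs() >= thresh).float()
        # Masked weights pass no gradient, but lam_in / lam_out act
        # non-locally on whole rows and columns of w, so masked entries
        # keep moving and may re-enter the mask at a later step.
        return x @ (mask * w).t()
```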

1. Lorenzo Giambagli, Lorenzo Buffoni, Timoteo Carletti, Walter Nocentini & Duccio Fanelli, "Machine learning in spectral domain", Nature Communications, 2021.

2. Lorenzo Giambagli, Lorenzo Chicchi, Lorenzo Buffoni, Timoteo Carletti, Marco Ciavarella & Duccio Fanelli, "On the training of sparse and dense deep neural networks: less parameters, same performance", Physical Review E, 2021.

3. Lorenzo Giambagli, Lorenzo Buffoni, Enrico Civitelli, Lorenzo Chicchi & Duccio Fanelli, "Spectral Pruning of Fully Connected Layers", 2021 (under peer review at Scientific Reports).

The seminar, open to all, will take place in person at ICHEC on Thursday 12 May at 1 pm in room E202. To help with the organization, please register by writing to quaresmi@ichec.be.

We look forward to seeing many of you there.