Efficiency and generalization of sparse neural networks
Peste E-A. 2023. Efficiency and generalization of sparse neural networks. Institute of Science and Technology Austria.
Thesis | PhD | Published | English
Author: Peste, Elena-Alexandra
Supervisor:
Department:
Series Title: ISTA Thesis
Abstract
Deep learning has become an integral part of a large number of important applications, and many of the recent breakthroughs have been enabled by the ability to train very large models, capable of capturing complex patterns and relationships from the data. At the same time, the massive sizes of modern deep learning models have made their deployment to smaller devices more challenging; this is particularly important, as in many applications users rely on accurate deep learning predictions but only have access to devices with limited memory and compute power. One solution to this problem is to prune neural networks by setting as many of their parameters as possible to zero, obtaining accurate sparse models with a lower memory footprint. Despite the great research progress in obtaining sparse models that preserve accuracy while satisfying memory and computational constraints, there are still many challenges associated with efficiently training sparse models, as well as with understanding their generalization properties.
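For readers unfamiliar with pruning, the following is a minimal sketch of global magnitude pruning in PyTorch, an illustrative baseline rather than the specific methods studied in the thesis; the function name, the choice to skip biases, and the 90% sparsity target are assumptions made for the example.

```python
import torch

@torch.no_grad()
def magnitude_prune(model: torch.nn.Module, sparsity: float) -> None:
    """Zero out roughly the `sparsity` fraction of smallest-magnitude weights, globally."""
    # Illustrative sketch only; biases are left dense, as is common in practice.
    weights = [p for p in model.parameters() if p.dim() > 1]
    scores = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * scores.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(scores, k).values  # k-th smallest |w| across the model
    for w in weights:
        w.mul_((w.abs() > threshold).float())     # set small-magnitude weights to zero

# Example: prune a small MLP to ~90% sparsity.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)
magnitude_prune(model, sparsity=0.9)
```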
The focus of this thesis is to investigate how the training process of sparse models can be made more efficient, and to understand the differences between sparse and dense models in terms of how well they can generalize to changes in the data distribution. We first study a method for co-training sparse and dense models, at a lower cost compared to regular training. With our method we can obtain very accurate sparse networks, and dense models that can recover the baseline accuracy. Furthermore, we are able to more easily analyze the differences, at prediction level, between the sparse-dense model pairs. Next, we investigate the generalization properties of sparse neural networks in more detail, by studying how well different sparse models trained on a larger task can adapt to smaller, more specialized tasks, in a transfer learning scenario. Our analysis across multiple pruning methods and sparsity levels reveals that sparse models provide features that can transfer similarly to or better than the dense baseline. However, the choice of the pruning method plays an important role, and can influence the results when the features are fixed (linear finetuning), or when they are allowed to adapt to the new task (full finetuning). Using sparse models with fixed masks for finetuning on new tasks has an important practical advantage, as it enables training neural networks on smaller devices. However, one drawback of current pruning methods is that the entire training cycle has to be repeated to obtain the initial sparse model, for every sparsity target; in consequence, the entire training process is costly and also multiple models need to be stored. In the last part of the thesis we propose a method that can train accurate dense models that are compressible in a single step, to multiple sparsity levels, without additional finetuning. Our method results in sparse models that can be competitive with existing pruning methods, and which can also successfully generalize to new tasks.
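To make the fixed-mask finetuning setting mentioned in the abstract concrete, here is a hypothetical sketch, not the thesis's code: the sparsity pattern of an already-pruned model is frozen, and the mask is re-applied after every optimizer step so that pruned weights remain exactly zero while the surviving weights adapt to the new task. Function names, the SGD hyperparameters, and the training loop are assumptions made for the example; linear finetuning would instead freeze all backbone weights and train only a new classifier head.

```python
import torch

def finetune_with_fixed_mask(model, loader, epochs=1, lr=1e-3):
    # Freeze the current sparsity pattern: 1 where a weight is nonzero, 0 where it was pruned.
    masks = {n: (p != 0).float() for n, p in model.named_parameters() if p.dim() > 1}
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            # Re-apply the mask so pruned weights stay exactly zero after the update.
            with torch.no_grad():
                for n, p in model.named_parameters():
                    if n in masks:
                        p.mul_(masks[n])
    return model
```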
Publishing Year: 2023
Date Published: 2023-05-23
Publisher: Institute of Science and Technology Austria
Pages: 147
IST-REx-ID: 13074
Cite this
Peste E-A. Efficiency and generalization of sparse neural networks. 2023. doi:10.15479/at:ista:13074
Peste, E.-A. (2023). Efficiency and generalization of sparse neural networks. Institute of Science and Technology Austria. https://doi.org/10.15479/at:ista:13074
Peste, Elena-Alexandra. “Efficiency and Generalization of Sparse Neural Networks.” Institute of Science and Technology Austria, 2023. https://doi.org/10.15479/at:ista:13074.
E.-A. Peste, “Efficiency and generalization of sparse neural networks,” Institute of Science and Technology Austria, 2023.
Peste E-A. 2023. Efficiency and generalization of sparse neural networks. Institute of Science and Technology Austria.
Peste, Elena-Alexandra. Efficiency and Generalization of Sparse Neural Networks. Institute of Science and Technology Austria, 2023, doi:10.15479/at:ista:13074.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]
Main File(s)
File Name:
Access Level: Open Access
Date Uploaded: 2023-05-24
MD5 Checksum: 6b3354968403cb9d48cc5a83611fb571
Source File
File Name: PhD_Thesis_APeste.zip (1.66 MB)
Access Level: Closed Access
Date Uploaded: 2023-05-24
MD5 Checksum: 8d0df94bbcf4db72c991f22503b3fd60
Material in ISTA:
Part of this Dissertation (3 linked records)