{"oa_version":"Published Version","oa":1,"date_created":"2023-05-23T17:07:53Z","user_id":"8b945eb4-e2f2-11eb-945a-df72226e66a9","publisher":"Institute of Science and Technology Austria","title":"Efficiency and generalization of sparse neural networks","status":"public","related_material":{"record":[{"status":"public","id":"11458","relation":"part_of_dissertation"},{"relation":"part_of_dissertation","id":"13053","status":"public"},{"status":"public","relation":"part_of_dissertation","id":"12299"}]},"alternative_title":["ISTA Thesis"],"type":"dissertation","file":[{"relation":"main_file","checksum":"6b3354968403cb9d48cc5a83611fb571","file_name":"PhD_Thesis_Alexandra_Peste_final.pdf","access_level":"open_access","creator":"epeste","content_type":"application/pdf","date_updated":"2023-05-24T16:11:16Z","file_size":2152072,"date_created":"2023-05-24T16:11:16Z","file_id":"13087","success":1},{"date_updated":"2023-05-24T16:12:59Z","file_id":"13088","file_size":1658293,"date_created":"2023-05-24T16:12:59Z","relation":"source_file","checksum":"8d0df94bbcf4db72c991f22503b3fd60","content_type":"application/zip","file_name":"PhD_Thesis_APeste.zip","access_level":"closed","creator":"epeste"}],"day":"23","degree_awarded":"PhD","supervisor":[{"orcid":"0000-0001-8622-7887","first_name":"Christoph","last_name":"Lampert","id":"40C20FD2-F248-11E8-B48F-1D18A9856A87","full_name":"Lampert, Christoph"},{"last_name":"Alistarh","first_name":"Dan-Adrian","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","full_name":"Alistarh, Dan-Adrian","orcid":"0000-0003-3650-940X"}],"article_processing_charge":"No","month":"05","date_updated":"2023-08-04T10:33:27Z","date_published":"2023-05-23T00:00:00Z","publication_status":"published","department":[{"_id":"GradSch"},{"_id":"DaAl"},{"_id":"ChLa"}],"abstract":[{"lang":"eng","text":"Deep learning has become an integral part of a large number of important applications, and many of the recent breakthroughs have been enabled by the ability to train very large models, capable of capturing complex patterns and relationships from the data. At the same time, the massive sizes of modern deep learning models have made their deployment to smaller devices more challenging; this is particularly important, as in many applications users rely on accurate deep learning predictions, but only have access to devices with limited memory and compute power. One solution to this problem is to prune neural networks, by setting as many of their parameters as possible to zero, to obtain accurate sparse models with a lower memory footprint. Despite the great research progress in obtaining sparse models that preserve accuracy while satisfying memory and computational constraints, there are still many challenges associated with efficiently training sparse models, as well as with understanding their generalization properties.\r\n\r\nThe focus of this thesis is to investigate how the training process of sparse models can be made more efficient, and to understand the differences between sparse and dense models in terms of how well they can generalize to changes in the data distribution. We first study a method for co-training sparse and dense models at a lower cost than regular training. With our method we can obtain very accurate sparse networks, and dense models that can recover the baseline accuracy. Furthermore, we are able to more easily analyze the differences, at the prediction level, between the resulting sparse-dense model pairs.
Next, we investigate the generalization properties of sparse neural networks in more detail, by studying how well different sparse models trained on a larger task can adapt to smaller, more specialized tasks, in a transfer learning scenario. Our analysis across multiple pruning methods and sparsity levels reveals that sparse models provide features that transfer similarly to, or better than, those of the dense baseline. However, the choice of pruning method plays an important role and can influence the results both when the features are kept fixed (linear finetuning) and when they are allowed to adapt to the new task (full finetuning). Using sparse models with fixed masks for finetuning on new tasks has an important practical advantage, as it enables training neural networks on smaller devices. A drawback of current pruning methods, however, is that the entire training cycle has to be repeated for every sparsity target in order to obtain the initial sparse model; as a consequence, the overall training process is costly, and multiple models need to be stored. In the last part of the thesis we propose a method for training accurate dense models that can be compressed in a single step, to multiple sparsity levels, without additional finetuning. Our method results in sparse models that are competitive with existing pruning methods and that also generalize successfully to new tasks."}],"year":"2023","acknowledged_ssus":[{"_id":"ScienComp"}],"file_date_updated":"2023-05-24T16:12:59Z","project":[{"grant_number":"665385","_id":"2564DBCA-B435-11E9-9278-68D0E5697425","name":"International IST Doctoral Program","call_identifier":"H2020"},{"call_identifier":"H2020","grant_number":"805223","_id":"268A44D6-B435-11E9-9278-68D0E5697425","name":"Elastic Coordination for Scalable Machine Learning"}],"ddc":["000"],"has_accepted_license":"1","author":[{"id":"32D78294-F248-11E8-B48F-1D18A9856A87","full_name":"Peste, Elena-Alexandra","first_name":"Elena-Alexandra","last_name":"Peste"}],"doi":"10.15479/at:ista:13074","page":"147","publication_identifier":{"issn":["2663-337X"]},"citation":{"ieee":"E.-A. Peste, “Efficiency and generalization of sparse neural networks,” Institute of Science and Technology Austria, 2023.","mla":"Peste, Elena-Alexandra. Efficiency and Generalization of Sparse Neural Networks. Institute of Science and Technology Austria, 2023, doi:10.15479/at:ista:13074.","short":"E.-A. Peste, Efficiency and Generalization of Sparse Neural Networks, Institute of Science and Technology Austria, 2023.","chicago":"Peste, Elena-Alexandra. “Efficiency and Generalization of Sparse Neural Networks.” Institute of Science and Technology Austria, 2023. https://doi.org/10.15479/at:ista:13074.","ista":"Peste E-A. 2023. Efficiency and generalization of sparse neural networks. Institute of Science and Technology Austria.","apa":"Peste, E.-A. (2023). Efficiency and generalization of sparse neural networks. Institute of Science and Technology Austria. https://doi.org/10.15479/at:ista:13074","ama":"Peste E-A. Efficiency and generalization of sparse neural networks. 2023. doi:10.15479/at:ista:13074"},"ec_funded":1,"_id":"13074","language":[{"iso":"eng"}]}