Benign overfitting in deep neural networks under lazy training

Zhu Z, Liu F, Chrysos GG, Locatello F, Cevher V. 2023. Benign overfitting in deep neural networks under lazy training. Proceedings of the 40th International Conference on Machine Learning. International Conference on Machine Learning, PMLR, vol. 202, 43105–43128.

Conference Paper | Published | English
Author
Zhu, Zhenyu; Liu, Fanghui; Chrysos, Grigorios G; Locatello, Francesco (ISTA); Cevher, Volkan
Series Title
PMLR
Abstract
This paper focuses on over-parameterized deep neural networks (DNNs) with ReLU activation functions and proves that, when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification while obtaining (nearly) zero training error under the lazy training regime. To this end, we unify three interrelated concepts: over-parameterization, benign overfitting, and the Lipschitz constant of DNNs. Our results indicate that interpolating with smoother functions leads to better generalization. Furthermore, we investigate the special case in which DNNs interpolate smooth ground-truth functions under the Neural Tangent Kernel (NTK) regime. Our results demonstrate that the generalization error converges to a constant order that depends only on the label noise and initialization noise, which theoretically verifies benign overfitting. Our analysis provides a tight lower bound on the normalized margin under non-smooth activation functions, as well as on the minimum eigenvalue of the NTK in high-dimensional settings, both of which are of independent interest in learning theory.
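For context, a minimal sketch of the lazy-training/NTK setup the abstract refers to (standard definitions; the notation below is generic and not taken from the paper): in the lazy regime the network output stays close to its first-order Taylor expansion around the initialization \(\theta_0\), so gradient descent effectively performs kernel regression with the Neural Tangent Kernel,

\[
f(x;\theta) \approx f(x;\theta_0) + \langle \nabla_\theta f(x;\theta_0),\, \theta - \theta_0 \rangle,
\qquad
K_{\mathrm{NTK}}(x,x') = \langle \nabla_\theta f(x;\theta_0),\, \nabla_\theta f(x';\theta_0) \rangle.
\]

Benign overfitting then describes the regime where the trained model interpolates the (noisy) training labels, i.e., achieves (nearly) zero training error, yet its test error still approaches the Bayes-optimal error; the minimum eigenvalue of the NTK Gram matrix, \(\lambda_{\min}(K)\), governs how fast lazy training reaches this interpolation.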
Publishing Year
2023
Date Published
2023-05-30
Proceedings Title
Proceedings of the 40th International Conference on Machine Learning
Publisher
ML Research Press
Volume
202
Page
43105-43128
Conference
International Conference on Machine Learning
Conference Location
Honolulu, Hawaii, United States
Conference Date
2023-07-23 – 2023-07-29

Cite this

AMA: Zhu Z, Liu F, Chrysos GG, Locatello F, Cevher V. Benign overfitting in deep neural networks under lazy training. In: Proceedings of the 40th International Conference on Machine Learning. Vol 202. ML Research Press; 2023:43105–43128.
APA: Zhu, Z., Liu, F., Chrysos, G. G., Locatello, F., & Cevher, V. (2023). Benign overfitting in deep neural networks under lazy training. In Proceedings of the 40th International Conference on Machine Learning (Vol. 202, pp. 43105–43128). Honolulu, Hawaii, United States: ML Research Press.
Chicago: Zhu, Zhenyu, Fanghui Liu, Grigorios G Chrysos, Francesco Locatello, and Volkan Cevher. “Benign Overfitting in Deep Neural Networks under Lazy Training.” In Proceedings of the 40th International Conference on Machine Learning, 202:43105–28. ML Research Press, 2023.
IEEE: Z. Zhu, F. Liu, G. G. Chrysos, F. Locatello, and V. Cevher, “Benign overfitting in deep neural networks under lazy training,” in Proceedings of the 40th International Conference on Machine Learning, Honolulu, Hawaii, United States, 2023, vol. 202, pp. 43105–43128.
ISTA: Zhu Z, Liu F, Chrysos GG, Locatello F, Cevher V. 2023. Benign overfitting in deep neural networks under lazy training. Proceedings of the 40th International Conference on Machine Learning. International Conference on Machine Learning, PMLR, vol. 202, 43105–43128.
MLA: Zhu, Zhenyu, et al. “Benign Overfitting in Deep Neural Networks under Lazy Training.” Proceedings of the 40th International Conference on Machine Learning, vol. 202, ML Research Press, 2023, pp. 43105–28.
All files available under the following license(s):
Copyright Statement:
This Item is protected by copyright and/or related rights. [...]

Link(s) to Main File(s)
Access Level
Open Access


Sources

arXiv:2305.19377
