---
_id: '14459'
abstract:
- lang: eng
  text: Autoencoders are a popular model in many branches of machine learning and
    lossy data compression. However, their fundamental limits, the performance of
    gradient methods and the features learnt during optimization remain poorly understood,
    even in the two-layer setting. In fact, earlier work has considered either linear
    autoencoders or specific training regimes (leading to vanishing or diverging compression
    rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders
    trained in the challenging proportional regime in which the input dimension scales
    linearly with the size of the representation. Our results characterize the minimizers
    of the population risk, and show that such minimizers are achieved by gradient
    methods; their structure is also unveiled, thus leading to a concise description
    of the features obtained via training. For the special case of a sign activation
    function, our analysis establishes the fundamental limits for the lossy compression
    of Gaussian sources via (shallow) autoencoders. Finally, while the results are
    proved for Gaussian data, numerical simulations on standard datasets display the
    universality of the theoretical predictions.
acknowledgement: Aleksandr Shevchenko, Kevin Kögler and Marco Mondelli are supported
  by the 2019 Lopez-Loreta Prize. Hamed Hassani acknowledges support from the NSF
  CIF award (1910056) and the NSF Institute for CORE Emerging Methods in Data Science
  (EnCORE).
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Aleksandr
  full_name: Shevchenko, Aleksandr
  id: F2B06EC2-C99E-11E9-89F0-752EE6697425
  last_name: Shevchenko
- first_name: Kevin
  full_name: Kögler, Kevin
  id: 94ec913c-dc85-11ea-9058-e5051ab2428b
  last_name: Kögler
- first_name: Hamed
  full_name: Hassani, Hamed
  last_name: Hassani
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Shevchenko A, Kögler K, Hassani H, Mondelli M. Fundamental limits of two-layer
    autoencoders, and achieving them with gradient methods. In: <i>Proceedings of
    the 40th International Conference on Machine Learning</i>. Vol 202. ML Research
    Press; 2023:31151-31209.'
  apa: 'Shevchenko, A., Kögler, K., Hassani, H., &#38; Mondelli, M. (2023). Fundamental
    limits of two-layer autoencoders, and achieving them with gradient methods. In
    <i>Proceedings of the 40th International Conference on Machine Learning</i> (Vol.
    202, pp. 31151–31209). Honolulu, HI, United States: ML Research Press.'
  chicago: Shevchenko, Aleksandr, Kevin Kögler, Hamed Hassani, and Marco Mondelli.
    “Fundamental Limits of Two-Layer Autoencoders, and Achieving Them with Gradient
    Methods.” In <i>Proceedings of the 40th International Conference on Machine Learning</i>,
    202:31151–209. ML Research Press, 2023.
  ieee: A. Shevchenko, K. Kögler, H. Hassani, and M. Mondelli, “Fundamental limits
    of two-layer autoencoders, and achieving them with gradient methods,” in <i>Proceedings
    of the 40th International Conference on Machine Learning</i>, Honolulu, HI,
    United States, 2023, vol. 202, pp. 31151–31209.
  ista: 'Shevchenko A, Kögler K, Hassani H, Mondelli M. 2023. Fundamental limits of
    two-layer autoencoders, and achieving them with gradient methods. Proceedings
    of the 40th International Conference on Machine Learning. ICML: International
    Conference on Machine Learning, PMLR, vol. 202, 31151–31209.'
  mla: Shevchenko, Aleksandr, et al. “Fundamental Limits of Two-Layer Autoencoders,
    and Achieving Them with Gradient Methods.” <i>Proceedings of the 40th International
    Conference on Machine Learning</i>, vol. 202, ML Research Press, 2023, pp. 31151–209.
  short: 'A. Shevchenko, K. Kögler, H. Hassani, M. Mondelli, in: Proceedings of the
    40th International Conference on Machine Learning, ML Research Press, 2023, pp.
    31151–31209.'
conference:
  end_date: 2023-07-29
  location: Honolulu, HI, United States
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2023-07-23
date_created: 2023-10-29T23:01:17Z
date_published: 2023-07-30T00:00:00Z
date_updated: 2024-09-10T13:03:19Z
day: '30'
department:
- _id: MaMo
- _id: DaAl
external_id:
  arxiv:
  - '2212.13468'
intvolume: '202'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2212.13468
month: '07'
oa: 1
oa_version: Preprint
page: 31151-31209
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loreta 2019 - Marco Mondelli
publication: Proceedings of the 40th International Conference on Machine Learning
publication_identifier:
  eissn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Fundamental limits of two-layer autoencoders, and achieving them with gradient
  methods
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 202
year: '2023'
...
---
_id: '11420'
abstract:
- lang: eng
  text: 'Understanding the properties of neural networks trained via stochastic gradient
    descent (SGD) is at the heart of the theory of deep learning. In this work, we
    take a mean-field view, and consider a two-layer ReLU network trained via noisy-SGD
    for a univariate regularized regression problem. Our main result is that SGD with
    vanishingly small noise injected in the gradients is biased towards a simple solution:
    at convergence, the ReLU network implements a piecewise linear map of the inputs,
    and the number of “knot” points -- i.e., points where the tangent of the ReLU
    network estimator changes -- between two consecutive training inputs is at most
    three. In particular, as the number of neurons of the network grows, the SGD dynamics
    is captured by the solution of a gradient flow and, at convergence, the distribution
    of the weights approaches the unique minimizer of a related free energy, which
    has a Gibbs form. Our key technical contribution consists in the analysis of the
    estimator resulting from this minimizer: we show that its second derivative vanishes
    everywhere, except at some specific locations which represent the “knot” points.
    We also provide empirical evidence that knots at locations distinct from the data
    points might occur, as predicted by our theory.'
acknowledgement: We would like to thank Mert Pilanci for several exploratory discussions
  in the early stage of the project, Jan Maas for clarifications about Jordan et
  al. (1998), and Max Zimmer for suggestive numerical experiments. A. Shevchenko
  and M. Mondelli are partially supported by the 2019 Lopez-Loreta Prize. V. Kungurtsev
  acknowledges support from the OP VVV project CZ.02.1.01/0.0/0.0/16_019/0000765
  Research Center for Informatics.
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Aleksandr
  full_name: Shevchenko, Aleksandr
  id: F2B06EC2-C99E-11E9-89F0-752EE6697425
  last_name: Shevchenko
- first_name: Vyacheslav
  full_name: Kungurtsev, Vyacheslav
  last_name: Kungurtsev
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: Shevchenko A, Kungurtsev V, Mondelli M. Mean-field analysis of piecewise linear
    solutions for wide ReLU networks. <i>Journal of Machine Learning Research</i>.
    2022;23(130):1-55.
  apa: Shevchenko, A., Kungurtsev, V., &#38; Mondelli, M. (2022). Mean-field analysis
    of piecewise linear solutions for wide ReLU networks. <i>Journal of Machine Learning
    Research</i>, 23(130), 1–55.
  chicago: Shevchenko, Aleksandr, Vyacheslav Kungurtsev, and Marco Mondelli. “Mean-Field
    Analysis of Piecewise Linear Solutions for Wide ReLU Networks.” <i>Journal of
    Machine Learning Research</i> 23, no. 130 (2022): 1–55.
  ieee: A. Shevchenko, V. Kungurtsev, and M. Mondelli, “Mean-field analysis of piecewise
    linear solutions for wide ReLU networks,” <i>Journal of Machine Learning Research</i>,
    vol. 23, no. 130. Journal of Machine Learning Research, pp. 1–55, 2022.
  ista: Shevchenko A, Kungurtsev V, Mondelli M. 2022. Mean-field analysis of piecewise
    linear solutions for wide ReLU networks. Journal of Machine Learning Research.
    23(130), 1–55.
  mla: Shevchenko, Aleksandr, et al. “Mean-Field Analysis of Piecewise Linear Solutions
    for Wide ReLU Networks.” <i>Journal of Machine Learning Research</i>, vol. 23,
    no. 130, Journal of Machine Learning Research, 2022, pp. 1–55.
  short: A. Shevchenko, V. Kungurtsev, M. Mondelli, Journal of Machine Learning Research
    23 (2022) 1–55.
date_created: 2022-05-29T22:01:54Z
date_published: 2022-04-01T00:00:00Z
date_updated: 2024-09-10T13:03:17Z
day: '01'
ddc:
- '000'
department:
- _id: MaMo
- _id: DaAl
external_id:
  arxiv:
  - '2111.02278'
file:
- access_level: open_access
  checksum: d4ff5d1affb34848b5c5e4002483fc62
  content_type: application/pdf
  creator: cchlebak
  date_created: 2022-05-30T08:22:55Z
  date_updated: 2022-05-30T08:22:55Z
  file_id: '11422'
  file_name: 21-1365.pdf
  file_size: 1521701
  relation: main_file
  success: 1
file_date_updated: 2022-05-30T08:22:55Z
has_accepted_license: '1'
intvolume: '23'
issue: '130'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
page: 1-55
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loreta 2019 - Marco Mondelli
publication: Journal of Machine Learning Research
publication_identifier:
  eissn:
  - 1533-7928
  issn:
  - 1532-4435
publication_status: published
publisher: Journal of Machine Learning Research
quality_controlled: '1'
related_material:
  link:
  - relation: other
    url: https://www.jmlr.org/papers/v23/21-1365.html
scopus_import: '1'
status: public
title: Mean-field analysis of piecewise linear solutions for wide ReLU networks
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
volume: 23
year: '2022'
...
