---
_id: '14459'
abstract:
- lang: eng
  text: Autoencoders are a popular model in many branches of machine learning and
    lossy data compression. However, their fundamental limits, the performance of
    gradient methods and the features learnt during optimization remain poorly understood,
    even in the two-layer setting. In fact, earlier work has considered either linear
    autoencoders or specific training regimes (leading to vanishing or diverging compression
    rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders
    trained in the challenging proportional regime in which the input dimension scales
    linearly with the size of the representation. Our results characterize the minimizers
    of the population risk, and show that such minimizers are achieved by gradient
    methods; their structure is also unveiled, thus leading to a concise description
    of the features obtained via training. For the special case of a sign activation
    function, our analysis establishes the fundamental limits for the lossy compression
    of Gaussian sources via (shallow) autoencoders. Finally, while the results are
    proved for Gaussian data, numerical simulations on standard datasets display the
    universality of the theoretical predictions.
acknowledgement: Aleksandr Shevchenko, Kevin Kogler and Marco Mondelli are supported
  by the 2019 Lopez-Loreta Prize. Hamed Hassani acknowledges the support by the NSF
  CIF award (1910056) and the NSF Institute for CORE Emerging Methods in Data Science
  (EnCORE).
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Aleksandr
  full_name: Shevchenko, Aleksandr
  id: F2B06EC2-C99E-11E9-89F0-752EE6697425
  last_name: Shevchenko
- first_name: Kevin
  full_name: Kögler, Kevin
  id: 94ec913c-dc85-11ea-9058-e5051ab2428b
  last_name: Kögler
- first_name: Hamed
  full_name: Hassani, Hamed
  last_name: Hassani
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Shevchenko A, Kögler K, Hassani H, Mondelli M. Fundamental limits of two-layer
    autoencoders, and achieving them with gradient methods. In: <i>Proceedings of
    the 40th International Conference on Machine Learning</i>. Vol 202. ML Research
    Press; 2023:31151-31209.'
  apa: 'Shevchenko, A., Kögler, K., Hassani, H., &#38; Mondelli, M. (2023). Fundamental
    limits of two-layer autoencoders, and achieving them with gradient methods. In
    <i>Proceedings of the 40th International Conference on Machine Learning</i> (Vol.
    202, pp. 31151–31209). Honolulu, HI, United States: ML Research Press.'
  chicago: Shevchenko, Aleksandr, Kevin Kögler, Hamed Hassani, and Marco Mondelli.
    “Fundamental Limits of Two-Layer Autoencoders, and Achieving Them with Gradient
    Methods.” In <i>Proceedings of the 40th International Conference on Machine Learning</i>,
    202:31151–209. ML Research Press, 2023.
  ieee: A. Shevchenko, K. Kögler, H. Hassani, and M. Mondelli, “Fundamental limits
    of two-layer autoencoders, and achieving them with gradient methods,” in <i>Proceedings
    of the 40th International Conference on Machine Learning</i>, Honolulu, HI, United
    States, 2023, vol. 202, pp. 31151–31209.
  ista: 'Shevchenko A, Kögler K, Hassani H, Mondelli M. 2023. Fundamental limits of
    two-layer autoencoders, and achieving them with gradient methods. Proceedings
    of the 40th International Conference on Machine Learning. ICML: International
    Conference on Machine Learning, PMLR, vol. 202, 31151–31209.'
  mla: Shevchenko, Aleksandr, et al. “Fundamental Limits of Two-Layer Autoencoders,
    and Achieving Them with Gradient Methods.” <i>Proceedings of the 40th International
    Conference on Machine Learning</i>, vol. 202, ML Research Press, 2023, pp. 31151–209.
  short: A. Shevchenko, K. Kögler, H. Hassani, M. Mondelli, in:, Proceedings of the
    40th International Conference on Machine Learning, ML Research Press, 2023, pp.
    31151–31209.
conference:
  end_date: 2023-07-29
  location: Honolulu, HI, United States
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2023-07-23
date_created: 2023-10-29T23:01:17Z
date_published: 2023-07-30T00:00:00Z
date_updated: 2024-09-10T13:03:19Z
day: '30'
department:
- _id: MaMo
- _id: DaAl
external_id:
  arxiv:
  - '2212.13468'
intvolume: '202'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2212.13468
month: '07'
oa: 1
oa_version: Preprint
page: 31151-31209
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Proceedings of the 40th International Conference on Machine Learning
publication_identifier:
  eissn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
scopus_import: '1'
status: public
title: Fundamental limits of two-layer autoencoders, and achieving them with gradient
  methods
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 202
year: '2023'
...
---
_id: '14665'
abstract:
- lang: eng
  text: We derive lower bounds on the maximal rates for multiple packings in high-dimensional
    Euclidean spaces. For any N > 0 and integer L ≥ 2, a multiple packing is a set
    C of points in ℝ^n such that any point in ℝ^n lies in the intersection of at most
    L - 1 balls of radius √(nN) around points in C. This is a natural generalization
    of the sphere packing problem. We study the multiple packing problem both for
    bounded point sets, whose points have norm at most √(nP) for some constant P >
    0, and for unbounded point sets, whose points are allowed to be anywhere in ℝ^n.
    Given a well-known connection with coding theory, multiple packings can be viewed
    as the Euclidean analog of list-decodable codes, which are well studied over finite
    fields. We derive the best known lower bounds on the optimal multiple packing
    density. This is accomplished by establishing an inequality which relates the
    list-decoding error exponent for additive white Gaussian noise (AWGN) channels,
    a quantity of average-case nature, to the list-decoding radius, a quantity of
    worst-case nature. We also derive novel bounds on the list-decoding error exponent
    for infinite constellations, and closed-form expressions for the list-decoding
    error exponents for the power-constrained AWGN channel, which may be of independent
    interest beyond multiple packing.
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
  orcid: 0000-0002-6465-6258
- first_name: Shashank
  full_name: Vatedka, Shashank
  last_name: Vatedka
citation:
  ama: 'Zhang Y, Vatedka S. Multiple packing: Lower bounds via error exponents. <i>IEEE
    Transactions on Information Theory</i>. 2023. doi:<a href="https://doi.org/10.1109/TIT.2023.3334032">10.1109/TIT.2023.3334032</a>'
  apa: 'Zhang, Y., &#38; Vatedka, S. (2023). Multiple packing: Lower bounds via error
    exponents. <i>IEEE Transactions on Information Theory</i>. IEEE. <a href="https://doi.org/10.1109/TIT.2023.3334032">https://doi.org/10.1109/TIT.2023.3334032</a>'
  chicago: 'Zhang, Yihan, and Shashank Vatedka. “Multiple Packing: Lower Bounds via
    Error Exponents.” <i>IEEE Transactions on Information Theory</i>. IEEE, 2023.
    <a href="https://doi.org/10.1109/TIT.2023.3334032">https://doi.org/10.1109/TIT.2023.3334032</a>.'
  ieee: 'Y. Zhang and S. Vatedka, “Multiple packing: Lower bounds via error exponents,”
    <i>IEEE Transactions on Information Theory</i>. IEEE, 2023.'
  ista: 'Zhang Y, Vatedka S. 2023. Multiple packing: Lower bounds via error exponents.
    IEEE Transactions on Information Theory.'
  mla: 'Zhang, Yihan, and Shashank Vatedka. “Multiple Packing: Lower Bounds via Error
    Exponents.” <i>IEEE Transactions on Information Theory</i>, IEEE, 2023, doi:<a
    href="https://doi.org/10.1109/TIT.2023.3334032">10.1109/TIT.2023.3334032</a>.'
  short: Y. Zhang, S. Vatedka, IEEE Transactions on Information Theory (2023).
date_created: 2023-12-10T23:01:00Z
date_published: 2023-11-16T00:00:00Z
date_updated: 2023-12-18T07:46:45Z
day: '16'
department:
- _id: MaMo
doi: 10.1109/TIT.2023.3334032
external_id:
  arxiv:
  - '2211.04408'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2211.04408
month: '11'
oa: 1
oa_version: Preprint
publication: IEEE Transactions on Information Theory
publication_identifier:
  eissn:
  - 1557-9654
  issn:
  - 0018-9448
publication_status: epub_ahead
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Multiple packing: Lower bounds via error exponents'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14751'
abstract:
- lang: eng
  text: 'We consider zero-error communication over a two-transmitter deterministic
    adversarial multiple access channel (MAC) governed by an adversary who has access
    to the transmissions of both senders (hence called omniscient) and aims to maliciously
    corrupt the communication. Neither the encoders, nor the jammer, nor the decoder
    is allowed to randomize using private or public randomness. This enforces a combinatorial
    nature of the problem. Our model covers a large family of channels studied in
    the literature, including all deterministic discrete memoryless noisy or noiseless
    MACs. In this work, given an arbitrary two-transmitter deterministic omniscient
    adversarial MAC, we characterize when the capacity region: 1) has nonempty interior
    (in particular, is two-dimensional); 2) consists of two line segments (in particular,
    has empty interior); 3) consists of one line segment (in particular, is one-dimensional);
    4) or only contains (0,0) (in particular, is zero-dimensional). This extends a
    recent result by Wang et al. (2019) from the point-to-point setting to the multiple
    access setting. Indeed, our converse arguments build upon their generalized Plotkin
    bound and involve delicate case analysis. One of the technical challenges is to
    take care of both “joint confusability” and “marginal confusability”. In particular,
    the treatment of marginal confusability does not follow from the point-to-point
    results by Wang et al. Our achievability results follow from random coding with
    expurgation.'
acknowledgement: "The author would like to thank Amitalok J. Budkuley and Sidharth
  Jaggi for many helpful discussions at the early stage of this work. He would also
  like to thank Nir Ailon, Qi Cao, and Chandra Nair for discussions on a related problem
  regarding zero-error binary adder MACs.\r\nThe work of Yihan Zhang was supported
  by the European Union’s Horizon 2020 Research and Innovation Programme under Grant
  682203-ERC-[Inf-Speed-Tradeoff]"
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
  orcid: 0000-0002-6465-6258
citation:
  ama: Zhang Y. Zero-error communication over adversarial MACs. <i>IEEE Transactions
    on Information Theory</i>. 2023;69(7):4093-4127. doi:<a href="https://doi.org/10.1109/tit.2023.3257239">10.1109/tit.2023.3257239</a>
  apa: Zhang, Y. (2023). Zero-error communication over adversarial MACs. <i>IEEE Transactions
    on Information Theory</i>. Institute of Electrical and Electronics Engineers.
    <a href="https://doi.org/10.1109/tit.2023.3257239">https://doi.org/10.1109/tit.2023.3257239</a>
  chicago: Zhang, Yihan. “Zero-Error Communication over Adversarial MACs.” <i>IEEE
    Transactions on Information Theory</i>. Institute of Electrical and Electronics
    Engineers, 2023. <a href="https://doi.org/10.1109/tit.2023.3257239">https://doi.org/10.1109/tit.2023.3257239</a>.
  ieee: Y. Zhang, “Zero-error communication over adversarial MACs,” <i>IEEE Transactions
    on Information Theory</i>, vol. 69, no. 7. Institute of Electrical and Electronics
    Engineers, pp. 4093–4127, 2023.
  ista: Zhang Y. 2023. Zero-error communication over adversarial MACs. IEEE Transactions
    on Information Theory. 69(7), 4093–4127.
  mla: Zhang, Yihan. “Zero-Error Communication over Adversarial MACs.” <i>IEEE Transactions
    on Information Theory</i>, vol. 69, no. 7, Institute of Electrical and Electronics
    Engineers, 2023, pp. 4093–127, doi:<a href="https://doi.org/10.1109/tit.2023.3257239">10.1109/tit.2023.3257239</a>.
  short: Y. Zhang, IEEE Transactions on Information Theory 69 (2023) 4093–4127.
date_created: 2024-01-08T13:04:54Z
date_published: 2023-07-01T00:00:00Z
date_updated: 2024-01-09T08:45:24Z
day: '01'
department:
- _id: MaMo
doi: 10.1109/tit.2023.3257239
external_id:
  arxiv:
  - '2101.12426'
intvolume: '69'
issue: '7'
keyword:
- Computer Science Applications
- Information Systems
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2101.12426
month: '07'
oa: 1
oa_version: Preprint
page: 4093-4127
publication: IEEE Transactions on Information Theory
publication_identifier:
  eissn:
  - 1557-9654
  issn:
  - 0018-9448
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: Zero-error communication over adversarial MACs
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 69
year: '2023'
...
---
_id: '14921'
abstract:
- lang: eng
  text: Neural collapse (NC) refers to the surprising structure of the last layer
    of deep neural networks in the terminal phase of gradient descent training. Recently,
    an increasing amount of experimental evidence has pointed to the propagation of
    NC to earlier layers of neural networks. However, while the NC in the last layer
    is well studied theoretically, much less is known about its multi-layered counterpart,
    deep neural collapse (DNC). In particular, existing work focuses either on linear
    layers or only on the last two layers, at the price of an extra assumption. Our
    paper fills this gap by generalizing the established analytical framework for
    NC, the unconstrained features model, to multiple non-linear layers. Our key
    technical contribution is to show that, in a deep unconstrained features model,
    the unique global optimum for binary classification exhibits all the properties
    typical of DNC. This explains the existing experimental evidence of DNC. We also
    empirically show that (i) by optimizing deep unconstrained features models via
    gradient descent, the resulting solution agrees well with our theory, and (ii)
    trained networks recover the unconstrained features suitable for the occurrence
    of DNC, thus supporting the validity of this modeling principle.
acknowledgement: M. M. is partially supported by the 2019 Lopez-Loreta Prize. The
  authors would like to thank Eugenia Iofinova, Bernd Prach and Simone Bombari for
  valuable feedback on the manuscript.
alternative_title:
- NeurIPS
article_processing_charge: No
arxiv: 1
author:
- first_name: Peter
  full_name: Súkeník, Peter
  id: d64d6a8d-eb8e-11eb-b029-96fd216dec3c
  last_name: Súkeník
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal
    for the deep unconstrained features model. In: <i>37th Annual Conference on Neural
    Information Processing Systems</i>.'
  apa: Súkeník, P., Mondelli, M., &#38; Lampert, C. (n.d.). Deep neural collapse is
    provably optimal for the deep unconstrained features model. In <i>37th Annual
    Conference on Neural Information Processing Systems</i>. New Orleans, LA, United
    States.
  chicago: Súkeník, Peter, Marco Mondelli, and Christoph Lampert. “Deep Neural Collapse
    Is Provably Optimal for the Deep Unconstrained Features Model.” In <i>37th Annual
    Conference on Neural Information Processing Systems</i>, n.d.
  ieee: P. Súkeník, M. Mondelli, and C. Lampert, “Deep neural collapse is provably
    optimal for the deep unconstrained features model,” in <i>37th Annual Conference
    on Neural Information Processing Systems</i>, New Orleans, LA, United States.
  ista: 'Súkeník P, Mondelli M, Lampert C. Deep neural collapse is provably optimal
    for the deep unconstrained features model. 37th Annual Conference on Neural Information
    Processing Systems. NeurIPS: Neural Information Processing Systems, NeurIPS.'
  mla: Súkeník, Peter, et al. “Deep Neural Collapse Is Provably Optimal for the Deep
    Unconstrained Features Model.” <i>37th Annual Conference on Neural Information
    Processing Systems</i>.
  short: P. Súkeník, M. Mondelli, C. Lampert, in:, 37th Annual Conference on Neural
    Information Processing Systems, n.d.
conference:
  end_date: 2023-12-16
  location: New Orleans, LA, United States
  name: 'NeurIPS: Neural Information Processing Systems'
  start_date: 2023-12-10
date_created: 2024-02-02T11:17:41Z
date_published: 2023-12-15T00:00:00Z
date_updated: 2024-09-10T13:03:19Z
day: '15'
department:
- _id: MaMo
- _id: ChLa
external_id:
  arxiv:
  - '2305.13165'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2305.13165
month: '12'
oa: 1
oa_version: Preprint
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: 37th Annual Conference on Neural Information Processing Systems
publication_status: inpress
quality_controlled: '1'
status: public
title: Deep neural collapse is provably optimal for the deep unconstrained features
  model
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14922'
abstract:
- lang: eng
  text: 'We propose a novel approach to concentration for non-independent random variables.
    The main idea is to "pretend" that the random variables are independent and
    pay a multiplicative price measuring how far they are from actually being independent.
    This price is encapsulated in the Hellinger integral between the joint and the
    product of the marginals, which is then upper bounded leveraging tensorisation
    properties. Our bounds represent a natural generalisation of concentration inequalities
    in the presence of dependence: we recover exactly the classical bounds (McDiarmid''s
    inequality) when the random variables are independent. Furthermore, in a "large
    deviations" regime, we obtain the same decay in the probability as for the
    independent case, even when the random variables display non-trivial dependencies.
    To show this, we consider a number of applications of interest. First, we provide
    a bound for Markov chains with finite state space. Then, we consider the Simple
    Symmetric Random Walk, which is a non-contracting Markov chain, and a non-Markovian
    setting in which the stochastic process depends on its entire past. To conclude,
    we propose an application to Markov Chain Monte Carlo methods, where our approach
    leads to an improved lower bound on the minimum burn-in period required to reach
    a certain accuracy. In all of these settings, we provide a regime of parameters
    in which our bound fares better than what the state of the art can provide.'
acknowledgement: The authors are partially supported by the 2019 Lopez-Loreta Prize.
  They would also like to thank Professor Jan Maas for providing valuable suggestions
  and comments on an early version of the work.
article_processing_charge: No
arxiv: 1
author:
- first_name: Amedeo Roberto
  full_name: Esposito, Amedeo Roberto
  id: 9583e921-e1ad-11ec-9862-cef099626dc9
  last_name: Esposito
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Esposito AR, Mondelli M. Concentration without independence via information
    measures. In: <i>Proceedings of 2023 IEEE International Symposium on Information
    Theory</i>. IEEE. doi:<a href="https://doi.org/10.1109/isit54713.2023.10206899">10.1109/isit54713.2023.10206899</a>'
  apa: 'Esposito, A. R., &#38; Mondelli, M. (n.d.). Concentration without independence
    via information measures. In <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i>. Taipei, Taiwan: IEEE. <a href="https://doi.org/10.1109/isit54713.2023.10206899">https://doi.org/10.1109/isit54713.2023.10206899</a>'
  chicago: Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence
    via Information Measures.” In <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i>. IEEE, n.d. <a href="https://doi.org/10.1109/isit54713.2023.10206899">https://doi.org/10.1109/isit54713.2023.10206899</a>.
  ieee: A. R. Esposito and M. Mondelli, “Concentration without independence via information
    measures,” in <i>Proceedings of 2023 IEEE International Symposium on Information
    Theory</i>, Taipei, Taiwan.
  ista: 'Esposito AR, Mondelli M. Concentration without independence via information
    measures. Proceedings of 2023 IEEE International Symposium on Information Theory.
    ISIT: IEEE International Symposium on Information Theory.'
  mla: Esposito, Amedeo Roberto, and Marco Mondelli. “Concentration without Independence
    via Information Measures.” <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i>, IEEE, doi:<a href="https://doi.org/10.1109/isit54713.2023.10206899">10.1109/isit54713.2023.10206899</a>.
  short: A.R. Esposito, M. Mondelli, in:, Proceedings of 2023 IEEE International Symposium
    on Information Theory, IEEE, n.d.
conference:
  end_date: 2023-06-30
  location: Taipei, Taiwan
  name: 'ISIT: IEEE International Symposium on Information Theory'
  start_date: 2023-06-25
date_created: 2024-02-02T11:18:40Z
date_published: 2023-06-30T00:00:00Z
date_updated: 2024-02-14T14:24:25Z
day: '30'
department:
- _id: MaMo
doi: 10.1109/isit54713.2023.10206899
external_id:
  arxiv:
  - '2303.07245'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2303.07245
month: '06'
oa: 1
oa_version: Preprint
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Proceedings of 2023 IEEE International Symposium on Information Theory
publication_status: inpress
publisher: IEEE
quality_controlled: '1'
status: public
title: Concentration without independence via information measures
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14923'
abstract:
- lang: eng
  text: We study the performance of a Bayesian statistician who estimates a rank-one
    signal corrupted by non-symmetric rotationally invariant noise with a generic
    distribution of singular values. As the signal-to-noise ratio and the noise structure
    are unknown, a Gaussian setup is incorrectly assumed. We derive the exact analytic
    expression for the error of the mismatched Bayes estimator and also provide the
    analysis of an approximate message passing (AMP) algorithm. The first result exploits
    the asymptotic behavior of spherical integrals for rectangular matrices and of
    low-rank matrix perturbations; the second one relies on the design and analysis
    of an auxiliary AMP. The numerical experiments show that there is a performance
    gap between the AMP and Bayes estimators, which is due to the incorrect estimation
    of the signal norm.
article_processing_charge: No
arxiv: 1
author:
- first_name: Teng
  full_name: Fu, Teng
  last_name: Fu
- first_name: YuHao
  full_name: Liu, YuHao
  last_name: Liu
- first_name: Jean
  full_name: Barbier, Jean
  last_name: Barbier
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
- first_name: ShanSuo
  full_name: Liang, ShanSuo
  last_name: Liang
- first_name: TianQi
  full_name: Hou, TianQi
  last_name: Hou
citation:
  ama: 'Fu T, Liu Y, Barbier J, Mondelli M, Liang S, Hou T. Mismatched estimation
    of non-symmetric rank-one matrices corrupted by structured noise. In: <i>Proceedings
    of 2023 IEEE International Symposium on Information Theory</i>. IEEE. doi:<a href="https://doi.org/10.1109/isit54713.2023.10206671">10.1109/isit54713.2023.10206671</a>'
  apa: 'Fu, T., Liu, Y., Barbier, J., Mondelli, M., Liang, S., &#38; Hou, T. (n.d.).
    Mismatched estimation of non-symmetric rank-one matrices corrupted by structured
    noise. In <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>.
    Taipei, Taiwan: IEEE. <a href="https://doi.org/10.1109/isit54713.2023.10206671">https://doi.org/10.1109/isit54713.2023.10206671</a>'
  chicago: Fu, Teng, YuHao Liu, Jean Barbier, Marco Mondelli, ShanSuo Liang, and TianQi
    Hou. “Mismatched Estimation of Non-Symmetric Rank-One Matrices Corrupted by Structured
    Noise.” In <i>Proceedings of 2023 IEEE International Symposium on Information
    Theory</i>. IEEE, n.d. <a href="https://doi.org/10.1109/isit54713.2023.10206671">https://doi.org/10.1109/isit54713.2023.10206671</a>.
  ieee: T. Fu, Y. Liu, J. Barbier, M. Mondelli, S. Liang, and T. Hou, “Mismatched
    estimation of non-symmetric rank-one matrices corrupted by structured noise,”
    in <i>Proceedings of 2023 IEEE International Symposium on Information Theory</i>,
    Taipei, Taiwan.
  ista: 'Fu T, Liu Y, Barbier J, Mondelli M, Liang S, Hou T. Mismatched estimation
    of non-symmetric rank-one matrices corrupted by structured noise. Proceedings
    of 2023 IEEE International Symposium on Information Theory. ISIT: IEEE International
    Symposium on Information Theory.'
  mla: Fu, Teng, et al. “Mismatched Estimation of Non-Symmetric Rank-One Matrices
    Corrupted by Structured Noise.” <i>Proceedings of 2023 IEEE International Symposium
    on Information Theory</i>, IEEE, doi:<a href="https://doi.org/10.1109/isit54713.2023.10206671">10.1109/isit54713.2023.10206671</a>.
  short: T. Fu, Y. Liu, J. Barbier, M. Mondelli, S. Liang, T. Hou, in:, Proceedings
    of 2023 IEEE International Symposium on Information Theory, IEEE, n.d.
conference:
  end_date: 2023-06-30
  location: Taipei, Taiwan
  name: 'ISIT: IEEE International Symposium on Information Theory'
  start_date: 2023-06-25
date_created: 2024-02-02T11:20:39Z
date_published: 2023-06-30T00:00:00Z
date_updated: 2024-02-14T14:34:03Z
day: '30'
department:
- _id: MaMo
doi: 10.1109/isit54713.2023.10206671
external_id:
  arxiv:
  - '2302.03306'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2302.03306
month: '06'
oa: 1
oa_version: Preprint
publication: Proceedings of 2023 IEEE International Symposium on Information Theory
publication_status: inpress
publisher: IEEE
quality_controlled: '1'
status: public
title: Mismatched estimation of non-symmetric rank-one matrices corrupted by structured
  noise
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14924'
abstract:
- lang: eng
  text: "The stochastic heavy ball method (SHB), also known as stochastic gradient
    descent (SGD) with Polyak's momentum, is widely used in training neural networks.
    However, despite the remarkable success of this algorithm in practice, its theoretical
    characterization remains limited. In this paper, we focus on neural networks with
    two and three layers and provide a rigorous understanding of the properties of
    the solutions found by SHB: (i) stability after dropping out part of the neurons,
    (ii) connectivity along a low-loss path, and (iii) convergence to the global
    optimum.\r\nTo achieve this goal, we take a mean-field view and
    relate the SHB dynamics to a certain partial differential equation in the limit
    of large network widths. This mean-field perspective has inspired a recent line
    of work focusing on SGD while, in contrast, our paper considers an algorithm with
    momentum. More specifically, after proving existence and uniqueness of the limit
    differential equations, we show convergence to the global optimum and give a quantitative
    bound between the mean-field limit and the SHB dynamics of a finite-width network.
    Armed with this last bound, we are able to establish the dropout-stability and
    connectivity of SHB solutions."
acknowledgement: D. Wu and M. Mondelli are partially supported by the 2019 Lopez-Loreta
  Prize. V. Kungurtsev was supported by the OP VVV project CZ.02.1.01/0.0/0.0/16_019/0000765
  "Research Center for Informatics".
alternative_title:
- TMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Diyuan
  full_name: Wu, Diyuan
  id: 1a5914c2-896a-11ed-bdf8-fb80621a0635
  last_name: Wu
- first_name: Vyacheslav
  full_name: Kungurtsev, Vyacheslav
  last_name: Kungurtsev
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Wu D, Kungurtsev V, Mondelli M. Mean-field analysis for heavy ball methods:
    Dropout-stability, connectivity, and global convergence. In: <i>Transactions on
    Machine Learning Research</i>. ML Research Press; 2023.'
  apa: 'Wu, D., Kungurtsev, V., &#38; Mondelli, M. (2023). Mean-field analysis for
    heavy ball methods: Dropout-stability, connectivity, and global convergence. In
    <i>Transactions on Machine Learning Research</i>. ML Research Press.'
  chicago: 'Wu, Diyuan, Vyacheslav Kungurtsev, and Marco Mondelli. “Mean-Field Analysis
    for Heavy Ball Methods: Dropout-Stability, Connectivity, and Global Convergence.”
    In <i>Transactions on Machine Learning Research</i>. ML Research Press, 2023.'
  ieee: 'D. Wu, V. Kungurtsev, and M. Mondelli, “Mean-field analysis for heavy ball
    methods: Dropout-stability, connectivity, and global convergence,” in <i>Transactions
    on Machine Learning Research</i>, 2023.'
  ista: 'Wu D, Kungurtsev V, Mondelli M. 2023. Mean-field analysis for heavy ball
    methods: Dropout-stability, connectivity, and global convergence. Transactions
    on Machine Learning Research, TMLR.'
  mla: 'Wu, Diyuan, et al. “Mean-Field Analysis for Heavy Ball Methods: Dropout-Stability,
    Connectivity, and Global Convergence.” <i>Transactions on Machine Learning Research</i>,
    ML Research Press, 2023.'
  short: D. Wu, V. Kungurtsev, M. Mondelli, in:, Transactions on Machine Learning
    Research, ML Research Press, 2023.
date_created: 2024-02-02T11:21:56Z
date_published: 2023-02-28T00:00:00Z
date_updated: 2024-09-10T13:03:20Z
day: '28'
department:
- _id: MaMo
external_id:
  arxiv:
  - '2210.06819'
has_accepted_license: '1'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2210.06819
month: '02'
oa: 1
oa_version: Published Version
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Transactions on Machine Learning Research
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
status: public
title: 'Mean-field analysis for heavy ball methods: Dropout-stability, connectivity,
  and global convergence'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '13269'
abstract:
- lang: eng
  text: This paper is a collection of results on combinatorial properties of codes
    for the Z-channel. A Z-channel with error fraction τ takes as input a length-n
    binary codeword and injects in an adversarial manner up to nτ asymmetric errors,
    i.e., errors that only zero out bits but do not flip 0’s to 1’s. It is known that
    the largest (L-1)-list-decodable code for the Z-channel with error fraction
    τ has exponential size (in n) if τ is less than a critical value that we call
    the (L-1)-list-decoding Plotkin point and has constant size if τ is larger
    than the threshold. The (L-1)-list-decoding Plotkin point is known to be L^(-1/(L-1))
    - L^(-L/(L-1)), which equals 1/4 for unique decoding with L-1 = 1. In this paper,
    we derive various results for the size of the largest codes above and below the
    list-decoding Plotkin point. In particular, we show that the largest (L-1)-list-decodable
    code ε-above the Plotkin point, for any given sufficiently small positive constant
    ε > 0, has size Θ_L(ε^(-3/2)) for any L-1 ≥ 1. We also devise upper and lower
    bounds on the exponential size of codes below the list-decoding Plotkin point.
acknowledgement: "Nikita Polyanskii’s research was conducted in part during October
  2020 - December 2021 with the Technical University of Munich and the Skolkovo Institute
  of Science and Technology. His work was supported by the German Research Foundation
  (Deutsche Forschungsgemeinschaft, DFG) under Grant No. WA3907/1-1 and the Russian
  Foundation for Basic Research (RFBR)\r\nunder Grant No. 20-01-00559.\r\nYihan Zhang
  is supported by funding from the European Union’s Horizon 2020 research and innovation
  programme under grant agreement No 682203-ERC-[Inf-Speed-Tradeoff]."
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Nikita
  full_name: Polyanskii, Nikita
  last_name: Polyanskii
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
  orcid: 0000-0002-6465-6258
citation:
  ama: Polyanskii N, Zhang Y. Codes for the Z-channel. <i>IEEE Transactions on Information
    Theory</i>. 2023;69(10):6340-6357. doi:<a href="https://doi.org/10.1109/TIT.2023.3292219">10.1109/TIT.2023.3292219</a>
  apa: Polyanskii, N., &#38; Zhang, Y. (2023). Codes for the Z-channel. <i>IEEE Transactions
    on Information Theory</i>. Institute of Electrical and Electronics Engineers.
    <a href="https://doi.org/10.1109/TIT.2023.3292219">https://doi.org/10.1109/TIT.2023.3292219</a>
  chicago: Polyanskii, Nikita, and Yihan Zhang. “Codes for the Z-Channel.” <i>IEEE
    Transactions on Information Theory</i>. Institute of Electrical and Electronics
    Engineers, 2023. <a href="https://doi.org/10.1109/TIT.2023.3292219">https://doi.org/10.1109/TIT.2023.3292219</a>.
  ieee: N. Polyanskii and Y. Zhang, “Codes for the Z-channel,” <i>IEEE Transactions
    on Information Theory</i>, vol. 69, no. 10. Institute of Electrical and Electronics
    Engineers, pp. 6340–6357, 2023.
  ista: Polyanskii N, Zhang Y. 2023. Codes for the Z-channel. IEEE Transactions on
    Information Theory. 69(10), 6340–6357.
  mla: Polyanskii, Nikita, and Yihan Zhang. “Codes for the Z-Channel.” <i>IEEE Transactions
    on Information Theory</i>, vol. 69, no. 10, Institute of Electrical and Electronics
    Engineers, 2023, pp. 6340–57, doi:<a href="https://doi.org/10.1109/TIT.2023.3292219">10.1109/TIT.2023.3292219</a>.
  short: N. Polyanskii, Y. Zhang, IEEE Transactions on Information Theory 69 (2023)
    6340–6357.
date_created: 2023-07-23T22:01:14Z
date_published: 2023-07-04T00:00:00Z
date_updated: 2024-01-29T11:10:54Z
day: '04'
department:
- _id: MaMo
doi: 10.1109/TIT.2023.3292219
external_id:
  arxiv:
  - '2105.01427'
  isi:
  - '001069680100011'
intvolume: '69'
isi: 1
issue: '10'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2105.01427
month: '07'
oa: 1
oa_version: Preprint
page: 6340-6357
publication: IEEE Transactions on Information Theory
publication_identifier:
  eissn:
  - 1557-9654
  issn:
  - 0018-9448
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: Codes for the Z-channel
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 69
year: '2023'
...
---
_id: '13315'
abstract:
- lang: eng
  text: How do statistical dependencies in measurement noise influence high-dimensional
    inference? To answer this, we study the paradigmatic spiked matrix model of principal
    components analysis (PCA), where a rank-one matrix is corrupted by additive noise.
    We go beyond the usual independence assumption on the noise entries, by drawing
    the noise from a low-order polynomial orthogonal matrix ensemble. The resulting
    noise correlations make the setting relevant for applications but analytically
    challenging. We provide a characterization of the Bayes optimal limits of inference
    in this model. If the spike is rotation invariant, we show that standard spectral
    PCA is optimal. However, for more general priors, both PCA and the existing approximate
    message-passing algorithm (AMP) fall short of achieving the information-theoretic
    limits, which we compute using the replica method from statistical physics. We
    thus propose an AMP, inspired by the theory of adaptive Thouless–Anderson–Palmer
    equations, which is empirically observed to saturate the conjectured theoretical
    limit. This AMP comes with a rigorous state evolution analysis tracking its performance.
    Although we focus on specific noise distributions, our methodology can be generalized
    to a wide class of trace matrix ensembles at the cost of more involved expressions.
    Finally, despite the seemingly strong assumption of rotation-invariant noise,
    our theory empirically predicts algorithmic performance on real data, pointing
    at strong universality properties.
acknowledgement: J.B. was funded by the European Union (ERC, CHORAL, project number
  101039794). Views and opinions expressed are however those of the author(s) only
  and do not necessarily reflect those of the European Union or the European Research
  Council. Neither the European Union nor the granting authority can be held responsible
  for them. M.M. was supported by the 2019 Lopez-Loreta Prize. We would like to thank
  the reviewers for the insightful comments and, in particular, for suggesting the
  BAMP-inspired denoisers leading to AMP-AP.
article_number: e2302028120
article_processing_charge: Yes (in subscription journal)
article_type: original
author:
- first_name: Jean
  full_name: Barbier, Jean
  last_name: Barbier
- first_name: Francesco
  full_name: Camilli, Francesco
  last_name: Camilli
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
- first_name: Manuel
  full_name: Sáenz, Manuel
  last_name: Sáenz
citation:
  ama: Barbier J, Camilli F, Mondelli M, Sáenz M. Fundamental limits in structured
    principal component analysis and how to reach them. <i>Proceedings of the National
    Academy of Sciences of the United States of America</i>. 2023;120(30). doi:<a
    href="https://doi.org/10.1073/pnas.2302028120">10.1073/pnas.2302028120</a>
  apa: Barbier, J., Camilli, F., Mondelli, M., &#38; Sáenz, M. (2023). Fundamental
    limits in structured principal component analysis and how to reach them. <i>Proceedings
    of the National Academy of Sciences of the United States of America</i>. National
    Academy of Sciences. <a href="https://doi.org/10.1073/pnas.2302028120">https://doi.org/10.1073/pnas.2302028120</a>
  chicago: Barbier, Jean, Francesco Camilli, Marco Mondelli, and Manuel Sáenz. “Fundamental
    Limits in Structured Principal Component Analysis and How to Reach Them.” <i>Proceedings
    of the National Academy of Sciences of the United States of America</i>. National
    Academy of Sciences, 2023. <a href="https://doi.org/10.1073/pnas.2302028120">https://doi.org/10.1073/pnas.2302028120</a>.
  ieee: J. Barbier, F. Camilli, M. Mondelli, and M. Sáenz, “Fundamental limits in
    structured principal component analysis and how to reach them,” <i>Proceedings
    of the National Academy of Sciences of the United States of America</i>, vol.
    120, no. 30. National Academy of Sciences, 2023.
  ista: Barbier J, Camilli F, Mondelli M, Sáenz M. 2023. Fundamental limits in structured
    principal component analysis and how to reach them. Proceedings of the National
    Academy of Sciences of the United States of America. 120(30), e2302028120.
  mla: Barbier, Jean, et al. “Fundamental Limits in Structured Principal Component
    Analysis and How to Reach Them.” <i>Proceedings of the National Academy of Sciences
    of the United States of America</i>, vol. 120, no. 30, e2302028120, National Academy
    of Sciences, 2023, doi:<a href="https://doi.org/10.1073/pnas.2302028120">10.1073/pnas.2302028120</a>.
  short: J. Barbier, F. Camilli, M. Mondelli, M. Sáenz, Proceedings of the National
    Academy of Sciences of the United States of America 120 (2023).
date_created: 2023-07-30T22:01:02Z
date_published: 2023-07-25T00:00:00Z
date_updated: 2024-09-10T13:03:18Z
day: '25'
ddc:
- '000'
department:
- _id: MaMo
doi: 10.1073/pnas.2302028120
external_id:
  pmid:
  - '37463204'
file:
- access_level: open_access
  checksum: 1fc06228afdb3aa80cf8e7766bcf9dc5
  content_type: application/pdf
  creator: dernst
  date_created: 2023-07-31T07:30:48Z
  date_updated: 2023-07-31T07:30:48Z
  file_id: '13323'
  file_name: 2023_PNAS_Barbier.pdf
  file_size: 995933
  relation: main_file
  success: 1
file_date_updated: 2023-07-31T07:30:48Z
has_accepted_license: '1'
intvolume: '120'
issue: '30'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
pmid: 1
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Proceedings of the National Academy of Sciences of the United States
  of America
publication_identifier:
  eissn:
  - 1091-6490
publication_status: published
publisher: National Academy of Sciences
quality_controlled: '1'
related_material:
  link:
  - relation: software
    url: https://github.com/fcamilli95/Structured-PCA-
scopus_import: '1'
status: public
title: Fundamental limits in structured principal component analysis and how to reach
  them
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 120
year: '2023'
...
---
_id: '13321'
abstract:
- lang: eng
  text: We consider the problem of reconstructing the signal and the hidden variables
    from observations coming from a multi-layer network with rotationally invariant
    weight matrices. The multi-layer structure models inference from deep generative
    priors, and the rotational invariance imposed on the weights generalizes the i.i.d.
    Gaussian assumption by allowing for a complex correlation structure, which is
    typical in applications. In this work, we present a new class of approximate message
    passing (AMP) algorithms and give a state evolution recursion which precisely
    characterizes their performance in the large system limit. In contrast with the
    existing multi-layer VAMP (ML-VAMP) approach, our proposed AMP – dubbed multilayer
    rotationally invariant generalized AMP (ML-RI-GAMP) – provides a natural generalization
    beyond Gaussian designs, in the sense that it recovers the existing Gaussian AMP
    as a special case. Furthermore, ML-RI-GAMP exhibits a significantly lower complexity
    than ML-VAMP, as the computationally intensive singular value decomposition is
    replaced by an estimation of the moments of the design matrices. Finally, our
    numerical results show that this complexity gain comes at little to no cost in
    the performance of the algorithm.
acknowledgement: Marco Mondelli was partially supported by the 2019 Lopez-Loreta prize.
article_processing_charge: No
arxiv: 1
author:
- first_name: Yizhou
  full_name: Xu, Yizhou
  last_name: Xu
- first_name: Tian Qi
  full_name: Hou, Tian Qi
  last_name: Hou
- first_name: Shan Suo
  full_name: Liang, Shan Suo
  last_name: Liang
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Xu Y, Hou TQ, Liang SS, Mondelli M. Approximate message passing for multi-layer
    estimation in rotationally invariant models. In: <i>2023 IEEE Information Theory
    Workshop</i>. Institute of Electrical and Electronics Engineers; 2023:294-298.
    doi:<a href="https://doi.org/10.1109/ITW55543.2023.10160238">10.1109/ITW55543.2023.10160238</a>'
  apa: 'Xu, Y., Hou, T. Q., Liang, S. S., &#38; Mondelli, M. (2023). Approximate message
    passing for multi-layer estimation in rotationally invariant models. In <i>2023
    IEEE Information Theory Workshop</i> (pp. 294–298). Saint-Malo, France: Institute
    of Electrical and Electronics Engineers. <a href="https://doi.org/10.1109/ITW55543.2023.10160238">https://doi.org/10.1109/ITW55543.2023.10160238</a>'
  chicago: Xu, Yizhou, Tian Qi Hou, Shan Suo Liang, and Marco Mondelli. “Approximate
    Message Passing for Multi-Layer Estimation in Rotationally Invariant Models.”
    In <i>2023 IEEE Information Theory Workshop</i>, 294–98. Institute of Electrical
    and Electronics Engineers, 2023. <a href="https://doi.org/10.1109/ITW55543.2023.10160238">https://doi.org/10.1109/ITW55543.2023.10160238</a>.
  ieee: Y. Xu, T. Q. Hou, S. S. Liang, and M. Mondelli, “Approximate message passing
    for multi-layer estimation in rotationally invariant models,” in <i>2023 IEEE
    Information Theory Workshop</i>, Saint-Malo, France, 2023, pp. 294–298.
  ista: 'Xu Y, Hou TQ, Liang SS, Mondelli M. 2023. Approximate message passing for
    multi-layer estimation in rotationally invariant models. 2023 IEEE Information
    Theory Workshop. ITW: Information Theory Workshop, 294–298.'
  mla: Xu, Yizhou, et al. “Approximate Message Passing for Multi-Layer Estimation
    in Rotationally Invariant Models.” <i>2023 IEEE Information Theory Workshop</i>,
    Institute of Electrical and Electronics Engineers, 2023, pp. 294–98, doi:<a href="https://doi.org/10.1109/ITW55543.2023.10160238">10.1109/ITW55543.2023.10160238</a>.
  short: Y. Xu, T.Q. Hou, S.S. Liang, M. Mondelli, in:, 2023 IEEE Information Theory
    Workshop, Institute of Electrical and Electronics Engineers, 2023, pp. 294–298.
conference:
  end_date: 2023-04-28
  location: Saint-Malo, France
  name: 'ITW: Information Theory Workshop'
  start_date: 2023-04-23
date_created: 2023-07-30T22:01:04Z
date_published: 2023-05-01T00:00:00Z
date_updated: 2024-09-10T13:03:19Z
day: '01'
department:
- _id: MaMo
doi: 10.1109/ITW55543.2023.10160238
external_id:
  arxiv:
  - '2212.01572'
  isi:
  - '001031733100053'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2212.01572
month: '05'
oa: 1
oa_version: Preprint
page: 294-298
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: 2023 IEEE Information Theory Workshop
publication_identifier:
  eissn:
  - 2475-4218
  isbn:
  - '9798350301496'
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
scopus_import: '1'
status: public
title: Approximate message passing for multi-layer estimation in rotationally invariant
  models
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2023'
...
---
_id: '14083'
abstract:
- lang: eng
  text: "In this work we consider the list-decodability and list-recoverability of
    arbitrary q-ary codes, for all integer values of q ≥ 2. A code is called (p,L)_q-list-decodable
    if every radius pn Hamming ball contains less than L codewords; (p,\U0001D4C1,L)_q-list-recoverability
    is a generalization where we place radius pn Hamming balls on every point of a
    combinatorial rectangle with side length \U0001D4C1 and again stipulate that there
    be less than L codewords.\r\nOur main contribution is to precisely calculate the
    maximum value of p for which there exist infinite families of positive rate (p,\U0001D4C1,L)_q-list-recoverable
    codes, the quantity we call the zero-rate threshold. Denoting this value by p_*,
    we in fact show that codes correcting a p_*+ε fraction of errors must have size
    O_ε(1), i.e., independent of n. Such a result is typically referred to as a \"Plotkin
    bound.\" To complement this, a standard random code with expurgation construction
    shows that there exist positive rate codes correcting a p_*-ε fraction of errors.
    We also follow a classical proof template (typically attributed to Elias and Bassalygo)
    to derive from the zero-rate threshold other tradeoffs between rate and decoding
    radius for list-decoding and list-recovery.\r\nTechnically, proving the Plotkin
    bound boils down to demonstrating the Schur convexity of a certain function defined
    on the q-simplex as well as the convexity of a univariate function derived from
    it. We remark that an earlier argument claimed similar results for q-ary list-decoding;
    however, we point out that this earlier proof is flawed."
acknowledgement: "Nicolas Resch: Research supported in part by ERC H2020 grant No.74079
  (ALGSTRONGCRYPTO). Chen Yuan: Research supported in part by the National Key Research
  and Development Projects under Grant 2022YFA1004900 and Grant 2021YFE0109900, the
  National Natural Science Foundation of China under Grant 12101403 and Grant 12031011.\r\nAcknowledgements
  YZ is grateful to Shashank Vatedka, Diyuan Wu and Fengxing Zhu for inspiring discussions."
alternative_title:
- LIPIcs
article_number: '99'
article_processing_charge: Yes
arxiv: 1
author:
- first_name: Nicolas
  full_name: Resch, Nicolas
  last_name: Resch
- first_name: Chen
  full_name: Yuan, Chen
  last_name: Yuan
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
  orcid: 0000-0002-6465-6258
citation:
  ama: 'Resch N, Yuan C, Zhang Y. Zero-rate thresholds and new capacity bounds for
    list-decoding and list-recovery. In: <i>50th International Colloquium on Automata,
    Languages, and Programming</i>. Vol 261. Schloss Dagstuhl - Leibniz-Zentrum für
    Informatik; 2023. doi:<a href="https://doi.org/10.4230/LIPIcs.ICALP.2023.99">10.4230/LIPIcs.ICALP.2023.99</a>'
  apa: 'Resch, N., Yuan, C., &#38; Zhang, Y. (2023). Zero-rate thresholds and new
    capacity bounds for list-decoding and list-recovery. In <i>50th International
    Colloquium on Automata, Languages, and Programming</i> (Vol. 261). Paderborn,
    Germany: Schloss Dagstuhl - Leibniz-Zentrum für Informatik. <a href="https://doi.org/10.4230/LIPIcs.ICALP.2023.99">https://doi.org/10.4230/LIPIcs.ICALP.2023.99</a>'
  chicago: Resch, Nicolas, Chen Yuan, and Yihan Zhang. “Zero-Rate Thresholds and New
    Capacity Bounds for List-Decoding and List-Recovery.” In <i>50th International
    Colloquium on Automata, Languages, and Programming</i>, Vol. 261. Schloss Dagstuhl
    - Leibniz-Zentrum für Informatik, 2023. <a href="https://doi.org/10.4230/LIPIcs.ICALP.2023.99">https://doi.org/10.4230/LIPIcs.ICALP.2023.99</a>.
  ieee: N. Resch, C. Yuan, and Y. Zhang, “Zero-rate thresholds and new capacity bounds
    for list-decoding and list-recovery,” in <i>50th International Colloquium on Automata,
    Languages, and Programming</i>, Paderborn, Germany, 2023, vol. 261.
  ista: 'Resch N, Yuan C, Zhang Y. 2023. Zero-rate thresholds and new capacity bounds
    for list-decoding and list-recovery. 50th International Colloquium on Automata,
    Languages, and Programming. ICALP: International Colloquium on Automata, Languages,
    and Programming, LIPIcs, vol. 261, 99.'
  mla: Resch, Nicolas, et al. “Zero-Rate Thresholds and New Capacity Bounds for List-Decoding
    and List-Recovery.” <i>50th International Colloquium on Automata, Languages, and
    Programming</i>, vol. 261, 99, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
    2023, doi:<a href="https://doi.org/10.4230/LIPIcs.ICALP.2023.99">10.4230/LIPIcs.ICALP.2023.99</a>.
  short: N. Resch, C. Yuan, Y. Zhang, in:, 50th International Colloquium on Automata,
    Languages, and Programming, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
    2023.
conference:
  end_date: 2023-07-14
  location: Paderborn, Germany
  name: 'ICALP: International Colloquium on Automata, Languages, and Programming'
  start_date: 2023-07-10
date_created: 2023-08-20T22:01:13Z
date_published: 2023-07-01T00:00:00Z
date_updated: 2023-08-21T07:26:01Z
day: '01'
ddc:
- '000'
department:
- _id: MaMo
doi: 10.4230/LIPIcs.ICALP.2023.99
external_id:
  arxiv:
  - '2210.07754'
file:
- access_level: open_access
  checksum: a449143fec3fbebb092cb8ef3b53c226
  content_type: application/pdf
  creator: dernst
  date_created: 2023-08-21T07:23:18Z
  date_updated: 2023-08-21T07:23:18Z
  file_id: '14091'
  file_name: 2023_LIPIcsICALP_Resch.pdf
  file_size: 1141497
  relation: main_file
  success: 1
file_date_updated: 2023-08-21T07:23:18Z
has_accepted_license: '1'
intvolume: '261'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
publication: 50th International Colloquium on Automata, Languages, and Programming
publication_identifier:
  isbn:
  - '9783959772785'
  issn:
  - 1868-8969
publication_status: published
publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
quality_controlled: '1'
scopus_import: '1'
status: public
title: Zero-rate thresholds and new capacity bounds for list-decoding and list-recovery
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 261
year: '2023'
...
---
_id: '12838'
abstract:
- lang: eng
  text: We study the problem of high-dimensional multiple packing in Euclidean space.
    Multiple packing is a natural generalization of sphere packing and is defined
    as follows. Let N > 0 and L ∈ Z_{≥2}. A multiple packing is a set C of points
    in R^n such that any point in R^n lies in the intersection of at most L - 1 balls
    of radius √(nN) around points in C. Given a well-known connection with coding
    theory, multiple packings can be viewed as the Euclidean analog of list-decodable
    codes, which are well-studied for finite fields. In this paper, we derive the
    best known lower bounds on the optimal density of list-decodable infinite constellations
    for constant L under a stronger notion called average-radius multiple packing.
    To this end, we apply tools from high-dimensional geometry and large deviation
    theory.
acknowledgement: "YZ thanks Jiajin Li for making the observation given by Equation
  (23). He also would like to thank Nir Ailon and Ely Porat for several helpful conversations
  throughout this project, and Alexander Barg for insightful comments on the manuscript.\r\nYZ
  has received funding from the European Union’s Horizon 2020 research and innovation
  programme under grant agreement No 682203-ERC-[Inf-Speed-Tradeoff]. The work of
  SV was supported by a seed grant from IIT Hyderabad and the start-up research grant
  from the Science and Engineering Research Board, India (SRG/2020/000910)."
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
  orcid: 0000-0002-6465-6258
- first_name: Shashank
  full_name: Vatedka, Shashank
  last_name: Vatedka
citation:
  ama: 'Zhang Y, Vatedka S. Multiple packing: Lower bounds via infinite constellations.
    <i>IEEE Transactions on Information Theory</i>. 2023;69(7):4513-4527. doi:<a href="https://doi.org/10.1109/TIT.2023.3260950">10.1109/TIT.2023.3260950</a>'
  apa: 'Zhang, Y., &#38; Vatedka, S. (2023). Multiple packing: Lower bounds via infinite
    constellations. <i>IEEE Transactions on Information Theory</i>. IEEE. <a href="https://doi.org/10.1109/TIT.2023.3260950">https://doi.org/10.1109/TIT.2023.3260950</a>'
  chicago: 'Zhang, Yihan, and Shashank Vatedka. “Multiple Packing: Lower Bounds via
    Infinite Constellations.” <i>IEEE Transactions on Information Theory</i>. IEEE,
    2023. <a href="https://doi.org/10.1109/TIT.2023.3260950">https://doi.org/10.1109/TIT.2023.3260950</a>.'
  ieee: 'Y. Zhang and S. Vatedka, “Multiple packing: Lower bounds via infinite constellations,”
    <i>IEEE Transactions on Information Theory</i>, vol. 69, no. 7. IEEE, pp. 4513–4527,
    2023.'
  ista: 'Zhang Y, Vatedka S. 2023. Multiple packing: Lower bounds via infinite constellations.
    IEEE Transactions on Information Theory. 69(7), 4513–4527.'
  mla: 'Zhang, Yihan, and Shashank Vatedka. “Multiple Packing: Lower Bounds via Infinite
    Constellations.” <i>IEEE Transactions on Information Theory</i>, vol. 69, no.
    7, IEEE, 2023, pp. 4513–27, doi:<a href="https://doi.org/10.1109/TIT.2023.3260950">10.1109/TIT.2023.3260950</a>.'
  short: Y. Zhang, S. Vatedka, IEEE Transactions on Information Theory 69 (2023) 4513–4527.
date_created: 2023-04-16T22:01:09Z
date_published: 2023-07-01T00:00:00Z
date_updated: 2023-12-13T11:16:46Z
day: '01'
department:
- _id: MaMo
doi: 10.1109/TIT.2023.3260950
external_id:
  arxiv:
  - '2211.04407'
  isi:
  - '001017307000023'
intvolume: '69'
isi: 1
issue: '7'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2211.04407
month: '07'
oa: 1
oa_version: Preprint
page: 4513-4527
publication: IEEE Transactions on Information Theory
publication_identifier:
  eissn:
  - 1557-9654
  issn:
  - 0018-9448
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Multiple packing: Lower bounds via infinite constellations'
type: journal_article
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 69
year: '2023'
...
---
_id: '12859'
abstract:
- lang: eng
  text: 'Machine learning models are vulnerable to adversarial perturbations, and
    a thought-provoking paper by Bubeck and Sellke has analyzed this phenomenon through
    the lens of over-parameterization: interpolating smoothly the data requires significantly
    more parameters than simply memorizing it. However, this "universal" law provides
    only a necessary condition for robustness, and it is unable to discriminate between
    models. In this paper, we address these gaps by focusing on empirical risk minimization
    in two prototypical settings, namely, random features and the neural tangent kernel
    (NTK). We prove that, for random features, the model is not robust for any degree
    of over-parameterization, even when the necessary condition coming from the universal
    law of robustness is satisfied. In contrast, for even activations, the NTK model
    meets the universal lower bound, and it is robust as soon as the necessary condition
    on over-parameterization is fulfilled. This also addresses a conjecture in prior
    work by Bubeck, Li and Nagaraj. Our analysis decouples the effect of the kernel
    of the model from an "interaction matrix", which describes the interaction with
    the test data and captures the effect of the activation. Our theoretical results
    are corroborated by numerical evidence on both synthetic and standard datasets
    (MNIST, CIFAR-10).'
acknowledgement: "Simone Bombari and Marco Mondelli were partially supported by the
  2019 Lopez-Loreta prize, and\r\nthe authors would like to thank Hamed Hassani for
  helpful discussions.\r\n"
alternative_title:
- PMLR
article_processing_charge: No
arxiv: 1
author:
- first_name: Simone
  full_name: Bombari, Simone
  id: ca726dda-de17-11ea-bc14-f9da834f63aa
  last_name: Bombari
- first_name: Shayan
  full_name: Kiyani, Shayan
  id: f5a2b424-e339-11ed-8435-ff3b4fe70cf8
  last_name: Kiyani
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: 'Bombari S, Kiyani S, Mondelli M. Beyond the universal law of robustness: Sharper
    laws for random features and neural tangent kernels. In: <i>Proceedings of the
    40th International Conference on Machine Learning</i>. Vol 202. ML Research Press;
    2023:2738-2776.'
  apa: 'Bombari, S., Kiyani, S., &#38; Mondelli, M. (2023). Beyond the universal law
    of robustness: Sharper laws for random features and neural tangent kernels. In
    <i>Proceedings of the 40th International Conference on Machine Learning</i> (Vol.
    202, pp. 2738–2776). Honolulu, HI, United States: ML Research Press.'
  chicago: 'Bombari, Simone, Shayan Kiyani, and Marco Mondelli. “Beyond the Universal
    Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels.”
    In <i>Proceedings of the 40th International Conference on Machine Learning</i>,
    202:2738–76. ML Research Press, 2023.'
  ieee: 'S. Bombari, S. Kiyani, and M. Mondelli, “Beyond the universal law of robustness:
    Sharper laws for random features and neural tangent kernels,” in <i>Proceedings
    of the 40th International Conference on Machine Learning</i>, Honolulu, HI, United
    States, 2023, vol. 202, pp. 2738–2776.'
  ista: 'Bombari S, Kiyani S, Mondelli M. 2023. Beyond the universal law of robustness:
    Sharper laws for random features and neural tangent kernels. Proceedings of the
    40th International Conference on Machine Learning. ICML: International Conference
    on Machine Learning, PMLR, vol. 202, 2738–2776.'
  mla: 'Bombari, Simone, et al. “Beyond the Universal Law of Robustness: Sharper Laws
    for Random Features and Neural Tangent Kernels.” <i>Proceedings of the 40th International
    Conference on Machine Learning</i>, vol. 202, ML Research Press, 2023, pp. 2738–76.'
  short: S. Bombari, S. Kiyani, M. Mondelli, in:, Proceedings of the 40th International
    Conference on Machine Learning, ML Research Press, 2023, pp. 2738–2776.
conference:
  end_date: 2023-07-29
  location: Honolulu, HI, United States
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2023-07-23
date_created: 2023-04-23T16:11:03Z
date_published: 2023-10-27T00:00:00Z
date_updated: 2024-09-10T13:03:19Z
day: '27'
department:
- _id: GradSch
- _id: MaMo
external_id:
  arxiv:
  - '2302.01629'
intvolume: '202'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2302.01629
month: '10'
oa: 1
oa_version: Preprint
page: 2738-2776
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Proceedings of the 40th International Conference on Machine Learning
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
  link:
  - relation: software
    url: https://github.com/simone-bombari/beyond-universal-robustness
status: public
title: 'Beyond the universal law of robustness: Sharper laws for random features and
  neural tangent kernels'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 202
year: '2023'
...
---
_id: '11420'
abstract:
- lang: eng
  text: 'Understanding the properties of neural networks trained via stochastic gradient
    descent (SGD) is at the heart of the theory of deep learning. In this work, we
    take a mean-field view, and consider a two-layer ReLU network trained via noisy-SGD
    for a univariate regularized regression problem. Our main result is that SGD with
    vanishingly small noise injected in the gradients is biased towards a simple solution:
    at convergence, the ReLU network implements a piecewise linear map of the inputs,
    and the number of “knot” points -- i.e., points where the tangent of the ReLU
    network estimator changes -- between two consecutive training inputs is at most
    three. In particular, as the number of neurons of the network grows, the SGD dynamics
    is captured by the solution of a gradient flow and, at convergence, the distribution
    of the weights approaches the unique minimizer of a related free energy, which
    has a Gibbs form. Our key technical contribution consists in the analysis of the
    estimator resulting from this minimizer: we show that its second derivative vanishes
    everywhere, except at some specific locations which represent the “knot” points.
    We also provide empirical evidence that knots at locations distinct from the data
    points might occur, as predicted by our theory.'
acknowledgement: "We would like to thank Mert Pilanci for several exploratory discussions
  in the early stage of the project, Jan Maas for clarifications about Jordan et
  al. (1998), and Max Zimmer for suggestive numerical experiments. A. Shevchenko
  and M. Mondelli are partially supported by the 2019 Lopez-Loreta Prize. V. Kungurtsev
  acknowledges support to the OP VVV project CZ.02.1.01/0.0/0.0/16 019/0000765
  Research Center for Informatics."
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Aleksandr
  full_name: Shevchenko, Aleksandr
  id: F2B06EC2-C99E-11E9-89F0-752EE6697425
  last_name: Shevchenko
- first_name: Vyacheslav
  full_name: Kungurtsev, Vyacheslav
  last_name: Kungurtsev
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
citation:
  ama: Shevchenko A, Kungurtsev V, Mondelli M. Mean-field analysis of piecewise linear
    solutions for wide ReLU networks. <i>Journal of Machine Learning Research</i>.
    2022;23(130):1-55.
  apa: Shevchenko, A., Kungurtsev, V., &#38; Mondelli, M. (2022). Mean-field analysis
    of piecewise linear solutions for wide ReLU networks. <i>Journal of Machine Learning
    Research</i>. Journal of Machine Learning Research.
  chicago: Shevchenko, Aleksandr, Vyacheslav Kungurtsev, and Marco Mondelli. “Mean-Field
    Analysis of Piecewise Linear Solutions for Wide ReLU Networks.” <i>Journal of
    Machine Learning Research</i>. Journal of Machine Learning Research, 2022.
  ieee: A. Shevchenko, V. Kungurtsev, and M. Mondelli, “Mean-field analysis of piecewise
    linear solutions for wide ReLU networks,” <i>Journal of Machine Learning Research</i>,
    vol. 23, no. 130. Journal of Machine Learning Research, pp. 1–55, 2022.
  ista: Shevchenko A, Kungurtsev V, Mondelli M. 2022. Mean-field analysis of piecewise
    linear solutions for wide ReLU networks. Journal of Machine Learning Research.
    23(130), 1–55.
  mla: Shevchenko, Aleksandr, et al. “Mean-Field Analysis of Piecewise Linear Solutions
    for Wide ReLU Networks.” <i>Journal of Machine Learning Research</i>, vol. 23,
    no. 130, Journal of Machine Learning Research, 2022, pp. 1–55.
  short: A. Shevchenko, V. Kungurtsev, M. Mondelli, Journal of Machine Learning Research
    23 (2022) 1–55.
date_created: 2022-05-29T22:01:54Z
date_published: 2022-04-01T00:00:00Z
date_updated: 2024-09-10T13:03:17Z
day: '01'
ddc:
- '000'
department:
- _id: MaMo
- _id: DaAl
external_id:
  arxiv:
  - '2111.02278'
file:
- access_level: open_access
  checksum: d4ff5d1affb34848b5c5e4002483fc62
  content_type: application/pdf
  creator: cchlebak
  date_created: 2022-05-30T08:22:55Z
  date_updated: 2022-05-30T08:22:55Z
  file_id: '11422'
  file_name: 21-1365.pdf
  file_size: 1521701
  relation: main_file
  success: 1
file_date_updated: 2022-05-30T08:22:55Z
has_accepted_license: '1'
intvolume: '23'
issue: '130'
language:
- iso: eng
month: '04'
oa: 1
oa_version: Published Version
page: 1-55
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: Journal of Machine Learning Research
publication_identifier:
  eissn:
  - 1533-7928
  issn:
  - 1532-4435
publication_status: published
publisher: Journal of Machine Learning Research
quality_controlled: '1'
related_material:
  link:
  - relation: other
    url: https://www.jmlr.org/papers/v23/21-1365.html
scopus_import: '1'
status: public
title: Mean-field analysis of piecewise linear solutions for wide ReLU networks
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: 8b945eb4-e2f2-11eb-945a-df72226e66a9
volume: 23
year: '2022'
...
---
_id: '11639'
abstract:
- lang: eng
  text: We study the list decodability of different ensembles of codes over the real
    alphabet under the assumption of an omniscient adversary. It is a well-known result
    that when the source and the adversary have power constraints P and N respectively,
    the list decoding capacity is equal to (1/2) log(P/N). Random spherical codes achieve
    constant list sizes, and the goal of the present paper is to obtain a better understanding
    of the smallest achievable list size as a function of the gap to capacity. We
    show a reduction from arbitrary codes to spherical codes, and derive a lower bound
    on the list size of typical random spherical codes. We also give an upper bound
    on the list size achievable using nested Construction-A lattices and infinite
    Construction-A lattices. We then define and study a class of infinite constellations
    that generalize Construction-A lattices and prove upper and lower bounds for the
    same. Other goodness properties such as packing goodness and AWGN goodness of
    infinite constellations are proved along the way. Finally, we consider random
    lattices sampled from the Haar distribution and show that if a certain conjecture
    that originates in analytic number theory is true, then the list size grows as
    a polynomial function of the gap-to-capacity.
acknowledgement: "This work was done when Shashank Vatedka was at the Chinese University
  of Hong Kong, where he was supported in part by CUHK Direct Grants 4055039 and 4055077.
  He would like to acknowledge funding from a seed grant offered by IIT Hyderabad
  and the Start-up Research Grant (SRG/2020/000910) from the Science and Engineering
  Board, India. Yihan Zhang has received funding from the European Union’s Horizon
  2020 research and innovation programme under grant agreement No 682203-ERC-[Inf-Speed-Tradeoff]."
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
- first_name: Shashank
  full_name: Vatedka, Shashank
  last_name: Vatedka
citation:
  ama: Zhang Y, Vatedka S. List decoding random Euclidean codes and Infinite constellations.
    <i>IEEE Transactions on Information Theory</i>. 2022;68(12):7753-7786. doi:<a
    href="https://doi.org/10.1109/TIT.2022.3189542">10.1109/TIT.2022.3189542</a>
  apa: Zhang, Y., &#38; Vatedka, S. (2022). List decoding random Euclidean codes and
    Infinite constellations. <i>IEEE Transactions on Information Theory</i>. IEEE.
    <a href="https://doi.org/10.1109/TIT.2022.3189542">https://doi.org/10.1109/TIT.2022.3189542</a>
  chicago: Zhang, Yihan, and Shashank Vatedka. “List Decoding Random Euclidean Codes
    and Infinite Constellations.” <i>IEEE Transactions on Information Theory</i>.
    IEEE, 2022. <a href="https://doi.org/10.1109/TIT.2022.3189542">https://doi.org/10.1109/TIT.2022.3189542</a>.
  ieee: Y. Zhang and S. Vatedka, “List decoding random Euclidean codes and Infinite
    constellations,” <i>IEEE Transactions on Information Theory</i>, vol. 68, no.
    12. IEEE, pp. 7753–7786, 2022.
  ista: Zhang Y, Vatedka S. 2022. List decoding random Euclidean codes and Infinite
    constellations. IEEE Transactions on Information Theory. 68(12), 7753–7786.
  mla: Zhang, Yihan, and Shashank Vatedka. “List Decoding Random Euclidean Codes and
    Infinite Constellations.” <i>IEEE Transactions on Information Theory</i>, vol.
    68, no. 12, IEEE, 2022, pp. 7753–86, doi:<a href="https://doi.org/10.1109/TIT.2022.3189542">10.1109/TIT.2022.3189542</a>.
  short: Y. Zhang, S. Vatedka, IEEE Transactions on Information Theory 68 (2022) 7753–7786.
date_created: 2022-07-24T22:01:42Z
date_published: 2022-12-01T00:00:00Z
date_updated: 2023-08-03T12:12:19Z
day: '01'
department:
- _id: MaMo
doi: 10.1109/TIT.2022.3189542
external_id:
  arxiv:
  - '1901.03790'
  isi:
  - '000891796100007'
intvolume: '68'
isi: 1
issue: '12'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.1901.03790
month: '12'
oa: 1
oa_version: Preprint
page: 7753-7786
publication: IEEE Transactions on Information Theory
publication_identifier:
  eissn:
  - 1557-9654
  issn:
  - 0018-9448
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: List decoding random Euclidean codes and Infinite constellations
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 68
year: '2022'
...
---
_id: '10364'
abstract:
- lang: eng
  text: 'This paper characterizes the latency of the simplified successive-cancellation
    (SSC) decoding scheme for polar codes under hardware resource constraints. In
    particular, when the number of processing elements P that can perform SSC decoding
    operations in parallel is limited, as is the case in practice, the latency of
    SSC decoding is O(N^(1-1/μ) + (N/P) log2 log2 (N/P)), where N is the block length
    of the code and μ is the scaling exponent of the channel. Three direct consequences
    of this bound are presented. First, in a fully-parallel implementation where P
    = N/2, the latency of SSC decoding is O(N^(1-1/μ)), which is sublinear in the
    block length. This recovers a result from our earlier work. Second, in a fully-serial
    implementation where P = 1, the latency of SSC decoding scales as O(N log2 log2
    N). The multiplicative constant is also calculated: we show that the latency of
    SSC decoding when P = 1 is given by (2 + o(1)) N log2 log2 N. Third, in a semi-parallel
    implementation, the smallest P that gives the same latency as that of the fully-parallel
    implementation is P = N^(1/μ). The tightness of our bound on SSC decoding latency
    and the applicability of the foregoing results are validated through extensive
    simulations.'
acknowledgement: "S. A. Hashemi is supported by a Postdoctoral Fellowship from the
  Natural Sciences and Engineering Research Council of Canada (NSERC) and by Huawei.
  M. Mondelli is partially supported by the 2019 Lopez-Loreta Prize. A. Fazeli
  and A. Vardy were supported in part by the National Science Foundation under
  Grant CCF-1764104."
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Seyyed Ali
  full_name: Hashemi, Seyyed Ali
  last_name: Hashemi
- first_name: Marco
  full_name: Mondelli, Marco
  id: 27EB676C-8706-11E9-9510-7717E6697425
  last_name: Mondelli
  orcid: 0000-0002-3242-7020
- first_name: Arman
  full_name: Fazeli, Arman
  last_name: Fazeli
- first_name: Alexander
  full_name: Vardy, Alexander
  last_name: Vardy
- first_name: John
  full_name: Cioffi, John
  last_name: Cioffi
- first_name: Andrea
  full_name: Goldsmith, Andrea
  last_name: Goldsmith
citation:
  ama: Hashemi SA, Mondelli M, Fazeli A, Vardy A, Cioffi J, Goldsmith A. Parallelism
    versus latency in simplified successive-cancellation decoding of polar codes.
    <i>IEEE Transactions on Wireless Communications</i>. 2022;21(6):3909-3920. doi:<a
    href="https://doi.org/10.1109/TWC.2021.3125626">10.1109/TWC.2021.3125626</a>
  apa: Hashemi, S. A., Mondelli, M., Fazeli, A., Vardy, A., Cioffi, J., &#38; Goldsmith,
    A. (2022). Parallelism versus latency in simplified successive-cancellation decoding
    of polar codes. <i>IEEE Transactions on Wireless Communications</i>. Institute
    of Electrical and Electronics Engineers. <a href="https://doi.org/10.1109/TWC.2021.3125626">https://doi.org/10.1109/TWC.2021.3125626</a>
  chicago: Hashemi, Seyyed Ali, Marco Mondelli, Arman Fazeli, Alexander Vardy, John
    Cioffi, and Andrea Goldsmith. “Parallelism versus Latency in Simplified Successive-Cancellation
    Decoding of Polar Codes.” <i>IEEE Transactions on Wireless Communications</i>.
    Institute of Electrical and Electronics Engineers, 2022. <a href="https://doi.org/10.1109/TWC.2021.3125626">https://doi.org/10.1109/TWC.2021.3125626</a>.
  ieee: S. A. Hashemi, M. Mondelli, A. Fazeli, A. Vardy, J. Cioffi, and A. Goldsmith,
    “Parallelism versus latency in simplified successive-cancellation decoding of
    polar codes,” <i>IEEE Transactions on Wireless Communications</i>, vol. 21, no.
    6. Institute of Electrical and Electronics Engineers, pp. 3909–3920, 2022.
  ista: Hashemi SA, Mondelli M, Fazeli A, Vardy A, Cioffi J, Goldsmith A. 2022. Parallelism
    versus latency in simplified successive-cancellation decoding of polar codes.
    IEEE Transactions on Wireless Communications. 21(6), 3909–3920.
  mla: Hashemi, Seyyed Ali, et al. “Parallelism versus Latency in Simplified Successive-Cancellation
    Decoding of Polar Codes.” <i>IEEE Transactions on Wireless Communications</i>,
    vol. 21, no. 6, Institute of Electrical and Electronics Engineers, 2022, pp. 3909–20,
    doi:<a href="https://doi.org/10.1109/TWC.2021.3125626">10.1109/TWC.2021.3125626</a>.
  short: S.A. Hashemi, M. Mondelli, A. Fazeli, A. Vardy, J. Cioffi, A. Goldsmith,
    IEEE Transactions on Wireless Communications 21 (2022) 3909–3920.
date_created: 2021-11-28T23:01:29Z
date_published: 2022-06-01T00:00:00Z
date_updated: 2024-09-10T13:03:18Z
day: '01'
department:
- _id: MaMo
doi: 10.1109/TWC.2021.3125626
external_id:
  arxiv:
  - '2012.13378'
  isi:
  - '000809406400028'
intvolume: '21'
isi: 1
issue: '6'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2012.13378
month: '06'
oa: 1
oa_version: Preprint
page: 3909-3920
project:
- _id: 059876FA-7A3F-11EA-A408-12923DDC885E
  name: Prix Lopez-Loretta 2019 - Marco Mondelli
publication: IEEE Transactions on Wireless Communications
publication_identifier:
  eissn:
  - 1558-2248
  issn:
  - 1536-1276
publication_status: published
publisher: Institute of Electrical and Electronics Engineers
quality_controlled: '1'
related_material:
  record:
  - id: '10053'
    relation: earlier_version
    status: public
scopus_import: '1'
status: public
title: Parallelism versus latency in simplified successive-cancellation decoding of
  polar codes
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 21
year: '2022'
...
---
_id: '12011'
abstract:
- lang: eng
  text: We characterize the capacity for the discrete-time arbitrarily varying channel
    with discrete inputs, outputs, and states when (a) the encoder and decoder do
    not share common randomness, (b) the input and state are subject to cost constraints,
    (c) the transition matrix of the channel is deterministic given the state, and
    (d) at each time step the adversary can only observe the current and past channel
    inputs when choosing the state at that time. The achievable strategy involves
    stochastic encoding together with list decoding and a disambiguation step. The
    converse uses a two-phase "babble-and-push" strategy where the adversary chooses
    the state randomly in the first phase, list decodes the output, and then chooses
    state inputs to symmetrize the channel in the second phase. These results generalize
    prior work on specific channel models (additive, erasure) to general discrete
    alphabets and models.
acknowledgement: The work of ADS and ML was supported in part by the US National Science
  Foundation under awards CCF-1909468 and CCF-1909451.
article_processing_charge: No
arxiv: 1
author:
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
- first_name: Sidharth
  full_name: Jaggi, Sidharth
  last_name: Jaggi
- first_name: Michael
  full_name: Langberg, Michael
  last_name: Langberg
- first_name: Anand D.
  full_name: Sarwate, Anand D.
  last_name: Sarwate
citation:
  ama: 'Zhang Y, Jaggi S, Langberg M, Sarwate AD. The capacity of causal adversarial
    channels. In: <i>2022 IEEE International Symposium on Information Theory</i>.
    Vol 2022. IEEE; 2022:2523-2528. doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834709">10.1109/ISIT50566.2022.9834709</a>'
  apa: 'Zhang, Y., Jaggi, S., Langberg, M., &#38; Sarwate, A. D. (2022). The capacity
    of causal adversarial channels. In <i>2022 IEEE International Symposium on Information
    Theory</i> (Vol. 2022, pp. 2523–2528). Espoo, Finland: IEEE. <a href="https://doi.org/10.1109/ISIT50566.2022.9834709">https://doi.org/10.1109/ISIT50566.2022.9834709</a>'
  chicago: Zhang, Yihan, Sidharth Jaggi, Michael Langberg, and Anand D. Sarwate. “The
    Capacity of Causal Adversarial Channels.” In <i>2022 IEEE International Symposium
    on Information Theory</i>, 2022:2523–28. IEEE, 2022. <a href="https://doi.org/10.1109/ISIT50566.2022.9834709">https://doi.org/10.1109/ISIT50566.2022.9834709</a>.
  ieee: Y. Zhang, S. Jaggi, M. Langberg, and A. D. Sarwate, “The capacity of causal
    adversarial channels,” in <i>2022 IEEE International Symposium on Information
    Theory</i>, Espoo, Finland, 2022, vol. 2022, pp. 2523–2528.
  ista: 'Zhang Y, Jaggi S, Langberg M, Sarwate AD. 2022. The capacity of causal adversarial
    channels. 2022 IEEE International Symposium on Information Theory. ISIT: International
    Symposium on Information Theory vol. 2022, 2523–2528.'
  mla: Zhang, Yihan, et al. “The Capacity of Causal Adversarial Channels.” <i>2022
    IEEE International Symposium on Information Theory</i>, vol. 2022, IEEE, 2022,
    pp. 2523–28, doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834709">10.1109/ISIT50566.2022.9834709</a>.
  short: Y. Zhang, S. Jaggi, M. Langberg, A.D. Sarwate, in:, 2022 IEEE International
    Symposium on Information Theory, IEEE, 2022, pp. 2523–2528.
conference:
  end_date: 2022-07-01
  location: Espoo, Finland
  name: 'ISIT: International Symposium on Information Theory'
  start_date: 2022-06-26
date_created: 2022-09-04T22:02:03Z
date_published: 2022-08-03T00:00:00Z
date_updated: 2022-09-05T09:09:15Z
day: '03'
department:
- _id: MaMo
doi: 10.1109/ISIT50566.2022.9834709
external_id:
  arxiv:
  - '2205.06708'
intvolume: '2022'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2205.06708
month: '08'
oa: 1
oa_version: Preprint
page: 2523-2528
publication: 2022 IEEE International Symposium on Information Theory
publication_identifier:
  isbn:
  - '9781665421591'
  issn:
  - 2157-8095
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: The capacity of causal adversarial channels
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2022
year: '2022'
...
---
_id: '12012'
abstract:
- lang: eng
  text: This paper is eligible for the Jack Keil Wolf ISIT Student Paper Award. We
    generalize a previous framework for designing utility-optimal differentially private
    (DP) mechanisms via graphs, where datasets are vertices in the graph and edges
    represent dataset neighborhood. The boundary set contains datasets where an individual’s
    response changes the binary-valued query compared to its neighbors. Previous work
    was limited to the homogeneous case where the privacy parameter ε across all datasets
    was the same and the mechanism at boundary datasets was identical. In our work,
    the mechanism can take different distributions at the boundary and the privacy
    parameter ε is a function of neighboring datasets, which recovers an earlier definition
    of personalized DP as a special case. The problem is how to extend the mechanism,
    which is only defined at the boundary set, to other datasets in the graph in a
    computationally efficient and utility-optimal manner. Using the concept of the
    strongest induced DP condition, we solve this problem efficiently in polynomial
    time (in the size of the graph).
article_processing_charge: No
arxiv: 1
author:
- first_name: Sahel
  full_name: Torkamani, Sahel
  id: 0503e7f8-2d05-11ed-aa17-db0640c720fc
  last_name: Torkamani
- first_name: Javad B.
  full_name: Ebrahimi, Javad B.
  last_name: Ebrahimi
- first_name: Parastoo
  full_name: Sadeghi, Parastoo
  last_name: Sadeghi
- first_name: Rafael G.L.
  full_name: D'Oliveira, Rafael G.L.
  last_name: D'Oliveira
- first_name: Muriel
  full_name: Médard, Muriel
  last_name: Médard
citation:
  ama: 'Torkamani S, Ebrahimi JB, Sadeghi P, D’Oliveira RGL, Médard M. Heterogeneous
    differential privacy via graphs. In: <i>2022 IEEE International Symposium on Information
    Theory</i>. Vol 2022. IEEE; 2022:1623-1628. doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834711">10.1109/ISIT50566.2022.9834711</a>'
  apa: 'Torkamani, S., Ebrahimi, J. B., Sadeghi, P., D’Oliveira, R. G. L., &#38; Médard,
    M. (2022). Heterogeneous differential privacy via graphs. In <i>2022 IEEE International
    Symposium on Information Theory</i> (Vol. 2022, pp. 1623–1628). Espoo, Finland:
    IEEE. <a href="https://doi.org/10.1109/ISIT50566.2022.9834711">https://doi.org/10.1109/ISIT50566.2022.9834711</a>'
  chicago: Torkamani, Sahel, Javad B. Ebrahimi, Parastoo Sadeghi, Rafael G.L. D’Oliveira,
    and Muriel Médard. “Heterogeneous Differential Privacy via Graphs.” In <i>2022
    IEEE International Symposium on Information Theory</i>, 2022:1623–28. IEEE, 2022.
    <a href="https://doi.org/10.1109/ISIT50566.2022.9834711">https://doi.org/10.1109/ISIT50566.2022.9834711</a>.
  ieee: S. Torkamani, J. B. Ebrahimi, P. Sadeghi, R. G. L. D’Oliveira, and M. Médard,
    “Heterogeneous differential privacy via graphs,” in <i>2022 IEEE International
    Symposium on Information Theory</i>, Espoo, Finland, 2022, vol. 2022, pp. 1623–1628.
  ista: 'Torkamani S, Ebrahimi JB, Sadeghi P, D’Oliveira RGL, Médard M. 2022. Heterogeneous
    differential privacy via graphs. 2022 IEEE International Symposium on Information
    Theory. ISIT: International Symposium on Information Theory vol. 2022, 1623–1628.'
  mla: Torkamani, Sahel, et al. “Heterogeneous Differential Privacy via Graphs.” <i>2022
    IEEE International Symposium on Information Theory</i>, vol. 2022, IEEE, 2022,
    pp. 1623–28, doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834711">10.1109/ISIT50566.2022.9834711</a>.
  short: S. Torkamani, J.B. Ebrahimi, P. Sadeghi, R.G.L. D’Oliveira, M. Médard, in:,
    2022 IEEE International Symposium on Information Theory, IEEE, 2022, pp. 1623–1628.
conference:
  end_date: 2022-07-01
  location: Espoo, Finland
  name: 'ISIT: International Symposium on Information Theory'
  start_date: 2022-06-26
date_created: 2022-09-04T22:02:04Z
date_published: 2022-08-03T00:00:00Z
date_updated: 2022-09-05T10:28:35Z
day: '03'
department:
- _id: MaMo
doi: 10.1109/ISIT50566.2022.9834711
external_id:
  arxiv:
  - '2203.15429'
intvolume: '2022'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.48550/arXiv.2203.15429
month: '08'
oa: 1
oa_version: Preprint
page: 1623-1628
publication: 2022 IEEE International Symposium on Information Theory
publication_identifier:
  isbn:
  - '9781665421591'
  issn:
  - 2157-8095
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: Heterogeneous differential privacy via graphs
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2022
year: '2022'
...
---
_id: '12013'
abstract:
- lang: eng
  text: We consider the problem of communication over adversarial channels with feedback.
    Two parties comprising sender Alice and receiver Bob seek to communicate reliably.
    An adversary James observes Alice's channel transmission entirely and chooses,
    maliciously, its additive channel input or jamming state thereby corrupting Bob's
    observation. Bob can communicate over a one-way reverse link with Alice; we assume
    that transmissions over this feedback link cannot be corrupted by James. Our goal
    in this work is to study the optimum throughput or capacity over such channels
    with feedback. We first present results for the quadratically-constrained additive
    channel where communication is known to be impossible when the noise-to-signal
    (power) ratio (NSR) is at least 1. We present a novel achievability scheme to
    establish that positive rate communication is possible even when the NSR is as
    high as 8/9. We also present new converse upper bounds on the capacity of this
    channel under potentially stochastic encoders and decoders. We also study feedback
    communication over the more widely studied q-ary alphabet channel under additive
    noise. For the q-ary channel, where q > 2, it is well known that capacity is
    positive under full feedback if and only if the adversary can corrupt strictly
    less than half the transmitted symbols. We generalize this result and show that
    the same threshold holds for positive rate communication when the noiseless feedback
    may only be partial; our scheme employs a stochastic decoder. We extend this characterization,
    albeit partially, to fully deterministic schemes under partial noiseless feedback.
    We also present new converse upper bounds for q-ary channels under full feedback,
    where the encoder and/or decoder may privately randomize. Our converse results
    bring to the fore an interesting alternate expression for the well-known converse
    bound for the q-ary channel under full feedback which, when specialized to the
    binary channel, also equals its known capacity.
article_processing_charge: No
author:
- first_name: Pranav
  full_name: Joshi, Pranav
  last_name: Joshi
- first_name: Amritakshya
  full_name: Purkayastha, Amritakshya
  last_name: Purkayastha
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
- first_name: Amitalok J.
  full_name: Budkuley, Amitalok J.
  last_name: Budkuley
- first_name: Sidharth
  full_name: Jaggi, Sidharth
  last_name: Jaggi
citation:
  ama: 'Joshi P, Purkayastha A, Zhang Y, Budkuley AJ, Jaggi S. On the capacity of
    additive AVCs with feedback. In: <i>2022 IEEE International Symposium on Information
    Theory</i>. Vol 2022. IEEE; 2022:504-509. doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834850">10.1109/ISIT50566.2022.9834850</a>'
  apa: 'Joshi, P., Purkayastha, A., Zhang, Y., Budkuley, A. J., &#38; Jaggi, S. (2022).
    On the capacity of additive AVCs with feedback. In <i>2022 IEEE International
    Symposium on Information Theory</i> (Vol. 2022, pp. 504–509). Espoo, Finland:
    IEEE. <a href="https://doi.org/10.1109/ISIT50566.2022.9834850">https://doi.org/10.1109/ISIT50566.2022.9834850</a>'
  chicago: Joshi, Pranav, Amritakshya Purkayastha, Yihan Zhang, Amitalok J. Budkuley,
    and Sidharth Jaggi. “On the Capacity of Additive AVCs with Feedback.” In <i>2022
    IEEE International Symposium on Information Theory</i>, 2022:504–9. IEEE, 2022.
    <a href="https://doi.org/10.1109/ISIT50566.2022.9834850">https://doi.org/10.1109/ISIT50566.2022.9834850</a>.
  ieee: P. Joshi, A. Purkayastha, Y. Zhang, A. J. Budkuley, and S. Jaggi, “On the
    capacity of additive AVCs with feedback,” in <i>2022 IEEE International Symposium
    on Information Theory</i>, Espoo, Finland, 2022, vol. 2022, pp. 504–509.
  ista: 'Joshi P, Purkayastha A, Zhang Y, Budkuley AJ, Jaggi S. 2022. On the capacity
    of additive AVCs with feedback. 2022 IEEE International Symposium on Information
    Theory. ISIT: International Symposium on Information Theory vol. 2022, 504–509.'
  mla: Joshi, Pranav, et al. “On the Capacity of Additive AVCs with Feedback.” <i>2022
    IEEE International Symposium on Information Theory</i>, vol. 2022, IEEE, 2022,
    pp. 504–09, doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834850">10.1109/ISIT50566.2022.9834850</a>.
  short: P. Joshi, A. Purkayastha, Y. Zhang, A.J. Budkuley, S. Jaggi, in:, 2022 IEEE
    International Symposium on Information Theory, IEEE, 2022, pp. 504–509.
conference:
  end_date: 2022-07-01
  location: Espoo, Finland
  name: 'ISIT: International Symposium on Information Theory'
  start_date: 2022-06-26
date_created: 2022-09-04T22:02:04Z
date_published: 2022-08-03T00:00:00Z
date_updated: 2022-09-05T10:23:35Z
day: '03'
department:
- _id: MaMo
doi: 10.1109/ISIT50566.2022.9834850
intvolume: '2022'
language:
- iso: eng
month: '08'
oa_version: None
page: 504-509
publication: 2022 IEEE International Symposium on Information Theory
publication_identifier:
  isbn:
  - '9781665421591'
  issn:
  - 2157-8095
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: On the capacity of additive AVCs with feedback
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2022
year: '2022'
...
---
_id: '12014'
abstract:
- lang: eng
  text: We study the problem of high-dimensional multiple packing in Euclidean space.
    Multiple packing is a natural generalization of sphere packing and is defined
    as follows. Let N > 0 and L ∈ Z_{≥2}. A multiple packing is a set C of points
    in R^n such that any point in R^n lies in the intersection of at most L – 1 balls
    of radius √(nN) around points in C. Given a well-known connection with coding
    theory, multiple
    packings can be viewed as the Euclidean analog of list-decodable codes, which
    are well-studied for finite fields. In this paper, we exactly pin down the asymptotic
    density of (expurgated) Poisson Point Processes under a stronger notion called
    average-radius multiple packing. To this end, we apply tools from high-dimensional
    geometry and large deviation theory. This gives rise to the best known lower bound
    on the largest multiple packing density. Our result corrects a mistake in a previous
    paper by Blinovsky [Bli05].
article_processing_charge: No
author:
- first_name: Yihan
  full_name: Zhang, Yihan
  id: 2ce5da42-b2ea-11eb-bba5-9f264e9d002c
  last_name: Zhang
- first_name: Shashank
  full_name: Vatedka, Shashank
  last_name: Vatedka
citation:
  ama: 'Zhang Y, Vatedka S. List-decodability of Poisson Point Processes. In: <i>2022
    IEEE International Symposium on Information Theory</i>. Vol 2022. IEEE; 2022:2559-2564.
    doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834512">10.1109/ISIT50566.2022.9834512</a>'
  apa: 'Zhang, Y., &#38; Vatedka, S. (2022). List-decodability of Poisson Point Processes.
    In <i>2022 IEEE International Symposium on Information Theory</i> (Vol. 2022,
    pp. 2559–2564). Espoo, Finland: IEEE. <a href="https://doi.org/10.1109/ISIT50566.2022.9834512">https://doi.org/10.1109/ISIT50566.2022.9834512</a>'
  chicago: Zhang, Yihan, and Shashank Vatedka. “List-Decodability of Poisson Point
    Processes.” In <i>2022 IEEE International Symposium on Information Theory</i>,
    2022:2559–64. IEEE, 2022. <a href="https://doi.org/10.1109/ISIT50566.2022.9834512">https://doi.org/10.1109/ISIT50566.2022.9834512</a>.
  ieee: Y. Zhang and S. Vatedka, “List-decodability of Poisson Point Processes,” in
    <i>2022 IEEE International Symposium on Information Theory</i>, Espoo, Finland,
    2022, vol. 2022, pp. 2559–2564.
  ista: 'Zhang Y, Vatedka S. 2022. List-decodability of Poisson Point Processes. 2022
    IEEE International Symposium on Information Theory. ISIT: International Symposium
    on Information Theory vol. 2022, 2559–2564.'
  mla: Zhang, Yihan, and Shashank Vatedka. “List-Decodability of Poisson Point Processes.”
    <i>2022 IEEE International Symposium on Information Theory</i>, vol. 2022, IEEE,
    2022, pp. 2559–64, doi:<a href="https://doi.org/10.1109/ISIT50566.2022.9834512">10.1109/ISIT50566.2022.9834512</a>.
  short: Y. Zhang, S. Vatedka, in:, 2022 IEEE International Symposium on Information
    Theory, IEEE, 2022, pp. 2559–2564.
conference:
  end_date: 2022-07-01
  location: Espoo, Finland
  name: 'ISIT: International Symposium on Information Theory'
  start_date: 2022-06-26
date_created: 2022-09-04T22:02:04Z
date_published: 2022-08-03T00:00:00Z
date_updated: 2022-09-05T09:23:04Z
day: '03'
department:
- _id: MaMo
doi: 10.1109/ISIT50566.2022.9834512
intvolume: '2022'
language:
- iso: eng
month: '08'
oa_version: None
page: 2559-2564
publication: 2022 IEEE International Symposium on Information Theory
publication_identifier:
  isbn:
  - '9781665421591'
  issn:
  - 2157-8095
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: List-decodability of Poisson Point Processes
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 2022
year: '2022'
...
