_id,title
14459,"Fundamental limits of two-layer autoencoders, and achieving them with gradient methods"
14921,Deep neural collapse is provably optimal for the deep unconstrained features model
14922,Concentration without independence via information measures
14924,"Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence"
13315,Fundamental limits in structured principal component analysis and how to reach them
13321,Approximate message passing for multi-layer estimation in rotationally invariant models
12859,Beyond the universal law of robustness: Sharper laws for random features and neural tangent kernels
11420,Mean-field analysis of piecewise linear solutions for wide ReLU networks
10364,Parallelism versus latency in simplified successive-cancellation decoding of polar codes
12016,Polar coded computing: The role of the scaling exponent
12480,Approximate message passing with spectral initialization for generalized linear models
12537,Memorization and optimization in deep neural networks with minimum over-parameterization
12540,Estimation in rotationally invariant generalized linear models via approximate message passing
13146,Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks
10053,Parallelism versus latency in simplified successive-cancellation decoding of polar codes
10593,PCA initialization for approximate message passing in rotationally invariant models
10594,When are solutions connected in deep networks?
10595,Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks
10597,Sparse multi-decoder recursive projection aggregation for Reed-Muller codes
10598,Approximate message passing with spectral initialization for generalized linear models
