_id,title
14459,"Fundamental limits of two-layer autoencoders, and achieving them with gradient methods"
14921,Deep neural collapse is provably optimal for the deep unconstrained features model
14922,Concentration without independence via information measures
14923,Mismatched estimation of non-symmetric rank-one matrices corrupted by structured noise
14924,"Mean-field analysis for heavy ball methods: Dropout-stability, connectivity, and global convergence"
13315,Fundamental limits in structured principal component analysis and how to reach them
13321,Approximate message passing for multi-layer estimation in rotationally invariant models
12859,Beyond the universal law of robustness: Sharper laws for random features and neural tangent kernels
11420,Mean-field analysis of piecewise linear solutions for wide ReLU networks
10364,Parallelism versus latency in simplified successive-cancellation decoding of polar codes
12016,Polar coded computing: The role of the scaling exponent
12233,Decoding Reed-Muller codes with successive codeword permutations
12480,Approximate message passing with spectral initialization for generalized linear models
12536,The price of ignorance: How much does it cost to forget noise structure in low-rank matrix estimation?
12537,Memorization and optimization in deep neural networks with minimum over-parameterization
12538,Sharp asymptotics on the compression of two-layer neural networks
12540,Estimation in rotationally invariant generalized linear models via approximate message passing
13146,Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks
9002,Binary linear codes with optimal scaling: Polar codes with large kernels
9047,Sublinear latency for simplified successive cancellation decoding of polar codes
