_id,title
15011,"How to prune your language model: Recovering accuracy on the ""Sparsity May Cry"" benchmark"
14458,SparseGPT: Massive language models can be accurately pruned in one-shot
14459,"Fundamental limits of two-layer autoencoders, and achieving them with gradient methods"
14460,SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge
14461,Quantized distributed training of large models with convergence guarantees
14462,Constant matters: Fine-grained error bound on differentially private continual observation
13239,Predictive learning enables neural networks to learn complex working memory tasks
13241,On the impossibility of fairness-aware learning from corrupted data
13146,Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep ReLU networks
13147,Communication-efficient distributed optimization with quantized preconditioners
11651,Capacity releasing diffusion for speed and locality
