_id,title
15011,"How to prune your language model: Recovering accuracy on the ""Sparsity May Cry"" benchmark"
14364,Why extension-based proofs fail
14458,SparseGPT: Massive language models can be accurately pruned in one-shot
14459,"Fundamental limits of two-layer autoencoders, and achieving them with gradient methods"
14460,SparseProp: Efficient sparse backpropagation for faster training of neural networks at the edge
14461,Quantized distributed training of large models with convergence guarantees
14771,Bias in pruned vision models: In-depth analysis and countermeasures
14815,On biased compression for distributed learning
14995,Lincheck: A practical framework for testing concurrent data structures on JVM
13053,CrAM: A Compression-Aware Minimizer
13074,Efficiency and generalization of sparse neural networks
13179,CQS: A formally-verified framework for fair and abortable synchronization
13262,Provably-efficient and internally-deterministic parallel Union-Find
14260,Lincheck: A practical framework for testing concurrent data structures on JVM
12330,The splay-list: A distribution-adaptive concurrent skip-list
12566,Wait-free approximate agreement on graphs
12735,Fast and scalable channels in Kotlin Coroutines
12736,Unexpected scaling in path copying trees
11180,Multi-queues can be state-of-the-art priority schedulers
11181,PathCAS: An efficient middle ground for concurrent search data structures
