_id,doi,title
7802,10.1145/3350755.3400282,Graph sparsification for derandomizing massively parallel computation with low space
7803,10.1145/3382734.3405751,"Simple, deterministic, constant-round coloring in the congested clique"
8191,10.1145/3350755.3400213,Memory tagging: Minimalist synchronization for scalable concurrent data structures
8268,10.1109/TSP.2020.3010355,Compressive sensing using iterative hard thresholding with low precision data representation: Theory and applications
8383,10.1145/3382734.3405743,Brief announcement: Why extension-based proofs fail
8722,10.1145/3332466.3374528,Taming unbalanced training workloads in deep learning with partial collective operations
8724,,On the sample complexity of adversarial multi-source PAC learning
8725,10.4230/LIPIcs.DISC.2020.3,The splay-list: A distribution-adaptive concurrent skip-list
7213,10.1007/978-3-030-36687-2_3,A persistent homology perspective to the link prediction problem
7224,10.1111/ele.13450,Habitat fragmentation and species diversity in competitive communities
7272,,Getting to the root of concurrent binary search tree performance
7605,10.4230/LIPIcs.OPODIS.2019.15,In search of the fastest concurrent union-find algorithm
7635,10.1145/3332466.3374503,Testing concurrency on the JVM with Lincheck
7636,10.1145/3332466.3374542,Non-blocking interpolation search trees with doubly-logarithmic running time
15074,10.4230/LIPIcs.DISC.2020.40,Brief announcement: Efficient load-balancing through distributed token dropping
15077,10.4230/LIPIcs.ICALP.2020.7,Dynamic averaging load balancing on cycles
9415,,Inducing and exploiting activation sparsity for fast neural network inference
9631,,Scalable belief propagation via relaxed scheduling
9632,,WoodFisher: Efficient second-order approximation for neural network compression
6673,10.1145/3323165.3323201,Efficiency guarantees for parallel incremental algorithms under relaxed schedulers
