---
_id: '7802'
abstract:
- lang: eng
  text: "The Massively Parallel Computation (MPC) model is an emerging model which
    distills core aspects of distributed and parallel computation. It has been developed
    as a tool to solve (typically graph) problems in systems where the input is distributed
    over many machines with limited space.\r\n\r\nRecent work has focused on the
    regime in which machines have sublinear (in $n$, the number of nodes in the input
    graph) space, with randomized algorithms presented for fundamental graph problems
    of Maximal Matching and Maximal Independent Set. However, there have been no prior
    corresponding deterministic algorithms.\r\n\r\nA major challenge underlying
    the sublinear space setting is that the local space of each machine might be too
    small to store all the edges incident to a single node. This poses a considerable
    obstacle compared to the classical models in which each node is assumed to know
    and have easy access to its incident edges. To overcome this barrier we introduce
    a new graph sparsification technique that deterministically computes a low-degree
    subgraph with additional desired properties. The degree of the nodes in this subgraph
    is small in the sense that the edges of each node can be now stored on a single
    machine. This low-degree subgraph also has the property that solving the problem
    on this subgraph provides \\emph{significant} global progress, i.e., progress
    towards solving the problem for the original input graph.\r\n\r\nUsing this
    framework to derandomize the well-known randomized algorithm of Luby [SICOMP'86],
    we obtain $O(\\log \\Delta+\\log\\log n)$-round deterministic MPC algorithms for
    solving the fundamental problems of Maximal Matching and Maximal Independent Set
    with $O(n^{\\epsilon})$ space on each machine for any constant $\\epsilon > 0$.
    Based on the recent work of Ghaffari et al. [FOCS'18], this additive $O(\\log\\log
    n)$ factor is conditionally essential. These algorithms can also be shown to run
    in $O(\\log \\Delta)$ rounds in the closely related model of CONGESTED CLIQUE,
    improving upon the state-of-the-art bound of $O(\\log^2 \\Delta)$ rounds by Censor-Hillel
    et al. [DISC'17]."
article_processing_charge: No
arxiv: 1
author:
- first_name: Artur
  full_name: Czumaj, Artur
  last_name: Czumaj
  orcid: 0000-0002-5646-9524
- first_name: Peter
  full_name: Davies, Peter
  id: 11396234-BB50-11E9-B24C-90FCE5697425
  last_name: Davies
  orcid: 0000-0002-5646-9524
- first_name: Merav
  full_name: Parter, Merav
  last_name: Parter
citation:
  ama: 'Czumaj A, Davies P, Parter M. Graph sparsification for derandomizing massively
    parallel computation with low space. In: <i>Proceedings of the 32nd ACM Symposium
    on Parallelism in Algorithms and Architectures (SPAA 2020)</i>. Association for
    Computing Machinery; 2020:175-185. doi:<a href="https://doi.org/10.1145/3350755.3400282">10.1145/3350755.3400282</a>'
  apa: 'Czumaj, A., Davies, P., &#38; Parter, M. (2020). Graph sparsification for
    derandomizing massively parallel computation with low space. In <i>Proceedings
    of the 32nd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA
    2020)</i> (pp. 175–185). Virtual Event, United States: Association for Computing
    Machinery. <a href="https://doi.org/10.1145/3350755.3400282">https://doi.org/10.1145/3350755.3400282</a>'
  chicago: Czumaj, Artur, Peter Davies, and Merav Parter. “Graph Sparsification for
    Derandomizing Massively Parallel Computation with Low Space.” In <i>Proceedings
    of the 32nd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA
    2020)</i>, 175–85. Association for Computing Machinery, 2020. <a href="https://doi.org/10.1145/3350755.3400282">https://doi.org/10.1145/3350755.3400282</a>.
  ieee: A. Czumaj, P. Davies, and M. Parter, “Graph sparsification for derandomizing
    massively parallel computation with low space,” in <i>Proceedings of the 32nd
    ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2020)</i>,
    Virtual Event, United States, 2020, no. 7, pp. 175–185.
  ista: 'Czumaj A, Davies P, Parter M. 2020. Graph sparsification for derandomizing
    massively parallel computation with low space. Proceedings of the 32nd ACM Symposium
    on Parallelism in Algorithms and Architectures (SPAA 2020). SPAA: Symposium on
    Parallelism in Algorithms and Architectures, 175–185.'
  mla: Czumaj, Artur, et al. “Graph Sparsification for Derandomizing Massively Parallel
    Computation with Low Space.” <i>Proceedings of the 32nd ACM Symposium on Parallelism
    in Algorithms and Architectures (SPAA 2020)</i>, no. 7, Association for Computing
    Machinery, 2020, pp. 175–85, doi:<a href="https://doi.org/10.1145/3350755.3400282">10.1145/3350755.3400282</a>.
  short: A. Czumaj, P. Davies, M. Parter, in:, Proceedings of the 32nd ACM Symposium
    on Parallelism in Algorithms and Architectures (SPAA 2020), Association for Computing
    Machinery, 2020, pp. 175–185.
conference:
  end_date: 2020-07-17
  location: Virtual Event, United States
  name: 'SPAA: Symposium on Parallelism in Algorithms and Architectures'
  start_date: 2020-07-15
date_created: 2020-05-06T08:53:34Z
date_published: 2020-07-01T00:00:00Z
date_updated: 2024-02-28T12:53:09Z
day: '01'
department:
- _id: DaAl
doi: 10.1145/3350755.3400282
ec_funded: 1
external_id:
  arxiv:
  - '1912.05390'
  isi:
  - '000744436200015'
isi: 1
issue: '7'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/1912.05390
month: '07'
oa: 1
oa_version: Preprint
page: 175-185
project:
- _id: 260C2330-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '754411'
  name: ISTplus - Postdoctoral Fellowships
publication: Proceedings of the 32nd ACM Symposium on Parallelism in Algorithms and
  Architectures (SPAA 2020)
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
related_material:
  record:
  - id: '9541'
    relation: later_version
    status: public
scopus_import: '1'
status: public
title: Graph sparsification for derandomizing massively parallel computation with
  low space
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2020'
...
---
_id: '7803'
abstract:
- lang: eng
  text: "We settle the complexity of the (Δ+1)-coloring and (Δ+1)-list coloring problems
    in the CONGESTED CLIQUE model by presenting a simple deterministic algorithm for
    both problems running in a constant number of rounds. This matches the complexity
    of the recent breakthrough randomized constant-round (Δ+1)-list coloring algorithm
    due to Chang et al. (PODC'19), and significantly improves upon the state-of-the-art
    O(logΔ)-round deterministic (Δ+1)-coloring bound of Parter (ICALP'18).\r\nA remarkable
    property of our algorithm is its simplicity. Whereas the state-of-the-art randomized
    algorithms for this problem are based on the quite involved local coloring algorithm
    of Chang et al. (STOC'18), our algorithm can be described in just a few lines.
    At a high level, it applies a careful derandomization of a recursive procedure
    which partitions the nodes and their respective palettes into separate bins. We
    show that after O(1) recursion steps, the remaining uncolored subgraph within
    each bin has linear size, and thus can be solved locally by collecting it to a
    single node. This algorithm can also be implemented in the Massively Parallel
    Computation (MPC) model provided that each machine has linear (in n, the number
    of nodes in the input graph) space.\r\nWe also show an extension of our algorithm
    to the MPC regime in which machines have sublinear space: we present the first
    deterministic (Δ+1)-list coloring algorithm designed for sublinear-space MPC,
    which runs in O(logΔ+loglogn) rounds."
article_processing_charge: No
arxiv: 1
author:
- first_name: Artur
  full_name: Czumaj, Artur
  last_name: Czumaj
  orcid: 0000-0002-5646-9524
- first_name: Peter
  full_name: Davies, Peter
  id: 11396234-BB50-11E9-B24C-90FCE5697425
  last_name: Davies
  orcid: 0000-0002-5646-9524
- first_name: Merav
  full_name: Parter, Merav
  last_name: Parter
citation:
  ama: 'Czumaj A, Davies P, Parter M. Simple, deterministic, constant-round coloring
    in the congested clique. In: <i>Proceedings of the 2020 ACM Symposium on Principles
    of Distributed Computing</i>. Association for Computing Machinery; 2020:309-318.
    doi:<a href="https://doi.org/10.1145/3382734.3405751">10.1145/3382734.3405751</a>'
  apa: 'Czumaj, A., Davies, P., &#38; Parter, M. (2020). Simple, deterministic, constant-round
    coloring in the congested clique. In <i>Proceedings of the 2020 ACM Symposium
    on Principles of Distributed Computing</i> (pp. 309–318). Salerno, Italy: Association
    for Computing Machinery. <a href="https://doi.org/10.1145/3382734.3405751">https://doi.org/10.1145/3382734.3405751</a>'
  chicago: Czumaj, Artur, Peter Davies, and Merav Parter. “Simple, Deterministic,
    Constant-Round Coloring in the Congested Clique.” In <i>Proceedings of the 2020
    ACM Symposium on Principles of Distributed Computing</i>, 309–18. Association
    for Computing Machinery, 2020. <a href="https://doi.org/10.1145/3382734.3405751">https://doi.org/10.1145/3382734.3405751</a>.
  ieee: A. Czumaj, P. Davies, and M. Parter, “Simple, deterministic, constant-round
    coloring in the congested clique,” in <i>Proceedings of the 2020 ACM Symposium
    on Principles of Distributed Computing</i>, Salerno, Italy, 2020, pp. 309–318.
  ista: 'Czumaj A, Davies P, Parter M. 2020. Simple, deterministic, constant-round
    coloring in the congested clique. Proceedings of the 2020 ACM Symposium on Principles
    of Distributed Computing. PODC: Symposium on Principles of Distributed Computing,
    309–318.'
  mla: Czumaj, Artur, et al. “Simple, Deterministic, Constant-Round Coloring in the
    Congested Clique.” <i>Proceedings of the 2020 ACM Symposium on Principles of Distributed
    Computing</i>, Association for Computing Machinery, 2020, pp. 309–18, doi:<a href="https://doi.org/10.1145/3382734.3405751">10.1145/3382734.3405751</a>.
  short: A. Czumaj, P. Davies, M. Parter, in:, Proceedings of the 2020 ACM Symposium
    on Principles of Distributed Computing, Association for Computing Machinery, 2020,
    pp. 309–318.
conference:
  end_date: 2020-08-07
  location: Salerno, Italy
  name: 'PODC: Symposium on Principles of Distributed Computing'
  start_date: 2020-08-03
date_created: 2020-05-06T09:02:14Z
date_published: 2020-07-01T00:00:00Z
date_updated: 2021-01-12T08:15:37Z
day: '01'
ddc:
- '000'
department:
- _id: DaAl
doi: 10.1145/3382734.3405751
ec_funded: 1
external_id:
  arxiv:
  - '2009.06043'
file:
- access_level: open_access
  checksum: 46fe4fc58a64eb04068115573f631d4c
  content_type: application/pdf
  creator: pdavies
  date_created: 2020-10-08T08:17:36Z
  date_updated: 2020-10-08T08:17:36Z
  file_id: '8624'
  file_name: ColoringArxiv.pdf
  file_size: 520051
  relation: main_file
  success: 1
file_date_updated: 2020-10-08T08:17:36Z
has_accepted_license: '1'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Submitted Version
page: 309-318
project:
- _id: 260C2330-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '754411'
  name: ISTplus - Postdoctoral Fellowships
publication: Proceedings of the 2020 ACM Symposium on Principles of Distributed Computing
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
status: public
title: Simple, deterministic, constant-round coloring in the congested clique
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8191'
abstract:
- lang: eng
  text: "There has been a significant amount of research on hardware and software
    support for efficient concurrent data structures; yet, the question of how to
    build correct, simple, and scalable data structures has not yet been definitively
    settled. In this paper, we revisit this question from a minimalist perspective,
    and ask: what is the smallest amount of synchronization required for correct and
    efficient concurrent search data structures, and how could this minimal synchronization
    support be provided in hardware?\r\n\r\nTo address these questions, we introduce
    memory tagging, a simple hardware mechanism which enables the programmer to \"tag\"
    a dynamic set of memory locations, at cache-line granularity, and later validate
    whether the memory has been concurrently modified, with the possibility of updating
    one of the underlying locations atomically if validation succeeds. We provide
    several examples showing that this mechanism can enable fast and arguably simple
    concurrent data structure designs, such as lists, binary search trees, balanced
    search trees, range queries, and Software Transactional Memory (STM) implementations.
    We provide an implementation of memory tags in the Graphite multi-core simulator,
    showing that the mechanism can be implemented entirely at the level of L1 cache,
    and that it can enable non-trivial speedups versus existing implementations of
    the above data structures."
article_processing_charge: No
author:
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Trevor A
  full_name: Brown, Trevor A
  id: 3569F0A0-F248-11E8-B48F-1D18A9856A87
  last_name: Brown
- first_name: Nandini
  full_name: Singhal, Nandini
  last_name: Singhal
citation:
  ama: 'Alistarh D-A, Brown TA, Singhal N. Memory tagging: Minimalist synchronization
    for scalable concurrent data structures. In: <i>Annual ACM Symposium on Parallelism
    in Algorithms and Architectures</i>. Association for Computing Machinery; 2020:37-49.
    doi:<a href="https://doi.org/10.1145/3350755.3400213">10.1145/3350755.3400213</a>'
  apa: 'Alistarh, D.-A., Brown, T. A., &#38; Singhal, N. (2020). Memory tagging: Minimalist
    synchronization for scalable concurrent data structures. In <i>Annual ACM Symposium
    on Parallelism in Algorithms and Architectures</i> (pp. 37–49). Virtual Event,
    United States: Association for Computing Machinery. <a href="https://doi.org/10.1145/3350755.3400213">https://doi.org/10.1145/3350755.3400213</a>'
  chicago: 'Alistarh, Dan-Adrian, Trevor A Brown, and Nandini Singhal. “Memory Tagging:
    Minimalist Synchronization for Scalable Concurrent Data Structures.” In <i>Annual
    ACM Symposium on Parallelism in Algorithms and Architectures</i>, 37–49. Association
    for Computing Machinery, 2020. <a href="https://doi.org/10.1145/3350755.3400213">https://doi.org/10.1145/3350755.3400213</a>.'
  ieee: 'D.-A. Alistarh, T. A. Brown, and N. Singhal, “Memory tagging: Minimalist
    synchronization for scalable concurrent data structures,” in <i>Annual ACM Symposium
    on Parallelism in Algorithms and Architectures</i>, Virtual Event, United States,
    2020, no. 7, pp. 37–49.'
  ista: 'Alistarh D-A, Brown TA, Singhal N. 2020. Memory tagging: Minimalist synchronization
    for scalable concurrent data structures. Annual ACM Symposium on Parallelism in
    Algorithms and Architectures. SPAA: Symposium on Parallelism in Algorithms and
    Architectures, 37–49.'
  mla: 'Alistarh, Dan-Adrian, et al. “Memory Tagging: Minimalist Synchronization for
    Scalable Concurrent Data Structures.” <i>Annual ACM Symposium on Parallelism in
    Algorithms and Architectures</i>, no. 7, Association for Computing Machinery,
    2020, pp. 37–49, doi:<a href="https://doi.org/10.1145/3350755.3400213">10.1145/3350755.3400213</a>.'
  short: D.-A. Alistarh, T.A. Brown, N. Singhal, in:, Annual ACM Symposium on Parallelism
    in Algorithms and Architectures, Association for Computing Machinery, 2020, pp.
    37–49.
conference:
  end_date: 2020-07-17
  location: Virtual Event, United States
  name: 'SPAA: Symposium on Parallelism in Algorithms and Architectures'
  start_date: 2020-07-15
date_created: 2020-08-02T22:00:58Z
date_published: 2020-07-06T00:00:00Z
date_updated: 2024-02-28T12:56:32Z
day: '06'
department:
- _id: DaAl
doi: 10.1145/3350755.3400213
external_id:
  isi:
  - '000744436200004'
isi: 1
issue: '7'
language:
- iso: eng
month: '07'
oa_version: None
page: 37-49
publication: Annual ACM Symposium on Parallelism in Algorithms and Architectures
publication_identifier:
  isbn:
  - '9781450369350'
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Memory tagging: Minimalist synchronization for scalable concurrent data structures'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8268'
abstract:
- lang: eng
  text: 'Modern scientific instruments produce vast amounts of data, which can overwhelm
    the processing ability of computer systems. Lossy compression of data is an intriguing
    solution, but comes with its own drawbacks, such as potential signal loss, and
    the need for careful optimization of the compression ratio. In this work, we focus
    on a setting where this problem is especially acute: compressive sensing frameworks
    for interferometry and medical imaging. We ask the following question: can the
    precision of the data representation be lowered for all inputs, with recovery
    guarantees and practical performance? Our first contribution is a theoretical analysis
    of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data,
    meaning both the measurement matrix and the observation vector, are quantized aggressively.
    We present a variant of low precision normalized IHT that, under mild conditions,
    can still provide recovery guarantees. The second contribution is the application
    of our quantization framework to radio astronomy and magnetic resonance imaging.
    We show that lowering the precision of the data can significantly accelerate image
    recovery. We evaluate our approach on telescope data and samples of brain images
    using CPU and FPGA implementations achieving up to a 9x speedup with negligible
    loss of recovery quality.'
acknowledgement: The authors would like to thank Dr. Michiel Brentjens at the Netherlands
  Institute for Radio Astronomy (ASTRON) for providing radio interferometer data and
  Dr. Josip Marjanovic and Dr. Franciszek Hennel at the Magnetic Resonance Technology
  of ETH Zurich for providing their insights on the experiments. CZ and the DS3Lab
  gratefully acknowledge the support from the Swiss Data Science Center, Alibaba,
  Google Focused Research Awards, Huawei, MeteoSwiss, Oracle Labs, Swisscom, Zurich
  Insurance, Chinese Scholarship Council, and the Department of Computer Science at
  ETH Zurich.
article_processing_charge: No
article_type: original
arxiv: 1
author:
- first_name: Nezihe Merve
  full_name: Gurel, Nezihe Merve
  last_name: Gurel
- first_name: Kaan
  full_name: Kara, Kaan
  last_name: Kara
- first_name: Alen
  full_name: Stojanov, Alen
  last_name: Stojanov
- first_name: Tyler
  full_name: Smith, Tyler
  last_name: Smith
- first_name: Thomas
  full_name: Lemmin, Thomas
  last_name: Lemmin
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Markus
  full_name: Puschel, Markus
  last_name: Puschel
- first_name: Ce
  full_name: Zhang, Ce
  last_name: Zhang
citation:
  ama: 'Gurel NM, Kara K, Stojanov A, et al. Compressive sensing using iterative hard
    thresholding with low precision data representation: Theory and applications.
    <i>IEEE Transactions on Signal Processing</i>. 2020;68:4268-4282. doi:<a href="https://doi.org/10.1109/TSP.2020.3010355">10.1109/TSP.2020.3010355</a>'
  apa: 'Gurel, N. M., Kara, K., Stojanov, A., Smith, T., Lemmin, T., Alistarh, D.-A.,
    … Zhang, C. (2020). Compressive sensing using iterative hard thresholding with
    low precision data representation: Theory and applications. <i>IEEE Transactions
    on Signal Processing</i>. IEEE. <a href="https://doi.org/10.1109/TSP.2020.3010355">https://doi.org/10.1109/TSP.2020.3010355</a>'
  chicago: 'Gurel, Nezihe Merve, Kaan Kara, Alen Stojanov, Tyler Smith, Thomas Lemmin,
    Dan-Adrian Alistarh, Markus Puschel, and Ce Zhang. “Compressive Sensing Using
    Iterative Hard Thresholding with Low Precision Data Representation: Theory and
    Applications.” <i>IEEE Transactions on Signal Processing</i>. IEEE, 2020. <a href="https://doi.org/10.1109/TSP.2020.3010355">https://doi.org/10.1109/TSP.2020.3010355</a>.'
  ieee: 'N. M. Gurel <i>et al.</i>, “Compressive sensing using iterative hard thresholding
    with low precision data representation: Theory and applications,” <i>IEEE Transactions
    on Signal Processing</i>, vol. 68. IEEE, pp. 4268–4282, 2020.'
  ista: 'Gurel NM, Kara K, Stojanov A, Smith T, Lemmin T, Alistarh D-A, Puschel M,
    Zhang C. 2020. Compressive sensing using iterative hard thresholding with low
    precision data representation: Theory and applications. IEEE Transactions on Signal
    Processing. 68, 4268–4282.'
  mla: 'Gurel, Nezihe Merve, et al. “Compressive Sensing Using Iterative Hard Thresholding
    with Low Precision Data Representation: Theory and Applications.” <i>IEEE Transactions
    on Signal Processing</i>, vol. 68, IEEE, 2020, pp. 4268–82, doi:<a href="https://doi.org/10.1109/TSP.2020.3010355">10.1109/TSP.2020.3010355</a>.'
  short: N.M. Gurel, K. Kara, A. Stojanov, T. Smith, T. Lemmin, D.-A. Alistarh, M.
    Puschel, C. Zhang, IEEE Transactions on Signal Processing 68 (2020) 4268–4282.
date_created: 2020-08-16T22:00:56Z
date_published: 2020-07-20T00:00:00Z
date_updated: 2023-08-22T08:40:08Z
day: '20'
department:
- _id: DaAl
doi: 10.1109/TSP.2020.3010355
external_id:
  arxiv:
  - '1802.04907'
  isi:
  - '000562044500001'
intvolume: '68'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/1802.04907
month: '07'
oa: 1
oa_version: Preprint
page: 4268-4282
publication: IEEE Transactions on Signal Processing
publication_identifier:
  eissn:
  - '19410476'
  issn:
  - 1053587X
publication_status: published
publisher: IEEE
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Compressive sensing using iterative hard thresholding with low precision data
  representation: Theory and applications'
type: journal_article
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
volume: 68
year: '2020'
...
---
_id: '8383'
abstract:
- lang: eng
  text: We introduce extension-based proofs, a class of impossibility proofs that
    includes valency arguments. They are modelled as an interaction between a prover
    and a protocol. Using proofs based on combinatorial topology, it has been shown
    that it is impossible to deterministically solve k-set agreement among n > k ≥
    2 processes in a wait-free manner. However, it was unknown whether proofs based
    on simpler techniques were possible. We explain why this impossibility result
    cannot be obtained by an extension-based proof and, hence, extension-based proofs
    are limited in power.
article_processing_charge: No
author:
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: James
  full_name: Aspnes, James
  last_name: Aspnes
- first_name: Faith
  full_name: Ellen, Faith
  last_name: Ellen
- first_name: Rati
  full_name: Gelashvili, Rati
  last_name: Gelashvili
- first_name: Leqi
  full_name: Zhu, Leqi
  last_name: Zhu
citation:
  ama: 'Alistarh D-A, Aspnes J, Ellen F, Gelashvili R, Zhu L. Brief Announcement:
    Why Extension-Based Proofs Fail. In: <i>Proceedings of the 39th Symposium on Principles
    of Distributed Computing</i>. Association for Computing Machinery; 2020:54-56.
    doi:<a href="https://doi.org/10.1145/3382734.3405743">10.1145/3382734.3405743</a>'
  apa: 'Alistarh, D.-A., Aspnes, J., Ellen, F., Gelashvili, R., &#38; Zhu, L. (2020).
    Brief Announcement: Why Extension-Based Proofs Fail. In <i>Proceedings of the
    39th Symposium on Principles of Distributed Computing</i> (pp. 54–56). Virtual,
    Italy: Association for Computing Machinery. <a href="https://doi.org/10.1145/3382734.3405743">https://doi.org/10.1145/3382734.3405743</a>'
  chicago: 'Alistarh, Dan-Adrian, James Aspnes, Faith Ellen, Rati Gelashvili, and
    Leqi Zhu. “Brief Announcement: Why Extension-Based Proofs Fail.” In <i>Proceedings
    of the 39th Symposium on Principles of Distributed Computing</i>, 54–56. Association
    for Computing Machinery, 2020. <a href="https://doi.org/10.1145/3382734.3405743">https://doi.org/10.1145/3382734.3405743</a>.'
  ieee: 'D.-A. Alistarh, J. Aspnes, F. Ellen, R. Gelashvili, and L. Zhu, “Brief Announcement:
    Why Extension-Based Proofs Fail,” in <i>Proceedings of the 39th Symposium on Principles
    of Distributed Computing</i>, Virtual, Italy, 2020, pp. 54–56.'
  ista: 'Alistarh D-A, Aspnes J, Ellen F, Gelashvili R, Zhu L. 2020. Brief Announcement:
    Why Extension-Based Proofs Fail. Proceedings of the 39th Symposium on Principles
    of Distributed Computing. PODC: Principles of Distributed Computing, 54–56.'
  mla: 'Alistarh, Dan-Adrian, et al. “Brief Announcement: Why Extension-Based Proofs
    Fail.” <i>Proceedings of the 39th Symposium on Principles of Distributed Computing</i>,
    Association for Computing Machinery, 2020, pp. 54–56, doi:<a href="https://doi.org/10.1145/3382734.3405743">10.1145/3382734.3405743</a>.'
  short: D.-A. Alistarh, J. Aspnes, F. Ellen, R. Gelashvili, L. Zhu, in:, Proceedings
    of the 39th Symposium on Principles of Distributed Computing, Association for
    Computing Machinery, 2020, pp. 54–56.
conference:
  end_date: 2020-08-07
  location: Virtual, Italy
  name: 'PODC: Principles of Distributed Computing'
  start_date: 2020-08-03
date_created: 2020-09-13T22:01:18Z
date_published: 2020-07-31T00:00:00Z
date_updated: 2024-02-28T12:54:19Z
day: '31'
department:
- _id: DaAl
doi: 10.1145/3382734.3405743
language:
- iso: eng
month: '07'
oa_version: None
page: 54-56
publication: Proceedings of the 39th Symposium on Principles of Distributed Computing
publication_identifier:
  isbn:
  - '9781450375825'
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'Brief Announcement: Why Extension-Based Proofs Fail'
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '8722'
abstract:
- lang: eng
  text: "Load imbalance pervasively exists in distributed deep learning training systems,
    either caused by the inherent imbalance in learned tasks or by the system itself.
    Traditional synchronous Stochastic Gradient Descent (SGD) achieves good accuracy
    for a wide variety of tasks, but relies on global synchronization to accumulate
    the gradients at every training step. In this paper, we propose eager-SGD, which
    relaxes the global synchronization for decentralized accumulation. To implement
    eager-SGD, we propose to use two partial collectives: solo and majority. With
    solo allreduce, the faster processes contribute their gradients eagerly without
    waiting for the slower processes, whereas with majority allreduce, at least half
    of the participants must contribute gradients before continuing, all without using
    a central parameter server. We theoretically prove the convergence of the algorithms
    and describe the partial collectives in detail. Experimental results on load-imbalanced
    environments (CIFAR-10, ImageNet, and UCF101 datasets) show that eager-SGD
    achieves a 1.27x speedup over the state-of-the-art synchronous SGD, without losing
    accuracy."
article_processing_charge: No
arxiv: 1
author:
- first_name: Shigang
  full_name: Li, Shigang
  last_name: Li
- first_name: Tal
  full_name: Ben-Nun, Tal
  last_name: Ben-Nun
- first_name: Salvatore
  full_name: Di Girolamo, Salvatore
  last_name: Di Girolamo
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Torsten
  full_name: Hoefler, Torsten
  last_name: Hoefler
citation:
  ama: 'Li S, Ben-Nun T, Di Girolamo S, Alistarh D-A, Hoefler T. Taming unbalanced
    training workloads in deep learning with partial collective operations. In: <i>Proceedings
    of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming</i>.
    Association for Computing Machinery; 2020:45-61. doi:<a href="https://doi.org/10.1145/3332466.3374528">10.1145/3332466.3374528</a>'
  apa: 'Li, S., Ben-Nun, T., Di Girolamo, S., Alistarh, D.-A., &#38; Hoefler, T.
    (2020). Taming unbalanced training workloads in deep learning with partial collective
    operations. In <i>Proceedings of the 25th ACM SIGPLAN Symposium on Principles
    and Practice of Parallel Programming</i> (pp. 45–61). San Diego, CA, United States:
    Association for Computing Machinery. <a href="https://doi.org/10.1145/3332466.3374528">https://doi.org/10.1145/3332466.3374528</a>'
  chicago: Li, Shigang, Tal Ben-Nun, Salvatore Di Girolamo, Dan-Adrian Alistarh,
    and Torsten Hoefler. “Taming Unbalanced Training Workloads in Deep Learning with
    Partial Collective Operations.” In <i>Proceedings of the 25th ACM SIGPLAN Symposium
    on Principles and Practice of Parallel Programming</i>, 45–61. Association for
    Computing Machinery, 2020. <a href="https://doi.org/10.1145/3332466.3374528">https://doi.org/10.1145/3332466.3374528</a>.
  ieee: S. Li, T. Ben-Nun, S. Di Girolamo, D.-A. Alistarh, and T. Hoefler, “Taming
    unbalanced training workloads in deep learning with partial collective operations,”
    in <i>Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice
    of Parallel Programming</i>, San Diego, CA, United States, 2020, pp. 45–61.
  ista: 'Li S, Ben-Nun T, Di Girolamo S, Alistarh D-A, Hoefler T. 2020. Taming unbalanced
    training workloads in deep learning with partial collective operations. Proceedings
    of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming.
    PPoPP: Symposium on Principles and Practice of Parallel Programming, 45–61.'
  mla: Li, Shigang, et al. “Taming Unbalanced Training Workloads in Deep Learning
    with Partial Collective Operations.” <i>Proceedings of the 25th ACM SIGPLAN Symposium
    on Principles and Practice of Parallel Programming</i>, Association for Computing
    Machinery, 2020, pp. 45–61, doi:<a href="https://doi.org/10.1145/3332466.3374528">10.1145/3332466.3374528</a>.
  short: S. Li, T. Ben-Nun, S. Di Girolamo, D.-A. Alistarh, T. Hoefler, in:, Proceedings
    of the 25th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming,
    Association for Computing Machinery, 2020, pp. 45–61.
conference:
  end_date: 2020-02-26
  location: San Diego, CA, United States
  name: 'PPoPP: Symposium on Principles and Practice of Parallel Programming'
  start_date: 2020-02-22
date_created: 2020-11-05T15:25:30Z
date_published: 2020-02-01T00:00:00Z
date_updated: 2023-08-22T12:13:48Z
day: '01'
department:
- _id: DaAl
doi: 10.1145/3332466.3374528
ec_funded: 1
external_id:
  arxiv:
  - '1908.04207'
  isi:
  - '000564476500004'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/1908.04207
month: '02'
oa: 1
oa_version: Preprint
page: 45-61
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: Proceedings of the 25th ACM SIGPLAN Symposium on Principles and Practice
  of Parallel Programming
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
status: public
title: Taming unbalanced training workloads in deep learning with partial collective
  operations
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2020'
...
---
_id: '8724'
abstract:
- lang: eng
  text: "We study the problem of learning from multiple untrusted data sources, a
    scenario of increasing practical relevance given the recent emergence of crowdsourcing
    and collaborative learning paradigms. Specifically, we analyze the situation in
    which a learning system obtains datasets from multiple sources, some of which
    might be biased or even adversarially perturbed. It is\r\nknown that in the single-source
    case, an adversary with the power to corrupt a fixed fraction of the training
    data can prevent PAC-learnability, that is, even in the limit of infinitely much
    training data, no learning system can approach the optimal test error. In this
    work we show that, surprisingly, the same is not true in the multi-source setting,
    where the adversary can arbitrarily\r\ncorrupt a fixed fraction of the data sources.
    Our main results are a generalization bound that provides finite-sample guarantees
    for this learning setting, as well as corresponding lower bounds. Besides establishing
    PAC-learnability our results also show that in a cooperative learning setting
    sharing data with other parties has provable benefits, even if some\r\nparticipants
    are malicious. "
acknowledged_ssus:
- _id: ScienComp
acknowledgement: Dan Alistarh is supported in part by the European Research Council
  (ERC) under the European Union’s Horizon 2020 research and innovation programme
  (grant agreement No 805223 ScaleML). This research was supported by the Scientific
  Service Units (SSU) of IST Austria through resources provided by Scientific Computing
  (SciComp).
article_processing_charge: No
arxiv: 1
author:
- first_name: Nikola H
  full_name: Konstantinov, Nikola H
  id: 4B9D76E4-F248-11E8-B48F-1D18A9856A87
  last_name: Konstantinov
- first_name: Elias
  full_name: Frantar, Elias
  id: 09a8f98d-ec99-11ea-ae11-c063a7b7fe5f
  last_name: Frantar
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Christoph
  full_name: Lampert, Christoph
  id: 40C20FD2-F248-11E8-B48F-1D18A9856A87
  last_name: Lampert
  orcid: 0000-0001-8622-7887
citation:
  ama: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. On the sample complexity
    of adversarial multi-source PAC learning. In: <i>Proceedings of the 37th International
    Conference on Machine Learning</i>. Vol 119. ML Research Press; 2020:5416-5425.'
  apa: 'Konstantinov, N. H., Frantar, E., Alistarh, D.-A., &#38; Lampert, C. (2020).
    On the sample complexity of adversarial multi-source PAC learning. In <i>Proceedings
    of the 37th International Conference on Machine Learning</i> (Vol. 119, pp. 5416–5425).
    Online: ML Research Press.'
  chicago: Konstantinov, Nikola H, Elias Frantar, Dan-Adrian Alistarh, and Christoph
    Lampert. “On the Sample Complexity of Adversarial Multi-Source PAC Learning.”
    In <i>Proceedings of the 37th International Conference on Machine Learning</i>,
    119:5416–25. ML Research Press, 2020.
  ieee: N. H. Konstantinov, E. Frantar, D.-A. Alistarh, and C. Lampert, “On the sample
    complexity of adversarial multi-source PAC learning,” in <i>Proceedings of the
    37th International Conference on Machine Learning</i>, Online, 2020, vol. 119,
    pp. 5416–5425.
  ista: 'Konstantinov NH, Frantar E, Alistarh D-A, Lampert C. 2020. On the sample
    complexity of adversarial multi-source PAC learning. Proceedings of the 37th International
    Conference on Machine Learning. ICML: International Conference on Machine Learning
    vol. 119, 5416–5425.'
  mla: Konstantinov, Nikola H., et al. “On the Sample Complexity of Adversarial Multi-Source
    PAC Learning.” <i>Proceedings of the 37th International Conference on Machine
    Learning</i>, vol. 119, ML Research Press, 2020, pp. 5416–25.
  short: N.H. Konstantinov, E. Frantar, D.-A. Alistarh, C. Lampert, in:, Proceedings
    of the 37th International Conference on Machine Learning, ML Research Press, 2020,
    pp. 5416–5425.
conference:
  end_date: 2020-07-18
  location: Online
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2020-07-12
date_created: 2020-11-05T15:25:58Z
date_published: 2020-07-12T00:00:00Z
date_updated: 2023-09-07T13:42:08Z
day: '12'
ddc:
- '000'
department:
- _id: DaAl
- _id: ChLa
ec_funded: 1
external_id:
  arxiv:
  - '2002.10384'
file:
- access_level: open_access
  checksum: cc755d0054bc4b2be778ea7aa7884d2f
  content_type: application/pdf
  creator: dernst
  date_created: 2021-02-15T09:00:01Z
  date_updated: 2021-02-15T09:00:01Z
  file_id: '9120'
  file_name: 2020_PMLR_Konstantinov.pdf
  file_size: 281286
  relation: main_file
  success: 1
file_date_updated: 2021-02-15T09:00:01Z
has_accepted_license: '1'
intvolume: '119'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
page: 5416-5425
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: Proceedings of the 37th International Conference on Machine Learning
publication_identifier:
  issn:
  - 2640-3498
publication_status: published
publisher: ML Research Press
quality_controlled: '1'
related_material:
  link:
  - relation: supplementary_material
    url: http://proceedings.mlr.press/v119/konstantinov20a/konstantinov20a-supp.pdf
  record:
  - id: '10799'
    relation: dissertation_contains
    status: public
scopus_import: '1'
status: public
title: On the sample complexity of adversarial multi-source PAC learning
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 119
year: '2020'
...
---
_id: '8725'
abstract:
- lang: eng
  text: "The design and implementation of efficient concurrent data structures have\r\nseen
    significant attention. However, most of this work has focused on\r\nconcurrent
    data structures providing good \\emph{worst-case} guarantees. In real\r\nworkloads,
    objects are often accessed at different rates, since access\r\ndistributions may
    be non-uniform. Efficient distribution-adaptive data\r\nstructures are known in
    the sequential case, e.g. the splay-trees; however,\r\nthey often are hard to
    translate efficiently in the concurrent case.\r\n  In this paper, we investigate
    distribution-adaptive concurrent data\r\nstructures and propose a new design called
    the splay-list. At a high level, the\r\nsplay-list is similar to a standard skip-list,
    with the key distinction that\r\nthe height of each element adapts dynamically
    to its access rate: popular\r\nelements ``move up,'' whereas rarely-accessed elements
    decrease in height. We\r\nshow that the splay-list provides order-optimal amortized
    complexity bounds for\r\na subset of operations while being amenable to efficient
    concurrent\r\nimplementation. Experimental results show that the splay-list can
    leverage\r\ndistribution-adaptivity to improve on the performance of classic concurrent\r\ndesigns,
    and can outperform the only previously-known distribution-adaptive\r\ndesign in
    certain settings."
acknowledgement: "Vitaly Aksenov: Government of Russian Federation (Grant 08-08).\r\nDan
  Alistarh: ERC Starting Grant 805223 ScaleML."
article_processing_charge: No
arxiv: 1
author:
- first_name: Vitaly
  full_name: Aksenov, Vitaly
  last_name: Aksenov
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Alexandra
  full_name: Drozdova, Alexandra
  last_name: Drozdova
- first_name: Amirkeivan
  full_name: Mohtashami, Amirkeivan
  last_name: Mohtashami
citation:
  ama: 'Aksenov V, Alistarh D-A, Drozdova A, Mohtashami A. The splay-list: A distribution-adaptive
    concurrent skip-list. In: <i>34th International Symposium on Distributed Computing</i>.
    Vol 179. LIPIcs. Schloss Dagstuhl - Leibniz-Zentrum für Informatik; 2020:3:1-3:18.
    doi:<a href="https://doi.org/10.4230/LIPIcs.DISC.2020.3">10.4230/LIPIcs.DISC.2020.3</a>'
  apa: 'Aksenov, V., Alistarh, D.-A., Drozdova, A., &#38; Mohtashami, A. (2020). The
    splay-list: A distribution-adaptive concurrent skip-list. In <i>34th International
    Symposium on Distributed Computing</i> (Vol. 179, p. 3:1-3:18). Freiburg, Germany:
    Schloss Dagstuhl - Leibniz-Zentrum für Informatik. <a href="https://doi.org/10.4230/LIPIcs.DISC.2020.3">https://doi.org/10.4230/LIPIcs.DISC.2020.3</a>'
  chicago: 'Aksenov, Vitaly, Dan-Adrian Alistarh, Alexandra Drozdova, and Amirkeivan
    Mohtashami. “The Splay-List: A Distribution-Adaptive Concurrent Skip-List.” In
    <i>34th International Symposium on Distributed Computing</i>, 179:3:1-3:18. LIPIcs.
    Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. <a href="https://doi.org/10.4230/LIPIcs.DISC.2020.3">https://doi.org/10.4230/LIPIcs.DISC.2020.3</a>.'
  ieee: 'V. Aksenov, D.-A. Alistarh, A. Drozdova, and A. Mohtashami, “The splay-list:
    A distribution-adaptive concurrent skip-list,” in <i>34th International Symposium
    on Distributed Computing</i>, Freiburg, Germany, 2020, vol. 179, p. 3:1-3:18.'
  ista: 'Aksenov V, Alistarh D-A, Drozdova A, Mohtashami A. 2020. The splay-list:
    A distribution-adaptive concurrent skip-list. 34th International Symposium on
    Distributed Computing. DISC: Symposium on Distributed Computing, LIPIcs vol. 179,
    3:1-3:18.'
  mla: 'Aksenov, Vitaly, et al. “The Splay-List: A Distribution-Adaptive Concurrent
    Skip-List.” <i>34th International Symposium on Distributed Computing</i>, vol.
    179, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020, p. 3:1-3:18, doi:<a
    href="https://doi.org/10.4230/LIPIcs.DISC.2020.3">10.4230/LIPIcs.DISC.2020.3</a>.'
  short: V. Aksenov, D.-A. Alistarh, A. Drozdova, A. Mohtashami, in:, 34th International
    Symposium on Distributed Computing, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
    2020, p. 3:1-3:18.
conference:
  end_date: 2020-10-16
  location: Freiburg, Germany
  name: 'DISC: Symposium on Distributed Computing'
  start_date: 2020-10-12
date_created: 2020-11-05T15:26:17Z
date_published: 2020-08-03T00:00:00Z
date_updated: 2023-02-23T13:41:40Z
day: '03'
ddc:
- '000'
department:
- _id: DaAl
doi: 10.4230/LIPIcs.DISC.2020.3
ec_funded: 1
external_id:
  arxiv:
  - '2008.01009'
file:
- access_level: open_access
  checksum: a626a9c47df52b6f6d97edd910dae4ba
  content_type: application/pdf
  creator: dernst
  date_created: 2021-03-11T12:33:35Z
  date_updated: 2021-03-11T12:33:35Z
  file_id: '9237'
  file_name: 2020_LIPIcs_Aksenov.pdf
  file_size: 740358
  relation: main_file
  success: 1
file_date_updated: 2021-03-11T12:33:35Z
has_accepted_license: '1'
intvolume: '179'
language:
- iso: eng
license: https://creativecommons.org/licenses/by/3.0/
month: '08'
oa: 1
oa_version: Published Version
page: 3:1-3:18
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: 34th International Symposium on Distributed Computing
publication_identifier:
  isbn:
  - '9783959771689'
  issn:
  - 1868-8969
publication_status: published
publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
quality_controlled: '1'
series_title: LIPIcs
status: public
title: 'The splay-list: A distribution-adaptive concurrent skip-list'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/3.0/legalcode
  name: Creative Commons Attribution 3.0 Unported (CC BY 3.0)
  short: CC BY (3.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 179
year: '2020'
...
---
_id: '7213'
abstract:
- lang: eng
  text: Persistent homology is a powerful tool in Topological Data Analysis (TDA)
    to capture the topological properties of data succinctly at different spatial
    resolutions. For graphical data, the shape and structure of the neighborhood
    of individual data items (nodes) are an essential means of characterizing their
    properties. We propose the use of persistent homology methods to capture structural
    and topological properties of graphs and use it to address the problem of link
    prediction. We achieve encouraging results on nine different real-world datasets
    that attest to the potential of persistent homology-based methods for network
    analysis.
alternative_title:
- SCI
article_processing_charge: No
author:
- first_name: Sumit
  full_name: Bhatia, Sumit
  last_name: Bhatia
- first_name: Bapi
  full_name: Chatterjee, Bapi
  id: 3C41A08A-F248-11E8-B48F-1D18A9856A87
  last_name: Chatterjee
  orcid: 0000-0002-2742-4028
- first_name: Deepak
  full_name: Nathani, Deepak
  last_name: Nathani
- first_name: Manohar
  full_name: Kaul, Manohar
  last_name: Kaul
citation:
  ama: 'Bhatia S, Chatterjee B, Nathani D, Kaul M. A persistent homology perspective
    to the link prediction problem. In: <i>Complex Networks and Their Applications
    VIII</i>. Vol 881. Springer Nature; 2020:27-39. doi:<a href="https://doi.org/10.1007/978-3-030-36687-2_3">10.1007/978-3-030-36687-2_3</a>'
  apa: 'Bhatia, S., Chatterjee, B., Nathani, D., &#38; Kaul, M. (2020). A persistent
    homology perspective to the link prediction problem. In <i>Complex Networks and
    their applications VIII</i> (Vol. 881, pp. 27–39). Lisbon, Portugal: Springer
    Nature. <a href="https://doi.org/10.1007/978-3-030-36687-2_3">https://doi.org/10.1007/978-3-030-36687-2_3</a>'
  chicago: Bhatia, Sumit, Bapi Chatterjee, Deepak Nathani, and Manohar Kaul. “A Persistent
    Homology Perspective to the Link Prediction Problem.” In <i>Complex Networks and
    Their Applications VIII</i>, 881:27–39. Springer Nature, 2020. <a href="https://doi.org/10.1007/978-3-030-36687-2_3">https://doi.org/10.1007/978-3-030-36687-2_3</a>.
  ieee: S. Bhatia, B. Chatterjee, D. Nathani, and M. Kaul, “A persistent homology
    perspective to the link prediction problem,” in <i>Complex Networks and their
    applications VIII</i>, Lisbon, Portugal, 2020, vol. 881, pp. 27–39.
  ista: 'Bhatia S, Chatterjee B, Nathani D, Kaul M. 2020. A persistent homology perspective
    to the link prediction problem. Complex Networks and their applications VIII.
    COMPLEX: International Conference on Complex Networks and their Applications,
    SCI, vol. 881, 27–39.'
  mla: Bhatia, Sumit, et al. “A Persistent Homology Perspective to the Link Prediction
    Problem.” <i>Complex Networks and Their Applications VIII</i>, vol. 881, Springer
    Nature, 2020, pp. 27–39, doi:<a href="https://doi.org/10.1007/978-3-030-36687-2_3">10.1007/978-3-030-36687-2_3</a>.
  short: S. Bhatia, B. Chatterjee, D. Nathani, M. Kaul, in:, Complex Networks and
    Their Applications VIII, Springer Nature, 2020, pp. 27–39.
conference:
  end_date: 2019-12-12
  location: Lisbon, Portugal
  name: 'COMPLEX: International Conference on Complex Networks and their Applications'
  start_date: 2019-12-10
date_created: 2019-12-29T23:00:45Z
date_published: 2020-01-01T00:00:00Z
date_updated: 2024-02-22T13:16:06Z
day: '01'
ddc:
- '004'
department:
- _id: DaAl
doi: 10.1007/978-3-030-36687-2_3
ec_funded: 1
external_id:
  isi:
  - '000843927300003'
file:
- access_level: open_access
  checksum: 8951f094c8c7dae9ff8db885199bc296
  content_type: application/pdf
  creator: bchatter
  date_created: 2020-10-08T08:16:48Z
  date_updated: 2020-10-08T08:16:48Z
  file_id: '8625'
  file_name: main.pdf
  file_size: 310598
  relation: main_file
  success: 1
file_date_updated: 2020-10-08T08:16:48Z
has_accepted_license: '1'
intvolume: '881'
isi: 1
language:
- iso: eng
month: '01'
oa: 1
oa_version: Submitted Version
page: 27-39
project:
- _id: 260C2330-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '754411'
  name: ISTplus - Postdoctoral Fellowships
publication: Complex Networks and their applications VIII
publication_identifier:
  eissn:
  - '1860-9503'
  isbn:
  - '9783030366865'
  issn:
  - 1860-949X
publication_status: published
publisher: Springer Nature
quality_controlled: '1'
scopus_import: '1'
status: public
title: A persistent homology perspective to the link prediction problem
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 881
year: '2020'
...
---
_id: '7224'
abstract:
- lang: eng
  text: 'Habitat loss is one of the key drivers of the ongoing decline of biodiversity.
    However, ecologists still argue about how fragmentation of habitat (independent
    of habitat loss) affects species richness. The recently proposed habitat amount
    hypothesis posits that species richness only depends on the total amount of habitat
    in a local landscape. In contrast, empirical studies report contrasting patterns:
    some find positive and others negative effects of fragmentation per se on species
    richness. To explain this apparent disparity, we devise a stochastic, spatially
    explicit model of competitive species communities in heterogeneous habitats. The
    model shows that habitat loss and fragmentation have complex effects on species
    diversity in competitive communities. When the total amount of habitat is large,
    fragmentation per se tends to increase species diversity, but if the total amount
    of habitat is small, the situation is reversed: fragmentation per se decreases
    species diversity.'
article_processing_charge: Yes (via OA deal)
article_type: original
author:
- first_name: Joel
  full_name: Rybicki, Joel
  id: 334EFD2E-F248-11E8-B48F-1D18A9856A87
  last_name: Rybicki
  orcid: 0000-0002-6432-6646
- first_name: Nerea
  full_name: Abrego, Nerea
  last_name: Abrego
- first_name: Otso
  full_name: Ovaskainen, Otso
  last_name: Ovaskainen
citation:
  ama: Rybicki J, Abrego N, Ovaskainen O. Habitat fragmentation and species diversity
    in competitive communities. <i>Ecology Letters</i>. 2020;23(3):506-517. doi:<a
    href="https://doi.org/10.1111/ele.13450">10.1111/ele.13450</a>
  apa: Rybicki, J., Abrego, N., &#38; Ovaskainen, O. (2020). Habitat fragmentation
    and species diversity in competitive communities. <i>Ecology Letters</i>. Wiley.
    <a href="https://doi.org/10.1111/ele.13450">https://doi.org/10.1111/ele.13450</a>
  chicago: Rybicki, Joel, Nerea Abrego, and Otso Ovaskainen. “Habitat Fragmentation
    and Species Diversity in Competitive Communities.” <i>Ecology Letters</i>. Wiley,
    2020. <a href="https://doi.org/10.1111/ele.13450">https://doi.org/10.1111/ele.13450</a>.
  ieee: J. Rybicki, N. Abrego, and O. Ovaskainen, “Habitat fragmentation and species
    diversity in competitive communities,” <i>Ecology Letters</i>, vol. 23, no. 3.
    Wiley, pp. 506–517, 2020.
  ista: Rybicki J, Abrego N, Ovaskainen O. 2020. Habitat fragmentation and species
    diversity in competitive communities. Ecology Letters. 23(3), 506–517.
  mla: Rybicki, Joel, et al. “Habitat Fragmentation and Species Diversity in Competitive
    Communities.” <i>Ecology Letters</i>, vol. 23, no. 3, Wiley, 2020, pp. 506–17,
    doi:<a href="https://doi.org/10.1111/ele.13450">10.1111/ele.13450</a>.
  short: J. Rybicki, N. Abrego, O. Ovaskainen, Ecology Letters 23 (2020) 506–517.
date_created: 2020-01-04T11:04:30Z
date_published: 2020-03-01T00:00:00Z
date_updated: 2023-09-05T16:04:30Z
day: '01'
ddc:
- '000'
department:
- _id: DaAl
doi: 10.1111/ele.13450
ec_funded: 1
external_id:
  isi:
  - '000503625200001'
file:
- access_level: open_access
  checksum: 372f67f2744f4b6049e9778364766c22
  content_type: application/pdf
  creator: dernst
  date_created: 2020-02-14T12:02:50Z
  date_updated: 2020-07-14T12:47:54Z
  file_id: '7486'
  file_name: 2020_EcologyLetters_Rybicki.pdf
  file_size: 3005474
  relation: main_file
file_date_updated: 2020-07-14T12:47:54Z
has_accepted_license: '1'
intvolume: '23'
isi: 1
issue: '3'
language:
- iso: eng
month: '03'
oa: 1
oa_version: Published Version
page: 506-517
project:
- _id: 260C2330-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '754411'
  name: ISTplus - Postdoctoral Fellowships
- _id: 26A5D39A-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '840605'
  name: Coordination in constrained and natural distributed systems
publication: Ecology Letters
publication_identifier:
  eissn:
  - 1461-0248
  issn:
  - 1461-023X
publication_status: published
publisher: Wiley
quality_controlled: '1'
scopus_import: '1'
status: public
title: Habitat fragmentation and species diversity in competitive communities
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/4.0/legalcode
  name: Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
  short: CC BY (4.0)
type: journal_article
user_id: c635000d-4b10-11ee-a964-aac5a93f6ac1
volume: 23
year: '2020'
...
---
_id: '7272'
abstract:
- lang: eng
  text: "Many systems rely on optimistic concurrent search trees for multi-core scalability.
    In principle, optimistic trees have a simple performance story: searches are read-only
    and so run in parallel, with writes to shared memory occurring only when modifying
    the data structure. However, this paper shows that in practice, obtaining the
    full performance benefits of optimistic search trees is not so simple.\r\n\r\nWe
    focus on optimistic binary search trees (BSTs) and perform a detailed performance
    analysis of 10 state-of-the-art BSTs on large scale x86-64 hardware, using both
    microbenchmarks and an in-memory database system. We find and explain significant
    unexpected performance differences between BSTs with similar tree structure and
    search implementations, which we trace to subtle performance-degrading interactions
    of BSTs with systems software and hardware subsystems. We further derive a prescriptive
    approach to avoid this performance degradation, as well as algorithmic insights
    on optimistic BST design. Our work underlines the gap between the theory and practice
    of multi-core performance, and calls for further research to help bridge this
    gap."
article_processing_charge: No
author:
- first_name: Maya
  full_name: Arbel-Raviv, Maya
  last_name: Arbel-Raviv
- first_name: Trevor A
  full_name: Brown, Trevor A
  id: 3569F0A0-F248-11E8-B48F-1D18A9856A87
  last_name: Brown
- first_name: Adam
  full_name: Morrison, Adam
  last_name: Morrison
citation:
  ama: 'Arbel-Raviv M, Brown TA, Morrison A. Getting to the root of concurrent binary
    search tree performance. In: <i>Proceedings of the 2018 USENIX Annual Technical
    Conference</i>. USENIX Association; 2020:295-306.'
  apa: 'Arbel-Raviv, M., Brown, T. A., &#38; Morrison, A. (2020). Getting to the root
    of concurrent binary search tree performance. In <i>Proceedings of the 2018 USENIX
    Annual Technical Conference</i> (pp. 295–306). Boston, MA, United States: USENIX
    Association.'
  chicago: Arbel-Raviv, Maya, Trevor A Brown, and Adam Morrison. “Getting to the Root
    of Concurrent Binary Search Tree Performance.” In <i>Proceedings of the 2018 USENIX
    Annual Technical Conference</i>, 295–306. USENIX Association, 2020.
  ieee: M. Arbel-Raviv, T. A. Brown, and A. Morrison, “Getting to the root of concurrent
    binary search tree performance,” in <i>Proceedings of the 2018 USENIX Annual Technical
    Conference</i>, Boston, MA, United States, 2020, pp. 295–306.
  ista: 'Arbel-Raviv M, Brown TA, Morrison A. 2020. Getting to the root of concurrent
    binary search tree performance. Proceedings of the 2018 USENIX Annual Technical
    Conference. USENIX: Annual Technical Conference, 295–306.'
  mla: Arbel-Raviv, Maya, et al. “Getting to the Root of Concurrent Binary Search
    Tree Performance.” <i>Proceedings of the 2018 USENIX Annual Technical Conference</i>,
    USENIX Association, 2020, pp. 295–306.
  short: M. Arbel-Raviv, T.A. Brown, A. Morrison, in:, Proceedings of the 2018 USENIX
    Annual Technical Conference, USENIX Association, 2020, pp. 295–306.
conference:
  end_date: 2018-07-13
  location: Boston, MA, United States
  name: 'USENIX: Annual Technical Conference'
  start_date: 2018-07-11
date_created: 2020-01-14T07:27:08Z
date_published: 2020-01-01T00:00:00Z
date_updated: 2021-01-11T15:25:48Z
day: '01'
ddc:
- '000'
department:
- _id: DaAl
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://www.usenix.org/system/files/conference/atc18/atc18-arbel-raviv.pdf
month: '01'
oa: 1
oa_version: Published Version
page: 295-306
project:
- _id: 26450934-B435-11E9-9278-68D0E5697425
  name: NSERC Postdoctoral fellowship
publication: Proceedings of the 2018 USENIX Annual Technical Conference
publication_identifier:
  isbn:
  - '9781939133021'
publication_status: published
publisher: USENIX Association
quality_controlled: '1'
scopus_import: '1'
status: public
title: Getting to the root of concurrent binary search tree performance
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7605'
abstract:
- lang: eng
  text: 'Union-Find (or Disjoint-Set Union) is one of the fundamental problems in
    computer science; it has been well-studied from both theoretical and practical
    perspectives in the sequential case. Recently, there has been mounting interest
    in analyzing this problem in the concurrent scenario, and several asymptotically-efficient
    algorithms have been proposed. Yet, to date, there is very little known about
    the practical performance of concurrent Union-Find. This work addresses this gap.
    We evaluate and analyze the performance of several concurrent Union-Find algorithms
    and optimization strategies across a wide range of platforms (Intel, AMD, and
    ARM) and workloads (social, random, and road networks, as well as integrations
    into more complex algorithms). We first observe that, due to the limited computational
    cost, the number of induced cache misses is the critical determining factor for
    the performance of existing algorithms. We introduce new techniques to reduce
    this cost by storing node priorities implicitly and by using plain reads and writes
    in a way that does not affect the correctness of the algorithms. Finally, we show
    that Union-Find implementations are an interesting application for Transactional
    Memory (TM): one of the fastest algorithm variants we discovered is a sequential
    one that uses coarse-grained locking with the lock elision optimization to reduce
    synchronization cost and increase scalability. '
alternative_title:
- LIPIcs
article_processing_charge: No
arxiv: 1
author:
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Alexander
  full_name: Fedorov, Alexander
  last_name: Fedorov
- first_name: Nikita
  full_name: Koval, Nikita
  id: 2F4DB10C-F248-11E8-B48F-1D18A9856A87
  last_name: Koval
citation:
  ama: 'Alistarh D-A, Fedorov A, Koval N. In search of the fastest concurrent union-find
    algorithm. In: <i>23rd International Conference on Principles of Distributed Systems</i>.
    Vol 153. Schloss Dagstuhl - Leibniz-Zentrum für Informatik; 2020:15:1-15:16. doi:<a
    href="https://doi.org/10.4230/LIPIcs.OPODIS.2019.15">10.4230/LIPIcs.OPODIS.2019.15</a>'
  apa: 'Alistarh, D.-A., Fedorov, A., &#38; Koval, N. (2020). In search of the fastest
    concurrent union-find algorithm. In <i>23rd International Conference on Principles
    of Distributed Systems</i> (Vol. 153, p. 15:1-15:16). Neuchâtel, Switzerland:
    Schloss Dagstuhl - Leibniz-Zentrum für Informatik. <a href="https://doi.org/10.4230/LIPIcs.OPODIS.2019.15">https://doi.org/10.4230/LIPIcs.OPODIS.2019.15</a>'
  chicago: Alistarh, Dan-Adrian, Alexander Fedorov, and Nikita Koval. “In Search of
    the Fastest Concurrent Union-Find Algorithm.” In <i>23rd International Conference
    on Principles of Distributed Systems</i>, 153:15:1-15:16. Schloss Dagstuhl - Leibniz-Zentrum
    für Informatik, 2020. <a href="https://doi.org/10.4230/LIPIcs.OPODIS.2019.15">https://doi.org/10.4230/LIPIcs.OPODIS.2019.15</a>.
  ieee: D.-A. Alistarh, A. Fedorov, and N. Koval, “In search of the fastest concurrent
    union-find algorithm,” in <i>23rd International Conference on Principles of Distributed
    Systems</i>, Neuchâtel, Switzerland, 2020, vol. 153, p. 15:1-15:16.
  ista: 'Alistarh D-A, Fedorov A, Koval N. 2020. In search of the fastest concurrent
    union-find algorithm. 23rd International Conference on Principles of Distributed
    Systems. OPODIS: International Conference on Principles of Distributed Systems,
    LIPIcs, vol. 153, 15:1-15:16.'
  mla: Alistarh, Dan-Adrian, et al. “In Search of the Fastest Concurrent Union-Find
    Algorithm.” <i>23rd International Conference on Principles of Distributed Systems</i>,
    vol. 153, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020, p. 15:1-15:16,
    doi:<a href="https://doi.org/10.4230/LIPIcs.OPODIS.2019.15">10.4230/LIPIcs.OPODIS.2019.15</a>.
  short: D.-A. Alistarh, A. Fedorov, N. Koval, in:, 23rd International Conference
    on Principles of Distributed Systems, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
    2020, p. 15:1-15:16.
conference:
  end_date: 2019-12-19
  location: Neuchâtel, Switzerland
  name: 'OPODIS: International Conference on Principles of Distributed Systems'
  start_date: 2019-12-17
date_created: 2020-03-22T23:00:46Z
date_published: 2020-02-01T00:00:00Z
date_updated: 2023-02-23T13:12:12Z
day: '01'
ddc:
- '000'
department:
- _id: DaAl
doi: 10.4230/LIPIcs.OPODIS.2019.15
external_id:
  arxiv:
  - '1911.06347'
file:
- access_level: open_access
  checksum: d66f07ecb609d9f02433e39f80a447e9
  content_type: application/pdf
  creator: dernst
  date_created: 2020-03-23T09:22:48Z
  date_updated: 2020-07-14T12:48:01Z
  file_id: '7609'
  file_name: 2019_LIPIcs_Alistarh.pdf
  file_size: 13074131
  relation: main_file
file_date_updated: 2020-07-14T12:48:01Z
has_accepted_license: '1'
intvolume: '153'
language:
- iso: eng
month: '02'
oa: 1
oa_version: Published Version
page: 15:1-15:16
publication: 23rd International Conference on Principles of Distributed Systems
publication_identifier:
  isbn:
  - '9783959771337'
  issn:
  - '1868-8969'
publication_status: published
publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
quality_controlled: '1'
scopus_import: '1'
status: public
title: In search of the fastest concurrent union-find algorithm
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/3.0/legalcode
  name: Creative Commons Attribution 3.0 Unported (CC BY 3.0)
  short: CC BY (3.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 153
year: '2020'
...
---
_id: '7635'
abstract:
- lang: eng
  text: Concurrent programming can be notoriously complex and error-prone. Programming
    bugs can arise from a variety of sources, such as operation re-reordering, or
    incomplete understanding of the memory model. A variety of formal and model checking
    methods have been developed to address this fundamental difficulty. While technically
    interesting, existing academic methods are still hard to apply to the large codebases
    typical of industrial deployments, which limits their practical impact.
article_processing_charge: No
author:
- first_name: Nikita
  full_name: Koval, Nikita
  id: 2F4DB10C-F248-11E8-B48F-1D18A9856A87
  last_name: Koval
- first_name: Mariia
  full_name: Sokolova, Mariia
  id: 26217AE4-77FF-11EA-8101-AD24D49E41F4
  last_name: Sokolova
- first_name: Alexander
  full_name: Fedorov, Alexander
  last_name: Fedorov
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Dmitry
  full_name: Tsitelov, Dmitry
  last_name: Tsitelov
citation:
  ama: 'Koval N, Sokolova M, Fedorov A, Alistarh D-A, Tsitelov D. Testing concurrency
    on the JVM with Lincheck. In: <i>Proceedings of the ACM SIGPLAN Symposium on Principles
    and Practice of Parallel Programming, PPOPP</i>. Association for Computing Machinery;
    2020:423-424. doi:<a href="https://doi.org/10.1145/3332466.3374503">10.1145/3332466.3374503</a>'
  apa: 'Koval, N., Sokolova, M., Fedorov, A., Alistarh, D.-A., &#38; Tsitelov, D.
    (2020). Testing concurrency on the JVM with Lincheck. In <i>Proceedings of the
    ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP</i>
    (pp. 423–424). San Diego, CA, United States: Association for Computing Machinery.
    <a href="https://doi.org/10.1145/3332466.3374503">https://doi.org/10.1145/3332466.3374503</a>'
  chicago: Koval, Nikita, Mariia Sokolova, Alexander Fedorov, Dan-Adrian Alistarh,
    and Dmitry Tsitelov. “Testing Concurrency on the JVM with Lincheck.” In <i>Proceedings
    of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming,
    PPOPP</i>, 423–24. Association for Computing Machinery, 2020. <a href="https://doi.org/10.1145/3332466.3374503">https://doi.org/10.1145/3332466.3374503</a>.
  ieee: N. Koval, M. Sokolova, A. Fedorov, D.-A. Alistarh, and D. Tsitelov, “Testing
    concurrency on the JVM with Lincheck,” in <i>Proceedings of the ACM SIGPLAN Symposium
    on Principles and Practice of Parallel Programming, PPOPP</i>, San Diego, CA,
    United States, 2020, pp. 423–424.
  ista: 'Koval N, Sokolova M, Fedorov A, Alistarh D-A, Tsitelov D. 2020. Testing concurrency
    on the JVM with Lincheck. Proceedings of the ACM SIGPLAN Symposium on Principles
    and Practice of Parallel Programming, PPOPP. PPOPP: Principles and Practice of
    Parallel Programming, 423–424.'
  mla: Koval, Nikita, et al. “Testing Concurrency on the JVM with Lincheck.” <i>Proceedings
    of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming,
    PPOPP</i>, Association for Computing Machinery, 2020, pp. 423–24, doi:<a href="https://doi.org/10.1145/3332466.3374503">10.1145/3332466.3374503</a>.
  short: N. Koval, M. Sokolova, A. Fedorov, D.-A. Alistarh, D. Tsitelov, in:, Proceedings
    of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming,
    PPOPP, Association for Computing Machinery, 2020, pp. 423–424.
conference:
  end_date: 2020-02-26
  location: San Diego, CA, United States
  name: 'PPOPP: Principles and Practice of Parallel Programming'
  start_date: 2020-02-22
date_created: 2020-04-05T22:00:48Z
date_published: 2020-02-19T00:00:00Z
date_updated: 2024-02-28T12:53:46Z
day: '19'
department:
- _id: DaAl
doi: 10.1145/3332466.3374503
language:
- iso: eng
month: '02'
oa_version: None
page: 423-424
publication: Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of
  Parallel Programming, PPOPP
publication_identifier:
  isbn:
  - '9781450368186'
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
scopus_import: '1'
status: public
title: Testing concurrency on the JVM with Lincheck
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '7636'
abstract:
- lang: eng
  text: "Balanced search trees typically use key comparisons to guide their operations,
    and achieve logarithmic running time. By relying on numerical properties of the
    keys, interpolation search achieves lower search complexity and better performance.
    Although interpolation-based data structures were investigated in the past, their
    non-blocking concurrent variants have received very little attention so far.\r\nIn
    this paper, we propose the first non-blocking implementation of the classic interpolation
    search tree (IST) data structure. For arbitrary key distributions, the data structure
    ensures worst-case O(log n + p) amortized time for search, insertion and deletion
    traversals. When the input key distributions are smooth, lookups run in expected
    O(log log n + p) time, and insertion and deletion run in expected amortized O(log
    log n + p) time, where p is a bound on the number of threads. To improve the scalability
    of concurrent insertion and deletion, we propose a novel parallel rebuilding technique,
    which should be of independent interest.\r\nWe evaluate whether the theoretical
    improvements translate to practice by implementing the concurrent interpolation
    search tree, and benchmarking it on uniform and nonuniform key distributions,
    for dataset sizes in the millions to billions of keys. Relative to the state-of-the-art
    concurrent data structures, the concurrent interpolation search tree achieves
    performance improvements of up to 15% under high update rates, and of up to 50%
    under moderate update rates. Further, ISTs exhibit up to 2X fewer cache misses,
    and consume 1.2 -- 2.6X less memory compared to the next best alternative on typical
    dataset sizes. We find that the results are surprisingly robust to distributional
    skew, which suggests that our data structure can be a promising alternative to
    classic concurrent search structures."
acknowledgement: "This project has received funding from the European Research Council
  (ERC) under the European Union Horizon 2020 research and innovation program, grant
  agreement No 805223, ERC Starting Grant ScaleML. We acknowledge the support of the
  Natural Sciences and\r\nEngineering Research Council of Canada (NSERC). "
article_processing_charge: No
author:
- first_name: Trevor A
  full_name: Brown, Trevor A
  id: 3569F0A0-F248-11E8-B48F-1D18A9856A87
  last_name: Brown
- first_name: Aleksandar
  full_name: Prokopec, Aleksandar
  last_name: Prokopec
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
citation:
  ama: 'Brown TA, Prokopec A, Alistarh D-A. Non-blocking interpolation search trees
    with doubly-logarithmic running time. In: <i>Proceedings of the ACM SIGPLAN Symposium
    on Principles and Practice of Parallel Programming</i>. Association for Computing
    Machinery; 2020:276-291. doi:<a href="https://doi.org/10.1145/3332466.3374542">10.1145/3332466.3374542</a>'
  apa: 'Brown, T. A., Prokopec, A., &#38; Alistarh, D.-A. (2020). Non-blocking interpolation
    search trees with doubly-logarithmic running time. In <i>Proceedings of the ACM
    SIGPLAN Symposium on Principles and Practice of Parallel Programming</i> (pp.
    276–291). San Diego, CA, United States: Association for Computing Machinery. <a
    href="https://doi.org/10.1145/3332466.3374542">https://doi.org/10.1145/3332466.3374542</a>'
  chicago: Brown, Trevor A, Aleksandar Prokopec, and Dan-Adrian Alistarh. “Non-Blocking
    Interpolation Search Trees with Doubly-Logarithmic Running Time.” In <i>Proceedings
    of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming</i>,
    276–91. Association for Computing Machinery, 2020. <a href="https://doi.org/10.1145/3332466.3374542">https://doi.org/10.1145/3332466.3374542</a>.
  ieee: T. A. Brown, A. Prokopec, and D.-A. Alistarh, “Non-blocking interpolation
    search trees with doubly-logarithmic running time,” in <i>Proceedings of the ACM
    SIGPLAN Symposium on Principles and Practice of Parallel Programming</i>, San
    Diego, CA, United States, 2020, pp. 276–291.
  ista: 'Brown TA, Prokopec A, Alistarh D-A. 2020. Non-blocking interpolation search
    trees with doubly-logarithmic running time. Proceedings of the ACM SIGPLAN Symposium
    on Principles and Practice of Parallel Programming. PPOPP: Principles and Practice
    of Parallel Programming, 276–291.'
  mla: Brown, Trevor A., et al. “Non-Blocking Interpolation Search Trees with Doubly-Logarithmic
    Running Time.” <i>Proceedings of the ACM SIGPLAN Symposium on Principles and Practice
    of Parallel Programming</i>, Association for Computing Machinery, 2020, pp. 276–91,
    doi:<a href="https://doi.org/10.1145/3332466.3374542">10.1145/3332466.3374542</a>.
  short: T.A. Brown, A. Prokopec, D.-A. Alistarh, in:, Proceedings of the ACM SIGPLAN
    Symposium on Principles and Practice of Parallel Programming, Association for
    Computing Machinery, 2020, pp. 276–291.
conference:
  end_date: 2020-02-26
  location: San Diego, CA, United States
  name: 'PPOPP: Principles and Practice of Parallel Programming'
  start_date: 2020-02-22
date_created: 2020-04-05T22:00:49Z
date_published: 2020-02-19T00:00:00Z
date_updated: 2024-02-28T12:55:14Z
day: '19'
department:
- _id: DaAl
doi: 10.1145/3332466.3374542
ec_funded: 1
external_id:
  isi:
  - '000564476500020'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://doi.org/10.1145/3332466.3374542
month: '02'
oa: 1
oa_version: Published Version
page: 276-291
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of
  Parallel Programming
publication_identifier:
  isbn:
  - '9781450368186'
publication_status: published
publisher: Association for Computing Machinery
quality_controlled: '1'
scopus_import: '1'
status: public
title: Non-blocking interpolation search trees with doubly-logarithmic running time
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
year: '2020'
...
---
_id: '15074'
abstract:
- lang: eng
  text: We introduce a new graph problem, the token dropping game, and we show how
    to solve it efficiently in a distributed setting. We use the token dropping game
    as a tool to design an efficient distributed algorithm for the stable orientation
    problem, which is a special case of the more general locally optimal semi-matching
    problem. The prior work by Czygrinow et al. (DISC 2012) finds a locally optimal
    semi-matching in O(Δ⁵) rounds in graphs of maximum degree Δ, which directly implies
    an algorithm with the same runtime for stable orientations. We improve the runtime
    to O(Δ⁴) for stable orientations and prove a lower bound of Ω(Δ) rounds.
alternative_title:
- LIPIcs
article_number: '40'
article_processing_charge: No
arxiv: 1
author:
- first_name: Sebastian
  full_name: Brandt, Sebastian
  last_name: Brandt
- first_name: Barbara
  full_name: Keller, Barbara
  last_name: Keller
- first_name: Joel
  full_name: Rybicki, Joel
  id: 334EFD2E-F248-11E8-B48F-1D18A9856A87
  last_name: Rybicki
  orcid: 0000-0002-6432-6646
- first_name: Jukka
  full_name: Suomela, Jukka
  last_name: Suomela
- first_name: Jara
  full_name: Uitto, Jara
  last_name: Uitto
citation:
  ama: 'Brandt S, Keller B, Rybicki J, Suomela J, Uitto J. Brief announcement: Efficient
    load-balancing through distributed token dropping. In: <i>34th International Symposium
    on Distributed Computing</i>. Vol 179. Schloss Dagstuhl - Leibniz-Zentrum für
    Informatik; 2020. doi:<a href="https://doi.org/10.4230/LIPIcs.DISC.2020.40">10.4230/LIPIcs.DISC.2020.40</a>'
  apa: 'Brandt, S., Keller, B., Rybicki, J., Suomela, J., &#38; Uitto, J. (2020).
    Brief announcement: Efficient load-balancing through distributed token dropping.
    In <i>34th International Symposium on Distributed Computing</i> (Vol. 179). Virtual:
    Schloss Dagstuhl - Leibniz-Zentrum für Informatik. <a href="https://doi.org/10.4230/LIPIcs.DISC.2020.40">https://doi.org/10.4230/LIPIcs.DISC.2020.40</a>'
  chicago: 'Brandt, Sebastian, Barbara Keller, Joel Rybicki, Jukka Suomela, and Jara
    Uitto. “Brief Announcement: Efficient Load-Balancing through Distributed Token
    Dropping.” In <i>34th International Symposium on Distributed Computing</i>, Vol.
    179. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020. <a href="https://doi.org/10.4230/LIPIcs.DISC.2020.40">https://doi.org/10.4230/LIPIcs.DISC.2020.40</a>.'
  ieee: 'S. Brandt, B. Keller, J. Rybicki, J. Suomela, and J. Uitto, “Brief announcement:
    Efficient load-balancing through distributed token dropping,” in <i>34th International
    Symposium on Distributed Computing</i>, Virtual, 2020, vol. 179.'
  ista: 'Brandt S, Keller B, Rybicki J, Suomela J, Uitto J. 2020. Brief announcement:
    Efficient load-balancing through distributed token dropping. 34th International
    Symposium on Distributed Computing. DISC: Symposium on Distributed Computing,
    LIPIcs, vol. 179, 40.'
  mla: 'Brandt, Sebastian, et al. “Brief Announcement: Efficient Load-Balancing through
    Distributed Token Dropping.” <i>34th International Symposium on Distributed Computing</i>,
    vol. 179, 40, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020, doi:<a
    href="https://doi.org/10.4230/LIPIcs.DISC.2020.40">10.4230/LIPIcs.DISC.2020.40</a>.'
  short: S. Brandt, B. Keller, J. Rybicki, J. Suomela, J. Uitto, in:, 34th International
    Symposium on Distributed Computing, Schloss Dagstuhl - Leibniz-Zentrum für Informatik,
    2020.
conference:
  end_date: 2020-10-16
  location: Virtual
  name: 'DISC: Symposium on Distributed Computing'
  start_date: 2020-10-12
date_created: 2024-03-05T07:09:12Z
date_published: 2020-10-07T00:00:00Z
date_updated: 2024-03-05T07:13:13Z
day: '07'
ddc:
- '000'
department:
- _id: DaAl
doi: 10.4230/LIPIcs.DISC.2020.40
external_id:
  arxiv:
  - '2005.07761'
file:
- access_level: open_access
  checksum: 23e2d9321aef53092dc1e24a8ab82d72
  content_type: application/pdf
  creator: dernst
  date_created: 2024-03-05T07:08:27Z
  date_updated: 2024-03-05T07:08:27Z
  file_id: '15075'
  file_name: 2020_LIPIcs_Brandt.pdf
  file_size: 303529
  relation: main_file
  success: 1
file_date_updated: 2024-03-05T07:08:27Z
has_accepted_license: '1'
intvolume: '179'
language:
- iso: eng
month: '10'
oa: 1
oa_version: Published Version
publication: 34th International Symposium on Distributed Computing
publication_status: published
publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
quality_controlled: '1'
related_material:
  record:
  - id: '9678'
    relation: later_version
    status: public
scopus_import: '1'
status: public
title: 'Brief announcement: Efficient load-balancing through distributed token dropping'
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/3.0/legalcode
  name: Creative Commons Attribution 3.0 Unported (CC BY 3.0)
  short: CC BY (3.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 179
year: '2020'
...
---
_id: '15077'
abstract:
- lang: eng
  text: "We consider the following dynamic load-balancing process: given an underlying
    graph G with n nodes, in each step t≥ 0, one unit of load is created, and placed
    at a randomly chosen graph node. In the same step, the chosen node picks a random
    neighbor, and the two nodes balance their loads by averaging them. We are interested
    in the expected gap between the minimum and maximum loads at nodes as the process
    progresses, and its dependence on n and on the graph structure. Variants of the
    above graphical balanced allocation process have been studied previously by Peres,
    Talwar, and Wieder [Peres et al., 2015], and by Sauerwald and Sun [Sauerwald and
    Sun, 2015]. These authors left as open the question of characterizing the gap
    in the case of cycle graphs in the dynamic case, where weights are created during
    the algorithm’s execution. For this case, the only known upper bound is of \U0001D4AA(n
    log n), following from a majorization argument due to [Peres et al., 2015], which
    analyzes a related graphical allocation process. In this paper, we provide an
    upper bound of \U0001D4AA (√n log n) on the expected gap of the above process
    for cycles of length n. We introduce a new potential analysis technique, which
    enables us to bound the difference in load between k-hop neighbors on the cycle,
    for any k ≤ n/2. We complement this with a \"gap covering\" argument, which bounds
    the maximum value of the gap by bounding its value across all possible subsets
    of a certain structure, and recursively bounding the gaps within each subset.
    We provide analytical and experimental evidence that our upper bound on the gap
    is tight up to a logarithmic factor."
acknowledgement: "The authors sincerely thank Thomas Sauerwald and George Giakkoupis
  for insightful discussions, and Mohsen Ghaffari, Yuval Peres, and Udi Wieder for
  feedback on earlier\r\nversions of this draft. We also thank the ICALP anonymous
  reviewers for their very useful comments.\r\nFunding: European Research Council
  funding award PR1042ERC01"
alternative_title:
- LIPIcs
article_number: '7'
article_processing_charge: No
arxiv: 1
author:
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Giorgi
  full_name: Nadiradze, Giorgi
  id: 3279A00C-F248-11E8-B48F-1D18A9856A87
  last_name: Nadiradze
  orcid: 0000-0001-5634-0731
- first_name: Amirmojtaba
  full_name: Sabour, Amirmojtaba
  id: bcc145fd-e77f-11ea-ae8b-80d661dbff67
  last_name: Sabour
citation:
  ama: 'Alistarh D-A, Nadiradze G, Sabour A. Dynamic averaging load balancing on cycles.
    In: <i>47th International Colloquium on Automata, Languages, and Programming</i>.
    Vol 168. Schloss Dagstuhl - Leibniz-Zentrum für Informatik; 2020. doi:<a href="https://doi.org/10.4230/LIPIcs.ICALP.2020.7">10.4230/LIPIcs.ICALP.2020.7</a>'
  apa: 'Alistarh, D.-A., Nadiradze, G., &#38; Sabour, A. (2020). Dynamic averaging
    load balancing on cycles. In <i>47th International Colloquium on Automata, Languages,
    and Programming</i> (Vol. 168). Saarbrücken, Germany, Virtual: Schloss Dagstuhl
    - Leibniz-Zentrum für Informatik. <a href="https://doi.org/10.4230/LIPIcs.ICALP.2020.7">https://doi.org/10.4230/LIPIcs.ICALP.2020.7</a>'
  chicago: Alistarh, Dan-Adrian, Giorgi Nadiradze, and Amirmojtaba Sabour. “Dynamic
    Averaging Load Balancing on Cycles.” In <i>47th International Colloquium on Automata,
    Languages, and Programming</i>, Vol. 168. Schloss Dagstuhl - Leibniz-Zentrum für
    Informatik, 2020. <a href="https://doi.org/10.4230/LIPIcs.ICALP.2020.7">https://doi.org/10.4230/LIPIcs.ICALP.2020.7</a>.
  ieee: D.-A. Alistarh, G. Nadiradze, and A. Sabour, “Dynamic averaging load balancing
    on cycles,” in <i>47th International Colloquium on Automata, Languages, and Programming</i>,
    Saarbrücken, Germany, Virtual, 2020, vol. 168.
  ista: 'Alistarh D-A, Nadiradze G, Sabour A. 2020. Dynamic averaging load balancing
    on cycles. 47th International Colloquium on Automata, Languages, and Programming.
    ICALP: International Colloquium on Automata, Languages, and Programming, LIPIcs,
    vol. 168, 7.'
  mla: Alistarh, Dan-Adrian, et al. “Dynamic Averaging Load Balancing on Cycles.”
    <i>47th International Colloquium on Automata, Languages, and Programming</i>,
    vol. 168, 7, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2020, doi:<a href="https://doi.org/10.4230/LIPIcs.ICALP.2020.7">10.4230/LIPIcs.ICALP.2020.7</a>.
  short: D.-A. Alistarh, G. Nadiradze, A. Sabour, in:, 47th International Colloquium
    on Automata, Languages, and Programming, Schloss Dagstuhl - Leibniz-Zentrum für
    Informatik, 2020.
conference:
  end_date: 2020-07-11
  location: Saarbrücken, Germany, Virtual
  name: 'ICALP: International Colloquium on Automata, Languages, and Programming'
  start_date: 2020-07-08
date_created: 2024-03-05T07:25:37Z
date_published: 2020-06-29T00:00:00Z
date_updated: 2024-03-05T07:35:53Z
day: '29'
ddc:
- '000'
department:
- _id: DaAl
doi: 10.4230/LIPIcs.ICALP.2020.7
ec_funded: 1
external_id:
  arxiv:
  - '2003.09297'
file:
- access_level: open_access
  checksum: e5eb16199f4ccfd77a321977eb3f026f
  content_type: application/pdf
  creator: dernst
  date_created: 2024-03-05T07:25:15Z
  date_updated: 2024-03-05T07:25:15Z
  file_id: '15078'
  file_name: 2020_LIPIcs_Alistarh.pdf
  file_size: 782987
  relation: main_file
  success: 1
file_date_updated: 2024-03-05T07:25:15Z
has_accepted_license: '1'
intvolume: '168'
language:
- iso: eng
month: '06'
oa: 1
oa_version: Published Version
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: 47th International Colloquium on Automata, Languages, and Programming
publication_status: published
publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik
quality_controlled: '1'
related_material:
  record:
  - id: '8286'
    relation: later_version
    status: public
scopus_import: '1'
status: public
title: Dynamic averaging load balancing on cycles
tmp:
  image: /images/cc_by.png
  legal_code_url: https://creativecommons.org/licenses/by/3.0/legalcode
  name: Creative Commons Attribution 3.0 Unported (CC BY 3.0)
  short: CC BY (3.0)
type: conference
user_id: 2DF688A6-F248-11E8-B48F-1D18A9856A87
volume: 168
year: '2020'
...
---
_id: '9415'
abstract:
- lang: eng
  text: 'Optimizing convolutional neural networks for fast inference has recently
    become an extremely active area of research. One of the go-to solutions in this
    context is weight pruning, which aims to reduce computational and memory footprint
    by removing large subsets of the connections in a neural network. Surprisingly,
    much less attention has been given to exploiting sparsity in the activation maps,
    which tend to be naturally sparse in many settings thanks to the structure of
    rectified linear (ReLU) activation functions. In this paper, we present an in-depth
    analysis of methods for maximizing the sparsity of the activations in a trained
    neural network, and show that, when coupled with an efficient sparse-input convolution
    algorithm, we can leverage this sparsity for significant performance gains. To
    induce highly sparse activation maps without accuracy loss, we introduce a new
    regularization technique, coupled with a new threshold-based sparsification method
    based on a parameterized activation function called Forced-Activation-Threshold
    Rectified Linear Unit (FATReLU). We examine the impact of our methods on popular
    image classification models, showing that most architectures can adapt to significantly
    sparser activation maps without any accuracy loss. Our second contribution is
    showing that these compression gains can be translated into inference speedups:
    we provide a new algorithm to enable fast convolution operations over networks
    with sparse activations, and show that it can enable significant speedups for
    end-to-end inference on a range of popular models on the large-scale ImageNet
    image classification task on modern Intel CPUs, with little or no retraining cost.'
article_processing_charge: No
author:
- first_name: Mark
  full_name: Kurtz, Mark
  last_name: Kurtz
- first_name: Justin
  full_name: Kopinsky, Justin
  last_name: Kopinsky
- first_name: Rati
  full_name: Gelashvili, Rati
  last_name: Gelashvili
- first_name: Alexander
  full_name: Matveev, Alexander
  last_name: Matveev
- first_name: John
  full_name: Carr, John
  last_name: Carr
- first_name: Michael
  full_name: Goin, Michael
  last_name: Goin
- first_name: William
  full_name: Leiserson, William
  last_name: Leiserson
- first_name: Sage
  full_name: Moore, Sage
  last_name: Moore
- first_name: Bill
  full_name: Nell, Bill
  last_name: Nell
- first_name: Nir
  full_name: Shavit, Nir
  last_name: Shavit
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
citation:
  ama: 'Kurtz M, Kopinsky J, Gelashvili R, et al. Inducing and exploiting activation
    sparsity for fast neural network inference. In: <i>37th International Conference
    on Machine Learning, ICML 2020</i>. Vol 119. ; 2020:5533-5543.'
  apa: Kurtz, M., Kopinsky, J., Gelashvili, R., Matveev, A., Carr, J., Goin, M., …
    Alistarh, D.-A. (2020). Inducing and exploiting activation sparsity for fast neural
    network inference. In <i>37th International Conference on Machine Learning, ICML
    2020</i> (Vol. 119, pp. 5533–5543). Online.
  chicago: Kurtz, Mark, Justin Kopinsky, Rati Gelashvili, Alexander Matveev, John
    Carr, Michael Goin, William Leiserson, et al. “Inducing and Exploiting Activation
    Sparsity for Fast Neural Network Inference.” In <i>37th International Conference
    on Machine Learning, ICML 2020</i>, 119:5533–43, 2020.
  ieee: M. Kurtz <i>et al.</i>, “Inducing and exploiting activation sparsity for fast
    neural network inference,” in <i>37th International Conference on Machine Learning,
    ICML 2020</i>, Online, 2020, vol. 119, pp. 5533–5543.
  ista: 'Kurtz M, Kopinsky J, Gelashvili R, Matveev A, Carr J, Goin M, Leiserson W,
    Moore S, Nell B, Shavit N, Alistarh D-A. 2020. Inducing and exploiting activation
    sparsity for fast neural network inference. 37th International Conference on Machine
    Learning, ICML 2020. ICML: International Conference on Machine Learning vol. 119,
    5533–5543.'
  mla: Kurtz, Mark, et al. “Inducing and Exploiting Activation Sparsity for Fast Neural
    Network Inference.” <i>37th International Conference on Machine Learning, ICML
    2020</i>, vol. 119, 2020, pp. 5533–43.
  short: M. Kurtz, J. Kopinsky, R. Gelashvili, A. Matveev, J. Carr, M. Goin, W. Leiserson,
    S. Moore, B. Nell, N. Shavit, D.-A. Alistarh, in:, 37th International Conference
    on Machine Learning, ICML 2020, 2020, pp. 5533–5543.
conference:
  end_date: 2020-07-18
  location: Online
  name: 'ICML: International Conference on Machine Learning'
  start_date: 2020-07-12
date_created: 2021-05-23T22:01:45Z
date_published: 2020-07-12T00:00:00Z
date_updated: 2023-02-23T13:57:24Z
day: '12'
ddc:
- '000'
department:
- _id: DaAl
file:
- access_level: open_access
  checksum: 2aaaa7d7226e49161311d91627cf783b
  content_type: application/pdf
  creator: kschuh
  date_created: 2021-05-25T09:51:36Z
  date_updated: 2021-05-25T09:51:36Z
  file_id: '9421'
  file_name: 2020_PMLR_Kurtz.pdf
  file_size: 741899
  relation: main_file
  success: 1
file_date_updated: 2021-05-25T09:51:36Z
has_accepted_license: '1'
intvolume: '119'
language:
- iso: eng
month: '07'
oa: 1
oa_version: Published Version
page: 5533-5543
publication: 37th International Conference on Machine Learning, ICML 2020
publication_identifier:
  issn:
  - 2640-3498
quality_controlled: '1'
scopus_import: '1'
status: public
title: Inducing and exploiting activation sparsity for fast neural network inference
type: conference
user_id: 3E5EF7F0-F248-11E8-B48F-1D18A9856A87
volume: 119
year: '2020'
...
---
_id: '9631'
abstract:
- lang: eng
  text: The ability to leverage large-scale hardware parallelism has been one of the
    key enablers of the accelerated recent progress in machine learning. Consequently,
    there has been considerable effort invested into developing efficient parallel
    variants of classic machine learning algorithms. However, despite the wealth of
    knowledge on parallelization, some classic machine learning algorithms often prove
    hard to parallelize efficiently while maintaining convergence. In this paper,
    we focus on efficient parallel algorithms for the key machine learning task of
    inference on graphical models, in particular on the fundamental belief propagation
    algorithm. We address the challenge of efficiently parallelizing this classic
    paradigm by showing how to leverage scalable relaxed schedulers in this context.
    We present an extensive empirical study, showing that our approach outperforms
    previous parallel belief propagation implementations both in terms of scalability
    and in terms of wall-clock convergence time, on a range of practical applications.
acknowledgement: "We thank Marco Mondelli for discussions related to LDPC decoding,
  and Giorgi Nadiradze for discussions on analysis of relaxed schedulers. This project
  has received funding from the European Research Council (ERC) under the European\r\nUnion’s
  Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML)."
article_processing_charge: No
arxiv: 1
author:
- first_name: Vitaly
  full_name: Aksenov, Vitaly
  last_name: Aksenov
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Janne
  full_name: Korhonen, Janne
  id: C5402D42-15BC-11E9-A202-CA2BE6697425
  last_name: Korhonen
citation:
  ama: 'Aksenov V, Alistarh D-A, Korhonen J. Scalable belief propagation via relaxed
    scheduling. In: <i>Advances in Neural Information Processing Systems</i>. Vol
    33. Curran Associates; 2020:22361-22372.'
  apa: 'Aksenov, V., Alistarh, D.-A., &#38; Korhonen, J. (2020). Scalable belief propagation
    via relaxed scheduling. In <i>Advances in Neural Information Processing Systems</i>
    (Vol. 33, pp. 22361–22372). Vancouver, Canada: Curran Associates.'
  chicago: Aksenov, Vitaly, Dan-Adrian Alistarh, and Janne Korhonen. “Scalable Belief
    Propagation via Relaxed Scheduling.” In <i>Advances in Neural Information Processing
    Systems</i>, 33:22361–72. Curran Associates, 2020.
  ieee: V. Aksenov, D.-A. Alistarh, and J. Korhonen, “Scalable belief propagation
    via relaxed scheduling,” in <i>Advances in Neural Information Processing Systems</i>,
    Vancouver, Canada, 2020, vol. 33, pp. 22361–22372.
  ista: 'Aksenov V, Alistarh D-A, Korhonen J. 2020. Scalable belief propagation via
    relaxed scheduling. Advances in Neural Information Processing Systems. NeurIPS:
    Conference on Neural Information Processing Systems vol. 33, 22361–22372.'
  mla: Aksenov, Vitaly, et al. “Scalable Belief Propagation via Relaxed Scheduling.”
    <i>Advances in Neural Information Processing Systems</i>, vol. 33, Curran Associates,
    2020, pp. 22361–72.
  short: V. Aksenov, D.-A. Alistarh, J. Korhonen, in:, Advances in Neural Information
    Processing Systems, Curran Associates, 2020, pp. 22361–22372.
conference:
  end_date: 2020-12-12
  location: Vancouver, Canada
  name: 'NeurIPS: Conference on Neural Information Processing Systems'
  start_date: 2020-12-06
date_created: 2021-07-04T22:01:26Z
date_published: 2020-12-06T00:00:00Z
date_updated: 2023-02-23T14:03:03Z
day: '06'
department:
- _id: DaAl
ec_funded: 1
external_id:
  arxiv:
  - '2002.11505'
intvolume: '33'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://proceedings.neurips.cc/paper/2020/hash/fdb2c3bab9d0701c4a050a4d8d782c7f-Abstract.html
month: '12'
oa: 1
oa_version: Published Version
page: 22361-22372
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: Advances in Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713829546'
  issn:
  - '10495258'
publication_status: published
publisher: Curran Associates
quality_controlled: '1'
scopus_import: '1'
status: public
title: Scalable belief propagation via relaxed scheduling
type: conference
user_id: 6785fbc1-c503-11eb-8a32-93094b40e1cf
volume: 33
year: '2020'
...
---
_id: '9632'
abstract:
- lang: eng
  text: "Second-order information, in the form of Hessian- or Inverse-Hessian-vector
    products, is a fundamental tool for solving optimization problems. Recently, there
    has been significant interest in utilizing this information in the context of
    deep\r\nneural networks; however, relatively little is known about the quality
    of existing approximations in this context. Our work examines this question, identifies
    issues with existing approaches, and proposes a method called WoodFisher to compute
    a faithful and efficient estimate of the inverse Hessian. Our main application
    is to neural network compression, where we build on the classic Optimal Brain
    Damage/Surgeon framework. We demonstrate that WoodFisher significantly outperforms
    popular state-of-the-art methods for oneshot pruning. Further, even when iterative,
    gradual pruning is allowed, our method results in a gain in test accuracy over
    the state-of-the-art approaches, for standard image classification datasets such
    as ImageNet ILSVRC. We examine how our method can be extended to take into account
    first-order information, as well as illustrate its ability to automatically
    set layer-wise pruning thresholds and perform compression in the limited-data
    regime. The code is available at the following link, https://github.com/IST-DASLab/WoodFisher."
acknowledgement: This project has received funding from the European Research Council
  (ERC) under the European Union’s Horizon 2020 research and innovation programme
  (grant agreement No 805223 ScaleML). Also, we would like to thank Alexander Shevchenko,
  Alexandra Peste, and other members of the group for fruitful discussions.
article_processing_charge: No
arxiv: 1
author:
- first_name: Sidak Pal
  full_name: Singh, Sidak Pal
  id: DD138E24-D89D-11E9-9DC0-DEF6E5697425
  last_name: Singh
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
citation:
  ama: 'Singh SP, Alistarh D-A. WoodFisher: Efficient second-order approximation for
    neural network compression. In: <i>Advances in Neural Information Processing Systems</i>.
    Vol 33. Curran Associates; 2020:18098-18109.'
  apa: 'Singh, S. P., &#38; Alistarh, D.-A. (2020). WoodFisher: Efficient second-order
    approximation for neural network compression. In <i>Advances in Neural Information
    Processing Systems</i> (Vol. 33, pp. 18098–18109). Vancouver, Canada: Curran Associates.'
  chicago: 'Singh, Sidak Pal, and Dan-Adrian Alistarh. “WoodFisher: Efficient Second-Order
    Approximation for Neural Network Compression.” In <i>Advances in Neural Information
    Processing Systems</i>, 33:18098–109. Curran Associates, 2020.'
  ieee: 'S. P. Singh and D.-A. Alistarh, “WoodFisher: Efficient second-order approximation
    for neural network compression,” in <i>Advances in Neural Information Processing
    Systems</i>, Vancouver, Canada, 2020, vol. 33, pp. 18098–18109.'
  ista: 'Singh SP, Alistarh D-A. 2020. WoodFisher: Efficient second-order approximation
    for neural network compression. Advances in Neural Information Processing Systems.
    NeurIPS: Conference on Neural Information Processing Systems vol. 33, 18098–18109.'
  mla: 'Singh, Sidak Pal, and Dan-Adrian Alistarh. “WoodFisher: Efficient Second-Order
    Approximation for Neural Network Compression.” <i>Advances in Neural Information
    Processing Systems</i>, vol. 33, Curran Associates, 2020, pp. 18098–109.'
  short: S.P. Singh, D.-A. Alistarh, in:, Advances in Neural Information Processing
    Systems, Curran Associates, 2020, pp. 18098–18109.
conference:
  end_date: 2020-12-12
  location: Vancouver, Canada
  name: 'NeurIPS: Conference on Neural Information Processing Systems'
  start_date: 2020-12-06
date_created: 2021-07-04T22:01:26Z
date_published: 2020-12-06T00:00:00Z
date_updated: 2023-02-23T14:03:06Z
day: '06'
department:
- _id: DaAl
- _id: ToHe
ec_funded: 1
external_id:
  arxiv:
  - '2004.14340'
intvolume: '33'
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://proceedings.neurips.cc/paper/2020/hash/d1ff1ec86b62cd5f3903ff19c3a326b2-Abstract.html
month: '12'
oa: 1
oa_version: Published Version
page: 18098-18109
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: Advances in Neural Information Processing Systems
publication_identifier:
  isbn:
  - '9781713829546'
  issn:
  - '10495258'
publication_status: published
publisher: Curran Associates
quality_controlled: '1'
scopus_import: '1'
status: public
title: 'WoodFisher: Efficient second-order approximation for neural network compression'
type: conference
user_id: 6785fbc1-c503-11eb-8a32-93094b40e1cf
volume: 33
year: '2020'
...
---
_id: '6673'
abstract:
- lang: eng
  text: Several classic problems in graph processing and computational geometry are
    solved via incremental algorithms, which split computation into a series of small
    tasks acting on shared state, which gets updated progressively. While the sequential
    variant of such algorithms usually specifies a fixed (but sometimes random) order
    in which the tasks should be performed, a standard approach to parallelizing such
    algorithms is to relax this constraint to allow for out-of-order parallel execution.
    This is the case for parallel implementations of Dijkstra's single-source shortest-paths
    (SSSP) algorithm, and for parallel Delaunay mesh triangulation. While many software
    frameworks parallelize incremental computation in this way, it is still not well
    understood whether this relaxed ordering approach can still provide any complexity
    guarantees. In this paper, we address this problem, and analyze the efficiency
    guarantees provided by a range of incremental algorithms when parallelized via
    relaxed schedulers. We show that, for algorithms such as Delaunay mesh triangulation
    and sorting by insertion, schedulers with a maximum relaxation factor of k in
    terms of the maximum priority inversion allowed will introduce a maximum amount
    of wasted work of O(log n poly(k)), where n is the number of tasks to be executed.
    For SSSP, we show that the additional work is O(poly(k) dmax / wmin), where dmax
    is the maximum distance between two nodes, and wmin is the minimum such distance.
    In practical settings where n >> k, this suggests that the overheads of relaxation
    will be outweighed by the improved scalability of the relaxed scheduler. On the
    negative side, we provide lower bounds showing that certain algorithms will inherently
    incur a non-trivial amount of wasted work due to scheduler relaxation, even for
    relatively benign relaxed schedulers.
article_processing_charge: No
arxiv: 1
author:
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Giorgi
  full_name: Nadiradze, Giorgi
  id: 3279A00C-F248-11E8-B48F-1D18A9856A87
  last_name: Nadiradze
  orcid: 0000-0001-5634-0731
- first_name: Nikita
  full_name: Koval, Nikita
  id: 2F4DB10C-F248-11E8-B48F-1D18A9856A87
  last_name: Koval
citation:
  ama: 'Alistarh D-A, Nadiradze G, Koval N. Efficiency guarantees for parallel incremental
    algorithms under relaxed schedulers. In: <i>31st ACM Symposium on Parallelism
    in Algorithms and Architectures</i>. ACM Press; 2019:145-154. doi:<a href="https://doi.org/10.1145/3323165.3323201">10.1145/3323165.3323201</a>'
  apa: 'Alistarh, D.-A., Nadiradze, G., &#38; Koval, N. (2019). Efficiency guarantees
    for parallel incremental algorithms under relaxed schedulers. In <i>31st ACM Symposium
    on Parallelism in Algorithms and Architectures</i> (pp. 145–154). Phoenix, AZ,
    United States: ACM Press. <a href="https://doi.org/10.1145/3323165.3323201">https://doi.org/10.1145/3323165.3323201</a>'
  chicago: Alistarh, Dan-Adrian, Giorgi Nadiradze, and Nikita Koval. “Efficiency Guarantees
    for Parallel Incremental Algorithms under Relaxed Schedulers.” In <i>31st ACM
    Symposium on Parallelism in Algorithms and Architectures</i>, 145–54. ACM Press,
    2019. <a href="https://doi.org/10.1145/3323165.3323201">https://doi.org/10.1145/3323165.3323201</a>.
  ieee: D.-A. Alistarh, G. Nadiradze, and N. Koval, “Efficiency guarantees for parallel
    incremental algorithms under relaxed schedulers,” in <i>31st ACM Symposium on
    Parallelism in Algorithms and Architectures</i>, Phoenix, AZ, United States, 2019,
    pp. 145–154.
  ista: 'Alistarh D-A, Nadiradze G, Koval N. 2019. Efficiency guarantees for parallel
    incremental algorithms under relaxed schedulers. 31st ACM Symposium on Parallelism
    in Algorithms and Architectures. SPAA: Symposium on Parallelism in Algorithms
    and Architectures, 145–154.'
  mla: Alistarh, Dan-Adrian, et al. “Efficiency Guarantees for Parallel Incremental
    Algorithms under Relaxed Schedulers.” <i>31st ACM Symposium on Parallelism in
    Algorithms and Architectures</i>, ACM Press, 2019, pp. 145–54, doi:<a href="https://doi.org/10.1145/3323165.3323201">10.1145/3323165.3323201</a>.
  short: D.-A. Alistarh, G. Nadiradze, N. Koval, in:, 31st ACM Symposium on Parallelism
    in Algorithms and Architectures, ACM Press, 2019, pp. 145–154.
conference:
  end_date: 2019-06-24
  location: Phoenix, AZ, United States
  name: 'SPAA: Symposium on Parallelism in Algorithms and Architectures'
  start_date: 2019-06-22
date_created: 2019-07-24T08:59:36Z
date_published: 2019-06-01T00:00:00Z
date_updated: 2023-09-07T13:31:39Z
day: '01'
department:
- _id: DaAl
doi: 10.1145/3323165.3323201
ec_funded: 1
external_id:
  arxiv:
  - '2003.09363'
  isi:
  - '000507618500018'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2003.09363
month: '06'
oa: 1
oa_version: Preprint
page: 145-154
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: 31st ACM Symposium on Parallelism in Algorithms and Architectures
publication_identifier:
  isbn:
  - '9781450361842'
publication_status: published
publisher: ACM Press
quality_controlled: '1'
related_material:
  record:
  - id: '10429'
    relation: dissertation_contains
    status: public
scopus_import: '1'
status: public
title: Efficiency guarantees for parallel incremental algorithms under relaxed schedulers
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2019'
...
