---
_id: '6673'
abstract:
- lang: eng
  text: Several classic problems in graph processing and computational geometry are
    solved via incremental algorithms, which split computation into a series of small
    tasks acting on shared state that is updated progressively. While the sequential
    variant of such algorithms usually specifies a fixed (but sometimes random) order
    in which the tasks should be performed, a standard approach to parallelizing them
    is to relax this constraint and allow out-of-order parallel execution.
    This is the case for parallel implementations of Dijkstra's single-source shortest-paths
    (SSSP) algorithm, and for parallel Delaunay mesh triangulation. While many software
    frameworks parallelize incremental computation in this way, it is still not well
    understood whether this relaxed ordering approach can provide any complexity
    guarantees. In this paper, we address this problem, and analyze the efficiency
    guarantees provided by a range of incremental algorithms when parallelized via
    relaxed schedulers. We show that, for algorithms such as Delaunay mesh triangulation
    and sorting by insertion, schedulers with a maximum relaxation factor of k in
    terms of the maximum priority inversion allowed will introduce a maximum amount
    of wasted work of O(log n poly(k)), where n is the number of tasks to be executed.
    For SSSP, we show that the additional work is O(poly(k) dmax / wmin), where dmax
    is the maximum distance between two nodes, and wmin is the minimum such distance.
    In practical settings where n >> k, this suggests that the overheads of relaxation
    will be outweighed by the improved scalability of the relaxed scheduler. On the
    negative side, we provide lower bounds showing that certain algorithms will inherently
    incur a non-trivial amount of wasted work due to scheduler relaxation, even for
    relatively benign relaxed schedulers.
article_processing_charge: No
arxiv: 1
author:
- first_name: Dan-Adrian
  full_name: Alistarh, Dan-Adrian
  id: 4A899BFC-F248-11E8-B48F-1D18A9856A87
  last_name: Alistarh
  orcid: 0000-0003-3650-940X
- first_name: Giorgi
  full_name: Nadiradze, Giorgi
  id: 3279A00C-F248-11E8-B48F-1D18A9856A87
  last_name: Nadiradze
  orcid: 0000-0001-5634-0731
- first_name: Nikita
  full_name: Koval, Nikita
  id: 2F4DB10C-F248-11E8-B48F-1D18A9856A87
  last_name: Koval
citation:
  ama: 'Alistarh D-A, Nadiradze G, Koval N. Efficiency guarantees for parallel incremental
    algorithms under relaxed schedulers. In: <i>31st ACM Symposium on Parallelism
    in Algorithms and Architectures</i>. ACM Press; 2019:145-154. doi:<a href="https://doi.org/10.1145/3323165.3323201">10.1145/3323165.3323201</a>'
  apa: 'Alistarh, D.-A., Nadiradze, G., &#38; Koval, N. (2019). Efficiency guarantees
    for parallel incremental algorithms under relaxed schedulers. In <i>31st ACM Symposium
    on Parallelism in Algorithms and Architectures</i> (pp. 145–154). Phoenix, AZ,
    United States: ACM Press. <a href="https://doi.org/10.1145/3323165.3323201">https://doi.org/10.1145/3323165.3323201</a>'
  chicago: Alistarh, Dan-Adrian, Giorgi Nadiradze, and Nikita Koval. “Efficiency Guarantees
    for Parallel Incremental Algorithms under Relaxed Schedulers.” In <i>31st ACM
    Symposium on Parallelism in Algorithms and Architectures</i>, 145–54. ACM Press,
    2019. <a href="https://doi.org/10.1145/3323165.3323201">https://doi.org/10.1145/3323165.3323201</a>.
  ieee: D.-A. Alistarh, G. Nadiradze, and N. Koval, “Efficiency guarantees for parallel
    incremental algorithms under relaxed schedulers,” in <i>31st ACM Symposium on
    Parallelism in Algorithms and Architectures</i>, Phoenix, AZ, United States, 2019,
    pp. 145–154.
  ista: 'Alistarh D-A, Nadiradze G, Koval N. 2019. Efficiency guarantees for parallel
    incremental algorithms under relaxed schedulers. 31st ACM Symposium on Parallelism
    in Algorithms and Architectures. SPAA: Symposium on Parallelism in Algorithms
    and Architectures, 145–154.'
  mla: Alistarh, Dan-Adrian, et al. “Efficiency Guarantees for Parallel Incremental
    Algorithms under Relaxed Schedulers.” <i>31st ACM Symposium on Parallelism in
    Algorithms and Architectures</i>, ACM Press, 2019, pp. 145–54, doi:<a href="https://doi.org/10.1145/3323165.3323201">10.1145/3323165.3323201</a>.
  short: D.-A. Alistarh, G. Nadiradze, N. Koval, in: 31st ACM Symposium on Parallelism
    in Algorithms and Architectures, ACM Press, 2019, pp. 145–154.
conference:
  end_date: 2019-06-24
  location: Phoenix, AZ, United States
  name: 'SPAA: Symposium on Parallelism in Algorithms and Architectures'
  start_date: 2019-06-22
date_created: 2019-07-24T08:59:36Z
date_published: 2019-06-01T00:00:00Z
date_updated: 2023-09-07T13:31:39Z
day: '01'
department:
- _id: DaAl
doi: 10.1145/3323165.3323201
ec_funded: 1
external_id:
  arxiv:
  - '2003.09363'
  isi:
  - '000507618500018'
isi: 1
language:
- iso: eng
main_file_link:
- open_access: '1'
  url: https://arxiv.org/abs/2003.09363
month: '06'
oa: 1
oa_version: Preprint
page: 145-154
project:
- _id: 268A44D6-B435-11E9-9278-68D0E5697425
  call_identifier: H2020
  grant_number: '805223'
  name: Elastic Coordination for Scalable Machine Learning
publication: 31st ACM Symposium on Parallelism in Algorithms and Architectures
publication_identifier:
  isbn:
  - '9781450361842'
publication_status: published
publisher: ACM Press
quality_controlled: '1'
related_material:
  record:
  - id: '10429'
    relation: dissertation_contains
    status: public
scopus_import: '1'
status: public
title: Efficiency guarantees for parallel incremental algorithms under relaxed schedulers
type: conference
user_id: 4359f0d1-fa6c-11eb-b949-802e58b17ae8
year: '2019'
...
