{"oa_version":"Preprint","issue":"4","title":"A deamortization approach for dynamic spanner and dynamic maximal matching","scopus_import":"1","quality_controlled":"1","main_file_link":[{"url":"https://arxiv.org/abs/1810.10932","open_access":"1"}],"language":[{"iso":"eng"}],"intvolume":" 17","date_published":"2021-10-04T00:00:00Z","day":"04","article_type":"original","publisher":"Association for Computing Machinery","year":"2021","article_number":"29","abstract":[{"text":"Many dynamic graph algorithms have an amortized update time, rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where each individual operation has to be executed quickly. For this reason, there exist many recent randomized results that aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which gives a bound on the time for each individual operation that holds with high probability.\r\n\r\nIn this article, we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and the dynamic maximal matching problem.\r\n\r\n(1)\r\n\r\nFor dynamic spanner, the only known o(n) worst-case bounds were O(n3/4) high-probability worst-case update time for maintaining a 3-spanner and O(n5/9) for maintaining a 5-spanner. We give a O(1)k log3 (n) high-probability worst-case time bound for maintaining a (2k-1)-spanner, which yields the first worst-case polylog update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k-1 and Õ(n1+1/k) edges.)\r\n\r\n(2)\r\n\r\nFor dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with o(n) worst-case time bound was known and we present an algorithm with O(log 5 (n)) high-probability worst-case time; similar worst-case bounds existed only for maintaining a matching that was (2+ϵ)-approximate, and hence not maximal.\r\n\r\nOur results are achieved using a new approach for converting amortized guarantees to worst-case ones for randomized data structures by going through a third type of guarantee, which is a middle ground between the two above: An algorithm is said to have worst-case expected update time ɑ if for every update σ, the expected time to process σ is at most ɑ. Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: A worst-case expected update time of O(1) still allows for the possibility that every 1/f(n) updates requires ϴ (f(n)) time to process, for arbitrarily high f(n). In this article, we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: The query time remains the same, while the update time increases by a factor of O(log 2(n)).\r\n\r\nThus, we achieve our results in two steps:\r\n\r\n(1) First, we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times.\r\n\r\n(2) Then, we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bound. 
All our algorithms are Las-Vegas-type algorithms.","lang":"eng"}],"publication":"ACM Transactions on Algorithms","publication_status":"published","doi":"10.1145/3469833","author":[{"last_name":"Bernstein","full_name":"Bernstein, Aaron","first_name":"Aaron"},{"first_name":"Sebastian","full_name":"Forster, Sebastian","last_name":"Forster"},{"last_name":"Henzinger","id":"540c9bbd-f2de-11ec-812d-d04a5be85630","first_name":"Monika H","full_name":"Henzinger, Monika H","orcid":"0000-0002-5008-6530"}],"_id":"11663","publication_identifier":{"eissn":["1549-6333"],"issn":["1549-6325"]},"type":"journal_article","extern":"1","volume":17,"date_updated":"2022-09-09T11:35:44Z","oa":1,"user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","external_id":{"arxiv":["1810.10932"]},"status":"public","date_created":"2022-07-27T11:09:06Z","month":"10","citation":{"ieee":"A. Bernstein, S. Forster, and M. H. Henzinger, “A deamortization approach for dynamic spanner and dynamic maximal matching,” ACM Transactions on Algorithms, vol. 17, no. 4. Association for Computing Machinery, 2021.","short":"A. Bernstein, S. Forster, M.H. Henzinger, ACM Transactions on Algorithms 17 (2021).","ama":"Bernstein A, Forster S, Henzinger MH. A deamortization approach for dynamic spanner and dynamic maximal matching. ACM Transactions on Algorithms. 2021;17(4). doi:10.1145/3469833","mla":"Bernstein, Aaron, et al. “A Deamortization Approach for Dynamic Spanner and Dynamic Maximal Matching.” ACM Transactions on Algorithms, vol. 17, no. 4, 29, Association for Computing Machinery, 2021, doi:10.1145/3469833.","apa":"Bernstein, A., Forster, S., & Henzinger, M. H. (2021). A deamortization approach for dynamic spanner and dynamic maximal matching. ACM Transactions on Algorithms. Association for Computing Machinery. https://doi.org/10.1145/3469833","chicago":"Bernstein, Aaron, Sebastian Forster, and Monika H Henzinger. “A Deamortization Approach for Dynamic Spanner and Dynamic Maximal Matching.” ACM Transactions on Algorithms. Association for Computing Machinery, 2021. https://doi.org/10.1145/3469833.","ista":"Bernstein A, Forster S, Henzinger MH. 2021. A deamortization approach for dynamic spanner and dynamic maximal matching. ACM Transactions on Algorithms. 17(4), 29."},"article_processing_charge":"No","acknowledgement":"The conference version of this article [10] had an error in the analysis of the dynamic matching algorithm. In particular, Lemma 4.5 assumed an independence between adversarial updates to the hierarchy that is in fact true, but which requires a sophisticated proof. We are very grateful to the anonymous reviewers of Transactions on Algorithms for pointing out this mistake in our analysis. The mistake is fixed in Section 4.5. Almost the entire fix is a matter of analysis: the only change to the algorithm itself is the introduction of responsible bits in Algorithm 2. The first author would like to thank Mikkel Thorup and Alan Roytman for a very helpful discussion of the proof of Theorem 1.1."}