2020
DOI: 10.1007/s00453-020-00736-0

The Power of Linear-Time Data Reduction for Maximum Matching

Abstract: Finding maximum-cardinality matchings in undirected graphs is arguably one of the most central graph primitives. For m-edge and n-vertex graphs, it is well known to be solvable in $$O(m\sqrt{n})$$ time; however, for several applications this running time is still too slow. We investigate how linear-time (and almost linear-time) data reduction (used as preprocessing) can alleviate the situation. More specifically, we focus on linear-time kernelization. We start a deeper and systematic study both …
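The preprocessing idea behind the paper is to exhaustively apply cheap data reduction rules before running an exact $$O(m\sqrt{n})$$ matching algorithm on what remains. As a rough illustration only, and not the authors' exact rule set, the sketch below applies the classic degree-1 rule: a vertex with a single neighbour can always be matched to that neighbour in some maximum matching, so the pair can be matched and deleted, and the rule re-applied, in time roughly linear in the graph size. All function and variable names here are chosen for exposition.

```python
from collections import defaultdict

def degree_one_reduction(n, edges):
    """Exhaustively apply the degree-1 reduction rule on an undirected
    graph with vertices 0..n-1: a vertex v with exactly one neighbour u
    is matched to u, and both are deleted. Returns the matching edges
    forced by the rule and the edges of the reduced (kernel) graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    alive = [True] * n
    stack = [v for v in range(n) if len(adj[v]) == 1]
    forced = []                               # matching edges fixed by the rule

    while stack:
        v = stack.pop()
        if not alive[v] or len(adj[v]) != 1:  # v was removed or became isolated
            continue
        u = next(iter(adj[v]))                # the unique neighbour of v
        forced.append((v, u))
        alive[v] = alive[u] = False           # delete the matched pair
        for w in adj[u] - {v}:                # u's other neighbours lose an edge
            adj[w].discard(u)
            if alive[w] and len(adj[w]) == 1:
                stack.append(w)               # w may now trigger the rule itself
        adj[v].clear()
        adj[u].clear()

    remaining = [(u, v) for u, v in edges if alive[u] and alive[v]]
    return forced, remaining
```

On the remaining kernel one would then run an exact matching routine; the paper's rules additionally handle degree-2 vertices and come with provable kernel-size guarantees, which this sketch does not attempt to reproduce.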

Citation Types: 1 supporting, 23 mentioning, 0 contrasting

Cited by 29 publications (24 citation statements)
References 25 publications
“…In this article we adopt an experimental approach to answering the following question: do the new reduction rules from Kelk and Linz (2020) produce smaller kernels in practice than, say, when only the subtree and chain reductions are applied? This mirrors several recent articles in the algorithmic graph theory literature where the practical effectiveness of kernelization has also been analyzed (Fellows et al 2018; Ferizovic et al 2020; Henzinger et al 2020; Mertzios et al 2020; Alber et al 2006). The question is relevant, since earlier studies of kernelization in phylogenetics have noted that, despite its theoretical importance, in an empirical setting the chain reduction seems to have very limited effect compared to the subtree reduction (Hickey et al 2008; van Iersel et al 2016).…”
Section: Introduction (supporting)
confidence: 69%
“…Data reduction algorithms seem to lend themselves well to realization on GPUs since, often, they consist of a fixed set of data reduction rules, applied to different parts of the data. We thus expect that, although previously more often studied in the context of attacking NP-hard problems [8][9][10], parallel kernelization will have a stronger real-world impact in the context of speeding up polynomial-time algorithms on large data sets, as when linear-time data reduction was applied to speed up matching algorithms [42], or in the context of designing parallel fixed-parameter algorithms [9] for P-complete problems, which do not give in to massive parallelization.…”
Section: Discussion (mentioning)
confidence: 99%
“…, for some ε > 0. FPT algorithms of this kind for polynomial problems have attracted recent attention [5,16,19,20,21]. We stress that Maximum-Cardinality Matching has been proposed in [21] as the "drosophila" of the study of these FPT algorithms in P. We continue to advance this research direction.…”
Section: :3 (mentioning)
confidence: 92%