2019
DOI: 10.1109/tsp.2019.2953593
Semi-Supervised Learning in Network-Structured Data via Total Variation Minimization

Abstract: We provide an analysis and interpretation of total variation (TV) minimization for semi-supervised learning from partially-labeled network-structured data. Our approach exploits an intrinsic duality between TV minimization and network flow problems. In particular, we use Fenchel duality to establish a precise equivalence of TV minimization and a minimum cost flow problem. This provides a link between modern convex optimization methods for non-smooth Lasso-type problems and maximum flow algorithms. We show how …
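To make the abstract's setup concrete, here is a minimal sketch of TV minimization for semi-supervised labeling on a toy weighted graph, written with CVXPY. The graph, weights, and seed labels are illustrative assumptions, not the paper's experiments, and the min-cost-flow dual the paper derives is not reproduced here.

```python
# Minimal sketch of TV minimization for semi-supervised labeling on a
# toy weighted graph. The graph, weights, and seed labels below are
# illustrative assumptions; the paper's min-cost-flow dual solver is
# not reproduced here.
import cvxpy as cp
import numpy as np

# Two tightly connected triangles {0,1,2} and {3,4,5}, joined by one
# weak edge (2,3) -- the natural cluster boundary.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
weights = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.1])
n = 6

x = cp.Variable(n)
# Total variation: weighted sum of label differences over all edges.
tv = cp.sum(cp.multiply(weights,
                        cp.abs(cp.hstack([x[i] - x[j] for i, j in edges]))))
# Hard constraints at the two labeled (seed) nodes.
constraints = [x[0] == 1.0, x[5] == -1.0]

cp.Problem(cp.Minimize(tv), constraints).solve()
print(np.round(x.value, 3))  # [ 1.  1.  1. -1. -1. -1.]: jump lands on the weak edge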

Cited by 30 publications (16 citation statements)
References: 51 publications (150 reference statements)

“…Besides the latent space models, sparsity [24] or clustering assumptions [25] have been used to impose low-dimensional structures in single-relational networks. An MRN can be seen as a combination of several heterogeneous single-relational networks.…”
Section: Discussion
confidence: 99%
“…We have recently explored the relation between network flow problems and TV minimization [17], [18]. Loosely speaking, the solution of TV minimization is piecewise constant over clusters whose boundaries have a small total weight.…”
Section: Network Lasso and Its Dual
confidence: 99%
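The flow connection this excerpt refers to can be written schematically. The primal/dual pair below is a standard Fenchel-dual pairing for hard-constrained TV minimization, given as a sketch of the correspondence rather than the paper's exact theorem; the symbols (edge set E, seed set S, weights w_ij, labels y_i) are assumptions matching the toy setup above.

```latex
% Schematic primal/dual pair; a sketch, not the paper's exact statement.
\begin{align*}
\text{(TV primal)}\quad
  & \min_{x}\ \sum_{(i,j)\in\mathcal{E}} w_{ij}\,\lvert x_i - x_j\rvert
    \quad\text{s.t.}\quad x_i = y_i \ \ \forall i \in \mathcal{S},\\[2pt]
\text{(flow dual)}\quad
  & \max_{f}\ \sum_{i\in\mathcal{S}} y_i
    \Big(\textstyle\sum_{j:(i,j)\in\mathcal{E}} f_{ij}
       - \sum_{j:(j,i)\in\mathcal{E}} f_{ji}\Big)
    \quad\text{s.t.}\quad \lvert f_{ij}\rvert \le w_{ij},
\end{align*}
% with flow conservation (net outflow zero) at every unlabeled node
% $i \notin \mathcal{S}$. Edge capacities $w_{ij}$ make this a flow
% problem, linking TV minimization to max-flow/min-cut algorithms.
```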
“…The special case of (6) when α = 0 is studied in [17]. We can also interpret (6) as TV minimization using soft constraints instead of hard constraints [18]. While [18] enforces x̂_i = 1 for each seed node i ∈ S_k, (6) uses soft constraints such that typically x̂_i < 1 at seed nodes i ∈ S_k.…”
Section: Network Lasso and Its Dual
confidence: 99%
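The hard-versus-soft distinction in this excerpt is easy to see in code. Below, the hard equality constraints from the first sketch are replaced by a quadratic penalty pulling seed nodes toward their labels. The penalty form and the weight lam are assumptions, not the cited objective (6), but they reproduce the reported effect that seed values end up with magnitude strictly below 1.

```python
# Soft-constraint variant of the earlier TV sketch: seed labels are
# encouraged by a penalty instead of enforced exactly. The quadratic
# penalty and lam = 0.5 are assumptions, not objective (6) itself.
import cvxpy as cp
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
weights = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.1])
n, lam = 6, 0.5

x = cp.Variable(n)
tv = cp.sum(cp.multiply(weights,
                        cp.abs(cp.hstack([x[i] - x[j] for i, j in edges]))))
# Soft seed penalties replacing the hard constraints x[0] == 1, x[5] == -1.
soft = lam * (cp.square(x[0] - 1.0) + cp.square(x[5] + 1.0))

cp.Problem(cp.Minimize(tv + soft)).solve()
print(np.round(x.value, 3))  # ~[0.9 0.9 0.9 -0.9 -0.9 -0.9]: |x̂| < 1 at seeds
```

With the penalty, shrinking the seed values slightly toward zero buys a cheaper jump across the weak edge, so the recovered labels at seed nodes settle strictly inside (-1, 1) rather than hitting the labels exactly, matching the x̂_i < 1 behavior described above.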