2018
DOI: 10.1109/tsipn.2018.2819018

Robust Semisupervised Graph Classifier Learning With Negative Edge Weights

Abstract: In a semi-supervised learning scenario, partially observed (and possibly noisy) labels are used to train a classifier that assigns labels to the unclassified samples. In this paper, we construct a complete graph-based binary classifier given only the samples' feature vectors and partial labels. Specifically, we first build appropriate similarity graphs with positive and negative edge weights connecting all samples based on inter-node feature distances. By viewing a binary classifier as a piecewise constant…
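The abstract truncates before spelling out the graph construction, but the pipeline it sketches (similarity weights derived from inter-node feature distances, with negative weights for dissimilar pairs) can be illustrated as follows. This is a minimal sketch under stated assumptions, not the paper's actual procedure; the function name, `pos_threshold`, and the sign rule are hypothetical.

```python
import numpy as np

def build_signed_similarity_graph(X, pos_threshold, sigma):
    """Hypothetical sketch: Gaussian-kernel weights, positive for nearby
    sample pairs and negative for distant ones. The paper's actual
    construction is not fully specified in the truncated abstract."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d2 = np.sum((X[i] - X[j]) ** 2)          # squared feature distance
            w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weight
            # Assumption: pairs closer than pos_threshold get positive
            # (similarity) weights; more distant pairs get negative
            # (dissimilarity) weights.
            W[i, j] = W[j, i] = w if d2 < pos_threshold else -w
    return W
```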

Cited by 32 publications (33 citation statements)
References 48 publications
“…In this paper, we further extend our conference contribution [24] to a more generalized end-to-end CNN-based approach that, given a noisy binary classifier signal, iteratively performs GLR (similar to [22]) as a classifier-signal restoration operator, updates the underlying graph, and regularizes the CNNs. Compared to previous graph-based classifiers [18], [20], [21], [23], [29], [40], [41], [45], [46], by adopting edge convolution, iteratively updating the graph, and applying GLR, we learn a deeper feature representation and gain a degree of freedom for learning the underlying data structure. Given noisy training labels, and in contrast to classical robust DNN-based classifiers [4], [7], [15], [16], [35], [39], we bring together the regularization benefits of GLR and of the proposed loss functions to perform more robust deep metric learning.…”
Section: Novelty With Respect To Reviewed Literature
confidence: 99%
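The GLR operator referenced above is, in its standard form, a Laplacian quadratic regularizer added to a data-fidelity term: minimizing ||y − x||² + μ xᵀLx has the closed-form solution x* = (I + μL)⁻¹y. Below is a minimal numpy sketch of that standard restoration step; the citing paper's exact operator and its choice of μ may differ.

```python
import numpy as np

def glr_restore(y, W, mu=1.0):
    """One graph-Laplacian-regularized (GLR) restoration step:
    x* = argmin_x ||y - x||^2 + mu * x^T L x  =>  (I + mu L) x* = y,
    where L = D - W is the combinatorial graph Laplacian."""
    D = np.diag(W.sum(axis=1))          # degree matrix
    L = D - W                           # combinatorial Laplacian
    return np.linalg.solve(np.eye(len(y)) + mu * L, y)
```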
“…Typically, the edge weight is computed using a Gaussian kernel with a fixed scaling factor $\sigma$, i.e., $\exp\!\left(-\frac{\|x_i - x_j\|_2^2}{2\sigma^2}\right)$, to quantify the node-to-node correlation. Instead of using a fixed $\sigma$ as in [18], [20], [21], motivated by [47] we introduce an auto-sigma Gaussian kernel function to assign edge weight $w^r_{i,j}$ in $\mathcal{G}^r$ by maximizing the margin between the edge weights assigned to P-edges and Q-edges, as:…”
Section: CNN
confidence: 99%
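The quoted passage cuts off before the paper's margin criterion, but the idea of an auto-sigma search can be sketched: choose the σ that keeps weights on P-edges (same-label pairs) well above weights on Q-edges (opposite-label pairs). The function and argument names below are hypothetical, and the inputs are assumed to be squared feature distances.

```python
import numpy as np

def auto_sigma(p_dists, q_dists, sigmas):
    """Hypothetical sketch of an auto-sigma search: pick the sigma that
    maximizes the margin between the smallest weight on P-edges (which
    should stay large) and the largest weight on Q-edges (which should
    stay small). The quoted passage truncates before the actual formula."""
    best_sigma, best_margin = None, -np.inf
    for s in sigmas:
        w_p = np.exp(-np.asarray(p_dists) / (2.0 * s ** 2))  # P-edge weights
        w_q = np.exp(-np.asarray(q_dists) / (2.0 * s ** 2))  # Q-edge weights
        margin = w_p.min() - w_q.max()
        if margin > best_margin:
            best_sigma, best_margin = s, margin
    return best_sigma
```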
“…Leveraging advances in graph signal processing (GSP) [2]–[5], recent works [6], [7] pose binary classifier learning as a signal restoration problem on graphs, where the undirected graph consists of a set of nodes (each associated with a feature vector) and a set of weighted edges connecting similar nodes in the high-dimensional feature space. A graph smoothness prior is typically adopted to regularize the ill-posed signal restoration problem [8]–[12].…”
Section: Introduction
confidence: 99%
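For concreteness, the graph smoothness prior referenced here is typically the Laplacian quadratic form xᵀLx, which equals the weighted sum of squared signal differences over the edges, so minimizing it favors label signals that vary little between strongly connected nodes. A quick numpy check of that standard identity (not specific to [6], [7]):

```python
import numpy as np

# x^T L x = 0.5 * sum_{i,j} w_ij (x_i - x_j)^2 for a symmetric W,
# i.e., the prior penalizes large signal jumps across heavy edges.
W = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.5],
              [0.2, 0.5, 0.0]])
L = np.diag(W.sum(axis=1)) - W
x = np.array([1.0, 1.1, -0.9])
quad = x @ L @ x
pairwise = 0.5 * sum(W[i, j] * (x[i] - x[j]) ** 2
                     for i in range(3) for j in range(3))
assert np.isclose(quad, pairwise)
```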
“…In a recent article [32], the authors stated that, to the best of their knowledge, the construction of similarity graphs with both positive and negative edges from feature vectors for classification had not been studied in the graph-based classifier literature ([32, pp. 716–717]).…”
confidence: 99%
“…One unfortunate consequence of negative edge weights is that the graph Laplacian matrix L can be indefinite. Cheung et al. [32] presented a perturbation matrix so that the perturbed Laplacian is positive semidefinite for their algorithm.…”
confidence: 99%
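The specific perturbation matrix of [32] is not reproduced in this report. A generic way to restore positive semidefiniteness to an indefinite signed-graph Laplacian is a spectral shift, sketched below as an illustration of the general idea, not necessarily the construction used in [32].

```python
import numpy as np

def shift_to_psd(L, eps=0.0):
    """Generic sketch (not necessarily the perturbation of [32]): if the
    signed-graph Laplacian L has a negative minimum eigenvalue lmin,
    adding (|lmin| + eps) * I raises every eigenvalue by that amount,
    yielding a positive semidefinite matrix."""
    lmin = np.linalg.eigvalsh(L).min()
    if lmin < 0:
        L = L + (abs(lmin) + eps) * np.eye(L.shape[0])
    return L
```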