2006
DOI: 10.4018/jdwm.2006070104
Partially Supervised Classification

Abstract: This paper addresses a new classification technique: Partially Supervised Classification (PSC), which identifies a specific land-cover class of interest in a remotely sensed image using training samples that belong only to that class. The paper also presents and discusses a newly proposed Support Vector Machine (SVM) algorithm for PSC. Accordingly, its training set includes labeled samples that belong to the class of interest and unlabeled samples of all classes randomly selected fr…

Cited by 29 publications (56 citation statements) | References 18 publications
“…The positive class expansion problem appears to have some relationship with PULearning [12,17], concept drift [9,10], and covariate shift [8,1]. But in fact it is very different from these tasks.…”
Section: Related Work
confidence: 99%
“…A large part of works addressing PU-Learning follow two steps, e.g. [12,17]. First, strong negative instances are discovered from the unlabeled data.…”
Section: Related Work
confidence: 99%
“…PU learning has been investigated by several researchers (Liu et al, 2002;Denis et al, 2002;Li and Liu, 2003;Elkan and Noto, 2008;Hsieh et al, 2015). A popular approach follows a two-stage strategy: (i) identifying a set of reliable N class instances RN from the unlabeled set U ; and (ii) building a classifier using P (P class) and RN (N class) and Q = U − RN (unlabeled) by applying a learning algorithm (e.g., SVM) iteratively.…”
Section: Traditional PU Learning
confidence: 99%
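The two-stage strategy quoted above can be sketched in a few lines. This is a minimal illustration, not the cited authors' method: it uses a toy nearest-centroid scorer in place of an iteratively trained SVM, and the data, the `rn_fraction` parameter, and the `two_stage_pu` helper are all hypothetical.

```python
def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n, dim = len(points), len(points[0])
    return [sum(p[i] for p in points) / n for i in range(dim)]

def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def two_stage_pu(P, U, rn_fraction=0.5):
    """Stage (i): rank unlabeled points by distance from the positive
    centroid and keep the farthest ones as reliable negatives (RN).
    Stage (ii): classify by nearest centroid using P vs. RN.
    (A toy stand-in for the SVM-based iteration in the cited work.)"""
    c_pos = centroid(P)
    ranked = sorted(U, key=lambda u: dist2(u, c_pos), reverse=True)
    k = max(1, int(len(U) * rn_fraction))
    RN = ranked[:k]                      # reliable negative set
    c_neg = centroid(RN)
    def classify(x):
        return "P" if dist2(x, c_pos) < dist2(x, c_neg) else "N"
    return classify, RN

# Toy data: positives cluster near (0, 0); U hides negatives near (5, 5).
P = [[0.0, 0.1], [0.2, 0.0], [-0.1, 0.1]]
U = [[0.1, 0.2], [5.0, 5.1], [4.9, 5.0], [5.2, 4.8]]
classify, RN = two_stage_pu(P, U)
print(classify([0.0, 0.0]))   # near the positive centroid -> "P"
print(classify([5.0, 5.0]))   # near the reliable negatives -> "N"
```

Real PU-learning systems replace both stages with stronger learners (e.g., a spy-based negative extractor and an SVM run to convergence), but the control flow is the same.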
“…This is important because it gives us a formal model to tackle the problem. PU learning is stated as follows (Liu et al, 2002): given a set P of examples of a particular class (we also use P to denote the class) and a set U of unlabeled examples which contains hidden instances from both classes P and not-P (called N ), we want to build a classifier using P and U to classify the data in U or future test data into the two classes, i.e., P and N (or not-P). In our case, P is the existing sentiment lexicon, and U is a set of candidate words from a social media corpus.…”
Section: Introduction
confidence: 99%
“…The objective in this situation is to perform standard machine learning activities despite this data restriction. The problem has been referred to in the literature under several names, including partially supervised classification [15], positive example based learning [19], and positive unlabelled learning [9]. A typical application has been text classification: given a number of query documents belonging to a particular class (e.g.
Section: Introduction
confidence: 99%