2019
DOI: 10.1609/aaai.v33i01.33013534

Unsupervised Feature Selection by Pareto Optimization

Abstract: Dimensionality reduction is often employed to deal with data that have a huge number of features; it can generally be divided into two categories: feature transformation and feature selection. Due to its interpretability, its efficiency during inference, and the abundance of unlabeled data, unsupervised feature selection has attracted much attention. In this paper, we consider its natural formulation, column subset selection (CSS), which is to minimize the reconstruction error of a data matrix by selecting a …
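For context (the abstract is truncated above), the column subset selection objective as it is usually stated in the CSS literature can be written as follows; the symbols X, S, and k are illustrative and are not quoted from the paper:

```latex
\[
\min_{S \subseteq \{1,\dots,d\},\ |S| \le k} \; \bigl\lVert X - X_S X_S^{+} X \bigr\rVert_F^2
\]
```

Here X is the n-by-d data matrix, X_S is the submatrix of the columns indexed by S, and X_S^+ is its Moore–Penrose pseudoinverse, so X_S X_S^+ X is the projection of the data onto the span of the selected columns.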

Cited by 19 publications (8 citation statements) · References 29 publications
“…We can see that the best performance on each data set is always achieved by PORSS_o or PORSS_u. By the sign-test (Demšar 2006) with confidence level 0.05, POSS is significantly better than the greedy algorithm, consistent with the previous results (Feng, Qian, and Tang 2019), and significantly worse than PORSS_o and PORSS_u, showing the usefulness of recombination. The rank of each algorithm on each data set is also computed as in (Demšar 2006), and averaged in the last row of Table 1.…”
Section: Empirical Study (supporting)
confidence: 82%
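The sign-test and average-rank comparison mentioned in this excerpt (Demšar 2006) can be sketched as below; this is an illustrative reconstruction, not code from either paper, and it assumes SciPy ≥ 1.7 for binomtest.

```python
# Sketch of a per-dataset sign test and average-rank computation,
# in the spirit of Demšar (2006). Hypothetical helper names.
import numpy as np
from scipy.stats import binomtest, rankdata

def sign_test(errors_a, errors_b, alpha=0.05):
    """Two-sided sign test over data sets: is algorithm A significantly better than B?"""
    wins = int(np.sum(errors_a < errors_b))    # data sets where A has lower error
    losses = int(np.sum(errors_a > errors_b))  # ties are discarded
    n = wins + losses
    p = binomtest(wins, n, p=0.5).pvalue
    return p < alpha, p

def average_ranks(error_table):
    """error_table: shape (n_datasets, n_algorithms); rank 1 = best on a data set."""
    ranks = np.apply_along_axis(rankdata, 1, error_table)
    return ranks.mean(axis=0)
```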
“…For subset selection with monotone objective functions, POSS has been proved to achieve the same general approximation guarantee as the greedy algorithm in polynomial expected running time, i.e., to achieve the optimal polynomial-time approximation guarantee (Qian, Yu, and Zhou 2015). Furthermore, it has been empirically shown that POSS can achieve significantly better performance than the greedy algorithm in some applications, e.g., unsupervised feature selection (Feng, Qian, and Tang 2019) and sparse regression (Qian, Yu, and Zhou 2015).…”
Section: The POSS Algorithm (mentioning)
confidence: 99%
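As context for this excerpt, below is a minimal sketch of the POSS scheme described in Qian, Yu, and Zhou (2015): subset selection is recast as a bi-objective problem (maximize f, minimize the subset size), and a population of non-dominated solutions is evolved by bit-wise mutation. The function names, the handling of infeasible solutions, and the default iteration count are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of Pareto Optimization for Subset Selection (POSS),
# following Qian, Yu, and Zhou (2015): maximize f(X) subject to |X| <= k.
import numpy as np

def poss(f, n, k, iters=None, rng=None):
    """f: callable mapping a boolean mask of length n to a real value (monotone).
    A solution x weakly dominates y if f(x) >= f(y) and |x| <= |y|."""
    rng = np.random.default_rng() if rng is None else rng
    # Expected-iteration bound 2*e*k^2*n reported in the paper, used here as a default budget.
    iters = iters if iters is not None else int(2 * np.e * k * k * n)
    empty = np.zeros(n, dtype=bool)
    population = [(empty, f(empty))]

    for _ in range(iters):
        x, _ = population[rng.integers(len(population))]
        # Bit-wise mutation: flip each bit independently with probability 1/n.
        y = x ^ (rng.random(n) < 1.0 / n)
        if y.sum() >= 2 * k:
            continue  # solutions of size >= 2k are treated as infeasible and discarded
        fy = f(y)
        # Keep y only if no existing member weakly dominates it.
        if any(fz >= fy and z.sum() <= y.sum() for z, fz in population):
            continue
        population = [(z, fz) for z, fz in population
                      if not (fy >= fz and y.sum() <= z.sum())]
        population.append((y, fy))

    # Return the best solution satisfying the size constraint |X| <= k.
    feasible = [(z, fz) for z, fz in population if z.sum() <= k]
    return max(feasible, key=lambda t: t[1])[0]
```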
“…where both the objective function f : 2^V → R and the cost function c : 2^V → R are monotone, but not necessarily submodular. This problem is NP-hard in general, and has various applications, such as influence maximization [Kempe et al., 2003], sensor placement [Krause et al., 2008], document summarization [Lin and Bilmes, 2011] and unsupervised feature selection [Feng et al., 2019], just to name a few. A well-known special case of this problem is subset selection with cardinality constraints, that is, c(X) = |X|.…”
Section: Introduction (mentioning)
confidence: 99%
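Written out, the general problem this excerpt refers to is the following (a standard statement of cost-constrained subset selection; the budget symbol B is an assumption here, and the cardinality-constrained special case takes c(X) = |X| with B = k):

```latex
\[
\max_{X \subseteq V} \; f(X) \quad \text{s.t.} \quad c(X) \le B,
\]
```

where f : 2^V → R and c : 2^V → R are monotone but not necessarily submodular.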
“…Besides relaxing the constraints, non-submodular objective functions have also been studied, e.g., (Bian et al. 2017; Elenberg et al. 2018; Bogunovic, Zhao, and Cevher 2018; Qian et al. 2019). They have many applications, such as Bayesian experimental design (Krause, Singh, and Guestrin 2008), dictionary selection (Krause and Cevher 2010), sparse regression (Das and Kempe 2011), and unsupervised feature selection (Feng, Qian, and Tang 2019).…”
Section: Introduction (mentioning)
confidence: 99%