2019
DOI: 10.1109/jas.2019.1911417

Graph regularized Lp smooth non-negative matrix factorization for data representation

Cited by 69 publications (22 citation statements)
References 32 publications
“…And the third one is that some extracted causal pathways contain implicit mediators (or implicit effect/causative-concept EDUs) as implicit wrdCoc features, which have to be represented by explicit wrdCoc features to yield clearly comprehensible pathways. Moreover, our implicit wrdCoc features are qualitative data, whereas the previous research [16] discovered hidden (latent) semantics as implicit features through graph regularization, where the latent semantics of [16] are quantitative data.…”
Section: Edu1 "เมื…" (mentioning)
confidence: 94%
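For context, the "latent semantics" credited to [16] are the non-negative latent factor matrices learned by a graph-regularized factorization. A minimal sketch of a generic graph-regularized, Lp-smooth NMF objective follows; the symbols X, U, V, L, λ, and μ are generic notation introduced here and are not claimed to be the exact formulation of [16]:

    \min_{U \ge 0,\, V \ge 0} \; \|X - U V^{\top}\|_F^2 \;+\; \lambda\,\mathrm{Tr}(V^{\top} L V) \;+\; \mu\,\|V\|_p^p

Here X is the data matrix, U and V hold the quantitative latent factors (the "latent semantics"), L is the graph Laplacian built from the data's neighborhood graph, and the Lp term smooths the learned factors.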
“…An LF model is widely adopted to implement an RS [17,18]. So far, various sophisticated LF models have been proposed, including a bias-based one [18], a nonparametric one [31], a non-negativity-constrained one [19], a probabilistic one [30], a dual-regularization-based one [33], a posterior-neighborhood-regularized one [42], a randomized one [43], a graph regularized one [44], a neighborhood-and-location integrated one [32], a confidence-driven one [34], and a data characteristic-aware one [35]. Although they differ in objective functions or learning algorithms, they all adopt an L2 norm-oriented loss that is highly sensitive to outliers [25,27,28].…”
Section: B Related Work (mentioning)
confidence: 99%
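As a concrete reference for the L2 norm-oriented loss mentioned above, a minimal sketch of a generic latent factor objective over the observed entry set Λ; the symbols r_{u,i}, p_u, q_i, and λ are generic notation introduced here, not the formulation of any single cited model:

    \min_{P, Q} \; \sum_{(u,i) \in \Lambda} \bigl( r_{u,i} - \mathbf{p}_u^{\top} \mathbf{q}_i \bigr)^2 \;+\; \lambda \bigl( \|\mathbf{p}_u\|_2^2 + \|\mathbf{q}_i\|_2^2 \bigr)

Because every residual is squared, a single outlying rating contributes quadratically to the loss, which is the outlier sensitivity noted in [25,27,28].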
“…Tensors can be expressed in either fiber or slice form [24,25]. Any two dimensions of the third-order tensor are kept unchanged.…”
Section: Tensor Theory (mentioning)
confidence: 99%
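To make the fiber/slice terminology concrete, a minimal NumPy sketch; the array shape and variable names are illustrative assumptions, not taken from [24,25]. A slice of a third-order tensor fixes one index and keeps the other two dimensions unchanged, while a fiber fixes two indices and keeps one dimension.

    import numpy as np

    # A third-order tensor of shape (2, 3, 4) filled with example values.
    X = np.arange(24).reshape(2, 3, 4)

    # Slices: fix one index, keep two dimensions unchanged.
    horizontal_slice = X[0, :, :]   # shape (3, 4)
    lateral_slice    = X[:, 1, :]   # shape (2, 4)
    frontal_slice    = X[:, :, 2]   # shape (2, 3)

    # Fibers: fix two indices, keep one dimension unchanged.
    column_fiber = X[:, 1, 2]       # mode-1 fiber, shape (2,)
    row_fiber    = X[0, :, 2]       # mode-2 fiber, shape (3,)
    tube_fiber   = X[0, 1, :]       # mode-3 fiber, shape (4,)

    print(frontal_slice.shape, tube_fiber.shape)   # (2, 3) (4,)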