2009 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2009.5178973
An incremental affinity propagation algorithm and its applications for text clustering

Year published: 2016–2024

Publication Types

Select...
4
1

Relationship

0
5

Authors

Journals

Cited by 11 publications (15 citation statements)
References 17 publications
“…Although there are many types of clustering algorithms, we focus on exemplar-based clustering algorithms in this paper due to their wide applications and outstanding advantages [10, 21–24]. By exemplars we mean cluster centers that are chosen from the actual data.…”
Section: Introduction (mentioning, confidence: 99%)
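The notion quoted above, that exemplars are cluster centers drawn from the actual data rather than computed averages, can be illustrated with a small sketch. The code below is my own toy k-medoids-style illustration, not the affinity propagation method of the cited paper; the data points and the farthest-first initialization are made up for the example.

```python
import math

def k_medoids(points, k, iters=10):
    """Toy exemplar-based clustering: exemplars are always real data points."""
    # Farthest-first initialization: start from the first point, then
    # repeatedly add the point farthest from the current exemplars.
    medoids = [points[0]]
    while len(medoids) < k:
        medoids.append(max(points,
                           key=lambda p: min(math.dist(p, m) for m in medoids)))
    for _ in range(iters):
        # Assign each point to its nearest exemplar.
        clusters = {m: [] for m in medoids}
        for p in points:
            clusters[min(medoids, key=lambda m: math.dist(p, m))].append(p)
        # Re-pick each exemplar as the member minimizing total in-cluster distance.
        new = [min(ms, key=lambda c: sum(math.dist(c, q) for q in ms))
               for ms in clusters.values()]
        if new == medoids:
            break
        medoids = new
    return medoids

data = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]
print(k_medoids(data, k=2))  # both exemplars are tuples taken from `data`
```

Unlike a k-means centroid, which is an average that may not coincide with any observation, every exemplar returned here is one of the original data points.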
“…Exemplar-based clustering algorithms have achieved numerous successes in recent years [17–24]. However, these algorithms need to store a similarity matrix of the entire dataset, which limits the ability of exemplar-based algorithms to process large data.…”
Section: Introduction (mentioning, confidence: 99%)
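The scalability concern raised in this statement comes down to simple arithmetic: a full similarity matrix over n items has n × n entries, so its memory footprint grows quadratically. A back-of-the-envelope sketch (my own illustration, assuming 8-byte float64 entries; not from the cited papers):

```python
def full_similarity_bytes(n, bytes_per_entry=8):
    """Memory needed to store all n*n pairwise similarities as float64."""
    return n * n * bytes_per_entry

for n in (1_000, 100_000):
    print(n, full_similarity_bytes(n) / 1e9, "GB")  # 0.008 GB vs. 80.0 GB
```

Going from a thousand to a hundred thousand items multiplies storage by 10,000, which is why incremental variants that avoid materializing the whole matrix are attractive.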
“…To test only the effect of the feature-space learning procedure, the result of FSAP is compared with that of IAP in Shi et al. 2009. The F-measure and entropy comparison results are shown in Tables 4 and 5.…”
Section: FSL vs. incremental semi-supervised algorithm (mentioning, confidence: 99%)
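The F-measure used in such clustering comparisons is the harmonic mean of precision and recall. A minimal sketch of the formula (the precision/recall values below are made-up illustrations, not results from the cited comparison):

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(0.8, 0.6), 4))  # 0.6857
```

Because the harmonic mean is dominated by the smaller of the two values, a clustering cannot score well by trading recall for precision or vice versa.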
“…(Jain et al 1999;Huang et al 2018;Wu et al 2018;Guan et al 2011) Semi-supervised clustering Pros: Use the information both from labeled data and unlabeled data. Cons: Unlabeled data should be explored carefully and it performed without feature learning procedure Li and Zhou (2015); Tang et al (2007);Wang et al (2012); Guan et al (2011);Shi et al (2009);Xue et al (2011) effectiveness of the proposed algorithms. As a result, topical feature space for each cluster can be found accompanying the end of clustering.…”
Section: Introductionmentioning
confidence: 99%