Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation 2007
DOI: 10.1145/1276958.1277326
Towards clustering with XCS

Abstract: This paper presents a novel approach to clustering using an accuracy-based Learning Classifier System. Our approach achieves this by exploiting the generalization mechanisms inherent to such systems. The purpose of the work is to develop an approach to learning rules which accurately describe clusters without prior assumptions as to their number within a given dataset. Favourable comparisons to the commonly used k-means algorithm are demonstrated on a number of synthetic datasets.
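The abstract names k-means as the baseline the rule-based approach is compared against. As a point of reference, the sketch below implements a minimal k-means on a toy 2-D synthetic dataset; note that, unlike the XCS-based approach, the number of clusters k must be fixed in advance. All data and parameter values here are illustrative and not taken from the paper.

```python
# Minimal k-means baseline (the comparison algorithm named in the
# abstract) on a toy 2-D synthetic dataset. k must be chosen up front,
# which is exactly the assumption the XCS-based approach avoids.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centre (squared Euclidean).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each centre as the mean of its assigned points.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

# Two well-separated synthetic blobs around (0, 0) and (5, 5).
rng = random.Random(1)
data = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(50)] \
     + [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(50)]
centers, clusters = kmeans(data, k=2)
print(centers)  # one centre near each blob
```

On well-separated data like this, k-means recovers the blob means; the paper's point is that its rule-based learner can do so without being told k.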

Cited by 24 publications (9 citation statements)
References 20 publications (21 reference statements)
“…Hence the set pressure encourages the evolution of rules which cover many data points, and the fitness pressure acts as a limit upon the separation of such data points, i.e., the error. Tammee et al [2006] began by using a slightly simplified version of XCS (YCS) [Bull, 2005] as the underlying LCS, but found that XCS's relative-accuracy fitness function was more effective than a function directly inversely proportional to error [Tammee et al, 2007]. When the error threshold parameter is set high, e.g., 0.1, contiguous clusters in less-separated data are covered by the same rules. They therefore developed an adaptive threshold parameter scheme which uses the average error of the current [M]:…”
Section: XCSC: Unsupervised Learning
Mentioning confidence: 99%
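The adaptive scheme described in the excerpt above derives the error threshold from the average error of the current match set [M] rather than fixing it in advance. The sketch below illustrates the idea; the function name, scaling factor, and floor value are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of an adaptive error threshold in the spirit of the
# excerpt above: instead of a fixed value (e.g. 0.1), the threshold is
# derived from the mean error of the rules in the current match set [M].
# The scale and floor parameters are illustrative assumptions.

def adaptive_threshold(match_set_errors, scale=1.0, floor=1e-3):
    """Return an error threshold based on the mean error of [M]."""
    if not match_set_errors:
        return floor
    mean_err = sum(match_set_errors) / len(match_set_errors)
    return max(scale * mean_err, floor)

# Rules with error at or below the adaptive threshold are treated as
# accurate; well-separated clusters yield low average errors and hence
# a tight threshold, avoiding one rule spanning adjacent clusters.
errors = [0.02, 0.03, 0.05]   # errors of the rules matching a point
theta = adaptive_threshold(errors)
accurate = [e <= theta for e in errors]
print(theta, accurate)
```

The design point is that the threshold tracks the data: in less-separated regions the average match-set error rises, loosening the threshold instead of letting a single fixed value misbehave across datasets.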
“…Generally, they can be characterised by handling sequential decision tasks with a rule-based representation and by the use of evolutionary computation methods (for example, [167,95]), although some variants also perform supervised learning (for example, [161]) or unsupervised learning (for example, [211]), or do not rely on evolutionary computation (for example, [89]).…”
Section: Learning Classifier Systems
Mentioning confidence: 99%
“…CRA2 yielded similar performance to CRA but ran much faster when tested on the same dataset. Other similarly themed LCS rule compaction strategies include an approach for continuous-valued problem spaces (Wyatt et al, 2004), an approach for online rule compaction (Gao et al, 2006), an approach for clustering in XCS (Tamee et al, 2007), an approach which adds entropy calculation (Kharbat et al, 2008), and an approach designed for fuzzy rule representations (Shoeleh et al, 2011).…”
Section: Introduction
Mentioning confidence: 99%