2000
DOI: 10.1007/3-540-44527-7_13

Genetic Algorithms to Optimise CBR Retrieval

Cited by 39 publications (27 citation statements)
References 11 publications
“…The likelihood of seeing stronger similarities is much higher if the number of cases is substantially higher than 29. Also, the study concentrated on optimizing attribute weights, while it should also be possible to improve attribute selection by using GA (Jarmulak and Craw 1999; Jarmulak et al. 2000), vary the number and combination of attributes, increase the number of cases considered, adjust the number and size of training and test sets, and try nondefault values for the parameters used in the various weight generation approaches. Considering the large number of alternative combinations of these variations, the study presented in this paper is limited to the basics and the experimentation with these variations is left for future research.…”
Section: Discussion
confidence: 99%
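The weight optimisation this statement refers to can be pictured as a GA wrapped around a retrieval-accuracy fitness function. The sketch below is a minimal illustration under assumed conventions (cases as (attribute-vector, label) pairs, 1-nearest-neighbour retrieval, leave-one-out accuracy as fitness); the function names and parameter values are illustrative, not taken from the cited papers.

```python
# Minimal sketch of GA-driven attribute-weight optimisation for CBR retrieval.
# Assumptions (not from the cited papers): cases are (attribute-vector, label)
# pairs, retrieval is 1-nearest-neighbour with a weighted distance, and the
# GA fitness is leave-one-out retrieval accuracy on the case base.
import random

def weighted_distance(a, b, weights):
    """Weighted Euclidean-style distance between two attribute vectors."""
    return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5

def loo_accuracy(cases, weights):
    """Leave-one-out 1-NN accuracy, used as the GA fitness."""
    hits = 0
    for i, (query, label) in enumerate(cases):
        rest = cases[:i] + cases[i + 1:]
        nearest = min(rest, key=lambda c: weighted_distance(query, c[0], weights))
        hits += (nearest[1] == label)
    return hits / len(cases)

def optimise_weights(cases, n_attrs, pop_size=30, generations=50,
                     crossover_rate=0.8, mutation_rate=0.1):
    """Simple generational GA over real-valued weight vectors in [0, 1]."""
    pop = [[random.random() for _ in range(n_attrs)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda w: loo_accuracy(cases, w), reverse=True)
        elite = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            if random.random() < crossover_rate:
                child = [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            child = [random.random() if random.random() < mutation_rate else g
                     for g in child]             # per-gene mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda w: loo_accuracy(cases, w))
```

Driving a weight to zero effectively removes that attribute, which is how the same machinery can also serve the attribute-selection variation mentioned in the statement above.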
“…Constructing CBR systems requires a significant knowledge engineering effort (Cunningham and Bonzano 1999). The knowledge acquisition effort can be minimized by determining the most representative case attributes, optimizing the case base organization and case retrieval, and refining the process of similarity assessment (Jarmulak and Craw 1999; Jarmulak et al. 2000). Similarity assessment involves a systematic comparison of the attributes of a test case with the attributes of all cases in the case base.…”
Section: Introduction
confidence: 99%
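The "systematic comparison of the attributes" described in this statement is commonly implemented as a weighted aggregate of per-attribute local similarities. The following sketch assumes that convention; the local measures (range-scaled for numeric values, exact match for nominal ones) and the tuple-based case representation are illustrative choices, not the cited systems' actual design.

```python
# Sketch of attribute-by-attribute similarity assessment, assuming a weighted
# global-similarity scheme: each attribute contributes a local similarity in
# [0, 1], and the global similarity is the weight-normalised sum over attributes.

def local_similarity(x, y, attr_range=None):
    """Local similarity for one attribute: range-scaled for numeric values,
    exact match for nominal ones."""
    if isinstance(x, (int, float)) and isinstance(y, (int, float)) and attr_range:
        return max(0.0, 1.0 - abs(x - y) / attr_range)
    return 1.0 if x == y else 0.0

def global_similarity(test_case, stored_case, weights, ranges):
    """Weighted aggregate of local similarities over all attributes."""
    total = sum(w * local_similarity(a, b, r)
                for w, a, b, r in zip(weights, test_case, stored_case, ranges))
    return total / sum(weights)

def retrieve(test_case, case_base, weights, ranges, k=1):
    """Compare the test case against every stored (attributes, solution) pair
    and return the k most similar ones."""
    ranked = sorted(case_base,
                    key=lambda c: global_similarity(test_case, c[0], weights, ranges),
                    reverse=True)
    return ranked[:k]
```

Here retrieve() performs the comparison against every case in the case base, which is exactly the exhaustive matching step that weight optimisation is meant to sharpen.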
“…The application of GA to case-based reasoning has been studied in the retrieval phase for feature and instance learning, as in [54,55,56]. We understand that in the future we need to complement our approach with these studies and consider the synergies between feature learning and classifier weight learning, together with local similarity measures, such as those studied in [57].…”
Section: Comparison With Similar Tools
confidence: 99%
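For joint feature and instance learning in the retrieval phase, one common GA formulation concatenates a binary attribute mask with a binary case mask in a single chromosome. The sketch below assumes that encoding and a leave-one-out wrapper fitness; it is not the specific design of [54], [55], or [56].

```python
# Sketch of a chromosome for joint feature and instance learning in CBR
# retrieval. The encoding (one bit per attribute plus one bit per stored case)
# and the wrapper fitness are illustrative assumptions.
import random

def random_chromosome(n_features, n_cases):
    """Bit string: the first n_features bits select attributes, the remaining
    n_cases bits select which cases remain in the case base."""
    return [random.randint(0, 1) for _ in range(n_features + n_cases)]

def decode(chrom, n_features):
    return chrom[:n_features], chrom[n_features:]

def fitness(chrom, cases, n_features):
    """Leave-one-out 1-NN accuracy using only the selected features and the
    selected subset of cases."""
    fmask, imask = decode(chrom, n_features)
    kept = [c for c, keep in zip(cases, imask) if keep]
    if not any(fmask) or len(kept) < 2:
        return 0.0
    def dist(a, b):
        return sum((x - y) ** 2 for m, x, y in zip(fmask, a, b) if m)
    hits = 0
    for i, (query, label) in enumerate(kept):
        rest = kept[:i] + kept[i + 1:]
        nearest = min(rest, key=lambda c: dist(query, c[0]))
        hits += (nearest[1] == label)
    return hits / len(kept)
```

A standard GA loop, like the one in the weight-optimisation sketch earlier, can then evolve these chromosomes over generations.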
“…CBR's benefits in solving construction management-related problems over other prediction techniques have been demonstrated by Arditi and Tokdemir (1999a,b) and Yau and Yang (1998). Recent research studies about the effectiveness of integrated machine learning approaches indicate that CBR systems could achieve better results when enhanced by other techniques (Cardie 1993; Jarmulak and Craw 1999; Jarmulak et al. 2000; Ling et al. 1997; Shin and Han 2002). CBR directly interprets past experiences. In other words, CBR systems predict the outcome of new situations by retrieving previously stored outcomes of similar situations from a case base.…”
Section: Introduction
confidence: 99%