2016
DOI: 10.1111/cogs.12395
Memory‐Based Simple Heuristics as Attribute Substitution: Competitive Tests of Binary Choice Inference Models

Abstract: Some researchers on binary choice inference have argued that people make inferences based on simple heuristics, such as recognition, fluency, or familiarity. Others have argued that people make inferences based on available knowledge. To examine the boundary between heuristic and knowledge usage, we examine binary choice inference processes in terms of attribute substitution in heuristic use (Kahneman & Frederick, 2005). In this framework, it is predicted that people will rely on heuristic or knowledge-based i…

Cited by 15 publications (46 citation statements).
References 81 publications (92 reference statements).
“…As for BIC, p denotes the number of free parameters (only one parameter, ε, per strategy; therefore, p = 1), and n denotes the number of choice pairs (n = 60 for each difficulty level). Following previous studies (e.g., Honda et al.; Raftery), we treated w_M ≥ 0.99 as “very strong” evidence, 0.95 ≤ w_M < 0.99 as “strong” evidence, 0.75 ≤ w_M < 0.95 as “positive” evidence, and 0.50 ≤ w_M < 0.75 as “weak” evidence. If w_M < 0.50, or if the value of w_M was equal across two or more models, we concluded that no single model could explain that participant's inference patterns; such cases were labeled “Not classified.”…”
Section: Results
confidence: 99%
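The model-classification scheme in the statement above can be sketched in a few lines. This is a minimal illustration, not the authors' code: it assumes the standard BIC formula (−2 ln L + p ln n) and the usual conversion of BIC differences to model weights w_M via exp(−ΔBIC/2), normalized across models, then applies the evidence thresholds quoted in the text. The log-likelihood values in the example are invented for illustration.

```python
import math

def bic(log_likelihood, p, n):
    """Bayesian Information Criterion: -2 ln L + p ln n."""
    return -2.0 * log_likelihood + p * math.log(n)

def bic_weights(bics):
    """Convert BIC scores to model weights w_M: exp(-dBIC/2), normalized."""
    best = min(bics)
    raw = [math.exp(-0.5 * (b - best)) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

def classify(weights):
    """Map the winning model's weight onto the evidence labels from the text."""
    w = max(weights)
    if weights.count(w) > 1 or w < 0.50:  # tie or weak winner
        return "Not classified"
    if w >= 0.99:
        return "very strong"
    if w >= 0.95:
        return "strong"
    if w >= 0.75:
        return "positive"
    return "weak"

# Two one-parameter strategies (p = 1) fit to n = 60 choice pairs,
# with hypothetical log-likelihoods of -30 and -36.
bics = [bic(-30.0, p=1, n=60), bic(-36.0, p=1, n=60)]
ws = bic_weights(bics)
label = classify(ws)
```

With a BIC difference of 12, the winning model's weight exceeds 0.99, so it would be classified as "very strong" evidence under the quoted scheme.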
“…Herbert Simon proposed the metaphor that effective behaviors are generated when context (environmental structures) and cognition (human computational capabilities) fit together like the blades of a pair of scissors—one blade being environmental structures and the other being human cognitive capacity (the “Simon's scissors” metaphor) (Simon; see also Kozyreva & Hertwig; Lockton; Todd & Brighton). In fact, many previous works using binary choice tasks have demonstrated that the effectiveness of people's subjective memory experiences as inference cues is clearly explained in terms of real‐world environmental structures (e.g., Goldstein & Gigerenzer; Hertwig, Herzog, Schooler & Reimer; Herzog & Hertwig; Honda, Abe, Matsuka & Yamagishi; Honda, Matsuka & Ueda; Schooler & Hertwig; Xu, González‐Vallejo, Weinhardt, Chimeli & Karadogan; for a review, see Gigerenzer & Goldstein). For example, consider a binary choice question: “Which city has a larger population, Tokyo or Chiba?” People can make inferences based on their specific knowledge, such as whether there are famous soccer teams in the cities.…”
Section: Introduction
confidence: 99%
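The Tokyo-vs-Chiba example above contrasts two inference routes: a memory-based heuristic (pick the recognized option) and knowledge-based inference (compare what you know about each option). A minimal sketch of that contrast, with entirely hypothetical recognition and knowledge data:

```python
import random

def choose(a, b, recognized, knowledge=None):
    """Binary choice via the recognition heuristic: if exactly one option
    is recognized, pick it; otherwise fall back to knowledge (or guess)."""
    ra, rb = a in recognized, b in recognized
    if ra and not rb:
        return a
    if rb and not ra:
        return b
    # Both or neither recognized: use knowledge-based inference if available.
    if knowledge:
        return max((a, b), key=lambda city: knowledge.get(city, 0))
    return random.choice((a, b))

# Hypothetical data for illustration only.
recognized = {"Tokyo", "Osaka"}
knowledge = {"Tokyo": 3, "Osaka": 2}  # e.g., number of known soccer teams

choice1 = choose("Tokyo", "Chiba", recognized)             # only Tokyo recognized
choice2 = choose("Tokyo", "Osaka", recognized, knowledge)  # both recognized: knowledge decides
```

The point of the "scissors" metaphor is that the heuristic in the first call only works because recognition correlates with population in the real environment; the second call shows the knowledge-based route that the paper pits against it.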
“…Stocco () compared two competing action selection models, both using the ACT‐R cognitive architecture, and used a thoroughly reported grid search to fit the parameters. Honda, Matsuka, and Ueda () modeled binary choice in terms of attribute substitution in heuristic use, and fitted one parameter with grid search. There are also examples of heuristic extensions to grid search for improving scalability.…”
Section: Probabilistic Inference for Computational Cognitive Models
confidence: 99%
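The one-parameter grid-search fit mentioned above can be sketched concretely. This is an illustrative reconstruction, not the paper's actual code: it assumes an ε-error model in which the strategy's predicted option is chosen with probability 1 − ε, and fits ε by exhaustive search over a grid, minimizing the negative log-likelihood. The trial counts are invented.

```python
import math

def neg_log_likelihood(eps, matches, n):
    """epsilon-error model: the predicted option is chosen with probability
    1 - eps. `matches` = trials where the choice matched the prediction."""
    if eps <= 0.0 or eps >= 1.0:
        return float("inf")
    return -(matches * math.log(1.0 - eps) + (n - matches) * math.log(eps))

def fit_epsilon(matches, n, grid_steps=999):
    """Grid search over eps in (0, 0.5]; return (best eps, its NLL)."""
    best_eps, best_nll = None, float("inf")
    for i in range(1, grid_steps + 1):
        eps = 0.5 * i / grid_steps
        nll = neg_log_likelihood(eps, matches, n)
        if nll < best_nll:
            best_eps, best_nll = eps, nll
    return best_eps, best_nll

# A strategy whose predictions matched 54 of n = 60 choice pairs.
eps_hat, nll = fit_epsilon(matches=54, n=60)
```

Under this error model the maximum-likelihood estimate is ε = (n − matches)/n, so the grid search should land near 0.1 here; a finer grid trades runtime for precision, which is the scalability concern the passage alludes to.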