2018
DOI: 10.1111/cogs.12613

Generalized Information Theory Meets Human Cognition: Introducing a Unified Framework to Model Uncertainty and Information Search

Abstract: Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty …
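To illustrate the framework the abstract describes (test selection as expected uncertainty reduction), here is a minimal Python sketch of the expected reduction in Shannon entropy for one candidate diagnostic test. The disease prior and test likelihoods are invented for the example and are not taken from the paper.

    import math

    def shannon_entropy(probs):
        # Shannon entropy in bits of a discrete distribution.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Hypothetical prior over three candidate diseases.
    prior = [0.5, 0.3, 0.2]

    # Hypothetical likelihoods P(test positive | disease) for one candidate test.
    likelihood_pos = [0.9, 0.2, 0.1]

    # Marginal probability of each test outcome.
    p_pos = sum(p * l for p, l in zip(prior, likelihood_pos))
    p_neg = 1 - p_pos

    # Posterior over diseases for each outcome (Bayes' rule).
    post_pos = [p * l / p_pos for p, l in zip(prior, likelihood_pos)]
    post_neg = [p * (1 - l) / p_neg for p, l in zip(prior, likelihood_pos)]

    # Expected posterior entropy, weighted by outcome probabilities.
    expected_posterior_entropy = (p_pos * shannon_entropy(post_pos)
                                  + p_neg * shannon_entropy(post_neg))

    # Expected uncertainty reduction: the value of the test under Shannon entropy.
    information_gain = shannon_entropy(prior) - expected_posterior_entropy
    print(f"Expected reduction in Shannon entropy: {information_gain:.3f} bits")

Under this criterion, the test with the largest expected entropy reduction would be selected; the paper's unified framework generalizes the entropy measure used in this computation.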

Cited by 73 publications (89 citation statements).
References 132 publications (201 reference statements).
“…Our modeling results paralleled earlier findings from other tasks (Crupi et al., 2018) suggesting that it is easier to identify the value of the degree parameter than of the order parameter in the Sharma-Mittal space. Interestingly, to identify the order parameter a different type of question could be asked, translating higher entropy into difficulty of game play in the sense of the number of queries required to guess the secret code (for the underlying mathematical result see Crupi et al., 2018). Participants could be directly asked which of two code jars would be harder to play Mastermind with (Figure 6).…”
Section: Discussion (supporting)
confidence: 87%
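For context on the order and degree parameters mentioned in these statements, the Sharma-Mittal entropy of a distribution P = (p_1, …, p_n) is standardly written as below; labeling r as the order and t as the degree follows common usage, and the exact normalization or logarithm base used in Crupi et al. (2018) may differ.

\[
H_{r,t}(P) \;=\; \frac{1}{1-t}\left[\left(\sum_{i=1}^{n} p_i^{\,r}\right)^{\frac{1-t}{1-r}} - 1\right], \qquad r > 0,\; r \neq 1,\; t \neq 1.
\]

Rényi entropies are recovered in the limit t → 1, Tsallis entropies for t = r, and Shannon entropy when both r and t tend to 1.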
“…This analysis therefore corroborated our previous finding that the lower degree entropies better matched participants' queries. In relation to previous work modeling behavior with the Sharma-Mittal framework, Entropy Mastermind appears to be more similar to experience-based than to description-based probabilistic classification tasks (see Crupi et al., 2018, Fig. 7).…”
Section: Computational Modeling (supporting)
confidence: 60%