2020
DOI: 10.1111/tops.12505
Structuring Memory Through Inference‐Based Event Segmentation

Abstract: Although the stream of information we encounter is continuous, our experiences tend to be discretized into meaningful clusters, altering how we represent our past. Event segmentation theory proposes that clustering ongoing experience in this way is adaptive in that it promotes efficient online processing as well as later reconstruction of relevant information. A growing literature supports this theory by demonstrating its important behavioral consequences. Yet the exact mechanisms of segmentation remain elusive…

Cited by 85 publications (76 citation statements). References 119 publications (158 reference statements).
“…This exemplifies a fundamental principle of Bayesian inference known as the Bayesian Ockham's razor (MacKay, 2003, Chapter 28): Although we try to explain incoming data as simply as possible, if we encounter new information that is very unlikely given these prior assumptions, we will always infer (retrieve or learn) the hypothesis that assigns the highest likelihood to the data (see also Shin & DuBrow, 2021, this issue for a discussion in relation to classical rational models of categorization, cf. Anderson, 1991 and Sanborn, Griffiths, & Navarro, 2010).…”
Section: Engaging Hierarchical Generative Models To Comprehend Sequen…
confidence: 99%
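
The model-comparison logic in this statement can be made concrete with a small numerical sketch. The Python snippet below is illustrative only: the two Gaussian likelihoods and the prior weights are hypothetical choices, not taken from MacKay (2003) or any of the cited papers.

```python
# A toy numerical illustration of the Bayesian Ockham's razor: a simple
# model concentrates its probability mass, a complex model spreads it out.
# All numbers here are hypothetical choices for illustration.
from scipy.stats import norm

# Likelihood of an observation x under each candidate model.
def likelihood_simple(x):
    return norm.pdf(x, loc=0.0, scale=1.0)    # narrow: strong predictions

def likelihood_complex(x):
    return norm.pdf(x, loc=0.0, scale=10.0)   # broad: weak predictions

prior_simple, prior_complex = 0.9, 0.1        # prefer simplicity a priori

def posterior_simple(x):
    """Posterior probability of the simple model after observing x."""
    num = prior_simple * likelihood_simple(x)
    return num / (num + prior_complex * likelihood_complex(x))

print(posterior_simple(0.5))  # ~0.99: expected data, simple model holds
print(posterior_simple(8.0))  # ~1e-12: surprising data, belief flips
```

The broad-likelihood model pays an automatic Ockham penalty for spreading its probability mass, so it wins only when the data are nearly impossible under the simple model — the "always infer the hypothesis with the highest likelihood" behavior the quote describes.
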
“…In all these cases, we need to be able to infer what kinds of prior experiences are most relevant for the present situation (see Kleinschmidt & Jaeger, 2015, for a detailed discussion of relevant issues). As discussed by Shin and DuBrow (2021, this issue), the Dirichlet process infinite mixture models described above provide insights into how we might be able to learn and extract abstract features that are common to different event clusters, allowing for this type of generalization. These types of models can also explain specific patterns of memory and decision biases (see Franklin et al, 2020).…”
Section: From Event Comprehension To Event Production and Learning
confidence: 99%
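
To illustrate what a Dirichlet process mixture buys here, the sketch below makes a greedy, Chinese-restaurant-process-style assignment of observations to event clusters. Its specifics (one-dimensional Gaussian events, the concentration parameter ALPHA, greedy MAP assignment instead of full posterior inference) are simplifying assumptions for illustration, not the model of Shin and DuBrow (2021) or Franklin et al. (2020).

```python
# A greedy, CRP-style sketch of event clustering: each observation joins
# the existing event that best explains it, or opens a new event when no
# existing one assigns the data enough likelihood. Hypothetical settings.
import numpy as np
from scipy.stats import norm

ALPHA = 1.0    # concentration: willingness to posit a new event
SIGMA = 1.0    # assumed within-event observation noise

def assign_events(observations):
    clusters, labels = [], []          # each cluster is one inferred event
    for x in observations:
        n = sum(len(c) for c in clusters)
        # CRP prior times Gaussian likelihood for each existing event ...
        scores = [len(c) / (n + ALPHA) * norm.pdf(x, np.mean(c), SIGMA)
                  for c in clusters]
        # ... and for a brand-new event, with a broad predictive density.
        scores.append(ALPHA / (n + ALPHA) * norm.pdf(x, 0.0, 10.0))
        k = int(np.argmax(scores))     # greedy MAP assignment
        if k == len(clusters):
            clusters.append([])
        clusters[k].append(x)
        labels.append(k)
    return labels

# An abrupt shift in the stream is picked up as a new event:
print(assign_events([0.1, -0.2, 0.3, 9.8, 10.1, 9.9]))  # [0, 0, 0, 1, 1, 1]
```

Because the number of clusters is unbounded, the same machinery that explains familiar situations can open a new event for genuinely novel input — the kind of generalization across prior experiences the statement points to.
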
“…As we alluded to earlier, evidence for information optimization in event perception points to a necessary revision to EST. We note that Shin and Dubrow (2021) also suggest important, but different, revisions to EST.…”
Section: Learning: Discovering Events Within Novelty
confidence: 70%
“…On this view, event perception is a form of data compression, which can have great psychological value if—as recent findings clearly confirm—the chunks derived from this compression process are amenable to other cognitive operations, such as memory encoding, memory retrieval, hierarchical organization, linguistic labeling, and causal inference (e.g., Brunec, Moscovitch, & Barense, 2018; Buchsbaum et al, 2015; Chekaf, Cowan, & Mathy, 2016; Christiansen, 2019; Christiansen & Chater, 2016; Miller, 1956; Shin & Dubrow, 2021).…”
Section: Learning: Discovering Events Within Novelty
confidence: 99%
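
As a toy illustration of the compression view, the sketch below segments a symbol stream at surprising (low-probability) transitions and stores the resulting chunks with their counts. The stream, the threshold, and the boundary rule are all hypothetical, in the spirit of statistical chunking rather than any specific cited model.

```python
# Toy chunker: estimate transition probabilities from a symbol stream, put
# event boundaries at surprising transitions, and store chunks with counts.
from collections import Counter

def transition_probs(stream):
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

def chunk(stream, threshold=0.9):
    """Split the stream wherever the observed transition is improbable."""
    probs = transition_probs(stream)
    chunks, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if probs[(a, b)] < threshold:      # surprising -> event boundary
            chunks.append(tuple(current))
            current = []
        current.append(b)
    chunks.append(tuple(current))
    return chunks

stream = list("abcdcdababcd")              # hypothetical "ab"/"cd" chunks
chunks = chunk(stream)
print(chunks)           # [('a','b'), ('c','d'), ('c','d'), ('a','b'), ...]
print(Counter(chunks))  # the 12-symbol stream compresses to 2 chunk types
```

The payoff is exactly the one named in the quote: once the stream is reduced to a small inventory of chunks, those chunks become units that downstream operations (encoding, retrieval, labeling) can act on.
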
“…It may be that inferring an event begins with selecting a discrete model or policy, which minimizes, or is expected to minimize, prediction error, and which can be (passively or actively) tested against sensory input (for discussion, see Baldwin and Kosie, 2021; Shin and DuBrow, 2021). The “coherence” of the interrelated causes of an event would then relate to the actual or expected (local) prediction error minimum achieved by inferring the event—events are then causal models that are particularly good at explaining away actual or expected sensory input.…”
confidence: 99%
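
A rough sketch of this model-selection view: maintain a small set of candidate event models, score each by its prediction error on the incoming transition, and switch only when the active model is clearly outperformed. The two linear "dynamics" models and the switching margin below are invented for illustration and are not drawn from the cited work.

```python
# Sketch: pick the discrete event model that minimizes prediction error,
# and switch only when the active model is clearly outperformed.

# Candidate event models: each predicts the next observation from the last.
models = {
    "slow_event": lambda x: x + 0.1,
    "fast_event": lambda x: x + 1.0,
}

def infer_events(observations, margin=0.2):
    active, labels = None, []
    for prev, nxt in zip(observations, observations[1:]):
        # Prediction error of every model on this transition.
        errors = {name: abs(f(prev) - nxt) for name, f in models.items()}
        best = min(errors, key=errors.get)
        # Retain the active model unless its error clearly exceeds the best:
        # a crude stand-in for testing the model against sensory input.
        if active is None or errors[active] > errors[best] + margin:
            active = best              # event boundary: new model adopted
        labels.append(active)
    return labels

obs = [0.0, 0.1, 0.2, 0.3, 1.3, 2.3, 3.3]   # dynamics shift mid-stream
print(infer_events(obs))   # 'slow_event' x3, then 'fast_event' x3
```

The point where the active model's error spikes and a rival takes over corresponds to the event boundary: the new "event" is simply the causal model that now explains away the input best.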