2021
DOI: 10.31234/osf.io/dftkv
Preprint

A Post-Encoding Pre-Production Reinstatement (PEPPR) Model of Dual-List Free Recall

Abstract: Recent events are easy to recall, but they also interfere with recall of more distant, non-recent events. Many computational models recall non-recent memories by using the context associated with those events as a cue. But some models do little to explain how people initially activate non-recent contexts in the service of accurate recall. We addressed this limitation by evaluating two candidate mechanisms within the Context-Maintenance and Retrieval model. The first is a Backward-Walk mechanism that iteratively…
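To make the context-cueing idea concrete, the Python sketch below implements the standard temporal-context update used in CMR-style models, c_i = rho * c_{i-1} + beta * c_in (with rho chosen so the context vector keeps unit length), together with a simple partial-reinstatement step that blends the current context with a stored start-of-list context. The function names, the reinstatement weight, and the toy item inputs are illustrative assumptions, not the PEPPR model's actual parameterization.

import numpy as np

def drift_context(c_prev, c_in, beta):
    # Standard TCM/CMR context update: c_i = rho * c_prev + beta * c_in,
    # with rho chosen so the updated context keeps unit length.
    c_in = c_in / np.linalg.norm(c_in)
    dot = float(c_prev @ c_in)
    rho = np.sqrt(1.0 + beta**2 * (dot**2 - 1.0)) - beta * dot
    c_new = rho * c_prev + beta * c_in
    return c_new / np.linalg.norm(c_new)

def reinstate(c_current, c_stored, weight):
    # Illustrative partial reinstatement: blend the current context with a
    # stored (here, start-of-list) context; the weight is an assumed value.
    c_new = (1.0 - weight) * c_current + weight * c_stored
    return c_new / np.linalg.norm(c_new)

rng = np.random.default_rng(0)
n_features = 8
context = np.eye(n_features)[0]          # arbitrary unit-length starting context
start_of_list = context.copy()           # stored start-of-list state
for _ in range(10):                      # ten study events drive contextual drift
    context = drift_context(context, rng.normal(size=n_features), beta=0.5)

cue = reinstate(context, start_of_list, weight=0.6)
print("similarity to start-of-list before:", round(float(context @ start_of_list), 3))
print("similarity to start-of-list after: ", round(float(cue @ start_of_list), 3))

Blending the stored start-of-list state back into the cue raises its similarity to early-list contexts, which is the sense in which reinstatement can re-target retrieval toward a non-recent list.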

Cited by 2 publications (4 citation statements)
References 88 publications
“…Lohnas et al (2015) solved this problem by assuming that subjects cue recall using the state of context at the end of the intervening list and then use context-similarity to filter out inappropriate recalls. Healey and Wahlheim (2023) proposed a more direct process of targeting the appropriate list, by assuming that subjects encode then partially reinstate a start-of-list context (similar to Lewandowsky & Murdock, 1989). Direct reinstatement of the target-list context then promotes recall of other target-list items.…”
Section: Inter-list Repetition (mentioning)
confidence: 99%
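A minimal sketch of the context-similarity filtering described in the statement above, assuming each recall candidate carries the context state bound to it at study and is rejected when that context is too dissimilar to the retrieval cue. The cosine-similarity measure and the threshold value are illustrative assumptions, not Lohnas et al.'s (2015) exact editing rule.

import numpy as np

def cosine(a, b):
    # Cosine similarity between two context vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def filter_recalls(candidates, cue_context, threshold=0.4):
    # Keep only candidates whose study-time context resembles the cue.
    # `candidates` is a list of (item, study_context) pairs; the threshold
    # is an assumed value used only for illustration.
    return [item for item, ctx in candidates
            if cosine(ctx, cue_context) >= threshold]

# Toy usage: an item studied in a context close to the cue survives the
# filter; an item from the intervening list, studied in a dissimilar
# context, is rejected.
cue = np.array([1.0, 0.0, 0.0])
candidates = [("target_item", np.array([0.9, 0.1, 0.0])),
              ("intervening_item", np.array([0.0, 0.2, 1.0]))]
print(filter_recalls(candidates, cue))   # ['target_item']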
“…Although Healey and Wahlheim (2023) assessed this mechanism only with a single set of list-lengths and a pause between lists, other models assume similar automatic target-list context reinstatement which varies with intervening list length and task between lists (Jang & Huber, 2008; Lehman & Malmberg, 2009). As a more direct investigation of target-list context reinstatement, Unsworth and colleagues tested the account that the target-list context can be isolated and reinstated, by examining recall performance and recall latencies.…”
Section: Inter-list Repetition (mentioning)
confidence: 99%
“…In general, the broader the range of time in which a correct answer is allowed, the more accurate TCMs can be. For instance, if TCMs have to produce information about when an item was studied (e.g., Healey & Kahana, 2016; Healey & Wahlheim, 2023; Howard et al., 2015; Lohnas et al., 2015), asking in which list an item was studied allows for a large margin of error with respect to the temporal context representations which could be retrieved yet still be within the correct list. By contrast, asking for the exact absolute time or position when a stimulus was presented, or reconstructing order, would be more challenging because a smaller subset of temporal context representations could provide the correct answer; in the simplest variant of TCM, only retrieving the single correct temporal representation for each studied item would yield an accurate result.…”
Section: Future Directions in Experiment Design (mentioning)
confidence: 99%
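The margin-of-error argument can be illustrated with a toy simulation (not any published model's procedure): a noisy retrieved context is matched against every stored study context; a list judgment is scored correct whenever the best match falls anywhere in the correct list, whereas an exact-position judgment requires the single correct index, so the coarser question tolerates far more retrieval noise.

import numpy as np

rng = np.random.default_rng(1)

# Toy study sequence: two lists of 10 items, each item bound to a slowly
# drifting context vector (the drift rule here is an assumption for
# illustration, not a fitted model).
n_per_list, dim = 10, 16
contexts = []
c = rng.normal(size=dim)
c /= np.linalg.norm(c)
for _ in range(2 * n_per_list):
    step = rng.normal(size=dim)
    c = 0.9 * c + 0.45 * step / np.linalg.norm(step)
    c /= np.linalg.norm(c)
    contexts.append(c.copy())
contexts = np.array(contexts)

def judge(true_index, noise_sd):
    # Retrieve the stored context with added noise, then score a coarse
    # list judgment and a fine exact-position judgment.
    retrieved = contexts[true_index] + rng.normal(scale=noise_sd, size=dim)
    best = int(np.argmax(contexts @ retrieved))      # nearest stored context
    list_correct = (best // n_per_list) == (true_index // n_per_list)
    position_correct = (best == true_index)
    return list_correct, position_correct

trials = [judge(true_index=5, noise_sd=0.5) for _ in range(200)]
print("list accuracy:    ", np.mean([t[0] for t in trials]))
print("position accuracy:", np.mean([t[1] for t in trials]))

Because neighboring study contexts are correlated, a noisy retrieval that misses the exact index still tends to land within the correct list, so list accuracy stays well above exact-position accuracy.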
“…TCMs were not designed to explain how a rich environment of potential content and context turns into representations of content and context. However, TCMs have had much success in adjudicating between different assumptions about context representations based on their impact on memory performance (e.g., Cohen & Kahana, 2022; Healey & Wahlheim, 2023; Horwath et al., 2023; Polyn et al., 2009; Talmi et al., 2019).…”
Section: Future Directions in Experiment Design (mentioning)
confidence: 99%