2021
DOI: 10.1101/2021.11.01.466796
Preprint

Surprise: a unified theory and experimental predictions

Abstract: Surprising events trigger measurable brain activity and influence human behavior by affecting learning, memory, and decision-making. Currently, however, there is no consensus on the definition of surprise. Here we identify 16 mathematical definitions of surprise in a unifying framework, show how these definitions relate to each other, and prove under what conditions they are indistinguishable. We classify these surprise measures into four main categories: (i) change-point detection surprise, (ii) information gain…
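Two of the definitions that recur in the citation statements below are Shannon (prediction) surprise and Bayesian (information-gain) surprise. As a hedged reference point, using standard forms from the broader literature rather than the preprint's own notation, for a belief π(θ) over model parameters and an observation y:

```latex
% Standard definitions (an assumption about notation; the preprint's may differ).
% Shannon surprise: negative log of the predictive probability of the observation.
S_{\mathrm{Shannon}}(y) \;=\; -\log p(y), \qquad
p(y) \;=\; \int \pi(\theta)\, p(y \mid \theta)\, d\theta .
% Bayesian surprise: belief change, measured as a KL divergence between the
% posterior and the prior (the direction of the divergence varies by convention).
S_{\mathrm{Bayesian}}(y) \;=\; D_{\mathrm{KL}}\!\big( \pi(\theta \mid y) \,\big\|\, \pi(\theta) \big).
```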


Cited by 7 publications (14 citation statements)
References 117 publications
“…3B3). These observations confirm the inefficiency of seeking surprise and the efficiency of seeking information gain in dealing with noise [25,29].…”
Section: Simulating Intrinsically Motivated Agents (supporting)
confidence: 73%
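To make the quoted claim concrete, here is a minimal Python sketch (an illustration under simple assumptions, not the simulation from the citing paper): an observer with a Beta-Bernoulli model watches a fair coin, i.e. pure noise. Shannon surprise stays near log 2 on every flip, so a surprise-seeking agent keeps being drawn to the noise, while information gain (the KL divergence between posterior and prior beliefs) shrinks toward zero once the noise statistics are learned.

```python
# Sketch only: Shannon surprise vs. information gain for pure noise.
# All model choices (Beta-Bernoulli observer, fair coin) are assumptions.
import numpy as np
from scipy.special import betaln, digamma

def beta_kl(a1, b1, a2, b2):
    """KL( Beta(a1,b1) || Beta(a2,b2) )."""
    return (betaln(a2, b2) - betaln(a1, b1)
            + (a1 - a2) * digamma(a1) + (b1 - b2) * digamma(b1)
            + (a2 - a1 + b2 - b1) * digamma(a1 + b1))

rng = np.random.default_rng(0)
a, b = 1.0, 1.0                        # Beta(1,1) prior over the heads probability
for t in range(1, 201):
    x = rng.integers(2)                # fair coin: irreducible noise
    p_heads = a / (a + b)
    shannon = -np.log(p_heads if x == 1 else 1.0 - p_heads)
    a_new, b_new = a + x, b + (1 - x)  # Bayesian update of the belief
    info_gain = beta_kl(a_new, b_new, a, b)
    a, b = a_new, b_new
    if t % 50 == 0:
        print(f"t={t:3d}  Shannon surprise={shannon:.3f}  information gain={info_gain:.5f}")
```

Running the sketch, the Shannon surprise hovers around 0.69 nats indefinitely, while the information gain per flip decays rapidly, which is the pattern the quoted statement describes.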
“…4 and Fig. 6A), we define the intrinsic reward function as the Shannon surprise [29], where p^(t)(s′|s, a) is defined in Eq. 6.…”
Section: Methods (mentioning)
confidence: 99%
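A hedged sketch of such an intrinsic reward: the Shannon surprise of an observed transition, r_int = -log p^(t)(s′|s, a). The citing paper defines p^(t)(s′|s, a) in its Eq. 6, which is not reproduced here; the count-based (Dirichlet-like) estimate below is an illustrative stand-in, and the class and variable names are assumptions.

```python
# Sketch only: intrinsic reward equal to the Shannon surprise of a transition,
# with a simple pseudo-count transition model standing in for the paper's Eq. 6.
import numpy as np
from collections import defaultdict

class ShannonSurpriseReward:
    def __init__(self, n_states, prior_count=1.0):
        # Pseudo-counts over next states for each (state, action) pair.
        self.counts = defaultdict(lambda: np.full(n_states, prior_count))

    def predictive(self, s, a):
        """Current estimate p^(t)(. | s, a) from pseudo-counts (assumption)."""
        c = self.counts[(s, a)]
        return c / c.sum()

    def intrinsic_reward(self, s, a, s_next):
        """Shannon surprise of the observed transition, then update the model."""
        r = -np.log(self.predictive(s, a)[s_next])
        self.counts[(s, a)][s_next] += 1.0
        return r

# Toy usage: repeated transitions become unsurprising, rare ones stay rewarding.
model = ShannonSurpriseReward(n_states=3)
for s_next in [0, 0, 0, 0, 2]:
    print(s_next, round(model.intrinsic_reward(s=0, a=0, s_next=s_next), 3))
```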
“…A sign of this can be observed in the fragmented panorama of different theories and models proposed in the literature. In recent years, theoretical neuroscientists have formulated new frameworks that attempt to provide more general explanations of aspects of intelligence and learning [41,42]. In this work we contribute to this generalization effort by providing a general framework capable of accounting for different learning approaches by modulating two parameters: the feedback error propagation R and the tolerance to precise spike timing τ.…”
Section: Discussion (mentioning)
confidence: 99%
“…Following earlier work on the N400 (Rabovsky et al, 2018), we implemented our semantic surprise measure as the Bayesian surprise, which indexes the extent to which a stimulus causes the learner to update its beliefs (Modirshanechi et al, 2021). If, for example, a learner has previously been presented with only land animals, seeing another land animal will not cause a large adjustment to the learner’s beliefs, resulting in a small BS.…”
Section: Methods (mentioning)
confidence: 99%
“…Seeing a tool will, however, prompt the learner to reconsider the probabilities of the respective semantic categories, and will thus cause a larger Bayesian surprise. The BS is defined as the Kullback-Leibler (KL) divergence between the model’s beliefs prior to, and after, the presentation of a stimulus (Modirshanechi et al, 2021). An example of how BS varies over a sequence of stimuli in the roving paradigm is displayed in the upper panel of Figure 1.…”
Section: Methods (mentioning)
confidence: 99%
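The following Python sketch illustrates the quoted definition under simple assumptions (it is not the cited study's implementation): beliefs over a few hypothetical semantic categories are held as a Dirichlet distribution, and the Bayesian surprise of each stimulus is the KL divergence between the updated and the previous belief. A run of "land animal" stimuli yields a decaying BS, and a subsequent "tool" produces a larger one, mirroring the example in the text.

```python
# Sketch only: Bayesian surprise as KL(posterior || prior) for Dirichlet beliefs
# over hypothetical semantic categories; names and sequence are illustrative.
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(a, b):
    """KL( Dir(a) || Dir(b) )."""
    a0, b0 = a.sum(), b.sum()
    return (gammaln(a0) - gammaln(a).sum() - gammaln(b0) + gammaln(b).sum()
            + ((a - b) * (digamma(a) - digamma(a0))).sum())

categories = ["land animal", "bird", "tool"]      # hypothetical categories
alpha = np.ones(len(categories))                  # flat prior belief
sequence = ["land animal"] * 6 + ["tool"]         # repeated category, then a switch

for stim in sequence:
    k = categories.index(stim)
    posterior = alpha.copy()
    posterior[k] += 1.0
    bs = dirichlet_kl(posterior, alpha)           # Bayesian surprise of this stimulus
    print(f"{stim:11s}  BS = {bs:.4f}")
    alpha = posterior                             # beliefs carry over to the next trial
```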