2018
DOI: 10.1162/cpsy_a_00022

Active Inference and Auditory Hallucinations

Abstract: Auditory verbal hallucinations (AVH) are often distressing symptoms of several neuropsychiatric conditions, including schizophrenia. Using a Markov decision process formulation of active inference, we develop a novel model of AVH as false (positive) inference. Active inference treats perception as a process of hypothesis testing, in which sensory data are used to disambiguate between alternative hypotheses about the world. Crucially, this depends upon a delicate balance between prior beliefs about unobserved (…

Cited by 57 publications (59 citation statements)
References 61 publications
“…Despite decades of phenomenological, neurobiological and computational research, it is still not clear precisely how AVH develop and what mechanism underwrites them. Some investigators have turned to computational modelling to better understand the information-processing deficits underlying AVH [2,3,4,5,6]. Models based on Bayesian inference, that is, inference via a combination of prior beliefs and sensory information [2,3,6], support the notion that the overweighting of prior beliefs (e.g. that a certain word or sentence will be heard), relative to sensory information, is necessary for hallucinations to emerge.…”
Section: Introduction
confidence: 99%
“…that a certain word or sentence will be heard), relative to sensory information, is necessary for hallucinations to emerge. We have shown that hallucinations emerge when an overweighting of prior beliefs is coupled with a decrease in the precision of, or confidence in, sensory information [2]. In other words, computational agents hallucinate when they believe they should be hearing something and are unable to use sensory information (e.g.…”
Section: Introduction
confidence: 99%
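The precision imbalance described in this citation statement can be illustrated with a toy Gaussian sketch. This is hypothetical illustrative code, not the paper's Markov decision process model: the posterior percept is the precision-weighted average of a prior expectation and a sensory observation, so inflating prior precision while deflating sensory precision pulls the percept toward the prior even when the senses report nothing.

```python
def posterior_mean(mu_prior, pi_prior, x_obs, pi_sens):
    """Precision-weighted fusion of a Gaussian prior and likelihood.

    pi_* are precisions (inverse variances); the posterior mean is the
    precision-weighted average of the prior mean and the observation.
    """
    return (pi_prior * mu_prior + pi_sens * x_obs) / (pi_prior + pi_sens)

# Prior belief: "a voice is present" (coded as 1); observation: silence (0).
balanced = posterior_mean(mu_prior=1.0, pi_prior=1.0, x_obs=0.0, pi_sens=4.0)
aberrant = posterior_mean(mu_prior=1.0, pi_prior=16.0, x_obs=0.0, pi_sens=0.25)

print(balanced)  # 0.2   -> percept dominated by the (silent) sensory evidence
print(aberrant)  # ~0.98 -> percept dominated by the prior: a false positive
```

The precision values here are arbitrary; the qualitative point is only that the ratio of prior to sensory precision, not either value alone, determines whether the agent "hears" the expected voice.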
“…For a person to actively maintain themselves, it follows that they should actively seek out sensory data that provide self-evidence. It is this notion of self-evidencing that underwrites the first-principles account on offer here and is the same principle that has been used to reproduce a range of other phenomena in neuroscience (e.g., visual search behavior (Mirza et al 2016), navigation and planning (Bruineberg et al 2018; Kaplan and Friston 2018), curiosity (Friston et al 2017b), reading (Friston et al 2017d), action-observation (Friston et al 2011), neglect syndromes (Parr and Friston 2017b), and hallucinations (Adams et al 2013; Benrimoh et al 2018; Parr et al 2018)). This appeal to a common objective function (i.e., evidence for a generative model of the sensorium) across all these domains distinguishes the approach used here from alternative approaches to modeling behavior.…”
Section: Active Inference
confidence: 86%
“…In particular, aberrant precision tuning has been highlighted as highly relevant for understanding hallucinations (Powers et al, 2017) and psychosis (Sterzer et al, 2018). In recent years, more and more studies have provided formal, computational explanations for an increasing number of psychiatric symptoms, such as those of schizophrenia (Sterzer et al, 2019; Wacongne, 2016; Benrimoh et al, 2018), autism (Palmer et al, 2017; Robic et al, 2015; Sapey-Triomphe et al, 2018; Constant et al, 2018), and depression (Stephan et al, 2016). Such precisions can be linked to hyperparameters of the generative models used to explain subjects' behaviour.…”
Section: Implications For Computational Psychiatry
confidence: 99%
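The idea that precision acts as a hyperparameter of a generative model can be sketched for discrete hypotheses, closer in spirit to the Markov decision process setting of the paper, though this is only an illustrative toy and not the paper's actual model. A scalar precision gamma scales how much the sensory likelihood contributes to the posterior over hidden states; as gamma shrinks, the prior dominates:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def state_posterior(log_prior, log_likelihood, gamma):
    """Posterior over discrete hidden states when the likelihood's
    contribution is scaled by a precision hyperparameter gamma
    (gamma -> 0 ignores the data entirely)."""
    return softmax(np.asarray(log_prior) + gamma * np.asarray(log_likelihood))

# Two hypotheses: ["voice present", "silence"].
log_prior = np.log([0.8, 0.2])  # strong prior that a voice will be heard
log_lik = np.log([0.1, 0.9])    # the senses actually report silence

print(state_posterior(log_prior, log_lik, gamma=1.0))  # silence wins
print(state_posterior(log_prior, log_lik, gamma=0.1))  # prior wins: false percept
```

Fitting gamma (and the prior) to a subject's choices is, in outline, how such precision hyperparameters are linked to individual behaviour in computational psychiatry, although real studies estimate them within far richer generative models.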