Interoception refers to the sensing of signals concerning the internal state of the body. Individual differences in interoceptive sensitivity are proposed to account for differences in affective processing, including the expression of anxiety. The majority of investigations of interoceptive accuracy focus on cardiac signals, typically using heartbeat detection tests and self-report measures. Consequently, little is known about how different organ-specific axes of interoception relate to each other or to symptoms of anxiety. Here, we compare interoception for cardiac and respiratory signals. We demonstrate a dissociation between cardiac and respiratory measures of interoceptive accuracy (i.e. task performance), yet a positive relationship between cardiac and respiratory measures of interoceptive awareness (i.e. metacognitive insight into own interoceptive ability). Neither interoceptive accuracy nor metacognitive awareness for cardiac and respiratory measures was related to touch acuity, an exteroceptive sense. Specific measures of interoception were found to be predictive of anxiety symptoms. Poor respiratory accuracy was associated with heightened anxiety score, while good metacognitive awareness for cardiac interoception was associated with reduced anxiety. These findings highlight that detection accuracies across different sensory modalities are dissociable and future work can better delineate their relationship to affective and cognitive constructs. This article is part of the themed issue ‘Interoception beyond homeostasis: affect, cognition and mental health’.
There is a widespread tendency to associate certain properties of sound with those of colour (e.g., higher pitches with lighter colours). Yet it is an open question how sound influences chroma or hue when properly controlling for lightness. To examine this, we asked participants to adjust physically equiluminant colours until they ‘went best’ with certain sounds. For pure tones, complex sine waves and vocal timbres, increases in frequency were associated with increases in chroma. Increasing the loudness of pure tones also increased chroma. Hue associations varied depending on the type of stimuli. In stimuli that involved only limited bands of frequencies (pure tones, vocal timbres), frequency correlated with hue, such that low frequencies gave blue hues and progressed to yellow hues at 800 Hz. Increasing the loudness of a pure tone was also associated with a shift from blue to yellow. However, for complex sounds that share the same bandwidth of frequencies (100–3200 Hz) but that vary in terms of which frequencies have the most power, all stimuli were associated with yellow hues. This suggests that the presence of high frequencies (above 800 Hz) consistently yields yellow hues. Overall we conclude that while pitch–chroma associations appear to flexibly re-apply themselves across a variety of contexts, frequencies above 800 Hz appear to produce yellow hues irrespective of context. These findings reveal new sound–colour correspondences previously obscured through not controlling for lightness. Findings are discussed in relation to understanding the underlying rules of cross-modal correspondences, synaesthesia, and optimising the sensory substitution of visual information through sound.
Cross-modal correspondences describe the widespread tendency for attributes in one sensory modality to be consistently matched to those in another modality. For example, high pitched sounds tend to be matched to spiky shapes, small sizes, and high elevations. However, the extent to which these correspondences depend on sensory experience (e.g. regularities in the perceived environment) remains controversial. Two recent studies involving blind participants have argued that visual experience is necessary for the emergence of correspondences, wherein such correspondences were present (although attenuated) in late blind individuals but absent in the early blind. Here, using a similar approach and a large sample of early and late blind participants (N = 59) and sighted controls (N = 63), we challenge this view. Examining five auditory-tactile correspondences, we show that only one requires visual experience to emerge (pitch-shape), two are independent of visual experience (pitch-size, pitch-weight), and two appear to emerge in response to blindness (pitch-texture, pitch-softness). These effects tended to be more pronounced in the early blind than late blind group, and the duration of vision loss among the late blind did not mediate the strength of these correspondences. Our results suggest that altered sensory input can affect cross-modal correspondences in a more complex manner than previously thought and cannot solely be explained by a reduction in visually-mediated environmental correlations. We propose roles of visual calibration, neuroplasticity and structurally-innate associations in accounting for our findings.
Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have either avoided colour or, when they do encode it, have assigned sounds to colour in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users with the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the greatest improvements during the associative memory task also produced the greatest gains in recognising realistic objects featuring these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance to both colour and correspondences in sensory substitution use.
Visual sensory substitution devices (SSDs) allow visually-deprived individuals to navigate and recognise the 'visual world'; SSDs also provide opportunities for psychologists to study modality-independent theories of perception. At present, most research has focused on encoding greyscale vision. However, at the low spatial resolutions received by SSD users, colour information enhances object-ground segmentation and provides more stable cues for scene and object recognition. Many attempts have been made to encode colour information in tactile or auditory modalities, but many of these studies exist in isolation. This review brings together a wide variety of tactile and auditory approaches to representing colour. We examine how each device constructs 'colour' relative to veridical human colour perception and report previous experiments using these devices. Theoretical approaches to encoding and transferring colour information through sound or touch are discussed for future devices, covering alternative stimulation approaches, perceptually distinct dimensions, and intuitive cross-modal correspondences.
Sensory Substitution Devices (SSDs) convert visual information into another sensory channel (e.g. sound) to improve the everyday functioning of blind and visually impaired persons (BVIP). However, the range of possible functions and options for translating vision into sound is largely open-ended. To provide constraints on the design of this technology, we interviewed ten BVIPs who were briefly trained in the use of three novel devices that, collectively, showcase a large range of design permutations. The SSDs include the 'Depth-vOICe,' 'Synaestheatre' and 'Creole' that offer high spatial, temporal, and colour resolutions respectively via a variety of sound outputs (electronic tones, instruments, vocals). The participants identified a range of practical concerns in relation to the devices (e.g. curb detection, recognition, mental effort) but also highlighted experiential aspects. This included both curiosity about the visual world (e.g. understanding shades of colour, the shape of cars, seeing the night sky) and the desire for the substituting sound to be responsive to movement of the device and aesthetically engaging.
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially
It is accepted knowledge that, for a given equivalent sound pressure level, sounds produced by aircraft are received worse by local communities than those from other transportation sources. Very little is known about the reasons for this special status, including any interactions that non-acoustical factors may have in listener assessments. Here we focus on one such factor: the multisensory aspect of aircraft events. We propose a method to assess the visual impact of perceived aircraft height and size, beyond the objective increase in sound pressure level for a plane flying lower than another. We adopt a soundscape approach based on acoustical indicators (sound levels in dB, LA,max, background sound pressure level) and social surveys: a combination of postal questionnaires (related to long-term exposure) and field interviews (related to contextual perception), complementing well-established questions with others designed to capture new multisensory relationships. For the first time, we report how the perceived visual height of airplanes can be established using a combination of visual size, airplane size, reading distance, and airplane distance. Visual and acoustic assessments are complemented and contextualised by additional questions probing the subjective, objective, and descriptive assessments made by observers, as well as how changes in airplane height over time may have influenced these perceptions. The flexibility of the proposed method allows a comparison of how participant reporting can vary between live viewing and memory recall conditions, permitting an examination of listeners' acoustic memory and expectations. The co-presence of different assessment methods allows a comparison between the “objective” and the “perceptual” spheres and underscores the multisensory nature of observers' perceptual and emotive evaluations.
In this study, we discuss the pros and cons of our method, as assessed during a community survey conducted in summer 2017 around Gatwick Airport, and compare the different assessments of community perception.
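The abstract above does not give the exact formula, but the listed quantities (visual size, airplane size, reading distance, airplane distance) suggest a standard similar-triangles size-matching estimate: the matched mark at reading distance and the aircraft subtend the same visual angle. A minimal sketch under that assumption (function name and example values are hypothetical, not from the study):

```python
def perceived_distance(aircraft_size_m, matched_size_m, reading_distance_m):
    """Estimate perceived aircraft distance from a visual size match.

    Assumes the matched mark and the aircraft subtend the same visual angle:
        matched_size / reading_distance == aircraft_size / distance
    so distance = aircraft_size * reading_distance / matched_size.
    """
    return aircraft_size_m * reading_distance_m / matched_size_m

# Illustrative values only: a 36 m wingspan matched to a 2 cm mark
# held at 0.4 m implies a perceived distance of 36 * 0.4 / 0.02 = 720 m.
print(perceived_distance(36.0, 0.02, 0.4))
```

For an aircraft seen near overhead, this perceived distance approximates perceived height; at lower elevation angles the elevation would also need to be recorded.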