Adenosine deaminases acting on RNA (ADARs) catalyze the hydrolytic deamination of adenosine to inosine in double-stranded RNA (dsRNA) and thereby potentially alter the information content and structure of cellular RNAs. Notably, although the overwhelming majority of such editing events occur in transcripts derived from Alu repeat elements, the biological function of non-coding RNA editing remains uncertain. Here, we show that mutations in ADAR1 (also known as ADAR) cause the autoimmune disorder Aicardi-Goutières syndrome (AGS). As in Adar1-null mice, the human disease state is associated with upregulation of interferon-stimulated genes, indicating a possible role for ADAR1 as a suppressor of type I interferon signaling. Considering recent insights derived from the study of other AGS-related proteins, we speculate that ADAR1 may limit the cytoplasmic accumulation of the dsRNA generated from genomic repetitive elements.
In this paper, we review the empirical literature concerning the important question of whether food color influences taste and flavor perception in humans. Although a superficial reading of the literature on this topic would appear to give a somewhat mixed answer, we argue that this is, at least in part, because many researchers have failed to distinguish between two qualitatively distinct research questions. The first concerns the role that food coloring plays in the perception of the intensity of a particular flavor (e.g., strawberry, banana, etc.) or taste attribute (e.g., sweetness, saltiness, etc.). The second concerns the role that food coloring plays in the perception of flavor identity. The empirical evidence regarding the first question is currently rather ambiguous. While some researchers have reported a significant crossmodal effect of changing the intensity of a food or drink's coloring on people's judgments of taste or flavor intensity, many others have failed to demonstrate any such effect. By contrast, the research findings concerning the second question clearly support the view that people's judgments of flavor identity are often affected by changing a food or drink's color (be it appropriate, inappropriate, or absent). We discuss the possible mechanisms underlying these crossmodal effects and suggest key directions for future research to move our understanding in this area forward.
We investigated whether the perception of the crispness and staleness of potato chips can be affected by modifying the sounds produced during the biting action. Participants in our study bit into potato chips with their front teeth while rating either their crispness or freshness using a computer‐based visual analog scale. The results demonstrate that the perception of both the crispness and staleness was systematically altered by varying the loudness and/or frequency composition of the auditory feedback elicited during the biting action. The potato chips were perceived as being both crisper and fresher when either the overall sound level was increased, or when just the high-frequency sounds (in the range of 2–20 kHz) were selectively amplified. These results highlight the significant role that auditory cues can play in modulating the perception and evaluation of foodstuffs (despite the fact that consumers are often unaware of the influence of such auditory cues). The paradigm reported here also provides a novel empirical methodology for assessing such multisensory contributions to food perception.
The relative spatiotemporal correspondence between sensory events affects multisensory integration across a variety of species; integration is maximal when stimuli in different sensory modalities are presented from approximately the same position at about the same time. In the present study, we investigated the influence of spatial and temporal factors on audio-visual simultaneity perception in humans. Participants made unspeeded simultaneous versus successive discrimination responses to pairs of auditory and visual stimuli presented at varying stimulus onset asynchronies from either the same or different spatial positions using either the method of constant stimuli (Experiments 1 and 2) or psychophysical staircases (Experiment 3). The participants in all three experiments were more likely to report the stimuli as being simultaneous when they originated from the same spatial position than when they came from different positions, demonstrating that the apparent perception of multisensory simultaneity is dependent on the relative spatial position from which stimuli are presented.
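The psychophysical staircases mentioned for Experiment 3 can be illustrated with a minimal adaptive track on the stimulus onset asynchrony (SOA). This is a sketch only: the function name `staircase_soa`, the 1-up/1-down rule, and all parameter values are illustrative assumptions, not the authors' actual procedure.

```python
def staircase_soa(respond, start_soa=300.0, step=20.0,
                  reversals_needed=8, max_trials=500):
    """Illustrative 1-up/1-down staircase on audio-visual SOA (ms).

    `respond(soa)` returns True if the observer reports "simultaneous"
    at that SOA.  The SOA is widened after a "simultaneous" report and
    narrowed after a "successive" report, so the track converges on the
    SOA yielding ~50% "simultaneous" responses.
    """
    soa = start_soa
    last_direction = 0
    reversal_soas = []
    for _ in range(max_trials):
        if len(reversal_soas) >= reversals_needed:
            break
        simultaneous = respond(soa)
        direction = 1 if simultaneous else -1  # widen if judged simultaneous
        if last_direction and direction != last_direction:
            reversal_soas.append(soa)  # record SOA at each reversal
        last_direction = direction
        soa = max(0.0, soa + direction * step)
    # threshold estimate: mean SOA across the recorded reversals
    return sum(reversal_soas) / len(reversal_soas)
```

With a deterministic simulated observer who reports "simultaneous" only below 100 ms, the track oscillates around that boundary and the reversal mean lands near it.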
The ventral stream refers to a neural pathway that projects from early visual areas through to anterior temporal cortex, and comprises regions in ventral and lateral occipital-temporal cortex. The ventral stream is critical for recognizing visually presented objects. Functional imaging studies of the human brain have shown that different regions within the ventral stream show differential activation to nonliving (tools, houses) and living stimuli (animals, faces). The causes of these category preferences are widely debated. Using functional magnetic resonance imaging, we find that the same regions of the ventral stream that show category preferences for nonliving stimuli and animals in sighted adults show the same category preferences in adults who have been blind since birth. Both blind and sighted participants had larger blood oxygen-level dependent (BOLD) responses in the medial fusiform gyrus for nonliving stimuli compared to animal stimuli, and differential BOLD responses in lateral occipital cortex for animal stimuli compared to nonliving stimuli. These findings demonstrate that the medial-to-lateral bias by conceptual domain in the ventral stream does not require visual experience in order to develop, and suggest the operation of innately determined domain-specific constraints on the organization of object knowledge.
In two experiments, we examined the extent to which audiovisual temporal order judgments (TOJs) were affected by spatial factors and by the dimension along which TOJs were made. Pairs of auditory and visual stimuli were presented from either the left and/or right of fixation at varying stimulus onset asynchronies (SOAs), and participants made unspeeded TOJs regarding either "Which modality was presented first?" (experiment 1), or "Which side was presented first?" (experiment 2). Modality TOJs were more accurate (i.e. just-noticeable differences, JNDs, were smaller) when the auditory and visual stimuli were presented from different spatial positions rather than from the same position, highlighting an important potential confound inherent in previous research. By contrast, spatial TOJs were unaffected by whether or not the two stimuli were presented in different modalities. A between-experiments comparison revealed more accurate performance (i.e. smaller JNDs) when people reported which modality came first than when they reported which side came first for identical bimodal stimulus pairs. These results demonstrate that multisensory TOJs are critically dependent on both the relative spatial position from which stimuli are presented and on the particular dimension being judged.
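The just-noticeable differences (JNDs) reported above are conventionally derived from the slope of the psychometric function relating SOA to response proportions. A minimal sketch, assuming responses have already been aggregated into proportions, estimates the JND as half the distance between the 25% and 75% crossing points by linear interpolation; real analyses would typically fit a cumulative Gaussian instead. The function name and example data are illustrative, not taken from the study.

```python
def jnd_from_toj(soas, p_visual_first):
    """Estimate a JND from temporal order judgment data.

    `soas`: audio-visual onset asynchronies in ms, sorted ascending
    (negative = auditory first).
    `p_visual_first`: proportion of "visual first" responses at each SOA.
    The JND is half the SOA distance between the 25% and 75% points of
    the psychometric function, found by linear interpolation.
    """
    def crossing(target):
        points = list(zip(soas, p_visual_first))
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if y0 <= target <= y1:
                # interpolate between the two bracketing data points
                return x0 + (target - y0) * (x1 - x0) / (y1 - y0)
        raise ValueError("target proportion not bracketed by the data")
    return (crossing(0.75) - crossing(0.25)) / 2.0
```

For a symmetric psychometric function crossing 25% at -30 ms and 75% at +30 ms, this yields a JND of 30 ms; a smaller JND indicates more accurate temporal discrimination.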
The speeding-up of neural processing associated with attended events (i.e., the prior-entry effect) has long been proposed as a viable mechanism by which attention can prioritize our perception and action. In the brain, this has been thought to be regulated through a sensory gating mechanism, increasing the amplitudes of early evoked potentials while leaving their latencies unaffected. However, the majority of previous research has emphasized speeded responding rather than fine temporal discrimination, and may therefore have lacked the sensitivity to reveal putative modulations in the timing of neural processing. In the present study, we used a cross-modal temporal order judgment task while shifting attention between the visual and tactile modalities to investigate the mechanisms underlying selective attention electrophysiologically. Our results indicate that attention can indeed speed up neural processes during visual perception, thereby providing the first electrophysiological support for the existence of prior entry.