GPR41 and GPR43 are related members of a homologous family of orphan G protein-coupled receptors that are tandemly encoded at a single chromosomal locus in both humans and mice. We identified the acetate anion as an agonist of human GPR43 during routine ligand bank screening in yeast. This activity was confirmed after transient transfection of GPR43 into mammalian cells using Ca2+ mobilization and [35S]guanosine 5′-O-(3-thiotriphosphate) binding assays and by coexpression with GIRK G protein-regulated potassium channels in Xenopus laevis oocytes. Other short-chain carboxylic acid anions such as formate, propionate, butyrate, and pentanoate also had agonist activity. GPR41 is related to GPR43 (52% similarity; 43% identity) and was activated by similar ligands but with differing specificity for carbon chain length, with pentanoate being the most potent agonist. A third family member, GPR42, is most likely a recent gene duplication of GPR41 and may be a pseudogene. GPR41 was expressed primarily in adipose tissue, whereas the highest levels of GPR43 were found in immune cells. The identity of the cognate physiological ligands for these receptors is not clear, although propionate is known to occur in vivo at high concentrations under certain pathophysiological conditions.

Within family A of the G protein-coupled receptor (GPCR) gene superfamily (also classified as family 1), there is a phylogenetically related group of ~90 receptors that respond to an unusually wide variety of ligand types, considering the relatively close similarity of their primary sequences (1). The group includes receptors that respond to purinergic or pyrimidinergic nucleotides (P2Y1, P2Y2, P2Y4, P2Y6, P2Y11, P2Y12, and P2Y13), modified nucleotides (UDP-glucose), lipids (platelet-activating factor receptor), leukotrienes (BLT1 and BLT2, and CysLT1 and CysLT2), proteases (protease-activated receptors 1-4), chemoattractants (FPR1), and chemokines. To date, these receptors have no clear homologs in invertebrates, unlike the monoamine or neuropeptide receptors, suggesting a relatively recent evolutionary origin (2, 3). At least 50 GPCRs whose cognate ligands are unknown (orphans) (4) are categorized within this group on the basis of sequence homology. Often, these orphans fall into subsets, being more related to each other than to receptors with known ligands; and this, combined with the ligand diversity noted above, makes it difficult to predict the chemical nature of their ligands. One subset comprises GPR40-43, which were identified as tandemly encoded genes present on cosmids isolated from human chromosomal locus 19q13.1 (5). GPR42 differs from GPR41 at only six amino acid positions; otherwise, the four members of this subfamily share ~30% minimum identity. BLAST searches have identified the next most closely related receptors as the protease-activated receptors. However, the long N-terminal extracellular domains that serve as protease substrates and that are characteristic of protease-activated receptors are absent in the GPR...
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map.

How do we find our way in darkness? Answering this question demands that we identify the internal representations used to guide navigation. Some claim that spatial navigation depends on a coherent multimodal mental model or cognitive map (1-5). Others suggest that such representations do not exist and have been incorrectly inferred from the presence of simpler navigational mechanisms (6-9). One such mechanism is path integration, i.e., the ability to return to the start of a recently walked path by using internal motion-related information, such as proprioceptive and vestibular representations and motor efference copy (referred to hereafter as interoception), which are produced during the outbound path (10-18). Although path integration is primarily thought to depend on interoceptive information, visual information can be used intermittently to prevent the accumulation of error, if available (4,10,12,19).

We asked whether path integration tasks show evidence of a coherent multimodal representation, or whether they reflect separate processes of interoceptive path integration and intermittent use of vision. Our starting point was studies showing that walking on a treadmill in illuminated conditions can affect one's subsequent perception of the translational and rotational speed of walking in darkness (20-23). These studies indicate that control of walking reflects tight coupling between visual and vestibular representations on the one hand and motoric and proprioceptive representations on the other, such that altering the correspondence between these two sets of representations has long-lasting consequences (22,2...
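The single-multimodal-representation account above can be illustrated with a minimal numerical sketch. The code below is an illustration under our own assumptions, not the authors' fitted model: it supposes that each rotation on the outbound path yields a visual signal scaled by the experimentally applied gain and an interoceptive signal equal to the physical turn, and that the two are merged by a weighted average (the weights w_vis and w_intero are hypothetical parameters) into one heading estimate that drives the homing response.

```python
import numpy as np

def integrate_path(turns_deg, step_lengths, visual_gain=1.0,
                   w_vis=0.5, w_intero=0.5):
    """Accumulate a homing estimate from a sequence of turns and steps.

    Each physical turn produces an interoceptive signal equal to the turn
    and a visual signal scaled by `visual_gain` (the VR manipulation).
    Under a single-multimodal-representation account, the two signals are
    merged into one heading estimate by weighted averaging.
    """
    heading = 0.0            # estimated heading (degrees)
    position = np.zeros(2)   # estimated position in the combined frame
    for turn, step in zip(turns_deg, step_lengths):
        visual_turn = visual_gain * turn   # what vision reports
        intero_turn = turn                 # what the body reports
        heading += (w_vis * visual_turn + w_intero * intero_turn) / (w_vis + w_intero)
        rad = np.radians(heading)
        position += step * np.array([np.cos(rad), np.sin(rad)])
    # Homing response: the turn needed to face the start, and the distance back.
    home_bearing = np.degrees(np.arctan2(-position[1], -position[0]))
    return home_bearing - heading, np.linalg.norm(position)

# Example: an L-shaped outbound path walked with a 1.3x visual rotation gain.
print(integrate_path(turns_deg=[0.0, 90.0], step_lengths=[3.0, 2.0],
                     visual_gain=1.3))
```

With visual_gain = 1.0 the sketch reduces to ordinary interoceptive path integration; a gain different from 1.0 shifts the homing direction, which is the kind of signature used to distinguish a combined representation from separate visual and interoceptive influences.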
As we move through the world, our eyes acquire a sequence of images. The information from this sequence is sufficient to determine the structure of a three-dimensional scene, up to a scale factor determined by the distance that the eyes have moved. Previous evidence shows that the human visual system accounts for the distance the observer has walked and the separation of the eyes when judging the scale, shape, and distance of objects. However, in an immersive virtual-reality environment, observers failed to notice when a scene expanded or contracted, despite having consistent information about scale from both distance walked and binocular vision. This failure led to large errors in judging the size of objects. The pattern of errors cannot be explained by assuming a visual reconstruction of the scene with an incorrect estimate of interocular separation or distance walked. Instead, it is consistent with a Bayesian model of cue integration in which the efficacy of motion and disparity cues is greater at near viewing distances. Our results imply that observers are more willing to adjust their estimate of interocular separation or distance walked than to accept that the scene has changed in size.
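The Bayesian cue-integration idea referred to here is commonly formalized as reliability-weighted (inverse-variance) averaging. The sketch below is generic and not the specific model fitted in the study; the cue estimates, the scene-stability prior, and all sigma values are illustrative assumptions, chosen only to show why distance-dependent cue reliability predicts that a change in scene scale can go unnoticed at far viewing distances.

```python
import numpy as np

def combined_scale_estimate(s_disparity, s_motion, s_prior,
                            sigma_disp, sigma_motion, sigma_prior):
    """Reliability-weighted (inverse-variance) combination of scale cues.

    s_* are scale estimates from binocular disparity, motion (distance
    walked), and a prior that the scene does not change size; sigma_* are
    their standard deviations. All values here are illustrative.
    """
    w = np.array([1 / sigma_disp**2, 1 / sigma_motion**2, 1 / sigma_prior**2])
    s = np.array([s_disparity, s_motion, s_prior])
    return np.sum(w * s) / np.sum(w)

# Near viewing: disparity and motion cues are reliable, so a doubled scene
# (cue estimates = 2.0) is largely detected despite a stability prior of 1.0.
print(combined_scale_estimate(2.0, 2.0, 1.0,
                              sigma_disp=0.1, sigma_motion=0.15, sigma_prior=0.3))
# Far viewing: the same cues are noisy, so the combined estimate stays near 1.0.
print(combined_scale_estimate(2.0, 2.0, 1.0,
                              sigma_disp=0.6, sigma_motion=0.8, sigma_prior=0.3))
```

Under this weighting, when disparity and motion cues are unreliable, the stability prior dominates, so observers effectively revise their assumed interocular separation or distance walked rather than conclude that the scene has changed size.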
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers underestimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an underestimation of distance walked. We discuss implications for theories of a task-independent representation of visual space.
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the images their devices produce. We have developed a fast and accurate method (known as "SET") that is suitable even for natural environments with uncontrolled, dynamic, and extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations ("Natural"); and images of less challenging indoor scenes ("CASIA-Iris-Thousand"). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library ("DLL"), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk).
Experience indicates that the sense of presence in a virtual environment is enhanced when the participants are able to actively move through it. When exploring a virtual world by walking, the size of the model is usually limited by the size of the available tracking space. A promising way to overcome these limitations is to use motion compression techniques, which decouple the position in the real and virtual worlds by introducing imperceptible visual-proprioceptive conflicts. Such techniques usually precalculate the redirection factors, greatly reducing their robustness. We propose a novel way to determine the instantaneous rotational gains using a controller based on an optimization problem. We present a psychophysical study that measures sensitivity to visual-proprioceptive conflicts during walking and use this to calibrate a real-time controller. We show the validity of our approach by allowing users to walk through virtual environments vastly larger than the tracking space.
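To make the notion of a rotational gain concrete, here is a minimal per-frame update. This is a sketch under our own assumptions; the controller described above chooses the gain by solving an optimization problem, and the gain limits shown are hypothetical placeholders rather than the thresholds measured in the psychophysical study.

```python
def update_virtual_yaw(virtual_yaw, real_yaw_delta, desired_gain,
                       min_gain=0.8, max_gain=1.25):
    """Apply a clamped rotational gain to one frame of head rotation.

    `real_yaw_delta` is the physically tracked change in head yaw this
    frame; the virtual camera is rotated by gain * real_yaw_delta, with
    the gain clamped to a range users should not notice (values illustrative).
    """
    gain = max(min_gain, min(max_gain, desired_gain))
    return virtual_yaw + gain * real_yaw_delta

# Example: steer the user with a 1.2x gain over a 90-degree physical turn.
yaw = 0.0
for _ in range(90):               # 90 frames of 1 degree each
    yaw = update_virtual_yaw(yaw, 1.0, desired_gain=1.2)
print(yaw)                        # the virtual turn is about 108 degrees
```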
Dynamic learning in humans has been extensively studied using externally applied force fields to perturb movements of the arm. These studies have focused on unimanual learning in which a force field is applied to only one arm. Here we examine dynamic learning during bimanual movements. Specifically, we examine learning of a force field in one arm when the other arm makes movements in a null field or in a force field. For both the dominant and non-dominant arms, the learning (change in performance over the exposure period) was the same regardless of whether the other arm moved in a force field, equivalent either in intrinsic or extrinsic coordinates, or moved in a null field. Moreover, there were no significant differences in learning in these bimanual tasks compared to unimanual learning, when one arm experienced a force field and the other arm was at rest. Although the learning was the same, there was an overall increase in error for the non-dominant arm for all bimanual conditions compared to the unimanual condition. This increase in error was the result of bimanual movement alone and was present even in the initial training phase before any forces were introduced. We conclude that, during bimanual movements, the application of a force field to one arm neither interferes with nor facilitates simultaneous learning of a force field applied to the other arm.
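Studies of dynamic learning of this kind commonly apply velocity-dependent "curl" force fields through a robotic manipulandum. The sketch below is our own illustration of such a field with an arbitrary gain; the abstract does not specify the exact field used, so the form and parameters should be treated as assumptions.

```python
import numpy as np

def curl_field_force(hand_velocity, b=15.0, clockwise=True):
    """Velocity-dependent curl field commonly used in dynamic-learning tasks.

    The force is perpendicular to the hand velocity, with magnitude
    b * |v| (N per m/s); the sign sets the direction of the curl.
    Gain and form are illustrative, not necessarily those used in this study.
    """
    sign = 1.0 if clockwise else -1.0
    rotation = sign * np.array([[0.0, 1.0],
                                [-1.0, 0.0]])
    return b * rotation @ np.asarray(hand_velocity)

# A forward reach at 0.3 m/s is pushed sideways with about 4.5 N.
print(curl_field_force([0.0, 0.3]))
```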