By performing high-resolution two-color photoassociation spectroscopy, we have determined the binding energies of several of the last bound states of the homonuclear dimers of six isotopes of ytterbium. These spectroscopic data are in excellent agreement with theoretical calculations based on a simple model potential, which very precisely predicts the s-wave scattering lengths of all 28 pairs of the seven stable isotopes. The s-wave scattering lengths for collisions of two atoms of the same isotopic species are 13.33(18) nm for 168Yb, 3.38(11) nm for 170Yb, −0.15(19) nm for 171Yb, −31.7(3.4) nm for 172Yb, 10.55(11) nm for 173Yb, 5.55(8) nm for 174Yb, and −1.28(23) nm for 176Yb. The coefficient of the leading term of the long-range van der Waals potential of the Yb2 molecule is C6 = 1932(30) atomic units (E_h a_0^6 ≈ 9.573 × 10^−26 J nm^6).
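As a quick sanity check of the quoted unit conversion, one atomic unit of C6 is the Hartree energy times the sixth power of the Bohr radius. The short script below verifies this using CODATA constant values; it is an illustrative calculation, not part of the original analysis.

```python
# Convert the atomic unit of C6 (E_h * a0^6) into J nm^6 and scale
# by the reported coefficient C6 = 1932 atomic units.
Eh = 4.3597447e-18   # Hartree energy in joules (CODATA)
a0 = 0.052917721     # Bohr radius in nanometers (CODATA)

au_C6 = Eh * a0**6   # one atomic unit of C6, in J nm^6
C6 = 1932 * au_C6    # the reported C6 in J nm^6

print(f"1 a.u. of C6 = {au_C6:.4e} J nm^6")  # ~9.573e-26, as quoted
print(f"C6 = {C6:.4e} J nm^6")
```

Multiplying out confirms that 1932 atomic units corresponds to roughly 1.85 × 10^−22 J nm^6.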
We report control of the scattering wave function via an optical Feshbach resonance using ytterbium atoms. The narrow intercombination line (1S0–3P1) is used for efficient control, as proposed by Ciuryło et al. [Phys. Rev. A 71, 030701(R) (2005) 10.1103/PhysRevA.71.030701]. The manipulation of the scattering wave function is monitored via the change in the photoassociation rate induced by another laser. The optical Feshbach resonance is especially efficient for isotopes with large negative scattering lengths such as 172Yb, and we have confirmed that the scattering phase shift divided by the wave number, which gives the scattering length in the zero-energy limit, is changed by about 30 nm.
The basal ganglia play key roles in adaptive behaviors guided by reward and punishment. However, despite accumulating knowledge, few studies have tested how heterogeneous signals in the basal ganglia are organized and coordinated for goal-directed behavior. In this study, we investigated neuronal signals of the direct and indirect pathways of the basal ganglia as rats performed a lever push/pull task for a probabilistic reward. In the dorsomedial striatum, we found that optogenetically and electrophysiologically identified direct pathway neurons encoded reward outcomes, whereas indirect pathway neurons encoded the no-reward outcome and next-action selection. Outcome coding occurred in association with the chosen action. In support of pathway-specific neuronal coding, light activation induced a bias toward repeating the same action in the direct pathway, but toward switching actions in the indirect pathway. Our data reveal the mechanisms underlying monitoring and updating of action selection for goal-directed behavior through basal ganglia circuits.
Midbrain dopamine neurons signal reward value, their prediction error, and the salience of events. If they play a critical role in achieving specific distant goals, long-term future rewards should also be encoded, as suggested in reinforcement learning theories. Here, we address this experimentally untested issue. We recorded 185 dopamine neurons in three monkeys that performed a multistep choice task in which they explored a reward target among alternatives and then exploited that knowledge to receive one or two additional rewards by choosing the same target in a set of subsequent trials. An analysis of anticipatory licking for reward water indicated that the monkeys did not anticipate an immediately expected reward in individual trials; rather, they anticipated the sum of immediate and multiple future rewards. In accordance with this behavioral observation, the dopamine responses to the start cues and reinforcer beeps reflected the expected values of the multiple future rewards and their errors, respectively. More specifically, when monkeys learned the multistep choice task over the course of several weeks, the responses of dopamine neurons encoded the sum of the immediate and expected multiple future rewards. The dopamine responses were quantitatively predicted by theoretical descriptions of the value function with time discounting in reinforcement learning. These findings demonstrate that dopamine neurons learn to encode the long-term value of multiple future rewards with distant rewards discounted.

decision making | basal ganglia | temporal difference learning | primate
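The time-discounted value function invoked above can be sketched as a simple sum, V = Σ γ^k r_k, where γ discounts rewards further in the future. The snippet below is a minimal illustration of this standard reinforcement-learning quantity, not the authors' fitting code; the reward sequence and discount factor are hypothetical.

```python
# Minimal sketch of a time-discounted value function:
# V = sum over steps k of gamma**k * r_k, with gamma in (0, 1).
def discounted_value(rewards, gamma):
    """Sum of future rewards, each discounted by gamma per step."""
    return sum(gamma**k * r for k, r in enumerate(rewards))

# One immediate reward plus two future rewards, discounted at gamma = 0.5:
# 1 + 0.5*1 + 0.25*1 = 1.75
print(discounted_value([1.0, 1.0, 1.0], gamma=0.5))
```

With γ close to 1 the sum approaches the undiscounted total of future rewards; smaller γ weights distant rewards less, which is the discounting behavior the dopamine responses were found to follow.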