In bistable perception, observers experience alternations between two interpretations of an unchanging stimulus. Neurophysiological studies of bistable perception typically partition neural measurements into stimulus-based epochs and assess neuronal differences between epochs based on subjects’ perceptual reports. Computational studies replicate statistical properties of percept durations with modeling principles like competitive attractors or Bayesian inference. However, bridging neuro-behavioral findings with modeling theory requires the analysis of single-trial dynamic data. Here, we propose an algorithm for extracting non-stationary timeseries features from single-trial electrocorticography (ECoG) data. We applied the proposed algorithm to 5-minute ECoG recordings from human primary auditory cortex obtained during perceptual alternations in an auditory triplet streaming task (six subjects: four male, two female). We report two ensembles of emergent neuronal features in all trial blocks. One ensemble consists of periodic functions that encode a stereotypical response to the stimulus. The other comprises more transient features and encodes dynamics associated with bistable perception at multiple time scales: minutes (within-trial alternations), seconds (duration of individual percepts), and milliseconds (switches between percepts). Within the second ensemble, we identified a slowly drifting rhythm that correlates with the perceptual states and several oscillators with phase shifts near perceptual switches. Projections of single-trial ECoG data onto these features establish low-dimensional attractor-like geometric structures invariant across subjects and stimulus types. These findings provide supporting neural evidence for computational models with oscillatory-driven attractor-based principles.
The feature extraction techniques described here generalize across recording modality and are appropriate when hypothesized low-dimensional dynamics characterize an underlying neural system.

Significance Statement

Irrespective of the sensory modality, neurophysiological studies of multi-stable perception have typically investigated events time-locked to the perceptual switching rather than the time course of the perceptual states per se. Here, we propose an algorithm that extracts neuronal features of bistable auditory perception from large-scale single-trial data while remaining agnostic to the subject’s perceptual reports. The algorithm captures the dynamics of perception at multiple timescales—minutes (within-trial alternations), seconds (durations of individual percepts), and milliseconds (timing of switches)—and distinguishes attributes of neural encoding of the stimulus from those encoding the perceptual states. Finally, our analysis identifies a set of latent variables that exhibit alternating dynamics along a low-dimensional manifold, similar to trajectories in attractor-based models for perceptual bistability.
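The abstract's final analysis step, projecting single-trial data onto a set of extracted temporal features to obtain low-dimensional trajectories, can be illustrated with a minimal least-squares sketch. This is not the paper's actual algorithm; the function name, the two-feature sinusoidal basis, and the projection-by-regression formulation are all illustrative assumptions.

```python
import numpy as np

def project_onto_features(x, F):
    """Project a single-trial signal onto a basis of extracted features.

    x : (T,) array, one trial's recorded timeseries.
    F : (k, T) array, k extracted feature timeseries sampled at the same times.

    Returns the (k,) coefficient vector c minimizing ||F.T @ c - x||_2,
    i.e., the trial's coordinates in the low-dimensional feature space.
    """
    c, *_ = np.linalg.lstsq(F.T, x, rcond=None)
    return c

# Illustrative usage: a two-feature periodic basis and a synthetic trial.
t = np.linspace(0.0, 1.0, 200)
F = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
x = 2.0 * F[0] + 3.0 * F[1]
print(project_onto_features(x, F))  # coefficients close to [2, 3]
```

Tracking such coefficients over sliding windows is one simple way a trial's trajectory through feature space, of the kind the abstract describes as attractor-like, could be visualized.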
Lagrangian methods to solve the inviscid Euler equations produce numerical oscillations near shock waves. A common approach to reducing these oscillations is to add artificial viscosity (AV) to the discrete equations. The AV term acts as a dissipative mechanism that attenuates oscillations by smearing the shock across a finite number of computational cells. However, AV introduces several control parameters that are not determined by the underlying physical model, and hence, in practice are tuned to the characteristics of a given problem. We seek to improve the standard quadratic-linear AV form by replacing it with a learned neural function that reduces oscillations relative to exact solutions of the Euler equations, resulting in a hybrid numerical-neural hydrodynamic solver. Because AV is an artificial construct that exists solely to improve the numerical properties of a hydrodynamic code, there is no offline ‘viscosity data’ against which a neural network can be trained before inserting into a numerical simulation, thus requiring online training. We achieve this via differentiable programming, i.e. end-to-end backpropagation or adjoint solution through both the neural and differential equation code, using automatic differentiation of the hybrid code in the Julia programming language to calculate the necessary loss function gradients. A novel offline pre-training step accelerates training by initializing the neural network to the default numerical AV scheme, which can be learned rapidly by space-filling sampling over the AV input space. We find that online training over early time steps of simulation is sufficient to learn a neural AV function that reduces numerical oscillations in long-term hydrodynamic shock simulations. These results offer an early proof-of-principle that online differentiable training of hybrid numerical schemes with novel neural network components can improve certain performance aspects existing in purely numerical schemes.
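The "standard quadratic-linear AV form" that the learned neural function replaces follows the classic von Neumann–Richtmyer construction: a term quadratic in the velocity jump plus a term linear in it, active only under compression. A minimal sketch, with the caveat that the coefficient names `c_quad` and `c_lin` and their default values are illustrative stand-ins for the tunable control parameters the abstract mentions, not values from the paper:

```python
import numpy as np

def quadratic_linear_av(rho, du, cs, c_quad=2.0, c_lin=1.0):
    """Quadratic-linear artificial viscosity (von Neumann-Richtmyer form).

    rho : cell densities.
    du  : velocity jump across each cell (u_{i+1} - u_i); negative in compression.
    cs  : local sound speeds, used to scale the linear term.

    Returns the AV pressure q added to each cell; zero in expansion (du >= 0).
    """
    q = rho * (c_quad * du**2 + c_lin * cs * np.abs(du))
    return np.where(du < 0.0, q, 0.0)

# Compressing cell gets a positive AV pressure; expanding cell gets none.
print(quadratic_linear_av(np.array([1.0, 1.0]),
                          np.array([-0.5, 0.5]),
                          np.array([1.0, 1.0])))
```

In the hybrid scheme the abstract describes, a neural network would replace this closed-form map from local flow quantities to `q`, with the numerical form above serving as the pre-training target.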