Current theories of medial temporal lobe (MTL) function focus on event content as an important organizational principle that differentiates MTL subregions. Perirhinal and parahippocampal cortices may play content-specific roles in memory, whereas hippocampal processing is alternately hypothesized to be content specific or content general. Despite anatomical evidence for content-specific MTL pathways, empirical data for content-based MTL subregional dissociations are mixed. Here, we combined functional magnetic resonance imaging with multiple statistical approaches to characterize MTL subregional responses to different classes of novel event content (faces, scenes, spoken words, sounds, visual words). Univariate analyses revealed that responses to novel faces and scenes were distributed across the anterior-posterior axis of MTL cortex, with face responses distributed more anteriorly than scene responses. Moreover, multivariate pattern analyses of perirhinal and parahippocampal data revealed spatially organized representational codes for multiple content classes, including nonpreferred visual and auditory stimuli. In contrast, anterior hippocampal responses were content general, with less accurate overall pattern classification relative to MTL cortex. Finally, posterior hippocampal activation patterns consistently discriminated scenes more accurately than other forms of content. Collectively, our findings indicate differential contributions of MTL subregions to event representation via a distributed code along the anterior-posterior axis of MTL that depends on the nature of event content.
In this paper we introduce the TorontoCity benchmark, which covers the full greater Toronto area (GTA) with 712.5 km² of land, 8,439 km of road, and around 400,000 buildings. Our benchmark provides different perspectives of the world captured from airplanes, drones, and cars driving around the city. Manually labeling such a large-scale dataset is infeasible. Instead, we propose to utilize different sources of high-precision maps to create our ground truth. Towards this goal, we develop algorithms that allow us to align all data sources with the maps while requiring minimal human supervision. We have designed a wide variety of tasks including building height estimation (reconstruction), road centerline and curb extraction, building instance segmentation, building contour extraction (reorganization), semantic labeling and scene type classification (recognition). Our pilot study shows that most of these tasks are still difficult for modern convolutional neural networks.
Lossless-encoding compressed ultrafast photography captures a movie of a photonic Mach cone at 100 billion frames per second.
Current embodiments of photoacoustic imaging require either serial detection with a single-element ultrasonic transducer or parallel detection with an ultrasonic array, necessitating a trade-off between cost and throughput. Here, we present photoacoustic topography through an ergodic relay (PATER) for low-cost, high-throughput snapshot wide-field imaging. Encoding spatial information with randomized temporal signatures through ergodicity, PATER requires only a single-element ultrasonic transducer to capture a wide-field image with a single laser shot. We applied PATER to demonstrate both functional imaging of hemodynamic responses and high-speed imaging of blood pulse wave propagation in mice in vivo. Leveraging the high frame rate of 2 kHz, PATER tracked and localized moving melanoma tumor cells in the mouse brain in vivo, which enabled flow velocity quantification and super-resolution imaging. Among the potential biomedical applications of PATER, wearable monitoring of human vital signs in particular is envisaged.
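The core idea of the abstract above — that a single temporal trace can encode a whole image when each pixel contributes a distinct calibrated signature — can be illustrated with a toy linear-inversion sketch. This is not PATER's actual reconstruction pipeline; the matrix sizes, noiseless model, and plain least-squares solver are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of encoding through an ergodic relay: each pixel i has a
# calibrated temporal signature (column h_i of H). A single snapshot
# then yields one time trace y = H @ x, where x holds the per-pixel
# photoacoustic amplitudes. Sizes are hypothetical, not PATER's.
n_pixels, n_samples = 64, 512
H = rng.standard_normal((n_samples, n_pixels))  # calibration signatures
x_true = rng.random(n_pixels)                   # "wide-field image" (flattened)
y = H @ x_true                                  # single-shot temporal trace

# Recover the image from the one trace by solving the linear system
# in the least-squares sense (exact here because the model is noiseless
# and the signatures are linearly independent).
x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
```

Because `n_samples > n_pixels` and the random signatures are (almost surely) linearly independent, the least-squares solution recovers `x_true` exactly in this noiseless toy; real calibrated signatures are correlated and noisy, which is why the paper's regularized reconstruction is needed.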
Single-shot ultrafast optical imaging can capture two-dimensional transient scenes in the optical spectral range at ≥100 million frames per second. This rapidly evolving field surpasses conventional pump-probe methods by possessing real-time imaging capability, which is indispensable for recording non-repeatable and difficult-to-reproduce events and for understanding physical, chemical, and biological mechanisms. In this mini-review, we comprehensively survey the state of the art in single-shot ultrafast optical imaging. Based on the illumination requirement, we categorize the field into active-detection and passive-detection domains. Depending on the specific image-acquisition and reconstruction strategies, these two categories are further divided into a total of six sub-categories. Under each sub-category, we describe operating principles, present representative cutting-edge techniques with a particular emphasis on their methodology and applications, and discuss their advantages and challenges. Finally, we envision prospects of technical advancement in this field.
While the concept of focusing usually applies to the spatial domain, it is equally applicable to the time domain. Real-time imaging of temporal focusing of single ultrashort laser pulses is of great significance in exploring the physics of the space–time duality and finding diverse applications. The drastic changes in the width and intensity of an ultrashort laser pulse during temporal focusing impose a requirement for femtosecond-level exposure to capture the instantaneous light patterns generated in this exquisite phenomenon. Thus far, established ultrafast imaging techniques either struggle to reach the desired exposure time or require repeatable measurements. We have developed single-shot 10-trillion-frame-per-second compressed ultrafast photography (T-CUP), which passively captures dynamic events with 100-fs frame intervals in a single camera exposure. The synergy between compressed sensing and the Radon transformation empowers T-CUP to significantly reduce the number of projections needed for reconstructing a high-quality three-dimensional spatiotemporal datacube. As the only currently available real-time, passive imaging modality with a femtosecond exposure time, T-CUP was used to record the first-ever movie of non-repeatable temporal focusing of a single ultrashort laser pulse in a dynamic scattering medium. T-CUP’s unprecedented ability to clearly reveal the complex evolution in the shape, intensity, and width of a temporally focused pulse in a single measurement paves the way for single-shot characterization of ultrashort pulses, experimental investigation of nonlinear light-matter interactions, and real-time wavefront engineering for deep-tissue light focusing.
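The compressed-sensing principle that T-CUP relies on — recovering a large datacube from far fewer measurements than unknowns, given sparsity — can be sketched with a minimal 1-D toy reconstruction. This is a generic ISTA (iterative soft-thresholding) example under assumed sizes and a random sensing matrix, not the paper's Radon-transform-based spatiotemporal reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressed-sensing problem: recover a k-sparse signal x from
# m < n noiseless random linear measurements y = A @ x.
n, m, k = 200, 80, 5            # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA for the LASSO objective: min_x 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)                                  # gradient step
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)     # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The sparsity prior is what makes the underdetermined system (80 equations, 200 unknowns) solvable; T-CUP applies the same principle in three dimensions, with the Radon-transform structure reducing how many projections are needed.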