Advances in functional brain imaging now allow sustained rapid 3D visualization of large numbers of neurons inside behaving animals. To decode circuit activity, imaged neurons must be individually segmented and tracked. This is particularly challenging when the brain itself moves and deforms inside a flexible body. The field has lacked general methods for solving this problem effectively. To address this need, we developed a method based on a convolutional neural network (CNN) with specific enhancements, which we apply to freely moving Caenorhabditis elegans. For a traditional CNN to track neurons across images of a brain with different postures, the CNN must be trained with ground truth (GT) annotations of similar postures. When these postures are diverse, the number of GT annotations required can be prohibitively large to generate manually. We introduce 'targeted augmentation', a method to automatically synthesize reliable annotations from a few manual annotations. Our method effectively learns the internal deformations of the brain. The learned deformations are used to synthesize annotations for new postures by deforming the manual annotations of similar postures in GT images. The technique is germane to 3D images, which are generally more difficult to analyze than 2D images. The synthetic annotations, which are added to diversify training datasets, drastically reduce manual annotation and proofreading. Our method is effective whether neurons are represented as individual points or as 3D volumes. We provide a GUI that incorporates targeted augmentation in an end-to-end pipeline, from manual GT annotation of a few images to final proofreading of all images. We apply the method to simultaneously measure activity in the second-layer interneurons in C. elegans: RIA, RIB, and RIM, including the RIA neurite. We find that these neurons show rich behaviors, including switching entrainment on and off dynamically when the animal is exposed to periodic odor pulses.
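The core idea of deformation-based augmentation can be illustrated with a minimal sketch: apply one smooth displacement field to both an image and its point annotations, producing a new synthetic annotated posture. This is not the authors' pipeline; the random field here is a stand-in for a learned deformation, and all function and parameter names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def synthesize_annotation(image, points, max_shift=3.0, smooth=8.0, seed=0):
    """Warp a 2D image and move its point annotations consistently.

    image  : 2D array (a slice of a volume, for simplicity)
    points : (n, 2) float array of annotated neuron positions (row, col)
    Returns the warped image and the displaced annotations.
    """
    rng = np.random.default_rng(seed)
    # One smooth random displacement component per axis, scaled so the
    # largest displacement is max_shift pixels (stand-in for a learned field).
    disp = [gaussian_filter(rng.standard_normal(image.shape), smooth)
            for _ in range(2)]
    disp = [d / (np.abs(d).max() + 1e-12) * max_shift for d in disp]

    # Warp the image: sample the original at displaced coordinates.
    grid = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    warped = map_coordinates(image, coords, order=1, mode="nearest")

    # Move annotations with the small-deformation approximation of the
    # inverse map: a feature at p lands near p - disp(p).
    new_points = []
    for p in points:
        idx = tuple(np.clip(np.round(p).astype(int), 0,
                            np.array(image.shape) - 1))
        new_points.append([p[k] - disp[k][idx] for k in range(2)])
    return warped, np.array(new_points)
```

The same displacement field moves pixels and annotations together, which is what keeps the synthetic labels reliable; in the paper's setting the field would be learned from pairs of postures rather than drawn at random.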
It is well known that the power spectrum is not able to fully characterize the statistical properties of non-Gaussian density fields. Recently, many different statistics have been proposed to extract information from non-Gaussian cosmological fields that perform better than the power spectrum. The Fisher matrix formalism is commonly used to quantify the accuracy with which a given statistic can constrain the value of the cosmological parameters. However, these calculations typically rely on the assumption that the sampling distribution of the considered statistic follows a multivariate Gaussian distribution. In this work, we follow Sellentin & Heavens and use two different statistical tests to identify non-Gaussianities in different statistics such as the power spectrum, bispectrum, marked power spectrum, and wavelet scattering transform (WST). We remove the non-Gaussian components of the different statistics and perform Fisher matrix calculations with the Gaussianized statistics using Quijote simulations. We show that constraints on the parameters can change by a factor of ∼2 in some cases. We show with simple examples how statistics that do not follow a multivariate Gaussian distribution can achieve artificially tight bounds on the cosmological parameters when using the Fisher matrix formalism. We think that the non-Gaussian tests used in this work represent a powerful tool to quantify the robustness of Fisher matrix calculations and their underlying assumptions. We release the code used to compute the power spectra, bispectra, and WST that can be run on both CPUs and GPUs.
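The Gaussian-likelihood assumption being tested enters through the standard Fisher forecast, which combines numerical derivatives of a statistic with a covariance matrix estimated from simulations. The sketch below shows that baseline calculation (with the Hartlap correction for inverting a sample covariance); it is a generic illustration, not the paper's code, and the function names are illustrative.

```python
import numpy as np

def fisher_matrix(sims_fid, derivs):
    """Fisher matrix under the multivariate-Gaussian assumption.

    sims_fid : (n_sims, n_bins) statistic measured on fiducial simulations
    derivs   : (n_params, n_bins) derivative of the mean statistic with
               respect to each cosmological parameter
    """
    n_sims, n_bins = sims_fid.shape
    cov = np.cov(sims_fid, rowvar=False)
    # Hartlap (2007) debiasing factor for the inverse sample covariance.
    hartlap = (n_sims - n_bins - 2) / (n_sims - 1)
    icov = hartlap * np.linalg.inv(cov)
    # F_ab = d_a C^{-1} d_b^T
    return derivs @ icov @ derivs.T

def forecast_errors(fisher):
    """Marginalized 1-sigma parameter errors: sqrt(diag(F^-1))."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))
```

Everything here presumes the statistic's sampling distribution is multivariate Gaussian; the paper's point is that when that assumption fails, the resulting bounds can be artificially tight by up to a factor of ~2.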
Cosmological surveys must correct their observations for the reddening of extragalactic objects by Galactic dust. Existing dust maps, however, have been found to have spatial correlations with the large-scale structure of the Universe. Errors in extinction maps can propagate systematic biases into samples of dereddened extragalactic objects and into cosmological measurements such as correlation functions between foreground lenses and background objects and the primordial non-Gaussianity parameter f NL. Emission-based maps are contaminated by the cosmic infrared background, while maps inferred from stellar reddenings suffer from imperfect removal of quasars and galaxies from stellar catalogs. Thus, stellar-reddening-based maps using catalogs without extragalactic objects offer a promising path to making dust maps with minimal correlations with large-scale structure. We present two high-latitude integrated extinction maps based on stellar reddenings, with point-spread functions of FWHM 6.′1 and 15′. We employ a strict selection of catalog objects to filter out galaxies and quasars and measure the spatial correlation of our extinction maps with extragalactic structure. Our Galactic extinction maps have reduced spatial correlation with large-scale structure relative to most existing stellar-reddening-based and emission-based extinction maps.
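The diagnostic at the heart of this comparison is the spatial correlation between an extinction map and a tracer of extragalactic structure. A minimal pixel-space version is a masked Pearson correlation between the two maps on a common grid; this is a simplified illustration (real analyses typically work with angular cross-power spectra or correlation functions), and the function name is illustrative.

```python
import numpy as np

def map_cross_correlation(ext_map, lss_map):
    """Pearson correlation between a dust-extinction map and a
    galaxy-overdensity map on matched pixels; NaN pixels are masked."""
    mask = np.isfinite(ext_map) & np.isfinite(lss_map)
    a = ext_map[mask] - ext_map[mask].mean()
    b = lss_map[mask] - lss_map[mask].mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))
```

A dust map free of extragalactic contamination should give a correlation consistent with zero against a galaxy or quasar overdensity map, which is the sense in which the maps presented here improve on existing ones.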