Detailed knowledge of biological structure has been key to understanding biology at several levels of organisation, from organs to cells and proteins. Volume electron microscopy (volume EM) provides high-resolution 3D structural information about tissues at the nanometre scale. However, the throughput of conventional electron microscopes has limited the volume size and the number of samples that can be imaged. Recent methodological improvements are driving a revolution in volume EM, making structural imaging of whole organs and small organisms possible. In turn, these developments in image acquisition have created or exacerbated bottlenecks in other parts of the pipeline, such as sample preparation, image analysis and data management. While progress in image analysis has been remarkable thanks to the advent of automatic segmentation and server-based annotation tools, several challenges remain. Here we discuss recent trends in volume EM, emerging methods for increasing throughput, and the implications for sample preparation, image analysis and data management.
Volume electron microscopy (EM) of biological systems has grown exponentially in recent years due to innovative large-scale imaging approaches. As a standalone imaging method, however, large-scale EM typically has two major limitations: slow acquisition rates and the difficulty of providing targeted biological information. We developed a 3D image acquisition and reconstruction pipeline that overcomes both of these limitations by using a widefield fluorescence microscope integrated inside a scanning electron microscope. The workflow consists of acquiring large field-of-view fluorescence microscopy (FM) images, which guide navigation to regions of interest for subsequent EM (integrated correlative light and electron microscopy). High-precision EM-FM overlay is achieved using cathodoluminescent markers. We demonstrate a proof of concept of our integrated workflow on immunolabelled serial sections of tissues. Acquisitions are limited to regions containing biological targets, expediting total acquisition times and reducing the burden of excess data by tens to hundreds of gigabytes.
We present photometric data of the classical nova V723 Cas (Nova Cas 1995), spanning 10 years (2006 through 2016), taken with the 0.9 m telescope at Lowell Observatory, operated as the National Undergraduate Research Observatory (NURO) on Anderson Mesa near Flagstaff, Arizona. A photometric analysis of the data produced light curves in the optical bands (Bessell B, V, and R filters). The data analyzed here reveal an asymmetric light curve (steep rise to maximum, followed by a slow decline to minimum), the overall structure of which exhibits pronounced evolution, including a year-to-year decrease in magnitude at a rate of ∼0.15 mag yr−1. We model these data with an irradiated secondary and an accretion disk with a hot spot using the eclipsing binary modeling program Nightfall. We find that each season of observation can be modeled reasonably well by changing very few parameters. The longitude of the hot spot on the disk and the brightness of the irradiated spot on the companion are responsible for the majority of the observed changes in the light curve shape and amplitude until 2009. After that, a decrease in the temperature of the white dwarf is required to model the observed light curves. This is supported by Swift/X-Ray Telescope observations, which indicate that nuclear fusion has ceased and that V723 Cas is no longer detectable in X-rays.
Volume electron microscopy (EM) is increasingly relied upon to answer complex biological questions, thanks in large part to revolutionary developments in instrumentation and sample preparation techniques. However, owing to the slow nature of electron-probe-based scanning techniques, throughput has long remained the primary obstacle to imaging larger areas at high resolution. An undesirable trade-off is therefore commonly made between imaging resolution, volume, and time. A recent approach that attempts to circumvent this trade-off involves selectively imaging the specimen in multiple iterations at increasing levels of magnification. Between iterations, the EM dataset is reconstructed and inspected to locate sites for higher-magnification imaging in the next iteration [1]. While this approach can outpace indiscriminate scanning of the entire volume, overall throughput is limited by the overhead of reconstructing and evaluating the intermediate datasets.