Organisms and organs come in many sizes and shapes. Size is easy for science to quantify, but how does one quantify shape? How similar are two birds or two brains? This problem is particularly pressing in cases like brains, where structure reflects function. The problem is not new, but satisfying solutions have yet to be worked out. For brain anatomy, no general methodology for a statistically secured quantitative description is available. Using the small brain of the fly Drosophila melanogaster, we have explored a new approach combining immunohistochemistry, high-resolution 3D confocal microscopy, and advanced graphics computing. For a genetic model organism such as Drosophila, a quantitative assessment of brain structure is particularly rewarding, since it allows for the identification of genetic variants with subtle brain structure phenotypes and, even more importantly, the organization of the wealth of gene expression patterns in the brain into a genetic atlas linking molecular and organismic gene function. We now provide a representative standard for the brain of wild-type D. melanogaster, with means and variances for several aspects of its shape. Its application to volumetry, mutants, and gene expression patterns is demonstrated.
We describe a novel method for continuously transforming two triangulated models of arbitrary topology into each other. The method assumes that both objects share the same global topology; however, extensions for genus changes during metamorphosis are provided. The proposed method addresses the major challenge in 3D metamorphosis, namely, specifying the morphing process intuitively, with minimal user interaction yet sufficient detail. Corresponding regions and point features are identified interactively. These regions are then parametrized automatically and consistently, providing a basis for smooth interpolation. Suitable 3D interaction techniques offer simple and intuitive control over the whole morphing process.
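Once corresponding regions have been consistently parametrized, both models can be resampled onto a shared connectivity, and the transformation itself reduces to interpolating matched vertex positions over time. A minimal sketch of that final interpolation step, assuming a one-to-one vertex correspondence has already been established (Python/NumPy; the function name and the simple linear schedule are illustrative assumptions, not the paper's method):

```python
import numpy as np

def morph_vertices(v_source, v_target, t):
    """Interpolate between two meshes with matched vertices.

    v_source, v_target: (N, 3) arrays of corresponding vertex
    positions, obtained from a shared, consistent parametrization.
    t: morphing parameter in [0, 1]; t=0 yields the source shape,
    t=1 the target shape, intermediate t a blend of the two.
    """
    v_source = np.asarray(v_source, dtype=float)
    v_target = np.asarray(v_target, dtype=float)
    # Simple linear blend; smoother schedules (e.g. ease-in/out)
    # can be substituted for t without changing the structure.
    return (1.0 - t) * v_source + t * v_target
```

In practice the hard part is establishing the correspondence and consistent parametrization; the blend above only works once every vertex of the source has a well-defined partner on the target.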
Two distinct neuronal pathways connect the first olfactory neuropil, the antennal lobe, with higher integration areas, such as the mushroom bodies, via antennal lobe projection neurons. Intracellular recordings were used to address the question of whether neuroanatomical features affect odor-coding properties. We found that neurons in the median antennocerebral tract code odors by latency differences or by specific inhibitory phases in combination with excitatory phases, have a more specific activity profile across different odors, and convey the information with a delay. The neurons of the lateral antennocerebral tract code odors by spike rate differences, have a broader activity profile across different odors, and convey the information quickly. Thus, only coarse, preliminary information about the olfactory stimulus first reaches the mushroom bodies and the lateral horn via neurons of the lateral antennocerebral tract; subsequently, the odor information is refined by the activity of neurons of the median antennocerebral tract. We conclude that this neuroanatomical feature is not related to the distinction between different odors, but rather reflects a dual coding of the same odor stimuli by two different neuronal strategies focusing on different properties of the same stimulus.
A new technique for interactive vector field visualization using large numbers of properly illuminated field lines is presented. Taking into account ambient, diffuse, and specular reflection terms as well as transparency and depth cueing, we employ a realistic shading model which significantly increases the quality and realism of the resulting images. While many graphics workstations offer hardware support for illuminating surface primitives, they usually provide no means for accurate shading of line primitives. However, we show that proper illumination of lines can be implemented by exploiting the texture mapping capabilities of modern graphics hardware. In this way, high rendering performance with interactive frame rates can be achieved. We apply the technique to render large numbers of integral curves of a vector field. The visual impression of the resulting images can be further improved by enhancements such as transparency and depth cueing. We also describe methods for controlling the distribution of field lines in space. These methods enable us to use illuminated field lines for interactive exploration of vector fields.
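The shading of line primitives can be made concrete. A line has no unique surface normal; among all normals perpendicular to the tangent T, one can choose the normal maximizing each reflection term, which reduces the diffuse and specular factors to functions of the dot products L·T and V·T alone. In the hardware technique, these two dot products index a precomputed 2D texture; the sketch below instead evaluates the terms directly on the CPU (Python/NumPy; the function name, the Phong coefficients, and the maximizing-normal derivation are illustrative assumptions, not necessarily the paper's exact formulas):

```python
import numpy as np

def illuminate_line(tangent, light, view, ka=0.1, kd=0.6, ks=0.3, n=32):
    """Phong-style intensity for a line primitive.

    tangent: direction of the field line at the shaded point.
    light, view: unit-length (after normalization) directions toward
    the light source and the viewer, respectively.
    """
    t = np.asarray(tangent, float); t /= np.linalg.norm(t)
    l = np.asarray(light, float);   l /= np.linalg.norm(l)
    v = np.asarray(view, float);    v /= np.linalg.norm(v)
    lt, vt = np.dot(l, t), np.dot(v, t)
    # Diffuse: L.N with N the normal (perpendicular to T) that
    # maximizes L.N, giving |L - (L.T)T| = sqrt(1 - (L.T)^2).
    diffuse = np.sqrt(max(0.0, 1.0 - lt * lt))
    # Specular: V.R with the normal chosen to maximize the term;
    # this also depends only on L.T and V.T.
    vr = np.sqrt(max(0.0, (1.0 - lt * lt) * (1.0 - vt * vt))) - lt * vt
    specular = max(0.0, vr) ** n
    return ka + kd * diffuse + ks * specular
```

Because both factors depend only on (L·T, V·T), the whole computation collapses to a texture lookup per vertex, which is what makes interactive frame rates for very large numbers of field lines feasible.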