The anatomical substrates of neural nets are usually composed from reconstructions of neurons that were stained in different preparations. Realistic models of the structural relationships between neurons require a common framework. Here we present 3-D reconstructions of single projection neurons (PN) connecting the antennal lobe (AL) with the mushroom body (MB) and lateral horn, groups of intrinsic mushroom body neurons (type 5 Kenyon cells), and a single mushroom body extrinsic neuron (PE1), aiming to compose components of the olfactory pathway in the honeybee. To do so, we constructed a digital standard atlas of the bee brain. The standard atlas was created as an average-shape atlas of 22 neuropils, calculated from 20 individual immunostained whole-mount bee brains. After correcting for global size and positioning differences by repeatedly applying an intensity-based nonrigid registration algorithm, a sequence of average label images was created. The results were evaluated qualitatively by generating average gray-value images corresponding to the average label images and judging the level of detail within the labeled regions. We found that the first, affine registration step in the sequence results in a blurred image because of considerable local shape differences. However, the first nonrigid iteration in the sequence already corrected for most of the shape differences among individuals, resulting in images rich in internal detail. A second iteration improved the result further and was selected as the standard.
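Once the individual brains are registered to a common frame, the average label image described above amounts to a per-voxel majority vote over the co-registered label volumes. The sketch below illustrates only that final voting step; the registration itself is omitted, and the helper name `average_label_image` and the toy arrays are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def average_label_image(label_volumes, n_labels):
    """Per-voxel majority vote over co-registered label volumes.

    label_volumes : list of integer arrays of identical shape, one per
        individual brain, already registered to a common reference frame
        (the registration step is not shown here).
    n_labels : number of distinct labels, including background 0.
    """
    stack = np.stack(label_volumes)          # shape: (n_subjects, *volume)
    # Count, for each label, how many subjects vote for it at each voxel.
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (stack == lbl).sum(axis=0)
    # The average label image assigns each voxel its most frequent label.
    return votes.argmax(axis=0)

# Toy example: three 2x2 "volumes", labels 0 (background) and 1 (neuropil).
a = np.array([[1, 0], [1, 1]])
b = np.array([[1, 0], [0, 1]])
c = np.array([[1, 1], [0, 1]])
avg = average_label_image([a, b, c], n_labels=2)
```

With ties broken toward the lower label index by `argmax`, the voxels where at least two of the three toy volumes agree on label 1 receive label 1.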
Registering neurons from different preparations into the standard atlas reveals 1) that the m-ACT neuron occupies the entire glomerulus (cortex and core) and overlaps with a local interneuron in the cortical layer; 2) that, in the MB calyces and the lateral horn of the protocerebral lobe, the axon terminals of two identified m-ACT neurons arborize in separate but close areas of the neuropil; and 3) that MB-intrinsic clawed Kenyon cells (type 5), with somata outside the calycal cups, project to the peduncle and lobe output system of the MB and contact (or closely approach) the dendritic tree of the PE1 neuron at the base of the vertical lobe. Thus the standard atlas and the procedures applied for registration serve the function of creating realistic neuroanatomical models of parts of a neural net. The Honeybee Standard Brain is accessible at www.neurobiologie.fu-berlin.de/beebrain.
Abstract: The study of the cerebral microvascular network requires high-resolution images. However, to obtain statistically relevant results, a large area of the brain (a few square millimeters) has to be investigated. This leads to huge images, too large to be loaded and processed at once in the memory of a standard computer. Covering such a large area therefore requires a compact representation of the vessels, and the medial axis is the tool of choice for the aimed application. To extract it, a dedicated skeletonization algorithm is proposed. Indeed, a skeleton must be homotopic, thin and medial with respect to the object it represents. Numerous approaches already exist, most of which focus on computational efficiency. However, they all implicitly assume that the image can be processed entirely in the computer's memory, which is not realistic for data of the size considered here. We present in this paper a skeletonization algorithm that processes data locally (in sub-images) while preserving global properties (i.e., homotopy). We then show some results obtained on a mosaic of 3-D images acquired by confocal microscopy. Keywords: image mosaic, digital topology, chamfer map, medial axis, skeleton, topological thinning
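The chamfer map listed in the keywords is a classical integer approximation of the distance transform, computed in two raster-scan passes over the image. The sketch below shows the standard 3-4 chamfer in 2-D (the paper works on 3-D data); the function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

INF = 10**6  # stands in for "infinity" before relaxation

def chamfer_map(mask):
    """Two-pass 3-4 chamfer distance transform of a binary 2-D mask.

    For each foreground pixel, returns an integer approximating
    3 * (Euclidean distance to the nearest background pixel):
    orthogonal steps cost 3, diagonal steps cost 4.
    """
    h, w = mask.shape
    d = np.where(mask, INF, 0).astype(np.int64)
    # Forward pass: scan top-left to bottom-right, looking at the
    # already-visited neighbors (left, up, and the two upper diagonals).
    for y in range(h):
        for x in range(w):
            if d[y, x] == 0:
                continue
            best = d[y, x]
            if x > 0:
                best = min(best, d[y, x - 1] + 3)
            if y > 0:
                best = min(best, d[y - 1, x] + 3)
                if x > 0:
                    best = min(best, d[y - 1, x - 1] + 4)
                if x < w - 1:
                    best = min(best, d[y - 1, x + 1] + 4)
            d[y, x] = best
    # Backward pass: scan bottom-right to top-left with the mirrored mask.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            best = d[y, x]
            if x < w - 1:
                best = min(best, d[y, x + 1] + 3)
            if y < h - 1:
                best = min(best, d[y + 1, x] + 3)
                if x < w - 1:
                    best = min(best, d[y + 1, x + 1] + 4)
                if x > 0:
                    best = min(best, d[y + 1, x - 1] + 4)
            d[y, x] = best
    return d
```

In a skeletonization context the chamfer map supplies the "medial" criterion: pixels whose chamfer value is locally maximal lie near the medial axis and are protected during thinning.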
Skeletons are compact representations that allow mathematical analysis of objects. A skeleton must be homotopic, thin and medial in relation to the object it represents. Numerous approaches already exist, most of which focus on computational efficiency. However, when dealing with data too large to be loaded into the main memory of a personal computer, such approaches can no longer be used. We present in this article a skeletonization algorithm that processes the data locally (in sub-images) while preserving global properties (medial localization). Our target application is the study of the cerebral micro-vascularisation, and we show some results obtained on a mosaic of 3-D images acquired by confocal microscopy.
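The idea of processing data locally in sub-images is easiest to see with a purely local operator: each tile is read with a small halo of surrounding pixels so that its interior result matches the global computation, and only the halo-free core is written back. The sketch below (2-D, with assumed helper names `erode3x3` and `tiled_apply`) illustrates only this tiling machinery; preserving homotopy during thinning needs more than a fixed halo, which is precisely the difficulty the paper addresses.

```python
import numpy as np

def erode3x3(a):
    """Binary erosion with a 3x3 structuring element (zero-padded borders)."""
    p = np.pad(a, 1, constant_values=0)
    out = np.ones(a.shape, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + a.shape[0],
                     1 + dx:1 + dx + a.shape[1]].astype(bool)
    return out

def tiled_apply(image, op, tile=64, halo=1):
    """Apply a local operator tile-by-tile.

    Each tile is read with `halo` extra pixels on every side, so the
    operator sees the same neighborhood it would in a global pass;
    only the tile core (without the halo) is written to the output.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=bool)
    for y0 in range(0, h, tile):
        for x0 in range(0, w, tile):
            y1, x1 = min(y0 + tile, h), min(x0 + tile, w)
            ys, xs = max(y0 - halo, 0), max(x0 - halo, 0)
            ye, xe = min(y1 + halo, h), min(x1 + halo, w)
            sub = op(image[ys:ye, xs:xe])
            out[y0:y1, x0:x1] = sub[y0 - ys:y0 - ys + (y1 - y0),
                                    x0 - xs:x0 - xs + (x1 - x0)]
    return out
```

For an operator with a bounded support, such as this erosion, the tiled result is identical to the global one. Topological thinning has no such bounded support, which is why a dedicated algorithm is needed to keep the sub-image results globally consistent.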
Background: To externally evaluate the first picture archiving and communication system (PACS)‐integrated artificial intelligence (AI)‐based workflow, trained to automatically detect a predefined computed tomography (CT) slice at the third lumbar vertebra (L3) and automatically perform complete image segmentation for analysis of CT body composition, and to compare its performance with that of an established semi‐automatic segmentation tool regarding speed and accuracy of tissue area calculation. Methods: For fully automatic analysis of body composition with L3 recognition, U‐Nets were trained (Visage) and compared with a conventional image segmentation software (TomoVision). Tissue was differentiated into psoas muscle, skeletal muscle, visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT). Mid‐L3 level images from randomly selected DICOM slice files of 20 CT scans acquired with various imaging protocols were segmented with both methods. Results: The success rate of AI‐based L3 recognition was 100%. Compared with semi‐automatic segmentation, fully automatic AI‐based image segmentation yielded relative differences of 0.22% and 0.16% for skeletal muscle, 0.47% and 0.49% for psoas muscle, 0.42% and 0.42% for VAT, and 0.18% and 0.18% for SAT. AI‐based fully automatic segmentation was significantly faster than semi‐automatic segmentation (3 ± 0 s vs. 170 ± 40 s, P < 0.001, for User 1 and 152 ± 40 s, P < 0.001, for User 2). Conclusion: Rapid, fully automatic, AI‐based, PACS‐integrated assessment of body composition yields virtually identical results without transfer of critical patient data. Additional metabolic information can be inserted into the patient's image report and offered to the referring clinicians.
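The tissue area calculation being compared here reduces, for any segmentation method, to counting pixels per tissue label and scaling by the physical pixel size from the DICOM PixelSpacing attribute. The snippet below is a hypothetical illustration of that final step only; the label mapping and the function name are assumptions, not part of either evaluated software.

```python
import numpy as np

# Hypothetical label convention for a mid-L3 tissue segmentation mask.
LABELS = {"skeletal_muscle": 1, "psoas": 2, "vat": 3, "sat": 4}

def tissue_areas_cm2(mask, pixel_spacing_mm):
    """Cross-sectional area (cm^2) of each tissue in a 2-D label mask.

    mask : 2-D integer array of per-pixel tissue labels.
    pixel_spacing_mm : (row, col) spacing in mm, as in DICOM PixelSpacing.
    """
    px_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
    return {name: int((mask == lbl).sum()) * px_area_cm2
            for name, lbl in LABELS.items()}

# Toy 4x4 mask with 1 mm x 1 mm pixels, so each pixel is 0.01 cm^2.
m = np.array([[1, 1, 0, 0],
              [1, 2, 3, 0],
              [4, 4, 3, 0],
              [0, 0, 0, 0]])
areas = tissue_areas_cm2(m, (1.0, 1.0))
```

Because both methods compute areas this way from their respective masks, the sub-1% relative differences reported above reflect small disagreements in per-pixel labeling rather than in the area formula itself.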