Abstract. BACKGROUND: Laminography is a tomographic technique that allows three-dimensional imaging of flat, elongated objects that stretch beyond the extent of a reconstruction volume. Laminography datasets can be reconstructed using iterative algorithms based on the Kaczmarz method. OBJECTIVE: The goal of this study is to develop a reconstruction algorithm that provides superior reconstruction quality for a challenging class of problems. METHODS: Images are represented in computer memory using coefficients over basis functions, typically piecewise constant functions (voxels). By replacing voxels with spherically symmetric volume elements (blobs) based on generalized Kaiser-Bessel window functions, we obtained an adapted version of the algebraic reconstruction technique. RESULTS: The band-limiting properties of blob functions are particularly beneficial in the case of noisy projections and if only a limited number of projections is available. In this case, using blob basis functions improved the full-width-at-half-maximum resolution from 10.2 ± 1.0 to 9.9 ± 0.9 (p value = 2.3·10⁻⁴). For the same dataset, the signal-to-noise ratio improved from 16.1 to 31.0. The increased computational demand per iteration is compensated for by a faster convergence rate, such that the overall performance is approximately identical for blobs and voxels. CONCLUSIONS: Despite the higher complexity, tomographic reconstruction from computed laminography data should be implemented using blob basis functions, especially if noisy data are expected.
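Because the abstract turns on the generalized Kaiser-Bessel window function, a minimal sketch of its radial profile may help. The parameter values (support radius a, taper alpha, order m) and the function name are illustrative defaults from the blob literature, not necessarily the values used in this study.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, order m

def kaiser_bessel_blob(r, a=2.0, alpha=10.4, m=2):
    """Radial profile of a generalized Kaiser-Bessel blob (illustrative parameters).

    r     : radial distance from the blob centre
    a     : support radius; the profile is zero for r > a
    alpha : taper parameter controlling the effective band limit
    m     : smoothness order (m = 2 gives a continuous derivative at r = a)
    """
    r = np.asarray(r, dtype=float)
    profile = np.zeros_like(r)
    inside = r <= a
    t = np.sqrt(1.0 - (r[inside] / a) ** 2)
    profile[inside] = (t ** m) * iv(m, alpha * t) / iv(m, alpha)
    return profile

# Sample the smooth profile that replaces the piecewise-constant voxel footprint.
print(kaiser_bessel_blob(np.linspace(0.0, 2.5, 6)))
```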
We conducted a comparative study of three widely used algorithms for the detection of fiducial markers in electron microscopy images. The algorithms were applied to four datasets from different sources. To obtain comparable results, we introduced figures of merit and implemented all three algorithms in a unified code base to exclude software-specific differences. The application of the algorithms revealed that none of the three is superior to the others in all cases. This leads to the conclusion that the choice of a marker detection algorithm depends strongly on the properties of the dataset to be analyzed, even within the narrow domain of electron tomography.
We present a novel software package for tomographic reconstruction in electron microscopy, named Ettention [1]. The software consists of a set of modular building blocks for iterative reconstruction algorithms. Ettention simultaneously features (1) a modular, object-oriented software design, (2) optimized access to high-performance computing (HPC) platforms such as graphics processing units (GPUs) or many-core architectures like Xeon Phi, and (3) accessibility for microscopy end-users via integration into the IMOD package and user interface. We provide developers with a clean application programming interface (API) that allows the software to be extended easily, which makes it an ideal platform for algorithmic research while hiding most of the technical details of high-performance computing. Several case studies are provided to demonstrate the feasibility of the concept [2].
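As a rough illustration of the building-block idea, the sketch below assembles a plain SIRT-style loop from interchangeable forward- and back-projection components. All class and function names are invented for this example and do not correspond to Ettention's actual API.

```python
import numpy as np

class ForwardProjectionBlock:
    """Maps a volume to simulated projections via a system matrix A."""
    def __init__(self, system_matrix):
        self.A = system_matrix
    def apply(self, volume):
        return self.A @ volume

class BackProjectionBlock:
    """Maps a projection-space residual back into a volume update."""
    def __init__(self, system_matrix):
        self.A = system_matrix
    def apply(self, residual):
        return self.A.T @ residual

def sirt(projections, forward, backward, n_iter=20, relaxation=1e-3):
    # Simultaneous iterative reconstruction built from the two blocks above;
    # swapping the blocks (e.g. for a GPU-backed implementation) leaves the
    # loop unchanged, which is the point of a modular design.
    volume = np.zeros(forward.A.shape[1])
    for _ in range(n_iter):
        residual = projections - forward.apply(volume)
        volume = volume + relaxation * backward.apply(residual)
    return volume
```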
We compared three commonly used algorithms for detecting fiducial markers in electron microscopy images [1]. The algorithms were implemented in a unified codebase in the software package Ettention [2] to ensure that results are comparable and not influenced by software differences. Several evaluation metrics were introduced to assess the capabilities of the algorithms on the basis of four datasets. We showed that, depending on the dataset, different algorithms performed best. This confirmed that the choice of a marker detection algorithm depends strongly on the properties of the dataset to be analyzed, which makes it difficult to achieve the best possible marker detection on a wide range of datasets with varying properties. Hence, more sophisticated marker detection methods may be needed to ensure the proper working of subsequent steps such as alignment.

The marker detection capabilities of two cross-correlation based algorithms, one with template matching [3] and one with pattern averaging [4], both also with additional filtering of false positives, as well as one convolution-based algorithm [5], were compared on four datasets with discriminating properties and varying, complex environments for expressive testing. The datasets covered a wide range of resolutions, both image resolution and marker resolution, to ensure robustness and scalability. We compared the algorithms regarding their ability to find the correct centre coordinate of markers as well as their reliability, i.e. whether detected markers are real markers, in terms of sensitivity and in terms of making few mistakes. To unify the number of identified markers and to filter out noise, all marker candidates were ranked by each algorithm's individual score and the top 5% were chosen as identified markers.

Experiments showed that the dataset and the application determine which algorithm to choose. Cross-correlation based methods find most real markers, but also introduce many falsely detected markers; this can be partly overcome with additional filtering, which, however, also removes some real markers. The approach with pattern averaging dominates regarding the detection of the true coordinates of markers. The convolution-based method is very good at finding only real markers without introducing many false detections, but it also misses many real markers.

The results of our study showed that marker detection is an important step that should be considered and evaluated independently of subsequent processing, which has not been done in any of the evaluated approaches. Depending on the dataset, results of complete pipelines that include marker detection, e.g. alignment, may be heavily influenced by the choice of the marker detection algorithm. The study showed that a higher awareness of this fact could lead to an improvement of many applications which rely on marker detection steps. Further research towards better marker detection algorithms that are able to adapt to a given dataset, or majority-voting frameworks combining different marker detection algorithms, could support such improvements.
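The candidate-selection and evaluation protocol described above can be made concrete with a small sketch. The function names, the greedy nearest-neighbour matching, and the pixel tolerance are assumptions chosen for illustration, not details taken from the study.

```python
import numpy as np

def select_top_candidates(scores, fraction=0.05):
    """Keep the highest-scoring fraction of marker candidates (top 5% by default)."""
    scores = np.asarray(scores, dtype=float)
    k = max(1, int(round(fraction * len(scores))))
    return np.argsort(scores)[::-1][:k]  # indices of selected candidates, best first

def detection_metrics(detected, ground_truth, tolerance=5.0):
    """Sensitivity and precision of detected marker centres.

    detected, ground_truth : (N, 2) arrays of (x, y) centre coordinates
    tolerance              : maximum centre distance (pixels) to count a detection as a hit
    """
    detected = np.asarray(detected, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    if len(detected) == 0 or len(ground_truth) == 0:
        return 0.0, 0.0
    matched = np.zeros(len(ground_truth), dtype=bool)
    true_pos = 0
    for d in detected:  # greedy one-to-one matching against ground truth
        dist = np.linalg.norm(ground_truth - d, axis=1)
        j = int(np.argmin(dist))
        if dist[j] <= tolerance and not matched[j]:
            matched[j] = True
            true_pos += 1
    sensitivity = true_pos / len(ground_truth)  # fraction of real markers found
    precision = true_pos / len(detected)        # fraction of detections that are real
    return sensitivity, precision
```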