Despite considerable progress in the field of automated neurite arbor reconstruction from two-dimensional (2-D) and three-dimensional (3-D) images acquired using optical microscopes,[1] even the best available automated systems have a non-zero error rate, implying a continued need for visual proofreading and corrective editing systems. Currently available systems require the user to visually detect potential errors and perform corrective edits serially, one at a time. Consequently, it is not uncommon for proofreading an automated reconstruction to take longer than de novo manual reconstruction in the hands of a skilled operator,[2] which appears to defeat the utility of automated reconstruction systems. There is a compelling need in the neuroscience research community for smarter and more scalable proofreading tools that can significantly accelerate the process and reduce its tedium and manpower cost. This need is especially acute in large-scale, high-throughput studies that require large numbers of neurite arbor measurements.

Meeting this need calls for a well-integrated combination of methods for (i) detailed visualization of reconstructions overlaid on the source images (usually large 2-D/3-D, multi-channel image data); (ii) rapid identification of tracing errors; and (iii) rapid, minimal-effort correction of errors using interactive graphical tools.

There is a parallel need among computer science researchers who are developing new and improved algorithms for automated reconstruction. Each newly developed algorithm must be validated, and its performance quantified, in order to determine the extent to which it improves upon previously developed systems. Detailed performance evaluation data can enable algorithm developers to focus their efforts on the most important algorithm deficiencies.
Traditionally, this goal has been met by comparing automatically generated reconstructions against pre-established "gold standard" reconstructions. Currently available gold standards are generated by manual reconstruction, and are sometimes proofread by multiple human observers to produce a consensus reconstruction. This process is expensive, slow, and not scalable, and manual reconstructions are simply unavailable in an operational setting such as a high-throughput study. These considerations suggest the need for alternative performance evaluation and validation methodologies.

In this paper, we advocate edit-based methodologies for validation and performance assessment. These methods become practical once the performance of automated algorithms is close to optimal, i.e., when the differences between automated reconstructions and human reconstructions are less than 10% (Peng et al., 2010). Under these favorable conditions, a more efficient alternative methodology can, and should, be considered. In this alternativ...

Peng H., Long F., Zhao T., Myers E.W. (2010). Proof-editing is the bottleneck of 3D neuron reconstruction: the problem and solutions. Neuroinformatics, published online 17 Dec.
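To make the idea of edit-based assessment concrete, the sketch below shows one way such a score could be computed: each corrective edit made during proofreading is charged a cost, and the total cost is normalized by the size of the automated reconstruction. This is an illustrative sketch under stated assumptions, not the methodology of this paper; the edit types, their weights, and the function name are all hypothetical.

```python
# Hedged sketch of an edit-based performance score. The edit
# categories and their costs below are illustrative assumptions;
# a real system would calibrate them to measured operator effort.

EDIT_COSTS = {
    "delete_segment": 1.0,  # remove a spurious trace
    "add_segment": 2.0,     # trace a missed branch (typically more effort)
    "split": 0.5,           # break an incorrect merge
    "merge": 0.5,           # join an incorrect break
}

def edit_based_score(edits, n_segments):
    """Return a score in [0, 1]; 1.0 means no corrective edits were needed.

    edits      -- list of edit-type strings applied during proofreading
    n_segments -- number of segments in the automated reconstruction
    """
    if n_segments <= 0:
        raise ValueError("reconstruction must contain at least one segment")
    total_cost = sum(EDIT_COSTS[e] for e in edits)
    # Clamp at zero so heavily edited reconstructions do not go negative.
    return max(0.0, 1.0 - total_cost / n_segments)
```

Under this scheme, a perfect reconstruction scores 1.0, and the score degrades in proportion to the (weighted) human effort spent correcting it, which is exactly the quantity a proofreading-centric workflow cares about.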