In the setting of a challenge competition, some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic routine pathology workflow; algorithm performance was comparable with an expert pathologist interpreting whole-slide images without time constraints. Whether this approach has clinical utility will require evaluation in a clinical setting.
The lack of publicly available ground-truth data has been identified as a major challenge for transferring recent developments in deep learning to the biomedical imaging domain. Although crowdsourcing has enabled the annotation of large-scale databases of real-world images, its application to biomedical purposes requires a deeper understanding, and hence a more precise definition, of the actual annotation task. The fact that expert tasks are outsourced to non-expert users may lead to noisy annotations and disagreement between users. Although crowdsourced annotations are a valuable resource for learning annotation models, conventional machine-learning methods may have difficulty dealing with noisy annotations during training. In this manuscript, we present a new concept for learning from crowds that handles data aggregation directly as part of the learning process of the convolutional neural network (CNN) via an additional crowdsourcing layer (AggNet). In addition, we present an experimental study on learning from crowds designed to answer the following questions. 1) Can a deep CNN be trained with data collected by crowdsourcing? 2) How can the CNN be adapted to train on multiple types of annotation datasets (ground truth and crowd-based)? 3) How do the choices of annotation and aggregation affect accuracy? Our experimental setup involved Annot8, a self-implemented web platform based on the Crowdflower API that realizes image annotation tasks for a publicly available biomedical image database. Our results give valuable insights into the functionality of deep CNNs learning from crowd annotations and demonstrate the necessity of integrating data aggregation.
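To make the label-aggregation idea concrete, the following is a minimal sketch of EM-style aggregation of noisy binary crowd annotations (a Dawid-Skene-flavoured stand-alone baseline, not the CNN-integrated crowdsourcing layer the abstract describes); the function name and the vote matrix in the usage example are illustrative assumptions.

```python
import numpy as np

def aggregate_crowd_labels(votes, n_iters=20):
    """Aggregate noisy binary crowd annotations with a small EM loop
    (Dawid-Skene flavour): alternately estimate each annotator's
    sensitivity/specificity and the per-item label posteriors.
    Illustrative baseline only, not the paper's AggNet layer.

    votes: (n_items, n_annotators) array of 0/1 labels.
    Returns the posterior probability that each item's true label is 1.
    """
    votes = np.asarray(votes, dtype=float)
    # Initialise the soft labels with the per-item majority vote.
    p = votes.mean(axis=1)
    for _ in range(n_iters):
        # M-step: per-annotator sensitivity (accuracy on positives)
        # and specificity (accuracy on negatives).
        sens = (votes * p[:, None]).sum(0) / (p.sum() + 1e-9)
        spec = ((1 - votes) * (1 - p)[:, None]).sum(0) / ((1 - p).sum() + 1e-9)
        # E-step: label posteriors under the annotator noise model
        # (uniform prior over the two classes).
        log1 = (votes * np.log(sens + 1e-9)
                + (1 - votes) * np.log(1 - sens + 1e-9)).sum(1)
        log0 = ((1 - votes) * np.log(spec + 1e-9)
                + votes * np.log(1 - spec + 1e-9)).sum(1)
        p = 1.0 / (1.0 + np.exp(log0 - log1))
    return p
```

Because the E-step reweights each annotator by an estimated reliability, the aggregate can overrule a simple majority when a few annotators are consistently more accurate, which is the behaviour the integrated aggregation layer aims to learn end to end.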
Registration of vascular structures is crucial for preoperative planning, intraoperative navigation, and follow-up assessment. Typical applications include, but are not limited to, transcatheter aortic valve implantation and the monitoring of tumor vasculature or aneurysm growth. To achieve these goals, a large number of registration algorithms have been developed. With this review paper, we provide a comprehensive overview of the plethora of existing techniques, with a particular focus on suitable classification criteria such as the involved modalities or the employed optimization methods. However, we wish to go beyond a static literature review, which is naturally doomed to become outdated after a certain period of time as research progresses. We therefore augment this review paper with an extendable and interactive database, obtaining a living review whose currency extends beyond that of a printed paper. All papers in this database are labeled with one or multiple tags according to 13 carefully defined categories. The classification of all entries can then be visualized as one or multiple trees, presented via a web-based interactive app (http://livingreview.in.tum.de) that allows the user to choose a unique perspective on the literature. In addition, the user can search the underlying database for specific tags or publications related to vessel registration. Many applications of this framework are conceivable, from obtaining a general overview of the topic to helping physicians decide on the best-suited algorithm for a specific application.
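As a toy illustration of the tag-based querying such a living review supports, the snippet below filters a handful of invented entries by tag combinations; the entry titles, tags, and function name are hypothetical and not taken from the actual database.

```python
# Hypothetical miniature of a tagged living-review database: each entry
# carries tags drawn from fixed categories, and users filter by tag sets.
papers = [
    {"title": "Rigid CT/MRA vessel registration", "tags": {"rigid", "CT", "MRA"}},
    {"title": "Deformable aneurysm follow-up",    "tags": {"deformable", "CTA"}},
    {"title": "Graph-based coronary matching",    "tags": {"graph", "CTA", "rigid"}},
]

def filter_by_tags(entries, required):
    """Return the entries whose tag set contains all required tags."""
    return [e for e in entries if required <= e["tags"]]
```

Combining tags this way (set containment rather than exact match) is what lets one database back many different tree views of the same literature.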
In the current clinical workflow of endovascular abdominal aortic repair (EVAR), a stent graft is inserted into the aneurysmatic aorta under 2D angiographic imaging. Because of the missing depth information in the X-ray visualization, it is highly difficult, particularly for junior physicians, to place the stent graft in the preoperatively defined position within the aorta. Advanced 3D visualization of stent grafts is therefore highly desirable. In this paper, we present a novel algorithm that automatically matches a 3D model of the stent graft to an intraoperative 2D image showing the device. Through automatic preprocessing and a global-to-local registration approach, we are able to dispense with user interaction while still meeting the desired robustness. The complexity of our registration scheme is reduced by a semi-simultaneous optimization strategy that incorporates constraints corresponding to the geometric model of the stent graft. Through experiments on synthetic, phantom, and real interventional data, we show that the presented method matches the stent graft model to the 2D image data with good accuracy.
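The core 2D/3D matching problem can be illustrated in a deliberately reduced form: the sketch below aligns the orthographic projection of a 3D point model to observed 2D image points via a closed-form in-plane translation. The paper's actual method (global-to-local registration with geometric stent-graft constraints under X-ray projection) is far richer; the function name and the synthetic setup are assumptions made for illustration.

```python
import numpy as np

def register_translation(model3d, observed2d):
    """Align the orthographic projection of a 3D point model to observed
    2D image points with a closed-form in-plane translation (centroid
    alignment). A reduced illustration of 2D/3D matching, not the
    paper's constrained global-to-local registration.

    model3d:    (n, 3) model points.
    observed2d: (n, 2) corresponding image points.
    Returns the estimated translation and the mean residual.
    """
    proj = np.asarray(model3d, dtype=float)[:, :2]   # orthographic projection
    obs = np.asarray(observed2d, dtype=float)
    t = obs.mean(axis=0) - proj.mean(axis=0)         # least-squares translation
    residual = np.linalg.norm(proj + t - obs, axis=1).mean()
    return t, residual
```

In a full pipeline this closed-form step would only serve as a coarse global initialization, with the local refinement then optimizing pose and the stent-graft geometry constraints simultaneously.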