We propose, and formalize, a new framework for research synthesis of both evidence and influence, named 'research weaving'. It summarizes and visualizes information content, history, and networks among a collection of diverse publication types on any given topic. Research weaving achieves this feat by combining the power of two methodologies: systematic mapping and bibliometrics. Systematic mapping provides a snapshot of the current state of knowledge, identifying areas needing more research attention and those ready for full synthesis (e.g., using meta-analysis). Bibliometrics enables researchers to see how pieces of evidence are connected, revealing the structure and the evolution of a field. We explain how to become a 'research weaver', and discuss how research weaving may change the landscape of research synthesis.

Keywords: meta-research, quantitative synthesis, systematic review, Big Data, data visualization, evidence synthesis

Research fields are flooded with torrents of publications, and researchers require informative reviews to stay afloat. For many years, researchers sought expert opinions from narrative reviews (see Glossary) to obtain and update their knowledge of a research topic or question [1]. These reviews were valuable not just for summarizing 'facts' about a particular research field, but also for giving broader insights, such as identifying the origin and development of key theoretical concepts, or drawing attention to ideas that deserved greater research focus. More sophisticated syntheses, systematic review and meta-analysis [2-8], are now commonly used; these incorporate systematic and often quantitative methods to extract factual information from the literature in a reliable manner. However, both these syntheses have their limitations. They are not practical for broad fields encompassing thousands of publications, and cannot handle a highly heterogeneous literature. A new technique has emerged to deal with these limitations: mapping. Currently, scientists 'map' research evidence using two complementary methodologies of different origins: systematic mapping and bibliometrics. Systematic mapping (sometimes called 'evidence mapping') is a method derived from systematic reviews, with the goal of classifying the types of research on a broad topic [9-14]. Systematic mapping is still a nascent methodology, with the first systematic maps appearing only in the last decade [9, 10]. In addition to providing a written report, a systematic map typically involves the production of a database of studies and their attributes, which can be provided to users as a searchable database or a series of visualisations [10-12]. In contrast, bibliometrics (more
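The bibliometric half of research weaving treats the literature as a citation network: nodes are publications and edges are citations. A minimal sketch, using an invented four-paper citation graph (all labels hypothetical, not drawn from the source), of how in-degree, the number of times a paper is cited, surfaces influential 'hub' publications:

```python
from collections import Counter

# Hypothetical citation graph: each paper maps to the papers it cites.
citations = {
    "A": ["C", "D"],
    "B": ["C"],
    "C": ["D"],
    "D": [],
}

def times_cited(graph):
    """Count in-degree: how often each paper is cited by the others."""
    counts = Counter()
    for cited_list in graph.values():
        counts.update(cited_list)
    return dict(counts)

# Papers C and D emerge as the most-cited nodes of this toy network.
print(times_cited(citations))
```

Real bibliometric analyses extend the same idea to co-citation and co-authorship networks over thousands of records, but the underlying data structure is this adjacency mapping.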
Purpose: The purpose of this study was to summarize and evaluate artificial intelligence (AI) algorithms used in geographic atrophy (GA) diagnostic processes (e.g. isolating lesions or disease progression).

Methods: The search strategy and selection of publications were both conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed and Web of Science were used to extract literature data. The algorithms were summarized by objective, performance, and scope of coverage of GA diagnosis (e.g. lesion automation and GA progression).

Results: Twenty-seven studies were identified for this review. A total of 18 publications focused on lesion segmentation only, 2 were designed to detect and classify GA, 2 were designed to predict future overall GA progression, 3 focused on prediction of future spatial GA progression, and 2 focused on prediction of visual function in GA. GA-related algorithms reported sensitivities from 0.47 to 0.98, specificities from 0.73 to 0.99, accuracies from 0.42 to 0.995, and Dice coefficients from 0.66 to 0.89.

Conclusions: Current GA-AI publications have a predominant focus on lesion segmentation and a minor focus on classification and progression analysis. AI could be applied to other facets of GA diagnosis, such as understanding the role of hyperfluorescent areas in GA. Using AI for GA has several advantages, including improved diagnostic accuracy and faster processing speeds.

Translational Relevance: AI can be used to quantify GA lesions and therefore allows one to impute visual function and quality of life. However, there is a need for the development of reliable and objective models and software to predict the rate of GA progression and to quantify improvements due to interventions.
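The performance figures reported above (sensitivity, specificity, accuracy, and Dice coefficient) all derive from the pixel-level confusion counts between a predicted and a reference segmentation mask. A minimal sketch with toy binary masks (1 = lesion pixel); the masks are invented for illustration and not taken from any reviewed study:

```python
def confusion(pred, truth):
    """Pixel-wise confusion counts for two equal-length binary masks."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    return tp, tn, fp, fn

def metrics(pred, truth):
    tp, tn, fp, fn = confusion(pred, truth)
    return {
        "sensitivity": tp / (tp + fn),        # recall on lesion pixels
        "specificity": tn / (tn + fp),        # recall on background pixels
        "accuracy": (tp + tn) / len(truth),
        "dice": 2 * tp / (2 * tp + fp + fn),  # overlap of the two masks
    }

# Toy flattened masks: one false positive, one false negative.
pred  = [1, 1, 0, 0, 1, 0, 0, 0]
truth = [1, 0, 0, 0, 1, 1, 0, 0]
print(metrics(pred, truth))
```

Note that on images dominated by background, accuracy can look high even when the Dice coefficient is poor, which is one reason the reviewed studies report both.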
Summary: Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent hype over deep neural networks has added many powerful auto-segmentation methods as variations of convolutional neural networks (CNN). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common data sets were CT images from an in-house source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
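The separation of training, validation, and test data that this review calls for can be sketched as an N-fold cross-validation plan that first holds out a fixed test set before any folding. The integer indices stand in for hypothetical scans, and the 20% test fraction and fold count are illustrative assumptions, not values from the review:

```python
def split_folds(indices, n_folds):
    """Partition indices into n_folds roughly equal folds."""
    return [indices[i::n_folds] for i in range(n_folds)]

def cross_validation_plan(n_items, n_folds, test_fraction=0.2):
    """Hold out a fixed test set, then rotate the rest through train/val."""
    items = list(range(n_items))
    n_test = int(n_items * test_fraction)
    test, rest = items[:n_test], items[n_test:]  # test set never enters CV
    folds = split_folds(rest, n_folds)
    plan = []
    for k in range(n_folds):
        val = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        plan.append({"train": train, "val": val, "test": test})
    return plan

plan = cross_validation_plan(10, n_folds=3)
for fold in plan:
    # Train/val never overlap each other or the held-out test set.
    assert not set(fold["train"]) & set(fold["val"])
    assert not set(fold["test"]) & set(fold["train"] + fold["val"])
```

In practice the split should also keep all scans from one patient in the same partition; that constraint is omitted here for brevity.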
This study describes the development of a deep learning algorithm based on the U-Net architecture for automated segmentation of geographic atrophy (GA) lesions in fundus autofluorescence (FAF) images.

Methods: Image preprocessing and normalization by modified adaptive histogram equalization were used for image standardization to improve the effectiveness of deep learning. A U-Net-based deep learning algorithm was developed, trained, and tested by fivefold cross-validation using FAF images from clinical datasets. The following metrics were used for evaluating the performance of lesion segmentation in GA: Dice similarity coefficient (DSC), DSC loss, sensitivity, specificity, mean absolute error (MAE), accuracy, recall, and precision.

Results: In total, 702 FAF images from 51 patients were analyzed. After fivefold cross-validation for lesion segmentation, the average training and validation scores were found for the most important metric, DSC (0.9874 and 0.9779), for accuracy (0.9912 and 0.9815), for sensitivity (0.9955 and 0.9928), and for specificity (0.8686 and 0.7261). Scores for testing were all similar to the validation scores. The algorithm segmented GA lesions six times more quickly than human performance.

Conclusions: The deep learning algorithm can be implemented using clinical data with a very high level of performance for lesion segmentation. Automation of diagnostics for GA assessment has the potential to provide savings with respect to patient visit duration, operational cost, and measurement reliability in routine GA assessments.

Translational Relevance: A deep learning algorithm based on the U-Net architecture and image preprocessing appears to be suitable for automated segmentation of GA lesions on clinical data, producing fast and accurate results.
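The preprocessing step above is a modified adaptive histogram equalization; the exact modification is not described in this summary, so the sketch below shows plain global histogram equalization on a toy 8-bit grayscale image as a stand-in for the general technique of spreading grey levels to standardize image contrast:

```python
def equalize(image, levels=256):
    """Map grey levels through the normalized cumulative histogram."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of pixel intensities.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(flat)
    # Stretch the occupied part of the CDF over the full intensity range;
    # a constant image (n == cdf_min) maps everything to 0.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else 0 for c in cdf]
    return [[lut[p] for p in row] for row in image]

# Toy 2x3 "image" with a narrow intensity range gets stretched to 0..255.
print(equalize([[52, 55, 61], [59, 79, 61]]))
```

Adaptive variants (e.g. CLAHE) apply the same mapping per local tile with clipping, which better handles the uneven illumination typical of fundus images.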