We propose, and formalize, a new framework for research synthesis of both evidence and influence, named 'research weaving'. It summarizes and visualizes information content, history, and networks among a collection of diverse publication types on any given topic. Research weaving achieves this feat by combining the power of two methodologies: systematic mapping and bibliometrics. Systematic mapping provides a snapshot of the current state of knowledge, identifying areas needing more research attention and those ready for full synthesis (e.g., using meta-analysis). Bibliometrics enables researchers to see how pieces of evidence are connected, revealing the structure and the evolution of a field. We explain how to become a 'research weaver', and discuss how research weaving may change the landscape of research synthesis.

Keywords: meta-research, quantitative synthesis, systematic review, Big Data, data visualization, evidence synthesis

Influence

Research fields are flooded with torrents of publications, and researchers require informative reviews to stay afloat. For many years, researchers sought expert opinions from narrative reviews (see Glossary) to obtain and update their knowledge of a research topic or question [1]. These reviews were valuable not just for summarizing 'facts' about a particular research field, but also for giving broader insights, such as identifying the origin and development of key theoretical concepts, or drawing attention to ideas that deserved greater research focus. More sophisticated syntheses, systematic review and meta-analysis [2-8], are now commonly used; these incorporate systematic and often quantitative methods to extract factual information from the literature in a reliable manner. However, both these syntheses have their limitations. They are not practical for broad fields encompassing thousands of publications, and cannot handle a highly heterogeneous literature. A new technique has emerged to deal with these limitations: mapping. Currently, scientists 'map' research evidence using two complementary methodologies of different origins: systematic mapping and bibliometrics. Systematic mapping (sometimes called 'evidence mapping') is a method derived from systematic reviews, with the goal of classifying the types of research on a broad topic [9-14]. Systematic mapping is still a nascent methodology, with the first systematic maps appearing only in the last decade [9, 10]. In addition to providing a written report, a systematic map typically involves the production of a database of studies and their attributes, which can be provided to users as a searchable database or a series of visualisations [10-12]. In contrast, bibliometrics (more
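The bibliometric side of this workflow, seeing how pieces of evidence are connected, can be made concrete with a small sketch. The toy reference lists below are hypothetical, not drawn from any of the papers discussed; the sketch computes bibliographic coupling, one common bibliometric link measure, where two publications are connected in proportion to the references they share.

```python
from itertools import combinations

# Hypothetical reference lists for five publications (toy data).
references = {
    "A": {"r1", "r2", "r3"},
    "B": {"r2", "r3", "r4"},
    "C": {"r5", "r6"},
    "D": {"r1", "r5"},
    "E": {"r3", "r4", "r6"},
}

def coupling_network(refs):
    """Bibliographic coupling: edge weight = number of shared references."""
    edges = {}
    for a, b in combinations(sorted(refs), 2):
        shared = len(refs[a] & refs[b])
        if shared:
            edges[(a, b)] = shared
    return edges

edges = coupling_network(references)
```

On these toy data, papers A and B are coupled with weight 2 (they share r2 and r3), while A and C share no references and so get no edge. Real bibliometric tools build the same kind of network from citation databases at much larger scale.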
Summary Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has added many powerful auto-segmentation methods as variations of convolutional neural networks (CNN). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common data sets were CT images from an in-house source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
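The metric most often reported by the segmentation papers this review surveys is the Dice similarity coefficient. A minimal sketch of its computation on flattened binary masks (the toy masks below are illustrative, not from any reviewed study):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 sequences)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are treated as a perfect match.
    return 2 * intersection / total if total else 1.0

# Toy 1-D masks standing in for flattened segmentation masks.
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
score = dice(pred, truth)  # 2*2 / (3+3) = 0.667
```

Consistent use of a metric like this, on properly held-out data, is exactly what the review's call for benchmarking and independent validation implies.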
Purpose The purpose of this study was to summarize and evaluate artificial intelligence (AI) algorithms used in geographic atrophy (GA) diagnostic processes (e.g. isolating lesions or disease progression). Methods The search strategy and selection of publications were both conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed and Web of Science were used to retrieve the literature. The algorithms were summarized by objective, performance, and scope of coverage of GA diagnosis (e.g. lesion automation and GA progression). Results Twenty-seven studies were identified for this review. A total of 18 publications focused on lesion segmentation only, 2 were designed to detect and classify GA, 2 were designed to predict future overall GA progression, 3 focused on prediction of future spatial GA progression, and 2 focused on prediction of visual function in GA. GA-related algorithms reported sensitivities from 0.47 to 0.98, specificities from 0.73 to 0.99, accuracies from 0.42 to 0.995, and Dice coefficients from 0.66 to 0.89. Conclusions Current GA-AI publications have a predominant focus on lesion segmentation and a minor focus on classification and progression analysis. AI could be applied to other facets of GA diagnoses, such as understanding the role of hyperfluorescent areas in GA. Using AI for GA has several advantages, including improved diagnostic accuracy and faster processing speeds. Translational Relevance AI can be used to quantify GA lesions and therefore allows one to impute visual function and quality-of-life. However, there is a need for the development of reliable and objective models and software to predict the rate of GA progression and to quantify improvements due to interventions.
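The sensitivity, specificity, and accuracy ranges this review tabulates all derive from confusion-matrix counts. A minimal sketch of that derivation (the counts below are invented for illustration, chosen so the results match the review's lowest reported sensitivity and specificity, 0.47 and 0.73; they do not come from any reviewed study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts: 100 diseased eyes, 100 healthy eyes.
m = classification_metrics(tp=47, fp=27, tn=73, fn=53)
```

Reporting all three together, as the review does, matters because accuracy alone can look acceptable even when sensitivity is poor.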
Increasing urbanization, population growth and looming climate change impacts are real problems that we face now and will remain burning issues in the foreseeable future. These issues emerge as a result of complex interactions between various domains, which require improved access to and sharing of key evidence on strategies for reducing resource consumption and lowering carbon footprint. However, rapidly accumulating evidence from the built environment sector, a multi-disciplinary field of study, provides challenges when attempting to find scientifically robust research and distil knowledge to draw confident conclusions in a reasonable timeframe. Since there is a vast number of primary studies, to create an evidence overview, secondary studies were collected using a systematic review methodology. The methodology included a predefined protocol, searches in multiple databases and other sources, structured screening, data extraction and coding. Reviews which were claimed to be systematic reviews or meta-analyses of literature relevant to reducing carbon footprint of the built environment or its co-benefits were deemed eligible for inclusion. The quality of the included reviews was assessed using an established tool for appraising systematic reviews of literature in the medical sciences. Key bibliographic, methodological and content details of the reviews were also extracted and coded. The resulting database contains 131 reviews published between 2001 and early 2018. The database is available via a dedicated website which includes interactive visualizations and filtering tools. The included high-level evidence is framed within the context of low carbon living and its co-benefits (e.g., health and well-being), although it is not an exhaustive database of all systematic reviews and meta-analyses in the built environment and sustainability sectors. However, due to the dynamic nature of the database, it can be easily expanded to host a broader range of evidence.
In its current form, the interactive map and database can aid the discovery of secondary evidence for decision-making and research use, supporting the transition to a low-carbon future.
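The filtering tools such a database website exposes amount to selecting coded review records by their extracted attributes. A minimal sketch of that back-end logic, using entirely hypothetical records and field names (the real database's coding scheme is not reproduced here):

```python
# Toy records mimicking a coded review database (fields and values are hypothetical).
reviews = [
    {"year": 2015, "topic": "energy efficiency", "quality": "high"},
    {"year": 2009, "topic": "health co-benefits", "quality": "medium"},
    {"year": 2017, "topic": "transport", "quality": "high"},
]

def filter_reviews(records, min_year=None, quality=None):
    """Select reviews matching the given filters, as an interactive map might."""
    out = records
    if min_year is not None:
        out = [r for r in out if r["year"] >= min_year]
    if quality is not None:
        out = [r for r in out if r["quality"] == quality]
    return out

recent_high = filter_reviews(reviews, min_year=2014, quality="high")
```

Because each filter is independent, new coded attributes can be added without restructuring the database, which is what makes such evidence maps easy to expand.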