Radiomics aims to quantify phenotypic characteristics on medical imaging through the use of automated algorithms. Radiomic artificial intelligence (AI) technology, whether based on engineered hard-coded algorithms or deep learning methods, can be used to develop non-invasive imaging-based biomarkers. However, the lack of standardized algorithm definitions and image processing severely hampers the reproducibility and comparability of results. To address this issue, we developed PyRadiomics, a flexible open-source platform capable of extracting a large panel of engineered features from medical images. PyRadiomics is implemented in Python and can be used standalone or through 3D Slicer. Here, we discuss the workflow and architecture of PyRadiomics and demonstrate its application in characterizing lung lesions. Source code, documentation, and examples are publicly available at www.radiomics.io. With this platform, we aim to establish a reference standard for radiomic analyses, provide a tested and maintained resource, and grow the community of radiomic developers addressing critical needs in cancer research.
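To illustrate the kind of engineered first-order features such a platform computes, the following toy sketch extracts basic intensity statistics from a masked region of interest. It uses plain NumPy and is not the PyRadiomics API; the function name and feature set here are illustrative only.

```python
import numpy as np

def first_order_features(image, mask):
    """Toy engineered first-order features over a masked region of interest."""
    roi = image[mask.astype(bool)].astype(float)
    mean = roi.mean()
    std = roi.std()
    # Fisher skewness, guarded against a flat (zero-variance) ROI
    skew = 0.0 if std == 0 else float(np.mean(((roi - mean) / std) ** 3))
    return {
        "mean": mean,
        "std": std,
        "skewness": skew,
        "energy": float(np.sum(roi ** 2)),  # sum of squared intensities
    }

# tiny 2x2 "image" with a full mask, standing in for a segmented lesion
img = np.array([[1, 2], [3, 4]], dtype=float)
msk = np.ones((2, 2), dtype=np.uint8)
feats = first_order_features(img, msk)
print(feats["mean"])    # 2.5
print(feats["energy"])  # 1 + 4 + 9 + 16 = 30.0
```

Real radiomics pipelines add many more feature classes (shape, texture matrices such as GLCM and GLRLM) and standardized preprocessing, which is precisely where a shared reference implementation matters.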
Neurodata Without Borders: Neurophysiology (NWB:N) is a data standard for neurophysiology, providing neuroscientists with a common standard to share, archive, use, and build common analysis tools for neurophysiology data. With NWB:N version 2.0 (NWB:N 2.0) we made significant advances towards creating a usable standard, software ecosystem, and vibrant community for standardizing neurophysiology data. In this manuscript we focus in particular on the NWB:N data standard schema and present advances towards creating an accessible data standard for neurophysiology.

Introduction

Motivation: Brain function is produced by the coordinated activity of multiple neuronal types that are widely distributed across many brain regions. Neuronal signals are acquired using extra- and intracellular recordings, and increasingly optical imaging, during sensory, motor, and cognitive tasks. Neurophysiology research generates large, complex, and heterogeneous datasets at terabyte scale, and their size and complexity are expected to continue to grow with the increasing sophistication of experimental apparatuses. The lack of standards for neurophysiology data and related metadata is the single greatest impediment to fully extracting return on investment from neurophysiology experiments, impeding interchange and reuse of data and reproduction of derived conclusions. This gap motivated the launch of the Neurodata Without Borders: Neurophysiology (NWB:N) data standards project. The goal of NWB:N is to develop a standardized format and methods for neurophysiology data and metadata.

Background: The first NWB:N 1.0.x standard was the result of a one-year pilot project in 2015 [12]. As part of this pilot, neurophysiologists and software developers met during two hackathons to create a common data format for recordings and metadata of cellular electro- and optical physiology experiments (Fig. 1, top).
Despite the important advances that NWB:N 1.0 made towards creating a neurophysiology data standard, the standard was not easily accessible to users. To enable broad adoption, a sustainable software and community strategy and easy-to-use, high-level application programming interfaces (APIs) were urgently needed. Here we describe NWB:N 2.0, a modern ecosystem for data standardization and an accessible data standard for neurophysiology.

A Brief History of NWB:N 2.0: The development of the second version of NWB:N began in January 2017 with the start of the Kavli-funded NWB4HPC project. The goal was to develop infrastructure and algorithms to enable data-driven discovery and dissemination on high-performance computing systems for the BRAIN Initiative (Fig. 1, bottom). One main goal of the project was to develop the next version of NWB:N to enhance its adoption, with an initial focus on high-level APIs for read, write, and extension of the original NWB:N 1.0.x standard. This standard represented a critical first step toward a unified framework for neural data, but it became clear that in order to achieve these goals we needed an advanced software architecture, a well...
The segmentation of medical and dental images is a fundamental step in automated clinical decision support systems. It supports the entire clinical workflow, from diagnosis and therapy planning through intervention and follow-up. In this paper, we propose a novel tool that accurately performs a full-face segmentation in about 5 minutes, a task that would otherwise require an average of 7 hours of manual work by experienced clinicians. This work focuses on the integration of the state-of-the-art UNEt TRansformers (UNETR) of the Medical Open Network for Artificial Intelligence (MONAI) framework. We trained and tested our models using 618 de-identified Cone-Beam Computed Tomography (CBCT) volumetric images of the head, acquired with varying parameters from different centers for a generalized clinical application. Our results on a 5-fold cross-validation showed high accuracy and robustness, with a Dice score up to 0.962±0.02. Our code is available on our public GitHub repository.
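The Dice score used to evaluate the segmentations above has a standard definition: twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch (plain NumPy, with toy 2D masks standing in for real CBCT segmentations) could look like this:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# toy masks: predicted region overlaps 4 of the 6 ground-truth voxels
a = np.zeros((4, 4), dtype=np.uint8); a[1:3, 1:3] = 1  # 4 voxels
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, 1:4] = 1  # 6 voxels
print(dice_score(a, b))  # 2*4 / (4+6) = 0.8
```

The same formula extends unchanged to 3D volumetric masks, which is how scores such as the reported 0.962 are computed per structure.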
PURPOSE: We summarize Quantitative Imaging Informatics for Cancer Research (QIICR; U24 CA180918), one of the first projects funded by the National Cancer Institute (NCI) Informatics Technology for Cancer Research program.

METHODS: QIICR was motivated by three use cases from the NCI Quantitative Imaging Network. 3D Slicer was selected as the platform for implementation of open-source quantitative imaging (QI) tools. Digital Imaging and Communications in Medicine (DICOM) was chosen for standardization of QI analysis outputs. Support of improved integration with community repositories focused on The Cancer Imaging Archive (TCIA). Priorities included improved capabilities of the standard, toolkits and tools, reference datasets, collaborations, and training and outreach.

RESULTS: Fourteen new tools to support head and neck cancer, glioblastoma, and prostate cancer QI research were introduced and downloaded over 100,000 times. DICOM was amended, with over 40 correction proposals addressing QI needs. Reference implementations of the standard in a popular toolkit and in standalone tools were introduced. Eight datasets exemplifying the application of the standard and tools were contributed. An open demonstration/connectathon was organized, attracting the participation of academic groups and commercial vendors. Integration of tools with TCIA was improved by implementing a programmatic communication interface and by refining best practices for the curation of QI analysis results.

CONCLUSION: The tools, DICOM capabilities, and datasets we introduced found adoption and utility within the cancer imaging community. A collaborative approach is critical to addressing challenges in imaging informatics at the national and international levels. Numerous challenges remain in establishing and maintaining the infrastructure of analysis tools and standardized datasets for the imaging community.
Ideas and technology developed by the QIICR project are contributing to the NCI Imaging Data Commons currently being developed.