The Human BioMolecular Atlas Program (HuBMAP) aims to create a multi-scale spatial atlas of the healthy human body at single-cell resolution by applying advanced technologies and disseminating resources to the community. As HuBMAP moves past its first phase of creating ontologies, protocols and pipelines, this Perspective introduces the production phase: the generation of reference spatial maps of functional tissue units across many organs from diverse populations, and the creation of mapping tools and infrastructure to advance biomedical research.

HuBMAP was founded with the goal of establishing state-of-the-art frameworks for building spatial multiomic maps of non-diseased human organs at single-cell resolution [1]. During the first phase (2018–2022), the priorities of the project included the validation and development of assay platforms; workflows for data processing, management, exploration and visualization; and the establishment of protocols, quality control standards and standard operating procedures. Extensive infrastructure was established through a coordinated effort among the various HuBMAP integration, visualization and engagement teams; tissue-mapping centres; technology and tools development and rapid technology implementation teams; and working groups [1]. Single-cell maps, predominantly consisting of two-dimensional (2D) spatial data as well as data from dissociated cells, were generated for several organs. The HuBMAP Data Portal (https://portal.hubmapconsortium.org) was established for open access to experimental tissue data and reference atlas data. The infrastructure was augmented with software tools for tissue data registration, processing, annotation, visualization, cell segmentation and automated annotation of cell types and cellular neighbourhoods from spatial data. Computational methods were developed for integrating and interpreting multiple data types across scales [2].
Standard reference terminology and a common coordinate framework spanning anatomical to biomolecular scales were established to ensure interoperability across organs, research groups and consortia [3]. Guidelines to capture high-quality multiplexed spatial data [4] were established, including validated panels of cell- and structure-specific antibodies [5]. The first phase produced a large number of manuscripts (https://commonfund.nih.gov/publications?pid=43), including spatially resolved single-cell maps [6–11].

The production phase of HuBMAP was launched in the autumn of 2022. The focus is on scaling data production across diverse biological variables (for example, age and ethnicity) and on deploying and enhancing analytical, visualization and navigational tools to generate high-resolution, accessible 3D maps of major functional tissue units from more than 20 organs. This phase involves over 60 institutions and 400 researchers, with opportunities for active intra- and inter-consortium collaborations and for building a foundational resource for new biological insights and precision medicine. Below, ...
Diabetes is one of the top ten causes of death among adults worldwide. People with diabetes are prone to eye diseases such as diabetic retinopathy (DR), which damages the blood vessels in the retina and can result in vision loss. DR grading is an essential step for early diagnosis and effective treatment, and for slowing progression to vision impairment. Existing automatic solutions are mostly based on traditional image processing and machine learning techniques, leaving a considerable gap in more generic detection and grading of DR. Various deep learning models, such as convolutional neural networks (CNNs), have previously been utilized for this purpose. To enhance DR grading, this paper proposes a novel solution based on an ensemble of state-of-the-art deep learning models called vision transformers. A challenging public DR dataset from a 2015 Kaggle challenge was used for training and evaluation of the proposed method. This dataset is highly imbalanced across five levels of severity: No DR, Mild, Moderate, Severe and Proliferative DR. The experiments conducted showed that the proposed solution outperforms existing methods in terms of precision (47%), recall (45%), F1 score (42%) and quadratic weighted kappa (QWK) (60.2%), while running with a low inference time (1.12 seconds). The proposed solution can therefore help examiners grade DR more accurately than manual means.
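The abstract above describes combining several vision transformers into an ensemble for five-class severity grading. The sketch below illustrates one common way such an ensemble can be realized, averaging the per-model softmax probabilities and taking the highest-scoring grade; it is a minimal illustration under that assumption, not the paper's actual fusion rule, and the example logits are invented.

```python
import numpy as np

SEVERITY_LABELS = ["No DR", "Mild", "Moderate", "Severe", "Proliferative DR"]

def softmax(logits):
    """Convert raw logits to class probabilities, row-wise."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_grade(per_model_logits):
    """Average the softmax outputs of several models and pick the
    highest-probability severity grade for each image.

    per_model_logits: array-like of shape (n_models, n_images, 5).
    Returns a list of grade names, one per image.
    """
    probs = softmax(np.asarray(per_model_logits, dtype=float))
    mean_probs = probs.mean(axis=0)  # (n_images, 5)
    return [SEVERITY_LABELS[g] for g in mean_probs.argmax(axis=1)]

# Two hypothetical transformer heads disagree on one image;
# averaging their probabilities settles the grade.
logits = [
    [[2.0, 0.1, 0.1, 0.1, 0.1]],  # model A: strongly "No DR"
    [[0.5, 1.0, 0.2, 0.1, 0.1]],  # model B: leans "Mild"
]
print(ensemble_grade(logits))
```

Averaging probabilities rather than hard votes lets a confident model outweigh an uncertain one, which tends to help on imbalanced datasets like the one described.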
Laparoscopic surgery is a surgical procedure performed by inserting narrow tubes into the abdomen without making large incisions in the skin, with the aid of a video camera. Laparoscopic videos are affected by various distortions during surgery that degrade visual quality. Identifying these distortions is the primary requisite of automated video enhancement systems, which must classify the distortions correctly in order to select the proper enhancement algorithm. In addition to high accuracy, distortion classification must be fast enough for real-time conditions. This paper addresses these issues by developing a fast and accurate deep learning model for distortion classification. The dataset proposed by the ICIP 2020 conference challenge was used for training and evaluation of the proposed method. This challenging dataset contains videos with five types of distortions — noise, smoke, uneven illumination, defocus blur and motion blur — each at four levels of intensity. This paper discusses the proposed solution, which received the first prize in the ICIP 2020 challenge. The solution utilized a transfer learning approach to transfer representations from the domain of natural images to the domain of laparoscopic videos: a pretrained ResNet50 convolutional neural network (CNN) extracted informative features that were mapped by support vector machine (SVM) classifiers to the various distortion categories. The problem of multiple distortions in the same video was formulated as a multi-label distortion classification problem. The approach of transfer learning with decision fusion was found to outperform other solutions in terms of accuracy (83%), F1 score for single distortions (94.7%), and F1 score for single and multiple distortions combined (94.9%).
In addition, the proposed solution can run in real time with an inference speed of 20 frames per second (FPS).
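The multi-label formulation above (ResNet50 features mapped by SVMs to distortion categories) can be sketched as one binary SVM per distortion type, so a single frame can be flagged with several distortions at once. This is an illustrative reconstruction, not the authors' code: the feature-extraction step is omitted and random vectors stand in for pooled ResNet50 activations.

```python
import numpy as np
from sklearn.svm import LinearSVC

DISTORTIONS = ["noise", "smoke", "uneven illumination",
               "defocus blur", "motion blur"]

def train_multilabel_svms(features, labels):
    """Fit one binary SVM per distortion type (one-vs-rest).

    features: (n_frames, n_dims) array, e.g. pooled ResNet50 activations.
    labels:   (n_frames, 5) binary matrix, one column per distortion.
    """
    classifiers = []
    for k in range(labels.shape[1]):
        clf = LinearSVC()
        clf.fit(features, labels[:, k])
        classifiers.append(clf)
    return classifiers

def classify_frame(classifiers, feature_vec):
    """Return the names of all distortions whose SVM fires on this frame."""
    x = feature_vec.reshape(1, -1)
    return [name for name, clf in zip(DISTORTIONS, classifiers)
            if clf.predict(x)[0] == 1]

# Placeholder features standing in for ResNet50 embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
Y = (X[:, :5] > 0).astype(int)  # synthetic, linearly separable labels
svms = train_multilabel_svms(X, Y)
print(classify_frame(svms, X[0]))
```

Because each distortion gets its own classifier, the prediction for one frame is simply the set of SVMs that fire, which is what makes the multi-label formulation natural here.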