Image-based profiling has emerged as a powerful technology for various steps in basic biological and pharmaceutical discovery, but the community has lacked a large, public reference set of data from chemical and genetic perturbations. Here we present data generated by the Joint Undertaking for Morphological Profiling (JUMP)-Cell Painting Consortium, a collaboration between ten pharmaceutical companies, six supporting technology companies, and two non-profit partners. When completed, the dataset will contain images and profiles from the Cell Painting assay for over 116,750 unique compounds, over-expression of 12,602 genes, and knockout of 7,975 genes using CRISPR-Cas9, all in human osteosarcoma cells (U2OS). The dataset is estimated to be 115 TB in size, capturing 1.6 billion cells and their single-cell profiles. File quality control and upload are underway and will be completed over the coming months at the Cell Painting Gallery: https://registry.opendata.aws/cellpainting-gallery. A portal to visualize a subset of the data is available at https://phenaid.ardigen.com/jumpcpexplorer/.
Multiple Instance Learning (MIL) is a weakly supervised learning paradigm that assumes only one label is provided for an entire bag of instances. As such, it arises in many medical image analysis problems, such as the whole-slide image classification of biopsies. Most recently, MIL was also applied to deep architectures by introducing an aggregation operator that focuses on crucial instances of a bag. In this paper, we enrich this idea with a self-attention mechanism that takes into account dependencies across the instances. We conduct several experiments and show that our method with various types of kernels increases accuracy, especially under non-standard MIL assumptions. This is of importance for real-world medical problems, which usually satisfy presence-based or threshold-based assumptions.
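The two ingredients described above can be sketched as follows: self-attention contextualises the instances of a bag, and an attention-based pooling operator then aggregates them into a single bag embedding. This is a minimal numpy illustration with randomly initialised (hypothetical) weight matrices, not the paper's trained model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention across the instances of one bag,
    modelling dependencies between instances."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    return softmax(scores, axis=1) @ V

def attention_pool(H, V, w):
    """Attention-based MIL pooling: aggregate instance embeddings into one
    bag embedding, with weights highlighting crucial instances."""
    a = softmax(np.tanh(H @ V) @ w, axis=0)   # (n, 1) attention weights
    return (a * H).sum(axis=0), a.ravel()

rng = np.random.default_rng(0)
n, d = 5, 8                        # 5 instances, 8-dim embeddings
H = rng.normal(size=(n, d))        # instance embeddings of one bag
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
H_ctx = self_attention(H, Wq, Wk, Wv)       # contextualised instances
z, a = attention_pool(H_ctx, rng.normal(size=(d, d)), rng.normal(size=(d, 1)))
```

A bag-level classifier would then be applied to the pooled embedding `z`; the weights `a` indicate which instances drove the prediction.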
This paper concerns an approach to modelling the ledger-stand joints of modular scaffolds. Based on an analysis of the working range of the ledger (represented by a linear relationship between load and displacement), two models of the ledger-stand joint are analysed: the first with flexible joints, and the second with rigid joints and a transition part of lower stiffness. The parameters are selected based on displacement measurements and numerical analyses of the joints, and are then verified. On the basis of the performed research, it can be stated that both joint-modelling methods recommended in this paper can be applied in engineering practice.
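The second model above, a rigid joint in series with a transition part of lower stiffness, follows the standard springs-in-series rule. A minimal sketch; the stiffness and load values are hypothetical, not the paper's calibrated parameters:

```python
def series_stiffness(k1, k2):
    """Equivalent stiffness of two elastic segments in series:
    flexibilities (1/k) add."""
    return 1.0 / (1.0 / k1 + 1.0 / k2)

def displacement(load, k):
    """Linear working range of the ledger: u = F / k."""
    return load / k

# Hypothetical values: a stiff joint segment and a softer transition part.
k_joint, k_transition = 50.0, 10.0            # kN/mm
k_eq = series_stiffness(k_joint, k_transition)
u = displacement(5.0, k_eq)                   # displacement under a 5 kN load
```

The softer transition part dominates the equivalent stiffness, which is how the second model reproduces the measured flexibility while keeping the joint itself rigid.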
Preliminary microbiological diagnosis usually relies on microscopic examination and, due to the routine culture and bacteriological examination, lasts up to 11 days. Hence, many deep learning methods based on microscopic images have recently been introduced to replace the time-consuming bacteriological examination. They shorten the diagnosis by 1-2 days but still require iterative culture to obtain monoculture samples. In this work, we present a feasibility study for further shortening the diagnosis time by analyzing polyculture images. This is possible with multi-MIL, a novel multi-label classification method based on multiple instance learning. To evaluate our approach, we introduce a dataset containing microscopic images for all combinations of four considered bacteria species. We obtain a ROC AUC above 0.9, proving the feasibility of the method and opening the path for future experiments with a larger number of species.
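The reported ROC AUC can be computed per species (label) from predicted scores with the rank-statistic (Mann-Whitney) formulation. A minimal sketch with made-up labels and scores, not the paper's data:

```python
import numpy as np

def roc_auc(y_true, y_score):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    random positive is scored higher than a random negative (ties count 0.5)."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Made-up binary labels for one bacteria species and predicted scores.
y_true = np.array([1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.3, 0.4, 0.5, 0.6])
auc = roc_auc(y_true, y_score)
```

For the multi-label setting, the same score would be computed once per species and averaged (macro ROC AUC).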
Self-supervised methods are gaining more and more attention, especially in the medical domain, where the amount of labeled data is limited. They provide results on par with or superior to their fully supervised competitors, yet the difference between the information encoded by the two kinds of methods is unclear. This work introduces a novel comparison framework for explaining differences between supervised and self-supervised models using visual characteristics important to the human perceptual system. We apply this framework to models trained for Gleason score prediction and conclude that self-supervised methods are more biased toward contrast and texture transformations than their supervised counterparts. At the same time, supervised methods encode more information about shape.
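One way to probe such a bias is to measure how much a model's representation changes under a perceptual transformation such as a contrast change. The sketch below uses a toy linear embedding and an illustrative sensitivity score (both are assumptions for demonstration, not the paper's exact framework):

```python
import numpy as np

def contrast_transform(img, factor):
    """Rescale pixel deviations from the mean; factor > 1 raises contrast."""
    m = img.mean()
    return np.clip(m + factor * (img - m), 0.0, 1.0)

def sensitivity(embed, img, t_img):
    """1 - cosine similarity between embeddings of the original and the
    transformed image; higher means the model reacts more to the transform."""
    a, b = embed(img), embed(t_img)
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(1)
img = rng.uniform(0.2, 0.8, size=(16, 16))    # toy grayscale image
W = rng.normal(size=(256, 32))                # toy linear "model" embedding
embed = lambda x: x.ravel() @ W
s = sensitivity(embed, img, contrast_transform(img, 1.5))
```

Comparing `s` between a supervised and a self-supervised encoder over a set of images and transformations would quantify which model is more transformation-biased.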