2019
DOI: 10.1101/725754
Preprint

Metadata-Guided Visual Representation Learning for Biomedical Images

Abstract: Motivation: The clustering of biomedical images according to their phenotype is an important step in early drug discovery. Modern high-content-screening devices easily produce thousands of cell images, but the resulting data are usually unlabelled, and extra effort is required to construct a visual representation that supports grouping by the presented morphological characteristics. Results: We introduce a novel approach to visual representation learning that is guided by metadata. In high-content-sc…

Cited by 8 publications (15 citation statements)
References 21 publications
“…Domain-specific representation learning and transfer learning are active research topics in the biomedical imaging field. 12-15 …”
Section: Introduction
confidence: 99%
“…Domain-specific representation learning and transfer learning are active research topics in the biomedical imaging field. [12][13][14][15] Since the collection and annotation of digital tissue slides are time-consuming and cumbersome, one looks to other imaging domains where image data and corresponding annotations are abundant, so that models trained on such data can be used for transfer learning. One large publicly available data set is ImageNet, which consists of more than 14 million images labelled with over 21,000 classes and is often used for benchmarking visual object recognition and image classification.…”
Section: Introduction
confidence: 99%
“…Alternatively, [3] proposes a weakly supervised learning method. Other works, such as [12,28], use metadata to devise pseudo-labels for network supervision. All of the fully or weakly supervised methods above require labelled sets, and having experts annotate such images is costly and time-consuming.…”
Section: Introduction
confidence: 99%
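The metadata-to-pseudo-label idea mentioned above can be sketched roughly as follows. This is a minimal illustration, not the procedure used in the cited works: the metadata fields ("compound", "concentration") and the grouping rule are assumptions made here for the example.

```python
# Hedged sketch: deriving pseudo-labels from HCS metadata.
# The field names below ("compound", "concentration") are illustrative
# assumptions, not the schema of any cited work.

def metadata_to_pseudo_labels(records):
    """Map each image's metadata to an integer pseudo-label.

    Images sharing the same (compound, concentration) treatment are
    assumed to share a phenotype and therefore receive the same label,
    which can then supervise a standard classification network.
    """
    key_to_label = {}
    labels = []
    for rec in records:
        key = (rec["compound"], rec["concentration"])
        if key not in key_to_label:
            key_to_label[key] = len(key_to_label)  # next unused label id
        labels.append(key_to_label[key])
    return labels

# Toy example: three treatments over four images.
records = [
    {"compound": "DMSO", "concentration": 0.0},
    {"compound": "taxol", "concentration": 1.0},
    {"compound": "taxol", "concentration": 1.0},
    {"compound": "taxol", "concentration": 5.0},
]
print(metadata_to_pseudo_labels(records))  # → [0, 1, 1, 2]
```

As the next citation statement notes, such labels inherit any imprecision in the metadata itself (e.g. batch effects, inexact treatment records), which is the main caveat of this kind of supervision.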
“…All of the fully or weakly supervised methods above require labelled sets, and having experts annotate such images is costly and time-consuming. Even in [12,28], the labels acquired from metadata can be imprecise due to the nature of the data and of the treatment-analysis techniques. Other works use transfer learning techniques [1,26].…”
Section: Introduction
confidence: 99%
“…In addition, batch effects can cause images with identical metadata to look different. All of the methods above, besides Ljosa et al. (2013), use deep neural networks trained either in a supervised (Kraus et al., 2016; Godinez et al., 2017) or self-supervised (Godinez et al., 2018; Spiegel et al., 2019) setup on an annotated subset of the data, or make use of a neural network pre-trained on non-cellular images (Ando et al., 2017; Tabak et al., 2019). Using a neural network trained on non-cellular images risks losing domain-specific features, since its features were learned on data unrelated to HCS.…”
Section: Introduction
confidence: 99%