2023
DOI: 10.1038/s41597-023-02108-z

Cross-platform dataset of multiplex fluorescent cellular object image annotations

Abstract: Defining cellular and subcellular structures in images, referred to as cell segmentation, is an outstanding obstacle to scalable single-cell analysis of multiplex imaging data. While advances in machine learning-based segmentation have led to potentially robust solutions, such algorithms typically rely on large amounts of example annotations, known as training data. Datasets consisting of annotations which are thoroughly assessed for quality are rarely released to the public. As a result, there is a lack of wi…

Cited by 3 publications (2 citation statements)
References 12 publications
“…Furthermore, we also benchmarked segmentation performance of UNSEG with respect to Cellpose and Mesmer using publicly available, multiplexed imaging tissue datasets acquired using CODEX, Vectra, and Zeiss imaging platforms 36 . These datasets include cell annotations.…”
Section: UNSEG Benchmarking
confidence: 99%
“…Similarly, Supplementary Figures 5 and 6 compare the performance of UNSEG cell segmentation with that of Cellpose and Mesmer for Vectra and Zeiss datasets 36 respectively. The Vectra dataset includes 131 images of size 400 × 400, while the Zeiss dataset consists of nineteen images of size 800 × 800.…”
Section: UNSEG Benchmarking
confidence: 99%