2022
DOI: 10.1038/s41592-022-01663-4

Cellpose 2.0: how to train your own model

Abstract: Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that mod…
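
The package described in the abstract is the cellpose Python library. Below is a minimal sketch of applying one of its pretrained models out of the box, assuming cellpose 2.x; the 'cyto2' model name, grayscale channel setting, and file name are illustrative assumptions, not taken from the paper.

```python
from cellpose import models, io

# Load one of the pretrained models shipped with Cellpose 2.0
# ('cyto2' is one example model name; gpu=False keeps this CPU-only).
model = models.Cellpose(gpu=False, model_type='cyto2')

# Segment a single grayscale image; diameter=None asks Cellpose to
# estimate the cell diameter automatically. The file name is a placeholder.
img = io.imread('example_image.tif')
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

print(f'{masks.max()} cells segmented')
```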

Cited by 441 publications (446 citation statements)
References 50 publications

“…The choice of FogBank was based on findings by Vicar et al. (2019) that compared it favorably against a number of common contrast microscopy segmentation methods. CellPose is a supervised algorithm for cell segmentation trained on a wide array of community-supplied fluorescent and non-fluorescent microscopy images (Pachitariu and Stringer, 2022; Stringer et al., 2021). For comparison, we define two metrics of performance: binary error, where A_gt and A_pred are the foreground area of the ground truth segmentation and predicted segmentation, respectively, and object count error, where N_gt and N_pred are the number of cells in the ground truth segmentation and predicted segmentation, respectively.…”
Section: Results (mentioning)
confidence: 99%
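
The exact formulas for the two metrics did not survive extraction in the excerpt above, so the relative-error forms used in this short sketch (|A_pred − A_gt| / A_gt and |N_pred − N_gt| / N_gt) are assumptions for illustration only.

```python
import numpy as np

def binary_error(gt_mask, pred_mask):
    """Assumed relative foreground-area error: |A_pred - A_gt| / A_gt."""
    a_gt = np.count_nonzero(gt_mask)      # foreground pixels in ground truth
    a_pred = np.count_nonzero(pred_mask)  # foreground pixels in prediction
    return abs(a_pred - a_gt) / a_gt

def object_count_error(gt_labels, pred_labels):
    """Assumed relative cell-count error: |N_pred - N_gt| / N_gt."""
    n_gt = len(np.unique(gt_labels)) - 1      # exclude background label 0
    n_pred = len(np.unique(pred_labels)) - 1
    return abs(n_pred - n_gt) / n_gt
```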
“…Accordingly, we consider AimSeg as a powerful tool to train novel deep learning models. Indeed, amending segmentation results obtained automatically has proven to be an efficient approach to improve pre-existing models [47].…”
Section: Discussion (mentioning)
confidence: 99%
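
Amending automatic segmentations and retraining on the corrections is the human-in-the-loop workflow that Cellpose 2.0 supports. A minimal sketch of such a retraining step with the cellpose 2.x Python API follows; the file names, channel settings, and training hyperparameters are placeholders, not values from either paper.

```python
from cellpose import models, io

# Start from a pretrained model and fine-tune it on human-corrected masks.
model = models.CellposeModel(gpu=False, model_type='cyto2')

# Images paired with their amended label masks; file names are placeholders.
images = [io.imread(f) for f in ('img01.tif', 'img02.tif')]
labels = [io.imread(f) for f in ('img01_masks.tif', 'img02_masks.tif')]

# Train a custom model from the corrected annotations.
model.train(images, labels,
            channels=[0, 0],      # grayscale images
            n_epochs=100,
            learning_rate=0.1,
            save_path='custom_models')
```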
“…All the cells were inspected. Alternatively, for rabbit and monkey embryos, we used a custom-made cell detection and tracking algorithm (unpublished) based on nuclear segmentation with Cellpose (Pachitariu and Stringer, 2022; Stringer et al., 2021) and semi-automatic shape tracking using Napari (Sofroniew, Nicholas et al., 2022).…”
Section: Methods (mentioning)
confidence: 99%
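
A minimal sketch of the kind of Cellpose-plus-napari workflow described above, assuming the cellpose and napari Python packages; the 'nuclei' model choice and file name are illustrative assumptions, and the tracking step itself is not shown.

```python
import napari
from cellpose import models, io

# Nuclear segmentation with a pretrained Cellpose model ('nuclei' is an assumption).
img = io.imread('embryo_t000.tif')            # placeholder file name
model = models.Cellpose(gpu=False, model_type='nuclei')
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

# Inspect and correct the labels interactively in napari before tracking.
viewer = napari.Viewer()
viewer.add_image(img, name='nuclei')
viewer.add_labels(masks, name='cellpose masks')
napari.run()
```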