2018
DOI: 10.1038/s41592-018-0261-2

U-Net: deep learning for cell counting, detection, and morphometry

Cited by 1,453 publications (1,194 citation statements)
References 13 publications
“…We did not introduce a different weighting scheme for edges between two cells, giving all boundary pixels the same weight regardless of their context (background or another cell). Also, a U-Net plugin was independently developed for ImageJ for running generic cell segmentation and quantification tasks (42). Also, we apply additional data augmentation using elastic deformations, as discussed by the authors (16).…”
Section: U-Net (mentioning, confidence 99%)
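The excerpt above refers to the elastic-deformation augmentation described in the U-Net papers. As a rough illustration only, the sketch below shows one common way to implement such a deformation with SciPy; the function name elastic_deform and the parameters alpha and sigma are assumptions for this example, not the settings or code of the cited authors.

```python
# Minimal sketch (not the authors' code): random elastic deformation of a 2D
# image, in the spirit of the augmentation described in the U-Net papers.
# alpha (displacement magnitude) and sigma (smoothness) are illustrative values.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=30.0, sigma=4.0, rng=None):
    """Warp a 2D array with a smooth random displacement field."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape

    # Smooth random displacement fields so neighbouring pixels move together.
    dx = gaussian_filter(rng.uniform(-1, 1, size=(h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, size=(h, w)), sigma) * alpha

    # Resample the image at the displaced coordinates (bilinear interpolation).
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(image, [y + dy, x + dx], order=1, mode="reflect")
```

In practice the same displacement field would be applied to the image and its label mask (using order=0, nearest-neighbour interpolation, for the mask) so that annotations stay aligned with the deformed image.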
“…The source code of our U-Net implementation can be found at https://github.com/carpenterlab/2019_caicedo_cytometryA, with an optional CellProfiler 3.0 plugin of this nucleus-specific model (41). Also, a U-Net plugin was independently developed for ImageJ for running generic cell segmentation and quantification tasks (42).…”
Section: U-Net (mentioning, confidence 99%)
“…This is a complicated task even for humans, and as classical computational approaches have not been sufficiently accurate, fluorescent staining with its clean nuclear signal has been preferred. However, recent breakthroughs in deep learning have led to impressive performance on image analysis tasks in general (Angermueller et al 2016; Fan and Zhou 2016), and segmentation from cell images in particular (Van Valen et al 2016; Falk et al 2019; Christiansen et al 2018; Jones et al 2017). This motivates a re-evaluation of whether nuclear segmentation could be achieved without a DNA stain.…”
Section: Introduction (mentioning, confidence 99%)
“…Recently established histological methods now allow resolving the three-dimensional (3D) structures of microglia in large mammalian brain samples (Chung et al, 2013; Grabow, Yoder, & Mote, 2000; Hama et al, 2015; Ke, Fujimoto, & Imai, 2013; Lai et al, 2018). A few recent corresponding computational approaches exist to analyze 3D morphologies (Falk et al, 2019; Heindl et al, 2018) but are either not designed for an unbiased high-throughput analysis or not specialized on microglia morphologies in more complex human postmortem samples. To address this gap, we developed a computational pipeline for Microglia and Immune Cell Morphological Analysis and Classification (MIC-MAC; https://micmac.lcsb.uni.lu/ and Supporting Information Figure S1) that captures morphological heterogeneity of microglia at single-cell level in large 3D high-resolution confocal stacks from mouse and human brain sections immunolabeled for cell-type specific morphological markers.…”
Section: Introduction (mentioning, confidence 99%)