2018 25th IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2018.8451187
Multiclass Weighted Loss for Instance Segmentation of Cluttered Cells

Abstract: We propose a new multiclass weighted loss function for instance segmentation of cluttered cells. We are primarily motivated by the need of developmental biologists to quantify and model the behavior of blood T-cells, which might help us understand their regulation mechanisms and ultimately help researchers in their quest to develop an effective immunotherapy cancer treatment. Segmenting individual touching cells in cluttered regions is challenging as the feature distribution on shared borders and cell…
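The full text is not reproduced here, but the core idea the abstract names — a pixel-wise weighted cross-entropy over multiple semantic classes — can be sketched. Below is a minimal PyTorch sketch, assuming a per-pixel weight map that up-weights hard regions such as shared cell borders; the function name, signature, and weighting scheme are illustrative, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def weighted_multiclass_ce(logits, target, pixel_weights):
        """Pixel-wise weighted multiclass cross-entropy (illustrative).

        logits:        (N, C, H, W) raw network outputs
        target:        (N, H, W)    integer class labels in [0, C)
        pixel_weights: (N, H, W)    per-pixel weights, e.g. emphasizing
                                    touching borders between cells
        """
        # per-pixel negative log-likelihood, kept unreduced
        nll = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
        # weight each pixel, then normalize by the total weight
        return (pixel_weights * nll).sum() / pixel_weights.sum()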

Cited by 71 publications (53 citation statements)
References 16 publications
“…Quantitative cell biology requires simultaneous measurements of different cellular properties, such as shape, position, RNA expression and protein expression [1]. A first step towards assigning these properties to single cells is the segmentation of an imaged volume into cell bodies, usually based on a cytoplasmic or membrane marker [2][3][4][5][6][7][8]. This step can be straightforward when cells are sufficiently separated from each other, for example when a fluorescent marker is expressed sparsely in a subset of cells, or in cultures where cells are dissociated from tissue.…”
Section: Introduction
confidence: 99%
“…Methods to achieve cell body segmentation typically trade off flexibility for automation. In order of increasing automation and decreasing flexibility, these methods range from fully-manual labelling [9], to user-customized pipelines involving a sequence of image transformations with user-defined parameters [2,8,10,11], to fully automated methods based on deep neural networks with parameters estimated on large training datasets [4,5,7,12,13]. Fully automated methods have many advantages, such as reduced human effort, increased reproducibility and better scalability to big datasets from large screens.…”
Section: Introduction
confidence: 99%
“…We formulate the instance segmentation problem as a semantic segmentation problem where we obtain object segmentation and separation of cells at once. To transform an instance ground truth to a semantic ground truth, we adopted the three semantic classes scheme of [4]: image background, cell interior, and touching region between cells. This is suitable as the intensity distribution of our images in those regions is multi-modal.…”
Section: Segmentation Methods
confidence: 99%
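For readers wanting to reproduce that label construction, one way to derive the three-class ground truth (background, cell interior, touching region) from an instance label map is sketched below. It assumes a small-neighborhood test with SciPy and is not necessarily the cited authors' exact construction.

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def instance_to_three_class(instances, size=3):
        """Map an instance label image (0 = background, 1..K = cells)
        to three semantic classes: 0 background, 1 cell interior,
        2 touching region between adjacent cells (illustrative)."""
        lab = instances.astype(np.int64)
        semantic = np.zeros(lab.shape, dtype=np.uint8)
        semantic[lab > 0] = 1  # cell interior
        # largest label in each pixel's neighborhood (background = 0)
        hi = maximum_filter(lab, size=size)
        # smallest nonzero label; background replaced by a sentinel
        sentinel = lab.max() + 1
        lo = minimum_filter(np.where(lab > 0, lab, sentinel), size=size)
        # a cell pixel whose neighborhood holds two distinct nonzero
        # labels lies on a shared border -> touching class
        touching = (lab > 0) & (lo < sentinel) & (hi != lo)
        semantic[touching] = 2
        return semantic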
“…Neuron size and neuron shape: Precise computation and representation of neuron size and neuron shape required accurate segregation of the cell bodies, which is challenging by itself at the moment [44,45] (see also, for instance, the 2018 Data Science Bowl challenge from the Kaggle competition [46]) and beyond the scope of this paper. Instead, we only used rough measures that reflect mean neuron size and 2D shape for the individual slices.…”
Section: Definition of Used Features and Feature Sets
confidence: 99%
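As an illustration of such rough per-slice measures, size and simple 2D shape statistics can be pulled from a binary mask with scikit-image. The particular feature choices below (mean area, mean eccentricity) are assumptions for illustration, not the features used in that study.

    import numpy as np
    from skimage.measure import label, regionprops

    def rough_size_shape(binary_mask):
        """Mean object size and a simple 2D shape measure for one slice
        (hypothetical feature choices, for illustration only)."""
        regions = regionprops(label(binary_mask))
        if not regions:
            return {"mean_area": 0.0, "mean_eccentricity": 0.0}
        return {
            "mean_area": float(np.mean([r.area for r in regions])),
            "mean_eccentricity": float(np.mean([r.eccentricity for r in regions])),
        }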