2017
DOI: 10.1007/978-3-319-59063-9_7

Avoiding Over-Detection: Towards Combined Object Detection and Counting

Abstract: Further information on publisher's website: https://doi.org/10.1007/978-3-319-59063-9_7. Publisher's copyright statement: The final publication is available at Springer via https://doi.org/10.1007/978-3-319-59063-9_7. Additional information: Use policy: The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-profit purposes provided that: • a full bibliographic reference is made to the origin…

Cited by 4 publications (2 citation statements)
References 18 publications
“…variations upon thresholding-based segmentation, which are limited in performance due to intensity inhomogeneity and nuclei/cell clustering [5]; H-minima transforms [8] and voting-based techniques [18], which both show good results but are sensitive to parameters; gradient vector flow tracking and thresholding [9,10], PDE-based methods that require strong stopping and reinitialisation criteria, set in advance, to achieve smooth curves for tracking; Laplacian of Gaussian filters [17], which have low computational complexity but struggle with variation in the size, shape and rotation of objects within an image; graph-cut optimisation approaches [12], which require initial seed points for each nucleus; and convolutional neural networks, which generally require copious amounts of manually labelled training data and struggle to separate overlapping objects [7]. In our experience, nuclei size (scale) is an important contributor to (1) whether or not an algorithm is successful, (2) how robust an algorithm is to image variation, and (3) the running time of an algorithm. We have also found that many excellent algorithms exist for detecting small blobs, on the scale of a few pixels in diameter, that fail or are less reliable for medium or large blobs, i.e.…”
Section: Introduction
confidence: 89%
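
The single-scale behaviour of the Laplacian of Gaussian detector mentioned in the passage above can be sketched in a few lines. The following is a minimal illustration, not the cited papers' implementation; the sigma and threshold values, the helper name log_blobs, and the synthetic test image are all assumptions made for the example.

# Minimal single-scale LoG blob detector (illustrative sketch, not the
# cited papers' method). Bright blobs on a dark background give strong
# negative LoG responses, so we negate and look for local maxima.
import numpy as np
from scipy import ndimage

def log_blobs(image, sigma=3.0, threshold=0.005):
    """Return (row, col) coordinates of blobs of radius ~ sigma*sqrt(2)."""
    # Scale-normalised LoG response; negated so blob centres become maxima.
    response = -sigma**2 * ndimage.gaussian_laplace(image.astype(float), sigma)
    # A pixel is a blob centre if it is the maximum of its neighbourhood
    # and its response exceeds the threshold.
    local_max = ndimage.maximum_filter(response, size=int(2 * sigma) + 1)
    peaks = (response == local_max) & (response > threshold)
    return np.argwhere(peaks)

# Usage on an assumed synthetic image: two Gaussian spots on a dark field.
img = np.zeros((64, 64))
for r, c in [(16, 16), (40, 44)]:
    img[r, c] = 1.0
img = ndimage.gaussian_filter(img, 3.0)
print(log_blobs(img, sigma=3.0, threshold=0.001))  # -> [[16 16] [40 44]]

Because the kernel is tuned to a single sigma, blobs much larger or smaller than roughly sigma*sqrt(2) in radius go undetected, which is exactly the sensitivity to object size that the quoted passage describes.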
“…Non-Maximum Suppression (NMS) is an algorithm used in the post-processing of object detection [12,14]. NMS performs well in images where objects do not overlap much, but its performance degrades in dense scenes where objects overlap and are only partially visible [4, 8-10, 18]. Therefore, to improve the detection of highly dense, overlapping objects, the optimum IoU and confidence-score thresholds must be set appropriately [7,11,19,21].…”
Section: Introduction
confidence: 99%
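
As a concrete reference for the quoted statement, below is a minimal greedy NMS sketch in Python. It is an illustration under assumed conventions (boxes as (x1, y1, x2, y2) corner coordinates, the function name nms, and the example threshold 0.5), not the implementation used in the paper or the citing work.

# Greedy Non-Maximum Suppression (illustrative sketch). Keep the
# highest-scoring box, drop any remaining box whose IoU with it
# exceeds iou_thresh, then repeat on the survivors.
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of (x1, y1, x2, y2); returns kept indices."""
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the winning box with every remaining candidate.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                 (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter)
        # Keep only candidates that overlap the winner below the threshold.
        order = order[1:][iou <= iou_thresh]
    return keep

# Two heavily overlapping detections of one object, plus a separate one:
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores, iou_thresh=0.5))  # -> [0, 2]

Lowering iou_thresh suppresses more aggressively and can erase genuinely distinct but overlapping objects, while raising it leaves duplicate detections, the over-detection the paper's title refers to; this trade-off is why the quoted passage stresses setting the IoU and confidence thresholds carefully.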