2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.9
Count-ception: Counting by Fully Convolutional Redundant Counting

Abstract: Counting objects in digital images is a process that should be replaced by machines. This tedious task is time consuming and prone to errors due to fatigue of human annotators. The goal is to have a system that takes as input an image and returns a count of the objects inside and justification for the prediction in the form of object localization. We repose a problem, originally posed by Lempitsky and Zisserman, to instead predict a count map which contains redundant counts based on the receptive field of a sm…
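A short worked equation may help unpack the redundant-count idea in the abstract; the notation below (count map F, receptive field size r) is illustrative rather than taken verbatim from the paper:

```latex
% Redundant-count recovery (illustrative notation, assuming an r x r receptive field
% evaluated at every spatial position, i.e. stride 1 with suitable padding).
% Each output position predicts how many objects lie inside its r x r window,
% so an object covered by r^2 windows is counted r^2 times:
\[
  F(x,y) = \#\{\text{objects inside the } r \times r \text{ window at } (x,y)\},
  \qquad
  \hat{N} = \frac{1}{r^{2}} \sum_{x,y} F(x,y).
\]
```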

Cited by 121 publications (134 citation statements)
References 16 publications
“…The pixels are then summed and divided by the average synthetic cell area to produce a count. This is similar to Cohen et al.'s [19] state-of-the-art approach, with the distinction that ours is unsupervised. In a second dataset, depicted in Fig. 2b), we simply create round-shaped cells of varying intensity in the blue channel, and a two-dimensional Gaussian map with a standard deviation of 3 in the green and 1 in the red channel.…”
Section: VGG Cell Dataset (supporting)
confidence: 83%
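The sum-and-divide rule quoted in this statement is easy to make concrete. The sketch below is only an illustration of that rule; the image synthesis (cell radius, Gaussian standard deviations of 3 and 1, image size, and all variable names) is an assumption for demonstration, not the citing paper's exact generator.

```python
import numpy as np

def make_synthetic_image(n_cells=20, size=128, radius=4, rng=None):
    """Toy synthetic image: round cells in blue, Gaussian maps in green/red."""
    rng = np.random.default_rng(rng)
    img = np.zeros((size, size, 3), dtype=np.float32)  # R, G, B channels
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_cells):
        cy, cx = rng.integers(radius, size - radius, size=2)
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        img[..., 2] += (d2 <= radius ** 2) * rng.uniform(0.5, 1.0)  # round cell, blue
        img[..., 1] += np.exp(-d2 / (2 * 3.0 ** 2))                 # Gaussian, std 3, green
        img[..., 0] += np.exp(-d2 / (2 * 1.0 ** 2))                 # Gaussian, std 1, red
    return np.clip(img, 0, 1)

def count_by_area(mask, avg_cell_area):
    # Sum the foreground pixels and divide by the average synthetic cell area.
    return mask.sum() / avg_cell_area

img = make_synthetic_image(n_cells=20, radius=4)
mask = img[..., 2] > 0           # foreground pixels in the blue (cell) channel
avg_area = np.pi * 4 ** 2        # expected area of one synthetic cell
print(round(count_by_area(mask, avg_area)))  # roughly 20 when cells barely overlap
```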
“…The red and green channels encode the local angle ϕ of the worm using [cos(2·ϕ)/2 + 1, sin(2·ϕ)/2 + 1]. The results of a modified dataset, where the total number of …”
Table 2: Prediction error for neurons: We compare the mean relative error of both the count-ception network as proposed by Cohen et al. [19] and our cycleGAN. The first two rows show the average count and the corresponding standard deviation of both dead and live neurons for each of the three experts.
Section: Live-Dead Assay of C. elegans (mentioning)
confidence: 99%
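The angle encoding quoted in this statement can be written out directly; the snippet below implements the formula exactly as quoted, and the variable names and example angles are assumptions added for illustration.

```python
import numpy as np

def encode_angle(phi):
    """phi: array of local worm angles in radians -> (red, green) channel values.

    red = cos(2*phi)/2 + 1, green = sin(2*phi)/2 + 1, as quoted above.
    Doubling the angle makes the encoding identical for phi and phi + 180 degrees,
    which suits an undirected worm segment.
    """
    red = np.cos(2.0 * phi) / 2.0 + 1.0
    green = np.sin(2.0 * phi) / 2.0 + 1.0
    return red, green

phi = np.deg2rad(np.array([0.0, 45.0, 90.0, 180.0]))
red, green = encode_angle(phi)
print(np.round(red, 2))    # [1.5 1.  0.5 1.5] -> 0 deg and 180 deg encode identically
print(np.round(green, 2))  # [1.  1.5 1.  1. ]
```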
“…Being able to interrogate high-dimensional endophenotypes in a GWAS framework may yield a more rapid uncovering of genetic variants directly linked to the biological mechanisms that underpin clinically measured outcomes. Many such methods to derive phenotypes from images are currently being developed 19,22,37,38.…”
Section: Discussion (mentioning)
confidence: 99%
“…Here we describe the development of a deep neural network for quantitative analysis of microscopic images of cells expressing the ER stress biosensor XBP1-TagRFP. Cell counting was realized with an approach based on the paper by Cohen et al. (7), which is aimed at a very similar research field. The authors proposed a way to count multiple small objects (as compared to the image size) pertaining to the same category using fully convolutional neural networks (FCNN), which predict the so-called redundant count maps.…”
Section: Introduction (mentioning)
confidence: 99%
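For readers unfamiliar with the redundant-count-map formulation this statement refers to, a minimal fully-convolutional sketch is given below. The layer sizes, the patch size of 32, and the padding scheme are illustrative assumptions, not the architecture published by Cohen et al.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCountFCN(nn.Module):
    """Toy fully-convolutional network that outputs one predicted count per position."""
    def __init__(self, patch=32):
        super().__init__()
        self.patch = patch
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # one count value per spatial position
        )

    def forward(self, x):
        # Pad so objects near the border still fall inside patch-sized target windows.
        x = F.pad(x, [self.patch // 2] * 4)
        return self.net(x)

model = TinyCountFCN(patch=32)
image = torch.rand(1, 3, 128, 128)   # dummy RGB image
count_map = model(image)             # redundant count map (one value per position)
# If the training targets count the objects inside a 32 x 32 window at each
# position, every object contributes to 32*32 target positions, so the total
# count is recovered by summing the map and dividing by the window area:
total_count = count_map.sum() / (32 * 32)
print(count_map.shape, float(total_count))
```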