2019
DOI: 10.1007/s00371-019-01769-5
Bilateral counting network for single-image object counting

Cited by 9 publications (7 citation statements) · References 24 publications
“…We compare the proposed Count‐DANet with several counting methods, and it achieves a significant improvement on the different items of the GAME metric; the detailed results are shown in Table 1. Count‐DANet improves GAME(0) by 29.4% over Zhang et al. [15] on this dataset and also outperforms the other methods, except for the method proposed in [19]. Compared with [19], the proposed method achieves a slightly inferior result, with its GAME(0) differing by only 0.1 point, showing that it performs competitively with the method proposed in [19].…”
Section: Methods
confidence: 80%
“…We compare our method with several typical crowd counting methods on the Shanghaitech_A dataset: Marsden et al. [29], Zhang et al. [12], Sindagi and Patel [18], Sam et al. [13], Li et al. [16], SaCNN [14], Li et al. [19], and Zhang et al. [15]; the results are shown in Table 3. As shown in Table 3, the proposed method achieves a 0.3% MAE improvement over Li et al. [19] and obtains the lowest MAE of 85.9 among them, except for the method proposed in [15]. Compared to the method proposed in [15], the result of the proposed method on Shanghaitech_A is lower by 5.9%, while it achieves 29.4% higher performance on TRANCOS.…”
Section: Methods
confidence: 99%
“…The traffic scenario of this dataset is highly complex, and the scale of the counted objects varies widely. The experimental comparison shows that our method is superior to the other state-of-the-art methods (Li et al., 2020, 2023; Fu et al., 2023).…”
Section: Comparisons With State-of-the-art
confidence: 87%
“…The Mall dataset is similar to UCSD: it contains a single scene and a small number of pedestrians. On this dataset, this work achieves the best result compared with [7, 8, 42, 43, 44]. This shows that MSR‐FAN has good generalization ability on general datasets.…”
Section: Methods
confidence: 99%