2022
DOI: 10.1109/tcsvt.2021.3067449

Task-Adaptive Attention for Image Captioning

Cited by 175 publications (38 citation statements)
References 32 publications
“…Among the seven features, three characteristics are below the standard deviation and two characteristics are between the standard deviation and twice the standard deviation. This value can be obtained by formulas (12) to (16) with similar results for nearly 4 features. Like the statistical results of Fig.…”
Section: Results (supporting)
confidence: 62%
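The quoted result tallies how the citing paper's seven similarity features fall into standard-deviation bands. As a rough illustration only (the cited formulas (12) to (16) are not reproduced in the snippet, and the feature values below are hypothetical), the banding could be counted like this in Python:

    import statistics

    def count_sigma_bands(values):
        # Count how many feature values lie below one standard deviation
        # and how many lie between one and two standard deviations.
        # Illustrative banding only; not the cited paper's formulas.
        sigma = statistics.stdev(values)
        below = sum(1 for v in values if abs(v) < sigma)
        between = sum(1 for v in values if sigma <= abs(v) < 2 * sigma)
        return below, between

    # Hypothetical scores for the seven features discussed in the quote.
    features = [0.4, 0.9, 1.3, 2.1, 0.2, 1.8, 0.7]
    print(count_sigma_bands(features))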
“…Next, we use formulas (12) and (13) to calculate the feature quantity threshold Tσ that takes human visual error into account. After obtaining the feature threshold Tσ, calculate the overall threshold T according to formulas (14) to (16). In the formulas, Mo(·) represents the mode, Q1(·) is the first quartile, and wk is the weight of the feature.…”
Section: Trademark Similarity Comparison System Based on Visual Weight (mentioning)
confidence: 99%
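The threshold pipeline named in this quote (a per-feature threshold Tσ adjusted for human visual error, then an overall threshold T built from the mode Mo(·), the first quartile Q1(·), and feature weights wk) can be sketched as follows. The exact formulas (12) to (16) are not given in the snippet, so the visual-error correction and the way the statistics are combined below are assumptions for illustration, not the cited paper's definitions.

    import statistics

    def feature_threshold(values, visual_error=0.05):
        # Per-feature threshold T_sigma. The visual-error correction of
        # formulas (12)-(13) is not reproduced in the snippet; here it is
        # assumed to widen a mean-plus-one-sigma cutoff by a small margin.
        return (statistics.mean(values) + statistics.stdev(values)) * (1 + visual_error)

    def overall_threshold(per_feature_values, weights, visual_error=0.05):
        # Overall threshold T from the per-feature thresholds T_sigma and the
        # statistics named in the quote: mode Mo(.), first quartile Q1(.),
        # and feature weights w_k. The combination in formulas (14)-(16) is
        # assumed here to be a weighted sum capped by T_sigma per feature.
        t = 0.0
        for values, w_k in zip(per_feature_values, weights):
            t_sigma = feature_threshold(values, visual_error)
            mo = statistics.mode(values)
            q1 = statistics.quantiles(values, n=4)[0]  # first quartile
            t += w_k * min(t_sigma, (mo + q1) / 2.0)
        return t

    # Hypothetical feature samples and weights, for illustration only.
    samples = [[0.2, 0.4, 0.4, 0.6], [1.0, 1.2, 1.2, 1.5], [0.3, 0.3, 0.5, 0.9]]
    weights = [0.5, 0.3, 0.2]
    print(overall_threshold(samples, weights))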
“…Different from conventional image (Liu et al., 2020; Li et al., 2020; Yan et al., 2019, 2020a, 2021) or video captioning (Deng et al., 2021; Tu et al., 2017, 2020; Yan et al., 2020b), change captioning addresses two-image captioning, especially to describe their difference. Jhamtani et al. (Jhamtani and Berg-Kirkpatrick, 2018) is the first work for change captioning.…”
Section: Related Work (mentioning)
confidence: 99%
“…Recently, deep learning [5-11] models have attained significant advancements in the field of medical image analysis by training on enough labeled data and fine-tuning their millions of parameters [12, 13]. Therefore, it is becoming increasingly important to use deep learning models to analyze CXR images of COVID-19 infected patients, to relieve the shortage of medical resources and the overload of doctors.…”
Section: Introduction (mentioning)
confidence: 99%