2023
DOI: 10.3389/fnins.2023.1270850
An image caption model based on attention mechanism and deep reinforcement learning

Tong Bai,
Sen Zhou,
Yu Pang
et al.

Abstract: Image caption technology aims to convert visual features of images, extracted by computers, into meaningful semantic information, so that computers can generate text descriptions resembling human perception and support tasks such as image classification, retrieval, and analysis. In recent years, the performance of image captioning has been significantly enhanced by the introduction of the encoder-decoder architecture from machine translation and the use of deep neural networks. However, several challe…
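To illustrate the attention idea the title refers to, a minimal pure-Python sketch of scaled dot-product attention over a single query is shown below. This is a generic illustration of the mechanism, not the paper's actual model; the function name and toy vectors are hypothetical.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention for one query (illustrative sketch).

    query: list[float]; keys, values: lists of list[float].
    Returns the attention-weighted sum of values and the weights.
    """
    d = len(query)
    # similarity of the query to each key, scaled by sqrt(d)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # numerically stable softmax over the scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # weighted combination of the value vectors
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# A query aligned with the first key attends mostly to the first value.
out, weights = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                         [[1.0, 0.0], [0.0, 1.0]])
```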

Cited by 2 publications
(1 citation statement)
References 38 publications
“…When facing the cooperative interception of two interceptors, the space and timing of the HV maneuver are further compressed. It is one effective solution to obtain reasonable maneuver strategies in complex game confrontation scenarios through deep reinforcement learning, which can solve the sequential decision-making problem by gradually improving the maneuver strategies based on the reward feedback in the interaction with the environment ( Bai et al, 2023 ).…”
Section: Methods
confidence: 99%
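The citing passage describes deep reinforcement learning as gradually improving a strategy from reward feedback during interaction with an environment. A minimal sketch of that loop is a REINFORCE-style policy-gradient update on a toy two-armed bandit; this is a generic illustration under assumed reward probabilities, not the cited paper's method.

```python
import math
import random

def reinforce_bandit(rewards=(0.2, 0.8), steps=2000, lr=0.1, seed=0):
    """REINFORCE on a 2-armed bandit (illustrative sketch).

    A softmax policy over two action preferences is improved step by step
    from scalar reward feedback, mirroring the 'gradually improving the
    strategy based on reward' idea in the quoted passage.
    """
    rng = random.Random(seed)
    prefs = [0.0, 0.0]  # policy parameters (action preferences)
    for _ in range(steps):
        # current softmax policy
        exps = [math.exp(p) for p in prefs]
        z = sum(exps)
        probs = [e / z for e in exps]
        # sample an action and observe a stochastic Bernoulli reward
        a = 0 if rng.random() < probs[0] else 1
        r = 1.0 if rng.random() < rewards[a] else 0.0
        # policy-gradient step: d log pi(a)/d pref_i = 1[i == a] - pi(i)
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += lr * r * grad
    exps = [math.exp(p) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

probs = reinforce_bandit()
# the learned policy should come to favour arm 1, the higher-reward arm
```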