2022
DOI: 10.1109/lra.2022.3140817
Q-Attention: Enabling Efficient Learning for Vision-Based Robotic Manipulation

Cited by 31 publications (37 citation statements). References 25 publications.
“…where future rewards are weighted with respect to the discount factor γ ∈ [0, 1). In this paper, we apply our Bingham Policy Parameterization (BPP) to three different algorithms: soft actor-critic (SAC) [11], proximal policy optimization (PPO) [31], and attention-driven robotic manipulation (ARM) [14]. We briefly outline these below.…”
Section: Reinforcement Learning (mentioning, confidence: 99%)
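The discounted-return objective the quoted passage refers to can be sketched as follows. This is a generic illustration of weighting future rewards by powers of γ, not code from any of the cited papers; the function name and default γ are assumptions.

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * r_t by folding backwards over the rewards.

    `gamma` is the discount factor in [0, 1); rewards further in the
    future are weighted by higher powers of gamma.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

Folding backwards avoids explicitly computing powers of γ: each step multiplies the accumulated future return by γ once and adds the current reward.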
“…These pixel locations are used to crop the RGB image and organised point cloud inputs and thus drastically reduce the input size to the next stage of the pipeline; this next stage is an actor-critic next-best pose agent using SAC as the underlying algorithm. For further details on keypoint detection and demo augmentation, we point the reader to [14].…”
Section: Reinforcement Learning (mentioning, confidence: 99%)
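The cropping step described in the passage above can be sketched as a fixed-size window taken around a predicted pixel location. This is a hypothetical illustration of the idea, not code from ARM or Q-Attention; the function name, crop size, and clipping behaviour are assumptions.

```python
import numpy as np

def crop_around_pixel(image, centre, size=16):
    """Return a (size x size) crop of `image` centred on `centre` (row, col).

    The centre is clipped so the window stays inside the image; the same
    window could be applied to an organised point cloud of matching
    resolution, drastically shrinking the input to the next stage.
    """
    h, w = image.shape[:2]
    half = size // 2
    r = int(np.clip(centre[0], half, h - half))
    c = int(np.clip(centre[1], half, w - half))
    return image[r - half:r + half, c - half:c + half]
```

Because the same (row, col) indexes both the RGB image and the organised point cloud, one predicted pixel location selects spatially aligned crops from both modalities.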