2022
DOI: 10.1080/1612197x.2022.2057570
Motor imagery during action observation in virtual reality: the impact of watching myself performing at a level I have not yet achieved

Abstract: Feedforward modelling, the creation of one's own behaviour that is potentially achievable in the future, can support motor performance and learning. While this has been shown for sequences of motor actions, it remains to be tested whether feedforward modelling is beneficial for single complex motor actions. Using an immersive, state-of-the-art, low-latency Cave Automatic Virtual Environment (CAVE), we compared motor imagery during action observation (AOMI) of oneself performing at one's current skill level agai…


Cited by 15 publications (13 citation statements)
References 74 publications
“…For example, Aoyama et al (2020) demonstrated that, after reaching a plateau in their learning of a ball rotation task, participants made significant improvements using AO + MI of a moderately better performance compared to a control and two other AO + MI conditions, which displayed either the learners' current or a significantly better than current performance. Frank et al (2022) found a similar result with novice participants practising squats. In their AO + MI protocols, participants observed an avatar depicting themselves performing either one of their own previously executed squats (Me-Novice), or an avatar of themselves that had been edited in virtual reality to perform a skilled squat (Me-Skilled).…”
Section: AO + MI Training at Advanced Skill Levels (supporting)
Confidence: 66%
“…Pilot testing confirmed that an 8 s execution period provided a target speed that was not achievable by novices prior to physical practice. Two AO+MI studies have recently shown the advantage of using a model that displays a future as yet unattained performance level (Frank et al, 2022;Aoyama et al, 2020). This pace could be achieved, however, after a sustained period of physical training.…”
Section: Stimuli and Apparatus (mentioning)
Confidence: 99%
“…In addition, feedforward self-modelling has been used to promote the acquisition of new motor skills in the observer by observing an edited video that shows the observer performing a motor task at a higher skill level (Ste-Marie et al, 2011). Moreover, advanced feedforward self-modelling has been used to generate an avatar resembling the observer in virtual reality (VR); self-efficacy and performance increase when the observer watches the avatar perform the action the observer wishes to achieve in the future (Frank et al, 2023). Therefore, combining AI and VR feedforward modelling technologies may create avatars whose entire body, including the face, closely resembles the observer, which could further promote motor learning.…”
Section: Discussion (mentioning)
Confidence: 99%