2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
DOI: 10.1109/ismar52148.2021.00039

PAVAL: Position-Aware Virtual Agent Locomotion for Assisted Virtual Reality Navigation

Figure 1: First-person views of user navigation assisted by virtual agents in three representative scenes. The user is (a) watching a statue at a 1-meter distance; (b) learning machine functionality at a 2-meter distance; (c) studying how to play pommel horse at a 3-meter distance. The virtual agent in each scene automatically performs locomotion during user navigation.

Cited by 9 publications (7 citation statements)
References 38 publications

“…Designing anthropomorphic IVAs to communicate with users using both verbal and non-verbal cues in VR and AR has gained much attention (Holz et al., 2011; Norouzi et al., 2019; Norouzi et al., 2020). Visual embodiment and social behaviors such as the agent's gestures and locomotion could improve perceived social presence in both AR (Kim et al., 2018) and VR (Ye et al., 2021). Li et al. (2018) investigated how embodiment and postures influence human-agent interaction in Mixed Reality (MR), finding that people treated virtual humans similarly to real persons.…”
Section: Multimodal Communication Of Embodied Intelligent Virtual Age...
Citation type: mentioning
confidence: 99%
“…Nevertheless, using strictly predefined tours (Ibanez et al., 2003b; Chrastil and Warren, 2013; Liszio and Masuch, 2016) is still common. In addition to taking full responsibility for wayfinding, VAs can also function as companions, as exemplified by Ye et al. (2021), while maintaining a "location-based sense of contextuality" (Ibanez et al., 2003b) when following the user through the IVE. This context awareness enhances user-agent interaction by tailoring the agent's shared knowledge and actions to the user's current location.…”
Section: Characteristic
Citation type: mentioning
confidence: 99%
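
The "location-based sense of contextuality" mentioned above amounts to the companion agent tailoring what it shares with the user to the region of the virtual environment the user currently occupies. The Python sketch below illustrates that idea only in the abstract; the points of interest, trigger radii, remarks, and the comment_for helper are hypothetical placeholders, not content or code from PAVAL or the cited systems.

```python
# Hypothetical points of interest in a virtual environment: name, centre (x, y),
# trigger radius (metres), and a location-specific remark for the companion agent.
# Names, positions, and remarks are placeholders, not content from the cited work.
POINTS_OF_INTEREST = [
    ("statue",       (2.0, 5.0), 1.5, "From here you can see the statue's carvings up close."),
    ("machine",      (8.0, 1.0), 2.0, "This panel controls the machine's main functions."),
    ("pommel_horse", (4.0, 9.0), 3.0, "The pommel horse routine is best watched from this side."),
]

def comment_for(user_pos):
    """Return the remark of the nearest point of interest whose trigger radius
    contains the user, or None when no region applies."""
    best = None
    for name, (cx, cy), radius, remark in POINTS_OF_INTEREST:
        dist = ((user_pos[0] - cx) ** 2 + (user_pos[1] - cy) ** 2) ** 0.5
        if dist <= radius and (best is None or dist < best[0]):
            best = (dist, remark)
    return best[1] if best else None

print(comment_for((2.5, 5.5)))  # inside the statue region -> statue remark
print(comment_for((0.0, 0.0)))  # outside every region -> None
```

In a full system the returned remark would feed the agent's dialogue and animation controllers rather than being printed.
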
“…One possible design of this hybrid role is given in Section 3.2. Here, we also incorporate socially compliant behavior (van der Heiden et al., 2020) for the VA, taking interpersonal distance preferences into account so that the VA approaches users at an appropriate distance and adapts its trajectories to user needs, as suggested by Jan et al. (2009) and Ye et al. (2021).…”
Section: Characteristic
Citation type: mentioning
confidence: 99%
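
The statement above describes a VA that respects interpersonal distance preferences while approaching the user. As a rough illustration of one way such distance-aware locomotion could be computed (not the actual method of PAVAL, Jan et al., or van der Heiden et al.), the Python sketch below picks a goal point at an assumed preferred distance in front of the user and caps each step so the agent never enters an assumed minimum-distance band; all numeric values and function names are illustrative.

```python
import math

# Illustrative interpersonal-distance settings (metres and m/s). These values
# are assumptions for the sketch, not parameters from PAVAL or the cited work.
PREFERRED_DISTANCE = 1.5   # where the agent tries to stand relative to the user
MIN_DISTANCE = 1.0         # personal-space band the agent must not enter
MAX_SPEED = 1.2            # cap on the agent's walking speed

def approach_target(user_pos, user_facing):
    """Goal point at the preferred distance in front of the user, so the
    agent stays visible without crowding the user."""
    fx, fy = user_facing
    norm = math.hypot(fx, fy) or 1.0
    return (user_pos[0] + fx / norm * PREFERRED_DISTANCE,
            user_pos[1] + fy / norm * PREFERRED_DISTANCE)

def locomotion_step(agent_pos, user_pos, user_facing, dt):
    """One update tick: walk toward the goal, stopping if the agent is at the
    goal or would intrude on the user's personal space."""
    gx, gy = approach_target(user_pos, user_facing)
    dx, dy = gx - agent_pos[0], gy - agent_pos[1]
    dist_to_goal = math.hypot(dx, dy)
    dist_to_user = math.hypot(user_pos[0] - agent_pos[0],
                              user_pos[1] - agent_pos[1])
    if dist_to_goal < 1e-3 or dist_to_user <= MIN_DISTANCE:
        return agent_pos
    step = min(MAX_SPEED * dt, dist_to_goal)
    return (agent_pos[0] + dx / dist_to_goal * step,
            agent_pos[1] + dy / dist_to_goal * step)

# Example tick: agent at the origin, user 4 m away and facing the agent.
print(locomotion_step((0.0, 0.0), (4.0, 0.0), (-1.0, 0.0), dt=0.1))
```

A production agent would additionally plan around scene geometry and other users; the proportional step rule here only conveys the distance-keeping constraint.
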
“…One of the requirements [10] for an efficient means of reviewing football matches within VR is the ability to recreate believable virtual agent behaviours, given the rich animations needed to visualise virtual football players. To this end, we used AI-based motion retargeting algorithms from [11] and [12] as the basis for extending the set of football actions that can be replayed through our own animation controller in the context of football matches [13].…”
Section: Related Work
Citation type: mentioning
confidence: 99%
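
The citing paper builds on AI-based motion retargeting to extend its set of replayable football actions. For orientation only, the Python sketch below shows the naive baseline such methods improve upon: copying per-joint rotations between skeletons with matching joint names and rescaling the root trajectory by body proportions. The data structures and hip-height scaling rule are assumptions for illustration, not the algorithms of [11]–[13].

```python
# A naive motion-retargeting baseline: reuse per-joint rotations from a source
# skeleton on a target skeleton with the same joint names, and scale the root
# translation by the ratio of hip heights so stride length roughly matches.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Quaternion = Tuple[float, float, float, float]  # (w, x, y, z)
Vector3 = Tuple[float, float, float]

@dataclass
class Pose:
    root_position: Vector3
    joint_rotations: Dict[str, Quaternion]  # keyed by joint name

def retarget_clip(clip: List[Pose],
                  source_hip_height: float,
                  target_hip_height: float) -> List[Pose]:
    """Retarget an animation clip by copying joint rotations and rescaling
    the root trajectory to the target character's proportions."""
    scale = target_hip_height / source_hip_height
    out = []
    for pose in clip:
        x, y, z = pose.root_position
        out.append(Pose(root_position=(x * scale, y * scale, z * scale),
                        joint_rotations=dict(pose.joint_rotations)))
    return out

# Example: a one-frame "clip" retargeted from a 0.95 m-hip source to a 1.05 m-hip target.
frame = Pose(root_position=(0.0, 0.95, 1.2),
             joint_rotations={"hips": (1.0, 0.0, 0.0, 0.0)})
print(retarget_clip([frame], source_hip_height=0.95, target_hip_height=1.05)[0])
```

Learned retargeting replaces this fixed copy-and-scale rule with a model that accounts for differing skeleton topologies and body shapes, which is what makes the richer football animations feasible.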