In recent years, Artificial Intelligence (AI) technology has seen significant
growth due to advancements in machine learning (ML) and data processing, as well as the
availability of large amounts of data. The integration of AI with eXtended Reality (XR)
technologies such as Virtual Reality (VR) and Augmented Reality (AR) can create
innovative solutions and provide intuitive interactions and immersive experiences across
various sectors, including education, entertainment, and healthcare. This paper
describes the Voice-driven interaction in XR spaces (VOXReality)* initiative,
funded by the European Commission, which integrates language- and vision-based AI
through unidirectional or bidirectional exchanges to drive AR and VR, enabling natural
human interaction with XR systems and creating multi-modal XR experiences. It aligns the
parallel progress of Natural Language Processing (NLP) and Computer Vision (CV) to
design novel models and techniques that integrate language and visual understanding
with XR, providing a holistic understanding of goals, environment, and context.
VOXReality plans to validate its visionary approaches through three use cases: an XR
personal assistant, real-time verbal communication in virtual conferences, and an
immersive experience for the audience of theatrical plays.

* Funded by the European Union (Grant agreement ID: 101070521)