This work presents a novel 3D user interface for an immersive virtual reality desktop, together with an empirical analysis of the interface under three interaction modes. The proposed dual-layer 3D user interface lets users interact with multiple screens arranged within a curved 360-degree field of view. A downward gaze raises the interaction layer, which facilitates several traditional desktop tasks. The interface is evaluated with three interaction modes: point-and-click, controller-based direct manipulation, and a gesture-based user interface. A comprehensive mixed-methods user study analyzes the usability and user experience of all three interaction modes. Each mode is quantitatively and qualitatively analyzed for simple and compound tasks in both standing and seated positions. This mixed-methods approach allows us to collect, evaluate, and validate evidence for the viability of the new 3D user interface, and the results are used to draw conclusions about the suitability of each interaction mode for a variety of tasks in an immersive virtual reality 3D desktop environment.
The automatic identification and verification of speakers from representative audio continues to attract researchers across diverse application domains. Despite this diversity, classified and categorized multi-purpose Arabic audio libraries remain scarce. We therefore introduce a large Arabic audio clip dataset (15,810 clips) of 30 popular reciters cantillating 37 chapters of the Holy Quran. These chapters contain a variable number of verses, each saved to its own folder holding 30 audio clips, one per reciter, covering the same textual content. An additional 397 audio clips from 12 competent imitators of the top reciters, selected by popularity and number of views/downloads, allow for cross-comparison of text, reciters, and authenticity. Given the volume, quality, and rich diversity of this dataset, we anticipate a wide range of deployments for speaker identification, in addition to setting a new direction for the structure and organization of similar large audio clip datasets.