Abstract—Human perceptual capabilities related to the laparoscopic interaction paradigm are not well known. Studying them is important for the design of virtual reality simulators, and for the specification of augmented reality applications that overcome current limitations and provide super-sensing capabilities to the surgeon. As part of this work, this article addresses the study of laparoscopic pulling forces. Two definitions are proposed to focus the problem: the perceptual fidelity boundary, the limit of human perceptual capabilities, and the utile fidelity boundary, which encapsulates the perceived aspects actually used by surgeons to guide an operation. The study then aims to define the perceptual fidelity boundary of laparoscopic pulling forces. This is approached with an experimental design in which surgeons assess the resistance against pulling of four different tissues, which are characterized with both in vivo interaction forces and ex vivo tissue biomechanical properties. A logarithmic law of tissue consistency perception is found by comparing subjective ratings with objective parameters. A model of this perception is developed, identifying its main parameters: the degree of fixation of the organ, the tissue stiffness, the amount of tissue bitten, and the mass of the organ being pulled. These results constitute a clear requirements analysis for the force feedback algorithm of a virtual reality laparoscopic simulator. Finally, the suitability of augmented reality applications around this surgical gesture is discussed.
Index Terms—Force feedback (FF), human factors, laparoscopy, virtual reality (VR) simulation requirements.
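The logarithmic law mentioned in the abstract is of the Weber-Fechner type: equal ratios of the objective stimulus produce equal increments of the subjective rating. A minimal sketch of such a law follows; the constants `k` and `s0` are hypothetical scaling parameters, not values taken from the study.

```python
import math

def perceived_consistency(objective_stiffness, k=1.0, s0=0.1):
    """Fechner-style logarithmic perception law.

    Subjective magnitude grows with the logarithm of the objective
    stimulus. k (gain) and s0 (reference stiffness) are illustrative
    constants, not parameters reported in the paper.
    """
    return k * math.log(objective_stiffness / s0)

# Doubling the objective stiffness at any level yields the same
# perceptual increment -- the defining property of a log law.
a = perceived_consistency(0.2)
b = perceived_consistency(0.4)
c = perceived_consistency(0.8)
```

Under this model, `b - a` equals `c - b`, which is the kind of regularity a force feedback algorithm could exploit when mapping tissue parameters to rendered forces.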
VMAF is a popular objective metric for video quality evaluation. Its effectiveness has been demonstrated across a wide variety of video scales and encoding processes. However, its ability to evaluate the quality of small video patches has not yet been tested, despite the importance of such patches for encoding algorithms. We applied the Maximum Likelihood Difference Scaling (MLDS) methodology to estimate supra-threshold perceptual differences in localized sections of videos, also known as tubes, encoded using AV1. We then used the results to assess the performance of VMAF in this scenario and proposed a recalibration of the algorithm to improve its agreement with the subjective data.
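One simple form such a recalibration can take is a least-squares mapping from raw VMAF scores onto the MLDS-derived perceptual scale. The sketch below fits a linear correction; this is an illustrative assumption about the recalibration's form, not the procedure actually used in the paper.

```python
def recalibrate(vmaf_scores, perceptual_scores):
    """Fit a linear recalibration perceptual ~= a * vmaf + b
    by ordinary least squares (closed form, no external libraries).

    Illustrative sketch only: the paper's recalibration may use a
    different functional form.
    """
    n = len(vmaf_scores)
    mx = sum(vmaf_scores) / n
    my = sum(perceptual_scores) / n
    sxx = sum((x - mx) ** 2 for x in vmaf_scores)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(vmaf_scores, perceptual_scores))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

Given paired (VMAF, MLDS) measurements per tube, `recalibrate` returns the slope and intercept that minimize squared error on the perceptual scale.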