The present functional magnetic resonance imaging (fMRI) study investigates the neural correlates of reachability judgements. In a block-design experiment, 14 healthy participants judged whether a visual target presented at different distances in a virtual-environment display was reachable with the right hand. In two control tasks, they judged the colour of the visual target or its position relative to flankers. Contrasting the activations recorded in the reachability judgement task with those in the control tasks, we found activations in frontal structures, in the bilateral inferior and superior parietal lobes, including the precuneus, and in the bilateral cerebellum. This fronto-parietal network, including the cerebellum, overlaps with the brain network usually activated during actual motor production and motor imagery. In a subsequent event-related design experiment, we contrasted brain activations when targets were rated as 'reachable' with those when they were rated as 'unreachable'. We found activations in the left premotor cortex, bilateral frontal structures, and the left middle temporal gyrus. At a lower threshold, we also found activations in the left motor cortex and the bilateral cerebellum. Given that reaction time increased with target distance within reachable space, we performed a parametric analysis that revealed a corresponding increase of activity in the fronto-parietal network, including the cerebellum. Unreachable targets showed no similar activation, particularly in regions associated with motor production and motor imagery. Taken together, these results suggest that the dynamic motor representations used to determine what is reachable are also part of the perceptual process leading to the distinct representation of peripersonal and extrapersonal spaces.
Unbiased global illumination methods based on stochastic techniques provide photorealistic images. However, they are prone to noise that can only be reduced by increasing the number of processed samples. Finding the number of samples required to ensure that most observers cannot perceive any noise is still an open problem. In this article, we address this problem by focusing on the visual perception of noise. Rather than using known perceptual models, however, we investigate learning approaches classically used in the field of Artificial Intelligence. We propose to use such approaches to build a model that learns which parts of an image exhibit perceptible noise. Learning is performed on a database of examples built from noise-perception experiments with human users. The resulting model can then be used in any progressive stochastic global illumination method to find the visual convergence threshold of different parts of an input image.
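As a minimal illustration of this kind of learning approach (not the authors' model), the sketch below trains a logistic-regression classifier to separate "converged" from "noisy" image blocks using two simple hand-crafted features. The block size, features, synthetic training data, and classifier choice are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def block_features(block):
    # two illustrative per-block features: intensity spread and mean gradient magnitude
    gy, gx = np.gradient(block)
    return np.array([block.std(), np.abs(gy).mean() + np.abs(gx).mean()])

# synthetic training set: the same smooth ramp, once with near-converged noise
# (label 0) and once with strong residual Monte Carlo-like noise (label 1)
X, y = [], []
for _ in range(200):
    base = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
    X.append(block_features(base + rng.normal(0.0, 0.005, (8, 8)))); y.append(0)
    X.append(block_features(base + rng.normal(0.0, 0.2, (8, 8)))); y.append(1)
X, y = np.array(X), np.array(y)
X = (X - X.mean(0)) / X.std(0)  # standardize features

# logistic regression fitted by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = (pred == y).mean()
```

In practice the examples would come from psycho-visual experiments rather than synthetic labels, but the pipeline (per-block features, supervised classifier, per-block "noisy or converged" decision) is the same.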
The aim of realistic image synthesis is to produce high-fidelity images that authentically represent real scenes. As these images are produced for human observers, we may exploit the fact that not everything is perceived when viewing a scene with our eyes. Taking advantage of the limited capacity of the human visual system (HVS) can therefore significantly help to optimize rendering software. Global illumination methods are used to simulate realistic lighting in 3D scenes. They generally provide progressive convergence to a high-quality solution. One of the problems of such algorithms is to determine a stopping condition, i.e. to decide whether the calculation has reached a satisfactory convergence so that the process can terminate. In this paper, we propose and discuss different solutions to this important problem. We present several techniques based on the Visual Difference Predictor (VDP) proposed by Daly [Daly 1993] to define a perceptual stopping condition for rendering computations. We use the VDP to measure the perceived differences between rendered images and to guide Path Tracing rendering towards a target perceptual quality. Finally, we validate our results in a controlled experimental setting with human subjects.
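The stopping-condition idea can be sketched as follows. This is not the VDP itself (a full implementation of Daly's model is far more involved); the perceptual metric here is replaced by a plain mean-absolute-difference stand-in, and the "rendering" is simulated by averaging noisy passes over a known image. Only the control flow is meant to match the described approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in for the fully converged image (a simple gradient ramp)
ground_truth = np.tile(np.linspace(0.2, 0.8, 32), (32, 1))

def render_pass():
    # one Monte Carlo rendering pass: the true radiance corrupted by sample noise
    return ground_truth + rng.normal(0.0, 0.3, ground_truth.shape)

def perceptual_diff(a, b):
    # simplified stand-in for a VDP-style metric: mean absolute difference
    return np.abs(a - b).mean()

threshold = 0.005  # difference below which successive images look identical (assumed)
total = render_pass()
n = 1
while n < 10000:
    prev_estimate = total / n
    total += render_pass()
    n += 1
    # stop when the latest pass no longer produces a perceptible change
    if perceptual_diff(total / n, prev_estimate) < threshold:
        break

estimate = total / n
```

With a real VDP, `perceptual_diff` would return a map of detection probabilities, and the loop could stop independently per image region once predicted visibility falls below threshold.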
The estimation of image quality and noise perception remains an important issue in various image-processing applications. It has also become a hot topic in photo-realistic computer graphics, where noise is inherent in the calculation process. Unlike natural-scene images, however, no reference image is available for computer-generated images, so classic methods for assessing noise quantity and defining a stopping criterion during the rendering process are not applicable. This is particularly important for global illumination methods based on stochastic techniques: they provide photo-realistic images which are, however, corrupted by stochastic noise. This noise can be reduced by increasing the number of paths, as proved by Monte Carlo theory, but finding the number of paths required to ensure that human observers cannot perceive any noise remains an open problem. Until now, the features involved in the human evaluation of image quality and of the remaining perceived noise have not been precisely known. Synthetic image generation tends to be very expensive, and the produced datasets are high-dimensional; in this setting, finding a stopping criterion through a learning framework is a challenging task. In this paper, a new method for characterizing computational noise in computer-generated images is presented. The noise is represented by the entropy of the singular value decomposition (SVD) of each block composing an image. These SVD-entropy values are then fed to a recurrent neural network model in order to characterize image noise and predict the visual convergence threshold of different parts of any image. A new no-reference image quality assessment is thus proposed, exploiting the relation between SVD-entropy and perceptual quality over a sequence of distorted images.
Experiments show that the stopping criterion obtained with the proposed method is highly consistent with experimental psycho-visual scores.
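The SVD-entropy feature at the core of this approach can be sketched directly (the recurrent network that consumes these values per image block is omitted; block size and test data are illustrative). Stochastic noise spreads a block's energy across its singular-value spectrum, so a noisy block yields a higher spectrum entropy than a smooth, low-rank one:

```python
import numpy as np

rng = np.random.default_rng(0)

def svd_entropy(block):
    # entropy of the normalized singular-value spectrum of one image block
    s = np.linalg.svd(block, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]  # drop numerically zero singular values
    return float(-(p * np.log(p)).sum())

x = np.linspace(0.0, 1.0, 16)
smooth = np.outer(x, x)                           # rank-1 block: energy in one singular value
noisy = smooth + rng.normal(0.0, 0.1, (16, 16))   # noise spreads energy across the spectrum
```

Tracking `svd_entropy` per block over a sequence of progressively refined renderings gives the monotone noise signal that the paper's model maps to a visual convergence threshold.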