A common task in computer graphics is the mapping of digital high dynamic range images to low dynamic range display devices such as monitors and printers. This task is similar to the adaptation processes which occur in the human visual system. Physiological evidence suggests that adaptation already occurs in the photoreceptors, leading to a straightforward model that can be easily adapted for tone reproduction. The result is a fast and practical algorithm for general use with intuitive user parameters that control intensity, contrast, and level of chromatic adaptation, respectively.
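The photoreceptor-style adaptation described above is commonly realized as a sigmoid compression of luminance, with a semi-saturation constant derived from the scene's adaptation level. The sketch below is a minimal illustration of that idea, not the paper's exact operator; the function name, parameter names, and defaults are assumptions made here for clarity.

```python
import numpy as np

def tonemap(luminance, intensity=0.0, contrast=0.6):
    """Sigmoid (photoreceptor-style) compression of HDR luminance into [0, 1].

    `intensity` shifts overall brightness and `contrast` sets the sigmoid
    exponent; both are illustrative user parameters, not the paper's exact ones.
    """
    L = np.asarray(luminance, dtype=float)
    # Log-average luminance as a simple stand-in for the adaptation level.
    L_av = np.exp(np.mean(np.log(L + 1e-8)))
    # Semi-saturation constant: brighter adaptation -> stronger compression.
    sigma = (np.exp(-intensity) * L_av) ** contrast
    return L**contrast / (L**contrast + sigma)

# Five luminance samples spanning four orders of magnitude.
hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
ldr = tonemap(hdr)
```

Because the mapping is a monotone sigmoid, relative ordering of luminances is preserved while the dynamic range is compressed into the displayable interval.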
The goal of this paper is to review the most popular methods of predictor selection in regression models, to explain why some fail when the number P of explanatory variables exceeds the number N of participants, and to discuss alternative statistical methods that can be employed in this case. We focus on penalized least squares methods in regression models, and discuss in detail two such methods that are well established in the statistical literature, the LASSO and Elastic Net. We introduce bootstrap enhancements of these methods, the BE-LASSO and BE-Enet, that allow the user to attach a measure of uncertainty to each variable selected. Our work is motivated by a multimodal neuroimaging dataset that consists of morphometric measures (volumes at several anatomical regions of interest), white matter integrity measures from diffusion weighted data (fractional anisotropy, mean diffusivity, axial diffusivity and radial diffusivity) and clinical and demographic variables (age, education, alcohol and drug history). In this dataset, the number P of explanatory variables exceeds the number N of participants. We use the BE-LASSO and BE-Enet to provide the first statistical analysis that allows the assessment of neurocognitive performance from high dimensional neuroimaging and clinical predictors, including their interactions. The major novelty of this analysis is that biomarker selection and dimension reduction are accomplished with a view towards obtaining good predictions for the outcome of interest (i.e., the neurocognitive indices), unlike principal component analysis, which is performed only on the predictor space, independently of the outcome of interest.
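The bootstrap-enhanced selection idea can be illustrated with a small self-contained sketch: fit a LASSO on many bootstrap resamples of the participants and record how often each predictor receives a nonzero coefficient. This is a rough stand-in for the paper's BE-LASSO, assuming a tiny coordinate-descent LASSO written here for self-containment; the data, penalty value, and function names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=50):
    """Minimal coordinate-descent LASSO for (1/2N)||y - Xb||^2 + alpha*||b||_1."""
    N, P = X.shape
    b = np.zeros(P)
    z = (X**2).sum(axis=0) / N          # per-coordinate curvature
    r = y - X @ b                       # running residual
    for _ in range(n_iter):
        for j in range(P):
            r += X[:, j] * b[j]         # remove coordinate j's contribution
            rho = X[:, j] @ r / N
            b[j] = soft_threshold(rho, alpha) / z[j]
            r -= X[:, j] * b[j]
    return b

def bootstrap_selection(X, y, alpha=0.2, B=50):
    """Fraction of bootstrap fits in which each predictor is selected."""
    counts = np.zeros(X.shape[1])
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))   # resample participants
        counts += lasso_cd(X[idx], y[idx], alpha) != 0
    return counts / B

# Synthetic P > N setting: 20 participants, 50 predictors, 3 truly active.
N, P = 20, 50
X = rng.standard_normal((N, P))
beta = np.zeros(P)
beta[:3] = [3.0, -2.0, 2.5]
y = X @ beta + 0.5 * rng.standard_normal(N)

freq = bootstrap_selection(X, y)
```

The selection frequency `freq` is the kind of per-variable uncertainty measure the bootstrap enhancement provides: truly active predictors are selected in most resamples, while noise predictors appear only sporadically.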
Many applications require that an image will appear the same regardless of where or how it is displayed. However, the conditions in which an image is displayed can adversely affect its appearance. Computer monitor screens not only emit light, but can also reflect extraneous light present in the viewing environment. This can cause images displayed on a monitor to appear faded by reducing their perceived contrast. Current approaches to this problem involve measuring this ambient illumination with specialized hardware and then altering the display device or changing the viewing conditions. This is not only impractical, but also costly and time consuming. For a user who does not have the equipment, expertise, or budget to control these facets, a practical alternative is sought. This paper presents a method whereby the display device itself can be used to determine the effect of ambient light on perceived contrast, thus enabling the viewers themselves to perform visual calibration. This method is grounded in established psychophysical experimentation and we present both an extensive procedure and an equivalent rapid procedure. Our work is extended by providing a novel method of contrast correction so that the contrast of an image viewed in bright conditions can be corrected to appear the same as an image viewed in a darkened room. This is verified through formal validation. These methods are easy to apply in practical settings, while accurate enough to be useful.
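The kind of contrast correction described above can be sketched under a simple additive-flare model, which is an assumption made here for illustration and not the paper's exact method: ambient light adds a roughly constant veiling luminance to every pixel, so the correction rescales image luminance so that the displayed values plus flare reproduce the original luminance ratios, clipping at the bottom of the display range.

```python
import numpy as np

def correct_for_flare(luminance, flare):
    """Boost luminance so on-screen contrast under ambient flare matches
    the contrast of the original image seen in a darkened room.

    Assumes an additive veiling-flare model: the viewer perceives L + flare,
    so we remap L such that (L' + flare) preserves the original ratios of L.
    Dark pixels that cannot be corrected are clipped to the display minimum.
    """
    L = np.asarray(luminance, dtype=float)
    L_max = L.max()
    corrected = L * (L_max + flare) / L_max - flare
    return np.clip(corrected, 0.0, L_max)

lum = np.array([1.0, 10.0, 100.0])   # dark-room luminances (illustrative)
out = correct_for_flare(lum, flare=5.0)
```

For mid-tones and highlights the perceived ratios are restored exactly; only the deepest shadows, which the display cannot push below zero, remain compressed, which matches the intuition that ambient light chiefly washes out dark regions.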
Three dimensional computer reconstruction provides us with a means of visualising past environments, allowing us a glimpse of the past that might otherwise be difficult to appreciate. Many of the images generated for this purpose are photorealistic, but no attempt has been made to ensure they are physically and perceptually valid. We are attempting to rectify these inadequacies through the use of accurate lighting simulation. By determining the appropriate spectral data of the original light sources and using them to illuminate a scene, the viewer can perceive a site and its artefacts in close approximation to the original environment. The richly decorated and well-preserved frescoes of the House of the Vettii in Pompeii have been chosen as a subject for the implementation of this study. This paper describes how, by using photographic records, modelling packages and luminaire values from a spectroradiometer, a three dimensional model can be created and then rendered in a lighting visualisation system to provide us with images that go beyond photorealistic, accurately simulating light behaviour and allowing us a physically and perceptually valid view of the reconstructed site. A method for capturing real flame and incorporating it in a virtual scene is also discussed, with the intention of recreating the movement of a flame in an animated scene.
We present a technique that allows distinguishing between index finger and thumb input on touchscreen phones, achieving an average accuracy of 82.6% in a real-life application with only a single touch. We divide the screen into a virtual grid of 9 mm² units and use a dedicated set of training data and algorithms for classifying new touches in each screen location. Further, we present correlations between physical and digital touch properties to extend previous work.
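The per-location scheme above can be sketched as a grid lookup plus a per-cell classifier. The following is a minimal illustration under assumptions made here (a nearest-centroid model over a single contact-size feature, hypothetical class names and training data), not the paper's actual training pipeline or feature set.

```python
import numpy as np

MM_PER_UNIT = 9.0  # grid resolution, per the paper's 9 mm units

def grid_cell(x_mm, y_mm):
    """Map a touch position (in mm) to its cell in the virtual grid."""
    return int(x_mm // MM_PER_UNIT), int(y_mm // MM_PER_UNIT)

class PerCellCentroidClassifier:
    """Illustrative stand-in for per-screen-location classifiers: one
    nearest-centroid model per grid cell over simple touch features."""

    def __init__(self):
        self.centroids = {}  # (cell, label) -> mean feature vector

    def fit(self, positions_mm, features, labels):
        sums, counts = {}, {}
        for pos, feat, lab in zip(positions_mm, features, labels):
            key = (grid_cell(*pos), lab)
            sums[key] = sums.get(key, 0.0) + np.asarray(feat, float)
            counts[key] = counts.get(key, 0) + 1
        self.centroids = {k: sums[k] / counts[k] for k in sums}
        return self

    def predict(self, position_mm, feature):
        # Assumes the touched cell was seen in training.
        cell = grid_cell(*position_mm)
        cands = {lab: c for (cl, lab), c in self.centroids.items() if cl == cell}
        f = np.asarray(feature, float)
        return min(cands, key=lambda lab: np.linalg.norm(f - cands[lab]))

# Hypothetical training touches in one cell: thumbs have larger contact size.
pos = [(5.0, 5.0), (6.0, 4.0), (5.0, 6.0), (4.0, 5.0)]
feats = [[3.0], [3.2], [7.0], [7.4]]   # contact size in mm (illustrative)
labels = ["index", "index", "thumb", "thumb"]
clf = PerCellCentroidClassifier().fit(pos, feats, labels)
```

Training separate models per cell lets the classifier absorb location-dependent effects, such as the thumb's contact shape changing as it reaches across the screen.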