In this article we use an electroencephalograph (EEG) to explore the perception of artifacts that typically appear during rendering and to determine the perceptual quality of a sequence of images. Although there is emerging interest in using EEG for image quality assessment, one of the main impediments is the very low Signal-to-Noise Ratio (SNR), which makes it exceedingly difficult to distinguish neural responses from noise. Traditionally, event-related potentials (ERPs) have been used for analysis of EEG data. However, they rely on averaging and so require a large number of participants and trials to yield meaningful data; moreover, due to the low SNR, ERPs are not suited for single-trial classification. We propose a novel wavelet-based approach for evaluating EEG signals which allows us to predict the perceived image quality from only a single trial. Our wavelet-based algorithm filters the EEG data and removes noise, eliminating the need for many participants or many trials. With this approach it is possible to use data from only 10 electrode channels for single-trial classification and predict the presence of an artifact with an accuracy of 85%. We also show that it is possible to classify a trial according to the exact type of artifact viewed. Our work is particularly useful for understanding how the human visual system responds to different types of degradation in images and videos. An understanding of the perception of typical image-based rendering artifacts forms the basis for the optimization of rendering and masking algorithms.
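The wavelet filtering described above can be sketched in general terms as a decompose/threshold/reconstruct loop. This is a minimal illustration of wavelet-based denoising (a Haar transform with soft thresholding), not the authors' exact algorithm; all function names, the choice of wavelet, and the threshold value are assumptions.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt(x):
    """One level of the Haar wavelet transform (length of x must be even)."""
    approx = (x[0::2] + x[1::2]) / SQRT2
    detail = (x[0::2] - x[1::2]) / SQRT2
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / SQRT2
    x[1::2] = (approx - detail) / SQRT2
    return x

def wavelet_denoise(signal, levels=3, threshold=0.5):
    """Decompose, soft-threshold the detail coefficients, reconstruct.

    Signal length must be divisible by 2**levels. Soft thresholding
    shrinks small (noise-dominated) detail coefficients toward zero
    while keeping large (signal-dominated) ones.
    """
    approx, details = signal.astype(float), []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0))
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx
```

A smooth signal passes through almost unchanged (its detail coefficients are near zero), while high-frequency noise is attenuated; in practice a smoother wavelet (e.g. Daubechies, via a library such as PyWavelets) would typically be preferred over Haar for EEG.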
Image-Based Rendering (IBR) allows interactive scene exploration from images alone. However, despite considerable development in the area, one of the main obstacles to better quality and more realistic visualizations is the occurrence of visually disagreeable artifacts. In this paper we present a methodology to map out the perception of IBR-typical artifacts. This work presents an alternative to traditional image and video quality evaluation methods by using an EEG device to determine the implicit visual processes in the human brain. Our work demonstrates the distinct differences in the perception of different types of visual artifacts and the implications of these differences.
This paper explores the opportunities and challenges in designing peer-support mechanisms for low-income, low-literate women in Pakistan, a patriarchal and religious context where women's movements, social relations and access to digital technologies are restricted. Through a qualitative, empirical study with 21 participants we examine the cultural and patriarchal framework where shame and fear of defamation restrict the seeking of support for personal narratives around taboo subjects like abortion, sexual harassment, rape and domestic abuse. Based on our qualitative data we also conduct a second qualitative study using a technology probe with 15 low-income, low-literate women to explore the specific design of peer-support technologies for support seeking and the sharing of sensitive and taboo narratives in a deeply patriarchal society. The design concerns raised by our participants regarding privacy, anonymity and safety provide CSCW researchers with valuable guidelines about designing for social connections and support for vulnerable populations within a particular context.
There is a continuous effort by animation experts to create increasingly realistic and more human-like digital characters. However, as virtual characters become more human-like, they risk evoking a sense of unease in their audience. This sensation, called the Uncanny Valley effect, is widely acknowledged both in the popular media and in scientific research, but empirical evidence for the hypothesis has remained inconsistent. In this paper, we investigate the neural responses to computer-generated faces in a cognitive neuroscience study. We record brain activity from participants (N = 40) using electroencephalography (EEG) while they watch videos of real humans and computer-generated virtual characters. Our results show distinct differences in neural responses for highly realistic computer-generated faces such as Digital Emily compared with real humans. These differences are unique to agents that are highly photorealistic, i.e., the 'uncanny' response. Based on these specific neural correlates we train a support vector machine (SVM) to measure the probability of an uncanny response for any given computer-generated character from EEG data. This allows the ordering of animated characters based on their level of 'uncanniness'.
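The final step, scoring characters by the probability of an uncanny response, can be sketched with a probability-calibrated SVM. The feature vectors, dimensions, and class labels below are invented placeholders for illustration only; the actual EEG features and training pipeline used in the study are not specified here.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for per-trial EEG feature vectors
# (the 10-dimensional features and group means are assumptions).
rng = np.random.default_rng(0)
human_feats = rng.normal(0.0, 1.0, size=(40, 10))    # "real human" trials
uncanny_feats = rng.normal(1.5, 1.0, size=(40, 10))  # "uncanny CG" trials

X = np.vstack([human_feats, uncanny_feats])
y = np.array([0] * 40 + [1] * 40)                    # 1 = uncanny response

# probability=True enables Platt scaling, so the SVM can output P(uncanny)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

def uncanniness(feature_vec):
    """Probability that a trial's EEG features reflect an uncanny response."""
    return clf.predict_proba(feature_vec.reshape(1, -1))[0, 1]
```

Characters can then be ranked by their mean `uncanniness` score across trials, which is the ordering the abstract describes.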