This article describes a new method for assessing the effect of a given film on viewers' brain activity. Brain activity was measured using functional magnetic resonance imaging (fMRI) during free viewing of films, and inter-subject correlation analysis (ISC) was used to assess similarities in the spatiotemporal responses across viewers' brains during movie watching. Our results demonstrate that some films can exert considerable control over brain activity and eye movements. However, this was not the case for all types of motion picture sequences, and the level of control over viewers' brain activity differed as a function of movie content, editing, and directing style. We propose that ISC may be useful to film studies by providing a quantitative neuroscientific assessment of the impact of different styles of filmmaking on viewers' brains, and a valuable method for the film industry to better assess its products. Finally, we suggest that this method brings together two separate and largely unrelated disciplines, cognitive neuroscience and film studies, and may open the way for a new interdisciplinary field of "neurocinematic" studies.
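The inter-subject correlation analysis named above can be illustrated with a minimal leave-one-out sketch: for a given voxel or region, each viewer's response time course is correlated with the average time course of all other viewers. The function name, data shapes, and simulated signals below are hypothetical illustrations, not the authors' actual pipeline.

```python
import numpy as np

def isc_leave_one_out(data):
    """Leave-one-out inter-subject correlation (ISC) for one voxel/region.

    data: array of shape (n_subjects, n_timepoints).
    Returns one Pearson r per subject: their correlation with the
    mean time course of the remaining subjects.
    """
    n = data.shape[0]
    rs = []
    for i in range(n):
        others = np.delete(data, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(data[i], others)[0, 1])
    return np.array(rs)

# Simulated demo: a shared "film-driven" signal plus subject-specific noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                        # common stimulus-locked response
subjects = shared + 0.5 * rng.standard_normal((10, 200)) # 10 simulated viewers
mean_isc = isc_leave_one_out(subjects).mean()            # high when responses are shared
```

High mean ISC indicates that the stimulus reliably drives similar responses across viewers, which is the sense in which a film "controls" brain activity in the abstract above; uncorrelated noise yields ISC near zero.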
Previous research has shown that facial motion can carry information about age, gender, emotion and, at least to some extent, identity. By combining recent computer animation techniques with psychophysical methods, we show that during the computation of identity the human face recognition system integrates both types of information: individual non-rigid facial motion and individual facial form. This has important implications for cognitive and neural models of face perception, which currently emphasize a separation between the processing of invariant aspects (facial form) and changeable aspects (facial motion) of faces.
This paper investigates whether the greater accuracy of emotion identification for dynamic versus static expressions, as noted in previous research, can be explained through heightened levels of either component or configural processing. Using a paradigm by Young, Hellawell, and Hay (1987), we tested recognition performance of aligned and misaligned composite faces with six basic emotions (happiness, fear, disgust, surprise, anger, sadness). Stimuli were created using 3D computer graphics and were shown as static peak expressions (static condition) and 7 s video sequences (dynamic condition). The results revealed that, overall, moving stimuli were better recognized than static faces, although no interaction between motion and other factors was found. For happiness, sadness, and surprise, misaligned composites were better recognized than aligned composites, suggesting that aligned composites fuse to form a single expression, while the two halves of misaligned composites are perceived as two separate emotions. For anger, disgust, and fear, this was not the case. These results indicate that emotions are perceived on the basis of both configural and component-based information, with specific activation patterns for separate emotions, and that motion has a quality of its own and does not increase configural or component-based recognition separately.
When we speak, laugh or cry, our faces move in complex, non-rigid ways. Can such motion patterns influence our perception of facial identity? To explore this issue we took 3D laser-scanned heads from the MPI database and animated them using motion sequences captured from different human actors. During an incidental learning phase, observers were exposed to FACE A moving with MOTION A and FACE B moving with MOTION B. Test stimuli consisted of two sets of morphed heads (shaded, no texture) ranging in 10 steps from FACE A to FACE B. One set of morphs was animated using MOTION A, the other with MOTION B. Observers were instructed to indicate whether each test face was structurally more similar to FACE A or FACE B. Across all levels of the morph sequence, motion biased the perception of identity. This bias was particularly strong at the 50% morph level, where structural information was completely ambiguous. Here, "FACE A" responses occurred on 80% of trials in which the morph was animated with MOTION A, but on only 40% of trials in which the same morph was animated using MOTION B. We believe these results are the strongest evidence to date that facial motion can be used by observers to determine facial identity. The use of computer animation techniques in conjunction with motion capture technology appears to be a very fruitful direction for future research on dynamic aspects of face processing.