As artificial intelligence (AI) technology increasingly becomes a feature of everyday life, it is important to understand how creative acts, long regarded as uniquely human, are valued when produced by a machine. The current studies investigated how observers respond to works of visual art created either by humans or by computers. Study 1 tested observers' ability to discriminate between computer-generated and man-made art, and then examined how the categorisation of artworks affected their perceived aesthetic value, revealing a bias against computer-generated art. In Study 2, this bias was reproduced in the context of robotic art; however, it was reversed when observers were given the opportunity to see robotic artists in action. These findings reveal an explicit prejudice against computer-generated art, driven largely by the kind of art observers believe computer algorithms are capable of producing. These prejudices can be overridden in circumstances in which observers are able to infer anthropomorphic characteristics in the computer programs, a finding which has implications for the future of artistic AI.
The study of brain-damaged patients and advancements in neuroimaging have led to the discovery of discrete brain regions that process visual image categories, such as objects and scenes. However, how these visual image categories interact remains unclear. For example, is scene perception simply an extension of object perception, or can global scene "gist" be processed independently of its component objects? Specifically, when recognizing a scene such as an "office," does one need to first recognize its individual objects, such as the desk, chair, lamp, pens, and paper, to build up the representation of an "office" scene? Here, we show that temporary interruption of object processing through repetitive TMS to the left lateral occipital cortex (LO), an area known to selectively process objects, impairs object categorization but, surprisingly, facilitates scene categorization. This result was replicated in a second experiment, which assessed the temporal dynamics of this disruption and facilitation. We further showed that repetitive TMS to left LO significantly disrupted object processing but facilitated scene processing when stimulation was administered during the first 180 msec of the task. This demonstrates that the visual system retains the ability to process scenes during disruption to object processing. Moreover, the facilitation of scene processing indicates disinhibition of areas involved in global scene processing, likely caused by disrupting inhibitory contributions from the LO. These findings indicate separate but interactive pathways for object and scene processing and further reveal a network of inhibitory connections between these visual brain regions.
To build a representation of what we see, the human brain recruits regions throughout the visual cortex in a cascading sequence. Recently, an approach was proposed to evaluate the dynamics of visual perception at high spatiotemporal resolution at the scale of the whole brain. This method combined functional magnetic resonance imaging (fMRI) data with magnetoencephalography (MEG) data using representational similarity analysis and revealed a hierarchical progression from primary visual cortex through the dorsal and ventral streams. To assess the replicability of this method, we here present the results of a visual recognition neuroimaging fusion experiment and compare them within and across experimental settings. We evaluated the reliability of this method by assessing the consistency of the results under similar test conditions, showing high agreement within participants. We then generalized these results to a separate group of individuals and different visual input by comparing them to the fMRI-MEG fusion data of Cichy et al. (2016), revealing a highly similar temporal progression recruiting both the dorsal and ventral streams. Together, these results are a testament to the reproducibility of the fMRI-MEG fusion approach and allow for the interpretation of these spatiotemporal dynamics in a broader context.
Some scenes are more memorable than others: they cement in the mind with consistency across observers and time scales. While memory mechanisms are traditionally associated with the end stages of perception, recent behavioral studies suggest that the features driving these memorability effects are extracted early on, and in an automatic fashion. This raises the question: is the neural signal of memorability detectable during early perceptual encoding phases of visual processing? Using the high temporal resolution of magnetoencephalography (MEG) during a rapid serial visual presentation (RSVP) task, we traced the neural temporal signature of memorability across the brain. We found an early and prolonged memorability-related signal under a challenging ultra-rapid viewing condition, across a network of regions in both the dorsal and ventral streams. This enhanced encoding could be the key to successful storage and recognition.