Deep learning methods have been applied to randomly generate images, for example in fashion and furniture design. To date, the human aspects that play a vital role in the design process have received little attention in deep learning approaches. In this paper, results are reported from a human-in-the-loop design method in which EEG brain signals are used to capture preferred design features. In the framework developed, an encoder is first learned to extract EEG features from raw signals recorded while subjects view images from ImageNet. Secondly, a GAN model conditioned on the encoded EEG features is trained to generate design images. Thirdly, the trained model is used to generate design images from a person's EEG-measured brain activity during the cognitive process of thinking about a design. To verify the proposed method, a case study following the proposed approach is presented. The results indicate that the method can generate preferred design styles guided by preference-related brain signals. In addition, this method could also help improve communication between designers and clients when clients are unable to express design requests clearly.
Generating designs via machine learning has been an ongoing challenge in computer-aided design. Recently, deep learning methods have been applied to randomly generate images in fashion, furniture and product design. However, such deep generative methods usually require a large number of training images, and human aspects are not taken into account in the design process. In this work, we seek a way to involve human cognitive factors, through brain activity indicated by electroencephalographic (EEG) measurements, in the generative process. We propose a neuroscience-inspired design method combined with machine learning, in which EEG is used to capture preferred design features. Such signals are used as a condition in generative adversarial networks (GAN). First, we employ a recurrent neural network (Long Short-Term Memory, LSTM) as an encoder to extract EEG features from raw EEG signals; these signals are recorded from subjects viewing several categories of images from ImageNet. Second, we train a GAN model conditioned on the encoded EEG features to generate design images. Third, we use the model to generate design images from a subject's EEG-measured brain activity. To verify our proposed generative design method, we present a case study in which the subjects imagine the products they prefer, and the corresponding EEG signals are recorded and reconstructed by our model for evaluation. The results indicate that a product image generated with preference EEG signals is preferred over those generated without EEG signals. Overall, we propose a neuroscience-inspired artificial intelligence design method for generating a design that takes human preference into account. The method could help improve communication between designers and clients when clients are unable to express design requests clearly.
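The encoder stage described above can be sketched as follows. This is a minimal, self-contained illustration using a single NumPy LSTM cell with hypothetical dimensions and randomly initialized weights, not the authors' implementation: a raw EEG sequence (time steps by channels) is summarized into the final hidden state, which is then projected to a latent feature vector used to condition the GAN.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(eeg, W, U, b, W_proj):
    """Run a single-layer LSTM over an EEG sequence and project the
    final hidden state to a latent feature vector.

    eeg:    (time_steps, n_channels) raw EEG samples
    W:      (4*h, n_channels) input weights for the i, f, o, g gates
    U:      (4*h, h) recurrent weights
    b:      (4*h,) biases
    W_proj: (latent_dim, h) projection to the latent EEG feature space
    """
    h_size = U.shape[1]
    h = np.zeros(h_size)  # hidden state
    c = np.zeros(h_size)  # cell state
    for x_t in eeg:
        z = W @ x_t + U @ h + b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # update cell state
        h = o * np.tanh(c)           # update hidden state
    return W_proj @ h                # latent EEG feature vector

# Hypothetical dimensions: 8 EEG channels, 50 time steps, 10-d latent space.
rng = np.random.default_rng(0)
n_ch, h_size, latent_dim = 8, 16, 10
eeg = rng.standard_normal((50, n_ch))
W = rng.standard_normal((4 * h_size, n_ch)) * 0.1
U = rng.standard_normal((4 * h_size, h_size)) * 0.1
b = np.zeros(4 * h_size)
W_proj = rng.standard_normal((latent_dim, h_size)) * 0.1

z = lstm_encode(eeg, W, U, b, W_proj)
print(z.shape)  # (10,)
```

In practice the weights would be learned end-to-end (the abstracts describe training the encoder on EEG recorded while subjects view ImageNet categories), and a deep-learning framework's LSTM would replace this hand-rolled cell; the sketch only shows the data flow from raw signal to latent feature.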
Understanding human cognition plays a vital role in verifying the effectiveness of a design. Great progress has been made in cognitive neuroscience to provide such understanding. However, current neuroscience-focused approaches for evaluating design are limited by the lack of direct visualization of mental activity. Finding a tool to visualize states of mind from measured brain signals or images is an intriguing challenge, both for interpreting cognition and for understanding design impact. To tackle this challenge, we draw inspiration from the work of S. Palazzo et al., who introduced a mental image reconstruction method based on measured electroencephalogram (EEG) signals using a generative adversarial network (GAN). Building on this work, we propose a framework for revealing a design's impact on the brain by reconstructing mental images representing what emerges in the mind when a design is presented. First, a recurrent neural network is used as the encoder to learn a latent representation from raw EEG signals recorded while subjects looked at 50 categories of images. Then, a generative adversarial network conditioned on the EEG latent representation is trained to reconstruct these images. After training, the neural network is able to reconstruct mental images from brain activity recordings. To demonstrate the proposed method in the context of design verification, we performed a case study in which a set of iconic design images was presented sequentially to each subject to explore whether the subject had formed a cognitive association. Each subject's brain activity associated with a design image was recorded and fed to the proposed image reconstruction model to generate mental images. The experimental results indicate that a successful design can inspire the subject to associate the design with ideas or valued products.
For instance, when subjects were shown an image of a bitten apple, a mental image of a phone rather than the apple itself was reconstructed, illustrating the cognitive association with the brand icon. The proposed method could have great potential for verifying iconic designs by visualizing the cognitive understanding reflected in the underlying human brain activity.
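The conditioning step common to both frameworks (feeding the encoded EEG features into the GAN generator alongside a noise vector) can be illustrated with a minimal sketch. The dimensions here are hypothetical, and a real generator network would consume the resulting vector; the point is only that the EEG latent representation enters the generator as part of its input, steering the generated image toward what the brain signal encodes.

```python
import numpy as np

def generator_input(noise, eeg_features):
    """Form the input to a conditional GAN generator by concatenating
    the latent noise vector with the encoded EEG feature vector.
    Both arrays are (batch, dim); the result is (batch, dim_noise + dim_eeg).
    """
    return np.concatenate([noise, eeg_features], axis=-1)

# Hypothetical batch of 4 samples: 100-d noise, 128-d EEG features.
rng = np.random.default_rng(1)
noise = rng.standard_normal((4, 100))
eeg = rng.standard_normal((4, 128))

z = generator_input(noise, eeg)
print(z.shape)  # (4, 228)
```

Conditioning by concatenation is one standard choice for conditional GANs; the abstracts do not specify the exact conditioning mechanism used, so this should be read as an illustrative assumption.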