Background: Conventional motor imagery brain-computer interfaces (MI-BCIs) suffer from limited numbers of samples and oversimplified features, so spatial-frequency features combined with shallow classifiers yield poor performance.

Methods: As an alternative, this paper applies a deep recurrent neural network (RNN) with a sliding window cropping strategy (SWCS) to MI-BCI signal classification. Spatial-frequency features are first extracted by the filter bank common spatial pattern (FB-CSP) algorithm, and these features are cropped by the SWCS into time slices. The cropped time slices are then fed into the RNN, which classifies them by extracting spatial-frequency-sequential relationships. To overcome memory distractions, the commonly used gated recurrent unit (GRU) and long short-term memory (LSTM) unit are both applied in the RNN architecture, and experimental results are used to determine which unit is more suitable for processing EEG signals.

Results: Experiments on common BCI benchmark datasets show that the spatial-frequency-sequential relationships outperform all competing spatial-frequency methods. In particular, the proposed GRU-RNN architecture achieves the lowest misclassification rates on all BCI benchmark datasets.

Conclusion: By introducing spatial-frequency-sequential relationships with cropped time-slice samples, the proposed method offers a novel way to construct accurate and robust MI-BCIs from limited trials of EEG signals.
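The sample-multiplying idea behind the sliding window cropping strategy can be illustrated with a minimal sketch. This is a hypothetical NumPy implementation, not the paper's code: the function name `sliding_window_crop` and the window/stride values are illustrative assumptions, and the FB-CSP features are stand-in random data.

```python
import numpy as np

def sliding_window_crop(features, window_len, stride):
    """Crop a (channels, time) feature array into overlapping time slices.

    Each slice becomes one training sample for the RNN, multiplying the
    number of samples obtained from a single EEG trial -- the core idea
    of a sliding window cropping strategy (SWCS).
    """
    n_channels, n_time = features.shape
    slices = [features[:, start:start + window_len]
              for start in range(0, n_time - window_len + 1, stride)]
    return np.stack(slices)  # shape: (n_slices, channels, window_len)

# Stand-in for one trial of FB-CSP features:
# 12 spatial-frequency channels, 250 time points.
trial = np.random.randn(12, 250)
crops = sliding_window_crop(trial, window_len=100, stride=50)
print(crops.shape)  # (4, 12, 100): four cropped slices from one trial
```

Each cropped slice is then presented to the GRU- or LSTM-based RNN as a sequence of feature vectors, so one recorded trial contributes several training samples.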
Action observation (AO) generates event-related desynchronization (ERD) suppression in the human brain by activating partial regions of the human mirror neuron system (hMNS). The activation of the hMNS in response to AO remains controversial for several reasons. Therefore, this study investigated hMNS activation in response to the speed of AO by controlling the movement speed modes of a humanoid robot's arm movements. Since hMNS activation is reflected in ERD suppression, electroencephalography (EEG) with BCI analysis methods for ERD suppression was used as the recording and analysis modality. Six healthy individuals participated in experiments comprising five conditions: four incremental-speed AO tasks and a motor imagery (MI) task involving imagining the same movement. Occipital and sensorimotor regions were selected for the BCI analyses. The experimental results showed that hMNS activation was higher in the occipital region but more robust in the sensorimotor region. Since the attended information affects hMNS activation during AO, the pattern of hMNS activation first rises and then falls to a stable level across the incremental-speed modes of AO. The resulting curves suggest that a moderate speed within a decent inter-stimulus interval (ISI) range produces the highest hMNS activation. Since a brain-computer/machine interface (BCI) builds a pathway between a human and a computer or machine, these curves will help construct BCIs based on patterns of action observation (AO-BCIs). Furthermore, this paper inspires a new method for constructing non-invasive brain-machine-brain interfaces (BMBIs) that combine a moderate-speed AO-BCI with a motor imagery BCI (MI-BCI).
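The ERD suppression measure that this analysis relies on is conventionally computed as the relative band-power change against a pre-stimulus baseline. The sketch below is a generic illustration of that standard formula, not the study's analysis pipeline; the power values are made-up numbers.

```python
def erd_percent(baseline_power, task_power):
    """Classic relative band-power ERD measure.

    Negative values indicate desynchronization: band power (e.g. in the
    mu band over sensorimotor cortex) drops below its baseline level
    during action observation or motor imagery.
    """
    return (task_power - baseline_power) / baseline_power * 100.0

baseline = 10.0  # mean mu-band power in a pre-stimulus window (arbitrary units)
task = 7.0       # mean mu-band power while observing the robot arm move
print(erd_percent(baseline, task))  # -30.0, i.e. 30% power suppression
```

Plotting such ERD values against the robot's movement speed is what yields the rise-then-fall activation curves described above.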
Purpose: Clothing patterns play a dominant role in costume design and have become an important link in the perception of costume art. Conventional clothing pattern design relies on experienced designers. Although the quality of conventionally designed patterns is very high, the ratio of output quantity to input time is relatively low. To break through this bottleneck, this paper proposes a novel approach based on a generative adversarial network (GAN) model for automatic clothing pattern generation, which not only reduces the dependence on experienced designers but also improves the input-output ratio.

Design/methodology/approach: Because clothing patterns place high demands on both global artistic perception and local texture detail, this paper improves the conventional GAN model in two respects: a multi-scale discriminator strategy is introduced to handle local texture details, and a self-attention mechanism is introduced to improve global artistic perception. The improved model, called the multi-scale self-attention improved generative adversarial network (MS-SA-GAN), is used for high-resolution clothing pattern generation.

Findings: To verify the feasibility and effectiveness of the proposed MS-SA-GAN model, a crawler was designed to acquire a standard clothing pattern dataset from Baidu pictures, and a comparative experiment was conducted on this dataset. In the experiments, different parameters of the proposed MS-SA-GAN model were adjusted, and the global artistic perception and local texture details of the generated clothing patterns were compared.

Originality/value: Experimental results show that the clothing patterns generated by the proposed MS-SA-GAN model are superior to those of conventional algorithms on several local texture detail indexes. In addition, a group of clothing design professionals was invited to evaluate global artistic perception using a valence-arousal scale. The scale results show that the proposed MS-SA-GAN model achieves better global artistic perception.
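The self-attention mechanism credited with improving global coherence can be sketched in a few lines. This is a minimal, framework-free NumPy illustration of generic scaled dot-product self-attention over flattened spatial positions, assumed here to resemble what MS-SA-GAN-style models add; it is not the paper's implementation, and all names and shapes are illustrative.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over spatial positions.

    x: (positions, channels) -- a flattened feature map. Every output
    position aggregates information from all positions, which is what
    lets attention enforce globally coherent structure in a generator.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    # row-wise softmax over all positions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))  # a 4x4 feature map flattened, 8 channels
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (16, 8): same layout, but globally mixed features
```

By contrast, the multi-scale discriminator strategy addresses the complementary concern: convolutional discriminators at different resolutions judge fine texture and coarse composition separately.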