With the rapid development of social media, single-modal emotion recognition can hardly satisfy the demands of current emotion recognition systems. To optimize the performance of the emotion recognition system, this paper proposes a multimodal emotion recognition model based on speech and text. Considering the complementarity between different modalities, a CNN (convolutional neural network) and an LSTM (long short-term memory) network were combined in the form of dual channels to learn acoustic emotion features; meanwhile, an effective Bi-LSTM (bidirectional long short-term memory) network was employed to capture the textual features. Furthermore, a deep neural network was applied to learn and classify the fused features. The final emotional state was determined by the outputs of both the speech and text emotion analyses. Finally, multimodal fusion experiments were carried out on the IEMOCAP database to validate the proposed model. Compared with the single-modal baselines, the overall recognition accuracy improved by 6.70% over text-only recognition and by 13.85% over speech-only recognition. Experimental results show that the recognition accuracy of the multimodal model is higher than that of either single modality and outperforms other published multimodal models on the test datasets.
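The abstract states that the final emotional state is determined by the outputs of both the speech and text branches. One common way to realize such a decision step is weighted late fusion of the per-class probabilities from each branch; the sketch below illustrates this idea only (the equal weighting, the four-class emotion set, and the function name are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def fuse_predictions(speech_probs, text_probs, w_speech=0.5):
    """Weighted late fusion: blend per-class probabilities from the
    speech and text branches, then pick the highest-scoring class."""
    fused = (w_speech * np.asarray(speech_probs)
             + (1.0 - w_speech) * np.asarray(text_probs))
    return int(np.argmax(fused))

# Illustrative 4-class example (e.g., angry, happy, neutral, sad):
speech = [0.1, 0.6, 0.2, 0.1]   # speech branch favors class 1
text   = [0.2, 0.3, 0.4, 0.1]   # text branch favors class 2
label = fuse_predictions(speech, text)
```

Setting `w_speech` per-dataset (e.g., by validation accuracy of each branch) is a typical refinement of this scheme.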
Deep learning is a crucial technology in intelligent question answering research. Nowadays, extensive studies on question answering adopt deep learning methods. The challenge is that question answering not only requires an effective semantic understanding model to generate textual representations but also must simultaneously consider the semantic interaction between questions and answers. In this paper, we propose a stacked bidirectional long short-term memory (BiLSTM) neural network based on a coattention mechanism to extract the interaction between questions and answers, combining cosine similarity and Euclidean distance to score question and answer sentences. Experiments were conducted and evaluated on the publicly available Text REtrieval Conference (TREC) 8-13 dataset and the Wiki-QA dataset. Experimental results confirm that the proposed model is efficient; in particular, it achieves a mean average precision (MAP) of 0.7613 and a mean reciprocal rank (MRR) of 0.8401 on the TREC dataset.
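The abstract mentions scoring question-answer pairs by combining cosine similarity and Euclidean distance. A minimal sketch of such a combined score over sentence embeddings follows (the linear weighting `alpha` and the distance-to-similarity mapping are illustrative assumptions; the paper's exact formula may differ):

```python
import numpy as np

def answer_score(q_vec, a_vec, alpha=0.5):
    """Combine cosine similarity and (inverted) Euclidean distance
    between a question and an answer embedding into one score."""
    q = np.asarray(q_vec, dtype=float)
    a = np.asarray(a_vec, dtype=float)
    cos = (q @ a) / (np.linalg.norm(q) * np.linalg.norm(a))
    euc = 1.0 / (1.0 + np.linalg.norm(q - a))  # map distance to (0, 1]
    return alpha * cos + (1.0 - alpha) * euc
```

With this mapping, identical embeddings score exactly 1.0, and both terms decrease as the vectors diverge, so the two measures reinforce each other rather than conflict.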
With the rapid expansion of the Internet, intelligent question answering for information retrieval has once again gained widespread attention. However, current question answering models mainly focus on general, common-sense questions in open domains and are incapable of effectively solving more complex professional-domain questions. This paper proposes an integrated framework for Chinese intelligent question answering in restricted domains. The proposed model fuses a convolutional neural network and a bidirectional long short-term memory network to perform efficient semantic analysis on question pairs and extract more effective textual features. Meanwhile, a coattention mechanism and an attention mechanism were combined to obtain the semantic interaction and feature representation of the question pair, providing complete information for subsequent calculations. In addition, we introduced a question-pair matching method to implement Chinese intelligent question answering in a restricted domain. Experiments were conducted and evaluated on the open-source CCKS2018 dataset and our private self-built inverted pendulum control question answering (IPC-QA) dataset for an automation-control virtual learning environment. Experimental results confirm that the proposed models are efficient, achieving precisions of 0.86042 and 0.8031 on CCKS2018 and IPC-QA, respectively.
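The abstract's coattention mechanism captures interaction in both directions between the two sentences of a question pair. The core of a standard coattention layer is an affinity matrix between the token features of the two sentences, softmax-normalized along each axis; the sketch below shows that core computation with plain numpy (the dot-product affinity and the variable names are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coattention(Q, A):
    """Q: (m, d) token features of sentence 1; A: (n, d) of sentence 2.
    Returns context vectors in which each sentence attends to the other."""
    L = Q @ A.T                     # (m, n) affinity matrix
    attn_q2a = softmax(L, axis=1)   # each Q token attends over A tokens
    attn_a2q = softmax(L, axis=0)   # each A token attends over Q tokens
    ctx_q = attn_q2a @ A            # (m, d) A-aware context for Q
    ctx_a = attn_a2q.T @ Q          # (n, d) Q-aware context for A
    return ctx_q, ctx_a
```

In a full model, `Q` and `A` would come from the CNN/BiLSTM encoders, and the contexts would feed the subsequent matching layers.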
Abstract. To meet the requirements of small satellites, this paper presents a digital sun sensor whose diaphragm has a V-shaped cross-section. Using a position sensitive detector (PSD) as the light detector, we designed the V-shaped cross-section structure based on the pinhole imaging principle. The sun sensor accurately calculates the two-axis sun angle. Mechanical, thermal, and performance tests of the sun sensor were designed and carried out. The mechanical and thermal test results verify the stability of the sun sensor. The performance test shows that the detection angle can reach 120° × 120°, and the attitude determination accuracy is better than 6" over the entire field of view. The mass, volume, and power consumption of the sun sensor are 0.177 kg, 78 mm × 77 mm × 21 mm, and 0.25 W, respectively. The sun sensor features low power consumption, a large viewing angle, and high precision; it realizes miniaturization and meets the requirements of micro satellites. Its performance has been verified in orbit.
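Under the pinhole imaging principle the abstract invokes, the incidence angle along one axis follows from the displacement of the light spot on the PSD and the height of the aperture above the detector plane. A minimal sketch of that geometry (the function name and parameters are illustrative; the sensor's real calibration would include correction terms):

```python
import math

def sun_angle_deg(spot_offset_mm, mask_height_mm):
    """Pinhole geometry: a light spot displaced by spot_offset_mm on the
    PSD, under an aperture at height mask_height_mm above the detector,
    corresponds to a one-axis incidence angle of atan(offset / height)."""
    return math.degrees(math.atan2(spot_offset_mm, mask_height_mm))
```

Applying the same relation independently on two orthogonal axes yields the two-axis sun angle the sensor reports.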