The advent and prosperity of social media enable users to share their opinions and views. Understanding users' emotional states may create new business opportunities. Automatically identifying users' emotional states from their texts and classifying emotions into finite categories such as joy, anger, and disgust can be cast as a text classification problem. However, it introduces a challenging learning scenario, as multiple emotions with different intensities are often found in a single sentence, and some emotions co-occur frequently while others rarely coexist. In this paper, we propose a novel approach based on emotion distribution learning to address these issues. The key idea is to learn a mapping from sentences to emotion distributions that describe multiple emotions and their respective intensities. Moreover, the relations among emotions are captured from Plutchik's wheel of emotions and incorporated into the learning algorithm to improve the accuracy of emotion detection. Experimental results show that the proposed approach effectively handles the emotion distribution detection problem and performs markedly better than both a state-of-the-art emotion detection method and multi-label learning methods.
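The two ingredients of this abstract can be sketched in a few lines: a sentence is mapped to a distribution over Plutchik's eight basic emotions, and a smoothness penalty ties the intensities of emotions that are adjacent on the wheel. This is a minimal illustrative sketch, not the paper's algorithm; the raw emotion scores and the penalty form are invented for the example.

```python
import math

# Plutchik's eight basic emotions in their circular order on the wheel.
EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

def softmax(scores):
    """Turn raw per-emotion scores into a distribution (intensities sum to 1)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def wheel_penalty(dist):
    """Penalise large intensity gaps between emotions adjacent on the wheel.

    Adjacent emotions co-occur more often, so a learning objective can add
    this term to encourage neighbouring intensities to stay close.
    """
    n = len(dist)
    return sum((dist[i] - dist[(i + 1) % n]) ** 2 for i in range(n))

# Hypothetical raw scores for a sentence expressing mostly joy and surprise.
raw = [2.0, 0.5, -1.0, 1.5, -2.0, -2.5, -1.5, 0.8]
dist = softmax(raw)              # emotion distribution for the sentence
loss = wheel_penalty(dist)       # would be added to the training objective
```

The penalty is one simple way to encode the co-occurrence structure of the wheel; the paper's actual formulation of emotion relations may differ.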
We introduce tf_geometric, an efficient and friendly library for graph deep learning that is compatible with both TensorFlow 1.x and 2.x. It provides kernel libraries for building Graph Neural Networks (GNNs) as well as implementations of popular GNN models. The kernel libraries consist of infrastructure for building efficient GNNs, including graph data structures, a graph map-reduce framework, and a graph mini-batch strategy. This infrastructure enables tf_geometric to support single-graph computation, multi-graph computation, graph mini-batching, and distributed training; tf_geometric can therefore be used for a variety of graph deep learning tasks, such as node classification, link prediction, and graph classification. On top of the kernel libraries, tf_geometric implements a variety of popular GNN models. To further ease the implementation of GNNs, tf_geometric also provides libraries for dataset management, graph sampling, and related utilities. Unlike existing popular GNN libraries, tf_geometric offers not only Object-Oriented Programming (OOP) APIs but also Functional APIs, which enable it to handle advanced tasks such as graph meta-learning. The APIs are friendly and suitable for both beginners and experts. CCS CONCEPTS: • Information systems → Computing platforms.
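The "graph map-reduce" pattern that such kernel libraries build on can be illustrated without the library itself: messages are gathered along edges and reduced into target nodes. The sketch below uses plain Python in place of TensorFlow ops, and the graph and feature values are made up; it does not reproduce tf_geometric's actual API.

```python
def mean_aggregate(x, edge_index):
    """One message-passing step: each node averages its neighbours' features.

    x          -- list of per-node feature vectors
    edge_index -- (sources, targets): edge i sends x[sources[i]] to targets[i]
    """
    sources, targets = edge_index
    dim = len(x[0])
    sums = [[0.0] * dim for _ in x]
    counts = [0] * len(x)
    for s, t in zip(sources, targets):      # "map": gather messages per edge
        for d in range(dim):
            sums[t][d] += x[s][d]
        counts[t] += 1
    # "reduce": scatter-mean into target nodes (isolated nodes keep zeros)
    return [[v / c for v in row] if c else row
            for row, c in zip(sums, counts)]

# 3-node path graph 0 - 1 - 2, with both edge directions listed explicitly
x = [[1.0], [2.0], [4.0]]
edge_index = ([0, 1, 1, 2], [1, 0, 2, 1])
h = mean_aggregate(x, edge_index)   # node 1 averages nodes 0 and 2
```

Expressing the step as gather/scatter over an edge list is what lets a library batch many graphs by concatenating their edge indices, which is the basis of the mini-batch strategy mentioned above.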
Fine-grained image classification requires ignoring interfering information and grasping local features, a challenge at which visual attention mechanisms excel. First, we construct a two-level attention convolutional network that characterizes object-level attention and pixel-level attention. We then combine the two kinds of attention through a second-order response transform algorithm. Furthermore, we propose a clustering-based grouping attention model that captures part-level attention. The grouping attention method stretches all the semantic feature maps in a deeper convolutional layer of the network into vectors; these vectors are clustered by vector dot product, and each cluster represents a particular semantic. The grouping attention algorithm implements the functions of group convolution and feature clustering, which greatly reduces the number of network parameters and improves the recognition rate and interpretability of the network. Finally, low-level visual features and high-level semantic information are merged by a multi-level feature fusion method to classify fine-grained images accurately. We achieve good results without using pre-trained networks or fine-tuning techniques.
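The clustering idea behind grouping attention can be sketched concretely: flattened channel feature maps are assigned to groups by dot-product similarity, so each group stands for one semantic part. This is a hedged toy sketch, not the paper's exact procedure; the prototype vectors and feature values are illustrative.

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def group_by_dot_product(features, prototypes):
    """Assign each flattened channel to the prototype it aligns with most."""
    groups = [[] for _ in prototypes]
    for idx, f in enumerate(features):
        scores = [dot(f, p) for p in prototypes]
        groups[scores.index(max(scores))].append(idx)
    return groups

# Four flattened channel maps and two semantic "part" prototypes (made up)
features = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]]
prototypes = [[1.0, 0.0], [0.0, 1.0]]
groups = group_by_dot_product(features, prototypes)
```

Because the subsequent convolution only mixes channels within a group, the assignment doubles as a grouping scheme for group convolution, which is how the method saves parameters.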
With the continuous evolution of research on convolutional neural networks, introducing attention mechanisms into the convolutional structure has become an efficient and popular approach. The channel attention designed in SENet has contributed greatly to the advancement of attention-based convolutional models. However, our research found that SENet attends to entire feature channels rather than to the objects within them: it simultaneously enhances or weakens both the target objects and the background information in a given channel. Building on the channel attention convolutional network, we first perform channel sorting and group convolution on the feature map and, during group convolution, expand each group to β times the original number of feature channels to construct a channel expansion convolutional network (CENet), where β is an array of channel expansion coefficients. CENet captures the attention of objects in a feature channel while expanding the proportion of features in the relatively important channels. Furthermore, we improve the structure of CENet and merge it into an intra-layer multi-scale convolutional model to construct an object-level attention multi-scale convolutional neural network (OAMS-CNN). We conducted extensive experiments on four datasets: CIFAR-10, CIFAR-100, FGVC-Aircraft, and Stanford Cars. The experimental results show that the proposed object-level attention convolutional model achieves good image classification results. INDEX TERMS: Channel expansion network, object-level attention CNN, multi-scale CNN, image classification.
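The channel-sorting-plus-expansion step can be sketched in isolation: channels are ranked by an importance score, split into groups, and each group is widened by its entry in the expansion array β. This is an illustrative sketch of the indexing only (no convolutions); the importance scores and β values are invented for the example.

```python
def expand_channels(scores, num_groups, beta):
    """Return per-group channel index lists after sorting and expansion.

    scores     -- one importance score per channel (e.g. from channel attention)
    num_groups -- number of groups for group convolution
    beta       -- expansion coefficient per group; group g is widened beta[g]x
    """
    order = sorted(range(len(scores)), key=lambda c: scores[c], reverse=True)
    size = len(order) // num_groups
    expanded = []
    for g in range(num_groups):
        group = order[g * size:(g + 1) * size]
        expanded.append(group * beta[g])   # replicate group beta[g] times
    return expanded

# Six channels, two groups; the more important group is expanded 2x
scores = [0.1, 0.9, 0.4, 0.8, 0.2, 0.6]
beta = [2, 1]
groups = expand_channels(scores, 2, beta)
# groups[0] holds the top-3 channels twice, groups[1] the remaining ones once
```

Sorting before grouping is what lets a per-group β concentrate extra capacity on the channels the attention module deems important.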