This paper proposes a method for estimating the emotions expressed by emoticons based on distributed representations of the meanings of the characters that compose them. Existing studies on emoticons have focused on extracting emoticons from text and estimating the associated emotions by separating them into their constituent parts and using the combination of parts as features. Applying a recently developed word-embedding technique, we propose a versatile approach to emotion estimation from emoticons by learning the meanings of the characters constituting the emoticons and using them as the feature unit of the emoticon. A cross-validation test was conducted for the proposed model, which is based on deep convolutional neural networks that take the distributed character representations as features. The results show that our proposed method estimates the emotions of unknown emoticons with a higher F1-score than a baseline method based on character n-grams.
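The abstract above does not include code; the following is a minimal sketch of the described idea, assuming PyTorch and an illustrative vocabulary size, embedding dimension, and emotion set rather than the authors' settings: characters of an emoticon are mapped to learned embeddings, and a one-dimensional convolutional network over the character sequence outputs emotion scores.

```python
# Hedged sketch, not the authors' code: character embeddings + 1-D CNN over the
# character sequence of an emoticon, producing emotion logits. Vocabulary size,
# embedding dimension, and the number of emotion classes are assumptions.
import torch
import torch.nn as nn

class EmoticonCNN(nn.Module):
    def __init__(self, vocab_size=3000, embed_dim=64, num_emotions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)      # max-pool over character positions
        self.fc = nn.Linear(128, num_emotions)

    def forward(self, char_ids):                 # char_ids: (batch, max_len) int ids
        x = self.embed(char_ids).transpose(1, 2) # (batch, embed_dim, max_len)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)             # (batch, 128)
        return self.fc(x)                        # emotion logits

# Example: a batch of two emoticons, each padded to 20 character ids.
model = EmoticonCNN()
logits = model(torch.randint(1, 3000, (2, 20)))
print(logits.shape)  # torch.Size([2, 6])
```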
In recent years, many non-verbal expressions have come to be used on social media. ASCII art (AA) is a form of expression that uses characters to create visual images. In this paper, we set up an experiment to classify AA pictures using character features and image features, aiming to clarify which type of feature is more effective for classifying AA pictures. We propose five methods: 1) a method based on character frequency, 2) a method based on character importance values, 3) a method based on image features, 4) a method based on image features using pre-trained neural networks, and 5) a method based on image features of characters. We trained neural networks using these five types of features. In the experimental results, the best classification accuracy was obtained by the feed-forward neural network that used image features of characters.
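As a hedged illustration of only the first of the five methods listed above (character frequency), the following sketch, which is not the authors' implementation, builds a normalized character histogram for each AA picture and trains a small feed-forward network on it; the character set and category labels are placeholders.

```python
# Illustrative sketch of a character-frequency baseline for AA classification.
# The vocabulary, samples, and labels are placeholders, not the paper's data.
import numpy as np
from sklearn.neural_network import MLPClassifier

def char_frequency_vector(aa_text, vocab):
    """Normalized count of each vocabulary character in an AA picture."""
    counts = np.array([aa_text.count(ch) for ch in vocab], dtype=float)
    total = counts.sum()
    return counts / total if total > 0 else counts

vocab = list(" _|/\\-=oO.")                      # assumed character set
samples = ["  _ _ \n (o o)\n  \\_/ ", " |===|\n |   |\n |___| "]
labels = ["face", "box"]                         # placeholder categories

X = np.vstack([char_frequency_vector(s, vocab) for s in samples])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, labels)
print(clf.predict(X))
```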
People often make decisions based on sensitivity rather than rationality. In the field of biological information processing, methods are available for analyzing biological signals directly, based on electroencephalography (EEG), to determine the pleasant or unpleasant reactions of users. In this study, we propose a sensitivity filtering technique for discriminating preferences (pleasant/unpleasant) for images using a sensitivity image filtering system based on EEG. Using a set of images retrieved by similarity retrieval, we perform sensitivity-based pleasant/unpleasant classification of images based on affective features extracted with the maximum entropy method (MEM). In the present study, the affective features comprised cross-correlation features obtained from EEGs recorded while an individual observed an image. However, it is difficult to measure the EEG when a subject views an unknown image. Thus, we propose a solution in which a linear regression method based on canonical correlation is used to estimate the cross-correlation features from image features. Experiments were conducted to evaluate the validity of sensitivity filtering compared with image similarity retrieval methods based on image features. We found that sensitivity filtering using color correlograms was suitable for classifying preferred images, while sensitivity filtering using local binary patterns was suitable for classifying unpleasant images. Moreover, sensitivity filtering using local binary patterns achieved a 90% success rate for unpleasant images. Thus, we conclude that the proposed method is effective for filtering unpleasant images.
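The estimation step described above could look roughly like the following sketch, which is an assumption rather than the authors' code: canonical correlation analysis relates image features to EEG cross-correlation features so that the latter can be predicted for an image whose EEG response was never measured. The feature dimensions and random training data are purely illustrative.

```python
# Hedged sketch of estimating EEG-derived affective features from image features
# via canonical correlation analysis. Dimensions and data are illustrative.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
image_features = rng.normal(size=(100, 32))   # e.g. color correlogram / LBP vectors
eeg_features = rng.normal(size=(100, 16))     # cross-correlation features from EEG

cca = CCA(n_components=8)
cca.fit(image_features, eeg_features)

# Predict EEG-derived affective features for a new, unseen image.
new_image = rng.normal(size=(1, 32))
estimated_eeg = cca.predict(new_image)
print(estimated_eeg.shape)                    # (1, 16)
```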
This paper proposes an emotion recognition method for tweets containing emoticons that uses both emoticon image features and language features. Some existing methods register emoticons and their facial expression categories in a dictionary and look them up, while others recognize emoticon facial expressions based on the various elements of the emoticons. However, highly accurate emotion recognition cannot be achieved unless the recognition is based on a combination of sentence and emoticon features. Therefore, we propose a model that recognizes emotions by extracting the shape features of emoticons from their image data and taking as input a feature vector that combines these image features with features extracted from the text of the tweets. Evaluation experiments confirm that the proposed method achieves high accuracy and is more effective than methods that use text features only.
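A minimal sketch of the feature-level fusion described above, under assumed placeholder extractors (bag-of-words for the tweet text and a stand-in vector for the emoticon shape features) rather than those used in the paper:

```python
# Hedged sketch of feature-level fusion: text features and emoticon image features
# are concatenated into one vector and classified. Extractors, values, and labels
# are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["so happy today (^_^)", "this is awful (;_;)"]
emotions = ["joy", "sadness"]

# Assumed image features per tweet, e.g. shape descriptors of the rendered emoticon.
emoticon_image_features = np.array([[0.8, 0.1, 0.3], [0.2, 0.9, 0.4]])

vectorizer = CountVectorizer()
text_features = vectorizer.fit_transform(tweets).toarray()

X = np.hstack([text_features, emoticon_image_features])   # feature-level fusion
clf = LogisticRegression().fit(X, emotions)
print(clf.predict(X))
```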