Sentiment classification is a key task in mining people's opinions, and improved sentiment classification can help individuals make better decisions. Social media users increasingly express their opinions and share their experiences through both images and text, rather than through text alone as in conventional social media. Understanding how to fully exploit both modalities is therefore critical in a variety of tasks, including sentiment classification. In this work, we propose a new multimodal sentiment classification approach, the Visual Distillation and Attention Network (VisdaNet). First, we propose a knowledge augmentation module that compensates for the limited information in short texts by integrating image captions with the original text. Second, to address the information-control problem in multimodal fusion for product reviews, we propose a CLIP-based knowledge distillation module that reduces noise in the original modalities and improves the quality of their representations. Finally, to handle the single-text, multi-image fusion problem in product reviews, we propose CLIP-based visual aspect attention, which models the text-image interaction in this setting and realizes feature-level fusion across modalities. Experiments on the Yelp multimodal dataset show that our model outperforms the previous state-of-the-art model, and ablation studies demonstrate the effectiveness of each component of the proposed model.
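A minimal sketch of the single-text, multi-image attention idea described above, assuming pre-computed CLIP text and image embeddings as inputs; the module names, dimensions, and fusion layer are illustrative assumptions rather than the authors' exact VisdaNet implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualAspectAttention(nn.Module):
    """One review text attends over the review's multiple image embeddings."""
    def __init__(self, dim=512):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # projects the text embedding
        self.key = nn.Linear(dim, dim)     # projects each image embedding
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, text_emb, image_embs):
        # text_emb: (batch, dim) CLIP text features for one review
        # image_embs: (batch, n_images, dim) CLIP image features for its photos
        q = self.query(text_emb).unsqueeze(1)           # (batch, 1, dim)
        k = self.key(image_embs)                        # (batch, n, dim)
        scores = (q * k).sum(-1) / k.size(-1) ** 0.5    # scaled dot products
        weights = F.softmax(scores, dim=-1)             # attention over images
        visual = (weights.unsqueeze(-1) * image_embs).sum(1)  # (batch, dim)
        return self.fuse(torch.cat([text_emb, visual], dim=-1))

# toy usage with random features standing in for CLIP outputs
fused = VisualAspectAttention()(torch.randn(2, 512), torch.randn(2, 5, 512))
print(fused.shape)  # torch.Size([2, 512])
```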
Small object detection is a challenging task in computer vision because only limited semantic information can be extracted from small objects and they are highly susceptible to background interference. In this paper, we propose a decoupled multi-scale small object detection algorithm named DMS-YOLOv5. The algorithm incorporates a receptive field module into the feature extraction network to better focus on low-resolution small objects. A coordinate attention mechanism, which combines spatial and channel attention, is introduced to reduce interference from background information and strengthen the network's attention to object information. A detection layer tailored to small objects is added to compensate for the information lost in repeated downsampling operations, greatly improving the detection of small objects. Next, a decoupled head is introduced into the detection network to handle the classification and bounding box regression tasks in separate branches. Finally, the bounding box loss function is improved to alleviate missed detections caused by densely packed and mutually occluded small objects. The improved method achieves a 12.1% gain in mean average precision on the VisDrone2019-DET dataset over the original method, and it also performs well in comparison experiments against similar methods, validating its effectiveness.
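The coordinate attention referenced above follows a published design (Hou et al., 2021) that pools features separately along the height and width directions so the resulting attention weights retain positional information. The sketch below shows that general mechanism; how DMS-YOLOv5 places it within the YOLOv5 backbone or neck is not reproduced here.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # direction-aware pooling: average over width and over height separately
        x_h = x.mean(dim=3, keepdim=True)                      # (b, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (b, c, w, 1)
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (b, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (b, c, 1, w)
        return x * a_h * a_w  # positional attention applied along both axes

out = CoordinateAttention(64)(torch.randn(1, 64, 80, 80))
print(out.shape)  # torch.Size([1, 64, 80, 80])
```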
Emotional speech synthesis is an important branch of human-computer interaction technology that aims to generate emotionally expressive and intelligible speech from input text. With the rapid development of deep-learning-based speech synthesis, research on emotional speech synthesis has gradually attracted scholarly attention. However, because high-quality emotional speech corpora are scarce, emotional speech synthesis under low-resource conditions is prone to overfitting, exposure bias, catastrophic forgetting, and other problems that degrade the generated speech. In this paper, we propose an emotional speech synthesis method that integrates transfer learning, semi-supervised training, and a robust attention mechanism to better adapt to the emotional style of the speech data during fine-tuning. By adopting an appropriate fine-tuning strategy, trade-off parameter configuration, and pseudo-labels incorporated into the loss function, we efficiently guide the regularized synthesis of emotional speech. The proposed SMAL-ET2 method outperforms the baseline methods in both subjective and objective evaluations. The results demonstrate that our training strategy, combining stepwise monotonic attention with a semi-supervised loss, alleviates overfitting and improves the generalization ability of the text-to-speech model, enabling it to synthesize different categories of emotional speech with better naturalness and emotion similarity.
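As a rough illustration of the semi-supervised objective mentioned above, the sketch below combines a supervised reconstruction loss with a pseudo-label loss scaled by a trade-off parameter; the specific loss terms, tensor shapes, and weight value are assumptions, not the exact SMAL-ET2 formulation.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(pred_mel, target_mel,
                         emotion_logits, pseudo_labels,
                         tradeoff=0.1):
    # supervised term: mel-spectrogram reconstruction on the labeled corpus
    recon = F.l1_loss(pred_mel, target_mel)
    # semi-supervised term: cross-entropy against pseudo emotion labels
    pseudo = F.cross_entropy(emotion_logits, pseudo_labels)
    # trade-off parameter balances the two objectives during fine-tuning
    return recon + tradeoff * pseudo

loss = semi_supervised_loss(torch.randn(2, 80, 100), torch.randn(2, 80, 100),
                            torch.randn(2, 5), torch.tensor([1, 3]))
print(loss.item())
```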
Image-text matching is a crucial aspect of multimodal intelligence. The main challenge is accurately measuring the relevance between an image and a text from the evidence obtained through matching. Previous studies either concentrated on learning a well-represented global feature to measure similarity directly or on investigating complex matching patterns at the local level before aggregating them, with little attention paid to combining the two. We propose a Globally Guided Confidence Enhancement Network that combines both approaches by using a good global representation to guide fine-grained local interactions: content that better matches the text from a global perspective is enhanced and assigned higher confidence scores. Extensive experiments demonstrate that our approach achieves superior performance on the Flickr30K and MSCOCO datasets.
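The sketch below illustrates the global-guides-local pattern in a hedged form: a global text feature assigns confidence to image regions, and those confidences re-weight fine-grained region-word similarities. The scoring and aggregation shown are simplified assumptions, not the network's actual design.

```python
import torch
import torch.nn.functional as F

def guided_similarity(regions, words, global_img, global_txt):
    # regions: (n_regions, d), words: (n_words, d), global_*: (d,)
    regions = F.normalize(regions, dim=-1)
    words = F.normalize(words, dim=-1)
    local = regions @ words.t()  # region-word cosine similarities
    # confidence of each region from its agreement with the global text feature
    conf = torch.sigmoid(regions @ F.normalize(global_txt, dim=-1))
    # confidence-weighted local evidence, aggregated over words
    local_score = (conf.unsqueeze(1) * local).max(dim=0).values.mean()
    global_score = F.cosine_similarity(global_img, global_txt, dim=0)
    return 0.5 * (local_score + global_score)

score = guided_similarity(torch.randn(36, 256), torch.randn(12, 256),
                          torch.randn(256), torch.randn(256))
print(score.item())
```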
In sentiment analysis, biased user reviews can have a detrimental impact on a company's evaluation. Identifying such users is therefore highly beneficial, because their reviews reflect psychological traits of the reviewer rather than the actual quality of the product or service. Moreover, biased users can act as instigators of further prejudiced information on social media, so a method that helps detect polarized opinions in product reviews offers significant advantages. This paper proposes a new method for sentiment classification of multimodal data, called UsbVisdaNet (User Behavior Visual Distillation and Attention Network), which identifies biased user reviews by analyzing users' psychological behaviors. It can identify both positively and negatively biased users and, by leveraging user behavior information, improves sentiment classification results that would otherwise be skewed by subjective biases in user opinions. Ablation and comparison experiments demonstrate the effectiveness of UsbVisdaNet, which achieves superior sentiment classification performance on the Yelp multimodal dataset. Our work pioneers the integration of user behavior features, text features, and image features at multiple hierarchical levels in this domain.
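A hedged sketch of fusing user behavior features with text and image features at more than one level; the gated sentence-level fusion and the document-level concatenation shown here are illustrative assumptions, not UsbVisdaNet's exact architecture.

```python
import torch
import torch.nn as nn

class UserGuidedFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.sent_gate = nn.Linear(2 * dim, dim)  # user-conditioned gate over sentence features
        self.doc_fuse = nn.Linear(3 * dim, dim)   # document-level fusion of all three modalities

    def forward(self, user_emb, sent_feats, img_feat):
        # user_emb: (b, d) user behavior embedding
        # sent_feats: (b, n_sent, d) sentence-level text features
        # img_feat: (b, d) pooled image feature
        u = user_emb.unsqueeze(1).expand_as(sent_feats)
        gate = torch.sigmoid(self.sent_gate(torch.cat([sent_feats, u], dim=-1)))
        doc = (gate * sent_feats).mean(dim=1)  # user-aware document feature
        return self.doc_fuse(torch.cat([doc, user_emb, img_feat], dim=-1))

out = UserGuidedFusion()(torch.randn(2, 256), torch.randn(2, 7, 256), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 256])
```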