Abstract: Sarcasm is a linguistic device that conveys the opposite of what is literally said, often something harsh intended to mock or offend. It is used widely on social media platforms every day. Because sarcasm can invert the meaning of a statement, opinion-analysis procedures are prone to errors on sarcastic text. As the use of automated social media analysis tools has expanded, concerns about the integrity of their analytics have grown. According to preliminary research, sarcastic statements alone have significant…
“…$L_{NSP}(x, y) = -\log P(d \mid x, y)$ (16). By conducting dynamic language pre-training, the ACC model can better understand and generate English contextual language and produce smoother, more accurate dialogues. This is of great significance for applications such as intelligent dialogue systems and chatbots [30].…”
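The next-sentence-prediction loss in equation (16) is a standard negative log-likelihood. A minimal sketch in plain Python (the probability value is a stand-in for a model's predicted $P(d \mid x, y)$, not taken from the paper):

```python
import math

def nsp_loss(p_correct: float) -> float:
    """Negative log-likelihood of the next-sentence label d given (x, y).

    p_correct stands in for the model's predicted probability
    P(d | x, y) of the true label; the loss L_NSP = -log P(d | x, y)
    shrinks toward 0 as that probability approaches 1.
    """
    return -math.log(p_correct)

# A confident correct prediction is penalised far less than an unsure one.
print(nsp_loss(0.9))   # small loss
print(nsp_loss(0.5))   # larger loss
```

The loss is 0 exactly when the model assigns probability 1 to the correct label, which is why minimising it pushes the model toward confident, correct sentence-pair predictions.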
Section: ACC Model Dynamic Language Pre-Training
Contextual understanding in complex conversation scenarios has long been a challenging problem, and most existing methods lack this capability. To bridge this gap, this paper formulates a novel composite large language model. Taking English contexts as the scene, a Transformer-BERT integrated automatic conversation model is proposed. First, the unidirectional BERT-based automatic conversation model is improved by introducing an attention mechanism, which is expected to enhance feature expression for conversation texts by linking context to identify long, difficult sentences. In addition, a bidirectional Transformer encoder is used as the input layer before the BERT encoder. Through these two modules, dynamic language training on English situational conversations can be completed to build the automatic conversation model. The model is then assessed on a large corpus of real-world English conversational contexts. The experimental results show that, compared with traditional rule-based or machine-learning methods, the proposal significantly improves response quality and fluency in English contexts: it understands context more accurately, captures subtle semantic differences, and generates more coherent responses.
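The attention mechanism the abstract describes, letting each token draw on the full conversational context, is scaled dot-product attention at its core. A minimal NumPy sketch (toy sizes and random vectors; this illustrates the mechanism, not the paper's actual Transformer-BERT architecture):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position's output is a weighted mix of all value vectors,
    so every token can attend to the whole context (the 'linking
    context' idea above). Weights come from a softmax over pairwise
    query-key relevance scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ V, weights

# Toy example: 4 token vectors of width 3 (illustrative numbers only).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
out, w = scaled_dot_product_attention(X, X, X)
print(out.shape)            # (4, 3): one contextualised vector per token
print(w.sum(axis=-1))       # each row of attention weights sums to 1
```

In a bidirectional encoder such as the one described, no mask restricts the weight matrix, so every token attends to tokens both before and after it.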
“…In [35], a social-graph methodology is used to detect hoaxes on social platforms. Sharma et al. [61] analyzed sarcastic tweets and built a hybrid model to detect them. Filtering out sarcastic tweets helps improve fake-text detection accuracy, since sarcastic tweets are sometimes marked as fake.…”
Social media play a significant role in communicating information across the globe: connecting with loved ones, getting the news, sharing ideas, and more. However, some people use social media to spread fake information, which harms society. Minimizing fake news and detecting it are therefore two primary challenges that need to be addressed. This paper presents a multi-modal deep learning technique to address them. The proposed model can use and process both visual and textual features, so it can detect fake information in visual and textual data. We used EfficientNet-B0 for detecting counterfeit images and a sentence transformer for textual learning. Feature embedding is performed in the individual channels, while fusion is done at the last classification layer. Late fusion is applied intentionally to mitigate the noisy data generated by multiple modalities. Extensive experiments were conducted and performance was evaluated against state-of-the-art methods on three real-world benchmark datasets: MediaEval (Twitter), Weibo, and Fakeddit. The results reveal that the proposed model outperformed the state-of-the-art methods, achieving accuracies of 86.48%, 82.50%, and 88.80% on the MediaEval (Twitter), Weibo, and Fakeddit datasets, respectively.
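The late-fusion design described above, separate per-modality embeddings joined only at the final classification layer, can be sketched with toy linear channels. All shapes and weights here are illustrative stand-ins for the real EfficientNet-B0 and sentence-transformer encoders:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def late_fusion(visual_feat, text_feat, W_v, W_t, W_head):
    """Hypothetical late-fusion sketch: each modality is projected by
    its own channel (W_v for visual, W_t for text); the channel outputs
    are concatenated only at the final classification head, so noise in
    one modality is less likely to corrupt the other's features."""
    fused = np.concatenate([visual_feat @ W_v, text_feat @ W_t])
    return softmax(fused @ W_head)   # e.g. P(real), P(fake)

# Toy feature sizes and random weights stand in for the real encoders.
rng = np.random.default_rng(1)
probs = late_fusion(rng.normal(size=4), rng.normal(size=6),
                    rng.normal(size=(4, 3)), rng.normal(size=(6, 3)),
                    rng.normal(size=(6, 2)))
print(probs)   # two class probabilities summing to 1
```

The contrast with early fusion is that here neither channel's raw features ever pass through the other's encoder; they only meet as already-embedded vectors at the head.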
“…Platforms are in an ongoing technological arms race, striving to outpace the capabilities of AI generators with more advanced and precise detection algorithms (Singh & Sharma, 2021). This back-and-forth has significant implications for the future of digital content curation and the role of AI in shaping the trustworthiness of shared media (Salim et al., 2022; Sharma et al., 2022). The importance of understanding and improving AI-generated image detection extends beyond the technical realm; it affects the very foundation of how information is perceived and trusted online.…”
The proliferation of images generated by artificial intelligence (AI) has significantly impacted the digital landscape, especially on social media platforms, where the distinction between natural and synthetic content is increasingly blurred. This study presents a comparative review of the strategies used by major social media platforms (Facebook/Instagram, Twitter, TikTok, and YouTube) to detect AI-generated images. Employing a comprehensive methodology that includes a systematic review of academic literature, analysis of platform policies, and expert interviews, this research assesses the effectiveness of various detection methods, ranging from sophisticated AI tools to user reporting mechanisms. The findings reveal diverse approaches: Facebook and Instagram utilise a blend of AI detection and human moderation; Twitter integrates machine learning algorithms with user reports; TikTok emphasises AI tools within moderation workflows and educational initiatives; and YouTube relies on its Content ID system alongside AI analysis. The study highlights the critical role of effective detection systems in maintaining content authenticity and user trust, underscoring the importance of balancing automated detection with human oversight. The ongoing development and refinement of these technologies, alongside collaborative efforts and evolving regulatory frameworks, are identified as essential for ensuring a trustworthy digital environment. This research contributes to the discourse on digital integrity, offering insights into the complexities of safeguarding social media ecosystems against the challenges posed by AI-generated content.