Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022) 2022
DOI: 10.18653/v1/2022.semeval-1.117
CS-UM6P at SemEval-2022 Task 6: Transformer-based Models for Intended Sarcasm Detection in English and Arabic

Abstract: Sarcasm is a form of figurative language in which the intended meaning of a sentence differs from its literal meaning. This poses a serious challenge to several Natural Language Processing (NLP) applications such as Sentiment Analysis, Opinion Mining, and Author Profiling. In this paper, we present our participating system for the intended sarcasm detection task in English and Arabic. Our system consists of three deep learning-based models leveraging two existing pre-trained language models for Arabic …
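The abstract describes the common pattern behind such systems: a pre-trained transformer encoder produces a sentence representation, and a small classification head maps it to sarcastic / non-sarcastic logits. A minimal numpy sketch of that pattern follows; the encoder is stubbed with a random vector, and all dimensions and names are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

# Sketch of the generic "encoder + classification head" pattern for
# binary sarcasm detection. A real system would obtain sentence_emb
# from a pre-trained language model (e.g. a BERT-style encoder);
# here it is a random stand-in vector.

def classify(sentence_emb, W, b):
    """Linear head: map a sentence embedding to a class prediction."""
    logits = sentence_emb @ W + b        # shape (2,): [not-sarcastic, sarcastic]
    return int(np.argmax(logits))

hidden = 768                             # typical BERT-family hidden size
rng = np.random.default_rng(42)
sentence_emb = rng.normal(size=hidden)   # stand-in for the encoder output
W = rng.normal(size=(hidden, 2)) * 0.01  # untrained head weights (illustrative)
b = np.zeros(2)

label = classify(sentence_emb, W, b)     # 0 or 1
```

In a trained system, `W` and `b` (and usually the encoder itself) are fine-tuned on labeled sarcasm data; the sketch only shows the data flow at inference time.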

Cited by 5 publications (6 citation statements). References 12 publications (23 reference statements).
“…For NestedNER, the ELYADATA team (Laouirine et al., 2023) ranks first with an F1 score of 93.73, followed by the UM6P & UL team (El Mahdaouy et al., 2023) with a score of 93.09, and in third place AlexU-AIC with a score of 92.61. Notably, four teams outperform baseline-I, with an F1-score gap of 2.05% between the baseline and the best model.…”
Section: Results
“…The highest performance for the product category is 66.67%, obtained by the ThinkNER team. For the quantity category, an F1-score of 63.16% is obtained by (El Mahdaouy et al., 2023). For website, the best performance is an F1-score of 69.26%.…”
Section: Results
“…Several approaches are adopted to enhance the base performance of BERT for sentiment analysis and sarcasm detection. Mahdaouy et al. (2021) added an attention layer on top of MARBERT and constructed the sentence embedding by concatenating the [CLS] embedding with the output of the attention layer. They also tackled multitask learning through the same architecture by adding an extra attention layer for the other task.…”
Section: Related Work
“…To date, the highest quality in many text classification tasks is achieved by neural network models based on the Transformer architecture [28] and, in particular, on the use of the language models Bidirectional Encoder Representations from Transformers (BERT) [29], RoBERTa [30], and their modifications. Thus, in a number of machine learning competitions related to classifying social media texts and held within major conferences, the best results were obtained with BERT and its variants (e.g., [31][32][33]).…”
Section: Review of Related Work