2021 6th International Conference on Computer Science and Engineering (UBMK)
DOI: 10.1109/ubmk52708.2021.9559007
The Effect of BERT, ELECTRA and ALBERT Language Models on Sentiment Analysis for Turkish Product Reviews

Cited by 7 publications (3 citation statements)
References 0 publications
“…The language model (LM) analyzes the bodies of texts for tasks such as word prediction and classification. Using the word sequences of the text bodies as input, it computes a probability distribution for the task (Guven, 2021b). LMs are divided into two types of model: unidirectional and bidirectional.…”
Section: Dil Modelleri (Language Models)
Citation type: unclassified
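The excerpt above says a language model takes word sequences as input and computes a probability distribution for the task. A minimal sketch of that idea with a toy bigram counter — a stand-in for illustration only, not the BERT/ELECTRA/ALBERT models the paper compares, and all names here are invented:

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    """Count word-pair frequencies: a toy stand-in for a neural language model."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def next_word_distribution(counts, word):
    """Probability distribution over the next token, given the previous one."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

corpus = ["the product is great", "the product is cheap", "the price is great"]
lm = train_bigram_lm(corpus)
dist = next_word_distribution(lm, "is")  # "great" is twice as likely as "cheap"
```

A real LM replaces the counting step with a trained network, but the output has the same shape: a normalized distribution over the vocabulary.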
“…Their results show that it is possible to achieve few-shot performance similar to GPT-3 with much smaller language models. Due to the instability of manually designed prompts, many subsequent studies explore automatically searching for prompts, either in a discrete space (Gao, Fisch, and Chen 2021; Jiang et al. 2020; Haviv, Berant, and Globerson 2021; Shin et al. 2020; Ben-David, Oved, and Reichart 2021) or in a continuous space (Qin and Eisner 2021; Hambardzumyan, Khachatrian, and May 2021; Han et al. 2021; Liu et al. 2021b). A discrete prompt is usually designed as a natural-language phrase with a blank to be filled, while a continuous prompt is a sequence of vectors that can be updated arbitrarily during learning.…”
Section: Prompt-based Few-shot Learning
Citation type: mentioning
confidence: 99%
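As a concrete illustration of the discrete case described above: a prompt is a natural-language template with a blank ([MASK]) for a masked language model to fill, plus a verbalizer mapping the filled-in token to a class label. The template and label words below are hypothetical, not taken from any of the cited papers:

```python
# Hypothetical label-word mapping (the "verbalizer"); real work tunes this.
LABEL_WORDS = {"great": "positive", "terrible": "negative"}

def build_prompt(review, template="{review} Overall, it was [MASK]."):
    """Wrap an input in a discrete (natural-language) prompt template."""
    return template.format(review=review)

def verbalize(predicted_token):
    """Map the token the model fills in back to a sentiment label."""
    return LABEL_WORDS.get(predicted_token, "unknown")

prompt = build_prompt("The battery lasts two days.")
# prompt == "The battery lasts two days. Overall, it was [MASK]."
# A masked LM would score candidate tokens for [MASK]; verbalize() maps them:
label = verbalize("great")  # "positive"
```

A continuous prompt replaces the template words with trainable embedding vectors, so there is no human-readable string to hand-design.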
“…Similar to the structure of a GAN (Goodfellow et al. 2014), it pre-trains a small generator to replace some tokens in an input with plausible alternatives, and then a large discriminator to distinguish whether each word has been replaced by the generator or not. The unique effectiveness of the pre-trained token-replaced detection model has motivated many studies to apply it to NLP tasks such as fact verification (Naseer, Asvial, and Sari 2021), question answering (Alrowili and Shanker 2021; Yamada, Asai, and Hajishirzi 2021), grammatical error detection (Yuan et al. 2021), emotion classification (Zhang, Yu, and Zhu 2021; Guven 2021), and medication mention detection (Lee et al. 2020). There are also other studies that upgrade or extend the token-replaced detection pre-training mechanism.…”
Section: Token-replaced Detection
Citation type: mentioning
confidence: 99%
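The generator/discriminator setup described above can be sketched with a toy corruption step: a stand-in "generator" randomly swaps tokens and records which positions changed, which is exactly the per-token labelling an ELECTRA-style discriminator is trained to predict. This is illustrative code under those assumptions, not the actual pre-training pipeline:

```python
import random

def corrupt(tokens, vocab, replace_prob=0.15, seed=0):
    """Toy 'generator': swap some tokens for alternatives from the vocabulary
    and record which positions were replaced (the discriminator's targets)."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            corrupted.append(rng.choice([w for w in vocab if w != tok]))
            labels.append(1)  # replaced token: discriminator should flag it
        else:
            corrupted.append(tok)
            labels.append(0)  # original token
    return corrupted, labels

vocab = ["good", "bad", "film", "the", "was", "not"]
tokens = "the film was not bad".split()
corrupted, labels = corrupt(tokens, vocab)
# The discriminator is then trained to predict `labels` from `corrupted`.
```

In ELECTRA proper, the generator is a small masked language model producing plausible (not random) replacements, which is what makes the detection task hard enough to be a useful pre-training signal.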