2021 6th International Conference on Computer Science and Engineering (UBMK)
DOI: 10.1109/ubmk52708.2021.9559014

Comparison of BERT Models and Machine Learning Methods for Sentiment Analysis on Turkish Tweets


Cited by 16 publications (11 citation statements) | References 0 publications
“…Choosing the appropriate supervised learning algorithm is essential for achieving accurate content classification and categorization. The task involves evaluating various algorithms such as support vector machines (SVM), random forests, or deep learning models like convolutional neural networks (CNNs) or transformer-based architectures (e.g., BERT, GPT) [4][5][6]. Additionally, model optimization techniques such as hyperparameter tuning, cross-validation, and regularization methods need to be applied to enhance the model's performance [16,17].…”
Section: Statement for the Task
confidence: 99%
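As a rough illustration of the tuning workflow this statement describes, the following Python sketch runs a cross-validated grid search over the regularization strength of a linear SVM text classifier; the corpus, labels, and parameter grid are toy values invented for the example, not data from the cited works.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Toy labelled corpus (hypothetical); 1 = positive, 0 = negative.
texts = [
    "great product, works perfectly",
    "awful experience, would not recommend",
    "fast shipping and friendly support",
    "broke after two days, terrible quality",
    "exactly what I needed, very happy",
    "waste of money, deeply disappointed",
]
labels = [1, 0, 1, 0, 1, 0]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])

# Cross-validated search over the SVM regularization strength C;
# 2 folds only because the toy corpus has 6 examples.
search = GridSearchCV(pipeline, {"svm__C": [0.1, 1.0, 10.0]}, cv=2)
search.fit(texts, labels)
print(search.best_params_, search.best_score_)
```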
“…Transformer-based architectures such as BERT (Bidirectional Encoder Representations from Transformers) have gained significant attention in natural language processing tasks due to their ability to capture contextual information effectively [4][5][6]. In the context of content management, these architectures can be leveraged to enhance the accuracy and efficiency of content classification and categorization.…”
Section: Main Part
confidence: 99%
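A minimal sketch of how a pre-trained transformer can be applied to content classification, using the Hugging Face transformers pipeline API; the checkpoint named below is a public English sentiment model chosen purely for illustration, and a real content-management deployment would fine-tune on its own label set.

```python
from transformers import pipeline

# Load an off-the-shelf fine-tuned BERT-family classifier.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# The pipeline tokenizes, runs the model, and maps logits to labels.
print(classifier("The new documentation made onboarding much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```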
“…The most important achievement of this model is that it is pre-trained on corpora from 104 languages, and it performs quite well even in low-resource languages. In addition, the M-BERT model is trained taking the structures of all these languages into account [37]. In this study, a pre-trained M-BERT model supporting 104 languages, including Turkish, with 12 stacked Transformer blocks, a hidden dimension of 768, 12 self-attention heads, and roughly 110,000,000 parameters in total was used.…”
Section: M-BERT Model
confidence: 99%
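The architecture figures quoted in this statement can be checked against the public M-BERT checkpoint; a small sketch, assuming the Hugging Face transformers library and the bert-base-multilingual-cased model (the standard public release of M-BERT):

```python
from transformers import AutoConfig

# Fetch the published configuration of the multilingual BERT checkpoint.
config = AutoConfig.from_pretrained("bert-base-multilingual-cased")
print(config.num_hidden_layers)    # 12 stacked Transformer blocks
print(config.hidden_size)          # 768-dimensional hidden states
print(config.num_attention_heads)  # 12 self-attention heads
```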
“…Comparative investigations demonstrate that the bidirectional LSTM-connected conditional random field (CRF) model outperforms the unidirectional LSTM-connected conditional random field (CRF) model. Existing event extraction methods [16,17], usually designed for news and similar corpora, mainly rely on trigger words to detect certain events and then extract the relevant event parameters, which makes them unsuitable for unstructured personnel resume texts [18]. The authors of [19] proposed that event types can be detected through the critical parameters of the event, without relying on trigger words, and the event parameters extracted accordingly.…”
Section: Introduction
confidence: 99%
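A minimal PyTorch sketch of the bidirectional LSTM-connected CRF architecture the comparison refers to; the layer sizes and tensors are illustrative, and the CRF layer comes from the third-party pytorch-crf package rather than from any of the cited works.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party: pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    """Minimal BiLSTM-CRF sequence tagger; all sizes are illustrative."""
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM: each direction gets half the hidden size.
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _features(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return self.emissions(out)

    def loss(self, tokens, tags, mask):
        # CRF returns the log-likelihood; negate it for a training loss.
        return -self.crf(self._features(tokens), tags, mask=mask)

    def predict(self, tokens, mask):
        # Viterbi decoding of the most likely tag sequence per example.
        return self.crf.decode(self._features(tokens), mask=mask)

model = BiLSTMCRF(vocab_size=5000, num_tags=9)
tokens = torch.randint(0, 5000, (2, 7))        # batch of 2 sequences, length 7
tags = torch.randint(0, 9, (2, 7))
mask = torch.ones(2, 7, dtype=torch.bool)
print(model.loss(tokens, tags, mask).item(), model.predict(tokens, mask))
```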