2022
DOI: 10.1016/j.jksuci.2022.05.002

Distributed application execution in fog computing: A taxonomy, challenges and future directions

Cited by 9 publications (9 citation statements)
References 79 publications
“…The variability in recognition performance can be attributed to biases in the training data and the labeling methodologies used [12]. In prior research that attempted to generate datasets with emotion labels using ChatGPT, Koptyra et al [3] reported variability in the number of generated labels.…”
Section: Discussion
confidence: 99%
“…However, one of the caveats associated with LLMs is their tendency to exhibit biases in the generated outputs. These biases often have roots in the non-uniformity of their training data and the labeling methodologies employed [12]. Consequently, it is likely that potential biases manifest in the nuances of emotional recognition.…”
Section: Introduction
confidence: 99%
“…Transformer architecture was developed to help reduce limitations of earlier sequence‐to‐sequence models for natural language processing (e.g., recurrent neural networks) and uses a stack of 13 transformer blocks: each block has 12 attention heads with 768 hidden units (Ray 2023, pp. 212, 124; Sohail et al, 2023, p. 3). The tokenizer ‘divides raw text into smaller units called tokens for easier processing’.…”
Section: Introduction
confidence: 98%
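The quoted passage gives concrete dimensions for a GPT-style model: a stack of transformer blocks, each with 12 attention heads and 768 hidden units, fed by a tokenizer that splits raw text into tokens. As a minimal sketch only — the block below is a generic PyTorch illustration, not code from the cited papers; the `TransformerBlock` name, the 4x feed-forward expansion, the post-norm placement, and the whitespace tokenizer are all assumptions (real GPT models use byte-pair encoding):

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One block with the quoted dimensions: 12 heads, 768 hidden units."""
    def __init__(self, hidden: int = 768, heads: int = 12):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(hidden)
        self.mlp = nn.Sequential(            # feed-forward sublayer
            nn.Linear(hidden, 4 * hidden),   # 4x expansion is an assumption
            nn.GELU(),
            nn.Linear(4 * hidden, hidden),
        )
        self.norm2 = nn.LayerNorm(hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention with a residual connection, then the MLP sublayer.
        a, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + a)
        return self.norm2(x + self.mlp(x))

# A stack of 13 blocks, matching the count in the quoted description.
model = nn.Sequential(*[TransformerBlock() for _ in range(13)])

# Naive tokenization for illustration: split raw text into smaller units.
tokens = "ChatGPT generates text".split()
print(tokens)  # ['ChatGPT', 'generates', 'text']

# Shape check: embeddings of (batch, sequence length, hidden size) pass
# through the stack unchanged in shape.
x = torch.randn(1, len(tokens), 768)
print(model(x).shape)  # torch.Size([1, 3, 768])
```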
“…ChatGPT has emerged from the subfield of natural language processing (NLP), a mix of computer science, linguistics, and, increasingly, AI. Developed by the San Francisco‐based OpenAI company, the chatbot is a large language model (LLM) engineered for text generation, language translation, text summarization, and data analysis (Sohail et al, 2023, pp. 2–3, 11, table 2).…”
Section: Introduction
confidence: 99%