2023
DOI: 10.1007/s10586-023-04203-7

Foundation and large language models: fundamentals, challenges, opportunities, and social impacts

Devon Myers, Rami Mohawesh, Venkata Ishwarya Chellaboina, et al.
Cited by 16 publications (8 citation statements)
References 141 publications
“…This suggests that language models serve as invaluable allies in the pursuit of writing excellence, offering indispensable support in identifying and rectifying errors and inconsistencies. This result aligns with the prior investigation (Myers et al., 2023), which found that large language models can be used to identify (possible) grammatical problems at the semantic level and recommend appropriate, customized remediation techniques. The simplest assistance can be provided at the syntactic level, which involves finding and fixing errors.…”
Section: "Language Models Such As Grammarly Helped Me Meticulously Re..." (supporting)
confidence: 89%
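A minimal illustration of the kind of semantic-level grammar assistance described in this statement might look like the Python sketch below. It assumes the Hugging Face transformers library and a publicly available T5 grammar-correction checkpoint; the model name and the "grammar:" prompt prefix are assumptions for the example, not anything taken from Myers et al. (2023).

```python
# Minimal sketch: use a text-to-text model to flag and rewrite a possibly
# ungrammatical sentence. The checkpoint below is an assumed, publicly
# available grammar-correction model, not the cited paper's method.
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="vennify/t5-base-grammar-correction",  # assumed checkpoint
)

def suggest_fix(sentence: str) -> str:
    """Return the model's suggested rewrite for a sentence."""
    # This checkpoint's convention is to prefix inputs with "grammar: ".
    result = corrector("grammar: " + sentence, max_new_tokens=64)
    return result[0]["generated_text"]

print(suggest_fix("She no went to the market yesterday."))
```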
“…Another notable development was the integration of machine learning techniques that focused on anomaly detection within the generated text, flagging content that deviated significantly from established facts or logical coherence [39], [40], [41]. Despite the promise of these methodologies, they often required extensive computational resources and faced challenges in scaling up for broader applications [42]. This limitation highlighted the need for more efficient and scalable solutions that could maintain high standards of accuracy without compromising on the generative efficiency of LLMs, paving the way for innovative approaches like the integration of Cross-Referential Validation modules in models such as GPT-Neo-CRV.…”
Section: Existing Methods for Enhancing LLM Reliability (mentioning)
confidence: 99%
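In its simplest form, the anomaly-detection idea referenced in this statement could be illustrated by flagging generated sentences whose embedding similarity to a small set of reference facts falls below a threshold. The sketch below is only an illustration of that idea under assumed choices (encoder model and threshold are made up for the example); it is not the Cross-Referential Validation module or any method from the cited works.

```python
# Minimal sketch: flag generated sentences that are dissimilar to every
# reference fact. Illustrative only; model name and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

reference_facts = [
    "Water boils at 100 degrees Celsius at sea level.",
    "The Eiffel Tower is located in Paris, France.",
]
generated = [
    "The Eiffel Tower stands in central Paris.",
    "Water freezes at 80 degrees Celsius under normal pressure.",
]

fact_emb = encoder.encode(reference_facts, convert_to_tensor=True)
gen_emb = encoder.encode(generated, convert_to_tensor=True)

# Cosine similarity of each generated sentence against its best-matching fact.
scores = util.cos_sim(gen_emb, fact_emb).max(dim=1).values

THRESHOLD = 0.5  # assumed; would need tuning on real data
for sentence, score in zip(generated, scores.tolist()):
    flag = "ANOMALOUS" if score < THRESHOLD else "ok"
    print(f"[{flag}] ({score:.2f}) {sentence}")
```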
“…Studies within this theme have consistently demonstrated that LLMs can inherit and amplify biases present in their training data, affecting their language generation and decision-making processes [4], [8], [23]-[25]. A common finding across several studies is that biases related to gender, race, and ethnicity are particularly prevalent, often leading to stereotypical or prejudicial outputs [12], [20], [21], [26], [27].…”
Section: A. Identification and Analysis of Bias in LLMs (mentioning)
confidence: 99%
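One common way such biases are surfaced in practice is with simple templated probes of a masked language model. The sketch below assumes the Hugging Face transformers fill-mask pipeline with bert-base-uncased; it is an illustrative probe, not a method proposed in any of the cited studies.

```python
# Minimal sketch: compare how a masked language model fills an occupation slot
# in gender-templated prompts. Illustrative probe only.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The man worked as a [MASK].",
    "The woman worked as a [MASK].",
]

for prompt in templates:
    top = fill(prompt, top_k=5)
    completions = ", ".join(f"{p['token_str']} ({p['score']:.2f})" for p in top)
    print(f"{prompt} -> {completions}")
```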