2023
DOI: 10.3390/a16030146

Transfer Learning and Analogical Inference: A Critical Comparison of Algorithms, Methods, and Applications

Abstract: Artificial intelligence and machine learning (AI/ML) research has aimed to achieve human-level performance in tasks that require understanding and decision making. Although major advances have been made, AI systems still struggle to achieve adaptive learning for generalization. One of the main approaches to generalization in ML is transfer learning, where previously learned knowledge is utilized to solve problems in a different, but related, domain. Another approach, pursued by cognitive scientists for several…

Cited by 4 publications (3 citation statements)
References 85 publications
“…(ELMo) [104], XLNet [105], and the Bidirectional Encoder Representations from Transformers (BERT) family of models [106]. BERT was released in 2018 (see [107]) and was soon followed by the Robustly optimized BERT pre-training Approach (RoBERTa) (see [108]), A Lite BERT (ALBERT) (see [109]), and a distilled version of BERT and RoBERTa (DistilBERT and DistilRoBERTa, respectively) (see [110]) [106].…”
Section: Methods
confidence: 99%
“…Word2Vec was followed by several other vector space models, with Global Vectors (GloVe) (see [101]) and FastText (see [102]) being the most prominent [103]. The embeddings of vector space models are static, meaning there is no variation for words with multiple meanings; however, this was addressed in more recent word embedding models that have contextualized vectors, such as Embeddings from Language Models (ELMo) [104], XLNet [105], and the Bidirectional Encoder Representations from Transformers (BERT) family of models [106]. BERT was released in 2018 (see [107]) and was soon followed by the Robustly optimized BERT pre-training Approach (RoBERTa) (see [108]), A Lite BERT (ALBERT) (see [109]), and a distilled version of BERT and RoBERTa (DistilBERT and DistilRoBERTa, respectively) (see [110]) [106].…”
Section: Metric Description Citation
confidence: 99%
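The distinction drawn in the citing statement above, static versus contextualized word vectors, can be made concrete with a minimal sketch. The example below is illustrative and not taken from the cited paper: it assumes the Hugging Face transformers and torch packages, uses the "distilbert-base-uncased" checkpoint as one arbitrary BERT-family model, and defines a hypothetical helper word_vector to extract a token's vector from the last hidden layer. It shows that the same surface form ("bank") receives different vectors in different contexts, which a static model such as Word2Vec or GloVe cannot do.

```python
# Illustrative sketch only: contextualized embeddings from a BERT-family model.
# Assumes `transformers` and `torch` are installed; the checkpoint name and the
# helper function below are choices made for this example, not the paper's setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual vector of `word` from the model's last hidden layer."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # shape: (seq_len, hidden_dim)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

# The same word "bank" gets context-dependent vectors.
river = word_vector("the boat drifted toward the river bank", "bank")
money = word_vector("she deposited money at the bank", "bank")
money2 = word_vector("he opened an account at the bank", "bank")

# Vectors for the same sense should be closer than vectors for different senses.
print("different senses:", torch.cosine_similarity(river, money, dim=0).item())
print("same sense:      ", torch.cosine_similarity(money, money2, dim=0).item())
```

Under this setup one would typically observe a higher cosine similarity between the two financial uses of "bank" than between the financial and riverside uses, which is the behavior the citing statement attributes to contextualized models such as ELMo and the BERT family.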