2021
DOI: 10.3390/fi14010010

Dis-Cover AI Minds to Preserve Human Knowledge

Abstract: Modern AI technologies make use of statistical learners that lead to a self-empiricist logic which, unlike human minds, operates on learned non-symbolic representations. Nevertheless, this does not seem to be the right way to make progress in AI. The structure of symbols (the operations by which an intellectual solution is realized) and the search for strategic reference points raise important issues in the analysis of AI. Studying how knowledge can be represented through methods of theoretical generalization and empirical ob…

Cited by 16 publications (7 citation statements)
References: 57 publications

“…Another method, called Knowledge Inference or Latent Knowledge Estimation, is associated with using EDM techniques to "assess the skills of students based on their responses to a problem-solving exercise" (p. 184) [21]. For example, natural language processing technologies (e.g., Transformers) have the potential to be used to measure human language knowledge acquisition [23]. This research fits within the classification method, one of the most commonly used methods in predictive analysis [20].…”
Section: Learning Analytics and Educational Data Mining
Citation type: mentioning | Confidence: 99%
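
The classification-based reading of Knowledge Inference in this statement can be made concrete with a short sketch. The toy response matrix, mastery labels, and the choice of scikit-learn's LogisticRegression below are illustrative assumptions, not the method of [21]:

```python
# Minimal sketch of classification-based knowledge inference (hypothetical data).
# Each row encodes one student's responses to exercise items; the label marks
# whether the student is judged to have mastered the target skill.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))  # 200 students x 10 items (1 = correct)
y = (X.sum(axis=1) > 6).astype(int)     # toy mastery label, for illustration only

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("P(mastery) for one new response pattern:",
      clf.predict_proba(X_test[:1])[0, 1])
```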
“…However, as a transformer-based model, BERT relies heavily on knowledge learned from a large corpus [22]. In addition, BERT models require considerable computing resources in the training stage.…”
Section: Transformer-based Model
Citation type: mentioning | Confidence: 99%
“…In addition, BERT models require considerable computing resources in the training stage. BERT appears to contain appropriate mechanisms for learning universal linguistic representations that are task-independent [22]. Therefore, releasing BERT models pretrained on various types of corpora gives other researchers pretrained models to fine-tune for specific tasks, improving the efficiency of research and business practice.…”
Section: Transformer-based Model
Citation type: mentioning | Confidence: 99%
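
The fine-tuning workflow this statement describes (reusing a publicly released pretrained BERT instead of training from scratch) looks roughly as follows with the Hugging Face Transformers API; the checkpoint name, toy texts, labels, and learning rate are placeholder assumptions:

```python
# Sketch: fine-tune a pretrained BERT for a downstream classification task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pretrained encoder + fresh task head

texts = ["the model generalizes well", "training diverged again"]  # toy data
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # forward pass computes cross-entropy
outputs.loss.backward()                  # one illustrative gradient step
optimizer.step()
print("fine-tuning loss:", outputs.loss.item())
```

Only the small task head is randomly initialized; the encoder weights come from pretraining, which is why adapting a released checkpoint is far cheaper than training BERT itself.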
“…USEs such as BERT (Devlin et al., 2019) encode semantic features in their hidden layers (Jawahar et al., 2019; Miaschi et al., 2020). However, USEs' success in downstream tasks may be due to superficial heuristics (as argued in McCoy et al. (2019) and Ranaldi et al. (2022)) rather than to deep modeling of semantic features. Therefore, our study can contribute to this debate.…”
Section: Background and Related Work
Citation type: mentioning | Confidence: 99%
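
The probing studies cited here ask which of BERT's hidden layers encode semantic features. Below is a hedged sketch of the first step of such a probe, extracting layer-wise representations; the checkpoint and example sentence are assumptions, and a real probe would then train a small classifier on these vectors:

```python
# Sketch: extract per-layer BERT representations for a probing study.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

batch = tokenizer("Universal sentence encoders capture semantics.",
                  return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**batch).hidden_states  # embeddings + 12 layer outputs

# Mean-pool each layer into one sentence vector; a probe classifier would be
# trained on these vectors to test which layers encode a given feature.
for i, layer in enumerate(hidden_states):
    sentence_vec = layer.mean(dim=1)  # shape: (1, hidden_size)
    print(f"layer {i:2d}: vector norm = {sentence_vec.norm().item():.2f}")
```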