2022
DOI: 10.48550/arxiv.2206.14672
Preprint

Is it possible not to cheat on the Turing Test: Exploring the potential and challenges for true natural language 'understanding' by computers

Cited by 2 publications (2 citation statements)
References: 0 publications
“…For instance, due to their architecture and training regime, Transformers often fail at simple arithmetic (Floridi and Chiriatti 2020, Patel, Bhattamishra and Goyal 2021), arrive at bizarre deductions in scenarios that require real-world knowledge, and sometimes output obvious non sequiturs with sudden and extreme topic shifts that would be absurd coming from a human writer or speaker (Marcus and Davis 2020). Furthermore, given that any "knowledge" about the world that may be encoded in the model is not grounded in experience or reasoning but is filtered through language and its statistical properties (e.g., Alberts 2022), such as the frequent co-occurrence of certain terms, Transformers often resort to heuristics: they produce associatively plausible rather than factually correct answers to information questions (Sobieszek and Price 2022), and to some extent rely on simple lexical overlap between a premise and a hypothesis to predict entailment or non-entailment (McCoy, Pavlick and Linzen 2019).…”
mentioning
confidence: 99%
“…For instance, due to their architecture and training regime, Transformers often fail at simple arithmetic (Floridi & Chiriatti 2020, Patel et al. 2021), arrive at bizarre deductions in scenarios that require real-world knowledge, and sometimes output obvious non sequiturs with sudden and extreme topic shifts that would be absurd coming from a human writer or speaker (Marcus & Davis 2020). Furthermore, given that any "knowledge" about the world that may be encoded in the model is not grounded in experience or reasoning but is filtered through language and its statistical properties (e.g., Alberts 2022), such as the frequent co-occurrence of certain terms, Transformers often resort to heuristics: they produce associatively plausible rather than factually correct answers to information questions (Sobieszek & Price 2022), and to some extent rely on simple lexical overlap between a premise and a hypothesis to predict entailment or non-entailment (McCoy et al. 2019).…”
mentioning
confidence: 99%
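
Both citation statements reference the lexical-overlap finding of McCoy, Pavlick and Linzen (2019). As an illustration only, not code from any cited paper, a minimal Python sketch of such a heuristic, which predicts entailment whenever every word of the hypothesis also appears in the premise, might look like this:

def overlap_heuristic(premise: str, hypothesis: str) -> str:
    # Predict "entailment" purely from lexical overlap, ignoring
    # syntax and word order entirely (hypothetical illustration).
    premise_words = set(premise.lower().split())
    hypothesis_words = set(hypothesis.lower().split())
    return "entailment" if hypothesis_words <= premise_words else "non-entailment"

# The heuristic happens to be right when the sentences match...
print(overlap_heuristic("the lawyer visited the doctor",
                        "the lawyer visited the doctor"))  # entailment (correct)
# ...but it also labels the reversed sentence as entailment, which is wrong:
print(overlap_heuristic("the lawyer visited the doctor",
                        "the doctor visited the lawyer"))  # entailment (incorrect)

The second call shows the failure mode the statement describes: because the two sentences share exactly the same words, a purely associative shortcut cannot distinguish them, even though word order reverses the meaning.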