2022
DOI: 10.1145/3529755

On the Explainability of Natural Language Processing Deep Models

Abstract: Despite their success, deep networks are used as black-box models with outputs that are not easily explainable during the learning and the prediction phases. This lack of interpretability significantly limits the adoption of such models in domains where decisions are critical, such as the medical and legal fields. Recently, researchers have been interested in developing methods that help explain individual decisions and decipher the hidden representations of machine learning models in general and deep networks…

Cited by 54 publications (41 citation statements) | References 83 publications
“…We started our analysis plan by first adopting a computational approach to operationalize the abstract features we required participants to manipulate during the generation of the stories. Our approach relied on recent advances in natural language processing based on deep neural network models that learn statistical patterns from large text corpora 50,51,58,66 and allowed us to successfully differentiate the novel and appropriate components of complex textual data such as narratives. Note that, although semantic distance has been used extensively as a measure of novelty or originality in previous research 42,47,61, our measure of appropriateness, derived from the BERT model’s surprise, has not been used before and, in our opinion, can be a valid measure for further studies attempting to quantify the usefulness of a narrative.…”
Section: Discussion
confidence: 99%
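The excerpt above pairs two computational measures: semantic distance between text embeddings as a proxy for novelty, and BERT’s “surprise” (masked-token surprisal) as a proxy for appropriateness. The following is a minimal Python sketch of how such measures could be computed; the model names (all-MiniLM-L6-v2, bert-base-uncased) and the token-level averaging scheme are illustrative assumptions, not the cited study’s exact pipeline.

```python
# Sketch: "novelty" as embedding cosine distance, "appropriateness" as BERT
# masked-token surprisal (pseudo-perplexity). Model choices are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def semantic_distance(text_a: str, text_b: str) -> float:
    """Novelty proxy: cosine distance between the two texts' embeddings."""
    emb = embedder.encode([text_a, text_b], convert_to_tensor=True)
    return 1.0 - util.cos_sim(emb[0], emb[1]).item()

def bert_surprisal(sentence: str) -> float:
    """Appropriateness proxy: mean negative log-probability BERT assigns
    to each token when that token is masked out."""
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, ids.size(0) - 1):              # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        total -= torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total / (ids.size(0) - 2)

print(semantic_distance("A dragon opened a bakery.", "A man bought bread."))
print(bert_surprisal("A dragon opened a bakery on the moon."))
```

Under this reading, a story is more novel the farther its embedding sits from a reference text, and less appropriate the higher BERT’s average surprisal over its tokens.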
“…The healthcare industry is currently undergoing a transition from small-scale AI models to large-scale foundation AI models. Notably, in 2022, LLMs such as the Chat Generative Pre-trained Transformer (ChatGPT) [45] exhibited reasonable performance across a variety of domains [46,47]. Typically, small-scale AI systems are engineered to execute precise and limited-scope operations, such as health data screening or medical image analysis [48].…”
Section: Artificial Intelligence-enabled Large Language Models in Oph…
confidence: 99%
“…Nowadays there is growing interest in methods that provide explanations of the working mechanism or the predictions of neural networks. Considerable progress has been made in the domains of pattern recognition [9] and natural language processing [10]. However, only a few works deal with network architectures that process video data.…”
Section: Related Work
confidence: 99%
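The surveyed paper catalogs methods for explaining individual predictions of NLP deep models. As a concrete illustration of one common post-hoc technique, here is a minimal gradient × input token-saliency sketch for a text classifier; the checkpoint name is an assumption for illustration, standing in for whichever fine-tuned model one wants to explain.

```python
# Sketch: gradient × input token saliency for a sentiment classifier.
# The checkpoint name below is an illustrative assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
clf = AutoModelForSequenceClassification.from_pretrained(name).eval()

def token_saliency(text: str):
    """Attribute the predicted-class score to each input token."""
    enc = tok(text, return_tensors="pt")
    # Embed the tokens ourselves so gradients can flow to the embeddings.
    embeds = clf.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    logits = clf(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    logits[0, logits.argmax()].backward()            # predicted-class score
    scores = (embeds.grad * embeds).sum(-1)[0]       # gradient × input
    tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
    return sorted(zip(tokens, scores.tolist()), key=lambda t: -abs(t[1]))

for token, score in token_saliency("The movie was surprisingly good."):
    print(f"{token:>12s}  {score:+.4f}")
```

Tokens with large-magnitude scores are the ones whose embeddings most influenced the predicted class; this is only one of many attribution methods the survey covers.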