2020
DOI: 10.3390/s20164385
Interpretability of Input Representations for Gait Classification in Patients after Total Hip Arthroplasty

Abstract: Many machine learning models show black box characteristics and, therefore, a lack of transparency, interpretability, and trustworthiness. This strongly limits their practical application in clinical contexts. For overcoming these limitations, Explainable Artificial Intelligence (XAI) has shown promising results. The current study examined the influence of different input representations on a trained model’s accuracy, interpretability, as well as clinical relevancy using XAI methods. The gait of 27 healthy sub…

Cited by 49 publications (46 citation statements)

References 45 publications
“…In recent years, interest in XAI has been increasing owing to its classification accuracy and to features that significantly contributed to classification. Dindorf et al [ 12 ] used the local interpretable model-agnostic explanations (LIME) to understand the features for identifying total hip arthroplasty (THA), and found that the sagittal movement of the hip, knee, and pelvis as well as transversal movement of the ankle were especially important for this specific classification task, as shown in Table 2 .…”
Section: Related Work
confidence: 99%
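The citation above describes using LIME, a model-agnostic XAI method, to rank which gait features drove a classifier's decisions. As a minimal sketch of that kind of feature attribution, the snippet below trains a classifier on synthetic data and ranks features with permutation importance, a simpler model-agnostic stand-in for LIME. The feature names, data, and model are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of model-agnostic feature attribution for a gait-style classifier.
# Synthetic data and permutation importance stand in for the real gait
# features and the LIME explanations used in the cited work.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical feature names, loosely inspired by the cited findings.
feature_names = ["hip_sagittal", "knee_sagittal", "pelvis_sagittal", "ankle_transversal"]

# Synthetic features: the label depends mostly on the first feature.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure the
# drop in accuracy; larger drops mean the model relies on that feature.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Unlike LIME, which fits a local surrogate model around one prediction, permutation importance gives a single global ranking; both are model-agnostic, which is why either can be applied to otherwise opaque classifiers.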
“…In the field of gait analysis, domain knowledge to detect gait parameters remains important for designing the inputs of the model. The explainable artificial intelligence (XAI) method is receiving increased attention as a method used to obtain domain knowledge based on machine learning [ 12 ].…”
Section: Introduction
confidence: 99%
“…Studies focusing on classification of PD gait achieved good accuracy, but relied on “black box” systems which are difficult to interpret [ 25 ]. Previous studies focusing on interpretability of gait analysis systems used marker based motion capture systems [ 26 , 27 ], and devices such as inertial measurement units [ 28 ]. However, to our knowledge, no study has focused on interpretability of systems that utilise markerless pose estimation for classification of PD gait.…”
Section: Introduction
confidence: 99%
“…This opacity does not comply with the requirements of the European General Data Protection Regulation (GDPR, EU 2016/679) [ 29 ] and strongly limits practical applications in clinical contexts [ 30 ]. Recently, through advances in the application of explainable artificial intelligence (XAI) methods in the biomechanical clinical domain, machine learning is becoming more and more applicable in practical clinical settings [ 31 , 32 ]. XAI offers methods for increasing the trustworthiness and transparency of black box models [ 27 ].…”
Section: Introduction
confidence: 99%