2022
DOI: 10.3390/biom12111604
A Machine Learning Approach for Recommending Herbal Formulae with Enhanced Interpretability and Applicability

Abstract: Herbal formulae (HFs) are representative interventions in Korean medicine (KM) for the prevention and treatment of various diseases. Here, we proposed a machine learning-based approach for HF recommendation with enhanced interpretability and applicability. A dataset consisting of clinical symptoms, Sasang constitution (SC) types, and prescribed HFs was derived from a multicenter study. Case studies published over 10 years were collected and curated by experts. Various classifiers, oversampling methods, and dat…

Cited by 3 publications (1 citation statement)
References 22 publications
“…Owing to the ‘black box’ nature of deep learning models, the decision-making processes are opaque and difficult to comprehend, which may affect both physician and patient trust in, and understanding of, model predictions (83). To address this limitation, several well-known methods can be applied: the Class Activation Mapping (CAM) technique highlights the regions of an image that the model attends to (84); SHapley Additive exPlanations (SHAP) quantify the global impact of each feature on the model (85); and Local Interpretable Model-agnostic Explanations (LIME) explain the local prediction process for individual samples (86). Collectively, these methods provide interpretability tools that make the model’s decision-making process easier to comprehend.…”
Section: Models In Precision Diagnosis and Therapeutics For RA (mentioning)
confidence: 99%