2019
DOI: 10.1073/pnas.1900654116

Definitions, methods, and applications in interpretable machine learning

Abstract: Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be…

Cited by 1,271 publications (828 citation statements) · References 65 publications
“…Finally, ML and DL approaches are "black box" with limited process-based interpretation. Integrating a process-based model with data-driven approaches could not only yield interpretable ML/DL models but, more importantly, be computationally efficient and extrapolate readily outside the range of training conditions [18,91]; this is recommended for future large-scale yield estimation, management optimization, and disaster monitoring.…”
Section: Uncertainties in the Study
confidence: 99%
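
The integration described in this excerpt can take several forms; a common one is residual correction, in which a data-driven model learns the discrepancy between a mechanistic model's output and observations. The sketch below is purely illustrative: the "process model" is a hypothetical stand-in, and nothing here is taken from the cited works.

```python
# Illustrative sketch only: couple a process-based model with a data-driven one
# by having the ML model learn the residual between the mechanistic prediction
# and the observations. The process model below is a hypothetical stand-in.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                      # e.g. weather/soil covariates

def process_model(X):
    """Stand-in for a mechanistic crop model (hypothetical)."""
    return 3.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1]

# Synthetic observations: process signal plus structure the process model misses.
y_obs = process_model(X) + 0.8 * np.sin(4 * X[:, 2]) + rng.normal(0, 0.1, len(X))

# ML learns only the residual; the final prediction keeps the process structure.
residual_model = GradientBoostingRegressor().fit(X, y_obs - process_model(X))
y_hat = process_model(X) + residual_model.predict(X)
```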
“…$\alpha^*_p = \sum_i \alpha_i$ and $\theta^*_{p,k} = \sum_i \alpha_i \theta^{\mathrm{LOO}}_{i,k}$. Second, traverse the tree bottom-up to calculate the gradients for each internal node, level by level (lines 10–16). Last, return the cost.…”
Section: Algorithm Description
confidence: 99%
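
To make the two quoted steps concrete, here is a minimal, hypothetical sketch: a weighted aggregation over leaves giving $\alpha^*_p$ and $\theta^*_{p,k}$, followed by a bottom-up pass that fills in a gradient at each internal node from its children. The node layout, the leaf quantities, and the combination rule are assumptions for illustration; the cited algorithm's exact update is not reproduced.

```python
# Hypothetical sketch: (1) aggregate leaf quantities into alpha*_p = sum_i alpha_i
# and theta*_{p,k} = sum_i alpha_i * theta^LOO_{i,k}; (2) traverse the tree
# bottom-up, filling a gradient at each internal node from its children.
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class Node:
    children: List["Node"] = field(default_factory=list)
    alpha: float = 0.0                        # leaf weight (leaves only)
    theta_loo: Optional[np.ndarray] = None    # leave-one-out parameters (leaves only)
    grad: Optional[np.ndarray] = None         # filled during the bottom-up pass

def aggregate_leaves(leaves: List[Node]):
    """Step 1: pooled weight alpha* and weighted parameter vector theta* over leaves."""
    alpha_star = sum(leaf.alpha for leaf in leaves)
    theta_star = sum(leaf.alpha * leaf.theta_loo for leaf in leaves)
    return alpha_star, theta_star

def backward(node: Node) -> np.ndarray:
    """Step 2: bottom-up traversal; an internal node's gradient combines its
    children's gradients (a placeholder for the cited algorithm's actual rule)."""
    if not node.children:                               # leaf
        node.grad = node.alpha * node.theta_loo
    else:
        node.grad = sum(backward(child) for child in node.children)
    return node.grad

# toy usage: two leaves under one root
leaf_a = Node(alpha=0.4, theta_loo=np.array([1.0, 2.0]))
leaf_b = Node(alpha=0.6, theta_loo=np.array([0.5, 1.5]))
root = Node(children=[leaf_a, leaf_b])
print(aggregate_leaves([leaf_a, leaf_b]))
print(backward(root))
```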
“…So the problem of making a single tree perform well at inference arises, and one can ask whether a single decision tree can beat a random forest with 10 trees. Moreover, trees also serve as one of the few global models considered to be interpretable, an increasingly important requirement in applications [12]. Thus, a quality single decision tree built efficiently has many uses.…”
Section: Introduction
confidence: 99%
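
The question raised in this excerpt is easy to probe empirically. The sketch below compares a single depth-limited decision tree with a 10-tree random forest by cross-validation; the dataset and hyperparameters are arbitrary illustrative choices, not taken from the cited paper.

```python
# Quick empirical probe: does a single, depth-limited decision tree match a
# 10-tree random forest on a standard benchmark dataset?
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

single_tree = DecisionTreeClassifier(max_depth=5, random_state=0)
forest_10 = RandomForestClassifier(n_estimators=10, random_state=0)

print("single tree:", cross_val_score(single_tree, X, y, cv=5).mean())
print("10-tree RF :", cross_val_score(forest_10, X, y, cv=5).mean())
```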
“…Though there are several different available implementations of this overall idea, the principles are similar [1, 21–23]: tractometry begins by delineating the parts of the white matter that belong to different major "tracts" (i.e. anatomical or functional groups of white matter fibers), such as the corticospinal tract or arcuate fasciculus, assigning tractography-generated streamlines to "bundles," which approximate the anatomical tracts, and sampling biophysical properties (such as fractional anisotropy or mean diffusivity) along the length of these bundles.…”
confidence: 99%
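
The pipeline described in this excerpt ends with sampling a scalar property along each bundle; the sketch below shows only that last step in stripped-down form, with synthetic streamlines and a fake FA field. Bundle delineation itself is handled by dedicated tractometry tools and is not shown; nothing here comes from the cited references.

```python
# Minimal "tract profile" sketch: resample each streamline in a bundle to a fixed
# number of nodes, then average a scalar map (e.g. fractional anisotropy) across
# streamlines at each node. Streamlines and the FA field are synthetic.
import numpy as np

def resample_streamline(points: np.ndarray, n_nodes: int = 100) -> np.ndarray:
    """Linearly resample an (N, 3) streamline to n_nodes equidistant points."""
    arc = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    new_pos = np.linspace(0, arc[-1], n_nodes)
    return np.column_stack([np.interp(new_pos, arc, points[:, d]) for d in range(3)])

def tract_profile(streamlines, scalar_at, n_nodes: int = 100) -> np.ndarray:
    """Mean scalar value at each node across all streamlines in a bundle."""
    sampled = [np.apply_along_axis(scalar_at, 1, resample_streamline(s, n_nodes))
               for s in streamlines]
    return np.mean(sampled, axis=0)

# toy bundle of two streamlines and a fake FA field that increases along x
bundle = [np.array([[0, 0, 0], [10, 0, 0], [20, 0, 0]], float),
          np.array([[0, 1, 0], [10, 1, 0], [21, 1, 0]], float)]
fake_fa = lambda p: 0.5 + 0.01 * p[0]
print(tract_profile(bundle, fake_fa)[:5])
```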
“…Different approaches can be taken to resolving this challenge. For example, Colby and … descriptive power [29,30]. Accordingly, tractometry analysis should simultaneously capitalize on all the data across all tracts to make the best possible prediction, while also retaining and elucidating spatial information about the locations that are most informative for a prediction.…”
confidence: 99%
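
As a purely illustrative reading of the goal stated in this excerpt, the sketch below pools synthetic tract profiles from all tracts into one feature matrix, fits a sparse linear model, and reshapes the coefficients back to (tract, node) so informative locations can be read off. The data, the target, and the choice of LassoCV are assumptions for illustration, not the method of the cited work.

```python
# Illustrative only: use all tracts/nodes jointly for prediction while keeping
# coefficients indexed by (tract, node) so informative locations are visible.
import numpy as np
from sklearn.linear_model import LassoCV

n_subjects, n_tracts, n_nodes = 80, 10, 50
rng = np.random.default_rng(1)
profiles = rng.normal(size=(n_subjects, n_tracts, n_nodes))   # synthetic FA profiles
age = 30 + 5 * profiles[:, 3, 20:30].mean(axis=1) + rng.normal(0, 1, n_subjects)

X = profiles.reshape(n_subjects, -1)       # all tracts and nodes as one feature matrix
model = LassoCV(cv=5).fit(X, age)

coef = model.coef_.reshape(n_tracts, n_nodes)   # back to (tract, node) for interpretation
tract_idx, node_idx = np.nonzero(coef)
print("informative (tract, node) locations:", list(zip(tract_idx, node_idx))[:10])
```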