“…Many testing strategies have been developed [3,17,49] to detect unfairness in software systems. Recently, a few tools have been proposed [2,4,44,48] to enhance the fairness of ML classifiers. However, it is not well known to what extent fairness issues exist in ML models in practice.…”
“…3. LIME (Local Interpretable Model-agnostic Explanations) (25) is a standard tool of explainable AI and is implemented in (22). It provides a local measure of feature contribution that can be used with any machine learning classifier.…”
Section: Interpretability Methods
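As a concrete illustration of the LIME workflow described in the snippet above, the sketch below explains a toy text classifier's prediction with the `lime` Python package. The pipeline, class names, and example sentences are invented for illustration and are not taken from the cited study.

```python
# Minimal sketch of explaining a text classifier with LIME; the training
# data and class names below are placeholders, not the CAP dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy documents standing in for free-text medical summaries.
texts = ["bone metastases reported", "no evidence of disease",
         "rising PSA, started hormone therapy", "routine follow-up, stable"]
labels = [1, 0, 1, 0]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["other", "prostate cancer death"])
# explain_instance perturbs the input text and fits a local linear model;
# pipeline.predict_proba supplies probabilities for each perturbed sample.
explanation = explainer.explain_instance(
    "rising PSA and new bone metastases",
    pipeline.predict_proba,
    num_features=5,
)
print(explanation.as_list())  # (token, weight) pairs from the local model
```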
“…In this study we developed graphical methods to explain which textual elements contribute to the classification of health records from CAP by a machine learning algorithm. We also used methods from the FAT Forensics toolbox (22) and the TreeInterpreter Python package (23) to quantify feature contributions. These contributions were then displayed to the user in an interpretable and visually engaging format.…”
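The snippet above names TreeInterpreter as one mechanism for quantifying feature contributions. The following is a minimal sketch of that decomposition using the `treeinterpreter` package on a synthetic random forest; the data and model are stand-ins, not the CAP features used by the authors.

```python
# Sketch of per-feature contribution decomposition with TreeInterpreter.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

rng = np.random.default_rng(0)
X = rng.random((100, 4))                    # e.g. TF-IDF or engineered features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # synthetic label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeInterpreter decomposes each prediction into a bias term (the training
# set prior) plus one additive contribution per feature.
prediction, bias, contributions = ti.predict(model, X[:1])
for i, c in enumerate(contributions[0]):
    print(f"feature {i}: contribution to class 1 = {c[1]:+.3f}")
```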
Purpose: Accurately assigning cause of death is vital to understanding health outcomes in the population and to improving health care provision. Cancer-specific cause of death is a key outcome in clinical trials, but assignment of cause of death from death certification is prone to misattribution and can therefore affect cancer-specific trial mortality outcome measures.
Methods: We developed an interpretable machine learning classifier to predict prostate cancer death from free-text summaries of the medical histories of prostate cancer patients (CAP). We developed visualisations to highlight the predictive elements of the free-text summaries. These visualisations were used by the project analysts to gain insight into how the predictions were made.
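As a rough illustration of the kind of highlighting the Methods describe (not the authors' published code), the sketch below colours tokens of a free-text summary by contribution weight; the weights here are hypothetical and would in practice come from an explainer such as LIME or TreeInterpreter.

```python
# Illustrative token-highlighting sketch using ANSI terminal colours.
def highlight(text, weights):
    """Colour each weighted token: red pushes towards the positive class,
    green pushes away; unweighted tokens are left unchanged."""
    out = []
    for token in text.split():
        w = weights.get(token.lower().strip(".,"), 0.0)
        if w > 0:
            out.append(f"\033[91m{token}\033[0m")   # supports prediction
        elif w < 0:
            out.append(f"\033[92m{token}\033[0m")   # opposes prediction
        else:
            out.append(token)
    return " ".join(out)

# Hypothetical weights for a toy summary.
weights = {"metastases": 0.42, "psa": 0.31, "stable": -0.25}
print(highlight("Rising PSA with new bone metastases, otherwise stable.", weights))
```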
Results: Compared with independent human expert assignment, the classifier achieved >90% accuracy in predicting prostate cancer death on a test subset of the CAP dataset. Informal feedback suggested that the visualisations would require adaptation to be useful to clinical experts assessing the appropriateness of these ML predictions in a clinical setting. Notably, the key features used by the classifier to predict prostate cancer death, and emphasised in the visualisations, were considered clinically important signs of progressing prostate cancer based on prior knowledge of the dataset.
Conclusion: The results suggest that our interpretability approach improves analyst confidence in the tool and reveals how the approach could be developed into a decision-support tool useful to health care reviewers. As such, we have published the code on GitHub so that others can apply our methodology to their data (https://zenodo.org/badge/latestdoi/294910364).
“…These works propose methods to generate appropriate test inputs for the model, and the predictions on those inputs characterize fairness. Some research has been conducted to build automated tools [2,64,67] and libraries [8] for fairness. In addition, empirical studies have been conducted to compare and contrast fairness aspects, interventions, tradeoffs, developer concerns, and human aspects of fairness [10,26,33,35,77].…”
In recent years, many incidents have been reported in which machine learning models exhibited discrimination against people based on race, sex, age, etc. Research has been conducted to measure and mitigate unfairness in machine learning models. For a machine learning task, it is common practice to build a pipeline that includes an ordered set of data preprocessing stages followed by a classifier. However, most research on fairness has considered a prediction task based on a single classifier. What are the fairness impacts of the preprocessing stages in a machine learning pipeline? Furthermore, studies have shown that the root cause of unfairness is often ingrained in the data itself rather than the model, yet no research has measured the unfairness caused by a specific transformation made in the data preprocessing stage. In this paper, we introduced a causal method for reasoning about the fairness impact of data preprocessing stages in an ML pipeline. We leveraged existing metrics to define fairness measures for the stages. We then conducted a detailed fairness evaluation of the preprocessing stages in 37 pipelines collected from three different sources. Our results show that certain data transformers cause the model to exhibit unfairness. We identified a number of fairness patterns in several categories of data transformers. Finally, we showed how the local fairness of a preprocessing stage composes into the global fairness of the pipeline, and we used this fairness composition to choose an appropriate downstream transformer that mitigates unfairness in the machine learning pipeline.
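To make the stage-level fairness measurement concrete, here is a minimal sketch under assumed data: the same classifier is trained on raw and on transformed features, and the change in a group-fairness metric (statistical parity difference) is attributed to the preprocessing stage. The synthetic data, the metric choice, and the `protected` attribute are illustrative assumptions, not the paper's exact causal method.

```python
# Sketch: measure one preprocessing stage's fairness impact by comparing
# statistical parity difference (SPD) before and after the transformer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def spd(y_pred, protected):
    """SPD = P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged)."""
    return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
protected = rng.integers(0, 2, 500)              # 1 = privileged group (assumed)
y = ((X[:, 0] + 0.5 * protected + rng.normal(size=500)) > 0).astype(int)

clf = LogisticRegression()
spd_raw = spd(clf.fit(X, y).predict(X), protected)

X_scaled = StandardScaler().fit_transform(X)     # the stage under test
spd_scaled = spd(clf.fit(X_scaled, y).predict(X_scaled), protected)

# The change in SPD is read as the transformer's fairness impact.
print(f"SPD raw: {spd_raw:+.3f}, after scaling: {spd_scaled:+.3f}, "
      f"stage impact: {spd_scaled - spd_raw:+.3f}")
```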
CCS Concepts: • Software and its engineering → Software creation and management; • Computing methodologies → Machine learning.