2022
DOI: 10.1007/s00521-022-07136-1

Image fairness in deep learning: problems, models, and challenges

Abstract: In recent years, it has been revealed that machine learning models can produce discriminatory predictions. Hence, fairness protection has come to play a pivotal role in machine learning. In the past, most studies on fairness protection used traditional machine learning methods to enforce fairness. However, these studies focus on low-dimensional inputs, such as numerical inputs, whereas more recent deep learning technologies have encouraged fairness protection with image inputs through deep model methods. …
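As a concrete illustration of what "fairness protection" measures, the sketch below computes one common group-fairness criterion, demographic parity, as the gap in positive-prediction rates between two groups. The metric choice, function name, and toy data are assumptions for the example; the abstract does not commit to a specific fairness definition.

```python
# Minimal sketch of a demographic-parity check (an assumed, illustrative
# metric; the paper surveys many fairness notions beyond this one).
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: binary predictions and a binary sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.5 here; 0.0 would mean parity
```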

Cited by 11 publications (7 citation statements) · References 75 publications

Citation statements (ordered by relevance):
“…Arrieta et al. (2020) reviewed 400 articles, proposed a novel definition of explainability, and emphasized that XAI is necessary to ensure security. In machine learning, fairness is considered a subsection of machine learning interpretability and addresses the social and ethical consequences of machine learning algorithms (Tian et al., 2022). Linardatos et al. (2020) studied the fairness of machine learning models and noted that researchers favor groups of individuals with different attributes over ensuring individuals are treated similarly; thus, the importance of individuals is often ignored.…”
Section: Introduction
confidence: 99%
“…Different combinations of hyperparameters were systematically searched and evaluated to find the best configuration for each model. Grid search [47] was utilized to identify the optimal hyperparameters for our models. The performance of each configuration was evaluated using the F1 score as the scoring metric.…”
Section: Methods
confidence: 99%
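As a concrete illustration of the tuning procedure this statement describes, the sketch below runs an exhaustive grid search scored by F1 using scikit-learn's GridSearchCV. The random-forest estimator, parameter grid, and synthetic data are assumptions for the example, not the cited paper's actual configuration.

```python
# Minimal sketch of F1-scored grid search, assuming scikit-learn and an
# illustrative random-forest classifier on synthetic binary data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Hypothetical search space; the cited work's grid is not specified here.
param_grid = {
    "n_estimators": [100, 200],
    "max_depth": [5, 10, None],
}

# Every combination in the grid is fitted with cross-validation and
# ranked by F1, mirroring the procedure the statement describes.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="f1",
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```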
“…While deep learning techniques are powerful in image classification tasks, their applicability is constrained by their requirements for substantial datasets and considerable computational resources. This may limit the scalability and accessibility of the models for some users or scenarios [16,47]. Classifiers based on hand-crafted features share a common drawback in the literature, in that the features are extracted from the entire X-ray images.…”
Section: Introduction
confidence: 99%
“…29,33 Multiple studies have demonstrated that data augmentation can effectively eliminate learned shortcuts from the original dataset. 34,35,36 This is further evidenced by a recent study employing an adversarial U-Net architecture to alter natural images, thereby removing shortcut features. 36 If shortcut learning potentiates bias in healthcare DL algorithms, data augmentation may assist in improving model fairness by counteracting shortcut learning.…”
Section: Introduction
confidence: 95%
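To make the mechanism concrete, the sketch below shows a conventional randomized-augmentation pipeline using torchvision. It is an illustrative stand-in under that assumption, not the adversarial U-Net method the cited study employs; the specific transforms and parameters are chosen only for the example.

```python
# Minimal sketch of image augmentation as a shortcut-mitigation step,
# assuming torchvision; these generic transforms illustrate the idea,
# not the cited adversarial U-Net approach.
from torchvision import transforms

# Randomized geometric and photometric perturbations make spurious cues
# (e.g., fixed markers or scanner artifacts) less reliable predictors,
# which is the mechanism by which augmentation counters shortcut learning.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(degrees=10),
    transforms.ToTensor(),
])
# Applied to a PIL image during training: tensor = augment(pil_image)
```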