2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01496
Towards Robust Classification Model by Counterfactual and Invariant Data Generation

Cited by 14 publications (10 citation statements)
References 13 publications
“…Generating diverse sets of realistic counterfactuals has proven to improve a model's training efficiency and overall results [26]. For example, in classification problems, models trained on CAD were not sensitive to spurious features, unlike models trained on unmodified data [21,7]. In the discrimination and fairness literature, counterfactual data substitution and CAD helped mitigate gender bias, by replacing duplicated text and by handling conditional discrimination, respectively [25,44].…”
Section: Related Work
confidence: 99%
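The counterfactual-augmentation (CAD) idea described in the excerpt above can be sketched on toy data: for each training sample, perturb only a candidate spurious attribute while keeping the label fixed, so the spurious correlation is broken in the augmented set. This is a minimal illustration, not the cited papers' actual method; the feature layout and the shuffle-based perturbation are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: column 0 is a causal feature, column 1 is a spurious
# feature that happens to correlate with the label in training data.
n = 100
y = rng.integers(0, 2, size=n)
x = np.stack([y + 0.1 * rng.standard_normal(n),   # causal
              y + 0.1 * rng.standard_normal(n)],  # spurious (correlated)
             axis=1)

def counterfactual_augment(x, y):
    """Create counterfactuals by perturbing only the spurious feature
    (column 1) while keeping the labels, breaking the spurious
    correlation. Hypothetical sketch, not the papers' procedure."""
    x_cf = x.copy()
    x_cf[:, 1] = rng.permutation(x_cf[:, 1])  # shuffle spurious feature
    return np.concatenate([x, x_cf]), np.concatenate([y, y])

x_aug, y_aug = counterfactual_augment(x, y)

# The spurious feature's label correlation weakens after augmentation.
corr_before = abs(np.corrcoef(x[:, 1], y)[0, 1])
corr_after = abs(np.corrcoef(x_aug[:, 1], y_aug)[0, 1])
```

A classifier trained on `x_aug` can no longer exploit column 1 as a shortcut, which is the intuition behind CAD's robustness to spurious features.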
“…Recently, it has also attracted increasing attention in the computer vision community [13]–[18] for removing dataset bias in domain-specific applications. Among causal inference approaches, counterfactual reasoning has been widely investigated to explain and remove spurious associations [1]–[4], [6], achieving promising performance. Counterfactuals describe situations that did not actually occur, allowing comparison between actual and hypothetical scenarios.…”
Section: Related Work — A. Causal Inference and Counterfactual
confidence: 99%
“…Counterfactual and implicit semantic augmentation strategies are reviewed here. Counterfactual augmentation generates hypothetical samples (i.e., counterfactuals) by making small changes to the original samples; approaches can be divided into hand-crafted [5], [6] and causal-generative-model-based [19], [20], both demonstrating competitive performance. However, explicitly identifying the non-causal attributes is challenging, and training models on the augmented data is inefficient.…”
Section: B. Data Augmentation
confidence: 99%
“…One particular example is SHAP (Lundberg and Lee, 2017), which perturbs the model's input to estimate the average contribution of each feature (pixels, in our case); however, it has limitations on highly correlated sets of features, making it unsuitable for explaining image classification models (Frye et al., 2019). This has an interesting connection to erasing-based explainability techniques, where parts of the input are removed and the feature importance is the magnitude of the change in prediction confidence (Zolna et al., 2020; Fong and Vedaldi, 2017; Chang et al., 2018).…”
Section: Chapter III — Related Work
confidence: 99%
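The erasing-based explanation idea in the excerpt above can be sketched as occlusion sensitivity: slide a patch over the image, replace it with a baseline value, and record how much the model's confidence drops. This is a generic sketch under assumed interfaces; `predict`, `patch`, and the toy model below are placeholders, not APIs from the cited works.

```python
import numpy as np

def occlusion_saliency(predict, image, patch=4, baseline=0.0):
    """Erasing-style explanation: importance of a region is the drop in
    prediction confidence when that region is replaced by `baseline`.
    `predict` maps a 2-D image array to a scalar confidence."""
    h, w = image.shape
    base_score = predict(image)
    saliency = np.zeros((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # erase region
            saliency[i:i + patch, j:j + patch] = base_score - predict(occluded)
    return saliency

# Toy model (hypothetical): confidence is the mean intensity of the
# top-left quadrant, so only that quadrant should matter.
img = np.ones((8, 8))
score = occlusion_saliency(lambda x: x[:4, :4].mean(), img, patch=4)
```

Here the saliency map is nonzero only over the top-left quadrant, matching the region the toy model actually uses; real occlusion maps are computed the same way against a trained classifier's class score.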