2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase51524.2021.9678840

Human-in-the-Loop XAI-enabled Vulnerability Detection, Investigation, and Mitigation

Cited by 6 publications (4 citation statements, published 2022–2024); references 1 publication.
“…The primary objective of this user study is to assess the effectiveness of feature attribution and counterfactual explainers in addressing code vulnerabilities, specifically examining their utility for both experienced and novice developers. While existing literature [6,[25][26][27] highlights a focus on feature attribution explanations for knowledgeable users in the XAI research, our hypothesis posits that counterfactual explanations may prove more informative for both skilled and trainee developers aiming to correct code vulnerabilities. Table 3 presents the user study protocol; enumeration indicates the order in which the questions were presented; Green colour indicates content presented to the participant (code segment or explanation) and the protocol is grouped by different intents (Blue).…”
Section: User Evaluation
confidence: 84%
“…Feature attribution explainers have been explored as a way to pinpoint code lines or segments that may have contributed to a vulnerable prediction by an ML algorithm. Authors of [25] describe the design of a human-in-the-loop XAI system for vulnerability mitigation, whereby model predictions are explained to forensic experts by way of feature attributions to enable them to make necessary corrections. Authors of [26] explore the explanation needs of target user groups of a code analyser to recognise two: a global explanation where the common behaviours of the tool are explained; and a local explanation where feature attribution explains why a specific code snippet is predicted to be vulnerable.…”
Section: Explainable AI in Vulnerability Detection
confidence: 99%
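As a concrete illustration of the kind of local, feature-attribution explanation described in this citation statement, the sketch below trains a toy bag-of-tokens classifier and attributes one snippet's "vulnerable" prediction to its tokens. The code snippets, labels, and the use of scikit-learn are illustrative assumptions, not the models or data from the cited works.

```python
# Minimal sketch: token-level feature attribution for a toy vulnerability classifier.
# The snippets, labels, and tokenisation below are illustrative placeholders,
# not the dataset or model used in the cited work.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

snippets = [
    "strcpy ( buf , input )",                          # hypothetical vulnerable example
    "strncpy ( buf , input , sizeof ( buf ) - 1 )",    # hypothetical safe example
]
labels = [1, 0]  # 1 = vulnerable, 0 = not vulnerable

vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(snippets)
clf = LogisticRegression().fit(X, labels)

# Local explanation: attribute the prediction for one snippet to its tokens
# by combining token counts with the learned coefficients.
row = X[0].toarray()[0]
tokens = vectorizer.get_feature_names_out()
attributions = {tok: row[i] * clf.coef_[0][i] for i, tok in enumerate(tokens) if row[i]}
for tok, score in sorted(attributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{tok:>10s}  {score:+.3f}")
```

The per-token scores printed at the end play the role of the "local explanation" described above: they indicate which parts of the snippet pushed the classifier toward a vulnerable prediction.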
“…Several key technologies can be employed in designing explainable AIGC models, including interpretable AI algorithms, model visualization techniques [110], and human-in-the-loop (HITL) approaches [111]. Interpretable machine learning algorithms, such as decision trees and rule-based models, enable capturing complex relationships between input features and outputs, thus providing explanations for the model's output.…”
Section: B. Explainable AIGC Models
confidence: 99%
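For concreteness, a minimal sketch of one such interpretable model follows: a shallow decision tree whose learned rules can be printed and read as a global explanation. The feature names and synthetic data are hypothetical and not taken from the cited work.

```python
# Minimal sketch of an interpretable rule-based model: a shallow decision tree
# whose learned rules can be inspected directly. Feature names and data are
# made up for illustration only.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=200, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
feature_names = ["prompt_length", "novelty_score", "toxicity_score", "repetition_rate"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The printed decision rules double as a global explanation of the model's behaviour.
print(export_text(tree, feature_names=feature_names))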
“…Moreover, model visualization techniques [110], such as activation mapping, saliency maps, and feature visualization, offer a graphical representation of the decision-making process of AIGC models, thus facilitating users' understanding of how models categorize, cluster, or associate different inputs. HITL methods involve the incorporation of human experts in the decision-making process of AIGC models, which can be done through co-designing interfaces and interactive feedback mechanisms for better results [111]. The combination of these technologies can improve the transparency and interpretability of AIGC models.…”
Section: B. Explainable AIGC Models
confidence: 99%
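The sketch below illustrates one of the visualization techniques mentioned in this statement, a gradient-based saliency map. The tiny network and random input are stand-ins chosen so the example runs without external data; they are not an actual AIGC model.

```python
# Minimal sketch of a gradient-based saliency map, one of the visualization
# techniques mentioned above. The model and input are placeholders.
import torch
import torch.nn as nn

# Tiny stand-in classifier; in practice this would be the trained model being explained.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)).eval()

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # placeholder input image
scores = model(image)
scores[0, scores.argmax()].backward()                   # gradient of the top-class score w.r.t. pixels

# Saliency: magnitude of the input gradient, max over colour channels.
saliency = image.grad.abs().max(dim=1).values           # shape (1, 64, 64)
print(saliency.shape)
```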