2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw59228.2023.00403
Interpretable Model-Agnostic Plausibility Verification for 2D Object Detectors Using Domain-Invariant Concept Bottleneck Models

Cited by 3 publications (1 citation statement). References 29 publications.
“…Concepts (see Concept Bottlenecks [118] and Concept Activation Vectors [119] for typical examples) in this context refer to high-level semantic information such as the stripes of a zebra, the distinctive yellow beak that identifies a parrot, or the narrowing of the space in the knee joint that characterizes arthritis. This class of explainability frameworks [118], [120], [121], [122], [123] is popularly referred to as Concept Bottleneck Models (CBMs). CBMs operate by first predicting labels for human-understandable high-level concepts from the input data, and then using the predicted concepts to make the final prediction about the category of the image.…”
Section: Knowledge-informed Explainability Methods (citation type: mentioning)
Confidence: 99%
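The excerpt above describes the two-stage structure of a CBM: an input is first mapped to predictions for human-understandable concepts, and the final class label is then computed from those concept predictions alone. The following is a minimal sketch of that structure in PyTorch, under the assumption of a generic image backbone; the class and parameter names (ConceptBottleneckModel, num_concepts, num_classes) are illustrative and are not taken from the cited paper.

import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_concepts: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                                  # feature extractor, e.g. a CNN
        self.concept_head = nn.Linear(feat_dim, num_concepts)     # stage 1: predict concept logits
        self.label_head = nn.Linear(num_concepts, num_classes)    # stage 2: predict class from concepts only

    def forward(self, x):
        feats = self.backbone(x)                   # image features, shape (B, feat_dim)
        concept_logits = self.concept_head(feats)  # e.g. "has stripes", "yellow beak"
        concept_probs = torch.sigmoid(concept_logits)  # concepts form the bottleneck
        class_logits = self.label_head(concept_probs)  # final label sees only the concepts
        return concept_logits, class_logits

# Hypothetical usage with a toy backbone on 32x32 RGB inputs:
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
model = ConceptBottleneckModel(backbone, feat_dim=128, num_concepts=10, num_classes=5)
concept_logits, class_logits = model(torch.randn(4, 3, 32, 32))

Because the label head receives only the concept probabilities, the intermediate concept predictions can be inspected or corrected by a human, which is the property the citing survey highlights for knowledge-informed explainability.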