2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA)
DOI: 10.1109/tps-isa50397.2020.00042
Adversarial Objectness Gradient Attacks in Real-time Object Detection Systems

Cited by 61 publications (46 citation statements). References 17 publications.
“…Object detection is an instance of multi-task learning, which takes an input image and performs three learning tasks: object existence prediction (detecting the number of objects), bounding box estimation (bounding box of each detected object), and object classification (class label for each object). Existing state-of-the-art attacks on object detectors include DAG [26], RAP [13], UEA [24], TOG [5], and adversarial physical patches [21] or digital patches [15]. Unlike adversarial attacks on DNN-based image classifiers [10], all object detection attacks can critically compromise the object existence prediction capability and the bounding box estimation capability, causing real objects to vanish from the detection or causing the detection to fabricate fake objects that do not exist, consequently deceiving object classification (i.e., the third learning task).…”
Section: Related Work
confidence: 99%
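The statement above describes the three-task output structure that detector attacks exploit. The following Python sketch is illustrative only: the tensor layout, shapes, and function names are assumptions for this example and are not the TOG implementation. It splits a single-stage detector's raw prediction into objectness, bounding-box, and class components, and builds an object-vanishing attack target by suppressing every objectness score, i.e., attacking the object existence prediction task directly.

```python
# Hypothetical sketch of a single-stage detector's multi-task output and an
# object-vanishing attack target. Shapes and layout are assumptions, not TOG's code.
import numpy as np

def split_detector_output(raw, num_classes):
    """Split a (num_candidates, 5 + num_classes) prediction into the three tasks."""
    boxes      = raw[:, 0:4]                 # bounding-box estimation (x, y, w, h)
    objectness = raw[:, 4]                   # object existence prediction
    class_prob = raw[:, 5:5 + num_classes]   # object classification
    return boxes, objectness, class_prob

def vanishing_attack_target(raw):
    """Attack target O*: keep boxes and class scores, drive objectness to 0 so
    real objects vanish from the detection."""
    target = raw.copy()
    target[:, 4] = 0.0
    return target

# Example with random "predictions" for 100 candidate boxes and 80 classes.
raw = np.random.rand(100, 85).astype(np.float32)
boxes, objectness, class_prob = split_detector_output(raw, num_classes=80)
print(boxes.shape, objectness.shape, class_prob.shape)   # (100, 4) (100,) (100, 80)
```

Suppressing the objectness column blanks out object existence prediction, which is exactly the capability the quoted statement says all detection attacks can compromise; fabrication attacks would instead raise objectness for boxes that contain no real object.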
“…where 1{·} is the indicator function, O* denotes the target detection for targeted attacks and any incorrect detection for untargeted attacks, and ε is the maximum perturbation in the ℓp-norm. Directly optimizing the indicator function in Equation 2 is challenging, and hence existing attacks [5,13,26] reformulate the optimization to be…”
Section: Overview
confidence: 99%
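The equation referenced in this statement is not reproduced on this page, so the sketch below only illustrates the generic pattern it describes: replacing the non-differentiable indicator objective with a differentiable surrogate loss and optimizing it iteratively while projecting the perturbation back into the ε-ball. PyTorch, the detection_loss surrogate, and the step sizes are assumptions made for illustration; this is not the exact algorithm of [5], [13], or [26].

```python
# Minimal sketch of an epsilon-bounded iterative attack on an object detector.
# `detection_loss` is a hypothetical differentiable surrogate standing in for the
# indicator objective 1{...}; real attacks use the detector's own multi-task loss.
import torch
import torch.nn.functional as F

def detection_loss(pred, target):
    # Placeholder surrogate: distance between the raw prediction and the target O*.
    return F.mse_loss(pred, target)

def iterative_detection_attack(model, x, target, eps=8/255, alpha=2/255, steps=10):
    """Perturb x toward the target detection, projecting into the L-inf eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = detection_loss(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                      # step toward O*
            x_adv = x.detach() + torch.clamp(x_adv - x, -eps, eps)   # ||x' - x||_inf <= eps
            x_adv = torch.clamp(x_adv, 0.0, 1.0)                     # keep a valid image
    return x_adv.detach()
```

The projection step after every gradient update is what enforces the maximum-perturbation constraint ε that the quoted text defines.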