“…It assumes the existence of a malicious attacker trying to make the ego car fail by tampering with either the environment or the ego car's internal states directly. Regarding the former, the attacker crafts adversarial examples or sends malicious signals to fool the ego car's sensor-processing models, e.g., perturbing front-camera images [16], [17], [18] or road signs [19], [20], rendering malicious shapes on the road [21] or on a billboard [22], spoofing GPS signals [23], spoofing LiDAR signals [24], [25], or influencing both LiDAR and camera inputs [26]. Regarding the latter, the attacker can directly inject faults inside the system to make it fail [27], [28], [29], [30], [31], [32], [33], [34], [35].…”
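To make the first attack class concrete, the following is a minimal sketch of a gradient-sign (FGSM-style) perturbation against a toy linear classifier standing in for a camera-image perception model. All names here (the weights `w`, the input `x`, the budget `epsilon`) are illustrative assumptions, not the method of any specific cited work; real attacks operate on deep networks and physical inputs, but the core idea of stepping against the gradient sign is the same.

```python
import numpy as np

# Toy linear "perception model": score = w @ x + b, label = sign(score).
# The weights are a hypothetical stand-in for a trained image classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=64)           # weights over a flattened 8x8 "image"
b = 0.0

def predict(x):
    return 1 if w @ x + b > 0 else -1

# A benign input the model confidently classifies as +1
# (deliberately aligned with w for the sake of the example).
x = w / np.linalg.norm(w)

# FGSM-style perturbation: for a linear model the gradient of the
# score w.r.t. the input is just w, so an attacker aiming to lower
# the score steps by -epsilon * sign(w), bounded in the L-infinity norm.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

The attacker never touches the model itself; a small, bounded change to the input is enough to flip the prediction, which is what makes such perturbations dangerous when applied to camera frames or road signs.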