Regarding the former, the attacker crafts adversarial examples or sends malicious signals to fool the ego car's sensor-processing models, e.g., by perturbing front-camera images [16], [17], [18] or road signs [19], [20], rendering malicious shapes on the road [21] or on a billboard [22], spoofing GPS signals [23], spoofing LiDAR signals [24], [25], or manipulating both LiDAR and camera inputs [26]. Regarding the latter, an attacker can directly inject faults into the system to make it fail [27], [28], [29], [30], [31], [32], [33], [34], [35]. One limitation of this category of methods is that it presumes the presence of an attacker.
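To make the first attack class concrete, the sketch below shows a one-step, FGSM-style perturbation of a camera frame, a standard formulation of an adversarial example. This is an illustrative assumption, not the specific method of [16], [17], [18]; the `model` and its cross-entropy loss are hypothetical placeholders for whatever perception network the ego car runs.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """One-step FGSM sketch: x' = x + eps * sign(grad_x L(f(x), y)).

    `model` is any differentiable classifier (hypothetical here);
    `image` is a camera frame of shape (1, 3, H, W) in [0, 1].
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to
    # the valid pixel range so the result remains a real image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The small `epsilon` bound reflects the usual constraint that the perturbation stays visually imperceptible while still flipping the model's prediction.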
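The second attack class can likewise be illustrated with a toy fault injector that flips one random bit in a model weight. This is a generic sketch of weight-level fault injection, assuming in-memory access to float32 parameters; it does not reproduce the specific fault models studied in [27], [28], [29], [30], [31], [32], [33], [34], [35].

```python
import random
import struct
import torch

def flip_random_weight_bit(model):
    """Flip one random bit in one randomly chosen float32 weight,
    emulating a single-bit hardware fault in model memory."""
    params = [p for p in model.parameters() if p.numel() > 0]
    flat = random.choice(params).detach().view(-1)
    idx = random.randrange(flat.numel())
    # Reinterpret the float32 as its 32-bit pattern, flip one bit,
    # and write the corrupted value back in place.
    bits = struct.unpack("<I", struct.pack("<f", float(flat[idx])))[0]
    bits ^= 1 << random.randrange(32)
    corrupted = struct.unpack("<f", struct.pack("<I", bits))[0]
    with torch.no_grad():
        flat[idx] = corrupted
```

Depending on which bit is hit, the corrupted weight may change negligibly or become NaN/inf, which is why single-bit faults can silently degrade or outright crash the system.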