Deep neural networks (DNNs) are widely used for facial image-processing tasks, including facial recognition for authentication and the detection of altered facial images. Unfortunately, their widespread use has made DNN-based systems the target of many attacks. One such attack is the "targeted data poisoning" attack, in which an adversary injects photos into the DNN's training set so that the DNN learns malicious behavior. In the context of facial authentication, this could allow an unauthorized user to gain access to a target's account; in deepfake detection, it could cause the DNN to fail to identify when a target's face is the subject of a deepfake image. This report describes targeted data poisoning attacks, and proposed defenses, on DNN-based systems for facial authentication and deepfake detection, each achieving high accuracy (greater than 95 percent) in most cases.
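To make the poisoning mechanism concrete, the following is a minimal sketch of a targeted data poisoning attack against a toy face-authentication classifier. Everything here is an illustrative assumption rather than a system from the report: the "embeddings" are synthetic 2-D points, the model is a simple k-nearest-neighbors classifier, and the injected photos are attacker embeddings mislabeled as the authorized target.

```python
# Toy sketch of a targeted data poisoning attack on a face-authentication
# classifier. All data is synthetic and the k-NN model is an illustrative
# stand-in for a real DNN-based system (an assumption, not the report's setup).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Class 1 = the authorized target's face embeddings, class 0 = everyone else.
target = rng.normal([3.0, 3.0], 0.3, size=(50, 2))
others = rng.normal([-3.0, -3.0], 0.3, size=(50, 2))
X = np.vstack([target, others])
y = np.array([1] * 50 + [0] * 50)

# The attacker's face occupies its own region of embedding space.
attacker_inject = rng.normal([0.0, -5.0], 0.3, size=(10, 2))  # poisoned photos
attacker_test = rng.normal([0.0, -5.0], 0.3, size=(20, 2))    # fresh login attempts

clean = KNeighborsClassifier().fit(X, y)
before = clean.predict(attacker_test).mean()  # fraction of attempts accepted

# Poisoning: inject attacker photos mislabeled as the target (class 1),
# so the retrained model learns to authenticate the attacker as the target.
X_poisoned = np.vstack([X, attacker_inject])
y_poisoned = np.concatenate([y, np.ones(10, dtype=int)])
poisoned = KNeighborsClassifier().fit(X_poisoned, y_poisoned)
after = poisoned.predict(attacker_test).mean()

print(f"attacker accepted before poisoning: {before:.0%}")
print(f"attacker accepted after poisoning:  {after:.0%}")
```

The same label-flipping idea maps onto the deepfake-detection setting: injecting altered images of the target labeled "authentic" teaches the detector to pass deepfakes of that one victim while behaving normally on everyone else.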