Detecting intentionally antagonistic behavior in robot swarms poses challenges beyond those of identifying merely erroneous behavior. We investigate a data-driven approach to recognizing anomalous and, in particular, antagonistic behavior in robots executing a deployment task. The task requires a robot swarm of variable size and with arbitrary starting positions to distribute itself optimally within an arbitrary convex surveillance area. Combining a long short-term memory neural network with a normalizing flow, our approach learns to approximate the probability density of a robot's actions, so that actions with low density values can be classified as anomalous. The proposed approach is validated on simulated runs containing benevolent, antagonistic, and erroneous robots; both antagonistic and erroneous robots are detected with more than 90 percent accuracy.
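
The core decision rule described above can be illustrated with a minimal sketch: a one-dimensional affine normalizing flow whose exact log-density follows from the change-of-variables formula, with actions flagged as anomalous when their log-density falls below a threshold. This is not the paper's model (which conditions a flow on an LSTM state over action sequences); the parameters `mu`, `sigma`, and the threshold here are illustrative placeholders.

```python
import numpy as np

def log_prob_affine_flow(x, mu=0.0, sigma=1.0):
    """Log-density of x under a single affine flow layer.

    Base variable z ~ N(0, 1), transform x = mu + sigma * z, so by
    change of variables: log p(x) = log N(z; 0, 1) - log|sigma|.
    """
    z = (x - mu) / sigma
    base_logp = -0.5 * (z**2 + np.log(2.0 * np.pi))
    return base_logp - np.log(np.abs(sigma))

def is_anomalous(x, threshold, mu=0.0, sigma=1.0):
    # Actions with low probability density are classified as anomalous.
    return log_prob_affine_flow(x, mu, sigma) < threshold

# Usage: an action near the mode vs. a far-tail action.
print(is_anomalous(0.1, threshold=-4.0))  # False: high density
print(is_anomalous(5.0, threshold=-4.0))  # True: deep in the tail
```

In the full approach, the flow's parameters would be produced by the LSTM from the robot's past observations, so the density is conditioned on context rather than fixed.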