Model robustness against adversarial examples of a single perturbation type, such as the ℓp-norm, has been widely studied, yet its generalization to more realistic scenarios involving multiple semantic perturbations and their composition remains largely unexplored. In this paper, we first propose a novel method for generating composite adversarial examples. By utilizing component-wise projected gradient descent and automatic attack-order scheduling, our method can find the optimal attack composition. We then propose generalized adversarial training (GAT) to extend model robustness from the ℓp-norm to composite semantic perturbations, such as combinations of hue, saturation, brightness, contrast, and rotation. Results on the ImageNet and CIFAR-10 datasets show that GAT is robust not only to any single attack but also to any combination of multiple attacks. GAT also outperforms baseline ℓ∞-norm bounded adversarial training approaches by a significant margin.
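To make the idea concrete, the following is a minimal, self-contained sketch of component-wise projected gradient descent over bounded semantic perturbation parameters, with a greedy stand-in for attack-order scheduling. All names, bounds, and the toy loss are illustrative assumptions, not the paper's actual implementation; a real attack would ascend the model's classification loss on the transformed image rather than a toy function.

```python
# Hypothetical sketch: composite adversarial search via component-wise PGD.
# Each semantic component (hue, saturation, ...) has one scalar perturbation
# parameter, projected back into its own bound after every ascent step.
import math

# Illustrative per-component perturbation bounds (deltas around the identity).
BOUNDS = {
    "hue":        (-math.pi, math.pi),
    "saturation": (-0.3, 0.3),
    "brightness": (-0.2, 0.2),
    "contrast":   (-0.3, 0.3),
    "rotation":   (-10.0, 10.0),  # degrees
}

def toy_loss(params):
    # Stand-in for the model's loss on the perturbed image; larger = more
    # adversarial. A real attack would evaluate the classifier here.
    return sum(math.sin(v) + 0.1 * v for v in params.values())

def grad(params, key, eps=1e-5):
    # Finite-difference gradient w.r.t. a single component's parameter.
    hi = dict(params); hi[key] += eps
    lo = dict(params); lo[key] -= eps
    return (toy_loss(hi) - toy_loss(lo)) / (2 * eps)

def component_pgd(params, order, steps=20, lr=0.1):
    # Ascend the loss one component at a time in the scheduled order,
    # projecting each parameter into its bound (the "projected" step).
    for _ in range(steps):
        for key in order:
            g = grad(params, key)
            low, high = BOUNDS[key]
            params[key] = min(high, max(low, params[key] + lr * g))
    return params

# Greedy stand-in for automatic attack-order scheduling: try each component
# first and keep the order that yields the highest final loss.
init = {k: 0.0 for k in BOUNDS}
orders = [[k] + [o for o in BOUNDS if o != k] for k in BOUNDS]
best_order = max(orders, key=lambda order: toy_loss(component_pgd(dict(init), order)))
adv = component_pgd(dict(init), best_order)
```

The key design choice mirrored here is that each component keeps its own semantically meaningful bound (e.g. degrees for rotation) instead of a single shared ℓp ball, while the scheduler searches over the order in which components are attacked.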