In this paper we show how common training criteria such as MPE or MMI can be extended to incorporate a margin term. In addition, a transducer-based training implementation is presented that covers a large variety of discriminative training criteria for ASR, including the standard MMI, MPE, and MCE criteria as well as the modifications to these criteria presented here. The modified criteria are directly related to the conventional large margin formulation of SVMs. In the proposed approach, we can take advantage of the generalization guarantees of large margin classifiers while keeping the existing framework for discriminative training, including the efficient algorithms for conventional MPE or MMI. On the conceptual side, this allows for a direct evaluation of the margin term. Finally, experimental results are presented for several large vocabulary continuous speech recognition tasks (one of which is trained on a very large amount of training data) using the modified criteria.
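As an illustrative sketch of the kind of margin extension described above, an MMI-style criterion can be augmented by weighting competing hypotheses according to their error; the notation here (the accuracy function $A$, the margin scale $\rho$, and the symbols for models and data) is assumed for illustration and is not taken from the abstract itself:

```latex
% Margin-extended MMI objective (illustrative sketch; notation assumed):
%   x_r : acoustic observations of utterance r
%   W_r : reference transcription of utterance r
%   p_\lambda(x_r | W) : acoustic model likelihood, P(W) : language model prior
%   A(W, W_r) : accuracy of hypothesis W against the reference
%   \rho >= 0 : margin scale; \rho = 0 recovers the standard MMI criterion
\mathcal{F}_{\text{M-MMI}}(\lambda)
  = \sum_r \log
    \frac{p_\lambda(x_r \mid W_r)\, P(W_r)}
         {\sum_W p_\lambda(x_r \mid W)\, P(W)\, e^{-\rho\, A(W, W_r)}}
```

The factor $e^{-\rho A(W, W_r)}$ gives low-accuracy competitors more weight in the denominator, so the reference must beat them by a margin that grows with their error, which is what links such criteria to the large margin formulation of SVMs.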