Machine translation (MT) between English and Amharic is one of the least studied and, in terms of performance, least successful areas in the MT field. To address this issue, we apply corpus transliteration and augmentation techniques in this study to improve MT performance for the language pair. This paper presents the creation, augmentation, and use of an Amharic-to-English transliteration corpus for NMT experiments. The created corpus contains a total of 450,608 parallel sentences before preprocessing and, after preprocessing, is used to train three different NMT architectures: Recurrent Neural Networks with an attention mechanism (RNNs), Gated Recurrent Units (GRUs), and Transformers. For the Transformer-based experiments specifically, three Transformer models with different hyperparameters are built. Compared to previous work, the BLEU scores of all NMT models used in this study are improved; one of the three Transformer models, in particular, achieves the highest BLEU score yet recorded for this language pair.

Povzetek: This research addresses the improvement of machine translation (MT) between English and Amharic, one of the least studied and least successful areas in MT. The use of corpus transliteration and augmentation techniques is proposed. A corpus for NMT experiments comprising 450,608 parallel sentences was created.