Many efforts have sought to develop countermeasure techniques that augment Automatic Speaker Verification (ASV) systems and make them more robust against spoofing attacks. As evidenced by the latest ASVspoof 2019 countermeasure challenge, models currently deployed for ASV still lack sufficient generalization to unseen attacks. Upon closer investigation of the proposed methods, it appears that a broader three-tiered view of these systems, comprising the classifier, the feature extraction stage, and the model loss function, may to some extent alleviate the problem. Accordingly, the present study proposes the Efficient Attention Branch Network (EABN), a modular architecture with a combined loss function, to address the generalization problem. The EABN architecture is based on an attention branch and a perception branch. The attention branch, which is also interpretable from a human standpoint, produces an attention mask intended to improve classification performance; the perception branch performs the primary task at hand, namely spoof detection. The new EfficientNet-A0 architecture was employed for the perception branch, with nearly ten times fewer parameters and approximately seven times fewer floating-point operations than the top-performing SE-Res2Net50 network. On the ASVspoof 2019 dataset, the proposed model achieved EER = 0.86% and t-DCF = 0.0239 in the Physical Access (PA) scenario using the log-PowSpec input feature, EfficientNet-A0 as the perception branch, and the combined loss function. Furthermore, using the LFCC input feature, SE-Res2Net50 as the perception branch, and the combined loss function, the proposed model achieved EER = 1.89% and t-DCF = 0.0507 in the Logical Access (LA) scenario, which, to the best of our knowledge, is the best single-system ASV spoofing countermeasure.
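The two-branch design described above can be sketched in a few lines. The snippet below is a minimal illustrative mock-up, not the paper's implementation: the feature-map shapes, weight matrices, and function names are assumptions chosen only to show how an attention branch produces a mask in feature space (and hence can be visualized for interpretation) while the perception branch classifies the masked features as bonafide or spoofed.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_branch(feat, w_att):
    # Produce an attention mask in (0, 1) with the same shape as the
    # feature map; because it lives in feature space, it can be rendered
    # as a heat map for human inspection.
    return sigmoid(feat @ w_att)

def perception_branch(feat, w_cls):
    # Classify the attended features into bonafide-vs-spoof logits.
    return feat.reshape(feat.shape[0], -1) @ w_cls

# Toy dimensions: a batch of 4 utterances with 16x8 time-frequency maps.
feat = rng.standard_normal((4, 16, 8))
w_att = rng.standard_normal((8, 8)) * 0.1       # hypothetical attention weights
w_cls = rng.standard_normal((16 * 8, 2)) * 0.1  # hypothetical classifier weights

mask = attention_branch(feat, w_att)         # interpretable attention mask
attended = feat * mask                       # apply the mask to the features
logits = perception_branch(attended, w_cls)  # spoof-detection scores
print(mask.shape, logits.shape)
```

In the actual architecture both branches are deep convolutional networks (EfficientNet-A0 or SE-Res2Net50 for the perception branch) trained jointly under the combined loss; the sketch only fixes the data flow: features, mask, masked features, classification.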