“…There have been different approaches to reducing computational complexity when training deep neural networks, such as designing novel low-complexity network architectures (Kiranyaz et al., 2017; Tran et al., 2019c; Tran & Iosifidis, 2019; Tran et al., 2020; Kiranyaz et al., 2020; Heidari & Iosifidis, 2020), replacing existing ones with their low-rank counterparts (Denton et al., 2014; Jaderberg et al., 2014; Tran et al., 2018; Huang & Yu, 2018; Ruan et al., 2020), or adapting pre-trained models to new tasks, i.e., performing Transfer Learning (TL) (Shao et al., 2014; Yang et al., 2015; Ding et al., 2016; Ding & Fu, 2018; Fons et al., 2020) or Domain Adaptation (DA) (Duan et al., 2012; Wang et al., 2019; Zhao et al., 2020; Hedegaard et al., 2021). Among these approaches, model adaptation is the most versatile, since methods in this category are typically architecture-agnostic and therefore complementary to the other approaches.…”
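To make the second approach concrete, the sketch below illustrates one common way a dense layer can be replaced by a low-rank counterpart: factorizing its weight matrix with a truncated SVD so that one matrix–vector product becomes two cheaper ones. This is a minimal NumPy illustration of the general idea, not the specific method of any of the cited works; the function name and the chosen rank are ours.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate a dense weight matrix W (m x n) by factors
    A (m x rank) and B (rank x n) via truncated SVD, reducing the
    cost of W @ x from O(m*n) to O((m + n) * rank)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
A, B = low_rank_factorize(W, rank=32)

# The factored layer A @ (B @ x) approximates the original W @ x
# with 32*(256+512) parameters instead of 256*512.
x = rng.standard_normal(512)
rel_err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
print(A.shape, B.shape)
```

In practice the rank is a tuning knob trading accuracy for speed, and the factored layers are usually fine-tuned after the replacement to recover accuracy.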