“…Kim et al., 2016; Seong et al., 2016) and a deep learning model (Xiong et al., 2018). To overcome the limited availability of dysarthric speech as training data, researchers have (1) used models that require less training data (Gemmeke et al., 2014), (2) augmented data by artificially generating dysarthric speech (Green et al., 2021; Jin et al., 2021; Ko et al., 2017; Liu et al., 2021; Mariya Celin et al., 2020; Vachhani et al., 2018; Xiong et al., 2019), and (3) adapted data to a given speaker (Geng et al., 2021; Takashima et al., 2020). Further, Sriranjani et al. (2015) used "data pooling," in which normal speech recordings were pooled from databases and combined with dysarthric speech data to train systems.…”