“…Regarding the number of examiners who performed the measurements, one study used four examiners (Patino‐Marin et al, 2011), five studies used three examiners (Alafandy, 2018; Chougule et al, 2012; de Alencar et al, 2019; Kumar et al, 2016; Sivadas et al, 2013), six studies used two examiners (Caliskan et al, 2021; Davalbhakta et al, 2021; Ghaemmaghami et al, 2008; Kielbassa et al, 2003; Koruyucu et al, 2018; Odabaş et al, 2011), and 17 studies used a single examiner (Abdullah et al, 2016; Awasthi et al, 2017; Balaji & Pravallika, 2019; Beltrame et al, 2011; Bhat et al, 2017; Dandempally et al, 2013; Hafiz, 2018a; Neena et al, 2011; Nellamakkada et al, 2020; Nogorani et al, 2014; Oznurhan et al, 2015; Rathore et al, 2020; Sankar & Jeevanandan, 2021; Saritha et al, 2012; Senthil et al, 2016; Soruri et al, 2013; Wankhade et al, 2013). Inter‐ and intra‐examiner agreement was assessed in only 11 studies, and in all of them the values were considered excellent for the analyses used: Kappa (0.87 to 0.98) (Davalbhakta et al, 2021; Koruyucu et al, 2018; Kumar et al, 2016; Odabaş et al, 2011), ICC (0.80 to 0.99) (Alafandy, 2018; Hafiz, 2018a; Patino‐Marin et al, 2011; Wankhade et al, 2013), Cronbach's alpha (0.95 to 0.99) (Alafandy, 2018; de Alencar et al, 2019), and Bland & Altman (0.98) (Kielbassa et al, 2003).…”
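As background on the agreement statistics reported above, Cohen's kappa corrects the raw percentage of identical scores for chance agreement. The sketch below computes it for two examiners from first principles; the rater labels and example scores are hypothetical, not data from any of the cited studies:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from the
    raters' marginal category frequencies.
    """
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from marginal frequencies under independence.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical caries scores from two examiners on ten teeth.
r1 = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
r2 = [0, 1, 1, 2, 0, 1, 2, 1, 0, 1]
print(round(cohens_kappa(r1, r2), 2))  # ≈ 0.85
```

By the commonly used Landis and Koch benchmarks, values above about 0.81, such as the 0.87 to 0.98 range reported in the quoted studies, indicate almost perfect agreement.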