“…Briefly, these include: incomplete reporting of missing data and of the missing data mechanism (eg, any assumptions made and what was included in the imputation model); failure to report the model intercept, which prevents other researchers from implementing or validating the model; weak assessment of calibration (eg, the Hosmer-Lemeshow test has long been disregarded for this purpose); and unclear implementation of the bootstrap for internal validation, which requires replaying all modeling steps (including any variable selection) in each resample, evaluating shrinkage (ie, estimating the degree of overfitting and shrinking the regression coefficients accordingly), and correcting for optimism (ie, adjusting the model performance measures for the optimism attributable to overfitting).2,9 It is not the motivation of these authors to critique the overall well-designed prospective study of Feijen et al.4 However, because clinical prognostic models can directly influence the health, well-being, and careers of patients and athletes, using best practice methods is imperative. The authors would be interested in collaborating on redeveloping this model using these methodological considerations to compare and contrast model performance and risk factor inferences.…”
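The bootstrap internal validation the letter describes, replaying all modeling steps in each resample, estimating the optimism, and subtracting it from the apparent performance, can be sketched as below. This is a minimal illustration only: the simulated data, the `fit_model` helper, and the choice of AUC as the performance measure are assumptions for the example, not details of the Feijen et al. model.

```python
# Minimal sketch of optimism-corrected bootstrap internal validation.
# Everything here (data, fit_model, AUC) is illustrative, not the
# published model being discussed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Simulated cohort standing in for real prospective data.
n, p = 300, 4
X = rng.normal(size=(n, p))
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))).astype(int)

def fit_model(X, y):
    # In a real redevelopment, ALL modeling steps -- including any
    # variable selection -- must be replayed here in every resample.
    return LogisticRegression(max_iter=1000).fit(X, y)

# Apparent performance: model fit and evaluated on the same data.
apparent = roc_auc_score(y, fit_model(X, y).predict_proba(X)[:, 1])

B = 200
optimisms = []
for _ in range(B):
    idx = rng.integers(0, n, n)  # bootstrap resample with replacement
    m = fit_model(X[idx], y[idx])
    boot_auc = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    orig_auc = roc_auc_score(y, m.predict_proba(X)[:, 1])  # test on original data
    optimisms.append(boot_auc - orig_auc)

optimism = float(np.mean(optimisms))
corrected = apparent - optimism  # optimism-corrected performance estimate
print(f"apparent AUC={apparent:.3f}, optimism={optimism:.3f}, "
      f"corrected AUC={corrected:.3f}")
```

The same loop can also average the slope of a recalibration fit to give a uniform shrinkage factor for the regression coefficients, which is the shrinkage evaluation the letter refers to.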