Objective: To compare the prevalence of selective reporting across two ME/CFS research areas: psychosocial versus cellular.

Method: A risk of bias appraisal was conducted on three trials (one psychosocial and two cellular) to compare risk of bias in study design, selection and measurement. The primary outcome compared evidence and justifications in resolving biases by proportions (%) and odds ratios (ORs); the secondary outcome determined the proportion (%) of ME/CFS grants at risk of bias.

Results: NS (cellular study) was twice as likely as PACE (psychosocial trial) to present evidence in resolving biases (OR = 2.16; 65.6% vs 46.9%), but this difference was not significant (p = 0.13). However, NS was almost five times more likely than PACE to justify biases (OR = 4.76; 46.9% vs 15.6%), and this difference was significant (p = 0.0095). PACE was weak in place (operational aspects, 32%) and NS in data practices (37%). Grants were more often biased in evidence for PACE (72%) than for NS (28%), and also more often biased in justifications for PACE (86%) than for NS (14%).

Conclusion: Psychosocial trials on ME/CFS are more likely than cellular trials to engage in selective reporting indicative of research waste. Improvements to place may help reduce these biases, whereas cellular trials may benefit from adopting more translatable data methods. However, these findings are based on two trials. Further risk of bias appraisals are needed to determine the number of trials required to make these findings robust.

Research waste in clinical trials is seen in outcomes that are not published, or in selective reporting of incidental and spurious findings that cannot be reproduced or translated into practice. When outcomes are not published, resources are wasted, research is stilted, and the study protocol can be neither validated nor repudiated in future protocols. Reviews on publication rates indicate that 50% of randomised trials are not published (Kasenda et al., 2014); 88% of cohort studies (Bogert et al., 2015); and 50% of pre-clinical and clinical studies (Schmucker et al., 2014). On the other hand, selective reporting is suspected when data are fabricated (intentionally misrepresented) or falsified (intentionally manipulated) in favour of a desired outcome. Potential causes of selective reporting include poor recruitment, irrelevant endpoints, biased selection criteria and discontinuation; for instance, of 1017 RCTs, 25% were discontinued, and of those, 9.9% were discontinued due to poor recruitment (Kasenda et al., 2014). When outcomes are not published, authors can be contacted for the missing data, for instance when imputing data from unpublished trials in a systematic review. However, in selective reporting, even if the reported outcomes appear distinctly remarkable, such as p-hacking (extremely good p values) or the file drawer problem (only positive results), it is difficult to substantiate who is responsible for it, whether it was intentional, and whether institutional enquiries into research misconduc...
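As an illustrative check, the odds ratios reported in the abstract can be approximated directly from the stated proportions. This is a minimal sketch, not the study's analysis code: the published ORs were presumably computed from the underlying item counts, so small rounding differences against the reported values are expected.

```python
def odds_ratio(p1, p2):
    """Odds ratio comparing proportion p1 with proportion p2 (fractions in (0, 1))."""
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Evidence in resolving biases: NS 65.6% vs PACE 46.9%
print(round(odds_ratio(0.656, 0.469), 2))  # close to the reported OR = 2.16

# Justifications for biases: NS 46.9% vs PACE 15.6%
print(round(odds_ratio(0.469, 0.156), 2))  # close to the reported OR = 4.76
```

The first ratio reproduces the reported OR of 2.16 exactly at two decimals; the second comes out near 4.78 versus the reported 4.76, consistent with the paper having worked from raw counts rather than rounded percentages.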