Background and objectives
Rigorous and transparent management of conflicts of interest, together with clinical practice guidelines built on the best available evidence, is necessary for the development of nephrology guidelines. However, no study has assessed financial and nonfinancial conflicts of interest among Japanese CKD guideline authors, the quality of evidence underlying the guidelines, or the conflict of interest policies governing guideline development.

Design, setting, participants, & measurements
This cross-sectional study examined financial and nonfinancial conflicts of interest among all 142 authors of the CKD guidelines issued by the Japanese Society of Nephrology, using a personal payment database covering all 92 major Japanese pharmaceutical companies between 2016 and 2019, together with self-citations by guideline authors. The quality of evidence and strength of recommendations underlying the guidelines, as well as the conflict of interest policies of the Japanese, US, and European nephrology societies, were also evaluated.

Results
Of the 142 authors, 125 (88%) received a combined $6,742,889 in personal payments from 56 pharmaceutical companies between 2016 and 2019. The 4-year combined median payment per author was $8,258 (interquartile range, $2,230–$51,617). The amounts of payments and the proportion of guideline authors receiving payments remained stable during and after guideline development. The chairperson, vice chairperson, and group leaders received higher personal payments than the other guideline authors. Of 861 references cited in the guidelines, 69 (8%) were self-citations by guideline authors, and 76% of the recommendations were based on low or very low quality of evidence. None of the conflict of interest policies for nephrology guideline authors in the United States, Europe, or Japan was fully rigorous and transparent.

Conclusions
Most recommendations in the Japanese CKD guidelines were based on low-quality evidence and written by guideline authors with financial ties to pharmaceutical companies, suggesting the need for better management of financial conflicts of interest.
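The per-author payment aggregation described above (summing each author's payments over the study window, then taking the median and interquartile range) can be sketched as follows. This is a minimal illustration with made-up author names and amounts, not the study's actual data or code.

```python
# Minimal sketch of per-author payment aggregation (hypothetical data).
import statistics

# payments: one record per individual payment to an author
payments = [
    {"author": "A", "amount": 1200.0},
    {"author": "A", "amount": 3500.0},
    {"author": "B", "amount": 8258.0},
    {"author": "C", "amount": 51617.0},
]

# Sum payments per author over the whole study window
totals = {}
for p in payments:
    totals[p["author"]] = totals.get(p["author"], 0.0) + p["amount"]

per_author = sorted(totals.values())
median = statistics.median(per_author)
q = statistics.quantiles(per_author, n=4)  # quartile cut points
q1, q3 = q[0], q[2]
print(f"median={median}, IQR=({q1}, {q3})")
```

In practice such an analysis would also deduplicate author name variants and convert currencies before aggregating.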
Objective
To assess financial conflicts of interest (COI) and nonfinancial COI among psoriatic arthritis (PsA) clinical practice guideline (CPG) authors in Japan and the US, and to evaluate the quality of evidence and strength of recommendations of PsA CPGs.
Methods
We performed a retrospective analysis using payment data from major Japanese pharmaceutical companies and the US Open Payments Database from 2016 to 2018. All authors of PsA CPGs issued by the Japanese Dermatological Association (JDA) and American College of Rheumatology (ACR) were included.
Results
Of the 23 CPG authors in Japan, 21 (91.3%) received at least 1 payment, with a combined total of $3,335,413 between 2016 and 2018. Of the 25 US authors, 21 (84.0%) received at least 1 payment, with a combined total of $4,081,629 during the same period. The 3-year combined mean ± SD payment per author was $145,018 ± $114,302 in Japan and $162,825 ± $259,670 in the US. A total of 18 authors (78.3%) of the JDA PsA CPG and 12 authors (48.0%) of the ACR PsA CPG had undisclosed financial COI worth $474,663 and $218,501, respectively. Citations involving at least 1 CPG author accounted for 3.4% of all citations in Japan and 33.6% in the US. In total, 71.4% of JDA and 88.8% of ACR recommendations for PsA were supported by low or very low quality of evidence.
Conclusion
More rigorous cross-checking of payment information disclosed by pharmaceutical companies against physicians' self-reports, along with more stringent and transparent COI policies, is necessary.
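The cross-check called for above can be sketched as a comparison of company-reported payments against authors' self-disclosures, flagging any company-reported payment with no matching disclosure. The names and amounts below are hypothetical placeholders, not values from either database.

```python
# Sketch of a disclosure cross-check (hypothetical authors, companies, amounts).
# Company-reported payments, keyed by (author, company)
company_reported = {
    ("author1", "PharmaX"): 12000.0,
    ("author1", "PharmaY"): 500.0,
    ("author2", "PharmaX"): 3000.0,
}
# (author, company) pairs the authors themselves disclosed
self_disclosed = {
    ("author1", "PharmaX"),
}

# Company-reported payments with no matching self-disclosure
undisclosed = {
    key: amount
    for key, amount in company_reported.items()
    if key not in self_disclosed
}
undisclosed_total = sum(undisclosed.values())
print(undisclosed_total)  # 3500.0
```

A real reconciliation would additionally need fuzzy matching of author and company names and alignment of reporting periods between the two sources.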
ChatGPT is gaining widespread acceptance for its ability to generate natural-language responses to a wide range of inputs and is expected to become a supplementary tool for diagnosis and treatment planning in clinical settings. We evaluated ChatGPT's clinical inference ability and accuracy on the 117th Japanese National Medical Licensing Examination, held in February 2023. The exam questions were manually entered into ChatGPT's input window, and the accuracy of its responses was judged against answers provided by a preparatory school. ChatGPT answered 389 of 400 questions, with an overall correct answer rate of 55.0%. The correct answer rates for 5-choice-1, 5-choice-2, and 5-choice-3 questions were 57.8%, 42.9%, and 41.2%, respectively. The highest correct answer rate was on the compulsory exam (67.0%), followed by the specific-knowledge exam (54.1%) and the cross-category exam (47.9%). The correct answer rate was 56.2% for non-image questions and 51.5% for image questions. These results suggest that ChatGPT has the potential to support healthcare professionals in clinical decision-making in Japanese clinical settings, but its answers should be interpreted and used with caution, as its performance still has room for improvement.
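The per-category correct answer rates reported above amount to grouping graded responses by question category and dividing hits by totals. A minimal sketch, with made-up categories and grades rather than the study's data:

```python
# Sketch of per-category accuracy calculation (hypothetical graded responses).
responses = [
    {"category": "compulsory", "correct": True},
    {"category": "compulsory", "correct": True},
    {"category": "compulsory", "correct": False},
    {"category": "image", "correct": True},
    {"category": "image", "correct": False},
]

# Tally (hits, total) per category
by_category = {}
for r in responses:
    hits, total = by_category.get(r["category"], (0, 0))
    by_category[r["category"]] = (hits + int(r["correct"]), total + 1)

rates = {cat: hits / total for cat, (hits, total) in by_category.items()}
print(rates)
```

Unanswered questions (the 11 the model declined) would simply be counted in `total` but not in `hits` if graded as incorrect, or excluded from `responses` if graded only on attempted items.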
These data indicate the potential of combination therapy: the combination of two DNA vaccines, or of a DNA vaccine with an antibiotic drug. This may provide a novel strategy for the treatment of MDR-TB.