Objective: The triglyceride–glucose (TyG) index has emerged as an alternative measure of insulin resistance. However, no study has investigated the association of the TyG index with incident atrial fibrillation (AF) in a general population free of known cardiovascular disease.
Methods: Individuals without known cardiovascular disease (heart failure, coronary heart disease, or stroke) from the Atherosclerosis Risk in Communities (ARIC) cohort were included. The baseline TyG index was calculated as Ln [fasting triglycerides (mg/dL) × fasting glucose (mg/dL)/2]. The association between the baseline TyG index and incident AF was examined using Cox regression.
Results: Among 11,851 participants, the mean age was 54.0 years, and 6,586 (55.6%) were female. During a median follow-up of 24.26 years, 1,925 incident AF cases occurred (0.78 per 100 person-years). Kaplan–Meier curves showed increasing AF incidence across TyG index categories (P < 0.001). In multivariable-adjusted analysis, both the lowest (< 8.80; adjusted hazard ratio [aHR] 1.15, 95% confidence interval [CI] 1.02–1.29) and highest (> 9.20; aHR 1.18, 95% CI 1.03–1.37) TyG index categories were associated with an increased risk of AF compared with the middle category (8.80–9.20). Exposure–effect analysis confirmed a U-shaped association between the TyG index and AF incidence (P = 0.041). Sex-specific analysis showed that this U-shaped association persisted in females but not in males.
Conclusions: A U-shaped association between the TyG index and AF incidence was observed in Americans without known cardiovascular disease. Female sex may modify the association between the TyG index and AF incidence.
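The TyG index formula used in the study can be sketched as a small function; the example values below are illustrative, not taken from the ARIC data.

```python
import math

def tyg_index(triglycerides_mg_dl: float, glucose_mg_dl: float) -> float:
    """Triglyceride-glucose (TyG) index, as defined in the abstract:
    Ln[fasting triglycerides (mg/dL) x fasting glucose (mg/dL) / 2].
    """
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2)

# Illustrative values: triglycerides 150 mg/dL, fasting glucose 100 mg/dL.
# ln(150 * 100 / 2) = ln(7500) ~ 8.92, which would fall in the study's
# middle (reference) category of 8.80-9.20.
print(round(tyg_index(150, 100), 2))
```

Note that `math.log` is the natural logarithm by default, matching the Ln in the study's definition.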
Background: ChatGPT, an artificial intelligence (AI) system based on large-scale language models, has fueled interest in medical care. However, the ability of AI to understand and generate text is constrained by the quality and quantity of training data available in a given language. This study aims to provide qualitative feedback on ChatGPT's problem-solving capabilities in medical education and clinical decision-making in Chinese.
Methods: A dataset from the Clinical Medicine Entrance Examination for Chinese Postgraduates was used to assess the effectiveness of ChatGPT-3.5 on medical knowledge in the Chinese language. The indicators of accuracy, concordance (the explanation affirms the answer), and frequency of insights were used to assess ChatGPT's performance on original and encoded medical questions.
Results: In our evaluation, ChatGPT received a score of 153.5/300 on the original questions in Chinese, slightly above the passing threshold of 129/300. ChatGPT showed low accuracy in answering open-ended medical questions, with a total accuracy of 31.5%. Nevertheless, ChatGPT demonstrated a commendable level of concordance (90% across all questions) and generated innovative insights for most problems (at least one significant insight for 80% of all questions).
Conclusions: ChatGPT's performance was suboptimal for medical education and clinical decision-making in Chinese compared with English. However, ChatGPT demonstrated high internal concordance and generated multiple insights in the Chinese language. Further research should investigate language-based differences in ChatGPT's healthcare performance.
International Registered Report: RR2-https://doi.org/10.1101/2023.04.12.23288452
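The study's three indicators (accuracy, concordance, and frequency of insights) are proportions over graded responses. A minimal sketch, with hypothetical field names for the graders' judgments:

```python
def evaluate(responses: list[dict]) -> dict:
    """Summarize graded responses as the three indicators in the abstract.

    Each response dict uses hypothetical boolean fields:
      'correct'    - the answer matched the key (accuracy)
      'concordant' - the explanation affirms the chosen answer
      'insight'    - at least one significant insight in the explanation
    """
    n = len(responses)
    return {
        "accuracy": sum(r["correct"] for r in responses) / n,
        "concordance": sum(r["concordant"] for r in responses) / n,
        "insight_rate": sum(r["insight"] for r in responses) / n,
    }

# Toy grading data (not from the study):
graded = [
    {"correct": True,  "concordant": True, "insight": True},
    {"correct": False, "concordant": True, "insight": True},
    {"correct": False, "concordant": True, "insight": False},
]
print(evaluate(graded))
```

This illustrates why concordance can be high (here 3/3) even when accuracy is low (1/3), as reported for the Chinese-language questions.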