Medical education is perceived as being stressful, and a high level of stress may have a negative effect on the cognitive functioning and learning of medical students. This cross-sectional study was conducted to determine the prevalence of stress among medical students and to examine the association between stress levels and academic performance, as well as the sources of stress. All medical students in years one through five at the College of Medicine, King Saud University, were enrolled in the study. The study was conducted using the Kessler Psychological Distress Scale (K10), which classifies distress as none, mild, moderate, or severe. The prevalence of stress was measured and compared across five study variables: gender, academic year, academic grades, regularity of course attendance, and perceived physical problems. The response rate among the study subjects was 87% (n=892). The total prevalence of stress was 63%, and the prevalence of severe stress was 25%. The prevalence of stress was higher among females (75.7%) than among males (57%) (odds ratio=2.3, χ2=27.2, p<0.0001). Stress decreased significantly as the year of study increased, except in the final year. Being female (p<0.0001), year of study (p<0.001), and the presence of perceived physical problems (p<0.0001) were independent significant risk factors for stress. Neither students' grade point average (academic score) nor regularity of class attendance was significantly associated with stress level. The prevalence of stress was higher during the initial three years of study and among female students. Physical problems are associated with high stress levels.
Preventive mental health services, therefore, could be made an integral part of routine clinical services for medical students, especially in the initial academic years, to prevent such outcomes.
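The K10 is scored as a total from 10 to 50, which is then banded into the distress categories the abstract reports. The cutoffs below are the commonly used K10 bands (e.g., from Australian population surveys) and are an assumption here, since the abstract does not state which cutoffs the study applied; a minimal sketch:

```python
def k10_category(score: int) -> str:
    """Map a K10 total score (10-50) to a distress category.

    Cutoffs are the commonly used K10 bands; the study may have
    used slightly different ones (this is an assumption).
    """
    if not 10 <= score <= 50:
        raise ValueError("K10 total score must be between 10 and 50")
    if score <= 19:
        return "none"       # likely to be well
    if score <= 24:
        return "mild"
    if score <= 29:
        return "moderate"
    return "severe"         # 30-50


print(k10_category(35))  # prints "severe"
```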
Data availability: Summary statistics generated by the COVID-19 Host Genetics Initiative are available online (https://www.covid19hg.org/results/r6/). The analyses described here use the freeze 6 data. The COVID-19 Host Genetics Initiative continues to regularly release new data freezes. Summary statistics for samples from individuals of non-European ancestry are not currently available owing to the small individual sample sizes of these groups, but the results for the lead variants at 23 loci are reported in Supplementary Table 3. Individual-level data can be requested directly from the authors of the contributing studies, listed in Supplementary Table 1.
Objective To evaluate whether favipiravir reduces the time to viral clearance, as documented by negative SARS-CoV-2 RT-PCR, in mild COVID-19 cases compared to placebo. Methods In this randomized, double-blinded, multicenter, placebo-controlled trial, adults with PCR-confirmed mild COVID-19 were recruited in an outpatient setting at seven medical facilities across Saudi Arabia. Participants were randomized in a 1:1 ratio to receive either favipiravir 1800 mg by mouth twice daily on day one followed by 800 mg twice daily (n=112) or a matching placebo (n=119), for a total of 5 to 7 days. The primary outcome was the effect of favipiravir on reducing the time to viral clearance (by PCR test) within 15 days of starting treatment compared to the placebo group. The trial included the following secondary outcomes: symptom resolution, hospitalization, ICU admissions, adverse events, and 28-day mortality. Results 231 patients were randomized and began the study (median age, 37 [interquartile range: 32-44] years; 155 [67%] men); 112 (48.5%) were assigned to the treatment group and 119 (51.5%) to the placebo group. The data and safety monitoring board (DSMB) recommended stopping enrollment because of futility at the interim analysis. The median time to viral clearance was 10 (IQR: 6-12) days in the favipiravir group and 8 (IQR: 6-12) days in the placebo group, with a hazard ratio of 0.87 for the favipiravir group (95% CI 0.571 to 1.326; p=0.51). The median time to clinical recovery was 7 days (IQR: 4-11) in the favipiravir group and 7 days (IQR: 5-10) in the placebo group. There was no difference between the two groups in the secondary outcome of hospital admission. There were no drug-related severe adverse events. Conclusion In this clinical trial, favipiravir therapy in mild COVID-19 patients did not reduce the time to viral clearance within 15 days of starting treatment.
Clinical Trial Registration ClinicalTrials.gov identifier (NCT04464408): https://clinicaltrials.gov/ct2/show/NCT04464408.
Background Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged in Wuhan, China, in late 2019 and created a global pandemic that overwhelmed healthcare systems. As of July 3, 2021, COVID-19 had yielded 182 million confirmed cases and 3.9 million deaths globally, according to the World Health Organization. Several patients who were initially diagnosed with mild or moderate COVID-19 later deteriorated and were reclassified as severe disease. Objective The aim is to create a predictive model for COVID-19 ventilatory support and mortality early on, from baseline (at the time of diagnosis) and routinely collected data for each patient (CXR, CBC, demographics, and patient history). Methods Four common machine-learning algorithms, three data-balancing techniques, and feature selection were used to build and validate predictive models for COVID-19 mechanical ventilation requirement and mortality. Baseline CXR, CBC, demographic, and clinical data were retrospectively collected from April 2, 2020, to June 18, 2020, for 5739 patients with PCR-confirmed COVID-19 at King Abdulaziz Medical City in Riyadh. Of those patients, however, only 1508 and 1513 met the inclusion criteria for the ventilatory support and mortality endpoints, respectively. Results In an independent test set, the ventilation requirement predictive model, using support vector machines with random undersampling and the top 20 features selected by the reliefF algorithm from baseline radiological, laboratory, and clinical data, attained an AUC of 0.87 and a balanced accuracy of 0.81. For the mortality endpoint, the top model yielded an AUC of 0.83 and a balanced accuracy of 0.80 using all features with a balanced random forest. This indicates that, with only routinely collected data, our models can predict the outcome with good performance. The predictive ability of the combined data consistently outperformed each data set individually for intubation and mortality.
For ventilatory support, chest X-ray severity annotations alone performed better than comorbidity, complete blood count, age, or gender, with an AUC of 0.85 and a balanced accuracy of 0.79. For mortality, comorbidity alone achieved an AUC of 0.80 and a balanced accuracy of 0.72, higher than models using only chest radiograph, laboratory, or demographic features. Conclusion The experimental results demonstrate the practicality of the proposed COVID-19 predictive tool for hospital resource planning and patients’ prioritization in the current COVID-19 pandemic crisis.
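The models above are compared on balanced accuracy, the mean of sensitivity and specificity. Unlike raw accuracy, it is not inflated by the majority class, which matters for imbalanced endpoints such as intubation and mortality. A minimal sketch of the metric, using an illustrative confusion matrix (not figures from the study):

```python
def balanced_accuracy(tp: int, fn: int, tn: int, fp: int) -> float:
    """Balanced accuracy = mean of sensitivity and specificity.

    A classifier that always predicts the majority class gets raw
    accuracy near the majority prevalence, but balanced accuracy 0.5.
    """
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return (sensitivity + specificity) / 2


# Hypothetical counts: 40 intubated patients correctly flagged, 10 missed;
# 900 non-intubated correctly cleared, 100 falsely flagged.
print(balanced_accuracy(tp=40, fn=10, tn=900, fp=100))  # ≈ 0.85
```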
Introduction: Multidrug-resistant Pseudomonas aeruginosa isolates have multiple resistance mechanisms, and there are insufficient therapeutic options to target them. Ceftolozane-tazobactam is a novel antipseudomonal agent that combines an oxyimino-aminothiazolyl cephalosporin (ceftolozane) with a β-lactamase inhibitor (tazobactam). Methods: A single-center retrospective observational study, between January 2017 and December 2018, of patients who had been diagnosed with carbapenem-resistant P aeruginosa infections and treated with ceftolozane-tazobactam for more than 72 hours. We assessed clinical success based on microbiological clearance as well as the clinical resolution of signs and symptoms of infection. Results: A total of 19 patients fit the inclusion criteria, with a median age of 57 years; 53% were female. The types of infections were nosocomial pneumonia; acute bacterial skin and skin structure infections; complicated intra-abdominal infections; and central line–associated bloodstream infections. All of the isolates were resistant to both meropenem and imipenem. The duration of therapy was variable (average of 14 days). At day 14 of ceftolozane-tazobactam therapy, 18 of 19 patients had resolution of the signs and symptoms of infection. Only 14 of 19 patients (74%) had proven microbiological eradication at the end of therapy. During therapy, there were no adverse events secondary to ceftolozane-tazobactam, and no Clostridium difficile infection was identified. The 30-day mortality rate was 21% (4/19). Conclusions: Multidrug-resistant P aeruginosa infection is associated with high mortality, which could potentially be reduced by new antibiotics such as ceftolozane-tazobactam. Studies are required to define the role of combination therapy, establish adequate dosing, and identify the proper duration of treatment.
Both community-acquired and hospital-acquired infections carry high mortality. Hospital-acquired severe sepsis is frequent in medical wards and ICUs, and measures to further evaluate risk factors are prudent.
Purpose Bloodstream infection among hospitalized patients is associated with serious adverse outcomes. Blood culture is routinely ordered in patients with suspected infections, although 90% of blood cultures do not show any growth of organisms. Evidence regarding the prediction of bacteremia is scarce. Patients and Methods A retrospective review of blood cultures requested for a cohort of patients admitted between 2017 and 2019 was undertaken. Several machine-learning models were used to identify the best prediction model. Additionally, univariate and multivariable logistic regression was used to determine the predictive factors for bacteremia. Results A total of 36,405 blood cultures from 7157 patients were performed. There were 2413 (6.62%) positive blood cultures. The best prediction was achieved by a neural network (NN), with a high specificity of 88% but low sensitivity. There was a statistical difference in the following factors: longer admission duration before the blood culture, presence of a central line, and higher lactic acid (more than 2 mmol/L). Conclusion Despite the low positive rate of blood culture, machine learning could predict positive blood cultures with high specificity but minimal sensitivity. However, the SIRS score, qSOFA score, and other known factors were not good predictors. Further improvement and training could enhance machine-learning performance.
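With only 6.62% of cultures positive, even a highly specific model produces many false alarms per true positive. Bayes' rule makes this concrete; the sensitivity value below is an illustrative assumption (the abstract reports it only as "low"), while the 88% specificity and 6.62% prevalence come from the abstract:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule:

        P(positive culture | model flags)
            = sens*prev / (sens*prev + (1-spec)*(1-prev))
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


# 40% sensitivity is a hypothetical stand-in for "low sensitivity".
print(ppv(sensitivity=0.40, specificity=0.88, prevalence=0.0662))
# ≈ 0.19: only about 1 in 5 flagged cultures would be truly positive.
```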