The purpose of this study was to synthesize internal consistency reliability estimates for the subscale scores on the Maslach Burnout Inventory (MBI). The authors addressed three research questions: (a) What is the mean subscale score reliability for the MBI across studies? (b) What factors are associated with observed variance in MBI subscale score reliability? (c) What are the implications for appropriate use based on MBI subscale mean internal consistency estimates? Of the 221 studies reviewed, 84 provided alpha coefficients and were used in the current analysis. Results suggest that mean alpha estimates across subscales generally fell within the .70 to .80 range. Scale variance and language most often accounted for the variance in coefficient alpha, although some variations were apparent between subscales. Of the three MBI subscales, the Personal Accomplishment and Depersonalization mean alpha estimates were well below recommended levels for high-stakes decisions, such as the diagnosis of burnout syndrome. Recommendations for the use of the current version of the instrument's scale scores, as well as suggestions for scale refinement, are provided.

Keywords: MBI, burnout, reliability generalization, meta-analysis
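The statistic synthesized above, coefficient alpha, can be computed directly from item-level scores. The following is a minimal illustrative sketch, not code from the study; the function name and the toy data are assumptions for demonstration only:

```python
def cronbach_alpha(items):
    """Coefficient (Cronbach's) alpha for a scale.

    items: list of equal-length lists, one list of respondent
    scores per scale item.
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):
        # Sample variance (ddof = 1).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total scale score per respondent.
    totals = [sum(col[i] for col in items) for i in range(n)]
    sum_item_var = sum(var(col) for col in items)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return k / (k - 1) * (1 - sum_item_var / var(totals))
```

For example, two perfectly parallel items yield an alpha of 1.0, while items that covary only partially yield an alpha between 0 and 1; values in the .70 to .80 range, as reported above, indicate moderate internal consistency.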
IMPORTANCE It is well documented that financial conflicts of interest influence medical research and clinical practice. Prior to the Open Payments provisions of the Affordable Care Act, financial ties became apparent only through self-disclosure. The nature of financial interests has not been studied among physicians who develop dermatology clinical practice guidelines.

OBJECTIVE To evaluate payments received by physicians who author dermatology clinical practice guidelines, compare disclosure statements for accuracy, determine whether pharmaceutical companies from which the authors received payments manufactured products related to the guidelines, and examine the extent to which the American Academy of Dermatology enforced its Administrative Regulations for guideline development.

DESIGN, SETTING, AND PARTICIPANTS Three American Academy of Dermatology guidelines published from 2013 to 2016 were retrieved. Double data extraction was used to record financial payments received by the 49 guideline authors using the Open Payments database. Payments received by the authors from the date of the initial literature search to the date of publication were used to evaluate disclosure statement accuracy, detail the companies providing payments, and evaluate enforcement of the Administrative Regulations. This study is applicable to clinical practice guideline panels drafting recommendations, physicians using clinical practice guidelines to inform patient care, and those establishing policies for guideline development.

MAIN OUTCOMES AND MEASURES The main outcomes are the monetary values and types of payments received by physicians who author dermatology guidelines and the accuracy of disclosure statements. Data were collected from the Open Payments database and analyzed descriptively.
Keywords: Life satisfaction, reliability generalization, meta-analysis, Satisfaction With Life Scale, SWLS, psychometrics
This study provides a summary of 45 exploratory and confirmatory factor-analytic studies that examined the internal structure of scores obtained from the Maslach Burnout Inventory (MBI). It highlights characteristics of the studies that account for differences in reporting of the MBI factor structure. This approach includes an examination of the various sample characteristics, forms of the instrument, factor-analytic methods, and the reported factor structure across studies that have attempted to examine the dimensionality of the MBI. This study also investigates the dimensionality of MBI scale scores using meta-analysis. Both the descriptive and empirical analyses supported a three-factor model. The pattern of reported dimensions across validation studies should enhance understanding of the structural dimensions that the MBI measures as well as provide a more meaningful interpretation of its test scores.
Some CPG authors failed to fully disclose all financial conflicts of interest, and most guideline development panels and chairpersons had conflicts. In addition, adherence to IOM standards for guideline development was lacking. This study is relevant to CPG panels authoring recommendations, physicians implementing CPGs to guide patient care, and the organizations establishing policies for guideline development.
reported only when respondents had already committed themselves deeply to patients' treatment (Q4) or when patients had arranged a second opinion without informing them (Q5). Specialists who provided second opinions struggled with feelings of helplessness toward patients if their opinion was in accordance with the first opinion and they thus took away the patient's hope (Q6). Moreover, respondents struggled with patients' unwillingness to be referred back to the first specialist after the second opinion. To reduce patients' reluctance, they actively tried to restore trust in the first specialist (Q7, Q8). Respondents were hesitant to communicate minor discrepancies with the first opinion to patients, fearing this would harm the patients' trust in the referring specialist, their own relationship with their colleague, or both (Q9, Q10). When differences in opinion were conveyed bluntly between the 2 specialists involved, this resulted in tension or anger (Q11, Q12). After back-referral, most referring specialists perceived that the physician-patient relationship had strengthened. Especially when both opinions aligned, patients gained acceptance, certainty, and trust (Q13).

Discussion | The second-opinion process is complex and places great demands on the communication skills of medical specialists because of the emotions involved, especially when the attitudes they wish to convey conflict with their true beliefs and emotions. The physicians must balance objectivity with diplomacy to avoid harming their relationship with their patient or colleague. Interpersonal sensitivities between physicians and patients or colleagues may be managed by explicitly ascertaining patients' motivations and expectations, both when conducting and referring patients for second opinions.
Although respondents in this study may not have been fully open about their personal experiences (a potential limitation), the range of emotions identified suggests that acceptable candor was achieved. Addressing the identified challenges in medical training may improve the second-opinion process and enhance collaboration among medical specialists. Our research indicates that, although some physicians believe second opinions are often unnecessary, they can strengthen the physician-patient relationship after back-referral. Future research incorporating subjective and objective outcomes of second opinions should further establish their value.
Objective: Manual searches are supplemental approaches to database searches to identify additional primary studies for systematic reviews. The authors argue that these manual approaches, in particular hand-searching and perusing reference lists, are often considered the same yet lead to different outcomes.

Methods: We conducted a PubMed search for systematic reviews in the top 10 dermatology journals (January 2006–January 2016). After screening, the final sample comprised 292 reviews. Statements related to manual searches were extracted from each review and categorized by the primary and secondary authors. Each statement was categorized as "Search of Reference List," "Hand Search," "Both," or "Unclear."

Results: Of the 292 systematic reviews included in our sample, 143 reviews (48.97%) did not report a hand-search or scan of reference lists. One hundred thirty-six reviews (46.58%) reported searches of reference lists, while 4 reviews (1.37%) reported systematic hand-searches. Three reviews (1.03%) reported use of both hand-searches and scanning of reference lists. Six reviews (2.05%) were classified as unclear due to vague wording.

Conclusions: Authors of systematic reviews published in dermatology journals in our study sample scanned reference lists more frequently than they conducted hand-searches, possibly contributing to biased search outcomes. We encourage systematic reviewers to routinely practice hand-searching in order to minimize bias.

Well-conducted systematic reviews are the apex of the evidence hierarchy and are routinely used for developing care guidelines and informing clinical decision making [1]. While each aspect of systematic review methodology is important, the search process, when thorough and well produced, leads to a set of research evidence to consider for inclusion that minimizes bias.
There is a substantial body of evidence pointing to the importance of thorough and prespecified search strategies involving multiple databases to locate relevant studies and minimize the potential for publication and language bias [2]. As part of the search process, systematic reviewers often review reference lists of other studies or conduct hand-searches to identify additional primary studies. It is the authors' experience that reviewing reference lists and conducting hand-searches are often considered the same, yet we argue that these processes are quite different and lead to different outcomes. Our objective was to assess how often systematic reviewers in dermatology actually conducted hand-searches when performing a systematic review.
We identified 'spin' in abstracts of randomised controlled trials (RCTs) with nonsignificant primary endpoints in psychiatry and psychology journals. This is a cross-sectional review of clinical trials with nonsignificant primary endpoints published in psychiatry and psychology journals from January 2012 to December 2017. The main outcome was the frequency and manifestation of spin in the abstracts. We define spin as the 'use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results'. We also assessed the relationship between industry funding and spin. Of the 486 RCTs examined, 116 were included in our analysis of spin. Spin was identified in 56% (n=65) of those included. Spin was found in 2 (2%) titles, 24 (21%) abstract results sections, and 57 (49.1%) abstract conclusion sections; it appeared simultaneously in both the results and conclusions sections in 15% of RCTs (n=17). Twelve articles (10%) reported industry funding, and industry funding was not associated with increased odds of spin in the abstract (unadjusted OR: 1.0; 95% CI: 0.3 to 3.2). These findings raise concerns about the effects spin may have on clinicians. Further steps could be taken to address spin, including inviting reviewers to comment on the presence of spin and updating Consolidated Standards of Reporting Trials guidelines to contain language discouraging spin.
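The unadjusted odds ratio and Wald confidence interval reported above can be derived from a 2×2 table of funding status by spin status. The sketch below is illustrative only; the function name and the example cell counts are assumptions for demonstration, not the study's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.

    a: spin present, industry funded     b: spin present, not funded
    c: spin absent,  industry funded     d: spin absent,  not funded
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

With hypothetical counts such as `odds_ratio_ci(10, 20, 5, 10)`, the point estimate is 1.0 and the interval spans 1, i.e., no association — the same qualitative conclusion the abstract draws for industry funding.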