Objective: To systematically review the literature on how statistical process control (with control charts as a core tool) has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers, and facilitating factors related to such application. Methods: A standardised data abstraction form was used to extract data relevant to the review questions, and the data were analysed thematically. Results: Statistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation, and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 factors that facilitate its application, all of which are fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example, asthma or diabetes mellitus to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task. This review catalogues 11 approaches to such smart application, including risk adjustment and data stratification. Conclusion: Statistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health.
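To make the chart mechanics concrete, below is a minimal Python sketch of an individuals (XmR) control chart, one of the simplest chart types used in such studies. The chart type, the function name xmr_limits, and the infection-rate data are illustrative assumptions, not taken from the review.

    # Minimal individuals (XmR) control chart sketch.
    # The 2.66 constant (3/d2, with d2 = 1.128 for moving ranges of size 2)
    # is standard SPC practice, not specific to the reviewed studies.

    def xmr_limits(values):
        """Centre line and control limits for an individuals chart."""
        mean = sum(values) / len(values)
        # Moving ranges: absolute differences between consecutive points.
        mrs = [abs(values[i] - values[i - 1]) for i in range(1, len(values))]
        mr_bar = sum(mrs) / len(mrs)
        return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

    # Made-up example: monthly infection rate per 1000 patient-days.
    # In practice, limits are usually set from a stable baseline period.
    data = [3.1, 2.9, 3.0, 3.2, 2.8, 3.1, 3.0, 2.9, 3.2, 6.5]
    centre, lcl, ucl = xmr_limits(data)
    for month, value in enumerate(data, start=1):
        flag = "  <-- special-cause signal" if value > ucl or value < lcl else ""
        print(f"month {month}: {value:.1f}{flag}")

Points falling outside the limits are flagged as special-cause variation, the signal that typically triggers investigation or, in the patient self-management case, a change in treatment.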
Lundberg, J., et al. What you look for is what you find - the consequences of underlying accident models in eight accident investigation manuals. Safety Sci. (2009), doi:10.1016/j.ssci.2009 (article in press).

Abstract: Accident investigation manuals are influential documents at various levels of a safety management system, and it is therefore important to appraise them in the light of what we currently know, or assume, about the nature of accidents. Investigation manuals necessarily embody or represent an accident model, i.e., a set of assumptions about how accidents happen and what the important factors are. In this paper we examine three aspects of accident investigation as described in a number of investigation manuals. Firstly, we focus on accident models, and in particular the assumptions about how different factors interact to cause, or prevent, accidents, i.e., the accident "mechanisms". Secondly, we focus on the scope, in the sense of the factors (or factor domains) that the models consider, for instance (hu)man, technology, and organisation (MTO). Thirdly, we focus on the system of investigation, i.e., the activities that together constitute an accident investigation process. We found that the manuals all used complex linear models. The factors considered were in general (hu)man, technology, organisation, and information.

Keywords: Accident investigation, accident models
This paper shows that existing software metrics tools interpret and implement the definitions of object-oriented software metrics differently. This yields tool-dependent metrics results and even affects the results of analyses based on those metrics. In short, the metrics-based assessment of a software system, and the measures taken to improve its design, differ considerably from tool to tool. To support our case, we conducted an experiment with a number of commercial and free metrics tools. We calculated metrics values using the same set of standard metrics for three software systems of different sizes. The measurements show that, for the same software system and the same metrics, the metrics values are tool dependent. We also defined a (simple) software quality model for "maintainability" based on the selected metrics. It defines a ranking of the classes that are most critical with respect to maintainability. The measurements show that even this ranking of classes in a software system is metrics-tool dependent.
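As a hypothetical illustration of how one definitional ambiguity produces tool-dependent values, the Python sketch below counts the methods of a class under two readings of a unit-weight WMC-style metric: locally declared methods only, versus declared plus inherited. The classes and both counting functions are invented for demonstration and do not reproduce any specific tool from the experiment.

    # Two readings of "number of methods" give different values
    # for the same class (all names here are invented).

    class Base:
        def save(self): ...
        def load(self): ...

    class Report(Base):
        def render(self): ...
        def export(self): ...

    def declared_methods(cls):
        """Reading A: only methods declared in the class itself."""
        return [name for name, attr in vars(cls).items() if callable(attr)]

    def visible_methods(cls):
        """Reading B: public methods, declared or inherited."""
        return [name for name in dir(cls)
                if not name.startswith("_") and callable(getattr(cls, name))]

    print("Tool A (declared only):   ", len(declared_methods(Report)))  # 2
    print("Tool B (incl. inherited): ", len(visible_methods(Report)))   # 4

A class ranking built on reading A can therefore differ from one built on reading B, which is the effect the experiment demonstrates at system scale.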
Analysing co-authored publications has become the standard way to measure research collaboration. At the same time, bibliometric researchers have advised that co-authorship-based indicators should be handled with care as a source of evidence on actual scientific collaboration. The aim of this study is to assess how well university-industry collaborations can be identified and described using co-authorship data. This is done through a comparison of co-authorship data with industrial funding to a medical university. In total, 436 companies were identified through the two methods. Our results show that one third of the companies that provided funding to the university had not co-authored any publications with it. Furthermore, the funding indicator identified only 16% of the companies that had co-authored publications. Thus, both the co-authorship and funding indicators provide incomplete results. We also observe a case of conflicting trends between the funding and co-authorship indicators. We conclude that uncritical use of the two indicators may lead to misinterpretation of the development of collaborations and thus provide incorrect data for decision-making.
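The overlap analysis described above reduces to set operations on the two company lists. The Python sketch below shows the logic with invented company names; the actual study compared 436 companies in total.

    # Comparing companies identified via two indicators (names invented).
    funding = {"AlphaPharma", "BetaMed", "GammaBio", "DeltaDx"}
    coauthorship = {"BetaMed", "GammaBio", "EpsilonRx", "ZetaLabs"}

    both = funding & coauthorship          # found by both indicators
    funding_only = funding - coauthorship  # funded, but no joint publications
    coauth_only = coauthorship - funding   # co-published, but no recorded funding

    print("both:", sorted(both))
    print("funding only:", sorted(funding_only))
    print("co-authorship only:", sorted(coauth_only))
    # Share of co-authoring companies the funding indicator also catches:
    print(f"funding coverage: {len(both) / len(coauthorship):.0%}")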