Background: Subtle abnormal motor signs can indicate serious neurological disease. Although neurological deficits require treatment to be initiated within a restricted time window, it is difficult for nonspecialists to detect and objectively assess the symptoms. In the clinical environment, diagnoses and decisions are based on clinical grading methods, including the National Institutes of Health Stroke Scale (NIHSS) score and the Medical Research Council (MRC) score, which are used to measure motor weakness. Objective grading across settings is necessary for consistent agreement among patients, caregivers, paramedics, and medical staff, so that diagnoses can be made rapidly and patients dispatched to appropriate medical centers. Objective: In this study, we aimed to develop an autonomous grading system for stroke patients. We investigated the feasibility of the system for assessing motor weakness and grading NIHSS and MRC scores of the 4 limbs, analogous to the clinical examinations performed by medical staff. Methods: We implemented an automatic grading system composed of a measuring unit with wearable sensors and a grading unit with optimized machine learning. Inertial sensors were attached to measure subtle weakness caused by paralysis of the upper and lower limbs. We collected 60 instances of data, comprising kinematic features of motor disorders from neurological examinations and demographic information, from stroke patients with NIHSS grades 0 or 1 and MRC grades 7, 8, or 9 in a stroke unit. A training set of 240 instances was generated using the synthetic minority oversampling technique (SMOTE) to compensate for the class imbalance and the small number of training examples. We trained 2 representative machine learning algorithms, an ensemble and a support vector machine (SVM), to implement auto-NIHSS and auto-MRC grading. The algorithms were optimized with 5-fold cross-validation and a Bayesian optimization search over 30 trials. The trained models were tested on the 60 original hold-out instances, and performance was evaluated in terms of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results: The proposed system graded NIHSS scores with an accuracy of 83.3% and an AUC of 0.912 using the optimized ensemble algorithm, and with an accuracy of 80.0% and an AUC of 0.860 using the optimized SVM. Auto-MRC grading achieved an accuracy of 76.7% and a mean AUC of 0.870 with the SVM classifier, and an accuracy of 78.3% and a mean AUC of 0.877 with the ensemble classifier. Conclusions: The automatic grading system quantifies proximal weakness in real time and assesses symptoms through automatic grading. These pilot outcomes demonstrate the feasibility of remote monitoring of motor weakness caused by stroke. The system can support consistent grading with instant assessment and expedite dispatch to appropriate hospitals and treatment initiation by sharing auto-NIHSS and auto-MRC scores, as an objective observation, between prehospital and in-hospital responders.
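The pipeline described in the Methods — minority oversampling, a 5-fold cross-validated hyperparameter search, then evaluation on the original hold-out instances — can be sketched in scikit-learn on toy data. All feature values, class sizes, and the hyperparameter grid below are illustrative stand-ins, and a plain grid search substitutes for the paper's 30-trial Bayesian optimization:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

def smote_like(X, y, minority_label, n_new):
    """Minimal SMOTE-style oversampling: synthesize new points by
    interpolating between random pairs of minority-class samples."""
    Xm = X[y == minority_label]
    synth = []
    for _ in range(n_new):
        a = Xm[rng.integers(len(Xm))]
        b = Xm[rng.integers(len(Xm))]
        synth.append(a + rng.random() * (b - a))
    return (np.vstack([X, synth]),
            np.concatenate([y, np.full(n_new, minority_label)]))

# Toy stand-in for the 60 kinematic-feature instances (binary grade).
X = rng.normal(size=(60, 5))
y = np.array([0] * 45 + [1] * 15)
X[y == 1] += 1.0                    # give the minority class some signal

# Oversample the minority class to balance the training set.
X_bal, y_bal = smote_like(X, y, minority_label=1, n_new=30)

# 5-fold CV hyperparameter search (grid search stands in for the
# paper's 30-trial Bayesian optimization).
search = GridSearchCV(SVC(probability=True),
                      {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
                      cv=5, scoring="roc_auc")
search.fit(X_bal, y_bal)

# Evaluate on the original instances, mirroring the hold-out test.
proba = search.predict_proba(X)[:, 1]
print(f"accuracy={accuracy_score(y, search.predict(X)):.3f}",
      f"AUC={roc_auc_score(y, proba):.3f}")
```

In the actual system, the features would come from the wearable inertial sensors plus demographics, and the search would be run separately for each grading task (auto-NIHSS and auto-MRC) and for each algorithm (SVM and ensemble).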
Objectives: This study evaluated the incidence of colorectal cancer (CRC) according to the number of metabolic syndrome (MetS) components. Methods: Using health checkup and insurance claims data of 6,365,409 subjects, the occurrence of CRC according to MetS stage was determined by sex, from the date of the 2009 health checkup until December 31, 2018. Results: The cumulative incidence rates (CIR) of CRC in men and women were 3.9 and 2.8 per 1000 (p < 0.001), respectively. The CIRs of CRC for the normal, pre-MetS, and MetS groups were 2.6, 3.9, and 5.5 per 1000 in men (p < 0.001) and 2.1, 2.9, and 4.5 per 1000 in women (p < 0.001), respectively. Compared with the normal group, after adjustment, the hazard ratio (HR) of CRC for the pre-MetS group was 1.25 (95% CI 1.17-1.33) in men and 1.09 (95% CI 1.02-1.17) in women, and the HR for the MetS group was 1.54 (95% CI 1.43-1.65) in men and 1.39 (95% CI 1.26-1.53) in women. Conclusions: MetS is a risk factor for CRC. Therefore, the prevention and active management of MetS would contribute to the prevention of CRC.
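A cumulative incidence rate per 1000 is simply events divided by the population at risk, scaled by 1000. The sketch below uses hypothetical group sizes chosen so the per-1000 rates match the values reported for men; these are not the study's raw counts, and the crude risk ratio computed here is only a rough unadjusted analogue of the adjusted Cox hazard ratios the paper reports:

```python
# Hypothetical counts for illustration only (chosen to reproduce the
# reported per-1000 CIRs in men; NOT the study's raw data).
groups = {
    "normal":   {"n": 1_000_000, "crc_cases": 2_600},
    "pre-MetS": {"n": 1_000_000, "crc_cases": 3_900},
    "MetS":     {"n": 1_000_000, "crc_cases": 5_500},
}

# Cumulative incidence rate (CIR) per 1000 subjects over follow-up.
cir = {g: v["crc_cases"] / v["n"] * 1_000 for g, v in groups.items()}

# Crude risk ratio vs the normal group; the paper's HRs additionally
# adjust for covariates via a Cox proportional-hazards model.
risk_ratio = {g: cir[g] / cir["normal"] for g in groups}

print(cir)
print(risk_ratio)
```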
Background: The purpose of this study was to determine the benefits and limitations of screening for breast cancer using mammography. Methods: A descriptive design with follow-up was used. Breast cancer screening data and health insurance claims data were analyzed. The study population consisted of all participants in breast cancer screening from 2009 to 2014. The crude detection rate, positive predictive value, sensitivity, and specificity of breast cancer screening, and the incidence rate of interval breast cancer, were calculated. Results: The crude detection rate of breast cancer screening per 100,000 participants increased from 126.3 in 2009 to 182.1 in 2014. The positive predictive value per 100,000 positives increased from 741.2 in 2009 to 1,367.9 in 2014. The incidence rate of interval breast cancer per 100,000 negatives increased from 51.7 in 2009 to 76.3 in 2014. The sensitivity of breast cancer screening was 74.6% in 2009 and 75.1% in 2014, and the specificity was 83.1% in 2009 and 85.7% in 2014. Conclusions: To increase the detection rate of breast cancer screening using mammography, the participation rate should be raised, an environment in which accurate mammography and reading can be performed should be established, and quality control should be reinforced. To reduce the incidence of interval breast cancer, women from their 20s onward should be educated to perform breast self-examination once a month, regardless of their participation in breast cancer screening.
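All five indicators in this abstract derive from the same 2x2 table of screening result versus cancer status. A minimal sketch of the standard definitions, with purely hypothetical counts:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-program indicators from a 2x2 table:
    tp = screen-detected cancers, fp = false positives,
    fn = interval cancers after a negative screen, tn = true negatives."""
    return {
        # cancers found per 100,000 screening participants
        "crude_detection_per_100k": tp / (tp + fp + fn + tn) * 100_000,
        # cancers per 100,000 positive screens
        "ppv_per_100k_positives": tp / (tp + fp) * 100_000,
        # interval cancers per 100,000 negative screens
        "interval_cancer_per_100k_negatives": fn / (fn + tn) * 100_000,
        "sensitivity_pct": tp / (tp + fn) * 100,
        "specificity_pct": tn / (tn + fp) * 100,
    }

# Hypothetical counts for illustration only (not the study's data).
m = screening_metrics(tp=750, fp=54_000, fn=250, tn=445_000)
print(m)
```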
External quality assessment (EQA) is a commonly used tool to track the performance of laboratory tests. In Korea, EQA participation is not mandatory, and even basic data about EQA participation are not available. We used data from a 10-year period (2009–2018) extracted from two databases: (1) the database of the National Health Insurance Service, to calculate the number of medical institutions that claimed health insurance benefits, and (2) the database of the Korean Association of External Quality Assessment Service, to calculate the number of medical institutions participating in EQA. The proportions of institutions that made claims for laboratory testing over the 10 years were 73.6%–76.0% for clinics, 91.9%–97.5% for long-term care hospitals, 97.9%–99.5% for small to medium hospitals, 99.6%–100% for general hospitals, and 100% for tertiary hospitals. The mean EQA participation rate of institutions that performed laboratory testing over the 10 years was 1.9% for clinics, 3.1% for long-term care hospitals, 27.7% for small to medium hospitals, 96.6% for general hospitals, and 100% for tertiary hospitals. The mean EQA participation rates of clinics, long-term care hospitals, and small to medium hospitals are increasing but are still insufficient. Regulatory approaches are needed to increase participation rates. These results can inform health policymaking on the quality improvement of laboratory tests.
IoT technology is used in various industries, including the manufacturing, energy, finance, education, transportation, smart home, and medical fields. In the medical field, IoT applications can provide high-quality medical services through the efficient management of patients and mobile assets in hospitals. In this paper, we introduce an Internet of Medical Things (IoMT) system that uses Sigfox, a low-power wide-area communication network, for indoor location monitoring in a hospital. A proof of concept (PoC) was implemented to evaluate its effectiveness for medical device and patient safety management, since specific requirements must be considered when applying an IoMT system in a hospital environment. In this study, the locations and temperatures of various targets sending signals to the monitoring system over three different networks (Sigfox, Hospital, and Non-Hospital) were collected and compared with ground-truth data; the average accuracies were 69.2%, 72.5%, and 83.3%, respectively. This paper demonstrates the significance of applying an IoMT system over the Sigfox network in a Korean hospital setting, compared with existing hospital networks.
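The evaluation described above reduces to a per-network match rate: the fraction of readings that agree with the ground truth. A minimal sketch with made-up readings (the network names follow the study; the location labels and data are invented for illustration):

```python
def match_rate(reported, truth):
    """Fraction of readings whose reported value equals the ground truth."""
    assert len(reported) == len(truth)
    hits = sum(r == t for r, t in zip(reported, truth))
    return hits / len(reported)

# Hypothetical ground-truth locations and per-network reports.
truth = ["ward-A", "ward-B", "ward-A", "lobby", "ward-B"]
readings = {
    "Sigfox":       ["ward-A", "ward-B", "lobby",  "lobby", "ward-A"],
    "Hospital":     ["ward-A", "ward-B", "ward-A", "ward-A", "ward-A"],
    "Non-Hospital": ["ward-A", "ward-B", "ward-A", "lobby", "ward-A"],
}
accuracy = {net: match_rate(vals, truth) for net, vals in readings.items()}
print(accuracy)
```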
To compare the epidemiological characteristics of a breast cancer screening program between patients 40–69 years of age and those ≥70 years of age, we calculated the age-standardized detection rate of the breast cancer screening program and compared it with the age-standardized incidence rate from the Korea Central Cancer Registry. Data from the breast cancer screening program from January 2009 to December 2016 and health insurance claims from January 2006 to August 2017 were used. In the 40–69 year age group, the age-standardized detection rate of breast cancer increased annually, from 106.1 in 2009 to 158.6 in 2015, and did not differ from the age-standardized incidence rate. In the ≥70 year age group, the age-standardized detection rate increased annually, from 65.7 in 2009 to 120.3 in 2015, and was 1.9- to 2.7-fold the age-standardized incidence rate. This shows that the early-detection effect of breast cancer screening was greater for patients over 70 years old. Further studies are needed to evaluate the effect of breast cancer detection in the ≥70 year age group on all-cause mortality and breast cancer mortality.
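Direct age standardization, the usual basis for rates like those above, weights each age band's rate by a standard population's age distribution so that groups with different age structures become comparable. A minimal sketch with hypothetical rates and weights (three invented age bands, not the study's strata):

```python
def age_standardized_rate(age_specific_rates, standard_weights):
    """Direct standardization: weighted sum of age-specific rates,
    weighted by a standard population's age distribution."""
    assert abs(sum(standard_weights) - 1.0) < 1e-9
    return sum(r * w for r, w in zip(age_specific_rates, standard_weights))

# Hypothetical age-specific detection rates (per 100,000) for three
# age bands, and a hypothetical standard-population distribution.
rates = [80.0, 150.0, 120.0]
weights = [0.5, 0.3, 0.2]
print(age_standardized_rate(rates, weights))  # 0.5*80 + 0.3*150 + 0.2*120
```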
Korea introduced a new diagnosis-related group (NDRG) system, a mixed-bundle reimbursement scheme. We evaluated the effects of NDRGs on laboratory test quality by analyzing three years of data (2016–2018) from the Korean Association of External Quality Assessment Service (KEQAS). A total of 42 NDRG-participating hospitals (CASE), 84 non-participating hospitals of similar size (CON-1), and 42 tertiary hospitals (CON-2) were included. We took the proportion of KEQAS results with a standard deviation index (SDI) greater than 2 as a bad laboratory quality marker (BLQM). CASE BLQMs were lower than CON-1 BLQMs for more than 2 years in alkaline phosphatase (ALP), alanine aminotransferase (ALT), chloride, glucose, sodium, and total protein, and higher in creatinine. CASE BLQMs were higher than CON-2 BLQMs for more than 2 years in ALP, chloride, creatinine, glucose, lactate dehydrogenase (LDH), phosphorus, potassium, sodium, total calcium, total cholesterol, triglyceride, and uric acid. Mean SDIs for general chemistry tests did not differ significantly by NDRG participation. However, the NDRG is currently a pilot program that compensates each institution's reimbursement based on the fee-for-service system, and most participants were public hospitals. Thus, the effects of NDRGs on laboratory test quality should be re-evaluated after the NDRG program has stabilized and more private hospitals participate.
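The SDI and BLQM quantities used in this study are straightforward to compute: the SDI expresses how far a laboratory's EQA result sits from the peer-group mean in peer-group SD units, and the BLQM is the fraction of results beyond a threshold. A minimal sketch with hypothetical glucose results (the use of the absolute SDI and the peer statistics here are illustrative assumptions):

```python
def sdi(result, peer_mean, peer_sd):
    """Standard deviation index: distance of a lab's EQA result from
    the peer-group mean, in units of the peer-group SD."""
    return (result - peer_mean) / peer_sd

def blqm(results, peer_mean, peer_sd, threshold=2.0):
    """Proportion of results whose |SDI| exceeds the threshold --
    the 'bad laboratory quality marker' of the study (absolute value
    assumed here)."""
    flagged = [r for r in results
               if abs(sdi(r, peer_mean, peer_sd)) > threshold]
    return len(flagged) / len(results)

# Hypothetical glucose EQA results (mg/dL) against a peer mean of 100
# and SD of 5: values outside 90-110 are flagged.
results = [98, 101, 112, 95, 88, 103, 100, 109, 120, 99]
print(blqm(results, peer_mean=100, peer_sd=5))
```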