Background: As more researchers turn to big data for new opportunities in biomedical discovery, machine learning models, the backbone of big data analysis, are mentioned increasingly often in biomedical journals. However, owing to their inherent complexity, machine learning methods are prone to misuse, and because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs.
Objective: To attain a set of guidelines on the use of machine learning predictive models within clinical settings, ensuring that the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence.
Methods: A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method.
Results: The process produced a set of guidelines comprising (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models.
Conclusions: A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community.
The use of a transanal drainage tube in anterior resection for rectal cancer may be a simple, safe, and effective means of preventing or reducing the occurrence of anastomotic leakage and bleeding. A larger-scale single- or multi-center prospective randomized study, or a meta-analysis including similar studies, is necessary for further elucidation of this issue.
Background: To date, our ability to accurately identify patients at high risk of suicidal behaviour, and thus to target interventions, has been fairly limited. This study examined a large pool of factors potentially associated with suicide risk, drawn from a comprehensive electronic medical record (EMR), to derive a predictive model for 1–6 month risk.
Methods: 7,399 patients undergoing suicide risk assessment were followed up for 180 days. The dataset was divided into derivation and validation cohorts of 4,911 and 2,488 patients, respectively. Clinicians used an 18-point checklist of known risk factors to classify patients as low, medium, or high risk. Their predictive ability was compared with that of a risk stratification model derived from the EMR data. The model was based on the continuation-ratio ordinal regression method coupled with lasso (least absolute shrinkage and selection operator).
Results: In the year prior to suicide assessment, 66.8% of patients attended the emergency department (ED) and 41.8% had at least one hospital admission. Administrative and demographic data, along with information on prior self-harm episodes and mental and physical health diagnoses, were predictive of high-risk suicidal behaviour. Clinicians using the 18-point checklist were relatively poor at predicting patients at high risk within 3 months (AUC 0.58, 95% CI: 0.50–0.66). The model derived from the EMR was superior (AUC 0.79, 95% CI: 0.72–0.84). At a specificity of 0.72 (95% CI: 0.70–0.73), the EMR model had a sensitivity of 0.70 (95% CI: 0.56–0.83).
Conclusion: Predictive models applied to data from the EMR could improve risk stratification of patients presenting with potential suicidal behaviour. The predictive factors include known risks for suicide, but also other information relating to general health and health service utilisation.
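The EMR model above pairs continuation-ratio ordinal regression with the lasso penalty. As a rough, self-contained illustration of how lasso-style (L1) regularisation selects a sparse subset of predictors and is then scored by AUC, here is a minimal sketch using scikit-learn's L1-penalised logistic regression on synthetic data; it is a binary stand-in, not the study's ordinal model, and every feature and number below is a placeholder:

```python
# Sketch: L1 (lasso-penalised) logistic regression as a binary stand-in
# for a lasso-regularised risk model; data is synthetic, not EMR data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 2000, 50                              # patients, candidate features
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:5] = [1.5, -1.2, 0.8, 0.7, -0.6]      # only 5 features carry signal
y = (X @ true_w + rng.normal(size=n) > 0).astype(int)  # high-risk indicator

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
n_selected = int(np.sum(model.coef_ != 0))    # lasso shrinks most weights to 0
print(f"AUC={auc:.2f}, features retained={n_selected}/{p}")
```

The L1 penalty drives most coefficients exactly to zero, which is why lasso-type models are attractive when screening a large pool of candidate EMR predictors.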
We report a high-throughput and label-free computational imaging technique that simultaneously measures, in three-dimensional (3D) space, the locomotion and angular spin of the freely moving heads of microswimmers and the beating patterns of their flagella, over a sample volume more than two orders of magnitude larger than those of existing optical modalities. Using this platform, we quantified the 3D locomotion of 2,133 bovine sperm and determined the spin axis and angular velocity of the sperm head, providing the perspective of an observer seated at the moving and spinning sperm head. In this constantly transforming perspective, flagellum-beating patterns are decoupled from both the 3D translation and the spin of the head, making it possible to investigate the true 3D spatio-temporal kinematics of the flagellum. In addition to providing unprecedented information on the 3D locomotion of microswimmers, this computational imaging technique could be instrumental for micro-robotics and sensing research, enabling high-throughput quantification of the impact of various stimuli and chemicals on the 3D swimming patterns of sperm, motile bacteria, and other micro-organisms, and generating new insights into taxis behaviors and the underlying biophysics.
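The "observer seated at the sperm head" is, at bottom, a coordinate transform: subtract the head's translation and undo its rotation at each time step, leaving only the flagellum's intrinsic beating. A minimal sketch of that decoupling on a synthetic trajectory (all positions, spin rates, and beat frequencies below are made-up placeholders, not the paper's data):

```python
# Sketch: remove head translation + spin to recover the flagellum's
# beating pattern in the head's own frame. Trajectory is synthetic.
import numpy as np
from scipy.spatial.transform import Rotation as R

t = np.linspace(0, 1, 50)                        # time (s), placeholder
head_pos = np.stack([5 * t, np.sin(2 * np.pi * t), np.zeros_like(t)], axis=1)
head_rot = R.from_euler("x", 2 * np.pi * 3 * t)  # head spinning at 3 rev/s

# A flagellum point that, in the head frame, beats sinusoidally in y:
point_head_frame = np.stack(
    [np.full_like(t, -2.0),
     0.5 * np.sin(2 * np.pi * 10 * t),
     np.zeros_like(t)], axis=1)
point_lab = head_pos + head_rot.apply(point_head_frame)  # lab-frame observation

# Undo translation and spin to recover the intrinsic beating pattern:
recovered = head_rot.inv().apply(point_lab - head_pos)
print(np.allclose(recovered, point_head_frame))  # True: motion decoupled
```

Since `recovered = R⁻¹((x + R·p) − x) = p`, the translation and spin cancel exactly, which is the essence of the constantly transforming perspective described above.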
Deep learning (DL) is a disruptive technology that has changed the landscape of cyber security research. Deep learning models have many advantages over traditional machine learning (ML) models, particularly when a large amount of data is available. Android malware detection and classification qualify as a big data problem because of the rapidly growing volume of Android malware, the obfuscation of Android malware, and the need to protect the high-value data assets stored on Android devices. Applying DL to Android malware detection therefore seems a natural choice. However, challenges remain for researchers and practitioners, such as the choice of DL architecture, feature extraction and processing, performance evaluation, and even gathering adequate data of high quality. In this survey, we aim to address these challenges by systematically reviewing the latest progress in DL-based Android malware detection and classification. We organize the literature according to DL architecture, including fully connected networks (FCN), convolutional neural networks (CNN), recurrent neural networks (RNN), deep belief networks (DBN), autoencoders (AE), and hybrid models. The goal is to reveal the research frontier, with a focus on representing code semantics for Android malware detection. We also discuss the challenges in this emerging field and provide our view of future research opportunities and directions.
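As a toy illustration of the simplest architecture family surveyed, a fully connected network (FCN), the sketch below trains scikit-learn's `MLPClassifier` on synthetic binary permission/API-call indicator vectors. Real detectors extract such features from APKs (e.g. manifest permissions); here the features, labels, and the "malicious pattern" are all fabricated for the example:

```python
# Sketch: a small fully connected network over binary permission/API
# indicator features for malware vs benign classification. Synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_apps, n_feats = 1500, 200                  # apps, binary feature indicators
X = rng.integers(0, 2, size=(n_apps, n_feats)).astype(float)

# Hypothetical ground truth: malware tends to request a specific subset
# of the first 10 permissions/APIs (purely an assumption for this demo).
risk_score = X[:, :10].sum(axis=1)
y = (risk_score + rng.normal(scale=1.0, size=n_apps) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=1)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Sequence-oriented architectures in the survey (CNN, RNN) would instead consume ordered opcode or API-call sequences rather than a flat indicator vector; this sketch only shows the FCN case.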
Facial Expression Recognition (FER) can be applied to a wide range of research areas, such as mental disease diagnosis and human social/physiological interaction detection. With emerging advances in hardware and sensors, FER systems are being developed to support real-world application scenarios rather than laboratory environments. Although laboratory-controlled FER systems achieve very high accuracy, around 97%, transferring the technology from the laboratory to real-world applications faces a great barrier of very low accuracy, approximately 50%. In this survey, we comprehensively discuss three significant challenges of unconstrained real-world environments, namely illumination variation, head pose, and subject-dependence, which may not be resolved by analysing images/videos alone in the FER system. We focus on sensors that may provide extra information and help FER systems detect emotion in both static images and video sequences. We introduce three categories of sensors that may improve the accuracy and reliability of an expression recognition system by tackling the challenges mentioned above in pure image/video processing. The first group is detailed-face sensors, which detect small dynamic changes of a face component; eye-trackers, for example, may help distinguish background noise from facial features. The second is non-visual sensors, such as audio, depth, and EEG sensors, which provide extra information in addition to the visual dimension and improve recognition reliability, for example in situations of illumination variation and position shift. The last is target-focused sensors, such as infrared thermal sensors, which can help FER systems filter out useless visual content and may resist illumination variation. We also discuss methods of fusing the different inputs obtained from multimodal sensors in an emotion system.
We comparatively review the most prominent multimodal emotional expression recognition approaches and point out their advantages and limitations. We briefly introduce the benchmark data sets related to FER systems for each category of sensors, and extend our survey to open challenges and issues. We also design a framework for an expression recognition system that uses multimodal sensor data (provided by the three categories of sensors) to provide complete information about emotions, assisting pure face image/video analysis. We theoretically analyse the feasibility and achievability of our new expression recognition system, especially for use in in-the-wild environments, and point out future directions for designing an efficient emotional expression recognition system.
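One common way to combine the multimodal inputs discussed above is decision-level (late) fusion: per-modality classifiers are trained separately and their class probabilities are averaged. A minimal sketch on synthetic "visual" and "audio" features (the feature dimensions, noise levels, and two-class setup are all assumptions made for illustration, not taken from any FER benchmark):

```python
# Sketch: decision-level (late) fusion of two modality-specific
# classifiers by averaging predicted probabilities. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1200
y = rng.integers(0, 2, size=n)               # e.g. positive vs negative affect
X_visual = y[:, None] + rng.normal(scale=2.0, size=(n, 20))  # noisy face feats
X_audio = y[:, None] + rng.normal(scale=2.0, size=(n, 12))   # noisy audio feats

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=2)
clf_v = LogisticRegression(max_iter=1000).fit(X_visual[idx_tr], y[idx_tr])
clf_a = LogisticRegression(max_iter=1000).fit(X_audio[idx_tr], y[idx_tr])

p_v = clf_v.predict_proba(X_visual[idx_te])[:, 1]
p_a = clf_a.predict_proba(X_audio[idx_te])[:, 1]
fused = (p_v + p_a) / 2                      # simple probability averaging
acc_v = np.mean((p_v > 0.5) == y[idx_te])
acc_fused = np.mean((fused > 0.5) == y[idx_te])
print(f"visual-only: {acc_v:.2f}, fused: {acc_fused:.2f}")
```

Because the modalities' noise sources differ (e.g. illumination affects only the visual stream), averaging their independent predictions typically yields a more reliable decision than either modality alone, which is the intuition behind the multimodal framework proposed above. Feature-level (early) fusion, concatenating the modality features before a single classifier, is the main alternative.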