Objective: To examine the risk of stroke in relation to quality of hypertension control in routine general practice across an entire health district.
Design: Population based matched case-control study.
Setting: East Lancashire Health District, with a participating population of 388 821 aged ≤ 80.
Subjects: Cases were patients under 80 with their first stroke, identified from a population based stroke register between 1 July 1994 and 30 June 1995. For each case, two controls matched for age and sex were selected from the same practice register. Hypertension was defined as systolic blood pressure ≥ 160 mm Hg or diastolic blood pressure ≥ 95 mm Hg, or both, on at least two occasions within any three month period, or any history of treatment with antihypertensive drugs.
Main outcome measures: Prevalence of hypertension, quality of hypertension control (assessed from the mean blood pressure recorded before the stroke), and odds ratios for stroke (derived from conditional logistic regression).
Results: Records of 267 cases and 534 controls were examined; 61% and 42% of these subjects respectively were hypertensive. Compared with non-hypertensive subjects, treated hypertensive patients whose average pre-event systolic blood pressure was controlled to < 140 mm Hg had an adjusted odds ratio for stroke of 1.3 (95% confidence interval 0.6 to 2.7). Those fairly well controlled (140-149 mm Hg), moderately controlled (150-159 mm Hg), poorly controlled (≥ 160 mm Hg), or untreated had progressively raised odds ratios of 1.6, 2.2, 3.2, and 3.5 respectively. Results for diastolic pressure were similar; both findings were independent of initial pressures before treatment. Around 21% of strokes were thus attributable to inadequate control with treatment, or 46 first events yearly per 100 000 population aged 40-79.
Conclusions: Risk of stroke was clearly related to the quality of blood pressure control with treatment. In routine practice, consistent control of blood pressure to below 150/90 mm Hg seems to be required for optimal stroke prevention.
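The "21% of strokes attributable to inadequate control" figure is a population attributable fraction. As a minimal sketch of how such a fraction can be derived from an exposure prevalence and a relative risk estimate (Levin's formula, with the odds ratio used as an approximation to relative risk), the following uses purely illustrative numbers, not the study's actual inputs:

```python
def attributable_fraction(prevalence: float, odds_ratio: float) -> float:
    """Levin's population attributable fraction: the share of cases that
    would be avoided if the exposed group had the baseline risk."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)

# Illustrative only: if 10% of the population were treated but poorly
# controlled hypertensives carrying an odds ratio of 3.2 (the study's
# estimate for that group), the attributable fraction would be about 18%.
paf = attributable_fraction(0.10, 3.2)
print(round(paf, 3))
```

The study's 21% combines several inadequately controlled strata, so this single-stratum calculation is only a sketch of the arithmetic involved.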
Probabilistic topic models are statistical methods that aim to discover the latent structure of a large collection of documents. The intuition behind topic models is that documents are generated from latent topics, so the word distribution of each topic can be modelled and the prior distribution over topics learned. In this paper we propose to apply this concept to the aspect detection problem in review documents by modelling the topics of sentences, in order to improve sentiment analysis systems. Aspect detection in sentiment analysis helps customers navigate efficiently to detailed information about the product features they are interested in. The proposed approach assumes that the aspects of the words in a sentence form a Markov chain. The novelty of the model is the extraction of multiword aspects from text data while relaxing the bag-of-words assumption. Experimental results show that the model performs the task significantly better than standard topic models.
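The key idea — a Markov chain over per-word aspect states, which lets adjacent words share an aspect and form multiword aspect phrases — can be sketched with a toy Viterbi decoder. This is not the paper's model; the aspects, vocabulary, and all probabilities below are invented for illustration:

```python
import numpy as np

# Toy states and vocabulary (hypothetical, for illustration only).
aspects = ["BATTERY", "SCREEN", "OTHER"]
vocab = {"battery": 0, "life": 1, "screen": 2, "is": 3, "great": 4}

# Transition matrix: staying in the same aspect is likely, which
# encourages contiguous multiword aspect spans such as "battery life".
trans = np.array([[0.7, 0.1, 0.2],
                  [0.1, 0.7, 0.2],
                  [0.3, 0.3, 0.4]])
start = np.array([0.4, 0.4, 0.2])

# Per-aspect word distributions (each row sums to 1).
emit = np.array([[0.45, 0.45, 0.02, 0.04, 0.04],   # BATTERY
                 [0.02, 0.05, 0.85, 0.04, 0.04],   # SCREEN
                 [0.05, 0.05, 0.05, 0.45, 0.40]])  # OTHER

def viterbi(words):
    """Most likely aspect sequence for a sentence under the toy model."""
    obs = [vocab[w] for w in words]
    delta = start * emit[:, obs[0]]          # best score ending in each state
    back = []                                # best predecessor per state
    for o in obs[1:]:
        scores = delta[:, None] * trans * emit[None, :, o]
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0)
    path = [int(delta.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    path.reverse()
    return [aspects[s] for s in path]

print(viterbi(["battery", "life", "is", "great"]))
```

Here the decoder labels "battery" and "life" with the same aspect state, illustrating how relaxing the bag-of-words assumption allows a multiword aspect ("battery life") to be recovered as one contiguous span.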
Background: One of the most accurate methods for identifying disease-causing genes is monitoring gene expression values in different samples using microarray technology. A shortcoming of microarray data is that they provide few samples relative to the number of genes. This reduces the classification accuracy of analysis methods, so gene selection is essential both to improve predictive accuracy and to identify potential marker genes for a disease. Among the many existing methods for gene selection, support vector machine-based recursive feature elimination (SVMRFE) has become one of the leading approaches, but its performance can suffer because of the small sample size, noisy data, and the fact that it does not remove redundant genes.
Methods: We propose a novel framework for gene selection that uses the advantageous features of conventional methods while addressing their weaknesses. Specifically, we combine the Fisher method and SVMRFE to exploit the advantages of a filtering method as well as an embedded method, and we add a redundancy reduction stage to address a weakness shared by the Fisher method and SVMRFE. In addition to gene expression values, the proposed method uses Gene Ontology, a reliable source of information on genes. Using Gene Ontology can compensate, in part, for the limitations of microarrays, such as the small number of samples and erroneous measurements.
Results: The proposed method was applied to colon, diffuse large B-cell lymphoma (DLBCL), and prostate cancer datasets. The empirical results show that our method improves classification performance in terms of accuracy, sensitivity, and specificity. In addition, study of the molecular function of the selected genes strengthened the hypothesis that these genes are involved in the process of cancer growth.
Conclusions: The proposed method addresses the weaknesses of conventional methods by adding a redundancy reduction stage and by using Gene Ontology information. It predicts marker genes for colon, DLBCL, and prostate cancer with high accuracy. The predictions made in this study can serve as a list of candidates for subsequent wet-lab verification and may help in the search for a cure for cancers.
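The filter-then-embedded combination described above can be sketched as a two-stage pipeline: a Fisher-score filter keeps the highest-scoring genes, then linear-SVM recursive feature elimination ranks the survivors. This is a simplified illustration on synthetic data, not the paper's full framework (it omits the redundancy reduction and Gene Ontology stages):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

# Synthetic stand-in for a microarray matrix (samples x genes):
# 40 samples, 200 "genes", with the first five genes made informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :5] += 2.0

def fisher_scores(X, y):
    """Per-gene Fisher criterion: between-class over within-class spread."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

# Stage 1 (filter): keep the 50 genes with the highest Fisher score.
keep = np.argsort(fisher_scores(X, y))[::-1][:50]

# Stage 2 (embedded): recursive feature elimination with a linear SVM,
# reducing the filtered set down to 10 genes.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X[:, keep], y)
selected = sorted(keep[rfe.support_])
print(selected)
```

On this synthetic data the informative genes (indices 0-4) dominate the selected set; with real expression data, the filtered pool size and the final gene count would be tuned by cross-validation.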
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that performs robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic, real-time, video-based surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. The work is divided into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using a Gaussian function. A shape model based on the OMEGA equation was then used to filter the detected objects into human and non-human. For activity analysis, we classified the human activities of the detected objects into two groups, normal and abnormal, using a support vector machine; the system then raises an automatic warning when abnormal human activity is detected. The system also includes a content-based image retrieval method for retrieving detected objects from the database for recognition and identification. Finally, a software simulation was performed in MATLAB, and the experimental results showed a surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
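The temporal-differencing step at the start of the pipeline can be sketched in a few lines: pixels whose absolute difference between consecutive frames exceeds a threshold are flagged as motion. This is a generic sketch (in Python rather than the paper's MATLAB), assuming grayscale frames as 2-D arrays and a hypothetical threshold:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Temporal differencing: mark pixels whose inter-frame absolute
    difference exceeds the threshold as moving."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy frames: a bright 3x3 "object" moves one pixel to the right,
# so only its leading and trailing edges register as motion.
f0 = np.zeros((10, 10), dtype=np.uint8)
f1 = np.zeros((10, 10), dtype=np.uint8)
f0[4:7, 2:5] = 200
f1[4:7, 3:6] = 200
mask = motion_mask(f0, f1)
print(int(mask.sum()))  # number of pixels flagged as moving
```

In a full pipeline the binary mask would then be smoothed (e.g. with the Gaussian step mentioned above) and grouped into connected regions before the shape-based human/non-human filtering.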