Clinical Decision Support systems link health observations with health knowledge to help clinicians choose care options and improve health care. The core idea of a Clinical Decision Support System is a set of rules, derived from medical professionals, applied to a dynamic knowledge base. Data mining is well suited to providing decision support in healthcare, and several classification techniques are available for building clinical decision support systems; different techniques suit different diagnoses. In this paper, various classification techniques for clinical decision support systems are discussed with examples.
Classification is a machine learning technique used to categorize input patterns into classes. Selecting the best classifier for a given dataset is one of the critical issues in classification. Using a cross-validation approach, candidate algorithms can be applied to a given dataset and the best classifier selected by considering various classification evaluation measures, but the computational cost is significant. Meta Learning automates this process by acquiring knowledge, in the form of meta-features and the performance of candidate algorithms on known datasets, and storing it in a Meta Knowledge Base. Once the Meta Knowledge Base is generated, the system uses k-Nearest Neighbor as a Meta Learner to identify the datasets most similar to a new dataset. However, generating meta-examples is costly because a large number of candidate algorithms and datasets with different characteristics are involved. Therefore, Active Learning is incorporated into the Meta Learning system to reduce the number of meta-examples generated while maintaining the performance of the candidate algorithms. Once the training phase based on this Active Meta Learning approach is complete, a ranking is produced with the Success Rate Ratio (SRR) method, which uses accuracy as the performance evaluation measure.
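The SRR ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes accuracy is the success rate, computes the pairwise ratio of success rates averaged over datasets, and scores each algorithm by its mean ratio against all others (the function name and matrix layout are hypothetical).

```python
import numpy as np

def srr_ranking(acc, names):
    """Rank candidate algorithms with a Success Rate Ratio (SRR) scheme.

    acc   : 2-D array, acc[i, d] = accuracy (success rate) of algorithm i
            on dataset d.
    names : list of algorithm names, one per row of acc.
    Returns names sorted from best to worst.
    """
    acc = np.asarray(acc, dtype=float)
    n = acc.shape[0]
    # Pairwise SRR: ratio of success rates, averaged over all datasets.
    srr = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                srr[i, j] = np.mean(acc[i] / acc[j])
    # Overall score of an algorithm: mean of its ratios against the others
    # (subtract the diagonal 1.0 before averaging over the n-1 opponents).
    scores = (srr.sum(axis=1) - 1.0) / (n - 1)
    order = np.argsort(-scores)
    return [names[k] for k in order]
```

For example, `srr_ranking([[0.9, 0.8], [0.6, 0.7], [0.8, 0.75]], ["A", "B", "C"])` ranks A first, since its accuracy dominates on both datasets.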
Data mining is becoming increasingly popular and vital to healthcare organizations: it finds useful patterns in complex data and transforms them into beneficial information for decision making. The latest statistics from WHO and UNICEF show that approximately 55,000 women die annually from preventable pregnancy-related causes in India. Therefore, the current focus of health care researchers is to promote the use of e-health technology in developing countries. Many studies have applied data mining methods to address health care limitations in the obstetrics and maternal care domain, covering high-risk pregnancy, prediction of preeclampsia, identification of obstetric risk factors, discovery of risk factors for preterm birth, and prediction of risk pregnancy in women undergoing voluntary interruption of pregnancy. This paper provides a survey and analysis of data mining methods that have been applied to the maternal care domain.
Falling is often accepted as a natural part of the aging process. Elderly people are typically more unsteady and frailer, and are therefore more likely than younger individuals to fall and be injured. Falls can have a serious effect both on the quality of life of elderly people and on health and social care costs. In this paper, we give a comprehensive survey of different wearable sensors for fall detection and their underlying algorithms, comparing their strengths and weaknesses. We conclude with a discussion of fall detection techniques and the pros and cons of wearing a wearable device for fall detection.
IoT Enabled Wearable Camera aims to build the world's smallest wearable camera that can capture video clips. The hardware is a wearable camera device that users can mount on sunglasses, goggles, and helmets using magnets built into the camera, and it records video at high resolution. A docking station both charges the camera and transfers the recorded video wirelessly; over Wi-Fi, the stored video can be uploaded directly to the cloud. Through a mobile application, users can retrieve the audio and video directly from the cloud. The main aim is to develop the smallest possible wearable camera with enough capabilities to meet everyday needs.
Cross-modal retrieval (CMR) has attracted much attention in the research community because it enables flexible and comprehensive retrieval. The core challenge in CMR is the heterogeneity gap, which arises from the different statistical properties of multi-modal data. The most common way to bridge the heterogeneity gap is representation learning, which generates a common sub-space. In this work, we propose a framework called “Improvement of Deep Cross-Modal Retrieval (IDCMR)”, which generates real-valued representations. IDCMR preserves both intra-modal and inter-modal similarity: intra-modal similarity is preserved by selecting an appropriate training model for the text and image modalities, and inter-modal similarity is preserved by reducing a modality-invariance loss. Mean average precision (mAP) is used as the performance measure in the CMR system. Extensive experiments show that IDCMR outperforms state-of-the-art methods by margins of 4% and 2% in mAP on the text-to-image and image-to-text retrieval tasks on the MSCOCO and XMedia datasets, respectively.
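The mAP measure used to evaluate both retrieval systems above can be sketched in a few lines. This is the standard definition, not code from the paper: for each query, precision is averaged over the ranks at which relevant items appear, and mAP is the mean of those per-query values (function names are illustrative).

```python
def average_precision(relevance):
    """AP for one query: `relevance` is a 0/1 list over the ranked results,
    1 meaning the result at that rank is relevant to the query."""
    hits, precisions = 0, []
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(all_relevance):
    """mAP: the mean of per-query average precisions."""
    return sum(average_precision(r) for r in all_relevance) / len(all_relevance)
```

For instance, a query whose relevant items appear at ranks 1 and 3 has AP = (1/1 + 2/3) / 2 = 5/6.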
The tremendous proliferation of multi-modal data and the flexible needs of users have drawn attention to the field of Cross-Modal Retrieval (CMR), which can perform image-sketch, text-image, audio-video, and near infrared-visual image matching. Such retrieval is useful in many applications, such as criminal investigation, recommendation systems, and person re-identification. The real challenge in CMR is to preserve semantic similarities between the various modalities of data. To preserve semantic similarities, existing deep learning-based approaches use pairwise labels and generate binary-valued representations, which provide fast retrieval with low storage requirements; however, the relative similarity between heterogeneous data is ignored. The objective of this work is therefore to reduce the modality gap by preserving relative semantic similarities among the modalities. A model named "Deep Cross-Modal Retrieval (DCMR)" is proposed, which takes triplet labels as input and generates binary-valued representations. The triplet labels place semantically similar data points near each other and dissimilar points far apart in the vector space. Extensive experiments compare the results with deep learning-based approaches and show that DCMR improves mean average precision (mAP) by 2% to 3% for Image→Text retrieval and by 2% to 5% for Text→Image retrieval on the MSCOCO, XMedia, and NUS-WIDE datasets. Thus, the binary-valued representations generated from triplet labels preserve relative semantic similarities better than those generated from pairwise labels.
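The effect of triplet labels described above is usually realized through a triplet margin loss: an anchor is pulled toward a semantically similar (positive) point and pushed away from a dissimilar (negative) one. The sketch below shows that generic loss with squared Euclidean distance; it is an illustration of the technique, not the DCMR training objective itself, and the margin value is an assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss: zero once the positive is closer to the anchor
    than the negative by at least `margin` (squared Euclidean distance)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```

When the negative already lies well beyond the positive (e.g. anchor (0,0), positive (0,1), negative (3,0)), the loss is zero and the triplet contributes no gradient; otherwise the loss grows with how badly the ordering is violated.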