Farmers manage and monitor many types of livestock. Manual inspection and monitoring are tedious because cattle do not stay in fixed locations. Fencing large herds is costly and requires farmers to intervene physically to keep animals from crossing beyond access points. Visual tracking of livestock and fencing is therefore a time-consuming and challenging job. This research proposes a smart solution for livestock tracking and geofencing using state-of-the-art IoT technology. The study creates a geographical safe zone for cattle based on IoT and GPRS, with each animal assigned a dedicated IoT sensor. Cattle can then be monitored and controlled remotely, without farmers needing to intervene physically in livestock management. The smart system collects data on the location, well-being, and health of the livestock. This kind of livestock management may help prevent the spread of COVID-19, lower farming costs, and enable remote monitoring.
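The abstract does not specify how the geographical safe zone is represented. A minimal sketch, assuming a circular geofence around a fixed center point and GPS fixes from the collar sensors (all coordinates, names, and the 500 m radius are hypothetical, not from the paper):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (haversine formula)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """True if a collar's GPS fix lies within the circular safe zone."""
    return haversine_m(lat, lon, center_lat, center_lon) <= radius_m
```

A monitoring loop on the server side would poll each collar's fix over GPRS and alert the farmer whenever `inside_geofence` returns `False`.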
Abstract—Uncertainty is a major barrier to knowledge discovery in complex problem domains. Knowledge discovery in such domains requires qualitative rather than quantitative analysis; quantitative measures can nevertheless be used to represent uncertainty when integrated with appropriate models. The Bayesian Network (BN) is a widely applied technique for characterizing and analyzing uncertainty in real-world domains. Applications of BNs can thus be observed in a broad range of domains, such as image processing, decision making, system reliability estimation, Privacy Preserving Data Mining (PPDM) in association rule mining, and medical domain analysis, where BN techniques support prediction and decision making. In this article, a discussion of general BN representation, inference, learning, and prediction is followed by applications of BNs in some specific domains. Domain-specific BN representation, inference, and learning processes are also presented. Building upon the knowledge presented, some future research directions are highlighted.

Index Terms—Uncertainty, knowledge discovery, Bayesian network, image processing, decision making, privacy preservation, system reliability estimation.

I. INTRODUCTION

Uncertainty is a commonly faced problem in real-world applications. It can be described as an inadequate amount of information [1], although uncertainty may also exist in situations with an ample amount of information [2]. Uncertainty may be alleviated or eliminated by adding new information; in complex processes, however, adding more information may still yield only limited knowledge. Uncertainty can be quantified mathematically with probability theory. Uncertain situations involve possible states of attributes, so models built on probabilistic inference can assign a probability value to each state according to a defined principle.
Accordingly, prediction with a large number of states in a model becomes possible. The question arises: how is prediction realized in the presence of a large number of states in a model? One answer is the employment of a Bayesian Network (BN) over several variables [3]-[5]. BNs, also known as belief networks, belong to the family of probabilistic graphical models. These graphical structures encode knowledge about an uncertain domain. More specifically, each node in the graphical structure represents a random variable, while the edges/arcs between nodes represent conditional dependencies among them. These conditional dependencies are estimated using established statistical and computational methods. Consequently, BNs combine concepts from graph theory, probability theory, computer science, and statistics. Over the last two decades, the BN has been recognized as an important tool for a number of expert systems, especially in domains involving uncertainty [6]. This recognition has several reasons behind it. First, BN encodes the depen...
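The factorization described above, where each node carries a conditional probability table (CPT) given its parents, can be illustrated with the classic rain/sprinkler/wet-grass toy network; the structure and all probabilities below are a standard textbook example, not taken from this article:

```python
from itertools import product

# CPTs for a toy network: Rain -> Sprinkler, (Rain, Sprinkler) -> GrassWet
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: {True: 0.01, False: 0.99},   # sprinkler rarely on when raining
               False: {True: 0.4, False: 0.6}}
P_W_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    """Chain-rule factorisation encoded by the network: P(r) P(s|r) P(w|s,r)."""
    pw = P_W_given_SR[(s, r)]
    return P_R[r] * P_S_given_R[r][s] * (pw if w else 1.0 - pw)

def p_rain_given_wet():
    """P(Rain | GrassWet) by exact enumeration over the joint distribution."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den
```

Exact enumeration like this is exponential in the number of variables; practical BN inference relies on algorithms such as variable elimination or belief propagation.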
Climate change produces unexpected weather patterns that can create alarming situations. Various sectors are affected by it, among them healthcare. As a result of climate change, the geographic range of several vector-borne human infectious diseases will expand. Dengue is currently taking its toll, and climate change is one of the key factors contributing to the intensification of dengue transmission. The most important climatic factors linked to dengue transmission are temperature, rainfall, and relative humidity. The present study carries out a systematic literature review of surveillance systems that predict dengue outbreaks using machine learning modeling techniques. The review discusses the methodology and objectives, the number of studies carried out in different regions and periods, and the association between climatic factors and the increase in positive dengue cases. It also includes a detailed investigation of meteorological data, dengue-positive patient data, and the pre-processing techniques used for data cleaning. Furthermore, the correlation techniques used in several studies to determine the relationship between dengue incidence and meteorological parameters, and the machine learning models used for predictive analysis, are discussed. Finally, several research challenges and limitations of current work are discussed as future directions for creating a dengue surveillance system.
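The correlation analyses mentioned above typically relate a meteorological series to a case-count series. A minimal sketch of a Pearson correlation between monthly rainfall and reported dengue cases; the two series are entirely hypothetical illustration data, not results from any reviewed study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly series: rainfall (mm) and reported dengue cases
rainfall = [12, 30, 85, 140, 210, 260, 240, 190, 120, 60, 25, 10]
cases    = [ 8, 11, 25,  60, 110, 150, 145, 100,  55, 30, 15,  9]
```

In practice, studies often also test lagged correlations (e.g., rainfall versus cases one or two months later), since mosquito breeding follows rainfall with a delay.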
The rationale of XML's design is to transfer and store data at different levels. A key feature of these levels in an XML document is to identify its components for additional processing. XML components can expose sensitive information once data mining techniques are applied over a shared database. Therefore, the privacy of sensitive information must be preserved before results are published, especially for sensitive XML association rules. Privacy issues in the XML domain have not been thoroughly addressed by academia in a reliable and precise manner. In this paper, we propose a model for identifying sensitive items (nodes), declaring sensitive XML association rules, and then hiding them. Bayesian-network-based central tendency measures are applied in declaring sensitive XML association rules, and the K2 algorithm is used to generate the Bayesian networks, ensuring reliability and accuracy in preserving the privacy of XML association rules. The proposed model is tested and compared using several case studies and large UCI machine learning datasets. The experimental results show improved accuracy and reliability of the proposed model without side effects such as new rules or lost rules. The model uses the same minimum support threshold to find XML association rules from the original and transformed data sources. The significance of the proposed model is to minimize the considerable disclosure risk involved in XML association rule mining by external parties in a competitive business environment.
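Association rule mining, as used above, starts from itemset support: the fraction of transactions containing an itemset, compared against a minimum support threshold. A minimal sketch of that support computation (restricted to itemsets of size one and two for brevity; item names and the threshold are hypothetical, and this does not implement the paper's hiding model):

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Itemsets (size 1-2) whose support meets the minimum threshold.

    Support = fraction of transactions containing the itemset.
    """
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))          # items could be XML node names
        for size in (1, 2):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}
```

Rule-hiding approaches typically sanitize the data so a sensitive itemset's support drops below `min_support`; the paper's contribution is doing this for XML sources without introducing new rules or losing non-sensitive ones.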