Smart, economical irrigation methods have been developed to meet the freshwater requirements of the world's inhabitants. In other words, water consumption should be frugal enough to preserve limited freshwater resources. A major portion of water is wasted through inefficient irrigation practices. We employ a smart approach in which an ontology contributes 50% of the decision, while the other 50% relies on the sensor data values. The ontology decision and the sensor values together form the input to the final decision, which is produced by a machine learning algorithm (KNN). Moreover, an edge server is introduced between the main IoT server and the GSM module. This arrangement not only avoids overburdening the IoT server with data processing but also reduces latency. The approach connects the Internet of Things with a sensor network to efficiently trace all the data, analyze the data at the edge server, transfer only selected data to the main IoT server to predict the watering requirements of a crop field, and display the result through an Android application.
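The decision fusion described above can be sketched as follows. This is a minimal, hypothetical illustration: the ontology's decision score and raw sensor readings are stacked into one feature vector and classified with KNN. The feature names, value ranges, and training data are assumptions for illustration, not the paper's actual dataset.

```python
# Hypothetical sketch: the ontology's decision score and the sensor
# readings jointly form the feature vector for a KNN classifier.
# All feature names and values below are illustrative placeholders.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [ontology_decision_score, soil_moisture_pct, temperature_C]
X_train = [
    [0.9, 15, 35],  # ontology and sensors both indicate dry conditions
    [0.8, 20, 33],
    [0.7, 25, 30],
    [0.2, 60, 22],  # well-watered field
    [0.1, 70, 20],
    [0.3, 55, 24],
]
y_train = ["irrigate", "irrigate", "irrigate", "skip", "skip", "skip"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# New reading: ontology leans toward irrigation, sensors show dry soil.
decision = knn.predict([[0.85, 18, 34]])[0]
print(decision)  # -> irrigate
```

In this sketch the ontology score and the sensor values carry equal weight, mirroring the 50/50 split described in the abstract.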
The modern age is an era of fast-growing technology, thanks largely to the Internet of Things (IoT), which has become a central part of human life. In today's fast-paced world, little attention is paid to food wastage, which contributes to environmental pollution and, indirectly, to loss of life. Many researchers have contributed valuable projects in this area. Our work introduces a new approach using low-cost sensors. An Arduino UNO serves as the microcontroller. We use an eNose system comprising MQ4 and MQ135 sensors to detect gas emissions from different food items, i.e., meat, rice, rice and meat, and bread, from which we collect our data. The MQ4 sensor detects CH4, while the MQ135 sensor detects CO2 and NH3. A 5 kg strain-gauge load cell with an HX711 A/D converter serves as a weight sensor to measure the weight of wasted food. To ensure the accuracy and efficiency of the system, we first calibrate the sensors as recommended for the operating environment. We collect data from cooked, uncooked, and rotten food items. To make the system smart, a machine learning algorithm predicts the food item on the basis of its gas emissions; the decision tree algorithm is used for training and testing, with 70 instances of each food item in the dataset. The resulting rule set drives the system, which measures the weight of wasted food and predicts the food item. The Arduino UNO board collects the sensor data and sends it to a computer for interpretation and analysis, after which the machine learning algorithm predicts the food item. In the end, we obtain a daily record of which food item is wasted and in what amount. The system achieves 92.65% accuracy and helps reduce food wastage at homes and restaurants alike through daily food-wastage reports on the user's computer.
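The classification step above can be sketched with a small decision tree. The gas readings below are made-up placeholders (not calibrated ppm values) and the dataset is far smaller than the 70 instances per item the paper uses; the sketch only shows the shape of the pipeline, assuming three features derived from the MQ4 and MQ135 sensors.

```python
# Hypothetical sketch of the food-item classifier: gas-sensor readings
# (MQ4 -> CH4; MQ135 -> CO2 and NH3) serve as features for a decision
# tree. All numeric readings are illustrative placeholders.
from sklearn.tree import DecisionTreeClassifier

# Each row: [mq4_ch4, mq135_co2, mq135_nh3]
X_train = [
    [320, 410, 15], [310, 400, 14],   # meat
    [120, 500, 5],  [130, 490, 6],    # rice
    [220, 450, 10], [230, 460, 11],   # rice and meat
    [90,  520, 3],  [95,  530, 4],    # bread
]
y_train = ["meat", "meat", "rice", "rice",
           "rice and meat", "rice and meat", "bread", "bread"]

tree = DecisionTreeClassifier(random_state=0)
tree.fit(X_train, y_train)

# Classify a fresh set of sensor readings.
item = tree.predict([[315, 405, 15]])[0]
print(item)  # -> meat
```

In the actual system the Arduino streams these readings to the computer, where the trained tree produces the prediction logged in the daily report.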
Visual saliency techniques based on Convolutional Neural Networks (CNNs) achieve excellent performance in predicting fixations within a scene, but their complexity makes such networks hard to train. The Residual Network (ResNet) architecture is better able to optimize features for predicting salient regions in the form of saliency maps within images. To generate saliency maps, an amalgamated framework is presented that contains two streams of ResNet-50. Each stream enhances low-level and high-level semantic features, building a 99-layer network that operates at two different image scales to generate saliency attention. The model is initialized via transfer learning, using weights pre-trained on ImageNet for object detection, with some modifications to minimize prediction error. Finally, the two streams are integrated by fusing features at the low- and high-scale image dimensions. The model is fine-tuned on four commonly used datasets and evaluated, with both qualitative and quantitative metrics, against state-of-the-art deep saliency models.
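The two-scale fusion step can be sketched as follows. This is a simplified stand-in, not the paper's method: the per-scale saliency maps here are placeholder arrays rather than ResNet-50 stream outputs, upsampling is nearest-neighbour repetition, and the fusion is plain averaging, whereas the paper's fusion is learned end to end.

```python
# Minimal sketch of fusing saliency maps predicted at two image scales.
# Simplifying assumptions: nearest-neighbour upsampling and averaging
# in place of the paper's learned fusion layers.
import numpy as np

def fuse_saliency(fine_map: np.ndarray, coarse_map: np.ndarray) -> np.ndarray:
    """Fuse two single-channel saliency maps from different scales."""
    fh, fw = fine_map.shape
    ch, cw = coarse_map.shape
    # Upsample the coarse map to the fine map's resolution.
    upsampled = np.repeat(np.repeat(coarse_map, fh // ch, axis=0),
                          fw // cw, axis=1)
    fused = (fine_map + upsampled) / 2.0
    # Min-max normalise so the result reads as a saliency map in [0, 1].
    fused -= fused.min()
    if fused.max() > 0:
        fused /= fused.max()
    return fused

fine = np.random.rand(8, 8)    # stand-in for the high-resolution stream
coarse = np.random.rand(4, 4)  # stand-in for the low-resolution stream
saliency = fuse_saliency(fine, coarse)
print(saliency.shape)  # -> (8, 8)
```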