The Internet of Things (IoT) is transforming the agriculture industry, enabling farmers to address the vast challenges they face. Internet of Farming (IoF) applications increase the quantity, quality, sustainability, and cost-effectiveness of agricultural production. Farmers leverage IoF to remotely monitor sensors that detect soil moisture, crop growth, and livestock feed levels; to remotely manage and control smart connected harvesters and irrigation equipment; and to utilize artificial-intelligence-based tools that analyze operational data combined with third-party information, such as weather services, to provide new insights and improve decision making. The Internet of Farming relies on data gathered by the sensors of a Wireless Sensor Network (WSN). The WSN requires reliable connectivity to provide accurate predictions for the farming system. This chapter proposes a strategy that provides always best connectivity (ABC). The strategy considers a routing protocol that supports low-power and lossy networks (LLNs) with minimal energy usage. Two scenarios are presented.
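The abstract does not specify how the ABC strategy ranks candidate links, so the following is only an illustrative sketch of one plausible reading: prefer links that meet a reliability threshold, and among those pick the one with the lowest energy cost. All field names, the threshold value, and the scoring rule are assumptions, not the chapter's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Link:
    node: str
    quality: float      # assumed link-quality score in [0, 1], higher is better
    energy_cost: float  # assumed transmit energy per packet (mJ)

def select_best_link(links, min_quality=0.7):
    # Among links meeting the reliability threshold, pick the cheapest
    # in energy; fall back to best-effort quality if none qualify.
    reliable = [l for l in links if l.quality >= min_quality]
    pool = reliable or links
    return max(pool, key=lambda l: (l.quality >= min_quality, -l.energy_cost, l.quality))

candidates = [
    Link("A", 0.9, 2.5),
    Link("B", 0.8, 1.2),
    Link("C", 0.5, 0.4),
]
print(select_best_link(candidates).node)  # "B": reliable and cheapest in energy
```

In this toy input, node C is cheapest but falls below the reliability threshold, so the selection returns node B, the lowest-energy link among the reliable candidates.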
Traditionally, humans interact with a computer using a keyboard and mouse. People with impairments from the wrist to the fingertips, or with amputated wrists or fingertips, need an alternative, such as voice or hand gestures. This work focuses on hand-gesture image recognition. Two main issues must be considered: limited interactivity in static hand-gesture recognition, and limited accuracy in dynamic hand-gesture recognition. This paper attempts to improve the accuracy of hand-gesture image recognition by experimenting with a simple deep learning neural network (DLNN). Because this work uses a simple DLNN, the relations between the hidden layers are not considered. The number of hidden layers in the proposed DLNN architecture varies from one to five. To understand the effect of the number of neurons in the hidden layers, the DLNN is tested with different numbers of hidden neurons. Six types of hand gestures are considered. 800 hand-gesture videos taken from the Vision for Intelligent Vehicles and Applications (VIVA) portal are used in the experiments. The data is divided into two parts, one for training and one for testing. The best result is achieved when the DLNN uses two hidden layers, with 250 neurons in the first hidden layer and 100 neurons in the second. The average accuracy achieved is 77.56%. Experimental results also show that adding more hidden layers causes over-fitting and does not make recognition better. It is also observed that increasing the number of hidden layers and hidden neurons only improves accuracy on the trained dataset and does not improve recognition of untrained data, because the interrelations among the hidden layers are not considered.
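The best-performing configuration reported above (two hidden layers of 250 and 100 neurons, six gesture classes) can be sketched as a minimal forward pass in NumPy. The input size (1024, i.e. a flattened 32×32 frame), the ReLU activations, and the weight initialization are assumptions for illustration; the abstract does not state them.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # He-style initialization (an assumption; the paper does not specify)
    return rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in), np.zeros(n_out)

# Reported best configuration: 250 and 100 hidden neurons, 6 gesture classes.
# The 1024-dimensional input (flattened 32x32 frame) is an assumption.
W1, b1 = layer(1024, 250)
W2, b2 = layer(250, 100)
W3, b3 = layer(100, 6)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(x):
    # x: (batch, 1024) flattened frames -> (batch, 6) class probabilities
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

probs = forward(rng.standard_normal((4, 1024)))
print(probs.shape)  # (4, 6)
```

This is only the untrained forward pass; reproducing the reported 77.56% accuracy would additionally require the VIVA training data and a training loop, which the abstract does not describe.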
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.