Nowadays, much research attention is focused on human–computer interaction (HCI), particularly on biosignals, which have recently been used for remote control, offering benefits especially for disabled people and for protection against contagions such as coronavirus. In this paper, a type of biosignal, namely the facial emotional signal, is proposed for controlling electronic devices remotely via emotional vision recognition. The objective is to convert only two facial emotions, a smiling or non-smiling vision signal captured by the camera, into a remote-control signal. The methodology combines machine learning (for smile recognition) with embedded systems (for remote control over IoT). For smile recognition, the GENKI-4K database is exploited to train a model built in the following sequence of steps: real-time video, snapshot image, pre-processing, face detection, feature extraction using HOG, and finally SVM for classification. The achieved recognition rate reaches 89% for training and testing with 10-fold validation of the SVM. For IoT, Arduino and MCU (Tx and Rx) nodes are used to transfer the resulting biosignal remotely as a server and client via the HTTP protocol. Promising experimental results are achieved in experiments on 40 individuals who used their emotional biosignals to control several devices, such as closing and opening a door and turning an alarm on or off, over Wi-Fi. The system implementing this research is developed in Matlab; it connects a webcam to an Arduino and an MCU node as an embedded system.
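The recognition stage above (HOG features fed to an SVM with 10-fold validation) can be sketched as follows. This is a minimal illustration, not the paper's Matlab implementation: it assumes scikit-image and scikit-learn, and the GENKI-4K loading, face detection, and pre-processing steps are replaced by synthetic placeholder face crops.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def hog_features(gray_face):
    """Extract a HOG descriptor vector from a grayscale face crop (64x64)."""
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Placeholder data standing in for preprocessed GENKI-4K face crops.
faces = rng.random((40, 64, 64))
labels = np.array([0, 1] * 20)  # 1 = smiling, 0 = non-smiling (illustrative)

X = np.array([hog_features(f) for f in faces])
clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, labels, cv=10)  # 10-fold validation as in the paper
print(scores.mean())
```

On real GENKI-4K crops the same pipeline would be trained once and then applied to each webcam snapshot, with the binary prediction forwarded over HTTP to the embedded nodes.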
Glaucoma is a chronic eye disease that may lead to permanent vision loss if it is not diagnosed and treated at an early stage. The disease originates from irregular behavior in the drainage flow of the eye that eventually increases intraocular pressure, which in the severe stage of the disease deteriorates the optic nerve head and leads to vision loss. Periodic medical follow-ups to observe the retinal area are needed from ophthalmologists, who require an extensive degree of skill and experience to interpret the results appropriately. To address this issue, algorithms based on deep learning techniques have been designed to screen and diagnose glaucoma from retinal fundus images and to analyze images of the optic nerve and retinal structures. The objective of this paper is therefore to provide a systematic analysis of 52 state-of-the-art relevant studies on the screening and diagnosis of glaucoma, covering the datasets used in developing the algorithms, the performance metrics, and the modalities employed in each article. Furthermore, this review analyzes and evaluates the methods used and compares their strengths and weaknesses in an organized manner. It also explores a wide range of diagnostic procedures, such as image pre-processing, localization, classification, and segmentation. In conclusion, automated glaucoma diagnosis has shown considerable promise when deep learning algorithms are applied; such algorithms could make glaucoma diagnosis more accurate and more efficient.
Background: Coronavirus (COVID-19) first appeared in Wuhan, China, as an acute respiratory syndrome and spread rapidly; it has been declared a pandemic by the WHO. Thus, there is an urgent need for an accurate computer-aided method to assist clinicians in identifying COVID-19-infected patients from computed tomography (CT) images. The contribution of this paper is a pre-processing technique that increases the recognition rate compared to the techniques existing in the literature. Methods: The proposed pre-processing technique, which combines contrast enhancement with an open-morphology filter, is highly effective in decreasing the diagnosis error rate. After pre-processing, the CT images are fed to a 15-layer convolutional neural network (CNN) for the training and testing operations. The dataset used in this research is publicly available; its CT images were collected from hospitals in Sao Paulo, Brazil. It comprises 2482 CT scans: 1252 scans of SARS-CoV-2-infected patients and 1230 scans of non-infected patients. Results: The proposed detection method achieves up to 97.8% accuracy, outperforming the 97.3% accuracy reported with the dataset. Conclusion: The proposed methodology improves accuracy by up to 0.5% over the published dataset and its method.
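The two-stage pre-processing described above (contrast enhancement followed by a morphological opening) can be sketched as below. This is an illustrative sketch, not the paper's exact operators: global histogram equalization stands in for the contrast-enhancement step, the 3x3 opening structure is an assumption, and a synthetic array stands in for a real CT slice.

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(gray):
    """Enhance contrast, then apply a grey-scale opening to suppress noise."""
    # Contrast enhancement via global histogram equalization (one simple choice).
    hist, _ = np.histogram(gray.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    equalized = cdf[gray].astype(np.uint8)
    # Morphological opening (erosion followed by dilation) over a 3x3 window.
    return ndimage.grey_opening(equalized, size=(3, 3))

# Synthetic grayscale "CT slice" standing in for real data.
slice_img = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
out = preprocess_ct(slice_img)
print(out.shape, out.dtype)
```

In the full pipeline, the output of this stage would be resized and batched as input to the 15-layer CNN.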
Over time, the revolution in the automotive industry has led to more and more electronics being included in vehicles, increasing the number of cables and the space allocated for them. Therefore, the in-vehicle cabling network has been replaced with a two-wire serial bus communications protocol called Controller Area Network (CAN). This paper describes the implementation of a CAN controller acting as a listener that monitors the state of the CAN bus in real time. The listener obtains data from the CAN bus through an external signal converter. The work is realized on the ZedBoard development platform. The controller performs a sequence of operations on the received CAN frames, including decoding, buffering, and filtering. The processed data is stored in an implemented FIFO to prevent data loss. The data is then sent serially to the processing system over an implemented SPI link that connects the controller with the processor of the Zynq-7000 device. A single-threaded, simple operating system runs on the processor, providing a set of libraries and drivers used to access specific processor functions; it enables the execution of the C code written to configure the operation of the onboard display unit. The design and simulation of the implemented CAN listener are carried out in the Xilinx ISE WebPACK environment, and the final complete design is tested and verified by connecting the module to a CAN network consisting of six CAN nodes.
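The listener's filter-then-buffer stage described above can be modeled in software as follows. This is a behavioral sketch of the idea, not the HDL design: the acceptance IDs and FIFO depth are illustrative, and a bounded deque stands in for the hardware FIFO that decouples reception from the SPI transfer.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class CanFrame:
    can_id: int   # 11-bit standard identifier
    data: bytes   # 0-8 payload bytes

class CanListener:
    def __init__(self, accept_ids, depth=16):
        self.accept_ids = set(accept_ids)
        self.fifo = deque(maxlen=depth)  # bounded buffer, like the HDL FIFO

    def receive(self, frame):
        """Filter a decoded frame; buffer it only if its ID is accepted."""
        if frame.can_id in self.accept_ids:
            self.fifo.append(frame)

    def pop_for_spi(self):
        """Hand the oldest buffered frame to the SPI transfer stage."""
        return self.fifo.popleft() if self.fifo else None

listener = CanListener(accept_ids={0x120, 0x2A0})
listener.receive(CanFrame(0x120, b"\x01\x02"))
listener.receive(CanFrame(0x7FF, b"\xff"))   # rejected by the filter
print(len(listener.fifo))  # → 1
```

In the actual design these stages run in hardware logic, so filtering and buffering keep pace with the bus while the Zynq processor drains the FIFO over SPI at its own rate.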