This paper evaluates the performance of the Neural Architecture Search Network (NASNet) in the automatic detection of COVID-19 (Coronavirus Disease 2019) from chest x-ray images. COVID-19 is a disease caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) that produces fever, cough, shortness of breath, muscle pain, sputum production, diarrhea, and sore throat in patients. The virus spreads through the air and, to date, is expanding as a global pandemic. There is no vaccine, and the infection is fatal to approximately 2-7% of the infected population. Among the clinical and paraclinical characteristics of infected patients, nodules that can be identified visually have been found in chest x-ray images, providing a simple, rapid, and widely available method of identification. However, the rapid spread of the disease has produced a shortage of the specialized medical personnel able to identify it, motivating the development of automated schemes. We propose tuning a NASNet-type convolutional model to automatically determine the initial state of a patient in the triage process or intervention protocol of health care centers. The neural network is trained with public images of cases positively identified as patients infected with the virus and of patients in normal, uninfected condition. Performance is also evaluated with real images unknown to the neural model. As performance metrics, we use the categorical cross-entropy loss, the accuracy (or success rate), and the mean squared error (MSE). The tuned model correctly classified the test images with an accuracy of 97%.
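As a concrete illustration of the three metrics reported above, the following sketch computes categorical cross-entropy, accuracy, and mean squared error for a small set of hypothetical two-class (COVID-19 vs. normal) predictions. The probabilities and labels are invented values for illustration, not data from the study.

```python
import numpy as np

# Illustrative computation of the three metrics the paper reports:
# categorical cross-entropy loss, accuracy, and mean squared error.
# The one-hot labels and predicted probabilities below are made up.
y_true = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])        # COVID-19 / normal
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]])

cce = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))    # cross-entropy loss
acc = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))
mse = np.mean((y_true - y_pred) ** 2)

print(f"cce={cce:.4f} acc={acc:.2f} mse={mse:.4f}")
```

Frameworks such as Keras expose these same quantities directly (e.g. `loss="categorical_crossentropy"`, `metrics=["accuracy", "mse"]`), which is the usual way they are tracked during training.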
In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide guidance on the placement of the initial points the algorithm requires. The initial points are analyzed, based on image information, to determine whether more points need to be added. The algorithm recovers not only the road centerline but also the road sides. An initial simple model is deformed using region growing techniques to obtain a rough road approximation, and this model is then refined by region competition. The approach delivers the simplest possible output vector information, fully recovering the road details as they appear in the image, without performing any kind of symbolization. We therefore refine a general road model using a reliable method for detecting transitions between regions. The method is proposed as a way to obtain information for feeding large-scale Geographic Information Systems.
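The region growing step can be sketched in miniature: starting from a seed pixel, the region absorbs neighboring pixels whose intensity lies within a tolerance of the seed value. This is a generic illustration of region growing, not the paper's implementation; the image, seed, and tolerance are invented values.

```python
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed` over pixels whose intensity
    differs from the seed pixel by at most `tol` (generic sketch)."""
    h, w = len(img), len(img[0])
    sr, sc = seed
    ref = img[sr][sc]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - ref) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# Toy 3x3 "image": a bright road-like patch (top-left) against darker pixels.
img = [[100, 102,  30],
       [ 98, 101,  31],
       [ 20,  22,  25]]
region = region_grow(img, (0, 0))
print(len(region))  # → 4
```

In the paper's pipeline the rough region obtained this way is then refined by region competition, which adjudicates the boundary pixels between adjacent regions.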
This paper presents a low-cost strategy for real-time estimation of the position of obstacles in an unknown environment for autonomous robots. The strategy is intended for autonomous service robots, which navigate in unknown and dynamic indoor environments. In addition to involving human interaction, these environments are designed for human beings, which is why our developments seek morphological and functional similarity to the human model. We use a pair of cameras on our robot to obtain a stereoscopic view of the environment, and we analyze this information to determine the distance to obstacles using an algorithm that mimics bacterial behavior. The algorithm was evaluated on our robotic platform, demonstrating high performance in obstacle localization and real-time operation.
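The distance estimate from a calibrated stereo pair ultimately rests on triangulation: for a rectified pair with focal length f (in pixels) and baseline B (in meters), a pixel disparity d maps to depth Z = fB/d. The sketch below shows this relation; the calibration values are illustrative assumptions, not the robot's actual parameters.

```python
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Return the distance (m) to a matched point in a rectified stereo
    pair: Z = f * B / d. Calibration defaults are illustrative only."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(42.0))  # → 2.0 (meters)
```

Larger disparities correspond to nearer obstacles, which is why matching points between the left and right images is the computationally critical step that the paper's bacteria-mimicking algorithm addresses.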
Automated medical image processing, particularly of radiological images, can reduce the number of diagnostic errors, improve patient care, and reduce medical costs. This paper evaluates the performance of three recent convolutional neural networks in the autonomous identification of fissures in two-dimensional radiological images. These architectures were proposed as deep neural network types specially designed for image classification, which allows their integration with traditional image processing strategies for the automatic analysis of medical images. In particular, we use three convolutional networks: ResNet (residual neural network), DenseNet (dense convolutional network), and NASNet (neural architecture search network), to learn from a set of 200 images, half labeled as fissured bones and half as seamless bones. All three networks were trained and tuned under the same conditions, and their performance was evaluated with the same metrics. The final results consider not only each model's ability to predict the characteristics of an unknown image but also its internal complexity. The three neural models were optimized to reduce classification errors without overfitting. In all three cases, the models generalized and were able to identify the images with fissures; however, the expected performance was achieved only with the NASNet model.
Autonomous mobility remains an open research problem in robotics. It is a complex problem whose characteristics depend on the type of task and the environment intended for the robot's activity. In this sense, service robotics poses problems that have not yet been solved satisfactorily. Service robots must interact with human beings in environments designed for human beings, which implies that among the basic sensors for structuring motion control and navigation schemes are those that replicate the human optical sense. In their normal activity, robots are expected to interpret visual information from the environment while following a motion policy that allows them to move from one point to another, consistent with their tasks. A good optical sensing system can be built around digital cameras, which support visual identification routines for both the trajectory and the surroundings. This research proposes a parallel control scheme (with two loops) for defining the movements of a service robot from images. The first control loop is based on a visual memory strategy using a convolutional neural network. This system contemplates a deep learning model trained on images of the environment containing its characteristic elements (various types of obstacles and different cases of free trajectories, with and without a navigation path). Connected in parallel to this first loop is a second loop in charge of determining the specific distances to obstacles using a stereo vision system. The objective of this parallel loop is to quickly identify the obstacle points in front of the robot from the images using a bacterial interaction model. Together, the two loops form an information-feedback motion control framework that quickly analyzes the environment and defines motion strategies from digital images, achieving real-time control driven by visual information.
Among the advantages of our scheme are its low processing and memory costs on the robot and the fact that the environment does not need to be modified to facilitate navigation. The performance of the system is validated by simulation and laboratory experiments.
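The two-loop structure described above can be sketched schematically: one loop classifies the scene from visual memory while the other estimates the nearest obstacle distance, and a controller fuses both results into a motion decision. The perception functions below are stubs standing in for the paper's convolutional network and bacterial-interaction stereo model; all names and thresholds are illustrative assumptions.

```python
import queue
import threading

def classify_scene(frame):
    """Stub for the CNN visual-memory loop (returns a scene label)."""
    return "free_path"

def nearest_obstacle_m(frame_pair):
    """Stub for the stereo-distance loop (returns meters to obstacle)."""
    return 1.5

def perception_loop(fn, source, out: queue.Queue, iterations: int):
    """Run one perception function repeatedly, publishing its results."""
    for _ in range(iterations):
        out.put(fn(source))

scene_q, dist_q = queue.Queue(), queue.Queue()
t1 = threading.Thread(target=perception_loop,
                      args=(classify_scene, None, scene_q, 1))
t2 = threading.Thread(target=perception_loop,
                      args=(nearest_obstacle_m, None, dist_q, 1))
t1.start(); t2.start()
t1.join(); t2.join()

# The controller fuses the two parallel results into a motion command;
# the 1.0 m safety threshold is an invented example value.
label, distance = scene_q.get(), dist_q.get()
command = "advance" if label == "free_path" and distance > 1.0 else "stop"
print(command)  # → advance
```

The point of the parallel arrangement is that the slower semantic loop never blocks the fast distance loop, so an imminent obstacle can trigger a stop even before the scene classification is updated.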