The ability to classify rooms in a home is one of many capabilities desired of social robots. In this paper, we address indoor room classification with several convolutional neural network (CNN) architectures: VGG16, VGG19, and Inception V3. The main objective is to recognize five indoor classes (bathroom, bedroom, dining room, kitchen, and living room) from the Places dataset. We used 11,600 images per class and subsequently fine-tuned the networks. The simulation studies suggest that cleaning the disparate data produced markedly better results across all examined CNN architectures. The VGG16 and VGG19 models fine-tuned with all layers trainable produced the best validation accuracies, 93.29% and 93.61% respectively, on clean data. We also propose and examine a model that combines a CNN with a multi-binary classifier known as an error-correcting output code (ECOC) on the clean data. The highest validation accuracy among the 15 binary classifiers reached 98.5%, and the average over all classifiers was 95.37%. The CNN, the CNN-ECOC model, and an alternative form called CNN-ECOC Regression were evaluated in real-time implementation on a NAO humanoid robot. The results show the superiority of the combined CNN-ECOC model over the conventional CNN. The implications and challenges of the real-time experiments are also discussed in the paper.
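The 15 binary classifiers correspond to the exhaustive ECOC code for five classes (2⁴ − 1 = 15 distinct dichotomies). A minimal sketch, assuming a standard exhaustive codebook and nearest-codeword decoding (the function names and construction here are illustrative, not the authors' implementation):

```python
import numpy as np
from itertools import product

def exhaustive_ecoc(k):
    """Codebook for k classes: one column per non-trivial binary split.
    Fixing the first bit to 1 avoids complementary duplicate columns."""
    cols = [c for c in product([0, 1], repeat=k)
            if c[0] == 1 and any(b == 0 for b in c)]
    return np.array(cols).T  # shape (k, 2**(k-1) - 1)

def decode(bits, codebook):
    """Assign the class whose codeword is nearest in Hamming distance."""
    return int(np.argmin(np.sum(codebook != np.asarray(bits), axis=1)))

M = exhaustive_ecoc(5)  # 5 room classes -> 15 binary classifiers
```

Because any two rows of the exhaustive code differ in eight positions, up to three of the fifteen binary classifiers can answer incorrectly and nearest-codeword decoding still recovers the right room.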
In this paper, we propose a novel algorithm that detects a door and its orientation in indoor settings from the view of a social robot equipped with only a monocular camera. The challenge is to achieve this goal from a single 2D image. The proposed system integrates several modules, each serving a specific purpose. Door detection is addressed by training a convolutional neural network (CNN) model on a new dataset for Social Robot Indoor Navigation (SRIN). The direction of the door (from the robot's observation) is obtained by three further modules: a Depth module, a Pixel-Selection module, and a Pixel2Angle module. We include simulation results and real-time experiments to demonstrate the performance of the algorithm. The outcome of this study could benefit any robotic navigation system for indoor environments.
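A Pixel2Angle step of this kind typically converts the image column of a selected door pixel into a bearing using the camera's horizontal field of view. A minimal sketch under a pinhole-camera assumption — the 60.97° value is NAO's documented top-camera horizontal FOV, while the image width and function name are illustrative, not taken from the paper:

```python
import math

def pixel_to_angle(px, image_width=320, hfov_deg=60.97):
    """Bearing (degrees) of pixel column `px` relative to the optical axis.
    Positive angles point to the right of the image centre."""
    focal_px = (image_width / 2) / math.tan(math.radians(hfov_deg / 2))
    return math.degrees(math.atan((px - image_width / 2) / focal_px))
```

For example, the centre column maps to 0°, and the right image edge maps to half the horizontal FOV, which the robot can pass directly to its turning command.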
We present a fractional-order PI (FOPI) controller combined with a SLAM method, and apply the proposed method in simulation to the navigation of the NAO humanoid robot from Aldebaran. We discretize the transfer function using the Al-Alaoui generating function and then obtain the FOPI controller by power series expansion (PSE). The FOPI controller serves as a correction term that reduces the accumulated error of SLAM. Its parameters (Kp, Ki, and α) must be tuned to obtain the best performance. Finally, we compare the position results obtained without a controller, with a conventional PI controller, and with the FOPI controller. The simulations show that the FOPI controller reduces the error between the real and estimated positions, and that the proposed method is efficient and reliable for NAO navigation.
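The discretization step can be sketched as follows: substituting the Al-Alaoui operator s ≈ (8/(7T))·(1 − z⁻¹)/(1 + z⁻¹/7) into s^(−α) and expanding by PSE yields FIR coefficients for the fractional integrator in the FOPI law u = Kp·e + Ki·I^α e. This is a generic sketch under those standard definitions, not the authors' code; the truncation length and sample time are illustrative:

```python
import numpy as np

def binom_series(alpha, x, n):
    """Power-series coefficients of (1 + x*z)**alpha, truncated to n terms."""
    c = np.zeros(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (alpha - (k - 1)) / k * x
    return c

def frac_integrator_coefs(alpha, T, n=64):
    """FIR coefficients of s**(-alpha) via the Al-Alaoui operator
    s ~ (8/(7T)) * (1 - z^-1) / (1 + z^-1/7) and power series expansion."""
    gain = (8.0 / (7.0 * T)) ** (-alpha)
    num = binom_series(-alpha, -1.0, n)      # (1 - z^-1)**(-alpha)
    den = binom_series(alpha, 1.0 / 7.0, n)  # (1 + z^-1/7)**alpha
    return gain * np.convolve(num, den)[:n]
```

As a sanity check, α = 1 recovers the ordinary Al-Alaoui integrator, whose impulse response settles to the sample time T; the controller output is then u[n] = Kp·e[n] + Ki·Σₖ cₖ·e[n−k].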
We present a SLAM method with a closed-loop controller for the navigation of the NAO humanoid robot from Aldebaran. The method integrates a laser and a vision system: the camera recognizes landmarks, while the laser provides the measurements for simultaneous localization and mapping (SLAM). A k-means clustering method is implemented to extract data belonging to different objects, and the robot avoids obstacles through an avoidance function. The closed-loop controller reduces the error between the real and estimated positions. Finally, simulations and experiments show that the proposed method is efficient and reliable for navigation in indoor environments.
A rover is a robotic system that integrates electrical and mechanical components into a simple platform. In this study, we propose a rover whose mechanical components consist of a robotic arm with joints and a mechanical gripper, a backbone chassis, and continuous tracks, while the electrical components include servo motors, a servo controller, a transmitter and receiver for the vision system, and a wireless controller connected via USB host as its control system. The purpose of this project is monitoring and safety; the main goal is to develop a simple robotic rover that is easy to build and manufacture as well as cost-effective. For additional functionality, the rover is equipped with a robotic arm and a real-time first-person view (FPV) camera, an integrated camera that gives the rover pilot clear visibility and direction; the live feed can be viewed on the monitor inside the command-station box. The rover can assist safety authorities in collecting information and insights, lift and remove loads, and conduct search-and-rescue operations. As a result, the mobility of the robotic rover on terrain surfaces was tested and the load-lifting capability of the chassis was analysed.