Creating complex 3D objects from a flat sheet of material using origami folding techniques has attracted attention in science and engineering. Here, we introduce the concept of "Norigami", a blend of three Japanese words: "nori" (glue), "ori" (folding), and "kami"/"gami" (paper). With traditional origami, spherical and other spatial objects are very difficult for a robot to achieve because of the complexity of the movements involved. In Norigami, complex 3D shapes can be produced by a machine or robot that combines simple origami folds with pasting patterns. In the present work, a Norigami robot is designed and built using Lego NXT technology to create a spherical object suitable for mass production.
People rapidly form impressions from facial appearance, and these impressions affect social decisions. Data-driven computational models are the best available tools for identifying the source of such impressions, but they cannot be accepted until they have been validated to establish their credibility. In this paper, the state of a subject's eyes is used to validate the fuzzy rules extracted from the computational models. A simple and effective classifier is proposed to detect whether the eyes are closed during the evaluation of a small database of portraits. The experimental results show that closed eyes can be detected only after the proposed shift of the normalized histogram is applied. Despite its simplicity, the proposed classifier achieves better accuracy than other state-of-the-art classifiers. The relationship between eye closure and the subjects' evaluations is also analyzed.
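The abstract does not give the details of the histogram-shift classifier, so the following is only a minimal illustrative sketch of the general idea: normalize the grayscale histogram of an eye patch, shift it, and threshold the remaining dark-pixel mass (the `shift` and `threshold` values and the decision rule are assumptions, not the paper's parameters).

```python
import numpy as np

def eye_state(eye_region, shift=16, threshold=0.5):
    """Toy closed-eye detector via a shifted normalized histogram.

    eye_region: 2-D uint8 grayscale array of a cropped eye patch.
    shift/threshold are illustrative values, not taken from the paper.
    """
    hist, _ = np.histogram(eye_region, bins=256, range=(0, 256))
    hist = hist / hist.sum()            # normalized histogram
    shifted = np.roll(hist, shift)      # the "histogram shift"
    shifted[:shift] = 0.0               # discard wrapped-around mass
    # An open eye contributes dark iris/pupil pixels; a closed eye
    # (skin over the eye) leaves little mass in the low-intensity bins.
    dark_mass = shifted[:128].sum()
    return "closed" if dark_mass < threshold else "open"
```

A real pipeline would first localize the eye regions with a face/landmark detector before applying such a rule.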
In robotics, one of the most difficult tasks is to perform precise and fast movements of a robotic arm. For paper-folding robots, executing the required manipulations of the paper remains extremely difficult, mainly because of the difficulty of modeling and controlling the paper. In this paper, two control models are proposed to solve this problem. One of the best approaches comes from neuroscience: using a control system inspired by the human brain, known as the Cerebellar Control Model (CCM), precise and fast movements of a robotic arm can be performed. In the CCM, the motor command of a feedback controller is used as the target signal to train an artificial neural network (NN), and the output of the NN is used as a feed-forward signal. Two training methods were evaluated to improve the behavior of the CCM: traditional backpropagation and a holographic method.
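The core CCM idea described above can be sketched in a few lines: a feedback controller's commands serve as training targets for a network, and the trained network's output is then added as a feed-forward term. This is a hypothetical minimal version (a single linear layer trained with the delta rule on a PD controller's output, with made-up gains), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=3)            # tiny linear "network"

def nn(x):                                   # x = [pos_ref, pos, vel]
    return W @ x

def feedback(pos_ref, pos, vel, kp=8.0, kd=2.0):
    """PD feedback controller (illustrative gains)."""
    return kp * (pos_ref - pos) - kd * vel

# Train the network to reproduce the feedback command; for a single
# linear layer, backpropagation reduces to the delta rule.
lr = 0.05
for _ in range(2000):
    x = rng.uniform(-1, 1, size=3)
    target = feedback(*x)                    # feedback command as target
    err = nn(x) - target
    W -= lr * err * x

# At run time the plant receives feed-forward + residual feedback;
# once trained, the NN carries most of the motor command.
x = np.array([0.5, 0.1, -0.2])
u = nn(x) + feedback(*x)
```

In the real model the network is nonlinear and trained online while the feedback controller gradually contributes less as the feed-forward term improves.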
Recent advances in simulation in science and engineering focus on automatically measuring human feelings about a product design using Kansei Engineering words. Iyashi is a Japanese word describing a peculiar phenomenon that is mentally soothing but is yet to be clearly defined. This paper explores the analysis of electroencephalogram (EEG) brain signals (alpha, beta) as a method to determine when a stimulus (face images) generates Iyashi in a person and produces good feelings. A NeuroSky B3 headband is used to acquire the EEG signals, and Holographic Neural Networks (HNN) are evaluated to predict the level of Iyashi. Nine dimensionality reduction algorithms for brain signals are explored to improve the prediction level of the HNN. The experimental results show that the prediction rate of the HNN can be increased by more than 10% when data reduction methods are applied to beta brain waves.
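The abstract does not name the nine reduction algorithms, but PCA is a typical representative of this class. The sketch below, assuming synthetic per-trial beta-band feature vectors, shows the shape of the preprocessing step: compress each trial's features before feeding them to a predictor (the HNN itself is not reproduced here).

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (trials x features) onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)                      # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # trials x k representation

rng = np.random.default_rng(1)
beta = rng.normal(size=(40, 64))   # synthetic: 40 trials, 64 beta features
Z = pca_reduce(beta, 8)            # reduced input for the predictor
print(Z.shape)                     # (40, 8)
```

Reducing the input dimensionality in this way shrinks the predictor's input layer, which is one plausible reason such preprocessing can raise prediction accuracy on small EEG datasets.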