Reasoning about object affordances allows an autonomous agent to perform generalised manipulation tasks across object instances. While current approaches to grasp affordance estimation are effective, they are limited to a single hypothesis. We present an approach for detecting and extracting multiple grasp affordances on an object from visual input. We define semantics as a combination of multiple attributes, which improves generalisation in grasp affordance prediction. We use Markov Logic Networks to build a knowledge-base graph representation, from which we obtain a probability distribution of grasp affordances for an object. To populate the knowledge base, we collect and make available a novel dataset that relates different semantic attributes. We achieve reliable mappings of the predicted grasp affordances onto the object by learning prototypical grasping patches from several examples. We show our method's generalisation capabilities on grasp affordance prediction for novel instances and compare it with similar methods in the literature. Moreover, using a robotic platform, in both simulated and real scenarios, we evaluate the success of the grasping task when conditioned on the grasp affordance prediction.
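The abstract above does not give implementation details, but the idea of querying a knowledge base for a probability distribution of grasp affordances given semantic attributes can be illustrated with a minimal sketch. The sketch below is not a Markov Logic Network; it is a simplified log-linear (weighted-rule) model that mimics the shape of such a query. All attribute names, affordance names, and weights are invented for illustration.

```python
# Illustrative sketch only -- NOT the paper's Markov Logic Network.
# A log-linear stand-in: weighted rules link hypothetical semantic
# attributes to hypothetical grasp affordances, and a softmax turns
# the summed rule weights into a probability distribution.
import math

# Hypothetical weighted rules: (attribute, affordance) -> weight.
RULES = {
    ("has_handle", "handle_grasp"): 2.0,
    ("cylindrical", "wrap_grasp"): 1.5,
    ("has_rim", "pinch_grasp"): 1.0,
    ("small", "pinch_grasp"): 0.8,
}
AFFORDANCES = ["handle_grasp", "wrap_grasp", "pinch_grasp"]

def affordance_distribution(attributes):
    """Score each affordance by summing the weights of the rules its
    attributes fire, then normalise with a softmax to obtain a
    probability distribution over grasp affordances."""
    scores = [
        sum(w for (attr, aff), w in RULES.items()
            if aff == a and attr in attributes)
        for a in AFFORDANCES
    ]
    z = sum(math.exp(s) for s in scores)
    return {a: math.exp(s) / z for a, s in zip(AFFORDANCES, scores)}

# Example query: an object observed to have a handle and a cylindrical body.
dist = affordance_distribution({"has_handle", "cylindrical"})
```

In a real MLN the weights would be learned from the dataset and inference would run over ground formulas rather than independent rule sums; this sketch only conveys how attribute evidence yields a distribution over affordances.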
Cognitive load has been widely studied to help understand human performance. It is desirable to monitor user cognitive load in applications such as automation, robotics, and aerospace to achieve operational safety and to improve user experience. This can enable efficient workload management and can help to avoid or reduce human error. However, tracking cognitive load in real time with high accuracy remains a challenge. Hence, we propose a framework to detect cognitive load by non-intrusively measuring physiological data from the eyes and heart. We exemplify and evaluate the framework in a study where participants engage in a task that induces different levels of cognitive load. The framework uses a set of classifiers to predict low, medium, and high levels of cognitive load. The classifiers achieve high predictive accuracy; in particular, Random Forest and Naive Bayes performed best, with accuracies of 91.66% and 85.83% respectively. Furthermore, we found that, while the mean pupil-diameter changes of the right and left eyes were the most prominent features, blinking rate also made a moderately important contribution to this highly accurate prediction of low, medium, and high cognitive load. These accuracy results considerably outperform those of prior approaches and demonstrate the applicability of our framework for detecting cognitive load.
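The abstract names Naive Bayes among the best-performing classifiers and pupil-diameter change and blink rate among the most informative features. The sketch below is a minimal, self-contained Gaussian Naive Bayes classifier on synthetic versions of those two features; it is not the authors' implementation, and all class-conditional parameter values are invented for illustration.

```python
# Illustrative sketch only -- a hand-rolled Gaussian Naive Bayes classifier
# predicting low/medium/high cognitive load from two features taken from the
# abstract (mean pupil-diameter change, blink rate). Training data is
# synthetic; the Gaussian parameters below are invented, not from the study.
import math
import random

random.seed(0)
CLASSES = ["low", "medium", "high"]

def make_samples(n_per_class=50):
    """Synthetic samples: (pupil-diameter change, blinks per minute) drawn
    from invented class-dependent Gaussians."""
    params = {                      # ((mu_pupil, sd), (mu_blink, sd))
        "low":    ((0.10, 0.05), (20.0, 3.0)),
        "medium": ((0.30, 0.05), (15.0, 3.0)),
        "high":   ((0.50, 0.05), (10.0, 3.0)),
    }
    data = []
    for label, ((mp, sp), (mb, sb)) in params.items():
        for _ in range(n_per_class):
            data.append(((random.gauss(mp, sp), random.gauss(mb, sb)), label))
    return data

def fit(data):
    """Estimate per-class feature means, variances, and class priors."""
    model = {}
    for label in CLASSES:
        feats = [x for x, y in data if y == label]
        n = len(feats)
        means = [sum(f[i] for f in feats) / n for i in range(2)]
        vars_ = [sum((f[i] - means[i]) ** 2 for f in feats) / n + 1e-9
                 for i in range(2)]
        model[label] = (means, vars_, n / len(data))
    return model

def log_gauss(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def predict(model, x):
    """Return the class maximising log prior + sum of log likelihoods."""
    scores = {
        label: math.log(prior)
               + sum(log_gauss(x[i], means[i], vars_[i]) for i in range(2))
        for label, (means, vars_, prior) in model.items()
    }
    return max(scores, key=scores.get)

model = fit(make_samples())
```

A query such as `predict(model, (0.52, 9.0))` (large pupil change, low blink rate) then yields a "high"-load prediction under these invented parameters; the real framework additionally uses heart-based features and an ensemble such as Random Forest.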