Thanks to the efforts of the robotics and autonomous systems community,
robots are becoming ever more capable. There is also an increasing demand from
end-users for autonomous service robots that can operate in real environments
for extended periods. In the STRANDS project we are tackling this demand
head-on by integrating state-of-the-art artificial intelligence and robotics
research into mobile service robots, and deploying these systems for long-term
installations in security and care environments. Over four deployments, our
robots have been operational for a combined duration of 104 days autonomously
performing end-user defined tasks, covering 116km in the process. In this
article we describe the approach we have used to enable long-term autonomous
operation in everyday environments, and how our robots are able to use their
long run times to improve their own performance.
In credit scoring, feature selection aims at removing irrelevant data to improve the performance of the scorecard and its interpretability. Standard techniques treat feature selection as a single-objective task and rely on statistical criteria such as correlation. Recent studies suggest that using profit-based indicators may improve the quality of scoring models for businesses. We extend the use of profit measures to feature selection and develop a multi-objective wrapper framework based on the NSGA-II genetic algorithm with two fitness functions: the Expected Maximum Profit (EMP) and the number of features. Experiments on multiple credit scoring data sets demonstrate that the proposed approach develops scorecards that can yield a higher expected profit using fewer features than conventional feature selection strategies.
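The core of the approach described above is a two-objective search over feature subsets: maximise a profit measure while minimising the number of features, keeping only Pareto-optimal subsets. The sketch below illustrates that idea with a minimal non-dominated filtering step over random candidate subsets; the `PROFITABLE` set and the profit proxy are purely illustrative stand-ins (a real implementation would train a scorecard per subset and evaluate the Expected Maximum Profit on held-out data, and NSGA-II would evolve the population over many generations rather than filter one).

```python
import random

def pareto_front(population, objectives):
    """Return the non-dominated members of `population`.

    `objectives(ind)` returns a tuple to be maximised component-wise;
    an individual is dominated if some other individual is at least as
    good in every objective and strictly better in at least one.
    """
    scored = [(ind, objectives(ind)) for ind in population]
    front = []
    for ind, obj in scored:
        dominated = any(
            all(o2 >= o1 for o1, o2 in zip(obj, obj2)) and obj2 != obj
            for _, obj2 in scored
        )
        if not dominated:
            front.append((ind, obj))
    return front

# Hypothetical stand-in for the profit objective: only a few feature
# indices contribute to "profit". This replaces the EMP computation,
# which requires a fitted scorecard and cost/benefit parameters.
PROFITABLE = {1, 3, 5}

def objectives(mask):
    profit_proxy = float(sum(1 for i, bit in enumerate(mask)
                             if bit and i in PROFITABLE))
    n_features = sum(mask)
    return (profit_proxy, -n_features)  # maximise profit, minimise size

random.seed(0)
N_FEATURES = 8
# Candidate feature subsets encoded as 0/1 masks.
population = [tuple(random.randint(0, 1) for _ in range(N_FEATURES))
              for _ in range(20)]
front = pareto_front(population, objectives)
```

The front then contains the profit/size trade-off curve from which a business user can pick a scorecard, rather than a single "best" subset as in single-objective selection.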
Abstract. For the effective operation of intelligent assistive systems working in real-world human environments, it is important to be able to recognise human activities and their intentions. In this paper we propose a novel approach to activity recognition from visual data. Our approach is based on qualitative and quantitative spatio-temporal features which encode the interactions between human subjects and objects in an abstract and efficient manner. Unlike current state-of-the-art approaches, our approach makes significantly fewer assumptions and does not require any knowledge about object types, their affordances, or the sub-level activities that high-level activities consist of. We perform an automatic feature selection process which provides the most representative descriptions of the learnt activities. We validated our method using these descriptions on the CAD-120 benchmark dataset, which consists of video sequences showing humans performing daily real-world activities. The experimental results show the strength of our approach, which significantly outperforms the current state of the art on this benchmark.
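To make the idea of qualitative spatio-temporal features concrete, the sketch below maps per-frame human–object distances to coarse symbols and collapses repeats, yielding a compact symbolic descriptor of an interaction. The thresholds, the 2D point tracks, and the distance-only relation are all illustrative assumptions; the paper's features are richer, but the abstraction step is of this flavour.

```python
import math

# Illustrative distance bands (in arbitrary units) for the qualitative relation.
THRESHOLDS = [(0.1, "touch"), (0.5, "near"), (2.0, "medium")]

def qualitative_distance(d):
    """Map a metric distance to a coarse qualitative symbol."""
    for limit, label in THRESHOLDS:
        if d <= limit:
            return label
    return "far"

def interaction_descriptor(human_track, object_track):
    """Turn two aligned (x, y) trajectories into a collapsed symbol sequence.

    Consecutive identical symbols are merged, so the descriptor captures
    qualitative *changes* in the interaction rather than raw frame counts.
    """
    symbols = []
    for (hx, hy), (ox, oy) in zip(human_track, object_track):
        s = qualitative_distance(math.hypot(hx - ox, hy - oy))
        if not symbols or symbols[-1] != s:  # collapse repeats
            symbols.append(s)
    return symbols

# Toy example: a hand approaching and then touching a stationary mug.
hand = [(0.0, 0.0), (0.3, 0.0), (0.45, 0.0), (0.48, 0.0)]
mug = [(0.5, 0.0)] * 4
print(interaction_descriptor(hand, mug))  # -> ['near', 'touch']
```

Descriptors of this kind are object-type agnostic by construction, which is why the approach needs no affordance or sub-activity annotations: activities are distinguished by the patterns of qualitative relations over time.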