Deep neural networks are representation learning techniques. During training, a deep network builds a descriptive language of unprecedented size and detail in machine learning. Extracting the descriptive language coded within a trained CNN model (in the case of image data) and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors the CNN previously learned from millions of images, without requiring an expensive training phase. Contributions to this field (commonly known as feature representation transfer or transfer learning) have so far been purely empirical: all CNN features are extracted from a single layer close to the output and their performance is tested by feeding them to a classifier. This approach has produced consistent results, although its relevance is limited to classification tasks. In a completely different approach, in this paper we statistically measure the discriminative power of every single feature found within a deep CNN when used to characterize every class of 11 datasets. We seek to provide new insights into the behavior of CNN features, particularly those from convolutional layers, as this can be relevant for their application to knowledge representation and reasoning. Our results confirm that low- and middle-level features may behave differently from high-level features, but only under certain conditions. We find that all CNN features can be used for knowledge representation purposes both by their presence and by their absence, doubling the information a single CNN feature may provide. We also study how much noise these features may include, and propose a thresholding approach to discard most of it. All these insights have a direct application to the generation of CNN embedding spaces.
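The idea of characterizing a class both by the presence and by the absence of a feature can be sketched as follows. This is a minimal illustration, not the paper's actual methodology: the function name, the simple binarization threshold, and the presence-rate difference score are all assumptions made for the example.

```python
import numpy as np

def feature_class_relevance(acts, labels, thr=0.0):
    """Score how characteristic each CNN feature is for each class.

    acts:   (n_samples, n_features) array of feature activations.
    labels: (n_samples,) array of class labels.
    thr:    activation threshold used to binarize features into
            present/absent (an assumed, illustrative choice).

    Returns a (n_classes, n_features) score matrix where a strongly
    positive entry means the feature's *presence* marks the class,
    and a strongly negative entry means its *absence* does.
    """
    classes = np.unique(labels)
    present = acts > thr  # binarize: feature "fires" or not
    scores = np.zeros((len(classes), acts.shape[1]))
    for i, c in enumerate(classes):
        in_c = labels == c
        p_in = present[in_c].mean(axis=0)    # presence rate within the class
        p_out = present[~in_c].mean(axis=0)  # presence rate in all other classes
        scores[i] = p_in - p_out
    return scores
```

A feature scoring near zero for a class carries little information about it either way; thresholding such scores is one simple way to discard the noisy features the abstract mentions.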
In today's aging society, many people require mobility assistance, which can be provided by robotized assistive wheelchairs with a certain degree of autonomy when manual control is unfeasible due to disability. Robotic wheelchairs, though, are not supposed to be completely in control, because a lack of human intervention may lead to loss of residual capabilities and to frustration. Most of these systems rely on shared control, which typically consists of swapping control from human to robot when needed. However, this means that persons never deal with situations they find difficult. We propose a new shared control approach that allows constant cooperation between humans and robots, so that assistance may be adapted to the user's skills. Our proposal is based on the reactive navigation paradigm, where robot and human commands become different goals in a Potential Field. Our main novelty is that the human and robot attractors are weighted by their respective local efficiencies at each time instant. This produces an emergent behavior that combines both inputs in an efficient, safe and smooth way and is dynamically adapted to the user's needs. The proposed control scheme has been successfully tested at the Fondazione Santa Lucia (FSL) hospital in Rome with several volunteers presenting different disabilities.
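The efficiency-weighted blending of human and robot attractors can be sketched in a few lines. This is a hedged illustration under assumed names and conventions: linear attractive forces with gain `k`, and efficiencies `eta_h`, `eta_r` in [0, 1] supplied externally (estimating them is task-specific and not shown).

```python
import numpy as np

def blended_command(pos, human_goal, robot_goal, eta_h, eta_r, k=1.0):
    """Combine human and robot goal attractors in a Potential Field,
    weighting each by its current local efficiency.

    pos, human_goal, robot_goal: 2D position vectors.
    eta_h, eta_r: local efficiencies of the human and robot commands.
    k: attractive gain (an assumed, illustrative constant).
    Returns the blended motion command as a 2D vector.
    """
    f_h = k * (human_goal - pos)  # attractive force toward the human's goal
    f_r = k * (robot_goal - pos)  # attractive force toward the robot's goal
    w = eta_h + eta_r
    if w == 0:
        return np.zeros_like(pos)  # no efficient input: stop
    return (eta_h * f_h + eta_r * f_r) / w  # efficiency-weighted blend
```

When one input becomes inefficient (e.g. the user steers toward an obstacle), its weight drops and the other attractor dominates, which is how the blend adapts to the user's momentary skill without ever fully removing either party from the loop.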