2017 IEEE Sensors
DOI: 10.1109/icsens.2017.8234203
Human and object recognition with a high-resolution tactile sensor

Cited by 35 publications (33 citation statements)
References 15 publications
“…To have a comparison point to our proposed method (DCNN), the DCNN-SVM method was also implemented. This method was previously presented in [53], where the first pre-trained layers of a DCNN are used to extract the features of the input tactile images. Then, the SVM replaces the last layer of the network and must be trained with pre-labelled tactile images.…”
Section: Methods
confidence: 99%
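The DCNN-SVM pipeline quoted above (frozen pre-trained layers extract features; an SVM replaces the final layer and is trained on labelled tactile images) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a fixed random projection with a ReLU stands in for the pre-trained convolutional layers, and the 28 × 50 tactile "images" are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

H, W = 28, 50                 # tactile array resolution reported in the excerpts
n_samples, n_classes = 120, 8
n_features = 64

# Synthetic pressure maps with class-dependent means so classes are separable.
labels = rng.integers(0, n_classes, size=n_samples)
images = rng.normal(loc=labels[:, None, None] * 2.0, scale=0.5,
                    size=(n_samples, H, W))

# Frozen "feature extractor": a fixed random projection followed by a ReLU,
# standing in for the pre-trained DCNN layers.
W_proj = rng.normal(size=(H * W, n_features)) / np.sqrt(H * W)
features = np.maximum(images.reshape(n_samples, -1) @ W_proj, 0.0)

# The SVM replaces the network's last layer and is trained on labelled features.
clf = SVC(kernel="rbf").fit(features, labels)
train_acc = clf.score(features, labels)
```

In the real pipeline the features would come from a network such as a pre-trained AlexNet; only the SVM stage needs task-specific training, which is why the method is attractive when labelled tactile data are scarce.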
“…The recognition procedure adopted in this paper complies with the idea of classifying an object by its contact shape, treating pressure data from a high-resolution tactile sensor as common images, and using a deep convolutional neural network [53]. This method was successfully implemented with a tactile sensor mounted on a rigid probe and applied to robotic search and rescue tasks [54], showing how tactile perception provides valuable information for the search of potential victims, especially in low-visibility scenarios.…”
Section: Tactile Recognition
confidence: 99%
“…While most of the work was focused on the methodology itself, few works addressed the implementation on embedded platforms where the real application should reside. Gandarias et al [11] used two approaches to classify eight objects (finger, hand, arm, pen, scissors, pliers, sticky tape, and Allen key) with a 28 × 50 tactile sensory array attached to a robotic arm: the first approach used the Speeded-Up Robust Features (SURF) descriptor, while the second used a pre-trained AlexNet CNN for feature extraction, with a Support Vector Machine (SVM) classifier in both cases. In Yuan et al's research [12], a CNN was also used for active tactile clothing perception, to classify clothes grasped by a robotic arm equipped with a tactile sensor that output a large RGB pressure map.…”
Section: State-of-the-art
confidence: 99%
“…The proposed framework is based on deep Convolutional Neural Networks (CNNs). This choice was motivated by the good performance reported recently in multi-class tactile recognition [7], [8] when using CNNs. However, note that the use of CNNs for Zero-Shot Learning is not straightforward: if we simply train a CNN to map tactile data into object classes, the CNN will miss output classes having no training data.…”
Section: Introduction
confidence: 99%
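The Zero-Shot limitation noted in the last excerpt (a CNN trained to map tactile data directly to class indices cannot emit classes absent from training) is commonly sidestepped by predicting a semantic attribute vector instead of a class index, then assigning the nearest attribute vector, including vectors of classes never seen in training. The sketch below illustrates the idea with a linear least-squares map on synthetic data; the attribute vectors, dimensions, and data are illustrative assumptions, not the cited paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat = 32

# Per-class attribute vectors (e.g. tactile properties such as "soft",
# "edged", "curved"). Class 3 has NO training samples, but is described
# as a combination of attributes that do appear during training.
attrs = np.array([[1., 0., 0.],
                  [0., 1., 0.],
                  [0., 0., 1.],
                  [1., 1., 0.]])          # unseen class

A = rng.normal(size=(3, n_feat))          # true attribute -> feature map
y_seen = rng.integers(0, 3, size=300)     # only classes 0..2 in training
X = attrs[y_seen] @ A + 0.05 * rng.normal(size=(300, n_feat))

# Learn the inverse map, features -> attribute space, by least squares.
Wmap, *_ = np.linalg.lstsq(X, attrs[y_seen], rcond=None)

# A sample of the unseen class is classified by its nearest attribute vector,
# something a plain class-index classifier could never output.
x_unseen = attrs[3] @ A + 0.05 * rng.normal(size=n_feat)
pred = int(np.argmin(np.linalg.norm(x_unseen @ Wmap - attrs, axis=1)))
```

The design point is that the output space (attributes) is shared between seen and unseen classes, so the learned map generalizes to class descriptions the classifier never trained on.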