2021
DOI: 10.1109/lra.2020.3038377

Bayesian and Neural Inference on LSTM-Based Object Recognition From Tactile and Kinesthetic Information

Abstract: Recent advances in the field of intelligent robotic manipulation pursue providing robotic hands with touch sensitivity. Haptic perception encompasses the sensing modalities encountered in the sense of touch (e.g., tactile and kinesthetic sensations). This letter focuses on multimodal object recognition and proposes analytical and data-driven methodologies to fuse tactile- and kinesthetic-based classification results. The procedure is as follows: a three-finger actuated gripper with an integrated high-resolution…
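
The fusion step named in the abstract (combining tactile- and kinesthetic-based classification results) can be illustrated with a minimal analytical sketch: naive-Bayes product-rule fusion of per-modality class posteriors. The function name, the uniform class prior, and the toy numbers below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_posteriors(p_tactile: np.ndarray, p_kinesthetic: np.ndarray) -> np.ndarray:
    """Naive-Bayes product-rule fusion of two per-modality class posteriors.

    Assumes (illustratively) that the modalities are conditionally
    independent given the class and that the class prior is uniform,
    so the fused posterior is proportional to the element-wise product
    of the per-modality posteriors.
    """
    fused = p_tactile * p_kinesthetic
    return fused / fused.sum()

# Toy example: posteriors over 3 candidate objects from each modality.
p_tac = np.array([0.6, 0.3, 0.1])  # tactile classifier favors object 0
p_kin = np.array([0.2, 0.5, 0.3])  # kinesthetic classifier favors object 1
print(fuse_posteriors(p_tac, p_kin))  # fused belief: [0.4 0.5 0.1]
```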

Cited by 43 publications (19 citation statements)
References 51 publications (69 reference statements)
“…To address this problem, several lines of research have shown that incorporating a variety of sensory modalities is key to further enhancing robotic capabilities in recognizing multisensory object properties (see [4] and [21] for a review). For example, visual and physical interaction data yield more accurate haptic classification of objects [11], and non-visual sensory modalities (e.g., audio, haptics) coupled with exploratory actions (e.g., touch or grasp) have been shown to be useful for recognizing objects and their properties [5,10,15,24,30], as well as for grounding natural language descriptors that people use to refer to objects [3,39]. More recently, researchers have developed end-to-end systems that enable robots to learn to perceive the environment and perform actions at the same time [20,42].…”
Section: Data Augmentation
confidence: 99%
“…Therefore, in some research each image represents the matrix of pressure readings at a transient moment, so a sequence of haptic images can capture how an object's physical properties change over time [20]. For haptic images, machine learning methods such as k-nearest neighbor [21], Bayesian methods [22], and traditional image-based methods [23,24] have been used to identify features. But this does not mean that haptic data must be processed with these particular methods.…”
Section: Related Work
confidence: 99%
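
The k-nearest-neighbor approach this statement mentions for haptic images [21] amounts to treating each pressure-reading matrix as a flattened feature vector and classifying by neighborhood vote. A minimal sketch follows; the 4x4 resolution, the three object classes, and the random stand-in data are placeholder assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for haptic data: each sample is a 4x4 pressure-reading
# matrix captured at one instant, labeled with an object class.
rng = np.random.default_rng(0)
X = rng.random((40, 4, 4))       # 40 haptic "images" (placeholder data)
y = rng.integers(0, 3, size=40)  # 3 hypothetical object classes

# Flatten each pressure matrix into a feature vector and fit k-NN.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X.reshape(len(X), -1), y)

query = rng.random((1, 4, 4))             # a new haptic reading
print(knn.predict(query.reshape(1, -1)))  # predicted object class
```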
“…Researchers have also proposed to connect vision and touch via cross-domain modeling [13], [48], [14], [15] or to estimate 3D human poses from a tactile carpet by taking the outputs of computer vision models as supervision [49]. Others have tried to combine tactile sensing with proprioception and kinesthetic information for object and shape recognition [50], [51]. In this paper, we also aim to establish a connection between touch and vision but focus on the task of dynamics modeling using the newly developed tactile glove [1].…”
Section: Related Work
confidence: 99%
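
Combining tactile and kinesthetic streams for object recognition, as in [50], [51] and in the cited paper's LSTM-based pipeline, is commonly realized as a recurrent classifier over per-time-step features of a grasp. Below is a minimal PyTorch sketch under assumed dimensions (16 tactile features and 7 joint readings per time step, 10 object classes); the class name and sizes are illustrative, not the cited papers' architectures.

```python
import torch
import torch.nn as nn

class TactileKinestheticLSTM(nn.Module):
    """Classify objects from a sequence of concatenated tactile and
    kinesthetic features. All dimensions are illustrative placeholders."""

    def __init__(self, tactile_dim=16, kinesthetic_dim=7,
                 hidden_dim=64, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(tactile_dim + kinesthetic_dim,
                            hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, tactile, kinesthetic):
        x = torch.cat([tactile, kinesthetic], dim=-1)  # (batch, time, feat)
        _, (h_n, _) = self.lstm(x)                     # final hidden state
        return self.head(h_n[-1])                      # class logits

# Toy batch: 2 grasps, 50 time steps each.
model = TactileKinestheticLSTM()
logits = model(torch.randn(2, 50, 16), torch.randn(2, 50, 7))
print(logits.shape)  # torch.Size([2, 10])
```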