2020
DOI: 10.48550/arxiv.2009.07083
Preprint

Event-Driven Visual-Tactile Sensing and Learning for Robots

Cited by 11 publications (15 citation statements)
References 0 publications
“…For instance, the GRID [164] visual-audio lipreading dataset is recorded using two bio-inspired silicon multimodal sensors (i.e., DVS [158] and DAS [159]). A visual-tactile event-based dataset [163] is built for intelligent power-efficient robot systems using a neuromorphic fingertip tactile sensor and an event camera. Besides, some works [150], [152]- [157], [165] attempt to integrate neuromorphic sensors with other sensing modalities (e.g., LiDAR, RGB-D camera, infrared camera, IMU, and GPS) for intelligent robots in challenging scenarios.…”
Section: B. Categorization (mentioning; confidence: 99%)
“…On the benchmark dataset side, existing BIC datasets can be classified into two categories: the simulated datasets [112], [134]- [143] and the real-world datasets [113], [114], [144]- [157] from the dataset generation perspective, and can also be classified into the single-modality datasets [144], [158]- [162] and the multi-modality datasets [150], [152]- [157], [163]- [165] from the modality perspective. The simulated BIC datasets are usually generated based on event-based simulators [134]- [140] or event cameras to record images from popular frame-based datasets on an LCD monitor [112], [141]- [143].…”
Section: Introduction: Motivation and Overview (mentioning; confidence: 99%)
“…On the other hand, there has been a surge of interest and enthusiasm for learning-based systems that use deep learning to estimate tactile information [30]. Traditional image processing/computer vision-based and learning-based methods are both used for processing vision-based tactile sensors.…”
Section: Introduction (mentioning; confidence: 99%)