2022
DOI: 10.1109/lra.2022.3190641
DigiTac: A DIGIT-TacTip Hybrid Tactile Sensor for Comparing Low-Cost High-Resolution Robot Touch

Cited by 37 publications (21 citation statements)
References 24 publications
“…Adversarial loss (Eq. 10), where we set the hyperparameters α = 100, β = 200, and γ = 1, tuned experimentally. Finally, for training the adversarial discriminator D_ψ, we use the cGAN objective described in [36].…”
Section: E. Real-to-Simulation Generative Network
mentioning
confidence: 99%
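The weighted loss combination quoted above (α = 100, β = 200, γ = 1) can be sketched as a simple weighted sum of generator loss terms. The individual term names below (pixel, perceptual, adversarial) are illustrative assumptions — the statement only gives the weights, not the terms they scale:

```python
import numpy as np

# Weights quoted in the citation statement; the loss terms they multiply
# are hypothetical placeholders, not the paper's actual formulation.
ALPHA, BETA, GAMMA = 100.0, 200.0, 1.0

def composite_generator_loss(pixel_loss, perceptual_loss, adversarial_loss):
    """Weighted sum of per-term losses, as in the quoted hyperparameter setup."""
    return ALPHA * pixel_loss + BETA * perceptual_loss + GAMMA * adversarial_loss

# Example with made-up per-term values from one training step:
total = composite_generator_loss(0.02, 0.01, 0.5)
print(total)  # 100*0.02 + 200*0.01 + 1*0.5 = 4.5
```

Because α and β dwarf γ, reconstruction-style terms dominate early training while the adversarial term mainly sharpens fine detail — a common balance in cGAN image-translation setups.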
“…In detail, the deformation of soft artificial skins upon physical contact with an object is detected through optical tracking of visual features, such as markers or reflective membranes, and then translated into tactile information including contact location, force, vibration, and object texture. ViTac sensors have proven useful in small-scale manipulation tasks with robotic hands and fingers [9], [10]; however, their potential in large-scale whole-arm applications has not been comprehensively investigated.…”
mentioning
confidence: 99%
“…Common marker-analysis technologies and the tactile information they recover (table flattened in the original):
- Optical flow method [28], [37]; finite element model [63], [44]; neural network [58], [53] — mainly machine-learning-based approaches
- Speckle detection [77], [80]; feature enhancement [81], [82] — mainly physical-model-based approaches
- Stereo vision [25], [94]; virtual stereo vision [23], [26] — mainly physical-model-based approaches
- 2D outputs: contact area [24], [20], [91]; 2D force distribution [17], [86], [97]; slip field [79], [21]; friction coefficient [23]
- 3D tactile perception: geometric features [46], [82]; 3D geometry [18], [82]; 3D force distribution [63], [87]

Marker tracking is commonly used in the field of visuotactile sensing. By using a camera to photograph markers prepared on the sensor's contact elastomer, a tactile image containing the positional changes of the markers can be obtained, and tactile information can be further extracted by post-processing and analyzing this image.…”
Section: Common Technologies
mentioning
confidence: 99%
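The marker tracking described above can be sketched as a displacement-field computation between marker positions at rest and under contact. This minimal numpy version uses nearest-neighbour matching between the two marker sets — an illustrative assumption, not any specific sensor's pipeline (real systems typically use optical flow or learned trackers on the raw images):

```python
import numpy as np

def marker_displacements(markers_ref, markers_def):
    """Match each deformed marker to its nearest rest-position marker and
    return per-marker displacement vectors (a crude tactile 'flow field')."""
    markers_ref = np.asarray(markers_ref, dtype=float)  # (N, 2) rest positions
    markers_def = np.asarray(markers_def, dtype=float)  # (N, 2) under contact
    # Pairwise distances between deformed and reference markers.
    d = np.linalg.norm(markers_def[:, None, :] - markers_ref[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                 # closest rest marker per deformed one
    return markers_def - markers_ref[nearest]  # displacement per marker

# Example: a 2x2 marker grid where contact pushes all markers toward (1, 1).
ref = [[0, 0], [0, 2], [2, 0], [2, 2]]
deformed = [[0.2, 0.2], [0.2, 1.8], [1.8, 0.2], [1.8, 1.8]]
print(marker_displacements(ref, deformed))
```

Downstream quantities such as contact area, shear, or slip fields are then estimated from statistics of this displacement field.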
“…The GelSight sensor [2] and its various revised versions [4, 8, 9, 11] use the photometric stereo technique [3] to measure high-resolution 3D geometry of contact objects with a monocular camera. With a GelSight sensor mounted on the gripper, robots can accomplish challenging tasks such as texture recognition [12, 13, 14], dexterous manipulation [15, 16, 17, 18], shape mapping [19, 20], and liquid property estimation [21].…”
Section: Related Work
mentioning
confidence: 99%
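Photometric stereo, as used by GelSight-style sensors, recovers surface normals from pixel intensities observed under multiple known light directions. A minimal Lambertian least-squares sketch for a single pixel (illustrative only — the actual GelSight implementation uses calibrated lookup tables and per-pixel corrections):

```python
import numpy as np

def estimate_normal(light_dirs, intensities):
    """Lambertian photometric stereo for one pixel:
    I = L @ (rho * n), so solve for g = rho * n by least squares,
    then normalize g to obtain the unit surface normal n."""
    L = np.asarray(light_dirs, dtype=float)    # (K, 3) unit light directions
    I = np.asarray(intensities, dtype=float)   # (K,) observed intensities
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * normal
    return g / np.linalg.norm(g)

# Example: three axis-aligned lights over a surface with true normal (0, 0, 1);
# intensities follow the Lambertian model with albedo 0.8.
lights = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
true_n = np.array([0.0, 0.0, 1.0])
obs = 0.8 * np.asarray(lights) @ true_n        # -> [0.0, 0.0, 0.8]
print(estimate_normal(lights, obs))            # ~[0. 0. 1.]
```

Integrating the resulting per-pixel normal field (e.g., via a Poisson solver) yields the high-resolution depth map of the contacted object.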