2022
DOI: 10.1016/j.patter.2022.100543
Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform

Cited by 3 publications (2 citation statements)
References 15 publications (21 reference statements)
“…It needs to be noted that for a single modality, all I3D variants are the same. When we analyze the impact of each modality (8)(9)(10)(11)(12), we observe that the combination of video and F-T performs best. We next investigate multitask learning with MSTCN by considering only video as input (13)(14)(15)(16)(17).…”
Section: Results
Confidence: 99%
“…The types of handover failures in the dataset formed part of the MET-RICS HEART-MET competition at ICRA 2023 as possible behaviours of persons for whom a robot had to fetch an object. The dataset has also been used to create a handover failure detection benchmark hosted on Codabench [9]. We focus on failures that are caused by the human participant in both robot-to-human (R2H) and human-to-robot (H2R)…”
Section: Introduction
Confidence: 99%