2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00962

X-World: Accessibility, Vision, and Autonomy Meet

Cited by 9 publications (7 citation statements) · References 46 publications

“…Since the data were recorded in New York City, many demographics are captured. This is particularly important since some of these groups, such as wheelchair users and people with varying levels and types of disabilities, are absent from large-scale datasets in computer vision and robotics, creating a steep barrier to developing accessibility-aware autonomous systems [52]. Identifying pedestrians with disabilities, the qualities of their behavior, and their ease in traversing the sensed urban environment is an area of possible exploration with datasets such as this one.…”
Section: Discussion
confidence: 99%
“…[GRL*19] use a GCNN for mesh generation; Zhang et al. [ZLM*19] regress camera and mesh parameters with an iterative regression module; finally, Zhou et al. [ZHX*20] apply an inverse kinematics network for the first time in the context of hands.…”
Section: State-of-the-art Methods
confidence: 99%
“…All of them train on synthetic or mixed datasets with ground-truth 3D hand meshes and poses, and some fine-tune on in-the-wild images using either 2D annotations only [BBT19] or rendered depth maps [GRL*19]. Moreover, MANO differentiability enables end-to-end trainable architectures [BBT19, ZLM*19]. Further characteristics of these methods are: Boukhayma et al.…”
Section: State-of-the-art Methods
confidence: 99%
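
The end-to-end trainability noted in the excerpt above follows from the hand model being a differentiable function of its pose and shape parameters: a reprojection loss on predicted keypoints can backpropagate through the mesh into the network that regresses those parameters. Below is a minimal PyTorch sketch of that idea, assuming a simplified MANO-style linear blend model; `SimpleHandModel`, `ParamRegressor`, the dimensions, and the projection step are hypothetical stand-ins for illustration, not the actual MANO layer or any of the cited architectures.

```python
# Sketch: a differentiable MANO-style hand layer lets gradients from a
# 2D reprojection loss flow back into a parameter-regression network.
# All sizes, bases, and names here are illustrative, not the real MANO model.
import torch
import torch.nn as nn

N_VERTS, N_SHAPE, N_POSE, N_JOINTS = 778, 10, 45, 21  # MANO-like dimensions

class SimpleHandModel(nn.Module):
    """Hypothetical linear blend model: verts = template + B_s @ beta + B_p @ theta."""
    def __init__(self):
        super().__init__()
        # Fixed (non-trained) bases, as they would come from a fitted hand model.
        self.register_buffer("template", torch.randn(N_VERTS, 3))
        self.register_buffer("shape_basis", torch.randn(N_VERTS * 3, N_SHAPE))
        self.register_buffer("pose_basis", torch.randn(N_VERTS * 3, N_POSE))
        self.register_buffer("joint_reg", torch.rand(N_JOINTS, N_VERTS))

    def forward(self, beta, theta):
        # Vertices are a differentiable (here: linear) function of the parameters.
        offsets = beta @ self.shape_basis.T + theta @ self.pose_basis.T
        verts = self.template + offsets.view(-1, N_VERTS, 3)
        joints = self.joint_reg @ verts  # sparse joint locations from the mesh
        return verts, joints

class ParamRegressor(nn.Module):
    """Toy encoder head that regresses hand parameters from image features."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.head = nn.Linear(feat_dim, N_SHAPE + N_POSE)

    def forward(self, feats):
        out = self.head(feats)
        return out[:, :N_SHAPE], out[:, N_SHAPE:]

hand, reg = SimpleHandModel(), ParamRegressor()
opt = torch.optim.Adam(reg.parameters(), lr=1e-4)

feats = torch.randn(8, 512)             # stand-in for backbone features
kp2d_gt = torch.randn(8, N_JOINTS, 2)   # stand-in for 2D keypoint annotations

beta, theta = reg(feats)
verts, joints = hand(beta, theta)
# Toy perspective-style projection; a real pipeline would use camera parameters.
kp2d_pred = joints[..., :2] / joints[..., 2:3].clamp(min=1e-3)

loss = nn.functional.mse_loss(kp2d_pred, kp2d_gt)
loss.backward()  # gradients flow through the hand model into the regressor
opt.step()
```

This also illustrates why fine-tuning on in-the-wild images with only 2D annotations is possible, as the excerpt notes: the supervision never requires ground-truth 3D meshes, only a differentiable path from regressed parameters to projected keypoints.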
“…While humans can effortlessly transfer general navigation knowledge across settings and platforms [37,48,54,55], current real-world development of navigation agents generally deploys within a fixed pre-assumed setting (e.g., geographical location, use-case) and carefully calibrated sensor configurations. Consequently, each autonomy use-case generally requires its own prohibitive data collection and platform-specific annotation efforts [4,6,19,58,81]. Due to such development bottlenecks, brittle navigation models trained in-house by various developers (e.g., Tesla's Autopilot [24], Waymo's Driver [4], Amazon's Astro [71], Fedex's Roxo [26], etc.)…”
Section: Introduction
confidence: 99%