2017
DOI: 10.48550/arxiv.1705.05065
Preprint
AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles

Cited by 46 publications (66 citation statements)
References 0 publications
“…EXPERIMENTS Dataset. For our dataset, we use the photo-realistic, multirobot AirSim [14] simulator to collect synchronized RGB, depth, semantic segmentation, and pose information from a realistic city environment with dynamic objects. Within this environment, we control a non-rigid swarm of 6 drones to various road intersections.…”
Section: Methods: A Multi-Agent Infilling Network
Citation type: mentioning
Confidence: 99%
“…The publication [95] deals with a hybrid camera array-based autonomous landing procedure with a high field of view camera (fish-eye camera) and a tailored control strategy. The authors describe the development of their system with an initial simulation and parameter identification in Microsoft AirSim [146]. An ArUco marker was attached to the landing platform, and multiple tests validated the performance of the system in simulations and a real scenario.…”
Section: Landing the UAV on an AGV
Citation type: mentioning
Confidence: 99%
“…To produce this dataset, we spawned a swarm of 6 drones in the photo-realistic, multi-robot AirSim [15] simulator. We commanded the drones to move roughly together throughout the environment, capturing synchronized images along the way.…”
Section: × R
Citation type: mentioning
Confidence: 99%
“…To demonstrate the efficacy of our approach, we perform extensive experimentation in a photo-realistic simulation environment (AirSim [15]), specifically investigating how a group of mobile robots can communicate their observations to overcome unexpected foreground obstructions, such as occluding vegetation and wildlife. Our method increases performance on a multi-agent semantic segmentation task by an absolute 11% IoU over strong baselines, and it approaches upper bounds that utilize ground truth transformations across the agents' sensors, while saving significant bandwidth.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
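The excerpt above reports an "absolute 11% IoU" gain on semantic segmentation. For readers unfamiliar with the metric, here is a minimal pure-Python sketch of per-class intersection-over-union (Jaccard index) averaged over classes; the function name and the flattened-label-list input format are illustrative assumptions, not taken from the cited paper.

```python
from collections import defaultdict

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over all classes that appear
    in either the predicted or the ground-truth label list.

    pred, target: flat sequences of integer class labels (same length).
    """
    inter = defaultdict(int)   # per-class intersection counts
    union = defaultdict(int)   # per-class union counts
    for p, t in zip(pred, target):
        if p == t:
            inter[t] += 1      # pixel counted once in both sets
            union[t] += 1
        else:
            union[p] += 1      # pixel belongs to the union of each class
            union[t] += 1
    ious = [inter[c] / union[c] for c in range(num_classes) if union[c] > 0]
    return sum(ious) / len(ious)

# Toy example: 4 pixels, 2 classes.
# Class 0: intersection 1, union 2 -> IoU 0.5
# Class 1: intersection 2, union 3 -> IoU 2/3
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)
```

An "absolute 11% IoU" improvement means this score rises by 0.11 (e.g. from 0.50 to 0.61), as opposed to a relative 11% increase.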