2018 AIAA Guidance, Navigation, and Control Conference
DOI: 10.2514/6.2018-1604
CubeSat Simulation and Detection using Monocular Camera Images and Convolutional Neural Networks

Cited by 19 publications (18 citation statements)
References 3 publications
“…The implementation of CNNs for monocular pose estimation in space has already become an attractive solution in recent years [10][11][12], thanks also to the creation of the Spacecraft PosE Estimation Dataset (SPEED) [11], a database of highly representative synthetic images of PRISMA's TANGO spacecraft made publicly available by Stanford's Space Rendezvous Laboratory (SLAB) that can be used to train and test different network architectures. One of the main advantages of CNNs over standard feature-based algorithms for relative pose estimation [3,13,14] is an increase in robustness under adverse illumination conditions, as well as a reduction in computational complexity.…”
Section: Introduction
confidence: 99%
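As an illustration of the approach described in the statement above, the following is a minimal sketch (not the cited authors' implementation) of a CNN that regresses a relative pose, position plus unit quaternion, from a single monocular image, using a pretrained ResNet-50 backbone from torchvision. The backbone choice, input size, and 7-element output are assumptions.

```python
# Hedged sketch: monocular pose regression with a pretrained CNN backbone.
# This is NOT the implementation from the cited papers; architecture choices
# (ResNet-50 backbone, 224x224 inputs, 7-element pose output) are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MonocularPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        self.head = nn.Linear(backbone.fc.in_features, 7)  # 3 translation + 4 quaternion

    def forward(self, image):
        x = self.features(image).flatten(1)
        pose = self.head(x)
        t, q = pose[:, :3], pose[:, 3:]
        q = q / q.norm(dim=1, keepdim=True)  # normalize quaternion to unit length
        return t, q

# Example forward pass on a dummy 224x224 RGB image batch.
model = MonocularPoseNet().eval()
with torch.no_grad():
    t, q = model(torch.rand(1, 3, 224, 224))
```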
“…[34] for noncooperative spacecraft to solve a classification problem and return the relative pose of the space target associated with each image [14]. Shi et al. use Inception-ResNet-V2 [35] and ResNet-101 [36], combined with an object detection engine [19], which improves their reliability. Sharma and D'Amico propose the SPN network [20,37], based on five convolutional layers, a Region Proposal Network (RPN) [38], and three fully-connected layers, in order to generate the relative attitude of the target spacecraft.…”
Section: Pose Determination for Noncooperative Space Targets
confidence: 99%
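The SPN description above (five convolutional layers, an RPN, and three fully-connected layers) can be roughly pictured with the sketch below. It is only a structural illustration with assumed layer widths; the region-proposal stage is stubbed out as a fixed center crop rather than a real RPN, and none of the hyperparameters come from the cited papers.

```python
# Rough structural sketch loosely inspired by the SPN description above:
# a small five-layer convolutional feature extractor followed by a three-layer
# fully-connected attitude head. Layer widths are assumptions; the Region
# Proposal Network is NOT implemented here (a fixed center crop stands in for it).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class SpnLikeSketch(nn.Module):
    def __init__(self, n_attitude_classes=1000):
        super().__init__()
        # Five convolutional layers (assumed channel sizes).
        self.conv = nn.Sequential(conv_block(3, 32), conv_block(32, 64),
                                  conv_block(64, 128), conv_block(128, 256),
                                  conv_block(256, 256), nn.AdaptiveAvgPool2d(1))
        # Three fully-connected layers producing an attitude output.
        self.fc = nn.Sequential(nn.Linear(256, 512), nn.ReLU(inplace=True),
                                nn.Linear(512, 512), nn.ReLU(inplace=True),
                                nn.Linear(512, n_attitude_classes))

    def forward(self, image):
        # Stand-in for the RPN: assume the target occupies the image center.
        crop = image[:, :, 32:-32, 32:-32]
        features = self.conv(crop).flatten(1)
        return self.fc(features)

logits = SpnLikeSketch()(torch.rand(1, 3, 256, 256))
```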
“…The CNNs have the advantages of higher robustness to adverse illumination conditions and lower computational complexity compared with classical feature-based algorithms. Most of the CNN-based pose determination methods [14,[17][18][19][20][21] are designed to solve a classification or regression problem and then return the relative pose of the space target, which is described in detail in the related work section. However, compared with CNNs based on keypoints, these regression or classification models generalize less well and are more easily disturbed by a low signal-to-noise ratio, and the multiscale characteristics of space target images are often ignored.…”
Section: Introduction
confidence: 99%
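For contrast with direct regression or classification, the keypoint-based alternative mentioned in the statement above typically predicts 2D image locations of known 3D model points and then recovers the pose geometrically. The sketch below shows only that second, geometric step with OpenCV's solvePnP; the 3D model points, camera intrinsics, and predicted 2D keypoints are placeholder values, and the keypoint-prediction CNN itself is omitted.

```python
# Hedged sketch of the keypoint-based pipeline's geometric stage: given 2D
# keypoints predicted by a CNN (omitted here) and the corresponding known 3D
# model points, recover the relative pose with a PnP solver. All numbers
# below are placeholders, not values from the cited papers.
import numpy as np
import cv2

# Known 3D keypoints in the target's body frame (placeholder coordinates, meters).
model_points_3d = np.array([[ 0.1,  0.1,  0.0],
                            [-0.1,  0.1,  0.0],
                            [-0.1, -0.1,  0.0],
                            [ 0.1, -0.1,  0.0],
                            [ 0.0,  0.0,  0.2],
                            [ 0.0,  0.0, -0.2]], dtype=np.float64)

# 2D keypoints as a keypoint CNN might predict them (placeholder pixels).
image_points_2d = np.array([[420.0, 310.0], [380.0, 312.0], [382.0, 350.0],
                            [418.0, 348.0], [400.0, 295.0], [401.0, 365.0]],
                           dtype=np.float64)

# Assumed pinhole camera intrinsics (focal length and principal point in pixels).
K = np.array([[800.0,   0.0, 400.0],
              [  0.0, 800.0, 320.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points_3d, image_points_2d, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation of the target w.r.t. the camera frame
    print("relative attitude:\n", R, "\nrelative position:", tvec.ravel())
```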
“…This was performed by means of transfer learning on the last three fully-connected layers. Shi et al. [52] used two state-of-the-art CNNs, namely Inception-ResNet-V2 [53] and ResNet-101 [54], in combination with an object detection engine [55] to improve their reliability. Synthetic images generated in the 3DS-Max software were combined with real images to train and test the two networks: specifically, 400 images were used for training and 100 for testing, of which 8% were real images.…”
Section: CNN-Based Pose Estimation
confidence: 99%
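As a rough picture of the transfer-learning setup described above, the sketch below freezes a pretrained ResNet-101 from torchvision and retrains only a newly attached stack of fully-connected layers. The number of pose classes, the hidden widths, and the optimizer settings are assumptions, not the cited authors' choices.

```python
# Hedged transfer-learning sketch: freeze a pretrained ResNet-101 backbone and
# train only a new fully-connected head. Class count, hidden widths, and the
# learning rate are assumed values, not taken from the cited papers.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False  # keep the pretrained convolutional features fixed

# Replace the original classifier with a small trainable fully-connected stack.
n_pose_classes = 64  # assumption: number of discretized relative-pose classes
backbone.fc = nn.Sequential(
    nn.Linear(backbone.fc.in_features, 1024), nn.ReLU(inplace=True),
    nn.Linear(1024, 256), nn.ReLU(inplace=True),
    nn.Linear(256, n_pose_classes),
)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (real training would iterate
# over the mixed synthetic/real image set described in the text).
images = torch.rand(4, 3, 224, 224)
labels = torch.randint(0, n_pose_classes, (4,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```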
“…Also, larger datasets shall be considered for a comprehensive comparative assessment of the CNN architecture against conventional pose determination architectures. Furthermore, assumptions on the illumination environment, target texture, and reflectance properties shall be investigated to increase the robustness of the pose estimation, and different CNNs, such as GoogLeNet, the ResNets, and DenseNet, shall be traded off with respect to computational time and pose estimation accuracy, following the promising results reported in [52] for the Inception-ResNet-V2…”
Section: CNN-Based Pose Estimation
confidence: 99%
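A trade-off study like the one suggested in the statement above could start from a simple inference-latency comparison of candidate backbones, as in the hedged sketch below. Accuracy would of course have to come from training each network on the pose dataset; the input size and repetition count here are arbitrary choices.

```python
# Hedged sketch: compare rough CPU inference latency of the candidate backbones
# named in the text (GoogLeNet, ResNet, DenseNet). Accuracy comparison is not
# covered here; input size and repetition count are arbitrary.
import time
import torch
from torchvision import models

candidates = {
    "googlenet": models.googlenet(weights=None, init_weights=True),
    "resnet101": models.resnet101(weights=None),
    "densenet121": models.densenet121(weights=None),
}

dummy = torch.rand(1, 3, 224, 224)
for name, net in candidates.items():
    net.eval()
    with torch.no_grad():
        net(dummy)  # warm-up pass
        start = time.perf_counter()
        for _ in range(10):
            net(dummy)
    print(f"{name}: {(time.perf_counter() - start) / 10 * 1e3:.1f} ms per image")
```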