2023
DOI: 10.1002/lpor.202300424

Hardware Implementation of Ultra‐Fast Obstacle Avoidance Based on a Single Photonic Spiking Neuron

Shuang Gao,
Shuiying Xiang,
Ziwei Song
et al.

Abstract: Visual obstacle avoidance is widely applied in unmanned aerial vehicle (UAV) and mobile robot fields. A simple system architecture, low power consumption, optimized processing, and real-time performance are urgently needed because of the limited payload of some mini UAVs. To address these issues, an obstacle avoidance system harnessing the rate-encoding features of a photonic spiking neuron based on a Fabry–Pérot (FP) laser is proposed, which simulates monocular vision. Here, time to collision is used to de…
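The abstract is truncated above. As a rough, hedged sketch of the general idea rather than the paper's photonic implementation, time to collision can be estimated from the relative expansion rate of an obstacle in the monocular image, and a simple software leaky-integrate-and-fire neuron can stand in for the FP-laser neuron's rate encoding. All functions, names, and parameter values below are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def time_to_collision(theta, theta_prev, dt):
    """Estimate time to collision (s) from the angular size of an obstacle in
    two consecutive frames: tau ~= theta / (d theta / dt)."""
    d_theta = (theta - theta_prev) / dt
    return np.inf if d_theta <= 0 else theta / d_theta

def lif_spike_rate(drive, duration=0.1, dt=1e-4, tau_m=5e-3, v_th=1.0):
    """Software leaky-integrate-and-fire neuron, used here only as a stand-in
    for the photonic spiking neuron: a stronger drive yields a higher spike rate."""
    v, spikes = 0.0, 0
    for _ in range(int(duration / dt)):
        v += dt * (-v / tau_m + drive)  # leaky integration of the input drive
        if v >= v_th:                   # threshold crossing emits a spike
            spikes += 1
            v = 0.0                     # reset after each spike
    return spikes / duration            # spikes per second

# Example: the obstacle grows from 0.10 rad to 0.12 rad in 50 ms -> TTC = 0.3 s.
ttc = time_to_collision(0.12, 0.10, 0.05)
drive = 100.0 / ttc  # illustrative mapping: closer obstacle -> stronger drive
rate = lif_spike_rate(drive)
print(f"TTC = {ttc:.2f} s, spike rate = {rate:.0f} Hz")
```

In this toy mapping a shrinking time to collision produces a higher spike rate, which is the kind of rate-encoded danger signal the abstract describes the photonic neuron providing.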

Cited by 5 publications (1 citation statement)
References: 52 publications
“…However, such an architecture is inefficient for computational models that are distributed, massively parallel, and adaptive, most notably the neural networks used in machine learning (ML). ML attempts to achieve human-level performance on tasks that are challenging for traditional computers but easy for humans. Furthermore, the human brain, which can be abstracted as a neural network, is a dynamic system characterized by high parallelism and adaptability. Reservoir computing (RC), an ML approach based on the principles of dynamical systems theory, attempts to mimic these neural-network characteristics, although its structure is far simpler than that of the human brain. In contrast to other ML models, the input and reservoir weights of RC are randomly generated and fixed; only the output weights, which connect the reservoir to the output layer, need to be trained, which greatly reduces the difficulty and computational complexity of model training.…”
Section: Introduction
Confidence: 99%
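As a hedged illustration of the RC training scheme described in this excerpt (not code from the cited work), a minimal echo-state-network-style reservoir might look as follows in Python; the reservoir size, spectral radius, ridge parameter, and the toy sine-prediction task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, randomly generated weights: these are never trained.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # keep spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave (illustrative only).
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)

# Only the readout (output weights) is trained, here by ridge regression.
ridge = 1e-6
W_out = y @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_res))

y_pred = X @ W_out
print("training NMSE:", np.mean((y_pred - y) ** 2) / np.var(y))
```

Only W_out is fitted; W_in and W_res stay exactly as randomly generated, which is the reduction in training difficulty and computational complexity that the excerpt highlights.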