2023
DOI: 10.3390/s23042208

FPGA-Based Vehicle Detection and Tracking Accelerator

Abstract: A convolutional neural network-based multiobject detection and tracking algorithm can be applied to vehicle detection and traffic flow statistics, thus enabling smart transportation. To address the high computational complexity of multiobject detection and tracking algorithms, their large number of model parameters, and the difficulty of achieving high throughput at low power on edge devices, we design and implement a low-power, low-latency, high-precision, and configurable vehicle detection…

Cited by 18 publications (11 citation statements)
References 43 publications (44 reference statements)
“…A comparison of on-chip resource utilization is presented in Figure 10. It can be seen that the proposed method achieves higher utilization of DSP and LUT resources than other works in the literature: the DSP utilization efficiency is 0.61, which is 1.4 times higher than that of reference [18], while the BRAM utilization is slightly lower than that of references [19] and [16]. The experiments show that the hardware accelerator design method proposed in this paper outperforms other published designs in terms of on-chip resource utilization.…”
Section: On-chip Resource Consumption and Utilization (mentioning, confidence: 79%)
“…It can be seen that the hardware accelerator developed in this paper uses only 108 DSPs, a 21.7-fold reduction in DSP resources compared with reference [17]. Meanwhile, by quantizing 32-bit floating-point weights to 16-bit fixed point and using CSR data structures to store and read the weight parameters, this paper further reduces on-chip BRAM consumption by a factor of 18.1 compared to reference [17]. Besides, the line-cache pipelining optimization further increases the model's throughput, which enhances the parallelism of feature-map processing and of each WGS PE.…”
Section: Experimental Comparison and Analysis (mentioning, confidence: 99%)
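The statement above combines two storage optimizations: quantizing float32 weights to 16-bit fixed point, and keeping the (sparse) weights in a compressed sparse row (CSR) structure. The C sketch below illustrates both ideas; it is not the paper's implementation, and the Q8.8 fixed-point format, the toy 3×4 matrix, and names such as quantize and row_ptr are all illustrative assumptions.

```c
/* Minimal sketch, assuming a Q8.8 fixed-point format: quantize float32
 * weights to int16 and pack the nonzeros in CSR form. */
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define FRAC_BITS 8                      /* assumed Q8.8 format */

static int16_t quantize(float w)         /* float32 -> signed 16-bit fixed point */
{
    float scaled = w * (float)(1 << FRAC_BITS);
    if (scaled >  32767.0f) scaled =  32767.0f;   /* saturate on overflow */
    if (scaled < -32768.0f) scaled = -32768.0f;
    return (int16_t)lrintf(scaled);
}

int main(void)
{
    /* Toy 3x4 weight matrix; mostly zero, as after pruning. */
    const float w[3][4] = {
        { 0.50f, 0.0f,  0.0f, -1.25f },
        { 0.0f,  0.0f,  0.75f, 0.0f  },
        { 0.0f, -0.5f,  0.0f,  0.0f  },
    };

    /* CSR arrays: nonzero values, their column indices, and row pointers. */
    int16_t vals[12];
    int     cols[12];
    int     row_ptr[4] = { 0 };
    int     nnz = 0;

    for (int r = 0; r < 3; r++) {
        for (int c = 0; c < 4; c++) {
            if (w[r][c] != 0.0f) {
                vals[nnz] = quantize(w[r][c]);
                cols[nnz] = c;
                nnz++;
            }
        }
        row_ptr[r + 1] = nnz;            /* end of row r within vals/cols */
    }

    /* Read the weights back row by row, as an accelerator would stream them. */
    for (int r = 0; r < 3; r++)
        for (int i = row_ptr[r]; i < row_ptr[r + 1]; i++)
            printf("w[%d][%d] = %d (Q8.8)\n", r, cols[i], vals[i]);
    return 0;
}
```

Storing only the nonzeros plus two index arrays is what shrinks on-chip BRAM needs when the weight matrix is sparse; the 16-bit values halve the storage of float32 on top of that.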
“…Longzhen Yu et al. used an FPGA to build a defect detector that achieves good accuracy and speed in industrial inspection [39]. Jiaqi Zhai et al. used an FPGA to build a license plate detector [10], which has promising prospects for everyday applications.…”
Section: Results (mentioning, confidence: 99%)
“…An FPGA is a semi-custom circuit: a logic array that can be programmed. FPGAs have a shorter design cycle and lower cost than CPUs and ASICs, whose logic is fixed, and their parallel computing capability allows neural-network inference to be computed quickly, so the FPGA is an ideal platform for accelerating the deployment of deep learning models [10, 11].…”
Section: Introduction (mentioning, confidence: 99%)
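To make the parallelism argument in the statement above concrete, here is a minimal dot-product kernel in HLS-style C. The Vitis-HLS-style UNROLL pragma, the vector length, and the function name are illustrative assumptions, not details from the cited works; the point is only that unrolling a multiply-accumulate loop lets synthesis instantiate parallel hardware.

```c
/* Minimal sketch (illustrative, not from the cited works): an HLS-style
 * dot-product kernel. Unrolling asks the synthesis tool to build several
 * multiply-accumulate units that operate in parallel, which is the FPGA
 * property the quoted statement appeals to. */
#include <stdint.h>

#define VEC_LEN 64                       /* assumed vector length */

int32_t dot_product(const int16_t x[VEC_LEN], const int16_t w[VEC_LEN])
{
    int32_t acc = 0;
    for (int i = 0; i < VEC_LEN; i++) {
#pragma HLS UNROLL factor=8              /* 8 MACs per clock cycle */
        acc += (int32_t)x[i] * (int32_t)w[i];
    }
    return acc;
}
```

On a CPU this loop runs largely sequentially; on an FPGA the unrolled copies become dedicated multipliers and adders, which is where the inference speedup comes from.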
“…Traditional computing platforms primarily include CPUs, GPUs, FPGAs, and ASICs. Among these, GPUs [18], FPGAs [19, 20], and ASICs [21] excel at parallel implementations and can be applied to edge-side inference. To better support deep learning algorithms while respecting the power constraints of edge devices, the core processor chips of computing platforms often adopt heterogeneous architectures.…”
Section: Introduction (mentioning, confidence: 99%)