2023
DOI: 10.1038/s41598-023-40961-5

Radar sensor based machine learning approach for precise vehicle position estimation

Muhammad Sohail,
Abd Ullah Khan,
Moid Sandhu
et al.

Abstract: Estimating vehicles’ position precisely is essential in Vehicular Ad hoc Networks (VANETs) for their safe, autonomous, and reliable operation. The conventional approaches used for vehicles’ position estimation, like Global Positioning System (GPS) and Global Navigation Satellite System (GNSS), pose significant data delays and data transmission errors, which render them ineffective in achieving precision in vehicles’ position estimation, especially under dynamic environments. Moreover, the existing radar-based a…
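The abstract describes radar-based position estimation. As background for how a radar measurement maps to a vehicle position, a single range/azimuth return can be projected into Cartesian coordinates in the sensor frame. This is a generic illustration, not the paper's method; the frame convention (x forward, y left) is an assumption.

```python
import math

def radar_to_cartesian(range_m: float, azimuth_deg: float) -> tuple:
    """Convert one radar range/azimuth measurement to Cartesian (x, y)
    in the sensor frame (assumed: x forward, y left)."""
    theta = math.radians(azimuth_deg)
    x = range_m * math.cos(theta)  # longitudinal offset
    y = range_m * math.sin(theta)  # lateral offset
    return x, y

# A target 100 m away at 30 degrees to the left of boresight:
x, y = radar_to_cartesian(100.0, 30.0)
```

Real systems refine many such returns over time (e.g. with filtering or learned models) rather than trusting a single measurement.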

Cited by 5 publications (4 citation statements)
References 42 publications (31 reference statements)
“…Whilst a detailed description of video processing and its schematics are omitted from this paper, interested readers are referred to our earlier studies 27 , 28 . Succinctly, the YOLO algorithm was used for object detection in a traffic scene 29 , whereas the DeepSORT algorithm tracked the movement of the detected objects (both motorised and non-motorised). The trajectories obtained from automated analysis were meticulously examined and corrected for any measurement/calibration error.…”
Section: Data and Pre-processing
confidence: 99%
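The citing study pairs YOLO detections with DeepSORT tracking. The core of any such tracker is associating new detections with existing tracks frame by frame; the sketch below shows only a greedy nearest-neighbour association on centroids. DeepSORT additionally uses a Kalman motion model and appearance embeddings, which are omitted here, and all names and thresholds are illustrative assumptions.

```python
def associate(tracks, detections, max_dist=50.0):
    """Greedily match each existing track (id -> (x, y) centroid) to its
    nearest unclaimed detection within max_dist pixels; unmatched
    detections spawn new track ids. Simplified stand-in for the
    association step inside trackers such as DeepSORT."""
    assignments = {}
    used = set()
    for tid, (tx, ty) in tracks.items():
        best, best_d = None, max_dist
        for i, (dx, dy) in enumerate(detections):
            if i in used:
                continue
            d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            used.add(best)
    # Unmatched detections become new tracks.
    next_id = max(tracks, default=-1) + 1
    for i in range(len(detections)):
        if i not in used:
            assignments[next_id] = i
            next_id += 1
    return assignments

# Two known tracks, three detections; the far detection starts track 2:
result = associate({0: (10, 10), 1: (100, 100)},
                   [(102, 98), (12, 11), (300, 300)])
```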
“…Sensor fusion involves the close-knit integration and processing of data from multiple sensors for a more comprehensive and accurate understanding of the environment. It is a crucial aspect of autonomous driving technology. A variety of sensors, including light detection and ranging (LiDAR), millimeter-wave radar (Radar), and cameras, are extensively utilized to capture diverse information about the vehicle’s surroundings. However, individual sensors are easily subject to various interferences such as changing weather conditions, electromagnetic disturbances, laser obstruction, etc., significantly affecting the measurement accuracy and reliability of the entire system. Late fusion in autonomous driving involves processing data from various sensors independently and merging their outputs at a later stage. It is particularly useful in addressing the aforementioned challenges and is expected to revolutionize future autonomous driving technology.…”
Section: Introduction
confidence: 99%
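The statement above describes late fusion: each sensor pipeline produces its own estimate, and the outputs are merged afterwards. One common merging rule, when each pipeline also reports its uncertainty, is inverse-variance weighting. This is a generic sketch of that rule, not the cited paper's fusion scheme; the example sensor variances are made up.

```python
def late_fuse(estimates):
    """Fuse independent (position, variance) estimates from separate
    sensor pipelines by inverse-variance weighting: lower-variance
    sensors dominate, and the fused variance shrinks below either input."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * pos for w, (pos, _) in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# Hypothetical radar and camera estimates of a vehicle's longitudinal
# position in metres (variance chosen for illustration only):
pos, var = late_fuse([(52.0, 0.25), (50.0, 1.0)])
```

Because the sensors are processed independently up to this point, one failing modality (e.g. a camera blinded by glare) can simply be dropped from the list before fusing.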
“…These solutions offer the advantage of furnishing precise and varied information, encompassing details such as speed and heading direction. However, the drawbacks include the typically substantial installation size and the computational power required [4][5][6]. In recent times, advancements in computing hardware performance, coupled with the development of sophisticated machine learning techniques, have facilitated the reliable recognition of vehicles in images or video streams through machine vision approaches.…”
Section: Introduction
confidence: 99%