Abstract: Traffic participant classification is critical in autonomous driving perception. Millimetre wave radio detection and ranging (RADAR) is a cost‐effective and robust means of performing this task in adverse traffic scenarios such as inclement weather (e.g. fog, snow, and rain) and poor lighting conditions. Compared to commercial two‐dimensional RADAR, the new generation of three‐dimensional (3D) RADAR can obtain height information about targets as well as their dense point clouds, greatly improving target classi…
“…Cars have been the easiest to detect (due to their unique signatures) with an average precision of 0.999, almost perfect classification. It can be observed in table 3 that RODNET [44], Ramp CNN [45], the Bivariant Cross attention model [48] and MLP-22 [49] also perform well on car classification. However, cars are lower on the vulnerable road user scale, so higher emphasis has to be given to pedestrians and bicyclists.…”
Section: Results
confidence: 98%
“…The developed method performs better than the existing methods, as can be seen in table 3. Pedestrians are detected with an average precision of 0.993 by the developed F-ROADNET, whereas RODNET [44], Ramp CNN [45], the Bivariant Cross attention model [48] and MLP-22 [49] achieved average precisions of 0.88, 0.89, 0.91 and 0.90 respectively. Bicyclists have been the most challenging to classify correctly; even so, F-ROADNET achieves an average precision of 0.951, exceeding the average precision accomplished by the other methods.…”
Section: Results
confidence: 99%
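The per-class scores quoted in these snippets are average precision, i.e. the area under the precision-recall curve swept over detection confidence thresholds. A minimal sketch of that computation, with hypothetical scores and ground-truth labels (not data from the paper):

```python
def average_precision(scores, labels):
    """Uninterpolated average precision: area under the PR curve
    obtained by ranking detections by descending confidence."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    total_pos = sum(labels)
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        ap += precision * (recall - prev_recall)  # rectangle per recall step
        prev_recall = recall
    return ap

# a perfect ranking (all positives scored above all negatives) gives AP = 1.0
print(average_precision([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

Evaluation toolkits differ in interpolation details (e.g. 11-point vs all-point AP), so published numbers depend on the exact variant used.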
“…Comparisons with already-developed techniques ([44], [45], [49] and [48]) were done to assess the developed method's potential. The developed method performs better than the existing methods, as can be seen in table 3.…”
Road user categorization is essential for autonomous driving perception, particularly in challenging traffic situations involving unfavorable weather (such as fog, snow, and rain) and dim lighting. Several kinds of sensors need to be researched in order to achieve the precision and resilience that autonomous systems demand. Currently, mainly cameras and laser scanners (LiDAR) are used to create a depiction of the environment surrounding the vehicle. Despite their enticing qualities, Radar sensors are currently underutilized for autonomous driving, even though they have been employed in the automobile industry for a long time. Radar's ability to measure the relative speed of obstacles and to operate even in adverse weather conditions makes it a front-line contender for road user detection. This study proposes F-ROADNET, a multi-object classification method for vulnerable road users based on raw Radar data. The model is trained on Range-Angle and Range-Doppler maps using a late fusion architecture. F-ROADNET achieves a detection accuracy of 99.01%, precision of 99.3% and recall of 99% on the CARRADA dataset, and a detection accuracy of 91.62%, precision of 87.2% and recall of 90.2% on the RADDet dataset. The findings show that F-ROADNET outperforms established methods in terms of average precision.
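As a rough illustration of the late fusion idea in this abstract, here is a minimal decision-level fusion sketch in NumPy: each branch (Range-Angle and Range-Doppler) scores the classes independently, and the scores are combined only afterwards. The branch logits, class names and fusion weight below are hypothetical, not F-ROADNET's actual architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def late_fusion(ra_logits, rd_logits, w_ra=0.5):
    """Decision-level (late) fusion: each view is classified
    independently and the class probabilities are averaged afterwards."""
    return w_ra * softmax(ra_logits) + (1 - w_ra) * softmax(rd_logits)

classes = ["pedestrian", "bicyclist", "car"]
ra = np.array([2.0, 0.5, 0.1])  # hypothetical Range-Angle branch logits
rd = np.array([1.5, 0.8, 0.2])  # hypothetical Range-Doppler branch logits
fused = late_fusion(ra, rd)
print(classes[int(np.argmax(fused))])  # pedestrian
```

The design point of late fusion is that each view keeps its own feature extractor, so a degraded view (e.g. a noisy Doppler map) only contributes one of the averaged score vectors rather than corrupting a shared feature space.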
“…Specifically, if short- and long-range radar sensors are installed at the periphery of an autonomous vehicle, they will monitor the real-time position and speed of surrounding objects, including vehicles and pedestrians [128]. However, 2D radar, which can scan only in the horizontal plane, cannot reconstruct the height information of obstacles, so collisions may occur when the vehicle travels on a height-limited road [129]. 3D radar sensors will be applied to solve this problem.…”
Section: (C) Radio Detection and Ranging (Radar) Sensors
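A simple worked example of why the elevation measurement matters: a 3D radar that reports range and elevation angle can recover a reflection point's height as h = h_sensor + r·sin(elevation), which a horizontal-plane-only 2D radar cannot do. The mounting height and measurement values below are hypothetical:

```python
import math

def target_height(range_m, elevation_deg, sensor_height_m=0.5):
    """Height above ground of a reflection point, from a 3D radar's
    range and elevation-angle measurement (hypothetical mounting height)."""
    return sensor_height_m + range_m * math.sin(math.radians(elevation_deg))

# a reflection 30 m ahead at +4 degrees elevation, radar mounted at 0.5 m
h = target_height(30.0, 4.0)
print(round(h, 2))  # 2.59
```

With that height estimate, an overhead structure at 2.59 m can be distinguished from an on-road obstacle and compared against the vehicle's own height before deciding whether a height-limited road is passable.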
In response to severe environmental and energy crises, the world is increasingly focusing on electric vehicles (EVs) and related emerging technologies. Emerging technologies for EVs have great potential to accelerate the development of smart and sustainable transportation and help build future smart cities. This paper reviews new trends and emerging EV technologies, including wireless charging, smart power distribution, vehicle-to-home (V2H) and vehicle-to-grid (V2G) systems, connected vehicles, and autonomous driving. The opportunities, challenges, and prospects for emerging EV technologies are systematically discussed. Successful cases of commercialization of emerging EV technologies worldwide are provided. This review serves as a reference and guide for future technological development and commercialization of EVs and offers perspectives and recommendations on future smart transportation.
“…[12] proposed a target classification network using self‐attention mechanisms for millimetre‐wave automotive radar systems. They also classified targets on the road by extracting multidimensional feature vectors from the point cloud data and training machine learning‐based classifiers with those vectors [13].…”
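A minimal sketch of the kind of feature extraction described in [13], assuming the feature vector is built from the point cloud's spatial extents plus its point count (a hypothetical feature set, not the cited authors' exact one):

```python
import numpy as np

def spatial_features(points):
    """Feature vector for a radar point-cloud cluster: extents along
    x/y/z (length, width, height) plus the number of reflection points."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    length, width, height = maxs - mins
    return np.array([length, width, height, float(len(points))])

# hypothetical cluster: small footprint, tall -> pedestrian-like
pts = np.array([[0.0, 0.0, 0.0],
                [0.4, 0.3, 1.7],
                [0.2, 0.1, 0.9]])
print(spatial_features(pts))
```

Such fixed-length vectors can then be fed to any conventional classifier (SVM, random forest, etc.) without the model having to handle variable-size point clouds directly.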
In this study, a target classification method based on point cloud data in a high‐resolution radar sensor is proposed. By using multiple antenna elements arranged in horizontal and vertical directions, pedestrians, cyclists and vehicles can be expressed as point cloud data in the three‐dimensional (3D) space. To perform target classification using the spatial characteristics (i.e. length, height and width) of the target, the 3D point cloud data is orthogonally projected onto the xy, yz and zx planes, respectively, and three types of images are generated. Then, a multi‐view convolutional neural network (CNN)‐based target classifier using those three images as inputs is designed. To this end, a method for synthesising the detection results of three directions in series or in parallel is proposed. The proposed classifier can learn the spatial characteristics of the target by using the detection results of multiple viewpoints. Compared to the CNN‐based classifier that uses only the detection result of a single plane as input, the proposed method shows 4.5 percentage points higher classification accuracy for the target class with the lowest classification accuracy. In addition, the proposed multi‐view CNN structure shows improved classification performance and shorter training time compared to the well‐known deep learning methods for image classification.
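The orthogonal-projection step this abstract describes can be sketched as follows: each 3D point is binned into a binary occupancy image for the xy, yz and zx planes, giving the three views the multi-view CNN consumes. The grid size and spatial extent are hypothetical parameters, not the paper's values:

```python
import numpy as np

def project_views(points, grid=32, extent=4.0):
    """Orthogonally project a 3D point cloud onto the xy, yz and zx
    planes, producing three binary occupancy images (one per view)."""
    planes = {"xy": (0, 1), "yz": (1, 2), "zx": (2, 0)}
    views = {}
    for name, (a, b) in planes.items():
        img = np.zeros((grid, grid), dtype=np.uint8)
        # map coordinates in [-extent, extent) to pixel indices
        idx = ((points[:, [a, b]] + extent) / (2 * extent) * grid).astype(int)
        idx = np.clip(idx, 0, grid - 1)
        img[idx[:, 0], idx[:, 1]] = 1
        views[name] = img
    return views

# hypothetical pedestrian-like cluster: narrow footprint, tall extent
pts = np.array([[0.0, 1.0, z] for z in np.linspace(0.0, 1.7, 10)])
views = project_views(pts)
# the top-down (xy) view collapses to a single occupied cell, while the
# side views (yz, zx) capture the vertical extent
print({k: int(v.sum()) for k, v in views.items()})
```

The point of the three complementary views is visible even in this toy case: the target's height only appears in the yz and zx projections, which is exactly the spatial characteristic a single top-down plane would lose.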