“…Need system for transferring information from UE to BS [20]; OTDOA: limited accuracy (>10 m); Tracking with vision and mmWave radar [31], [32], [33]…”
Section: B. Object Tracking
“…Methods for tracking objects by combining images and radio information have been studied. Most are sensor fusion techniques that use a camera and mmWave radar [31], [32], [33]. Images from a camera and point clouds from mmWave radar are aligned, and detection and tracking based on the respective data are extrinsically merged.…”
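The alignment step described in this snippet typically amounts to projecting the radar point cloud into the camera image using an extrinsic (rotation, translation) and intrinsic calibration. A minimal sketch of that projection under a pinhole camera model follows; the calibration matrices here are hypothetical placeholders, not values from the cited works.

```python
import numpy as np

def project_radar_to_image(points_xyz, R, t, K):
    """Project 3D mmWave radar points (radar frame) into camera pixel
    coordinates using an extrinsic (R, t) and intrinsic (K) calibration.
    Returns pixel coordinates and a mask of points in front of the camera."""
    # Transform radar-frame points into the camera frame.
    cam = R @ points_xyz.T + t.reshape(3, 1)  # shape (3, N)
    in_front = cam[2] > 0                     # keep points with positive depth
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uv = (K @ (cam / cam[2]))[:2].T           # shape (N, 2)
    return uv, in_front

# Hypothetical calibration: identity rotation, camera 0.1 m above the radar,
# 600-pixel focal length, principal point at the centre of a 640x480 image.
R = np.eye(3)
t = np.array([0.0, -0.1, 0.0])
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

points = np.array([[0.0, 0.1, 5.0],   # straight ahead -> image centre
                   [1.0, 0.1, 5.0]])  # 1 m to the right
uv, valid = project_radar_to_image(points, R, t, K)
print(uv[0])  # -> [320. 240.]
```

Once radar detections land on the image plane, per-sensor detections can be associated and merged, which is the "extrinsic" fusion the snippet contrasts with the proposed intrinsic combination.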
Section: Proposed Tracking With Vision and RSSI
“…This radio information can be obtained without any additional device in the base station environment we assume. In addition, the proposed method uses the intrinsic combination of camera images and radio information for tracking, which is distinct from these sensor fusion methods [31], [32], [33]. A UE localization estimation method using camera images and RSSI was proposed [34].…”
Section: Proposed Tracking With Vision and RSSI
In a mobile millimeter wave (mmWave) communication system, blockages cause disconnections or serious degradation of communications. Several techniques have been proposed to avoid these problems by controlling radio links between multiple base stations based on blockage prediction using camera images. However, blockage prediction requires continuously determining the position of user equipment (UE) with decimeter precision, which is difficult when sensors and resources on the UE side are unavailable and many moving objects surround the UE. To resolve this problem, we propose a UE tracking method that uses a received signal strength indicator (RSSI) and RGB-D camera images from the base station. The proposed method combines visual tracking with reidentification of the UE using radio information and camera images. For reidentification, we exploit the synchronization between RSSI variations and occlusions in the image caused by movements of the UE and surrounding objects. We evaluated the proposed method experimentally in an outdoor environment by simulating a communication area formed by mmWave band base stations. The proposed method achieved an 11.5% improvement in tracking accuracy compared with conventional visual tracking.

INDEX TERMS Image analysis, millimeter wave communication, object tracking,
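The reidentification idea in the abstract — matching RSSI variation against occlusion events seen in the image — can be illustrated with a simple correlation: the candidate track whose occlusion time series best aligns with RSSI drops is taken to be the UE. This is a minimal sketch of that matching principle, not the paper's actual algorithm; all function and variable names are illustrative.

```python
import numpy as np

def reidentify_by_rssi(rssi, occlusion_tracks):
    """Pick the visual track whose occlusion time series best matches the
    RSSI variation. `rssi` is a 1-D array of RSSI samples; each entry of
    `occlusion_tracks` maps a track id to a binary occlusion indicator
    (1 = that candidate is occluded in the frame)."""
    # An occlusion of the UE's line of sight should coincide with an RSSI
    # drop, so correlate each track's occlusion signal with the negated,
    # mean-centred RSSI.
    drop = -(rssi - rssi.mean())
    scores = {}
    for track_id, occ in occlusion_tracks.items():
        occ = occ - occ.mean()
        denom = np.linalg.norm(drop) * np.linalg.norm(occ)
        scores[track_id] = float(drop @ occ / denom) if denom else 0.0
    return max(scores, key=scores.get), scores

# Toy example: the RSSI dips exactly while track "A" is occluded.
rssi = np.array([-60, -60, -75, -75, -60, -60], dtype=float)
tracks = {"A": np.array([0, 0, 1, 1, 0, 0], dtype=float),
          "B": np.array([1, 0, 0, 0, 0, 1], dtype=float)}
best, scores = reidentify_by_rssi(rssi, tracks)
print(best)  # -> A
```

In practice the RSSI and video streams would need time synchronization and smoothing before correlating, but the sketch shows why coincident drops and occlusions disambiguate the UE from other moving objects.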
“…Millimeter-wave (mmw) radar can provide accurate range and velocity measurements in all weather conditions, whereas visual sensors acquire vision information whose quality depends on lighting conditions. Fusing these two sensors makes it possible to compensate for their respective shortcomings and improve the monitoring accuracy of traffic intersections [2].…”
Section: Introduction
“…Many neural network technologies based on Lidar detections, such as PointNet and PointNet++, have been extended to corresponding versions for mmw radar. However, because mmw radar point clouds are sparse and affected by random noise, it is difficult to detect traffic targets accurately using mmw radar alone in roadside surveillance, especially in terms of shape perception and other detail information [11].…”
An efficient and accurate traffic monitoring system often takes advantage of multi-sensor detection to ensure urban traffic safety, improving the accuracy and robustness of target detection and tracking. This paper proposes a target detection method, Radar-Vision Fusion Path Aggregation Fully Convolutional One-Stage Network (RV-PAFCOS), which extends the Fully Convolutional One-Stage Network (FCOS) by introducing a radar image processing branch, a radar-vision fusion module, and a path aggregation module. The radar image processing branch focuses on image modeling based on the spatiotemporal calibration of millimeter-wave (mmw) radar and cameras, converting radar point clouds into radar images. The fusion module extracts features from radar and optical images based on a spatial attention stitching criterion. The path aggregation module enhances the reuse of feature layers, combining the positional information of shallow feature maps with deep semantic information to obtain better detection performance for both large and small targets. Experimental analysis shows that the proposed method can effectively fuse mmw radar and vision perceptions, achieving good performance in traffic target detection.
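The spatial-attention fusion the abstract describes can be sketched in a few lines: pool the radar feature map into a single spatial weight map, squash it to (0, 1), and use it to reweight the optical features before concatenating the two branches. This is an illustrative numpy sketch of the general technique; RV-PAFCOS's exact attention stitching criterion may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fuse(opt_feat, radar_feat):
    """Fuse optical and radar feature maps of shape (C, H, W) with a
    spatial attention map derived from the radar branch: radar channels
    are mean-pooled to one (H, W) map, squashed to (0, 1), and used to
    reweight the optical features before channel-wise concatenation."""
    attn = sigmoid(radar_feat.mean(axis=0))           # (H, W) spatial weights
    weighted = opt_feat * attn[None, :, :]            # emphasise radar-hit cells
    return np.concatenate([weighted, radar_feat], 0)  # (C_opt + C_radar, H, W)

opt = np.ones((8, 4, 4))       # hypothetical optical feature map
radar = np.zeros((2, 4, 4))    # hypothetical radar feature map
radar[:, 1, 2] = 5.0           # a strong radar return at one cell
fused = spatial_attention_fuse(opt, radar)
print(fused.shape)  # -> (10, 4, 4)
```

Cells with strong radar returns end up with larger attention weights, so the optical features there are emphasised relative to radar-silent regions, which is the intuition behind letting the radar branch gate the vision branch.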