In this paper, we present a performance analysis of various descriptors suited to human gait analysis in Rotating Multi-Beam (RMB) Lidar measurement sequences. The gait descriptors for training and recognition are observed and extracted in realistic outdoor surveillance scenarios, where multiple pedestrians walk concurrently in the field of interest, their trajectories often intersect, and occlusions or background noise may affect the observation. For the Lidar scenes, we compared modified versions of five approaches originally proposed for optical cameras or Kinect measurements. Our results confirm that efficient person re-identification can be achieved using a single Lidar sensor, even though it produces sparse point clouds.
In this paper we propose a general approach for the registration of point clouds obtained by various mobile laser scanning technologies. Our method is able to robustly match measurements with significantly different density characteristics, including the sparse and inhomogeneous instant 3D (I3D) data taken by self-driving cars, and the dense and regular point clouds captured by mobile mapping systems (MMS) for virtual city generation. The core steps of the algorithm are robust scan segmentation, abstract street object extraction, object-based coarse transformation estimation in the Hough accumulator space, and point-level registration refinement. Experimental results are provided using three different sensors: Velodyne HDL64 and VLP16 I3D scanners, and a Riegl VMX450 MMS. Application examples are shown regarding self-localization of autonomous cars through cross-modal I3D and MMS frame registration, IMU-less SLAM, and change detection based on I3D data.
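To illustrate the coarse transformation estimation step, the following is a minimal sketch of object-based alignment via voting in a Hough accumulator. It is not the authors' implementation: it assumes a simplified 2D rigid transform (rotation plus translation) between two sets of extracted object centroids, with each centroid pair voting for the transform parameters that would align them; the peak of the accumulator gives the coarse estimate.

```python
import numpy as np

def hough_coarse_align(src, dst, trans_res=1.0, angle_res=np.deg2rad(5.0)):
    """Coarse 2D rigid alignment by voting in a quantized (angle, tx, ty)
    Hough accumulator. src and dst are (N, 2) and (M, 2) arrays of
    abstract object centroids from the two scans."""
    votes = {}
    for theta in np.arange(0.0, 2 * np.pi, angle_res):
        c, s = np.cos(theta), np.sin(theta)
        rotated = src @ np.array([[c, -s], [s, c]]).T
        # every (src, dst) centroid pair votes for the translation
        # that would map the rotated src point onto the dst point
        for p in rotated:
            for q in dst:
                t = q - p
                key = (round(theta / angle_res),
                       round(t[0] / trans_res),
                       round(t[1] / trans_res))
                votes[key] = votes.get(key, 0) + 1
    # the accumulator peak is the coarse transform estimate
    ka, kx, ky = max(votes, key=votes.get)
    return ka * angle_res, np.array([kx, ky], dtype=float) * trans_res
```

A point-level refinement stage (e.g. ICP) would then start from this coarse estimate, as the abstract's pipeline suggests.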
In this paper we introduce a new approach to gait analysis based on data streams of a Rotating Multi-Beam (RMB) Lidar sensor. The gait descriptors for training and recognition are observed and extracted in realistic outdoor surveillance scenarios, where multiple pedestrians walk concurrently in the field of interest, and occlusions or background noise may affect the observation. The proposed algorithms are embedded into an integrated 4D vision and visualization system. Gait features are exploited in two different components of the workflow. First, in the tracking step, the collected characteristic gait parameters serve as biometric descriptors supporting the re-identification of people who temporarily leave the field of interest and re-appear later. Second, in the visualization module, we display moving avatar models which follow in real time the trajectories of the observed pedestrians with synchronized leg movements. The proposed approach is experimentally demonstrated in eight multi-target scenes.
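The re-identification step described above can be sketched as nearest-neighbor matching of gait descriptors against a gallery of previously observed pedestrians. This is an illustrative simplification, not the paper's method: the descriptor contents, the Euclidean metric, and the `max_dist` rejection threshold are all assumptions.

```python
import numpy as np

def reidentify(query, gallery, max_dist=2.0):
    """Match a query gait descriptor against a gallery of known pedestrians.

    query:   1-D feature vector (e.g. step length, cadence, leg swing stats)
    gallery: dict mapping person id -> stored descriptor vector
    Returns the best-matching id, or None if no stored descriptor lies
    within max_dist (the person is treated as a new track).
    """
    best_id, best_d = None, np.inf
    for pid, desc in gallery.items():
        d = np.linalg.norm(query - desc)
        if d < best_d:
            best_id, best_d = pid, d
    return best_id if best_d <= max_dist else None
```

In a tracking context, a returned id re-attaches a re-appearing pedestrian to their earlier trajectory, while a `None` result starts a fresh track.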