Mapping the environment of a vehicle and localizing the vehicle within that unknown environment are complex problems. Although many approaches based on various types of sensory input and computational concepts have been successfully used for ground robot localization, localizing an unmanned aerial vehicle (UAV) remains difficult because of variations in altitude and motion dynamics. This paper proposes a robust and efficient indoor mapping and localization solution for a UAV equipped with low-cost Light Detection and Ranging (LiDAR) and Inertial Measurement Unit (IMU) sensors. Taking advantage of the typical geometric structure of indoor environments, the planar position of the UAV can be efficiently calculated with a point-to-point scan matching algorithm using measurements from a horizontally scanning primary LiDAR. The altitude of the UAV with respect to the floor can be estimated accurately using a vertically scanning secondary LiDAR mounted orthogonally to the primary LiDAR. A Kalman filter then derives the 3D position by fusing the primary and secondary LiDAR data. Additionally, this work presents a novel method for the real-time classification of a pipeline in an indoor map by integrating the proposed navigation approach. Pipeline classification is based on pipe radius estimation within a region of interest (ROI) and on the typical angle. The ROI is selected by finding the nearest neighbors of a selected seed point in the pipeline point cloud, and the typical angle is estimated with a directional histogram. Experimental results demonstrate the feasibility of the proposed navigation system and its integration with a real-time application in industrial plant engineering.
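As a minimal sketch of the fusion step, the 3D position can be assembled from the scan-matched planar estimate and the secondary-LiDAR altitude with a per-axis Kalman measurement update. The function names and noise variances below are illustrative assumptions, not the paper's implementation.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.
    x: prior estimate, p: prior variance,
    z: measurement, r: measurement noise variance."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # corrected estimate
    p_new = (1.0 - k) * p    # reduced uncertainty
    return x_new, p_new

def fuse_pose(planar_xy, altitude_z, state, var, r_xy=0.02, r_z=0.05):
    """Fuse the planar (x, y) position from scan matching with the
    altitude z from the secondary LiDAR into one 3D state estimate.
    r_xy and r_z are assumed sensor noise variances."""
    meas = (planar_xy[0], planar_xy[1], altitude_z)
    noise = (r_xy, r_xy, r_z)
    new_state, new_var = [], []
    for x, p, z, r in zip(state, var, meas, noise):
        xn, pn = kalman_update(x, p, z, r)
        new_state.append(xn)
        new_var.append(pn)
    return new_state, new_var
```

In a real filter the prediction step would inflate the variances with IMU-driven process noise between updates; this sketch shows only the correction.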
Acquiring three-dimensional point cloud data of a scene with a laser scanner and aligning that point cloud within the real-time video view of a camera is a recent concept and an efficient method for constructing, monitoring, and retrofitting complex engineering models in heavy industrial plants. This article presents a novel prototype framework for virtual retrofitting applications. The workflow comprises an efficient 4-in-1 alignment: pre-processed three-dimensional point cloud data are first aligned with a partial point cloud from LiDAR, and the pre-processed point cloud is then aligned within the video scene using a frame-by-frame registration method. Finally, the proposed approach supports pre-retrofitting applications: pre-generated three-dimensional computer-aided design models are virtually retrofitted with the help of the synchronized point cloud, and the video scene is efficiently visualized using a wearable virtual reality device. The prototype method is demonstrated in a real-world setting using the partial point cloud from LiDAR, pre-processed point cloud data, and video from a two-dimensional camera.
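At its core, frame-by-frame alignment of a point cloud within a video scene reduces to projecting 3D points into the camera image. A minimal sketch under a pinhole-camera assumption follows; the intrinsics `fx, fy, cx, cy` and the helper names are hypothetical, not the framework's actual API.

```python
def project_point(pt, fx, fy, cx, cy):
    """Project a 3D point (camera coordinates, z pointing forward)
    onto the image plane with a pinhole model; returns pixel (u, v)
    or None if the point lies behind the camera."""
    x, y, z = pt
    if z <= 0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

def overlay_cloud(points, fx, fy, cx, cy, width, height):
    """Keep only the projections that land inside the video frame,
    yielding the point-cloud overlay for one frame."""
    pixels = []
    for pt in points:
        uv = project_point(pt, fx, fy, cx, cy)
        if uv and 0 <= uv[0] < width and 0 <= uv[1] < height:
            pixels.append(uv)
    return pixels
```

A full pipeline would first transform the points from world to camera coordinates with the per-frame extrinsics before projecting.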
Today, advances in sensing technology enable the use of multiple sensors to track human motion and activity precisely. Tracking human motion has various applications, such as fitness training, healthcare, rehabilitation, human-computer interaction, virtual reality, and activity recognition. The fusion of multiple sensors therefore creates new opportunities to develop and improve existing systems. This paper proposes a pose-tracking system that fuses multiple three-dimensional (3D) light detection and ranging (lidar) and inertial measurement unit (IMU) sensors. The initial step estimates the human skeletal parameters proportional to the target user's height by extracting the point cloud from the lidars. Next, IMUs are used to capture the orientation of each skeleton segment and estimate the respective joint positions. In the final stage, the displacement drift in the position is corrected by fusing the data from both sensors in real time. The installation setup is relatively effortless, flexible with respect to sensor locations, and delivers results comparable to state-of-the-art pose-tracking systems. We evaluated the proposed system regarding its accuracy in the user's height estimation, full-body joint position estimation, and reconstruction of the 3D avatar. We used a publicly available dataset for the experimental evaluation wherever possible. The results reveal that the accuracy of height and position estimation is well within an acceptable range of ±3–5 cm. The reconstruction of motion based on the publicly available dataset and our own data is precise and realistic.
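The two estimation steps above can be sketched briefly: scaling skeleton segments proportionally to the lidar-estimated height, then placing a joint from its segment's IMU-derived orientation. The segment ratios and the planar forward-kinematics helper are illustrative assumptions, not the paper's actual anthropometric model.

```python
import math

# Illustrative segment-length ratios relative to total body height;
# the exact anthropometric table is an assumption.
SEGMENT_RATIOS = {
    "upper_arm": 0.186,
    "forearm": 0.146,
    "thigh": 0.245,
    "shank": 0.246,
}

def skeletal_parameters(height_m):
    """Scale each skeleton segment proportionally to the user's
    height estimated from the lidar point cloud."""
    return {name: r * height_m for name, r in SEGMENT_RATIOS.items()}

def joint_position(origin, length, heading_rad):
    """Place the distal joint of one segment given the proximal joint,
    the segment length, and its IMU-derived heading (planar sketch;
    a real system would use full 3D orientations)."""
    x, y = origin
    return (x + length * math.cos(heading_rad),
            y + length * math.sin(heading_rad))
```

Chaining `joint_position` along the kinematic tree, segment by segment, yields the full set of joint positions from the per-segment orientations.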
Aim: The objective of this study was to investigate the effect of multicarbohydrase supplementation on the performance of broilers fed a low-energy diet. Materials and Methods: A total of 75 day-old chicks were selected and randomly divided into three treatment groups (T1, T2, and T3); each group contained 25 chicks distributed across five replicates of five chicks each. The T1 group (positive control) was offered a control ration formulated as per Bureau of Indian Standards recommendations. In the T2 group (negative control) ration, metabolizable energy (ME) was reduced by 100 kcal/kg of diet. The T3 group ration was the same as that of T2 except that it was supplemented with multicarbohydrases (xylanase at 50 g/ton + mannanase at 50 g/ton + amylase at 40 g/ton). Feed intake and body weight of all experimental birds were recorded weekly. A metabolic trial was conducted for 3 days at the end of the experiment to determine nutrient retention. Results: Significant improvement (p<0.01) was observed in total weight gain, feed conversion efficiency, and performance index in broilers in the supplemented T3 group compared with the T1 and T2 groups. Retention of crude protein and ether extract was significantly increased (p<0.05) in the multicarbohydrase-supplemented T3 group compared with the other groups. Retention of dry matter, crude fiber, and nitrogen-free extract was comparable across all three groups. Significantly higher dressed weight, eviscerated weight, and drawn weight (% of live body weight) were observed in the multicarbohydrase-supplemented T3 group, whereas these were comparable between the T1 and T2 groups. Conclusion: Supplementation of multicarbohydrases (xylanase at 50 g/ton + mannanase at 50 g/ton + amylase at 40 g/ton) in a low-energy diet improved the overall performance of broilers.
Augmented reality (AR) systems are becoming next-generation technologies for intelligently visualizing the real world in 3D. This research proposes a sensor-fusion-based pipeline inspection and retrofitting AR system, which can be used in pipeline inspection and retrofitting processes in industrial plants. The proposed methodology utilizes prebuilt 3D point cloud data of the environment, a real-time Light Detection and Ranging (LiDAR) scan, and an image sequence from a camera. First, we estimate the current pose of the sensor platform by matching the LiDAR scan against the prebuilt point cloud data; from this pose, the prebuilt point cloud is augmented onto the camera image using the LiDAR and camera calibration parameters. Next, based on the user's selection in the augmented view, the geometric parameters of a pipe are estimated. In addition to pipe parameter estimation, retrofitting in the existing plant using the augmented scene is illustrated. Finally, the step-by-step procedure of the proposed method was experimentally verified at a water treatment plant. Results show that integrating AR with building information modelling (BIM) greatly benefits the post-occupancy evaluation process, as well as the pre-retrofitting and renovation process, for identifying, evaluating, and updating the geometric specifications of a construction environment.
Understanding and differentiating subtle human motion over time as sequential data is challenging. We propose Motion-sphere, a novel trajectory-based visualization technique that represents human motion on a unit sphere. Motion-sphere adopts a two-fold approach to human motion visualization: a three-dimensional (3D) avatar to reconstruct the target motion, and an interactive 3D unit sphere that enables users to perceive subtle human motion as swing trajectories and color-coded miniature 3D models for twist. It also allows the simultaneous visual comparison of two motions. The technique is therefore applicable to a wide range of applications, including rehabilitation, choreography, and physical fitness training. The current work validates the effectiveness of the proposed approach with a user study comparing it to existing motion visualization methods. Our findings show that Motion-sphere is informative in quantifying swing and twist movements. Motion-sphere is validated in three ways: validation of motion reconstruction on the avatar; accuracy of swing, twist, and speed visualization; and the usability and learnability of Motion-sphere. Multiple ranges of motion from an online open database were selectively chosen so that all joint segments are covered. On all fronts, Motion-sphere fares well. Visualization on the 3D unit sphere and the reconstructed 3D avatar makes it intuitive to understand the nature of human motion.
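The swing and twist components visualized on the sphere correspond to the standard swing-twist decomposition of a joint quaternion: the twist is the rotation about the segment's own axis, and the swing is the remaining rotation. A self-contained sketch (Hamilton convention, `(w, x, y, z)` ordering assumed; not the paper's code) is:

```python
import math

def quat_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def swing_twist(q, axis):
    """Decompose unit quaternion q into swing * twist, where twist
    is the rotation about the unit 3-vector `axis` and swing is the
    remaining rotation perpendicular to it."""
    w, x, y, z = q
    ax, ay, az = axis
    # Project the vector part of q onto the twist axis.
    dot = x * ax + y * ay + z * az
    tw = (w, dot * ax, dot * ay, dot * az)
    norm = math.sqrt(sum(c * c for c in tw))
    if norm < 1e-12:                    # 180-degree pure swing
        twist = (1.0, 0.0, 0.0, 0.0)
    else:
        twist = tuple(c / norm for c in tw)
    swing = quat_mul(q, quat_conj(twist))
    return swing, twist
```

Plotting the swing trajectory over time on the unit sphere, with the twist encoded as color, gives exactly the two visual channels the technique describes.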
Human pose estimation and tracking in real time from multi-sensor systems is essential for many applications. Combining multiple heterogeneous sensors increases the opportunities to improve human motion tracking. With only a single sensor type, e.g., inertial sensors, human pose estimation accuracy is degraded by sensor drift over longer periods. This paper proposes a human motion tracking system using lidar and inertial sensors to estimate 3D human pose in real time. Human motion tracking includes human detection and the estimation of height, skeletal parameters, position, and orientation by fusing lidar and inertial sensor data. Finally, the estimated data are reconstructed on a virtual 3D avatar. The proposed human pose tracking system was developed using open-source platform APIs. Experimental results verified the real-time accuracy of the proposed human position tracking and showed good agreement with current multi-sensor systems.
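One common way to correct inertial drift with an absolute lidar position is a complementary-filter blend: trust the smooth IMU estimate short-term, and pull it toward the drift-free lidar position long-term. The sketch below is an illustrative assumption, not the system's published fusion rule, and the blending weight `alpha` is a made-up tuning value.

```python
def correct_drift(imu_pos, lidar_pos, alpha=0.98):
    """Complementary-filter style drift correction on a 3D position.
    alpha close to 1 keeps the IMU estimate responsive; the small
    (1 - alpha) term slowly anchors it to the lidar position."""
    return tuple(alpha * i + (1.0 - alpha) * l
                 for i, l in zip(imu_pos, lidar_pos))
```

Applied at every lidar update, any accumulated IMU offset decays geometrically toward zero instead of growing without bound.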