Image mosaicking applications require both geometric and photometric registration between the images that compose the mosaic. This paper proposes a probabilistic color correction algorithm for correcting the photometric disparities. First, the image to be color corrected is segmented into several regions using mean shift. Then, connected regions are extracted using a region fusion algorithm. The local joint image histogram of each region is modeled as a collection of truncated Gaussians using a maximum likelihood estimation procedure. Local color palette mapping functions are then computed from these sets of Gaussians, and the color correction is performed by applying those functions to all the regions of the image. An extensive comparison with ten other state-of-the-art color correction algorithms is presented, using two different image pair data sets. Results show that the proposed approach obtains the best average scores on both data sets and evaluation metrics and is also the most robust to failures.
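The per-region palette mapping described above can be illustrated with a minimal sketch. The function below is a simplified stand-in, not the paper's method: it fits a single non-truncated Gaussian per channel (mean and standard deviation) instead of the collection of truncated Gaussians estimated by maximum likelihood, and maps source colors so their per-channel statistics match the reference region.

```python
import numpy as np

def gaussian_color_map(src_region, ref_region):
    """Map src_region colors so each channel's mean/std matches ref_region.

    Simplified stand-in for the paper's truncated-Gaussian palette
    mapping: one non-truncated Gaussian per channel, moment-matched.
    Regions are (H, W, 3) arrays with values in [0, 255].
    """
    s = src_region.reshape(-1, 3).astype(float)
    r = ref_region.reshape(-1, 3).astype(float)
    mu_s, sd_s = s.mean(axis=0), s.std(axis=0) + 1e-8
    mu_r, sd_r = r.mean(axis=0), r.std(axis=0) + 1e-8
    # Affine per-channel mapping: whiten with source stats,
    # re-color with reference stats.
    mapped = (s - mu_s) / sd_s * sd_r + mu_r
    return np.clip(mapped, 0, 255).reshape(src_region.shape)
```

In the full algorithm, one such mapping function would be estimated per segmented region and applied region by region; the truncated-Gaussian model handles colors clipped at the ends of the dynamic range, which the simple moment matching above ignores.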
The current paper proposes a new parametric local color correction technique. First, several color transfer functions are computed from the output of the mean shift color segmentation algorithm. Second, color influence maps are calculated. Finally, the contributions of all color transfer functions are merged using the weights from the color influence maps. The proposed approach is compared with both global and local color correction approaches. Results show that our method outperforms the technique ranked first in a recent performance evaluation on this topic. Moreover, the proposed approach is computed in about one tenth of the time.

2. Related work

Several color correction methods have been proposed in the literature. They can be divided into model-based parametric approaches [11][12][13] and model-less non-parametric approaches [15]. Usually, parametric approaches outperform their non-parametric counterparts [16]. Parametric approaches are based on [14], where a simple
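The final merging step of the technique above, combining per-region corrections under influence-map weights, can be sketched as follows. This is a hedged illustration under assumed conventions (one pre-corrected image layer and one non-negative influence map per segmented region), not the paper's implementation.

```python
import numpy as np

def blend_corrections(corrected_layers, influence_maps):
    """Blend K color-corrected image layers with K influence maps.

    corrected_layers: list of K (H, W, 3) float arrays, each the image
        corrected by one region's color transfer function.
    influence_maps: list of K (H, W) non-negative weight arrays.
    Returns the per-pixel weighted average of the layers.
    """
    w = np.stack(influence_maps).astype(float)            # (K, H, W)
    w = w / (w.sum(axis=0, keepdims=True) + 1e-8)         # normalize per pixel
    layers = np.stack(corrected_layers).astype(float)     # (K, H, W, 3)
    return (w[..., None] * layers).sum(axis=0)            # (H, W, 3)
```

Per-pixel normalization of the weights ensures the blended result stays within the range spanned by the individual corrections, which avoids seams at region boundaries.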
This article presents a 3D reconstruction technique for real-world environments based on a traditional 2D laser range finder modified to implement a 3D laser scanner. The article describes the mechanical and control issues addressed to physically build the 3D sensor used to acquire the data. It also presents the techniques used to process and merge range and intensity data to create textured polygonal models, and illustrates the potential of such a unit. The result is a promising system for 3D modeling of real-world scenes at a price 10 to 20 times lower than current commercial 3D laser scanners. Such a system can simplify the measurement of existing buildings and easily produce 3D models and orthophotos of existing structures with minimum effort and at an affordable price.
Repetitive industrial tasks can be easily performed by traditional robotic systems. However, many other tasks require cognitive knowledge that only humans can provide. Human-Robot Collaboration (HRC) emerges as an ideal concept of co-working between a human operator and a robot, representing one of the most significant subjects for human-life improvement. The ultimate goal is to achieve physical interaction, where handing over an object plays a crucial role in effective task accomplishment. Considerable research has been conducted in this particular field in recent years, and several solutions have already been proposed. Nonetheless, some particular issues regarding Human-Robot Collaboration remain open, leaving room for truly important research advances. This paper provides a literature overview, defining the HRC concept, enumerating the distinct human-robot communication channels, and discussing the physical interaction that this collaboration entails. Moreover, future challenges for a natural and intuitive collaboration are exposed: the machine must behave like a human, especially in the pre-grasping/grasping phases, and the handover procedure should be fluent and bidirectional. These are the focus of near-future investigation aiming to shed light on the complex combination of predictive and reactive control mechanisms promoting coordination and understanding. Following recent progress in artificial intelligence, learning exploration stands as the key element to allow the generation of coordinated actions and their shaping by experience.
This paper presents a technique to estimate in real time the egomotion of a vehicle based solely on laser range data. This technique calculates the discrepancy between closely spaced two-dimensional laser scans due to the vehicle motion using scan matching techniques. The result of the scan alignment is converted into a nonlinear motion measurement and fed into a nonholonomic extended Kalman filter model. This model better approximates the real motion of the vehicle when compared to more simplistic models, thus improving performance and immunity to outliers. The motion estimate is intended to be used for egomotion compensation in a target-tracking algorithm for situation awareness applications. In this paper, several recent scan matching algorithms were evaluated for their accuracy and computational speed: metric-based iterative closest point (MbICP), point-to-line ICP (PlICP), and polar scan matching. The proposed approach runs in real time and provides an accurate estimate of the current robot motion. The MbICP algorithm proved to be the most advantageous scan matching algorithm, although its results are comparable to those of PlICP. The motion estimation algorithm is validated through experimental testing in real-world conditions.
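The core operation shared by the scan matching algorithms above (MbICP, PlICP, polar scan matching) is estimating the rigid transform between two 2D scans. The sketch below shows one least-squares alignment step given known point correspondences (the Kabsch/Procrustes solution); it is a simplified illustration, omitting the nearest-neighbor association, metric weighting, and point-to-line error terms that distinguish the evaluated algorithms.

```python
import numpy as np

def rigid_align_2d(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 2) arrays of corresponding 2D scan points.
    Returns rotation R (2x2) and translation t (2,) minimizing
    sum ||R @ p + t - q||^2 (one correspondence-given ICP step).
    """
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P - mp).T @ (Q - mq)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections in the SVD solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mq - R @ mp
    return R, t
```

In a full scan matcher this step is iterated: correspondences are re-estimated after each alignment until convergence, and the recovered (R, t) between consecutive scans is what would feed the extended Kalman filter as a motion measurement.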