One of the key roles of an intelligent transport system is to provide a safe driving environment. The ability to quickly and accurately recognize the surrounding environment and neighboring vehicles is essential for developing the safety-related services that support this goal. Currently, research in this field is based on the global positioning system (GPS), light detection and ranging, cameras, and ranging sensors. However, these sensors cannot recognize vehicles over a wide area because of their limited sensing range. Moreover, GPS-based studies are highly affected by the surrounding environment owing to the nature of GPS, and suffer from relatively high error rates and low location-update rates. Using GPS-based location information for safety-related services can therefore have negative consequences. In this paper, we propose a new positioning system, called the cooperative neighboring vehicle positioning system (CNVPS). The CNVPS rapidly identifies the locations of neighboring vehicles based on information obtained through various sensors and shares this information with a wide range of neighboring vehicles over vehicle-to-vehicle communications. The CNVPS also compensates the position of a neighboring vehicle by applying maximum likelihood estimation to the duplicated position observations made by the other neighboring vehicles. The simulation results show that the CNVPS achieves approximately a 370% improvement in location error over GPS, assuming that the root-mean-squared error of GPS is 15 m. In addition, the proposed system has a location refresh cycle ten times faster than the existing GPS-based system.

INDEX TERMS Cooperative neighboring vehicle positioning system (CNVPS), intelligent systems, global positioning system.
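As an illustration of the position-compensation step described above: when the duplicated observations of one vehicle's position are corrupted by independent Gaussian noise, the maximum likelihood estimate is the inverse-variance weighted mean of the observations. The sketch below is our own minimal illustration of that principle, not the paper's implementation; the function name `fuse_positions` and its arguments are assumed for the example.

```python
import numpy as np

def fuse_positions(observations, variances):
    """MLE fusion of duplicated 2-D position observations of one vehicle.

    Under independent Gaussian observation noise, the maximum likelihood
    estimate is the inverse-variance weighted mean, and the variance of
    the fused estimate is the reciprocal of the summed weights.
    """
    obs = np.asarray(observations, dtype=float)   # shape (n, 2): [x, y] per observer
    var = np.asarray(variances, dtype=float)      # shape (n,): noise variance per observer
    weights = 1.0 / var
    fused = (obs * weights[:, None]).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()
    return fused, fused_var

# Two equally noisy observers: the fused position is their midpoint,
# with half the variance of either single observation.
fused, fused_var = fuse_positions([[0.0, 0.0], [2.0, 2.0]], [1.0, 1.0])
```

A more accurate observer (smaller variance) pulls the fused estimate toward its own observation, which is how duplicated observations from well-positioned neighbors can compensate a poorly observed vehicle's position.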
In this paper, we propose a method for automatically removing hairs from skin lesion images. To achieve this, we employ an edge-detection technique based on edge-tangent flow. To detect only hair-like structures, rather than contour boundaries, we propose a novel, specialized hair-detection method. Hairy regions are detected regardless of the individual characteristics of the hairs because our method targets coherent thin lines of consistent width. We then restore the hairy regions detected by the proposed method using a texture synthesis method. Our method restores the regions occluded by hairs with very few noticeable artefacts, because it searches the source image for the best-matching pixels and restores the occluded areas using pixels that actually exist in that image.
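The paper's detector is built on edge-tangent flow and its restoration on exemplar-based texture synthesis; as a much simpler illustrative stand-in for the same "detect thin dark lines, then fill them from surrounding content" idea, the toy below uses a grey-scale morphological black-hat filter (closing minus image) to flag thin dark structures and fills the flagged pixels from the closed (background) image. All names are ours, and this handles only one line orientation; the actual method handles arbitrary orientations and richer textures.

```python
import numpy as np

def _horizontal_closing(gray, width=3):
    """Grey-scale closing with a flat 1-by-width structuring element:
    a running max (dilation) followed by a running min (erosion)."""
    pad = width // 2
    padded = np.pad(gray, ((0, 0), (pad, pad)), mode="edge")
    dilated = np.lib.stride_tricks.sliding_window_view(padded, width, axis=1).max(axis=-1)
    padded = np.pad(dilated, ((0, 0), (pad, pad)), mode="edge")
    return np.lib.stride_tricks.sliding_window_view(padded, width, axis=1).min(axis=-1)

def remove_thin_dark_lines(gray, threshold=0.5, width=3):
    """Toy hair removal: black-hat detection + background fill.

    The black-hat response (closing - image) is large only at dark
    structures narrower than the structuring element, i.e. thin lines.
    Filling detected pixels from the closed image is a crude stand-in
    for the paper's exemplar-based texture synthesis.
    """
    closed = _horizontal_closing(gray, width)
    blackhat = closed - gray
    mask = blackhat > threshold
    restored = np.where(mask, closed, gray)
    return mask, restored

# A bright field crossed by a one-pixel-wide dark vertical line:
gray = np.ones((8, 8))
gray[:, 4] = 0.0
mask, restored = remove_thin_dark_lines(gray)
```

The key property mirrored here is width selectivity: a dark region wider than the structuring element survives the closing and is not flagged, just as the paper's method targets only lines of consistent hair-like width rather than lesion boundaries.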
This paper describes the development and evaluation of a color estimation method that creates more natural lighting conditions for outdoor augmented reality (AR) technology. In outdoor AR systems, the real outdoor light source (i.e., the sun) illuminates real objects, while a virtual light source illuminates the augmented virtual objects. These two light sources induce different colors, so the real object and the virtual object are visualized as a mixture of the colors produced by the two sources, and a visible color difference arises between them. This visible color difference vitiates the sense of immersion felt by the AR user. To overcome this problem, we define each RGB color channel value by analyzing the color generated by the outdoor light source, and we apply the defined values to the virtual light source to reduce the visible color differential between the two light sources, thereby reducing the visualized incompatibility between the virtual object and the real background. In addition, by using virtual objects to express weather events in combination with the color estimation method, we demonstrate that the proposed method can adequately adapt to the weather changes that affect outdoor AR. The proposed method has the potential to improve the visual coincidence between the real outdoor background and virtual objects.
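As an illustration of the general idea of deriving per-channel values from the outdoor light and applying them to the virtual light source, the sketch below uses a gray-world-style estimate (the mean channel response of the real background approximates the light color). This is our own simplified stand-in, not the paper's exact procedure, and both function names are assumed for the example.

```python
import numpy as np

def estimate_light_color(background_rgb, reference_white=(1.0, 1.0, 1.0)):
    """Estimate per-channel outdoor light color from the real background.

    Gray-world-style assumption (our illustration): the average channel
    response of the captured scene, relative to a reference white,
    encodes the color of the dominant light source.
    """
    mean = np.asarray(background_rgb, dtype=float).reshape(-1, 3).mean(axis=0)
    return mean / np.asarray(reference_white, dtype=float)

def apply_to_virtual_light(base_color, light_estimate):
    """Tint the virtual light so virtual objects are shaded consistently
    with the real outdoor illumination."""
    return np.clip(np.asarray(base_color, dtype=float) * light_estimate, 0.0, 1.0)

# A warm (reddish) outdoor scene tints a white virtual light warm as well:
background = np.full((4, 4, 3), [0.8, 0.6, 0.4])
virtual_light = apply_to_virtual_light([1.0, 1.0, 1.0], estimate_light_color(background))
```

Because the estimate is recomputed from the live background, a change in weather (e.g., an overcast, bluish sky) automatically shifts the virtual light the same way, which is the adaptation property the abstract describes.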