With the ubiquitous deployment of wireless systems and the pervasive availability of smart devices, indoor localization is empowering numerous location-based services. Given an established radio map, WiFi fingerprinting has become one of the most accessible and practical approaches to localizing a mobile user. However, most fingerprint-based localization algorithms are computation-intensive in both the offline training phase and the online localization phase. In this paper, we propose CNNLoc, a Convolutional Neural Network (CNN)-based indoor localization framework that uses WiFi fingerprints for multi-building and multi-floor localization. We propose a novel classification model that combines a Stacked Auto-Encoder (SAE) with a one-dimensional CNN: the SAE extracts key features from sparse Received Signal Strength (RSS) data, and the CNN is trained to achieve high success rates in the localization phase. We evaluate CNNLoc against state-of-the-art approaches on the UJIIndoorLoc and Tampere datasets. CNNLoc excels in both building-level and floor-level classification, outperforming existing solutions with a 100% building-level success rate and an average floor-level success rate above 95%.
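Before any such model sees the data, the sparse RSS vectors must be cleaned and scaled. The sketch below shows one common preprocessing choice for WiFi fingerprints (in the UJIIndoorLoc data, a reading of 100 marks an access point that was not detected); the floor value and scaling are illustrative assumptions, not necessarily the paper's exact pipeline.

```python
import numpy as np

def preprocess_rss(rss, no_signal=100, floor_dbm=-110.0):
    """Convert raw RSS readings (dBm) to normalized features in [0, 1].

    Undetected APs (encoded as `no_signal`) are mapped to a weakest-signal
    floor, then all readings are scaled so stronger signals -> larger values.
    """
    rss = np.asarray(rss, dtype=float)
    rss = np.where(rss == no_signal, floor_dbm, rss)  # undetected -> weakest
    return np.clip((rss - floor_dbm) / (0.0 - floor_dbm), 0.0, 1.0)

sample = [-30, -75, 100, -110]      # dBm readings; 100 = AP not heard
features = preprocess_rss(sample)   # e.g. the undetected AP maps to 0.0
```

Normalizing into a fixed range like this keeps the sparse zero entries meaningful for an auto-encoder, which otherwise struggles with the sentinel value 100 sitting far outside the real dBm scale.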
Image resolution is crucial to visual measurement accuracy. However, increasing the resolution of the acquisition device is prohibitively expensive, and image resolution inevitably decreases when photographing objects at a distance, a situation that is particularly common in pose measurement for the assembly of large hole-shaft structures. In this study, we propose a deep learning-based method for super-resolution of large hole-shaft images, comprising a super-resolution dataset of hole-shaft images and a new deep learning super-resolution network whose core structure enhances the perception of edge information in images and improves efficiency while also improving super-resolution quality. A series of experiments demonstrates that the method is accurate and efficient and can be applied to the automatic assembly of large hole-shaft structures.
In this paper, we present an automatic system for detecting bloody images on the internet. A quick blood model in the HSI color space is employed to identify blood regions, from which the fractal dimension and the information entropy are extracted. Finally, all features are fed to an SVM classifier to determine whether the image is bloody. Our experiments on real-world web image data indicate that the system detects bloody images with both high recall and high precision.
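The first stage of such a pipeline converts RGB pixels to the HSI (hue-saturation-intensity) space and flags reddish pixels as candidate blood regions. The sketch below uses the standard RGB-to-HSI formulas; the threshold values in `blood_mask` are illustrative placeholders, not the paper's fitted blood model.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to HSI channels.

    Hue is normalized to [0, 1]; small epsilons guard divisions by zero.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)
    return h, s, i

def blood_mask(rgb, h_max=0.05, s_min=0.3, i_min=0.15, i_max=0.8):
    """Flag reddish, saturated, mid-intensity pixels as candidate blood.

    Thresholds are hypothetical defaults chosen for illustration only.
    """
    h, s, i = rgb_to_hsi(rgb)
    reddish = (h <= h_max) | (h >= 1.0 - h_max)  # hue wraps around red at 0
    return reddish & (s >= s_min) & (i >= i_min) & (i <= i_max)

img = np.array([[[0.8, 0.05, 0.05],   # saturated red pixel
                 [0.2, 0.6, 0.2]]])   # green pixel
mask = blood_mask(img)
```

Texture features such as fractal dimension and information entropy would then be computed only over the pixels where the mask is true, keeping the later SVM stage cheap.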