Commercial multispectral satellite datasets, such as WorldView-2 and GeoEye-1 images, are often delivered with a high-spatial-resolution panchromatic image (PAN) as well as a corresponding lower-resolution multispectral image (MSI). Certain fine features are visible only on the PAN and are difficult to discern on the MSI. To fully utilize the high spatial resolution of the PAN and the rich spectral information of the MSI, a pan-sharpening process can be carried out. However, difficulties arise in maintaining radiometric accuracy, particularly for applications other than visual assessment. We propose a fast pan-sharpening process based on nearest-neighbor diffusion that aims to enhance salient spatial features while preserving spectral fidelity. Our approach assumes that each pixel spectrum in the pan-sharpened image is a weighted linear mixture of the spectra of its immediate neighboring superpixels; it treats each spectrum as the smallest element of operation, unlike most existing algorithms, which process each band separately. Our approach is shown to preserve salient spatial and spectral features. We expect this algorithm to facilitate fine-feature extraction from satellite images.
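The weighted-linear-mixture assumption at the core of the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian weight form, the function name, and the `sigma` parameter are assumptions made for the sketch.

```python
import numpy as np

def sharpen_pixel(neighbor_spectra, pan_diffs, sigma=1.0):
    """Form one pan-sharpened spectrum as a weighted linear mixture of the
    spectra of neighboring superpixels (hypothetical helper).

    neighbor_spectra : (k, bands) spectra of the k neighboring superpixels
    pan_diffs        : (k,) PAN intensity differences between the target
                       pixel and each neighboring superpixel
    sigma            : assumed diffusion bandwidth
    """
    # Diffusion-style weights: decay with PAN dissimilarity, normalized to 1
    w = np.exp(-(pan_diffs ** 2) / (2.0 * sigma ** 2))
    w /= w.sum()
    # The whole spectrum is the element of operation, not individual bands
    return w @ neighbor_spectra

# Toy example: 4 neighboring superpixels, 3 spectral bands
spectra = np.array([[0.2, 0.4, 0.6],
                    [0.3, 0.5, 0.7],
                    [0.1, 0.2, 0.3],
                    [0.6, 0.6, 0.6]])
diffs = np.array([0.05, 0.10, 0.80, 0.90])
s = sharpen_pixel(spectra, diffs)
```

Because the weights sum to one, the sharpened spectrum is a convex combination of the neighbor spectra, which is one way the mixture assumption helps preserve spectral fidelity.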
Spectral unmixing is a common task in hyperspectral data analysis. Unmixing the data requires three key steps: estimating the number of endmembers (EMs), identifying the EMs, and then unmixing the data. Several statistical and geometrical approaches have been developed for each step of the unmixing process. However, many of these methods rely on the full image to estimate the number of EMs and to extract them from the background data. In this paper, spectral unmixing is accomplished using a spatially adaptive approach. Linear unmixing is performed per pixel with EMs identified at the local level, while global abundance maps are created by clustering the locally determined EMs into common groups. Results show that the per-pixel unmixing residual error on real data, estimated with the spatially adaptive methodology, is reduced compared with a global-scale EM estimation and linear unmixing methodology. The component algorithms of the new spatially adaptive approach, which complete the three key unmixing steps, can be interchanged while maintaining spatial information, making the methodology modular. A final advantage of the spatially adaptive spectral unmixing methodology is its user-defined spatial scale size.
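The per-pixel linear unmixing step can be illustrated with a toy example. This is a sketch under assumed data: the endmember matrix and mixing fractions are invented, and the unconstrained least-squares solve stands in for the full method, which in practice typically adds nonnegativity and sum-to-one constraints on the abundances.

```python
import numpy as np

# Hypothetical local endmember matrix E (bands x EMs) and a pixel spectrum x
E = np.array([[0.9, 0.1, 0.3],
              [0.7, 0.2, 0.5],
              [0.5, 0.4, 0.6],
              [0.3, 0.8, 0.4]])
x = 0.6 * E[:, 0] + 0.4 * E[:, 2]   # pixel mixed from EMs 0 and 2

# Linear unmixing: least-squares abundance estimate for this one pixel
abundances, *_ = np.linalg.lstsq(E, x, rcond=None)
residual = np.linalg.norm(E @ abundances - x)
```

Repeating this solve per pixel, with `E` chosen from locally identified EMs rather than a single global set, is what makes the approach spatially adaptive; the residual above is the per-pixel error the paper reports as reduced.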
Spectral imaging modalities, including reflectance and X-ray fluorescence, play an important role in conservation science. In reflectance hyperspectral imaging, the data are classified into areas having similar spectra and turned into labeled pigment maps by using spectral features and fusing them with other information. Direct classification and labeling remain challenging because many paints are intimate pigment mixtures that require a non-linear unmixing model for a robust solution. Neural networks have been successful in modeling non-linear mixtures in remote sensing, where large training datasets are available. For paintings, however, existing spectral databases are small and do not encompass the diversity encountered. Given that painting practices are relatively consistent within schools of artistic practice, we tested the suitability of using reflectance spectra from a subgroup of well-characterized paintings to build a large database to train a one-dimensional (spectral) convolutional neural network. The labeled pigment maps produced were found to be robust within similar styles of paintings.
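As a minimal illustration of the one-dimensional (spectral) convolution such a network applies to each reflectance spectrum: this is a hand-rolled sketch with an arbitrary filter and toy spectrum, not the trained model or its architecture.

```python
import numpy as np

def conv1d_valid(spectrum, kernel):
    """One 'valid' 1-D sliding-window correlation over a spectrum --
    the basic building block of a spectral (1-D) CNN layer."""
    n, k = len(spectrum), len(kernel)
    return np.array([spectrum[i:i + k] @ kernel for i in range(n - k + 1)])

spectrum = np.linspace(0.1, 0.9, 16)   # toy reflectance spectrum, 16 bands
kernel = np.array([-1.0, 0.0, 1.0])    # stand-in for a learned filter
feat = np.maximum(conv1d_valid(spectrum, kernel), 0.0)  # ReLU activation
```

Stacking such filtered-and-activated layers, followed by a classification head, yields the per-pixel pigment labels described in the abstract.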
Systematic variations with wavelength in the position angle of interstellar linear polarization of starlight may be indicative of multiple cloud structure along the line of sight. We use polarimetric observations of two stars (HD 29647, HD 283809) in the general direction of TMC-1 in the Taurus Dark Cloud to investigate grain properties and cloud structure in this region. We show the data to be consistent with a simple two-component model, in which general interstellar polarization in the Taurus Cloud is produced by a widely distributed cloud component with relatively uniform magnetic field orientation; the light from stars close to TMC-1 suffers additional polarization arising in one (or more) subcloud(s) with larger average grain size and different magnetic field directions compared with the general trend. Toward HD 29647, in particular, we show that the unusually low degree of visual polarization relative to extinction is due to depolarization associated with the presence of distinct cloud components in the line of sight with markedly different magnetic field orientations. Stokes parameter calculations allow us to separate out the polarization characteristics of the individual components. The results are fitted with the Serkowski empirical formula to determine the degree and wavelength of maximum polarization. Whereas λmax values in the widely distributed material are similar to the average (0.55 µm) for the diffuse interstellar medium, the subcloud in the line of sight to HD 283809, the most heavily reddened star in our study, has λmax ≈ 0.73 µm, indicating the presence of grains ∼30% larger than this average. Our model also predicts detectable levels of circular polarization toward both HD 29647 and HD 283809.
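The Serkowski empirical formula used for these fits has the form P(λ) = P_max exp[−K ln²(λ_max/λ)]. A small sketch follows, with illustrative values only; K is treated here as Serkowski's original constant, though wavelength-dependent forms of K are also used in the literature.

```python
import numpy as np

def serkowski(lam_um, p_max, lam_max, K=1.15):
    """Serkowski law: P(lam) = p_max * exp(-K * ln(lam_max/lam)**2).
    Wavelengths in micrometres; K = 1.15 is the classic constant."""
    return p_max * np.exp(-K * np.log(lam_max / lam_um) ** 2)

lam = np.linspace(0.35, 0.90, 111)           # optical wavelength grid (um)
p = serkowski(lam, p_max=2.5, lam_max=0.73)  # illustrative subcloud values
```

Fitting the two free parameters P_max and λ_max (and optionally K) to the measured polarization spectrum yields the degree and wavelength of maximum polarization reported in the abstract.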
The ability to rank images based on their appearance finds many real-world applications, such as image retrieval or image album creation. Despite the recent dominance of deep learning methods in computer vision, which often achieve superior performance, they are not always the methods of choice because they lack interpretability. In this work, we investigate the possibility of improving the image aesthetic inference of convolutional neural networks with hand-designed features that rely on domain expertise in various fields. We compare hand-crafted feature sets in their ability to predict fine-grained aesthetics scores on two image aesthetics datasets. We observe that even feature sets published earlier can compete with more recently published algorithms and that, by combining the algorithms, one can obtain a significant improvement in predicting image aesthetics. Using a tree-based learner, we perform feature elimination to identify the best-performing features overall and across different image categories. Only roughly 15% and 8% of the features are needed to achieve full performance in predicting a fine-grained aesthetic score and in binary classification, respectively. By combining hand-crafted features with meta-features that predict the quality of an image from CNN features, the model performs better than a baseline VGG16 model. One can, however, achieve a more significant improvement in both aesthetics score prediction and binary classification by fusing the hand-crafted features with the penultimate-layer activations. Our experiments indicate an improvement of up to 2.2%, achieving current state-of-the-art binary classification accuracy on the AVA dataset, when the hand-designed features are fused with activations from the VGG16 and ResNet50 networks.
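The fusion step described above amounts to concatenating the two representations per image before training a downstream learner. A minimal sketch, with random stand-in data; the feature dimensions and array names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: hand-crafted aesthetics features and penultimate-layer
# CNN activations (e.g. a 4096-d fully connected layer) for 100 images
hand_crafted = rng.normal(size=(100, 40))
activations = rng.normal(size=(100, 4096))

# Late fusion: concatenate per image; any learner (tree ensemble,
# linear head, ...) can then be trained on the fused representation
fused = np.concatenate([hand_crafted, activations], axis=1)
```

Feature elimination with a tree-based learner, as in the abstract, would then prune most of these fused columns while retaining predictive performance.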