In this paper, we propose a method that uses a three-dimensional convolutional neural network (3-D-CNN) to fuse multispectral (MS) and hyperspectral (HS) images into a high-resolution hyperspectral image. Dimensionality reduction of the hyperspectral image is performed prior to fusion, which significantly reduces the computational time and makes the method more robust to noise. Experiments are performed on a data set simulated from a real hyperspectral image. The results show that the proposed approach compares favorably with conventional methods, especially when the hyperspectral image is corrupted by additive noise.
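The dimensionality-reduction step described above can be sketched with principal component analysis, one common choice for compressing the spectral bands of an HS cube before fusion. This is a minimal NumPy illustration under our own assumptions, not the paper's implementation: the 3-D-CNN fusion itself is omitted, and the function names and shapes are illustrative.

```python
import numpy as np

def reduce_hs_cube(cube, k):
    """Project a hyperspectral cube (H, W, B) onto its first k principal
    components, returning the (H, W, k) score cube plus the mean and PC
    basis needed to map fused results back to the B reflectance bands."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered pixel-by-band matrix; rows of Vt are the PCs
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                       # (k, B)
    scores = Xc @ basis.T                # (H*W, k)
    return scores.reshape(H, W, k), mean, basis

def restore_hs_cube(scores, mean, basis):
    """Invert the projection: (H, W, k) PC scores back to (H, W, B)."""
    H, W, k = scores.shape
    X = scores.reshape(-1, k) @ basis + mean
    return X.reshape(H, W, basis.shape[1])
```

Because HS spectra are highly redundant, a small k typically captures almost all of the variance, so any downstream fusion network only needs to process k channels instead of hundreds of bands.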
In remote sensing, due to cost and complexity constraints, multispectral (MS) and hyperspectral (HS) images have significantly lower spatial resolution than panchromatic (PAN) images. Recently, the problem of fusing coregistered MS and HS images has gained attention. In this paper, we propose a novel method for the fusion of MS/HS and PAN images and of MS and HS images. MS and, even more so, HS images contain spectral redundancy, which makes dimensionality reduction of the data via principal component (PC) analysis very effective. The fusion is performed in the lower-dimensional PC subspace; thus, we only need to estimate the first few PCs, rather than every spectral reflectance band, without compromising spectral or spatial quality. The benefits of the approach are substantially lower computational requirements and very high tolerance to noise in the observed data. Examples are presented using WorldView-2 data and a simulated data set based on a real HS image, with and without added noise.

Index Terms: Image fusion, maximum a posteriori probability (MAP), principal component analysis (PCA), wavelets.
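As a minimal illustration of fusing in the PC subspace, the classic PCA component-substitution pansharpening scheme swaps the first principal component of the (already upsampled) MS image for a histogram-matched PAN band and inverts the transform. This is a simpler relative of the MAP/wavelet method summarized above, sketched under our own assumptions about inputs and naming, not the paper's estimator.

```python
import numpy as np

def pca_pansharpen(ms_up, pan):
    """Component-substitution pansharpening: fuse an MS image already
    upsampled to the PAN grid (H, W, B) with a PAN band (H, W) by
    replacing the first PC with the histogram-matched PAN band."""
    H, W, B = ms_up.shape
    X = ms_up.reshape(-1, B).astype(float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = (X - mean) @ Vt.T           # pixels in PC coordinates
    pc1 = scores[:, 0]
    p = pan.reshape(-1).astype(float)
    # Match the PAN band's mean/std to PC1 before substitution
    p = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    scores[:, 0] = p
    fused = scores @ Vt + mean           # back to spectral bands
    return fused.reshape(H, W, B)
```

The substitution injects the PAN image's spatial detail into the component that carries most of the scene's brightness variation, while the remaining PCs preserve the spectral content.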
Currently, the main limitation in high-throughput microsatellite genotyping is the manual editing of allele calls that it requires. Even though programs for automated allele calling have been available for several years, their capability is limited because accurate data can only be assured by manual inspection of the electropherograms. Here we describe the development of a parametric approach to allele-call quality control that eliminates much of the time required for manual editing of the data. This approach was implemented in an editing tool, Decode-GT, that works downstream of the allele-calling program TrueAllele (TA). Decode-GT reads the output data from TA, displays the underlying electropherograms for the genotypes, and sorts the allele calls into three categories: good, bad, and ambiguous. It discards the bad calls, accepts the good calls, and suggests that the user inspect the ambiguous calls, thereby reducing dependence on manual editing. For the categorization we use the following parameters: (1) the quality value for each allele call from TrueAllele; (2) the peak height of the alleles; and (3) the size of the peak shift needed to move peaks into the nearest bin. Here we report how we optimized these parameters so that the size of the ambiguous category was minimized while both the number of miscalled genotypes in the good category and the number of usable genotypes in the bad category were negligible. This approach reduces manual editing time and results in <1% miscalls.
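The three-parameter categorization described above can be sketched as a simple threshold rule. The threshold values and record fields below are illustrative assumptions, not the optimized values from Decode-GT.

```python
# Assumed thresholds; the paper optimizes these, the values here are
# invented for illustration only.
QUALITY_GOOD, QUALITY_BAD = 0.95, 0.50   # TrueAllele quality score bounds
PEAK_MIN = 200                           # minimum allele peak height
SHIFT_MAX = 0.35                         # max shift (bp) into nearest bin

def categorize(call):
    """Sort one allele call, given as a dict with keys 'quality',
    'peak_height', and 'bin_shift', into 'good', 'bad', or 'ambiguous'."""
    # Clearly failing calls are discarded outright
    if call["quality"] < QUALITY_BAD or call["peak_height"] < PEAK_MIN:
        return "bad"
    # High-confidence calls with peaks close to a bin are accepted
    if call["quality"] >= QUALITY_GOOD and abs(call["bin_shift"]) <= SHIFT_MAX:
        return "good"
    # Everything in between is flagged for manual inspection
    return "ambiguous"
```

Tuning the thresholds trades off the size of the ambiguous set (manual work) against miscalls leaking into the good set and usable calls lost to the bad set, which is the optimization the abstract describes.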