A new growing method for simplex-based endmember extraction algorithms (EEAs), called the simplex growing algorithm (SGA), is presented in this paper. It is a sequential algorithm that finds the simplex with the maximum volume each time a new vertex is added. To terminate the algorithm, a recently developed concept, virtual dimensionality (VD), is implemented as a stopping rule that determines the number of vertices the algorithm is required to generate. The SGA improves on a commonly used EEA, the N-finder algorithm (N-FINDR) developed by Winter, by growing simplexes one vertex at a time until the desired number of vertices estimated by the VD is reached, which results in a tremendous reduction in computational complexity.
Additionally, it judiciously selects an appropriate initial vector, avoiding a dilemma caused by the N-FINDR's use of random vectors as its initial condition, under which the N-FINDR generally produces different sets of final endmembers when different sets of randomly generated initial endmembers are used. To demonstrate the performance of the proposed SGA, the N-FINDR and two other EEAs, the pixel purity index and vertex component analysis, are used for comparison.

Index Terms: Endmember extraction, N-finder algorithm (N-FINDR), pixel purity index (PPI), sequential endmember extraction algorithm (SQEEA), simplex growing algorithm (SGA), simultaneous endmember extraction algorithm (SMEEA), vertex component analysis (VCA), virtual dimensionality (VD).
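The growing procedure the abstract describes can be sketched as follows: project the data down to p-1 dimensions, seed the simplex with one pixel, then at each step add the pixel that maximizes the resulting simplex volume until p vertices (the VD estimate) are reached. This is a minimal illustration, not the paper's implementation; the SVD-based dimensionality reduction, the seed choice, and the exhaustive inner search are all simplifying assumptions.

```python
import numpy as np
from math import factorial

def simplex_volume(vertices):
    # vertices: (k, k-1) array -- k vertices in (k-1)-dimensional space.
    # Volume = |det of edge vectors from vertex 0| / (k-1)!
    diffs = vertices[1:] - vertices[0]
    return abs(np.linalg.det(diffs)) / factorial(len(vertices) - 1)

def sga(pixels, p, seed_index=0):
    """Greedy simplex growing over `pixels` (N x bands), returning the
    indices of p vertices. `p` would come from a VD estimate in practice."""
    # Reduce to p-1 dimensions with a PCA/SVD projection (an assumption;
    # the paper's exact dimensionality-reduction choice may differ).
    X = pixels - pixels.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Y = X @ Vt[:p - 1].T                       # (N, p-1) reduced coordinates
    vertices = [seed_index]
    for k in range(2, p + 1):                  # grow one vertex at a time
        best_i, best_vol = None, -1.0
        for i in range(len(Y)):
            if i in vertices:
                continue
            # A k-vertex simplex lives in k-1 dimensions: evaluate its
            # volume in the first k-1 reduced coordinates.
            vol = simplex_volume(Y[vertices + [i], :k - 1])
            if vol > best_vol:
                best_i, best_vol = i, vol
        vertices.append(best_i)
    return vertices
```

On synthetic data whose extreme points are known pure pixels, the greedy loop recovers them, because the simplex volume at each step is maximized at an extreme point of the data cloud.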
Changes in land use and land cover (LULC) affect the atmospheric, climatic, and biological spheres of the Earth. An accurate LULC map offers detailed information for resource management and for intergovernmental cooperation on global warming and biodiversity reduction. This paper examines the effects of pansharpening and atmospheric correction on LULC classification.
An object-based support vector machine (OB-SVM) and a pixel-based maximum likelihood classifier (PB-MLC) were applied for LULC classification. Results showed that atmospheric correction is not necessary for LULC classification when classification is performed on the original multispectral image. Pansharpening, however, plays a much more important role in classification accuracy than atmospheric correction: it can increase classification accuracy by 12% on average compared with results without pansharpening. PB-MLC and OB-SVM achieved similar classification rates; in this study, their LULC classification accuracies were 82% and 89%, respectively. A combination of atmospheric correction, pansharpening, and OB-SVM can offer promising LULC maps from WorldView-2 multispectral and panchromatic images.
The N-finder algorithm (N-FINDR) suffers from several issues in its practical implementation. One is its search region, which is usually the entire data space. Another, related issue is its excessive computation. A third issue is its use of random initial conditions, which causes inconsistency in final results that cannot be reproduced unless the search for endmembers is exhaustive. This paper resolves the first two issues by developing two approaches to speeding up the N-FINDR computation, while a recently developed random pixel purity index (RPPI) is implemented to alleviate the third. First, the search region of the N-FINDR is narrowed to a feasible range, called the region of interest (ROI), and two preprocessing methods, data sphering/thresholding and the RPPI, are proposed to find a desired ROI. Second, three methods are developed to reduce the computing load of simplex volume calculation by simplifying the matrix determinant. Third, to further reduce computational complexity, three sequential N-FINDR algorithms are implemented that find endmembers one after another in sequence instead of finding all endmembers at once. The conducted experiments demonstrate that while the proposed fast algorithms greatly reduce computational complexity, their performance remains as good as that of the N-FINDR and is not compromised by the reduction of the search region to an ROI.
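The simplex volume at the heart of N-FINDR, and the kind of determinant simplification the abstract alludes to, can be sketched as follows. The volume of a p-vertex simplex is the determinant of a ones-augmented matrix of the endmember columns divided by (p-1)!. Because the candidate "replace vertex j with pixel r" step only swaps one column, the p swap volumes can be obtained from a single cofactor (adjugate) matrix rather than p separate determinant evaluations. The exact formulation below is an assumption in the spirit of the paper's speed-ups, not its published method, and requires a nondegenerate starting simplex.

```python
import numpy as np
from math import factorial

def nfindr_volume(E):
    """Simplex volume used by N-FINDR: E is (p-1, p); its columns are p
    endmember candidates in a (p-1)-dimensional reduced space."""
    p = E.shape[1]
    M = np.vstack([np.ones((1, p)), E])        # augment with a row of ones
    return abs(np.linalg.det(M)) / factorial(p - 1)

def swap_volumes(E, r):
    """Volumes of the p simplexes obtained by swapping candidate pixel r
    into each vertex slot, from one cofactor matrix of the augmented M
    (M must be invertible, i.e., the current simplex nondegenerate)."""
    p = E.shape[1]
    M = np.vstack([np.ones((1, p)), E])
    C = np.linalg.det(M) * np.linalg.inv(M).T  # cofactor matrix of M
    a = np.concatenate([[1.0], r])             # augmented candidate column
    # det of M with column j replaced by a equals sum_i a[i] * C[i, j]
    return np.abs(a @ C) / factorial(p - 1)
```

For the 2-D unit simplex with vertices (0,0), (1,0), (0,1), `nfindr_volume` gives the triangle area 0.5, and `swap_volumes` returns all three replacement volumes in one shot.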
An endmember is an idealized, pure signature for a class and provides crucial information for hyperspectral image analysis. Recently, endmember extraction has received considerable attention in hyperspectral imaging due to significantly improved spectral resolution, which substantially increases the likelihood that a pixel uncovered by a hyperspectral image sensor is an endmember. Many algorithms have been proposed for this purpose. One great challenge in endmember extraction is determining the number of endmembers, p, that an endmember extraction algorithm (EEA) is required to generate. Unfortunately, this issue has been overlooked or sidestepped by making empirical assumptions without justification. However, it has been shown that an appropriate selection of p is critical to success in extracting the desired endmembers from image data. This paper explores methods available in the literature that can be used to estimate the value of p. These include the commonly used eigenvalue-based energy method, the Akaike information criterion (AIC), minimum description length (MDL), the Gershgorin radii-based method, signal subspace estimation (SSE), and the Neyman-Pearson detection method from detection theory. To evaluate the effectiveness of these methods, two sets of experiments are conducted for performance analysis. The first set consists of synthetic image-based simulations, which allow their performance to be evaluated with a priori knowledge, while the second set comprises real hyperspectral image experiments that demonstrate the utility of these methods in real applications.
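The simplest of the estimators listed above, the eigenvalue-based energy method, can be sketched in a few lines: take the eigenvalues of the spectral covariance matrix and return the smallest p whose top-p eigenvalues capture a chosen fraction of the total variance. The 99.9% threshold below is a modeling choice, not a value from the paper.

```python
import numpy as np

def energy_vd(X, threshold=0.999):
    """Eigenvalue-based 'energy' estimate of the number of endmembers.
    X: (N, L) array of N pixels with L spectral bands. Returns the
    smallest p whose top-p covariance eigenvalues reach `threshold`
    of the total spectral variance."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]  # descending
    frac = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(frac, threshold) + 1)
```

On noiseless data built from exactly two independent spectral components, the covariance matrix has rank two and the estimate is 2, which is the kind of ground-truth check the synthetic-image simulations in the paper enable.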
One of the great challenges in unsupervised hyperspectral target analysis is how to obtain the desired knowledge for image analysis directly from the data in an unsupervised manner. This paper provides a review of unsupervised target analysis by first addressing two fundamental issues, "what are the material substances of interest, referred to as targets?" and "how can these targets be extracted from the data?", and then developing least squares (LS)-based unsupervised algorithms for finding spectral targets for analysis. To validate and substantiate the proposed unsupervised hyperspectral target analysis, three applications, endmember extraction, target detection, and linear spectral unmixing, are considered, where custom-designed synthetic images and real image scenes are used to conduct experiments.
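The linear spectral unmixing application rests on standard least-squares machinery: given a matrix E whose columns are target (endmember) signatures, the abundances of a pixel r are the LS solution of r ≈ Ea, optionally constrained to sum to one via a Lagrange multiplier. The sketch below shows the textbook unconstrained and sum-to-one constrained estimators; it is generic LS unmixing, not the paper's specific unsupervised algorithms, and nonnegativity of abundances is not enforced.

```python
import numpy as np

def ls_unmix(E, r):
    """Unconstrained least-squares abundances for one pixel r:
    minimizes ||r - E a||^2, with endmember spectra as columns of E."""
    a, *_ = np.linalg.lstsq(E, r, rcond=None)
    return a

def scls_unmix(E, r):
    """Sum-to-one constrained least squares via a Lagrange multiplier:
    a = a_ls - G 1 (1^T a_ls - 1) / (1^T G 1), with G = (E^T E)^{-1}."""
    G = np.linalg.inv(E.T @ E)
    a = G @ E.T @ r
    ones = np.ones(E.shape[1])
    return a - G @ ones * (ones @ a - 1.0) / (ones @ G @ ones)
```

For a noiseless pixel mixed from known signatures, both estimators recover the true abundances exactly, since E has full column rank and the true abundances already sum to one.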