Spotlight-mode synthetic aperture radar (spotlight-mode SAR) synthesizes high-resolution terrain maps using data gathered from multiple observation angles. This paper shows that spotlight-mode SAR can be interpreted as a tomographic reconstruction problem and analyzed using methods from computer-aided tomography (CAT). The signal recorded at each SAR observation point is modeled as a portion of the Fourier transform of a central projection of the imaged ground area. Reconstruction of a SAR image may then be accomplished using algorithms from CAT. This model permits a simple understanding of SAR imaging, not based on Doppler shifts. Resolution, sampling rates, and waveform considerations are discussed in the context of this interpretation of SAR.
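The tomographic interpretation rests on the projection-slice theorem: the 1-D Fourier transform of a projection of the imaged area equals a central slice of the area's 2-D Fourier transform. A minimal numerical check of the theorem (an illustration only, not the paper's SAR processing chain) can be sketched with NumPy:

```python
import numpy as np

# A stand-in "ground area": any 2-D image suffices to check the theorem.
rng = np.random.default_rng(0)
img = rng.random((64, 64))

# Projection at angle 0: integrate the image along the vertical axis.
proj = img.sum(axis=0)

# Projection-slice theorem: the 1-D FFT of this projection equals the
# zero-vertical-frequency (central) slice of the image's 2-D FFT.
slice_1d = np.fft.fft(proj)
slice_2d = np.fft.fft2(img)[0, :]
print(np.allclose(slice_1d, slice_2d))  # prints True
```

In spotlight-mode SAR, each recorded return supplies a portion of one such central slice in the Fourier domain, which is why CAT reconstruction algorithms apply directly.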
Convolution backprojection (CBP) image reconstruction has been proposed as a means of producing high-resolution synthetic-aperture radar (SAR) images by processing data directly in the polar recording format, which is the conventional recording format for spotlight-mode SAR. The CBP algorithm filters each projection as it is recorded and then backprojects the ensemble of filtered projections to create the final image pixel by pixel. Because it handles the recorded data directly in polar format, CBP requires only 1-D interpolation along the filtered projections to determine the precise values that each projection contributes to the backprojection summation. The algorithm is thus able to produce higher-quality images by eliminating the inaccuracies of 2-D interpolation, as well as by using all the data recorded in the spectral-domain annular sector more effectively. The computational complexity of the CBP algorithm is O(N^3).
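The filter-then-backproject structure, including the 1-D interpolation step along each filtered projection, can be sketched for the simpler parallel-beam case. This is a rough illustration under assumed conventions (the function name, the plain ramp filter, and the Cartesian output grid are all illustrative); a real spotlight-mode CBP implementation operates on the polar-format radar data described above.

```python
import numpy as np

def filtered_backprojection(sinogram, angles_deg, n):
    """Reconstruct an n x n image from a parallel-beam sinogram
    (one projection per row). Each projection is ramp-filtered in the
    frequency domain, then backprojected using only 1-D linear
    interpolation (np.interp) along the filtered projection -- the same
    per-projection interpolation step CBP relies on."""
    n_det = sinogram.shape[1]

    # Ramp filter |f|, applied in the Fourier domain of each projection.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    # Pixel coordinates centered on the image, and detector coordinates.
    xs = np.arange(n) - (n - 1) / 2.0
    X, Y = np.meshgrid(xs, xs)
    det = np.arange(n_det) - (n_det - 1) / 2.0

    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        # Detector coordinate of every pixel for this view.
        t = X * np.cos(theta) + Y * np.sin(theta)
        # 1-D interpolation along the filtered projection.
        recon += np.interp(t, det, proj)
    return recon * np.pi / len(angles_deg)

# Usage: projections of a single point at the origin reconstruct to a
# peak at the image center.
angles = np.arange(0.0, 180.0, 1.0)
sino = np.zeros((len(angles), 65))
sino[:, 32] = 1.0
image = filtered_backprojection(sino, angles, 65)
```

Note that each view is processed independently as soon as it is available, which is what lets CBP filter and backproject projections as they are recorded.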
Dictionary learning algorithms have been successfully used for both reconstructive and discriminative tasks, where an input signal is represented with a sparse linear combination of dictionary atoms. While these methods are mostly developed for single-modality scenarios, recent studies have demonstrated the advantages of feature-level fusion based on the joint sparse representation of the multimodal inputs. In this paper, we propose a multimodal task-driven dictionary learning algorithm under the joint sparsity constraint (prior) to enforce collaborations among multiple homogeneous/heterogeneous sources of information. In this task-driven formulation, the multimodal dictionaries are learned simultaneously with their corresponding classifiers. The resulting multimodal dictionaries can generate discriminative latent features (sparse codes) from the data that are optimized for a given task such as binary or multiclass classification. Moreover, we present an extension of the proposed formulation using a mixed joint and independent sparsity prior, which facilitates more flexible fusion of the modalities at feature level. The efficacy of the proposed algorithms for multimodal classification is illustrated on four different applications--multimodal face recognition, multi-view face recognition, multi-view action recognition, and multimodal biometric recognition. It is also shown that, compared with the counterpart reconstructive-based dictionary learning algorithms, the task-driven formulations are more computationally efficient in the sense that they can be equipped with more compact dictionaries and still achieve superior performance.
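The joint sparsity prior forces the sparse codes of all modalities to share a common support (the same set of active atoms). As a rough illustration of that constraint only (this is greedy simultaneous orthogonal matching pursuit, not the paper's task-driven learning formulation; `joint_sparse_codes` and its parameters are hypothetical names), the sketch below selects each atom by pooling correlations across modalities:

```python
import numpy as np

def joint_sparse_codes(signals, dicts, k):
    """Greedy joint sparse coding: at each step pick the atom index with the
    largest total correlation across all modalities, so every modality's
    code shares the same support -- the joint sparsity prior.

    signals: list of 1-D arrays, one per modality
    dicts:   list of (dim_m x n_atoms) dictionaries with unit-norm columns
    k:       number of shared atoms to select"""
    n_atoms = dicts[0].shape[1]
    support = []
    residuals = [s.astype(float).copy() for s in signals]
    codes = [np.zeros(n_atoms) for _ in signals]
    for _ in range(k):
        # Sum absolute correlations over modalities; choose one shared atom.
        score = sum(np.abs(D.T @ r) for D, r in zip(dicts, residuals))
        score[support] = -np.inf          # never reselect an atom
        support.append(int(np.argmax(score)))
        # Per-modality least-squares refit on the shared support.
        for m, (D, s) in enumerate(zip(dicts, signals)):
            sub = D[:, support]
            coef, *_ = np.linalg.lstsq(sub, s, rcond=None)
            codes[m][:] = 0.0
            codes[m][support] = coef
            residuals[m] = s - sub @ coef
    return codes, support

# Usage: two modalities built from the same two atoms (indices 3 and 17)
# of their respective dictionaries should yield that shared support.
rng = np.random.default_rng(1)
D1 = rng.normal(size=(50, 60)); D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.normal(size=(40, 60)); D2 /= np.linalg.norm(D2, axis=0)
s1 = 2.0 * D1[:, 3] + 1.5 * D1[:, 17]
s2 = 1.0 * D2[:, 3] - 2.0 * D2[:, 17]
codes, support = joint_sparse_codes([s1, s2], [D1, D2], 2)
```

The mixed joint-and-independent prior mentioned in the abstract relaxes exactly this constraint, letting part of each code deviate from the shared support.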