We propose a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the moving and stationary target acquisition and recognition (MSTAR) public release database. First, MSTAR image chips are represented as fine and raw feature vectors, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) network as the base learner. Since the RBF network is a binary classifier, we decompose our multiclass problem into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF network for each binary problem into a code word, which is then "decoded" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature.
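The ECOC step above maps each class to a codeword and assigns a test chip to the class whose codeword best matches the binary classifiers' outputs. A minimal sketch of that decoding, using minimum Hamming distance and a hypothetical 3-bit code dictionary for the three MSTAR vehicle classes (the actual codewords and their length are not specified here):

```python
import numpy as np

# Hypothetical ECOC dictionary: one codeword per ground-vehicle class.
# Each column corresponds to the +/-1 output of one binary RBF/AdaBoost
# classifier; the real dictionary in the paper may differ.
CODEBOOK = {
    "BMP2":  np.array([+1, +1, -1]),
    "BTR70": np.array([+1, -1, +1]),
    "T72":   np.array([-1, +1, +1]),
}

def decode(predicted_bits):
    """Return the class whose codeword has the smallest Hamming
    distance to the vector of binary-classifier outputs."""
    def hamming(a, b):
        return int(np.sum(a != b))
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], predicted_bits))
```

With such a dictionary, a predicted bit vector that exactly matches a codeword decodes to that class, and small numbers of bit errors are absorbed by the nearest-codeword rule.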
This paper addresses the problem of simultaneous tracking of multiple targets in a video. We first apply object detectors to every video frame. Pairs of detection responses from every two consecutive frames are then used to build a graph of tracklets. The graph helps transitively link the best matching tracklets that do not violate hard and soft contextual constraints between the resulting tracks. We prove that this data association problem can be formulated as finding the maximum-weight independent set (MWIS) of the graph. We present a new, polynomial-time MWIS algorithm, and prove that it converges to an optimum. Similarity and contextual constraints between object detections, used for data association, are learned online from object appearance and motion properties. Long-term occlusions are addressed by iteratively repeating MWIS to hierarchically merge smaller tracks into longer ones. Our results demonstrate advantages of simultaneously accounting for soft and hard contextual constraints in multitarget tracking. We outperform the state of the art on the benchmark datasets.
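To make the MWIS formulation concrete: each node is a candidate pairing of tracklets with a weight (its association score), and an edge joins two pairings that conflict, so a consistent data association is an independent set and the best one maximizes total weight. Below is an illustrative brute-force solver for tiny graphs only; it is exponential in the number of nodes and is not the polynomial-time algorithm presented in the paper:

```python
from itertools import combinations

def mwis_bruteforce(weights, edges):
    """Exhaustively find the maximum-weight independent set of a small
    graph. weights[i] is the score of node i; edges lists conflicting
    node pairs. Returns (set_of_nodes, total_weight)."""
    n = len(weights)
    conflict = {frozenset(e) for e in edges}
    best, best_w = set(), float("-inf")
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            # Skip subsets containing any conflicting pair.
            if any(frozenset(p) in conflict for p in combinations(subset, 2)):
                continue
            w = sum(weights[i] for i in subset)
            if w > best_w:
                best, best_w = set(subset), w
    return best, best_w
```

For example, if pairing 0 conflicts with pairings 1 and 2, the solver picks {1, 2} whenever their combined weight exceeds that of 0 alone, which is exactly the trade-off the data-association step resolves at scale.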
This paper is about few-shot segmentation of foreground objects in images. We train a CNN on small subsets of training images, each mimicking the few-shot setting. In each subset, one image serves as the query and the other(s) as support image(s) with ground-truth segmentation. The CNN first extracts feature maps from the query and support images. Then, a class feature vector is computed as an average of the support's feature maps over the known foreground. Finally, the target object is segmented in the query image by using a cosine similarity between the class feature vector and the query's feature map. We make two contributions by: (1) Improving discriminativeness of features so their activations are high on the foreground and low elsewhere; and (2) Boosting inference with an ensemble of experts guided by the gradient of loss incurred when segmenting the support images in testing. Our evaluations on the PASCAL-5^i and COCO-20^i datasets demonstrate that we significantly outperform existing approaches.
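The core inference step described above, masked averaging of support features followed by a cosine-similarity comparison against the query feature map, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the threshold `tau` and the feature shapes are assumptions:

```python
import numpy as np

def cosine_segment(support_feat, support_mask, query_feat, tau=0.5):
    """Few-shot segmentation sketch.
    support_feat, query_feat: (H, W, C) feature maps from the CNN.
    support_mask: (H, W) binary ground-truth foreground mask.
    Returns an (H, W) boolean foreground prediction for the query."""
    fg = support_mask.astype(bool)
    # Class feature vector: average of support features over the foreground.
    class_vec = support_feat[fg].mean(axis=0)
    class_vec = class_vec / (np.linalg.norm(class_vec) + 1e-8)
    # Cosine similarity between the class vector and every query location.
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    sim = q @ class_vec  # (H, W) similarity map
    return sim > tau  # assumed threshold; the paper's decision rule may differ
```

Query locations whose features point in the same direction as the class vector score near 1 and are labeled foreground; dissimilar locations score near 0 and are labeled background.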