Spatial predictions of near-surface air temperature (T_air) in Antarctica are required as baseline information for a variety of research disciplines. Since the network of weather stations in Antarctica is sparse, remote sensing methods offer large potential owing to their spatial coverage and accessibility. Based on MODIS land surface temperature (LST) data, T_air at the exact time of satellite overpass was modelled at a spatial resolution of 1 km using data from 32 weather stations. The performance of a simple linear regression model predicting T_air from LST was compared to that of three machine learning algorithms: Random Forest (RF), generalized boosted regression models (GBM) and Cubist. In addition to LST, auxiliary predictor variables were tested in these models. Their relevance was evaluated by a Cubist-based forward feature selection in conjunction with leave-one-station-out cross-validation to reduce the impact of spatial overfitting. GBM performed best in predicting T_air using LST and the month of the year as predictor variables. Using the trained model, T_air could be estimated with a leave-one-station-out cross-validated R² of 0.71 and an RMSE of 10.51 °C. However, the machine learning approaches only slightly outperformed the simple linear estimation of T_air from LST (R² of 0.64, RMSE of 11.02 °C). The trained model allowed time series of T_air to be created over Antarctica for 2013. Extending the training data to more years will allow time series of T_air to be developed from 2000 onward.
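The leave-one-station-out scheme above can be sketched in a few lines: each fold withholds every observation from one station, the simple linear T_air-from-LST baseline is fitted on the remaining stations, and the held-out residuals are pooled into one RMSE. The station count, coefficients, and noise level below are invented for illustration; the study's data and models differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: 5 "stations", each with 50 paired (LST, T_air)
# observations; T_air roughly tracks LST with some noise.
stations = np.repeat(np.arange(5), 50)
lst = rng.uniform(-60.0, 0.0, size=stations.size)
t_air = 0.8 * lst + 2.0 + rng.normal(0.0, 3.0, size=stations.size)

def leave_one_station_out_rmse(stations, x, y):
    """RMSE of a linear model y ~ a*x + b where each fold holds out
    all observations from one station (spatial cross-validation)."""
    squared_errors = []
    for s in np.unique(stations):
        train, test = stations != s, stations == s
        a, b = np.polyfit(x[train], y[train], deg=1)
        pred = a * x[test] + b
        squared_errors.append((pred - y[test]) ** 2)
    return float(np.sqrt(np.mean(np.concatenate(squared_errors))))

rmse = leave_one_station_out_rmse(stations, lst, t_air)
print(round(rmse, 2))
```

Because entire stations are withheld, the score reflects how well the model transfers to unsampled locations, which is the point of using this scheme instead of random cross-validation.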
Abstract. A light detection and ranging (lidar) canopy height model (CHM) was used as training data for a segment-based classification of woody patches. The classifier is accurate (∼92%) and suitable for use at the national scale. Height thresholds and percentage cover of vegetation derived from the CHM were used to produce larger quantities of reliable training data than other, mostly point- or plot-based, ground-truthing approaches. It was found that the regional-scale differentiation between woody and nonwoody vegetation can be achieved by combining dual-polarized Phased Array type L-band Synthetic Aperture Radar (PALSAR) data (HV polarization) with multispectral optical data that include a short-wave infrared band. The application of a support vector machine (SVM) algorithm to these data proved successful. The versatility of this algorithm regarding the discrimination function and its ability to solve classification problems with multiple output classes were critical factors for success. The identified and classified woody patches constitute a valuable addition to and enhancement of the national land cover database. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
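The CHM-based generation of training labels can be illustrated with a minimal sketch: a segment is labelled woody when the fraction of its canopy-height pixels above a height threshold exceeds a cover threshold. The 2 m height and 30% cover thresholds here are hypothetical placeholders, not the values used in the study.

```python
import numpy as np

def label_woody_segments(segment_heights, height_thresh=2.0, cover_thresh=0.3):
    """Label each segment woody (1) or nonwoody (0). segment_heights is
    a list of per-segment arrays of CHM canopy heights in metres; a
    segment is woody if the share of pixels taller than height_thresh
    reaches cover_thresh."""
    labels = []
    for heights in segment_heights:
        cover = float(np.mean(heights > height_thresh))
        labels.append(1 if cover >= cover_thresh else 0)
    return labels

segments = [np.array([0.1, 0.2, 3.5, 4.0]),   # 50% tall cover -> woody
            np.array([0.0, 0.3, 0.5, 2.5])]   # 25% tall cover -> nonwoody
print(label_woody_segments(segments))  # [1, 0]
```

Labels produced this way over many lidar-covered segments could then serve as training samples for a classifier such as an SVM applied to the radar and optical features.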
Abstract. Super-resolution aims at increasing image resolution by algorithmic means and has progressed in recent years thanks to advances in computer vision and deep learning. Convolutional Neural Networks (CNNs) based on a variety of architectures have been applied to the problem, e.g. autoencoders and residual networks. While most research focuses on the processing of photographs consisting only of RGB color channels, little work concentrates on multi-band, analytic satellite imagery. Satellite images often include a panchromatic band, which has higher spatial resolution but lower spectral resolution than the other bands. In remote sensing there is a long tradition of applying pan-sharpening to satellite images, i.e. bringing the multispectral bands to the higher spatial resolution by merging them with the panchromatic band. To our knowledge, there are so far no approaches to super-resolution that take advantage of the panchromatic band. In this paper we propose a method to train state-of-the-art CNNs on pairs of lower-resolution multispectral and high-resolution pan-sharpened image tiles in order to create super-resolved analytic images. The derived quality metrics show that the method improves the information content of the processed images. We compare the results of four CNN architectures, with RedNet30 performing best.
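One way to build the training pairs described above is to treat each pan-sharpened tile as the high-resolution target and derive its low-resolution counterpart by block-averaging down to the multispectral sampling. A minimal numpy sketch; the tile size, band count, and factor of 4 are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def make_sr_training_pair(pansharpened_tile, factor=4):
    """Given a high-resolution pan-sharpened tile of shape (H, W, bands),
    return a (low-res input, high-res target) pair. The low-res input is
    produced by block-averaging, mimicking the coarser multispectral
    sampling that the CNN learns to invert."""
    h, w, bands = pansharpened_tile.shape
    assert h % factor == 0 and w % factor == 0
    low = pansharpened_tile.reshape(
        h // factor, factor, w // factor, factor, bands
    ).mean(axis=(1, 3))
    return low, pansharpened_tile

tile = np.random.rand(64, 64, 4).astype(np.float32)  # 4-band toy tile
low, high = make_sr_training_pair(tile, factor=4)
print(low.shape, high.shape)  # (16, 16, 4) (64, 64, 4)
```

A CNN trained on such pairs sees realistic multi-band statistics at training time, so it can later be applied to genuine lower-resolution multispectral tiles.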
Most modern hand pose estimation methods rely on Convolutional Neural Networks (CNNs), which typically require a large training dataset to perform well. Exploiting unlabeled data provides a way to reduce the required amount of annotated data. We propose to take advantage of a geometry-aware representation of the human hand, which we learn from multi-view images without annotations. The objective for learning this representation is simply to predict a different view. Our results show that, when the amount of 3D annotations is limited, this objective yields clearly superior pose estimation results compared to directly mapping an input image to the 3D joint locations of the hand. We further show the effect of the objective in both settings: using it for pre-training as well as for simultaneously learning to predict novel views and to estimate the 3D pose of the hand.
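The view-prediction objective can be caricatured with a toy numpy analogue: predict how the hand appears from a second camera by rotating a 3D estimate into that camera's frame and penalising the mismatch. The actual method predicts novel-view images from a learned latent representation rather than rotating joint coordinates; the joint count, rotation, and MSE loss below are simplifications for illustration only.

```python
import numpy as np

def novel_view_loss(joints_3d, rotation, observed_view):
    """Toy view-prediction objective: rotate predicted 3D hand joints
    into a second camera's frame and return the mean squared error
    against the joints observed from that view."""
    predicted_view = joints_3d @ rotation.T
    return float(np.mean((predicted_view - observed_view) ** 2))

joints = np.random.rand(21, 3)            # 21 hand joints, 3D
theta = np.pi / 6                         # 30 degree rotation about z
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])

consistent = novel_view_loss(joints, rot, joints @ rot.T)   # geometry agrees
inconsistent = novel_view_loss(joints, rot, joints)         # geometry disagrees
print(consistent, inconsistent > 0.0)
```

The point of the analogy: a representation that minimises such a loss must encode view-consistent 3D geometry, which is why it transfers well to pose estimation with few annotations.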