This study proposes a light convolutional neural network (LCNN) well suited to medium-resolution (30-m) land-cover classification. The LCNN attains high accuracy without overfitting, even with a small number of training samples, and has lower computational costs owing to its much lighter design than typical convolutional neural networks for high-resolution or hyperspectral image classification tasks. The performance of the LCNN was compared to that of a deep convolutional neural network, support vector machine (SVM), k-nearest neighbors (KNN), and random forest (RF); SVM, KNN, and RF were tested with both patch-based and pixel-based systems. Three 30 km × 30 km test sites of the Level II National Land Cover Database were used as reference maps to embrace a wide range of land-cover types, and a single-date Landsat-8 image was used for each test site. To evaluate the performance of the LCNN as a function of sample size, we varied the sample size to include 20, 40, 80, 160, and 320 samples per class. The proposed LCNN achieved the highest accuracy in 13 of 15 cases (i.e., three test sites with five sample sizes), and the LCNN with a patch size of three produced the highest overall accuracy of 61.94% over 10 repetitions, followed by SVM (61.51%) and RF (61.15%) with a patch size of three. The statistical significance of the differences between the LCNN and the other classifiers was also reported. Moreover, by introducing a heterogeneity value (from 0 to 8) representing the complexity of the map, we demonstrated the advantage of the patch-based LCNN over pixel-based classifiers, particularly at moderately heterogeneous pixels (values from 1 to 4), with respect to accuracy (the LCNN is 5.5% and 6.3% more accurate for training sample sizes of 20 and 320 samples per class, respectively). Finally, the computation times of the classifiers were measured, and the LCNN was confirmed to have an advantage in large-area mapping.
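The abstract describes the heterogeneity value only as an integer from 0 to 8 that captures map complexity. One natural realization, sketched below as an assumption rather than the paper's exact definition, counts how many of a pixel's eight neighbors in a 3 × 3 window carry a different class label. The function name `heterogeneity_map` is illustrative.

```python
import numpy as np

def heterogeneity_map(label_map: np.ndarray) -> np.ndarray:
    """For each pixel, count how many of its 8 neighbors in a 3x3
    window carry a different class label (0 = homogeneous, 8 = fully mixed).
    Edge pixels are handled by replicating the border labels."""
    padded = np.pad(label_map, 1, mode="edge")
    h, w = label_map.shape
    het = np.zeros((h, w), dtype=np.int8)
    # Compare the center against each of the 8 neighbor shifts.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            het += (neighbor != label_map).astype(np.int8)
    return het
```

Under this definition, pixels with heterogeneity 1–4 correspond to the "moderately heterogeneous" cases where the patch-based classifier is reported to hold its largest advantage.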
Meteorological satellite images provide crucial information on solar irradiation and weather conditions at spatial and temporal resolutions that are ideal for short-term photovoltaic (PV) power forecasts. Following the introduction of next-generation meteorological satellites, investigating their application to PV forecasts has become a pressing task. In this study, Communications, Oceans, and Meteorological Satellite (COMS) and Himawari-8 (H8) satellite images were used as inputs to a deep neural network (DNN) model for 2 h- and 1 h-ahead PV forecasts. A one-year PV power dataset acquired from two solar power test sites in Korea was used to directly forecast PV power. H8 was used as a proxy for GEO-KOMPSAT-2A (GK2A), the next-generation satellite after COMS, considering their similar resolutions, overlapping geographic coverage, and data availability. In addition, two different data sampling setups were designed to implement the input dataset. The first setup sampled chronologically ordered data using a relatively inclusive time frame (6 a.m. to 8 p.m. local time) to create a two-month test dataset, whereas the second setup randomly sampled 25% of the data from each month of the one-year input dataset. Regardless of the setup, the DNN model generated superior forecast performance, yielding lower normalized mean absolute error (NMAE) and normalized root mean squared error (NRMSE) values than the support vector machine (SVM) and artificial neural network (ANN) models. The first setup revealed that the visible (VIS) band yielded lower NMAE and NRMSE values, while COMS was found to be more influential for 1 h-ahead forecasts. For the second setup, however, the difference in NMAE results between COMS and H8 was too small to give either dataset a clear edge in performance.
Nevertheless, this marginal difference and the overall similarity of the results suggest that both satellite datasets can be used effectively for direct short-term PV forecasts. Ultimately, the comparative study across satellite datasets, spectral bands, time frames, forecast horizons, and forecast models confirms the superiority of the DNN and offers insight into the potential of transitioning to GK2A for future PV forecasts.
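NMAE and NRMSE can be normalized in several ways; the sketch below assumes normalization by installed plant capacity, a common convention for PV forecast errors, though the abstract does not state the paper's exact normalizer. The function names `nmae` and `nrmse` are illustrative.

```python
import numpy as np

def nmae(actual, forecast, capacity):
    """Normalized mean absolute error, as a percentage of plant capacity."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs(forecast - actual)) / capacity

def nrmse(actual, forecast, capacity):
    """Normalized root mean squared error, as a percentage of plant capacity.
    NRMSE penalizes large misses more heavily than NMAE."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.sqrt(np.mean((forecast - actual) ** 2)) / capacity
```

Because both metrics are scaled by capacity, scores from plants of different sizes (such as the two Korean test sites) remain directly comparable.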
Not all building labels used for training improve the performance of a deep learning model. Some labels may be incorrect or too ambiguous to represent their ground truths, degrading the model's performance. For example, building labels in OpenStreetMap (OSM) and Microsoft Building Footprints (MBF) are publicly available training sources with great potential for training deep models, but using those labels directly can limit the model's performance because they are incomplete and inaccurate; such labels are known as noisy labels. This paper presents self-filtered learning (SFL), which helps a deep model learn well with noisy labels for building extraction in remote sensing images. SFL iteratively filters out noisy labels during the training process based on per-sample loss. Over multiple rounds, SFL trains the deep model progressively on refined samples from which the noisy labels have been removed. Extensive experiments with a simulated noisy map as well as real-world noisy maps, OSM and MBF, showed that SFL can improve the deep model's performance across diverse error types and noise levels.
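Filtering noisy labels by per-sample loss is commonly realized with a small-loss heuristic: samples the model fits poorly are suspected of carrying wrong labels. The sketch below shows one illustrative filtering round under that assumption; it is not SFL's exact criterion or schedule, and the name `filter_small_loss` and the `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def filter_small_loss(losses, keep_ratio):
    """Return the indices of the keep_ratio fraction of samples with the
    smallest loss. High-loss samples are treated as likely noisy labels
    and dropped from the next training round."""
    losses = np.asarray(losses, dtype=float)
    n_keep = max(1, int(round(keep_ratio * losses.size)))
    return np.argsort(losses)[:n_keep]
```

In a multi-round scheme, a model would be trained, per-sample losses computed, this filter applied, and training repeated on the retained subset, so each round sees a progressively cleaner label set.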