Coal is a principal source of energy, and its combustion supplies around one-third of global electricity generation. Coal mines are also an important source of emissions of CH4, the second most important greenhouse gas. Monitoring CH4 emissions from coal mining with earth observation requires the exact locations of coal mines. This paper aims to detect surface coal mines from satellite images using deep learning techniques, treating the problem as a land use/land cover classification task. This is achieved with Convolutional Neural Networks (CNNs), which have proven capable of complex land use/land cover classification tasks. From a list of known coal mine locations in various countries, a training dataset of "Coal Mine" and "No Coal Mine" image patches is prepared from Sentinel-2 satellite images with 13 spectral bands. Various pre-trained CNN architectures (VGG, ResNet, DenseNet) are trained and validated on this dataset of 3500 "Coal Mine" and 3000 "No Coal Mine" image patches. After several experiments, the VGG network combined with transfer learning is found to be the optimal model for this task, achieving 98% classification accuracy on the validation dataset. The model produces more than 95% overall accuracy when tested on unseen satellite images from countries outside the training dataset, evaluated against visual classification.
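The abstract describes fine-tuning a pre-trained VGG network on 13-band Sentinel-2 patches for a binary "Coal Mine"/"No Coal Mine" classification. A minimal sketch of such an architecture is shown below in PyTorch; the `MiniVGG` class, layer widths, and patch size are illustrative assumptions, not the authors' actual model, and in practice the first convolution of a full ImageNet-pretrained VGG would be replaced to accept 13 input channels before fine-tuning.

```python
import torch
import torch.nn as nn

# Illustrative VGG-style classifier for 13-band Sentinel-2 patches.
# A real transfer-learning setup would start from pre-trained VGG weights
# and swap the first conv layer to accept 13 channels instead of 3.
class MiniVGG(nn.Module):
    def __init__(self, in_channels=13, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1 so any patch size works
            nn.Flatten(),
            nn.Linear(128, num_classes),  # "Coal Mine" vs "No Coal Mine"
        )

    def forward(self, x):
        return self.head(self.features(x))

model = MiniVGG()
# A batch of four hypothetical 64x64 patches with 13 spectral bands.
logits = model(torch.randn(4, 13, 64, 64))
print(tuple(logits.shape))  # (4, 2)
```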
Abstract. Depth is an essential component of various scene understanding tasks and of reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires capturing multiple views of the same scene, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest with recent advancements in computer vision and deep learning. This research has largely focused on indoor scenes or outdoor scenes captured at ground level; single-image depth estimation from aerial images has been limited by the additional complexities of increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single-image depth estimation is based on image reconstruction techniques that use stereo images to learn depth estimation from single images. Among the various available models for ground-level single-image depth estimation, two, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images that can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show that the CNN model generates smoother images with higher disparity ranges, while the GAN model generates sharper images with smaller disparity ranges. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN and to produce depth similar to that of Pix4D. This comparison helps streamline efforts to produce depth from a single aerial image.
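The abstract states that the predicted disparity maps are converted to depth information. A minimal sketch of that conversion is shown below, assuming the standard pinhole-stereo relation depth = focal length x baseline / disparity; the function name and the focal length and baseline values are illustrative placeholders, not values from the paper.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px=1000.0, baseline_m=0.5, eps=1e-6):
    """Convert a disparity map (pixels) to depth (meters).

    Uses depth = focal_px * baseline_m / disparity; focal_px and
    baseline_m are hypothetical camera parameters for illustration.
    A small eps guards against division by zero-disparity pixels.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    return focal_px * baseline_m / np.maximum(disparity, eps)

# Larger disparity corresponds to closer scene points.
disp = np.array([[10.0, 20.0],
                 [50.0, 100.0]])
depth = disparity_to_depth(disp)
print(depth)  # [[50. 25.] [10.  5.]]
```

In practice the resulting per-pixel depths would then be projected to 3D points for comparison against a photogrammetric point cloud such as the Pix4D output mentioned above.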