In mining operations, an ore is separated into its constituents through mineral processing methods such as flotation. Identifying the types of minerals contained in the ore in advance greatly aids faster and more efficient mineral processing. The human eye can recognize visual information in only three wavelength regions: red, green, and blue. With hyperspectral imaging, high-resolution spectral data containing information from the visible wavelength region to the near-infrared region can be obtained. Using deep learning, the features of the hyperspectral data can be extracted and learned, and the spectral pattern unique to each mineral can be identified and analyzed. In this paper, we propose an automatic mineral identification system that combines hyperspectral imaging and deep learning to identify mineral types before the mineral processing stage. This technique makes it possible to quickly and non-destructively identify the types of minerals contained in rocks. In our experiments, a deep learning model trained on red, green, and blue (RGB) images identified minerals with an accuracy of approximately 30%, whereas deep learning analysis of the hyperspectral data identified mineral species with a high accuracy of over 90%.
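To make the per-pixel idea concrete, the sketch below classifies a pixel spectrum by comparing it against a small reference library using the spectral angle. This is a deliberately simple stand-in, not the authors' deep learning model; the band count, reflectance values, and mineral names are invented for illustration only.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify_pixel(pixel, library):
    """Assign the mineral whose library spectrum makes the smallest angle with the pixel."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Toy 4-band spectral library (illustrative values, not measured data)
library = {
    "chalcopyrite": np.array([0.20, 0.35, 0.50, 0.40]),
    "quartz":       np.array([0.60, 0.62, 0.65, 0.70]),
}
pixel = np.array([0.21, 0.34, 0.52, 0.39])
print(classify_pixel(pixel, library))  # → chalcopyrite
```

A deep network replaces the fixed angle metric with learned features, which is what lifts accuracy on real hyperspectral cubes; the angle-based rule above only illustrates why full spectra discriminate minerals far better than three RGB values.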
Fragmentation size distribution estimation is a critical process in mining operations that employ blasting. In this study, we aim to create a low-cost, efficient system for producing a scaled 3D model without the use of ground truth data, such as GCPs (Ground Control Points), for the purpose of improving fragmentation size distribution measurement using GNSS (Global Navigation Satellite System)-aided photogrammetry. However, the inherent error of GNSS data inhibits its straightforward application in Structure-from-Motion (SfM). To overcome this, we propose that increasing the number of photos used in the SfM process proportionally decreases the scale error caused by the GNSS error. Experiments indicated that constraining camera positions, whether relative or absolute, improved the accuracy of the generated 3D model. Further experiments showed that the scale error decreased as more images from the same dataset were used. The proposed method is practical and easy to transport, as it requires only a smartphone and, optionally, a separate camera. In conclusion, with some modifications to the workflow, technique, and equipment, a muckpile can be accurately recreated at scale in the digital world using positional data.
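The claim that more photos suppress the GNSS-induced scale error can be illustrated with a small Monte Carlo sketch: fit a scale factor from noisy camera positions along a baseline and watch the average error fall as the photo count grows. The baseline length, noise level, and least-squares fit here are illustrative assumptions, not the study's actual SfM pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def scale_error(n_photos, gnss_sigma=0.5, trials=2000):
    """Monte Carlo estimate of the mean absolute scale error when a scale
    factor is fit from n_photos GNSS positions along a 100 m baseline.
    gnss_sigma is the assumed per-position GNSS noise in metres."""
    true_x = np.linspace(0.0, 100.0, n_photos)
    errs = []
    for _ in range(trials):
        noisy = true_x + rng.normal(0.0, gnss_sigma, n_photos)
        # Least-squares scale mapping the true track onto the noisy track
        s = np.dot(noisy, true_x) / np.dot(true_x, true_x)
        errs.append(abs(s - 1.0))
    return float(np.mean(errs))

few, many = scale_error(5), scale_error(50)
print(few, many)  # the error shrinks as the photo count grows
```

In this simplified model the scale estimate averages over all positions, so independent GNSS errors partially cancel, which is the intuition behind using more images rather than more accurate (and expensive) positioning hardware.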
Though multitudes of industries depend on the mining industry for resources, mining has been hit by declining mineral ore grades and still relies on traditional rock and mineral identification methods that are time-consuming and computationally costly. This paper therefore proposes integrating hyperspectral imaging, Neighbourhood Component Analysis (NCA), and machine learning (ML) into a combined system that can identify rocks and minerals. Hyperspectral imaging gathers the electromagnetic signatures of rocks in hundreds of spectral bands. This data, however, suffers from the 'curse of dimensionality', which led us to employ NCA as a dimensionality reduction technique. NCA highlights the most discriminant feature bands, the number of which depends on the intended application(s) of the system. Our envisioned application is rock and mineral classification via unmanned aerial vehicle (UAV) drone technology. In this study, we reduced 204 hyperspectral bands to 5 multispectral bands, because current production drones are limited to five-band multispectral sensors. Based on these bands, we applied ML to identify and classify rocks, thereby supporting our hypothesis, reducing computational costs, attaining an ML classification accuracy of 71%, and demonstrating the potential mining industry optimisations attainable through this integrated system.
In mining operations that employ explosives and mineral processing, one of the important factors for efficient, low-cost operation is the fragmentation size distribution of rock after it has been blasted. Automatic scaling is a critical component of fragmentation size distribution measurement, as it directly determines the accuracy of the size estimation. In this study, we propose a system for creating a scaled 3D CG model, without the use of ground truth data such as GCPs (Ground Control Points), for the purpose of improving fragmentation size distribution measurement using positional data from GNSS (Global Navigation Satellite System)-aided photogrammetry. We validated the method through an experimental evaluation of actual muckpiles. The results showed that the scaling aspect of 3D fragmentation measurement systems can be improved without GCPs or manual scales, specifically in surface mines where GNSS data are available.
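The core of GCP-free scaling is recovering a metric scale factor for the arbitrarily scaled SfM model from the camera track itself. A minimal sketch, assuming GNSS camera positions and their SfM counterparts are already paired, is to take the ratio of mean pairwise camera distances; the positions below are toy values, not survey data.

```python
import numpy as np

def model_scale(gnss_positions, sfm_positions):
    """Scale factor mapping an arbitrary-scale SfM model into metric units,
    taken as the ratio of mean pairwise camera-to-camera distances."""
    def mean_pairwise(P):
        n = len(P)
        dists = [np.linalg.norm(P[i] - P[j])
                 for i in range(n) for j in range(i + 1, n)]
        return np.mean(dists)
    return mean_pairwise(gnss_positions) / mean_pairwise(sfm_positions)

# Toy example: the SfM model is the GNSS camera track shrunk by a factor of 4
gnss = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 5.0, 0.0]])
sfm = gnss / 4.0
print(model_scale(gnss, sfm))  # → 4.0
```

Averaging over all camera pairs, rather than a single pair, is what lets many moderately noisy GNSS fixes stand in for a few precisely surveyed GCPs.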
In this paper, we propose a method for establishing local correspondence between synthetic aperture radar (SAR) images and optical images using an image feature-based keypoint-matching algorithm. To achieve accurate matching, common image features must be obtained at the corresponding locations. Since SAR and optical images differ in appearance, it is difficult to find similar features to account for geometric corrections. In this work, an image translator, built with a deep neural network (DNN) and trained by conditional generative adversarial networks (cGANs) with edge enhancement, was employed to find the corresponding locations between SAR and optical images. With conventional cGANs, many blurs appear in the translated images, degrading keypoint-matching accuracy. Therefore, we propose a novel method that applies an edge enhancement filter within the cGAN structure to find corresponding points between SAR and optical images and accurately register images from different sensors. The results suggest that the proposed method can accurately estimate the corresponding points between SAR and optical images.
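To illustrate the kind of edge enhancement involved, the sketch below sharpens an image by adding its Laplacian response, which pushes intensities apart across edges. This is a generic spatial filter for illustration; the paper embeds edge enhancement inside the cGAN training itself, and the kernel and test image here are assumptions.

```python
import numpy as np

def edge_enhance(img):
    """Sharpen a grayscale image (values in [0, 1]) by adding its
    Laplacian response, a simple edge-enhancement filter."""
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")  # replicate borders
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = img[i, j] + np.sum(padded[i:i + 3, j:j + 3] * lap)
    return np.clip(out, 0.0, 1.0)

# A vertical step edge: pixels on either side of the edge get pushed
# toward 0 and 1, while flat regions are left unchanged
img = np.tile([0.2, 0.2, 0.8, 0.8], (4, 1))
print(edge_enhance(img))
```

Sharper translated images give keypoint detectors stronger gradients to lock onto, which is why suppressing cGAN blur improves the downstream SAR-to-optical matching.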