This article presents the results of a case study within a project that seeks to develop heavily automated analysis of digital topographic data to extract archaeological information and to expedite large-area mapping. Drawing on developments in computer vision and machine learning, this has the potential to fundamentally recast the capacity of archaeological prospection to cover large areas and deal with mass data, breaking a dependency on human resources. Without such developments, the potential of the vast amount of archaeological information embedded in large topographic and image-based datasets cannot be realized. The purpose of the case study reported here is to assess existing developments in a Norwegian study against digital topographic data for the island of Arran, Scotland, examining the transferability of the approach and providing a proof of concept in a Scottish context. For Arran, three monument classes were assessed: prehistoric roundhouses, shieling huts of medieval or post-medieval date, and small clearance cairns. These present different challenges to detection, with preliminary results ranging from a manageable mix of false positives and true identifications to the chaotic. The influence of variable morphology and the occurrence of other, largely natural, objects of confusion in the landscape is discussed, highlighting the potential improvements in automated detection routines offered by adding anthropogenic and natural false positives as additional confusion classes.
Deep-learning methods have proved successful recently for solving problems in image analysis and natural language processing. One of these methods, convolutional neural networks (CNNs), is revolutionizing the field of image analysis and pushing the state of the art. CNNs consist of layers of convolutions with trainable filters. The input to the network is the raw image or seismic amplitudes, removing the need for feature/attribute engineering. During the training phase, the filter coefficients are found by iterative optimization. The network thereby learns how to compute good attributes to solve the given classification task. However, CNNs require large amounts of training data and must be carefully designed and trained to perform well. We look into the intuition behind this method and discuss considerations that must be made in order to make the method reliable. In particular, we discuss how deep learning can be used for automated seismic interpretation. As an example, we show how a CNN can be used for automatic interpretation of salt bodies.
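The core idea in the abstract above — layers of convolutions with trainable filters applied to raw amplitudes, so no hand-crafted attributes are needed — can be sketched as a single convolutional layer's forward pass. This is a minimal, illustrative NumPy sketch, not the authors' implementation; the image size, filter size, and random initialization are arbitrary assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide a trainable filter over the input."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
patch = rng.standard_normal((8, 8))    # e.g. a patch of raw seismic amplitudes
kernel = rng.standard_normal((3, 3))   # filter coefficients, found by training
feature_map = np.maximum(conv2d(patch, kernel), 0.0)  # ReLU nonlinearity
print(feature_map.shape)  # (6, 6)
```

During training, the entries of `kernel` (and those of every other filter in the stack) are the parameters adjusted by iterative optimization, which is why the network can "learn how to compute good attributes" rather than relying on engineered ones.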
Acoustic target classification is the process of assigning observed acoustic backscattering intensity to an acoustic category. A deep learning strategy for acoustic target classification using a convolutional network is developed, consisting of an encoder and a decoder, which allow the network to use pixel information and more abstract features. The network can learn features directly from data, and the learned feature space may include both frequency response and school morphology. We tested the method on multifrequency data collected between 2007 and 2018 during the Norwegian sandeel survey. The network was able to distinguish between sandeel schools, schools of other species, and background pixels (including seabed) in new survey data with an F1 score of 0.87 when tested against manually labelled schools. The network separated schools of sandeel and schools of other species with an F1 score of 0.94. A traditional school classification algorithm obtained substantially lower F1 scores (0.77 and 0.82) when tested against the manually labelled schools. To train the network, it was necessary to develop sampling and preprocessing strategies to account for unbalanced classes, inaccurate annotations, and biases in the training data. This is a step towards a method to be applied across a range of acoustic trawl surveys.
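For reference, the F1 scores quoted above are the harmonic mean of precision and recall computed from confusion-matrix counts. A minimal sketch follows; the counts used are illustrative only and are not the survey's actual confusion matrix.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp)   # fraction of predicted schools that are correct
    recall = tp / (tp + fn)      # fraction of true schools that are found
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 87 hits, 13 false alarms, 13 missed schools
print(round(f1_score(87, 13, 13), 2))  # 0.87
```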
The common-reflection-surface (CRS) method offers a stack with higher signal-to-noise ratio at the cost of a time-consuming semblance search to obtain the stacking parameters. We have developed a fast method for extracting the CRS parameters using local slope and curvature. We estimate the slope and curvature with the gradient structure tensor and quadratic structure tensor on stacked data, under the assumption that a stacking velocity is already available. Our method was compared with an existing slope-based method, in which the slope is extracted from prestack data. An experiment on synthetic data shows that our method has increased robustness against noise compared with the existing method. When applied to two real data sets, our method achieves accuracy comparable with the pragmatic and full semblance searches, while being approximately two and four orders of magnitude faster, respectively.
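The slope part of this idea can be illustrated with a small sketch: for a locally planar event, the gradient-structure-tensor entries give the dominant local slope as a ratio of averaged gradient products. This is a simplified, assumed version (structure-tensor entries are averaged over the whole window rather than smoothed locally, curvature and the quadratic structure tensor are omitted, and the axis convention — axis 0 as trace, axis 1 as time — is an assumption for illustration).

```python
import numpy as np

def local_slope(section):
    """Estimate the dominant slope (time samples per trace) of a 2D
    stacked section from gradient-structure-tensor entries.
    Axes assumed: axis 0 = trace (x), axis 1 = time (t)."""
    gx, gt = np.gradient(section)   # finite-difference gradients along x and t
    jxt = np.mean(gx * gt)          # cross term of the structure tensor
    jtt = np.mean(gt * gt)          # temporal gradient energy
    return -jxt / jtt               # slope of a locally planar event

# Synthetic plane event with true slope 0.5 samples per trace
x = np.arange(32)[:, None]
t = np.arange(64)[None, :]
section = np.cos(0.2 * (t - 0.5 * x))
print(round(local_slope(section), 2))  # ≈ 0.5
```

Because the slope falls out of a few gradient products instead of a parameter scan, this kind of estimator avoids the exhaustive semblance search, which is the source of the speedup claimed in the abstract.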