While remote sensing data have long been widely used in archaeological prospection over large areas, examining such data is time-consuming and requires experienced, specialist analysts. However, recent technological advances in artificial intelligence (AI), and in particular deep learning methods, open possibilities for the automated analysis of large areas of remote sensing data. This paper examines the applicability and potential of supervised deep learning methods for the detection and mapping of different kinds of archaeological sites comprising features such as walls and linear or curvilinear structures of varying dimensions and spectral and geometrical properties. Our work deliberately uses open-source imagery to demonstrate the accessibility of these tools. One of the main challenges facing AI approaches has been that they require large amounts of labeled data to achieve high accuracy, so the training stage demands significant computational resources. Our results show, however, that even with relatively limited amounts of data, a simple eight-layer fully convolutional network can be trained efficiently, using minimal computational resources, to identify and classify archaeological sites and successfully distinguish them from features with similar characteristics. Increasing the number of training sets and switching to high-performance computing further improves the accuracy of the identified areas. We conclude by discussing future directions and the potential of such methods in archaeological research.
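The abstract does not specify the architecture, so as an illustration of what an "eight-layer fully convolutional network" means in practice, here is a minimal NumPy forward-pass sketch. The layer widths, kernel size, and class count are invented for illustration; the key property is that every layer is convolutional, so the network emits a class score per pixel and the input size is preserved, which is what makes site *mapping* (rather than whole-image labeling) possible.

```python
import numpy as np

def conv2d(x, w, b):
    """'Same'-padded 2D convolution: x (H, W, Cin), w (k, k, Cin, Cout)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.empty((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def fcn_forward(img, params):
    """Stacked conv layers with ReLU; the last layer outputs per-pixel
    class scores, so spatial resolution is preserved (no dense layers)."""
    h = img
    for w, b in params[:-1]:
        h = relu(conv2d(h, w, b))
    w, b = params[-1]
    return conv2d(h, w, b)  # per-pixel logits

rng = np.random.default_rng(0)
channels = [3, 8, 8, 16, 16, 16, 8, 8, 2]  # 8 conv layers, 2 classes (invented)
params = [(0.1 * rng.standard_normal((3, 3, cin, cout)), np.zeros(cout))
          for cin, cout in zip(channels[:-1], channels[1:])]
scores = fcn_forward(rng.standard_normal((16, 16, 3)), params)
print(scores.shape)  # one score vector per pixel of the input
```

A real training setup would of course use a deep learning framework with GPU support; this sketch only shows why such a network can label every pixel of an arbitrarily sized survey image.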
The availability of overlapping geophysical data produced by different sensors provides complementary information about the investigated area. However, joint interpretation of these geophysical images is challenging. One common problem is the registration of the images, which is necessary to compare features appearing in dissimilar datasets. Measurements in archaeological geophysics are often performed with handheld devices; therefore, the actual location of a measurement can differ from the planned one. These offsets are localized and essentially random, so it is impossible to correct them with the usual deterministic approaches. This paper presents a novel registration method for geophysical images produced by different prospecting methods. We developed a semi-stochastic, iterative registration algorithm that applies random local transformations in small, randomly selected regions of the processed image. The algorithm uses the mutual information of the images as its similarity measure, owing to its suitability for images of different modalities. We use one pair of images to train the algorithm and tune its parameters. Afterwards, we test the method on nine pairs of geophysical images of varying locations and characteristics. In all cases, the results show a significant increase in mutual information compared with registration through geographical coordinates.
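The similarity measure driving the algorithm, mutual information, can be estimated from a joint histogram of the two images. A minimal sketch follows (the bin count and the synthetic test images are our own choices, not the authors'); in the semi-stochastic loop described above, a random local transformation would be kept only if it increases this measure.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information of two co-registered images.
    MI = sum p(a,b) * log(p(a,b) / (p(a) p(b))); higher = better aligned.
    Unlike correlation, it needs no linear relation between intensities,
    which suits images of different modalities."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
base = rng.random((64, 64))
# A nonlinearly related image (a different "modality") shares information:
related = np.cos(4 * base) + 0.05 * rng.standard_normal((64, 64))
unrelated = rng.random((64, 64))
assert mutual_information(base, related) > mutual_information(base, unrelated)
```

The assertion illustrates why MI works across modalities: the cosine-mapped image has no linear intensity relationship with the original, yet its MI is clearly higher than that of an independent image.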
Infragravity waves are generated along coasts, and a small fraction of their energy escapes to the open oceans and propagates with little attenuation. Because deep-ocean observations of these waves are scarce, the mechanism and the extent of infragravity wave energy leakage from the coasts remain poorly understood. Understanding the generation and pathways of infragravity wave energy is important, among other things, for understanding the breakup of ice shelves and the contamination of high-resolution satellite radar altimetry measurements of sea level. We examine data from 37 differential pressure gauges of Ocean Bottom Seismometers (OBS) near the equatorial Mid-Atlantic Ridge, deployed during the Passive Imaging of the Lithosphere-Asthenosphere Boundary (PI-LAB) experiment. We use the beamforming technique to investigate the incoming directions of infragravity waves. Next, we develop a graph-theory-based global back-projection method for noise cross-correlation function envelopes, which minimizes the effects of array geometry using an adaptive weighting scheme. This approach allows us to locate the sources of the infragravity energy. We assess our observations by comparing them to a global model of infragravity wave heights. Our results reveal strong coherent energy from sources and/or reflected phases at the west coast of Africa and some sources from South America. These energy sources are in good agreement with the global infragravity wave model. In addition, we observe infragravity waves arriving from North America during specific events that mostly occur during October-February 2016. Finally, we find indications of waves that propagate long distances with little attenuation through sea ice, reflecting off Antarctica.
Plain Language Summary: Infragravity waves are oceanic surface waves with periods between 30 and 300 s and wavelengths up to tens of kilometers.
They are generated along coasts; however, a small fraction of their energy escapes to the open ocean and travels with little attenuation over transoceanic distances. They play a significant role in phenomena such as seiches, coastal barrier breaching, and the breakup of ice shelves. It is therefore important to determine their sources, how they propagate through the ocean, and how they interact with coasts. To shed light on these questions, we examine pressure data recorded by an array of ocean-bottom instruments deployed beneath the equatorial Atlantic Ocean. We use array techniques to turn the ambient infragravity noise into useful signal, allowing us to determine the incoming directions and the sources of these waves. Our results reveal strong infragravity wave sources at the Atlantic coasts of Africa and South America. We also observe significant energy arriving from Antarctica, possibly from waves generated elsewhere reflecting off Antarctic coasts.
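The abstracts do not detail the beamforming implementation; a common baseline for estimating incoming wave directions with an array is frequency-domain delay-and-sum, sketched here on a synthetic plane wave. The array geometry, frequency, and grid are invented; the ~0.2 km/s wave speed is an assumption motivated by the shallow-water speed sqrt(gH) for roughly 4 km ocean depth.

```python
import numpy as np

def delay_and_sum(traces, coords, dt, slowness, azimuths):
    """Frequency-domain delay-and-sum beamformer.
    traces: (n_sta, n_samp); coords: (n_sta, 2) station positions in km;
    slowness: scalar in s/km. Returns beam power per trial azimuth."""
    n_sta, n_samp = traces.shape
    spectra = np.fft.rfft(traces, axis=1)
    freqs = np.fft.rfftfreq(n_samp, dt)
    power = np.empty(len(azimuths))
    for i, az in enumerate(azimuths):
        # Trial slowness vector (propagation-direction convention)
        s = slowness * np.array([np.sin(az), np.cos(az)])
        delays = coords @ s                          # seconds, per station
        # Phase factors that undo each station's travel-time delay
        shifts = np.exp(2j * np.pi * np.outer(delays, freqs))
        beam = (spectra * shifts).sum(axis=0) / n_sta
        power[i] = np.sum(np.abs(beam) ** 2)
    return power

# Synthetic check: a plane wave crossing a small array from azimuth 60 deg.
rng = np.random.default_rng(2)
coords = rng.uniform(-3.0, 3.0, (8, 2))   # station positions, km (invented)
dt, n = 1.0, 512                          # 1-s sampling
f0 = 10 / (n * dt)                        # integer cycles -> no FFT leakage
c = 0.2                                   # ~sqrt(gH) at ~4 km depth, km/s
true_az = np.deg2rad(60.0)
s_true = (1.0 / c) * np.array([np.sin(true_az), np.cos(true_az)])
t = np.arange(n) * dt
traces = np.array([np.sin(2 * np.pi * f0 * (t - coords[k] @ s_true))
                   for k in range(8)])
azimuths = np.deg2rad(np.arange(0.0, 360.0, 5.0))
best = azimuths[np.argmax(delay_and_sum(traces, coords, dt, 1.0 / c, azimuths))]
print(np.degrees(best))  # recovers an azimuth near 60 degrees
```

When the trial slowness vector matches the true one, the phase shifts align all station spectra and the beam power peaks; scanning azimuths (and, in practice, slownesses too) recovers the incoming direction. The study's adaptive weighting and back-projection build on this basic idea.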
In recent years, the idea of combining images, known as image fusion, has emerged and become an active area of research and development. Image fusion can be defined as the process of combining images taken of the same scene to create a single image containing all the essential information of the originals. A single sensor is not always sufficient: different sensors, effective under different environmental conditions, provide different information about a scene. The underlying idea of this article is to combine geophysical images taken with different sensors at the same location, aiming to improve the detectability of possible archaeological targets. Three fusion approaches were used: averaging of the individual images, and fusion via wavelet and curvelet transforms. Furthermore, taking advantage of the curvelet domain, we exploit prior information about the orientations at which remnants are expected and enhance those angles. We applied the methods to seven pairs of geophysical images from two archaeological areas. In all cases the fused images produced significantly better results than each of the original geophysical images separately.
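Wavelet-domain fusion of the kind described above can be sketched with a hand-rolled single-level Haar transform. This is our own minimal stand-in (the paper's actual wavelet family, decomposition depth, and fusion rule are not specified in the abstract): the smooth subbands are averaged, while for each detail subband the larger-magnitude coefficient is kept, so edges seen by either sensor survive in the fused image.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar transform: (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2          # row-pair averages
    d = (x[0::2] - x[1::2]) / 2          # row-pair differences
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img_a, img_b):
    """Average the smooth (LL) subband; for each detail subband keep the
    larger-magnitude coefficient, preserving edges from both sensors."""
    sub_a, sub_b = haar2(img_a), haar2(img_b)
    ll = (sub_a[0] + sub_b[0]) / 2
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(sub_a[1:], sub_b[1:])]
    return ihaar2(ll, *details)

# Two synthetic "sensor" images, each detecting a different linear feature:
img_a, img_b = np.zeros((32, 32)), np.zeros((32, 32))
img_a[10, :] = 1.0                     # horizontal wall seen by sensor A only
img_b[:, 20] = 1.0                     # vertical wall seen by sensor B only
fused = fuse(img_a, img_b)
assert fused[10, :].mean() > fused.mean()   # both walls survive the fusion
assert fused[:, 20].mean() > fused.mean()
```

Simple averaging (the first approach mentioned) would also retain both features but halves their contrast; the max-magnitude detail rule keeps edge coefficients at (nearly) full strength, which is why transform-domain fusion tends to outperform plain averaging. The curvelet variant with angular priors is not sketched here.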