2019
DOI: 10.3390/rs11070747

Why Not a Single Image? Combining Visualizations to Facilitate Fieldwork and On-Screen Mapping

Abstract: Visualization products computed from a raster elevation model still form the basis of most archaeological and geomorphological enquiries of lidar data. We believe there is a need to improve the existing visualizations and create meaningful image combinations that preserve positive characteristics of individual techniques. In this paper, we list the criteria a good visualization should meet, present five different blend modes (normal, screen, multiply, overlay, luminosity), which combine various images into one…
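
The five blend modes listed in the abstract follow standard image-compositing formulas. As a rough sketch only, not the authors' implementation, the function below is illustrative and assumes single-band layers normalized to [0, 1]:

```python
import numpy as np

def blend(base, top, mode="normal", opacity=1.0):
    """Blend two single-band images in [0, 1] with a given blend mode."""
    if mode == "normal":
        out = top
    elif mode == "screen":
        # Brightens: inverse of multiplying the inverted layers.
        out = 1.0 - (1.0 - base) * (1.0 - top)
    elif mode == "multiply":
        # Darkens: product of the two layers.
        out = base * top
    elif mode == "overlay":
        # Multiply in the dark half of the base, screen in the bright half.
        out = np.where(base < 0.5,
                       2.0 * base * top,
                       1.0 - 2.0 * (1.0 - base) * (1.0 - top))
    elif mode == "luminosity":
        # With single-band (grayscale) layers, luminosity reduces to taking
        # the top layer's lightness; with RGB it would replace only the
        # luminance channel of the base.
        out = top
    else:
        raise ValueError(f"unknown blend mode: {mode}")
    # Opacity controls how strongly the blended result replaces the base.
    return (1.0 - opacity) * base + opacity * out

# Example: darken a hillshade with a sky-view factor layer (arrays in [0, 1]).
# combined = blend(hillshade, svf, mode="multiply", opacity=0.7)
```

Blending in floating point and converting to 8-bit only for display keeps the result independent of the input bit depth.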

Cited by 159 publications (130 citation statements)
References 56 publications
“…Although the shaded relief map is a widely accepted technique for visually presenting DTMs, it has two major drawbacks: the difficulty of identifying details in deep shadows and the inability to properly represent linear features lying parallel to the light beam. In comparison, the sky-view factor can be used as a general relief visualization technique to show relief characteristics [44,45]. Thus, the sky-view factors of OK and the proposed method on s-52 are demonstrated in Figure 8.…”
Section: Public Dataset (mentioning)
confidence: 99%
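
The directional weakness described in this excerpt follows from the standard analytical hillshading formula, in which the illumination term depends on the difference between the light azimuth and the terrain aspect. A minimal numpy sketch, with parameter defaults and sign conventions assumed rather than taken from the cited work:

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Single-direction analytical hillshading of a DEM given as a 2-D array."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(dem, cellsize)          # gradients along rows, columns
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)                 # conventions vary between GIS tools
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)
```

For a ridge or ditch running parallel to the light beam, the side slopes face roughly 90 degrees away from the azimuth, so cos(az - aspect) is near zero and the feature is shaded almost like flat terrain; the sky-view factor avoids this by integrating over many directions instead of one.
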
“…The cross-validation results from the different types of LiDAR visualizations indicated that the model performed better when trained on the raw 8-bit DSM height values rather than on any of the advanced visualizations. It is suspected that, whilst these visualizations are effective for human interpretation of archaeological data [11] and for more traditional machine learning techniques [22], it is not desirable to artificially alter the data representation prior to input, because deep CNNs learn their own feature representations during training. However, it must also be taken into account that the CNN chosen in this study was pretrained on 8-bit DSM height values, thereby introducing a bias towards this representation.…”
Section: Discussion (mentioning)
confidence: 99%
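
For context, feeding single-channel height values to a CNN usually only requires changing its first convolution. The excerpt does not name the architecture, so the ResNet-18 backbone, the two-class head, and the input size in the sketch below are purely illustrative assumptions (PyTorch/torchvision syntax):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Hypothetical setup: a ResNet-18 whose first convolution accepts one channel
# (raw DSM heights, converted from 8-bit integers to floats) instead of
# three-channel visualization composites. Starting without pretrained weights
# avoids the representation bias mentioned in the quote.
model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)   # e.g. feature present / absent

dsm_patch = torch.rand(1, 1, 224, 224)          # one normalized height patch
logits = model(dsm_patch)                       # shape: (1, 2)
```
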
“…Humans are inherently unable to process height data in its native state; therefore, it must be processed to create visualizations that are interpretable by the human eye. This can lead to a loss of information, image artefacts [10], or a bias stemming from the visualization techniques used [11]. As a computer can process the single-channel numeric gridded height data directly, this removes some of these issues.…”
Section: Introduction (mentioning)
confidence: 99%
“…The image patches were then rescaled individually to the 0-1 range before being remapped to 8-bit integer format. For human interpretation, different LiDAR data visualisations have been shown to greatly enhance interpretation (Kokalj and Somrak, 2019). Following the workflows described in Kokalj and Hesse (2017), the additional data representations of Simplified Local Relief Models (SLRM) and the measures of positive and negative relief openness, calculated as the angular size of a sphere looking either up or down at each pixel location (Doneus, 2013), were generated from the exported tiles using the Relief Visualisation Toolbox (Kokalj and Somrak, 2019).…”
Section: Methods (mentioning)
confidence: 99%
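
The per-patch normalisation step described here is easy to reproduce; the sketch below is an assumption of how it might look in numpy (the function name and the handling of flat patches are mine), while the SLRM and openness layers would come from the Relief Visualisation Toolbox itself rather than from this snippet:

```python
import numpy as np

def patch_to_8bit(patch):
    """Rescale one height patch individually to 0-1, then remap to 8-bit integers."""
    patch = np.asarray(patch, dtype=np.float64)
    lo, hi = patch.min(), patch.max()
    if hi > lo:
        scaled = (patch - lo) / (hi - lo)        # per-patch 0-1 rescale
    else:
        scaled = np.zeros_like(patch)            # degenerate case: flat patch
    return np.round(scaled * 255).astype(np.uint8)
```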