Orthophoto production aims to eliminate the effects of sensor tilt and terrain relief from captured perspective imagery. Uniform scale and the absence of relief displacement make orthophotos an important component of GIS databases, where the user can directly determine geographic locations, measure distances, compute areas, and derive other useful information about the area in question. Differential rectification has traditionally been used for orthophoto generation. For large-scale imagery over urban areas, however, differential rectification produces serious artifacts in the form of double-mapped areas at object-space locations with sudden relief variations, e.g., in the vicinity of buildings. Such artifacts are removed through true-orthophoto generation methodologies, which are based on identifying the portions of the object space that are occluded in the involved imagery. Existing methodologies suffer from several problems, such as their sensitivity to the sampling interval of the digital surface model (DSM) relative to the ground sampling distance (GSD) of the imaging sensor. Moreover, current methodologies rely on the availability of a digital building model (DBM), which requires an additional and expensive pre-processing step. This paper presents new methodologies for true-orthophoto generation that circumvent the problems associated with existing techniques. The feasibility and performance of the suggested techniques are verified through experimental results with simulated and real data.
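The occlusion test at the heart of true-orthophoto generation can be illustrated with a one-dimensional, angle-based sketch: sweeping DSM cells outward from the nadir, a cell is occluded whenever a nearer cell already subtends an equal or larger off-nadir angle from the perspective center. The profile, camera height, and strict-inequality convention below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def visible_cells(heights, xs, cam_x, cam_z):
    """Angle-based visibility along a 1-D DSM profile.

    Cells are processed in order of distance from the nadir; a cell is
    visible only if its off-nadir angle strictly exceeds the largest
    angle seen so far. Assumes the camera is above every cell.
    """
    order = np.argsort(np.abs(xs - cam_x))      # nadir -> outward
    max_tan = -np.inf
    vis = np.zeros(len(xs), dtype=bool)
    for i in order:
        d = abs(xs[i] - cam_x)
        tan_off_nadir = d / (cam_z - heights[i])
        if tan_off_nadir > max_tan:
            vis[i] = True
            max_tan = tan_off_nadir
    return vis

# Toy profile: a 50 m building at x = 1 shadows the ground cell at x = 2
# for a camera at the origin flying at 100 m.
heights = np.array([0.0, 50.0, 0.0, 0.0, 0.0])
xs = np.arange(5, dtype=float)
vis = visible_cells(heights, xs, cam_x=0.0, cam_z=100.0)
```

In a differential rectification pipeline, the occluded cell at x = 2 would receive the building's texture twice (a double-mapped area); the visibility mask is what lets a true-orthophoto algorithm detect and fill it from another image instead.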
Increasing resolution and the lower cost of off-the-shelf digital cameras are giving rise to their use in traditional and new photogrammetric activities, such as aerial mapping, transportation, and surveillance, as well as archaeological, industrial, and medical applications. For most, if not all, photogrammetric applications, the interior orientation parameters (IOP) of the camera need to be determined and analysed. The derivation of these parameters is usually achieved through a bundle adjustment with self-calibration. Prior to using a camera in photogrammetric applications, its IOP should be estimated and their stability checked. Camera stability has rarely been addressed for analogue metric cameras, since these are carefully designed and built to assure the utmost stability of their internal characteristics. The stability of low-cost digital cameras, however, needs to be investigated, since these cameras are not built with photogrammetric applications in mind. This paper introduces three quantitative methods for testing camera stability, in which the degree of similarity between bundles reconstructed from two sets of IOP is evaluated. Each method constrains the position and orientation of the bundles in a different way; hence, each is applicable to a specific georeferencing methodology whose constraints match those imposed by the stability measure. The paper tests this hypothesis using reconstruction results obtained with a low-cost digital camera in an aerial mapping project.
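The general idea of such a stability measure can be sketched numerically. The function below is a heavily simplified, hypothetical version: it assumes a three-parameter IOP set (principal point xp, yp and principal distance c, no distortions), fixes both bundles to the same position and orientation, and reports the RMS image-plane offset between corresponding rays over a synthetic grid. The paper's three measures are more general and differ in exactly which bundle constraints they relax.

```python
import numpy as np

def bundle_similarity_rmse(iop_a, iop_b, half_width=10.0, grid=5):
    """RMS offset (image units) between rays reconstructed from two IOP
    sets (xp, yp, c), with the bundles sharing position and orientation.

    Each image point defines a ray through bundle A's perspective center;
    that ray is re-projected onto bundle B's focal plane and compared with
    the original point. The grid extent is an arbitrary illustrative choice.
    """
    xpa, ypa, ca = iop_a
    xpb, ypb, cb = iop_b
    coords = np.linspace(-half_width, half_width, grid)
    pts = np.array([(x, y) for x in coords for y in coords])
    scale = cb / ca                       # rescale ray to B's focal plane
    xb_pred = xpb + (pts[:, 0] - xpa) * scale
    yb_pred = ypb + (pts[:, 1] - ypa) * scale
    dx = xb_pred - pts[:, 0]
    dy = yb_pred - pts[:, 1]
    return float(np.sqrt(np.mean(dx**2 + dy**2)))

# Identical IOP sets -> perfectly coincident bundles; a 0.1 mm principal
# point shift -> a uniform 0.1 mm ray offset.
same = bundle_similarity_rmse((0.0, 0.0, 50.0), (0.0, 0.0, 50.0))
shifted = bundle_similarity_rmse((0.0, 0.0, 50.0), (0.1, 0.0, 50.0))
```

A camera would be judged stable when this discrepancy stays below the expected image measurement noise for IOP sets estimated at different times.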
The steady evolution of mapping technology is leading to the increasing availability, at reasonable cost, of multi-sensory geo-spatial datasets, such as data acquired by single-head frame cameras, multi-head frame cameras, line cameras, and light detection and ranging systems. The complementary nature of the data collected by these systems makes their integration attractive for obtaining a complete description of the object space. However, such integration is only possible after accurate co-registration of the collected data to a common reference frame. This registration can be carried out reliably through a triangulation procedure that considers the characteristics of the involved data. This paper introduces algorithms for a multi-primitive, multi-sensory triangulation environment geared towards exploiting the complementary characteristics of spatial data available from the above-mentioned sensors. The triangulation procedure ensures the alignment of the involved data to a common reference frame. The devised methodologies are tested and proven efficient through experiments using real multi-sensory data.
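The geometric core of any such triangulation is the least-squares intersection of image rays in object space. The snippet below is a minimal building block (the "midpoint" formulation for point primitives only), not the paper's multi-primitive environment: it finds the 3-D point minimizing the summed squared distance to a set of rays given by origins and directions.

```python
import numpy as np

def triangulate(origins, dirs):
    """Least-squares intersection of rays (midpoint method).

    For each ray with unit direction d, the projector P = I - d d^T maps
    a point onto the plane perpendicular to the ray; summing the normal
    equations over all rays yields the closest point to all of them.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto plane ⟂ ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Two rays from perspective centers at z = 1, converging at the origin.
origins = [np.array([-1.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])]
dirs = [np.array([1.0, 0.0, -1.0]), np.array([-1.0, 0.0, -1.0])]
X = triangulate(origins, dirs)
```

Frame cameras, line cameras, and LiDAR differ in how each observation generates such rays (or direct 3-D points), which is what a multi-sensory adjustment must model explicitly.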
Several photogrammetric and geographic information system applications, such as surface matching, object recognition, city modeling, environmental monitoring, and change detection, deal with multiple versions of the same surface derived from different sources and/or at different times. Surface registration is a necessary procedure prior to the manipulation of these 3D datasets. The same need arises in medical imaging, where modalities such as magnetic resonance imaging (MRI) can provide temporal 3D imagery for monitoring disease progression. This paper presents a general automated surface registration procedure that can establish correspondences between conjugate surface elements. Experimental results using light detection and ranging (LIDAR) and MRI data verify the feasibility, robustness, and accuracy of this approach.
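Once correspondences between conjugate surface elements are established, the rigid transformation aligning the two surfaces can be estimated in closed form. The sketch below shows one standard building block, the SVD-based (Kabsch/Procrustes) solution for matched point pairs; it is a common component of iterative registration schemes, not the paper's specific matching strategy.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    given known point correspondences, via SVD (Kabsch method).

    Minimizes sum ||R @ src_i + t - dst_i||^2 over rotations R and
    translations t; the determinant check guards against reflections.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # fix improper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: rotate a 2-D triangle by 90° and shift it; recover R, t.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
dst = src @ R_true.T + np.array([3.0, -1.0])
R, t = best_rigid_transform(src, dst)
```

In practice the hard part, which the paper addresses, is establishing the correspondences automatically; the transform estimation itself is then routine.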
Growing concern about global warming has increased interest in forest resources as a means of reducing greenhouse gases. To date, data on forest resources have been obtained by plotting from aerial photographs or satellite images. However, image data alone are disadvantageous because measurements such as tree height lack accuracy in dense forest areas. In this context, the authors present a data-processing method that isolates individual trees within forested areas using LIDAR data and ortho-images, yielding more efficient and accurate tree-height data. For the LIDAR processing, the authors generate a normalized digital surface model and extract tree points via local maxima filtering; for the ortho-image processing, they apply object-oriented image classification to extract forest areas. The final tree points are then derived by combining the LIDAR and ortho-image results. Based on an experiment conducted in the Yongin area, the authors analyze the merits and demerits of methods that use either LIDAR data or ortho-images alone, and obtain information on individual trees within forested areas by combining the two datasets, thereby verifying the efficiency of the presented method.
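The local maxima filtering step can be sketched as follows: on a normalized DSM (surface height minus terrain height, i.e. canopy height), a cell is a candidate tree top if it dominates its neighbourhood and exceeds a minimum canopy height. The 3×3 window and the 2 m threshold below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def detect_tree_tops(ndsm, min_height=2.0):
    """Local-maxima filtering on a normalized DSM (3x3 window, pure NumPy).

    A cell is a candidate tree top if it is >= all 8 neighbours and its
    canopy height exceeds min_height (2 m here, an assumed threshold).
    """
    padded = np.pad(ndsm, 1, mode="constant", constant_values=-np.inf)
    is_max = np.ones(ndsm.shape, dtype=bool)
    h, w = ndsm.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            is_max &= ndsm >= neigh
    return is_max & (ndsm >= min_height)

# Toy 5x5 nDSM (metres) with a single 3 m canopy peak at (1, 1).
ndsm = np.array([
    [0.0, 0.1, 0.0, 0.0, 0.0],
    [0.1, 3.0, 0.2, 0.0, 0.0],
    [0.0, 0.2, 0.1, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
])
tops = detect_tree_tops(ndsm)
```

Masking the result with the forest areas classified from the ortho-images is what suppresses spurious maxima from buildings and other non-vegetation objects, which is the motivation for combining the two data sources.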
Traditionally, pathologists microscopically examine tissue sections to detect pathological lesions; the many slides that must be evaluated impose severe work burdens. Also, diagnostic accuracy varies with pathologist training and experience; better diagnostic tools are required. Given the rapid development of computer vision, automated deep learning is now used to classify microscopic images, including medical images. Here, we used an Inception-v3 deep learning model to detect mouse lung metastatic tumors via whole slide imaging (WSI); we cropped the images to 151 by 151 pixels. The images were divided into training (53.8%) and test (46.2%) sets (21,017 and 18,016 images, respectively). When images from lung tissue containing tumor tissues were evaluated, the model accuracy was 98.76%. When images from normal lung tissue were evaluated, the model accuracy ("no tumor") was 99.87%. Thus, the deep learning model distinguished metastatic lesions from normal lung tissue. Our approach will allow the rapid and accurate analysis of various tissues.
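The tiling step that turns a whole-slide image into fixed-size classifier inputs can be sketched as below. The non-overlapping grid and the convention of discarding partial border tiles are illustrative assumptions; the paper does not specify its exact tiling scheme here.

```python
import numpy as np

def crop_tiles(image, tile=151):
    """Crop an H x W x C whole-slide image array into non-overlapping
    tile x tile patches, discarding partial tiles at the borders
    (one common convention; an assumption, not the paper's stated method).
    """
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(image[y:y + tile, x:x + tile])
    return tiles

# A dummy 302 x 453 RGB "slide" yields a 2 x 3 grid of 151-pixel tiles.
img = np.zeros((302, 453, 3), dtype=np.uint8)
patches = crop_tiles(img)
```

Each resulting patch would then be resized to the network's input resolution and passed to the Inception-v3 classifier, with per-tile "tumor" / "no tumor" predictions aggregated back over the slide.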