The ExoMars Trace Gas Orbiter (TGO)’s Colour and Stereo Surface Imaging System (CaSSIS) provides multi-spectral optical imagery at 4–5 m/pixel spatial resolution. Improving the spatial resolution of CaSSIS images would allow more scientific information to be extracted. In this work, we propose a novel Multi-scale Adaptive weighted Residual Super-resolution Generative Adversarial Network (MARSGAN) for single-image super-resolution restoration (SRR) of TGO CaSSIS images, and demonstrate that it provides an effective resolution enhancement of about a factor of three. We present qualitative and quantitative assessments of CaSSIS SRR results over the Mars 2020 Perseverance rover’s landing site. We also show examples of similar SRR performance over eight science test sites, selected mainly because they are covered by higher-resolution HiRISE imagery for comparison, which include many features unique to the Martian surface. Application of MARSGAN will allow high-resolution colour imagery from CaSSIS to be obtained over areas of Mars far more extensive than has been possible to date with HiRISE.
We demonstrate an end-to-end application of our in-house deep learning-based surface modelling system, MADNet, to produce three large-area 3D mapping products from single images taken by the ESA Mars Express High Resolution Stereo Camera (HRSC), the NASA Mars Reconnaissance Orbiter Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE) over the ExoMars 2022 Rosalind Franklin rover’s landing site at Oxia Planum on Mars. MADNet takes a single orbital optical image as input, provides pixelwise height predictions, and uses a separate coarse Digital Terrain Model (DTM) as reference to produce a DTM product from the given input image. First, we demonstrate the resultant 25 m/pixel HRSC DTM mosaic covering an area of 197 km × 182 km, adding fine-scale detail to the 50 m/pixel HRSC MC-11 level-5 DTM mosaic. Second, we demonstrate the resultant 12 m/pixel CTX MADNet DTM mosaic covering a 114 km × 117 km area, showing much more detail than photogrammetric DTMs produced with the open-source, in-house-developed CASP-GO system. Finally, we demonstrate the resultant 50 cm/pixel HiRISE MADNet DTM mosaic, produced for the first time, covering a 74.3 km × 86.3 km area of the 3-sigma landing ellipse and part of the ExoMars team’s geological characterisation area. The resultant MADNet HiRISE DTM mosaic shows fine-scale detail superior to that of existing Planetary Data System (PDS) HiRISE DTMs and covers a larger area than existing photogrammetry and photoclinometry pipelines could readily achieve, especially given the current limitations of stereo HiRISE coverage. All of the resultant DTM mosaics are co-aligned with each other, and ultimately with the Mars Global Surveyor Mars Orbiter Laser Altimeter (MOLA) DTM, providing high spatial and vertical congruence.
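The calibration step described above, anchoring relative network height predictions to a coarse reference DTM, can be sketched as a least-squares scale-and-offset fit. This is a minimal illustration only: it assumes the network outputs relative heights on the same grid as the resampled reference, and `calibrate_heights` is a hypothetical name, not the MADNet API.

```python
import numpy as np

def calibrate_heights(rel_height, coarse_dtm):
    """Map relative height predictions onto a coarse reference DTM.

    Solves for a single scale s and offset t minimising
    || s * rel_height + t - coarse_dtm ||^2 over all pixels.
    rel_height : HxW relative heights (hypothetical network output).
    coarse_dtm : HxW reference heights resampled to the same grid.
    """
    A = np.stack([rel_height.ravel(), np.ones(rel_height.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, coarse_dtm.ravel(), rcond=None)
    return s * rel_height + t
```

In practice the fit would be applied per tile and followed by 3D co-alignment, but the scale/offset solve is the core of tying unitless predictions to absolute heights.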
In this paper, we present technical details, discuss issues that arose, and provide a visual evaluation and quantitative assessments of the resultant DTM mosaic products.
The lack of adequate stereo coverage and, where coverage exists, lengthy processing times, various artefacts, and the complexity of automatically selecting the best set of processing parameters have long been major barriers to large-area planetary 3D mapping. In this paper, we propose a deep learning-based solution, called MADNet (Multi-scale generative Adversarial u-net with Dense convolutional and up-projection blocks), that avoids or resolves all of these issues. We demonstrate the wide applicability of this technique with 4.6 m/pixel images of Mars from the ExoMars Trace Gas Orbiter Colour and Stereo Surface Imaging System (CaSSIS). Only a single input image and a coarse global 3D reference are required, without knowledge of any camera models or imaging parameters, to produce high-quality, high-resolution, full-strip Digital Terrain Models (DTMs) in a few seconds. We discuss technical details of the MADNet system and provide detailed comparisons and assessments of the results. The resultant 8 m/pixel MADNet CaSSIS DTMs are qualitatively very similar to 1 m/pixel HiRISE DTMs. They display excellent agreement with nested Mars Reconnaissance Orbiter Context Camera (CTX), Mars Express High-Resolution Stereo Camera (HRSC), and Mars Orbiter Laser Altimeter (MOLA) DTMs at large scale, and show fairly good correlation with High-Resolution Imaging Science Experiment (HiRISE) DTMs in fine-scale detail. In addition, we show that MADNet outperforms traditional photogrammetric methods, in both speed and quality, for other datasets such as HRSC, CTX, and HiRISE, without any parameter tuning or re-training of the model. We demonstrate results for Oxia Planum (the landing site of the European Space Agency’s Rosalind Franklin ExoMars rover, 2023) and a couple of other sites of high scientific interest.
Panoramic camera systems on robots exploring the surface of Mars are used to collect images of the terrain and rock outcrops they encounter along their traverse. Image mosaics from these cameras are essential for mapping the surface geology and selecting locations for analysis by other instruments in the rover's payload. 2-D images do not truly portray the depth of field of features within an image, nor their 3-D geometry. This paper describes a new 3-D visualization software tool for geological analysis of Martian rover-derived Digital Outcrop Models, created by photogrammetric processing of stereo images with the Planetary Robotics Vision Processing tool developed for 3-D vision processing of ExoMars PanCam and Mars 2020 Mastcam-Z data. Digital Outcrop Models are rendered in real time in the Planetary Robotics 3-D Viewer, PRo3D, allowing scientists to roam outcrops as in a terrestrial field campaign. Digitization of point, line, and polyline features is used to measure the physical dimensions of geological features and to communicate interpretations. Dip and strike of bedding and fractures are measured by digitizing a polyline along the contact or fracture trace, through which a best-fit plane is fitted; the attitude of this plane is calculated in the software. Here we apply these tools to the analysis of sedimentary rock outcrops and quantification of the geometry of fracture systems encountered by the science teams of NASA's Mars Exploration Rover Opportunity and Mars Science Laboratory rover Curiosity. We show the benefits PRo3D offers for the visualization and collection of geological interpretations and analyses from rover-derived stereo images.
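The dip-and-strike measurement described above, fitting a plane through digitized trace points and reading off its attitude, can be sketched with an SVD-based plane fit. The function names and conventions here (azimuths measured clockwise from north, strike following the right-hand rule) are illustrative assumptions, not PRo3D's actual implementation.

```python
import numpy as np

def fit_plane(points):
    """Best-fit plane through an Nx3 array of points.

    Returns the upward unit normal and the centroid; the normal is the
    right singular vector for the smallest singular value of the
    centred point cloud.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal upward (+z)
        normal = -normal
    return normal, centroid

def dip_strike(normal):
    """Dip angle, dip direction, and strike (degrees) from an upward unit normal.

    Dip direction is the downdip azimuth, i.e. the azimuth of the
    horizontal component of the normal; strike is 90 degrees
    anticlockwise from it (right-hand rule).
    """
    nx, ny, nz = normal
    dip = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(nx, ny)) % 360.0
    strike = (dip_direction - 90.0) % 360.0
    return dip, dip_direction, strike
```

For example, points sampled on the plane z = x dip at 45 degrees towards the west (dip direction 270 degrees, strike 180 degrees).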
The High-Resolution Imaging Science Experiment (HiRISE) onboard the Mars Reconnaissance Orbiter provides remotely sensed imagery of the surface of Mars at the highest available spatial resolution, 25–50 cm/pixel. However, because the spatial resolution is so high, the total area covered by HiRISE targeted stereo acquisitions is very limited. This results in a lack of high-resolution digital terrain models (DTMs) better than 1 m/pixel. Such high-resolution DTMs have long been considered desirable by the international community of planetary scientists for fine-scale geological analysis of the Martian surface. Recently, new deep learning-based techniques that retrieve DTMs from single optical orbital images have been developed and applied to single HiRISE observations. In this paper, we improve upon a previously developed single-image DTM estimation system called MADNet (1.0). We propose optimisations, collectively called MADNet 2.0, based on a supervised image-to-height estimation network, multi-scale DTM reconstruction, and 3D co-alignment processes. In particular, we employ optimised single-scale inference and multi-scale reconstruction (in MADNet 2.0), instead of multi-scale inference and single-scale reconstruction (in MADNet 1.0), to produce more accurate large-scale topographic retrieval with boosted fine-scale resolution. We demonstrate the improvements of the MADNet 2.0 DTMs produced from HiRISE images, in comparison to the MADNet 1.0 DTMs and the published Planetary Data System (PDS) HiRISE DTMs, over the ExoMars Rosalind Franklin rover’s landing site at Oxia Planum. Qualitative and quantitative assessments suggest the proposed MADNet 2.0 system is capable of pixel-scale DTM retrieval at the same spatial resolution (25 cm/pixel) as the input HiRISE images.
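The core idea of multi-scale reconstruction, keeping long-wavelength topography from a coarser DTM while taking fine-scale relief from the single-scale height prediction, can be sketched as a frequency split. This is a simplified stand-in using a separable box-filter low pass; it is not the published MADNet 2.0 reconstruction, and `multiscale_merge` is a hypothetical name.

```python
import numpy as np

def multiscale_merge(fine, coarse_up, k=5):
    """Merge two height grids by frequency band.

    fine      : HxW fine-scale height prediction.
    coarse_up : HxW coarser DTM upsampled to the same grid.
    Keeps the low-pass component of coarse_up and adds the high-pass
    (detail) component of fine.
    """
    def lowpass(a):
        # separable box filter with edge padding as a simple low pass
        ker = np.ones(k) / k
        pad = k // 2
        ap = np.pad(a, pad, mode="edge")
        rows = np.apply_along_axis(lambda r: np.convolve(r, ker, "valid"), 1, ap)
        return np.apply_along_axis(lambda c: np.convolve(c, ker, "valid"), 0, rows)

    return lowpass(coarse_up) + (fine - lowpass(fine))
```

Because the filter is linear, a constant vertical offset between the two inputs is inherited entirely from the coarse grid, which is the desired behaviour when the coarse DTM carries the trusted absolute heights.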
A seamless mosaic has been constructed, comprising a 3D terrain model at 50 m grid-spacing and a corresponding terrain-corrected orthoimage at 12.5 m, using a novel approach applied to ESA Mars Express High Resolution Stereo Camera (HRSC) orbital images of Mars. The method consists of blending and harmonising 3D models and normalising reflectance to a global albedo map. Eleven HRSC image sets were processed to Digital Terrain Models (DTMs) with an open-source stereo photogrammetric package called CASP-GO and merged with 71 published DTMs from the HRSC team. To achieve high-quality and complete DTM coverage, a new method was developed to combine data derived from different stereo matching approaches into a uniform outcome. This approach, developed for high-accuracy fusion of DTMs of dissimilar grid-spacing and provenance, employs joint 3D and image co-registration and B-spline fitting against the global Mars Orbiter Laser Altimeter (MOLA) standard reference. Each HRSC strip is normalised against a global albedo map so that the very different lighting conditions can be corrected, resulting in a tiled set of seamless mosaics. The final 3D terrain model is compared against the MOLA height reference, and the results of this intercomparison are shown both in altitude and in plan. Visualisation and access mechanisms for the final open-access products are described.
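The vertical harmonisation against MOLA described above can be illustrated in simplified form. Here a bilinearly upsampled grid of block-mean height offsets stands in for the B-spline correction surface; the function name and the block size are illustrative assumptions, not the published pipeline.

```python
import numpy as np

def harmonise_to_reference(dtm, ref, block=4):
    """Remove long-wavelength height bias relative to a reference DTM.

    A coarse grid of block-mean offsets (dtm - ref), upsampled
    bilinearly, stands in for the smooth B-spline correction surface;
    subtracting it ties the DTM's low frequencies to the reference.
    """
    h, w = dtm.shape
    diff = dtm - ref
    gh, gw = h // block, w // block
    coarse = diff[: gh * block, : gw * block].reshape(
        gh, block, gw, block).mean(axis=(1, 3))
    # bilinear upsample of the coarse offset surface to full resolution
    yi = np.linspace(0, gh - 1, h)
    xi = np.linspace(0, gw - 1, w)
    y0 = np.clip(yi.astype(int), 0, gh - 2)
    x0 = np.clip(xi.astype(int), 0, gw - 2)
    fy = (yi - y0)[:, None]
    fx = (xi - x0)[None, :]
    corr = (coarse[y0][:, x0] * (1 - fy) * (1 - fx)
            + coarse[y0 + 1][:, x0] * fy * (1 - fx)
            + coarse[y0][:, x0 + 1] * (1 - fy) * fx
            + coarse[y0 + 1][:, x0 + 1] * fy * fx)
    return dtm - corr
```

A uniform vertical bias is removed exactly, while short-wavelength relief within each block is left untouched, which mirrors the intent of a smooth spline adjustment.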