Most of the world’s 1,500 active volcanoes are not instrumentally monitored, so deadly eruptions can occur without any precursory activity having been observed. The Sentinel missions now provide freely available imagery with unprecedented spatial and temporal resolution, with payloads that allow comprehensive monitoring of volcanic hazards. We present the volcano monitoring platform MOUNTS (Monitoring Unrest from Space), which aims at global monitoring using multisensor satellite-based imagery (Sentinel-1 Synthetic Aperture Radar, SAR; Sentinel-2 Short-Wave InfraRed, SWIR; Sentinel-5P TROPOMI), ground-based seismic data (GEOFON and USGS global earthquake catalogues), and artificial intelligence (AI) to assist monitoring tasks. It provides near-real-time access to surface deformation, heat anomalies, SO2 gas emissions, and local seismicity at a number of volcanoes around the globe, supporting both the scientific and operational communities in volcanic risk assessment. Results are visualized on an open-access website offering both geocoded images and time series of relevant parameters, allowing a comprehensive understanding of the temporal evolution of volcanic activity and eruptive products. We further demonstrate that AI can play a key role in such monitoring frameworks: we design and train a Convolutional Neural Network (CNN) on synthetically generated interferograms to operationally detect strong deformation (e.g., related to dyke intrusions) in the real interferograms produced by MOUNTS. The utility of this interdisciplinary approach is illustrated through a number of recent eruptions (Erta Ale 2017, Fuego 2018, Kilauea 2018, Anak Krakatau 2018, Ambrym 2018, and Piton de la Fournaise 2018–2019).
We show how exploiting multiple sensors allows assessment of a variety of volcanic processes in various climatic settings, ranging from subsurface magma intrusion to the emplacement of eruptive deposits at the surface, pre- and syn-eruptive morphological changes, and gas propagation into the atmosphere. The data processed by MOUNTS provide insights into the eruptive precursors and eruptive dynamics of these volcanoes, and sharpen our understanding of how the integration of multiparametric datasets can improve the monitoring of volcanic hazards.
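The deformation-detection approach rests on training data that can be generated at will. As a rough illustrative sketch (not the authors' actual simulation code), a wrapped synthetic interferogram can be produced with numpy by converting a hypothetical Gaussian line-of-sight displacement field into two-way phase at the Sentinel-1 C-band wavelength; all parameter values below are illustrative assumptions:

```python
import numpy as np

def synthetic_interferogram(size=128, max_los_disp_m=0.3,
                            wavelength_m=0.0555, noise_std=0.3):
    """Generate a wrapped synthetic interferogram (radians) from a
    Gaussian line-of-sight (LOS) deformation pattern plus phase noise.
    A hypothetical simplification of CNN training-data generation;
    0.0555 m is the Sentinel-1 C-band radar wavelength."""
    y, x = np.mgrid[0:size, 0:size]
    cx, cy, sigma = size / 2, size / 2, size / 8
    # Bell-shaped LOS displacement field centred on the scene
    los = max_los_disp_m * np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                                  / (2 * sigma ** 2))
    phase = 4 * np.pi * los / wavelength_m   # two-way path delay
    phase += np.random.normal(0, noise_std, (size, size))
    return np.angle(np.exp(1j * phase))      # wrap to (-pi, pi]

fringes = synthetic_interferogram()          # 128x128 wrapped phase image
```

Labelled batches of such images (with and without a deformation signal, varying source depth, amplitude, and noise) would then serve as CNN training input.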
Assessing the well-being of an animal is hindered by the lack of efficient communication between humans and animals. In the absence of direct communication, a variety of parameters are employed to evaluate an animal's well-being. In biomedical research especially, scientifically sound tools to assess pain, suffering, and distress in experimental animals are in high demand for ethical and legal reasons. For mice, the most commonly used laboratory animals, one valuable tool is the Mouse Grimace Scale (MGS), a coding system for facial expressions of pain in mice. Our aim is to develop a fully automated system for the surveillance of post-surgical and post-anesthetic effects in mice; this work introduces a semi-automated pipeline as a first step towards that goal. We use and provide a new data set of images of freely moving, black-furred laboratory mice, obtained after anesthesia (with isoflurane or a ketamine/xylazine combination) and surgery (castration). We deploy two pre-trained state-of-the-art deep convolutional neural network (CNN) architectures (ResNet50 and InceptionV3) and compare them to a third CNN architecture without pre-training. Depending on the particular treatment, we achieve an accuracy of up to 99% in recognizing the absence or presence of post-surgical and/or post-anesthetic effects on the facial expression.
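The pre-training strategy can be sketched in miniature: freeze a feature extractor and train only a small classification head on top. The numpy toy below (a fixed random-projection "backbone" standing in for pre-trained ResNet50/InceptionV3 features, a logistic-regression head, and synthetic binary labels) is purely illustrative of that transfer-learning setup, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(x, w):
    """Stand-in for a pre-trained feature extractor: a fixed ReLU
    projection whose weights are NOT updated during training."""
    return np.maximum(x @ w, 0.0)

def train_head(feats, labels, lr=0.1, epochs=500):
    """Train only the classification head (logistic regression by
    gradient descent) on top of the frozen features."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))       # sigmoid
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

# Toy data standing in for "effect present" vs "effect absent" images.
x = rng.normal(size=(200, 32))
backbone_w = rng.normal(size=(32, 64)) / np.sqrt(32)  # fixed, "pre-trained"
feats = frozen_backbone(x, backbone_w)
true_head = rng.normal(size=64)
y = (feats @ true_head > 0).astype(float)             # synthetic labels

head = train_head(feats, y)
acc = np.mean(((feats @ head) > 0) == y)              # training accuracy
```

Fine-tuning, as done in the paper, additionally updates (some of) the backbone weights; the architecture trained from scratch updates everything from random initialization.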
This paper presents a comprehensive review of the principles and applications of deep learning in retinal image analysis. Many eye diseases lead to blindness in the absence of proper clinical diagnosis and medical treatment. For example, in diabetic retinopathy (DR) the retinal blood vessels of the human eye are damaged. Ophthalmologists diagnose DR based on their professional knowledge, which is labor-intensive. With advances in image processing and artificial intelligence, computer-vision-based techniques have been applied rapidly and widely in medical image analysis and are becoming a better way to advance ophthalmology in practice. Such approaches use accurate visual analysis to identify abnormal blood vessels with improved performance over manual procedures. More recently, machine learning, and in particular deep learning, has been successfully applied in this area. In this paper, we focus on recent advances in deep learning methods for retinal image analysis. We review the related publications since 1982, comprising more than 80 papers on retinal vessel detection in a research scope spanning segmentation to classification. Although deep learning has been successfully applied in other areas, we found that so far only 17 papers focus on retinal blood vessel segmentation. This paper characterizes each deep-learning-based segmentation method described in the literature, analyzing the limitations and advantages of each. Finally, we offer some recommendations for future improvements in retinal image analysis. Index terms: retinal colour fundus images, convolutional neural networks, retinal vessel segmentation.
Abstract: The extraction and description of keypoints as salient image parts has a long tradition in the processing and analysis of 2D images, and 3D data are now gaining more and more importance. This paper discusses the benefits and limitations of keypoints for the task of fusing multiple 3D point clouds. To this end, several combinations of 3D keypoint detectors and descriptors are tested. The experiments are based on 3D scenes with varying properties, including 3D scanner data as well as Kinect point clouds. The obtained results indicate that the method used to extract and describe keypoints in 3D data has to be chosen carefully: in many cases, accuracy suffers from too strong a reduction of the available points to keypoints.
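Once keypoints have been detected and their descriptors matched across two clouds, fusion reduces to estimating a rigid transform from the correspondences. Below is a minimal sketch of that alignment step, the Kabsch least-squares solution, run on synthetic matched pairs rather than real descriptor matches (the abstract does not specify which registration method was used, so this is illustrative only):

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst, given
    matched keypoint pairs: the alignment step that keypoint detection
    and descriptor matching feed in a point-cloud-fusion pipeline."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    h = (src - cs).T @ (dst - cd)               # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, cd - r @ cs

# Demo: recover a known rigid motion from 50 matched point pairs.
rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R_est, t_est = kabsch(pts, pts @ R_true.T + t_true)
```

In practice the quality of `R_est` and `t_est` depends entirely on the correspondences, which is why the choice of keypoint detector and descriptor matters so much.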
Keywords: Dinosauria, Sauropoda, paleophysiology, body mass estimation, specific tissue density, paleoecology. Abstract: Body mass and surface area are important in several respects for living organisms; mass and surface determinations for extinct dinosaurs can therefore inform paleobiological questions as well. Based on photogrammetric measurements, the body mass and body surface area of the Late Jurassic Brachiosaurus brancai Janensch, 1914 from Tendaguru (East Africa), a skeleton mounted and exhibited at the Museum of Natural History in Berlin (Germany), have been re-evaluated. For a slim 3D reconstruction of Brachiosaurus brancai we determined a total volume of 47.9 m³, which, assuming a mean tissue density of 0.8 kg per 1,000 cm³, corresponds to a total body mass of 38,000 kg. The volume distribution from head to tail was as follows: head 0.2 m³, neck 7.3 m³, fore limbs 2.9 m³, hind limbs 2.6 m³, thoracic-abdominal cavity 32.4 m³, tail 2.2 m³. The total body surface area was calculated to be 119.1 m², specifically: head 1.5 m², neck 26 m², fore limbs 18.8 m², hind limbs 16.4 m², thoracic-abdominal cavity 44.2 m², tail 12.2 m². Finally, allometric equations were used to estimate the presumable organ sizes of this extinct dinosaur and to test whether their dimensions really fit into the thoracic and abdominal cavity of Brachiosaurus brancai if a slim body shape of this sauropod is assumed.
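The stated figures can be checked directly: total volume times the assumed tissue density reproduces the quoted body mass, and the per-segment surface areas sum exactly to the quoted total (the per-segment volumes sum to 47.6 m³, slightly below the quoted 47.9 m³ total, presumably a rounding effect in the published per-segment values):

```python
# Body-mass estimate from the abstract: total volume x assumed density.
total_volume_m3 = 47.9              # slim 3D reconstruction
density_kg_per_m3 = 800.0           # 0.8 kg per 1,000 cm^3
mass_kg = total_volume_m3 * density_kg_per_m3   # 38,320 kg, quoted as 38,000

segment_volumes_m3 = {"head": 0.2, "neck": 7.3, "fore limbs": 2.9,
                      "hind limbs": 2.6, "thoracic-abdominal": 32.4,
                      "tail": 2.2}               # sums to 47.6 m^3
segment_surfaces_m2 = {"head": 1.5, "neck": 26.0, "fore limbs": 18.8,
                       "hind limbs": 16.4, "thoracic-abdominal": 44.2,
                       "tail": 12.2}
total_surface_m2 = sum(segment_surfaces_m2.values())   # 119.1 m^2 as quoted
```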
Abstract: In this paper, we introduce an iterative speckle-filtering method for polarimetric SAR (PolSAR) images based on the bilateral filter. To adapt locally to the spatial structure of images, this filter relies on pixel similarities in both the spatial and radiometric domains. To deal with polarimetric data, we study the use of similarities based on the Kullback-Leibler divergence as well as on two geodesic distances on Riemannian manifolds. To cope with speckle, we propose to progressively refine the result through an iterative scheme. Experiments are run on synthetic and experimental data. First, simulations are used to study the effects of the filtering parameters in terms of polarimetric reconstruction error, edge preservation, and smoothing of homogeneous areas. Comparison with other state-of-the-art methods shows that our approach extracts polarimetric information equally well and performs better at edge restoration and noise smoothing. The filter is then applied to experimental data sets from the ESAR and FSAR sensors (DLR) at L-band and S-band, respectively. These latter experiments show the ability of the filter to restore structures such as buildings and roads and to preserve boundaries between regions while achieving a high degree of smoothing in homogeneous areas.
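The core mechanism can be illustrated on a scalar image, where radiometric similarity is simply a squared intensity difference; in the PolSAR setting described above, that term would instead be a Kullback-Leibler or geodesic distance between local covariance matrices. A minimal sketch under those simplifications, not the authors' implementation:

```python
import numpy as np

def bilateral_step(img, sigma_s=2.0, sigma_r=0.2, radius=3):
    """One bilateral-filter pass on a scalar image: each pixel becomes a
    weighted mean of its neighbours, with weights combining spatial
    closeness (sigma_s) and radiometric similarity (sigma_r)."""
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            w_radiom = np.exp(-((img - shifted) ** 2) / (2 * sigma_r ** 2))
            out += w_spatial * w_radiom * shifted
            norm += w_spatial * w_radiom
    return out / norm                      # norm >= 1 (centre pixel)

def iterative_bilateral(img, n_iter=3, **kw):
    """Iterative scheme from the abstract: progressively refine the
    result by re-filtering the previous output."""
    for _ in range(n_iter):
        img = bilateral_step(img, **kw)
    return img

# Noisy step edge: smoothing should flatten each side, not blur the edge.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + rng.normal(0, 0.1, clean.shape)
out = iterative_bilateral(noisy, n_iter=3)
```

Because the radiometric weight collapses across the step (intensity difference ~1 against sigma_r = 0.2), averaging happens almost entirely within each homogeneous side, which is the edge-preserving behaviour the abstract reports.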