The aim of this review is to provide a comprehensive overview of the large variety of phenolic compounds that have to date been identified in a wide range of monofloral honeys found globally. The collated information is structured along several themes, including the botanical family and genus of the monofloral honeys for which phenolic constituents have been reported, the chemical classes the phenolic compounds can be attributed to, the analytical methods employed in compound determination, and the countries with a particular research focus on phenolic honey constituents. This review covers 130 research papers that detail the phenolic constituents of a total of 556 monofloral honeys. Based on the findings of this review, it can be concluded that most of these honeys belong to the Myrtaceae and Fabaceae families and that Robinia (Robinia pseudoacacia, Fabaceae), Manuka (Leptospermum scoparium, Myrtaceae), and Chestnut (Castanea sp., Fagaceae) honeys are to date the most studied honeys for phenolic compound determination. China, Italy, and Turkey are the major honey phenolic research hubs. To date, 161 individual phenolic compounds belonging to five major compound groups have been reported, with caffeic acid, gallic acid, ferulic acid, and quercetin being the most widely reported among them. HPLC with photodiode array detection appears to be the most popular method for chemical structure identification.
Abstract. We present two Python libraries (map2loop and map2model) which combine the observations available in digital geological maps with conceptual information, including assumptions regarding the subsurface extent of faults and plutons, to provide sufficient constraints to build a reasonable 3D geological model. At a regional scale, the best predictor for the 3D geology of the near-subsurface is often the information contained in a geological map. This remains true even after recognising that a map is also a model, with all the potential for hidden biases that this model status implies. One challenge we face is the difficulty in reproducibly preparing input data for 3D geological models. The information stored in a map falls into three categories of geometric data: positional data, such as the positions of faults and of intrusive and stratigraphic contacts; gradient data, such as the dips of contacts or faults; and topological data, such as the age relationships of faults and stratigraphic units, or their adjacency relationships. This work is being conducted within the Loop Consortium, in which algorithms are being developed that allow automatic deconstruction of a geological map to recover the necessary positional, gradient, and topological data as inputs to different 3D geological modelling codes. This automation provides significant advantages: it reduces the time to first prototype models; it clearly separates the primary data from subsets produced by filtering via data reduction and conceptual constraints; and it provides a homogeneous pathway to sensitivity analysis, uncertainty quantification, and value-of-information studies. We use the example of the refolded and faulted Hamersley Basin in Western Australia to demonstrate a complete workflow from data extraction to 3D modelling using two different open-source 3D modelling engines: GemPy and LoopStructural.
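The three categories of geometric data the abstract names can be made concrete with simple containers. The sketch below is illustrative only; the class and field names are invented for this example and are not the map2loop or map2model API.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical containers for the three categories of geometric data
# recoverable from a geological map (names are illustrative, not map2loop's).

@dataclass
class PositionalData:
    """Location of a fault trace or an intrusive/stratigraphic contact."""
    feature_type: str                     # e.g. "fault" or "stratigraphic_contact"
    vertices: List[Tuple[float, float]]   # map-view (x, y) coordinates

@dataclass
class GradientData:
    """An orientation measurement, such as the dip of a contact or fault."""
    x: float
    y: float
    dip: float              # degrees from horizontal
    dip_direction: float    # degrees clockwise from north

@dataclass
class TopologicalData:
    """An age or adjacency relationship between mapped units or faults."""
    older: str
    younger: str
    relationship: str       # e.g. "overlies" or "cuts"

contact = PositionalData("stratigraphic_contact", [(500.0, 1200.0), (520.0, 1250.0)])
dip_obs = GradientData(x=510.0, y=1225.0, dip=35.0, dip_direction=90.0)
relation = TopologicalData(older="Unit_A", younger="Unit_B", relationship="overlies")
```

Separating map observations into these three record types is what lets the downstream modelling engines consume each kind of constraint independently.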
Abstract. Exploration and mining companies rely on geological drill core logs to target and obtain initial information on the geology of an area, and to build models for prospectivity mapping or mine planning. A huge amount of legacy drilling data is available in geological survey databases but cannot be used directly, as it is compiled and recorded in an unstructured textual form and in different formats depending on the database structure, company, logging geologist, investigation method, investigated materials, and/or drilling campaign. It is subjective and plagued with uncertainty, as the logging is likely to have been conducted by tens to hundreds of geologists, all of whom would have their own personal biases. Nevertheless, this is valuable information that adds value to geoscientific data for research and exploration, specifically in efficiently targeting sustainable new discoveries and in providing better shallow subsurface constraints for 3D geological models. dh2loop (https://github.com/Loop3D/dh2loop) is an open-source Python library that provides the functionality to extract and standardize geological drill hole data and export it into readily importable interval tables (collar, survey, lithology). In this contribution, we extract, process, and classify lithological logs from the Geological Survey of Western Australia Mineral Exploration Reports Database in the Yalgoo-Singleton Greenstone Belt (YSGB) region. For this case study, the extraction rates for collar, survey, and lithology data are 93 %, 86 %, and 34 %, respectively. dh2loop also addresses the subjective nature and the variability of nomenclature of lithological descriptions within and across different drilling campaigns by using thesauri and fuzzy string matching. 86 % of the extracted lithology data is successfully matched to lithologies in the thesauri.
Since this process can be tedious, we also tested the string matching on the free-text comments, which resulted in a matching rate of 16 % (7,870 successfully matched records out of 47,823 records). The standardized lithological data is then classified into multi-level groupings that can be used to systematically upscale and downscale drill hole data inputs for multiscale 3D geological modelling. dh2loop formats legacy data, bridging the gap between the utilization of legacy drill hole data and the drill hole analysis functionalities available in existing Python libraries (lasio, welly, striplog).
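The thesaurus-plus-fuzzy-matching idea described above can be illustrated with the standard library's difflib. This is a minimal sketch, not dh2loop's actual matcher or thesauri; the lithology entries and groupings below are invented for the example.

```python
from difflib import get_close_matches

# Toy lithology thesaurus mapping a canonical term to a multi-level grouping.
# Entries are illustrative only, not dh2loop's actual thesauri.
lithology_thesaurus = {
    "granite": "igneous/intrusive/felsic",
    "basalt": "igneous/volcanic/mafic",
    "shale": "sedimentary/clastic/fine-grained",
    "quartzite": "metamorphic/quartz-rich",
}

def match_lithology(raw_log: str, cutoff: float = 0.6):
    """Fuzzy-match a free-text log entry against the thesaurus keys.

    Returns (canonical_term, grouping), or (None, None) when no key is
    similar enough at the given cutoff.
    """
    token = raw_log.strip().lower()
    hits = get_close_matches(token, lithology_thesaurus, n=1, cutoff=cutoff)
    if not hits:
        return (None, None)
    return (hits[0], lithology_thesaurus[hits[0]])

# Misspelled entries still resolve; unrelated terms are rejected.
print(match_lithology("granit "))    # typo resolves to 'granite'
print(match_lithology("pegmatoid"))  # no close match -> (None, None)
```

Tolerating misspellings and abbreviation noise in this way is what lets descriptions logged by many different geologists collapse onto one standardized vocabulary, which the multi-level grouping string can then upscale or downscale.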
Abstract. At a regional scale, the best predictor for the 3D geology of the near-subsurface is often the information contained in a geological map. One challenge we face is the difficulty in reproducibly preparing input data for 3D geological models. We present two libraries (map2loop and map2model) that automatically combine the information available in digital geological maps with conceptual information, including assumptions regarding the subsurface extent of faults and plutons, to provide sufficient constraints to build a prototype 3D geological model. The information stored in a map falls into three categories of geometric data: positional data, such as the positions of faults and of intrusive and stratigraphic contacts; gradient data, such as the dips of contacts or faults; and topological data, such as the age relationships of faults and stratigraphic units or their spatial adjacency relationships. This automation provides significant advantages: it reduces the time to first prototype models; it clearly separates the data, concepts, and interpretations; and it provides a homogeneous pathway to sensitivity analysis, uncertainty quantification, and value-of-information studies that require stochastic simulations, and thus full automation of the 3D modelling workflow from data extraction through to model construction. We use the example of the folded and faulted Hamersley Basin in Western Australia to demonstrate a complete workflow from data extraction to 3D modelling using two different open-source 3D modelling engines: GemPy and LoopStructural.
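The topological category, the age relationships between faults and stratigraphic units, can be represented as a directed graph and resolved into a relative age ordering. A minimal sketch using the standard library follows; the unit names and relationships are invented for illustration and are not taken from the Hamersley data or the map2model API.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical age relationships: each key depends on (is younger than)
# the units it maps to, e.g. a dyke that cuts Unit_C must postdate it.
age_relations = {
    "Dolerite_Dyke": {"Unit_C"},
    "Unit_C": {"Unit_B"},
    "Unit_B": {"Unit_A"},
}

# static_order() yields units with every predecessor (older unit) first,
# i.e. an oldest-to-youngest stratigraphic ordering.
order = list(TopologicalSorter(age_relations).static_order())
print(order)  # ['Unit_A', 'Unit_B', 'Unit_C', 'Dolerite_Dyke']
```

A topological sort like this is also where inconsistent map interpretations surface early: a cycle in the age relations (unit A both older and younger than unit B) raises an error instead of silently producing a contradictory stratigraphic column.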
Abstract. To support the needs of practitioners regarding 3D geological modelling and uncertainty quantification in the field, in particular in the mining industry, we propose a Python package called loopUI-0.1 that provides a set of local and global indicators to measure uncertainty and feature dissimilarities among an ensemble of voxet models. We present the results of a survey launched among practitioners in the mineral industry enquiring about their modelling and uncertainty quantification practices and needs. It reveals that practitioners acknowledge the importance of uncertainty quantification even if they do not perform it. Four main factors preventing practitioners from performing uncertainty quantification were identified: a lack of data uncertainty quantification, the (computing) time required to generate one model, poor tracking of assumptions and interpretations, and the relative complexity of uncertainty quantification. The paper reviews these issues and proposes solutions to alleviate them. Elements of an answer to these problems are already provided in the special issue hosting this paper, and more are expected to come.
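A common local indicator of the kind described above is the per-cell entropy of unit labels across an ensemble of voxet realisations: cells where all realisations agree score zero, and cells where the models disagree score high. The sketch below uses only the standard library and is an illustration of the idea, not the loopUI-0.1 API; the toy ensemble is invented.

```python
import math

def cardinality(labels):
    """Number of distinct unit labels one cell takes across the ensemble."""
    return len(set(labels))

def entropy(labels):
    """Shannon entropy (bits) of one cell's unit labels across the ensemble."""
    n = len(labels)
    counts = {lab: labels.count(lab) for lab in set(labels)}
    # sum p * log2(1/p); a unanimous cell contributes exactly 0 bits.
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# Toy ensemble: 4 model realisations of a 3-cell voxet, one unit label per cell.
ensemble = [
    ["granite", "granite", "shale"],   # realisation 1
    ["granite", "shale",   "shale"],   # realisation 2
    ["granite", "granite", "shale"],   # realisation 3
    ["granite", "basalt",  "shale"],   # realisation 4
]

# Transpose so each entry gathers one cell's labels across all realisations.
per_cell = [list(cell) for cell in zip(*ensemble)]
entropies = [entropy(cell) for cell in per_cell]
print(entropies)  # cells 0 and 2 are certain (0 bits); cell 1 is uncertain
```

Mapping such a per-cell score back onto the voxet grid gives exactly the kind of local uncertainty map the surveyed practitioners asked for, without requiring more than an ensemble of model runs.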