Information from cores and wireline logs, processed according to different methodologies, was integrated to provide a consistent facies classification, petrophysical characterization and 3D geological model of a deep-water turbidite reservoir in the Gulf of Mexico. A number of sedimentological facies (sed-facies), either heterolithic or single-lithology, were first identified on cores and used to generate a conceptual depositional model of the reservoir. At the same time, the conventional wireline log recordings from both cored and uncored wells were processed using a multivariate statistical technique (cluster analysis) to provide a Net-to-Gross (or Vshale)-based log-facies classification. The relationship between sed-facies and log-facies is not straightforward, as some of the latter include different amounts of different sed-facies, heterolithic or not. The petrophysical characterization of the log-facies was carried out using a Process-Oriented Modeling approach. Realistic fine-scale, 3D digital models of the different sed-facies were generated. These models were populated at the lamina scale using the statistics of porosity measurements from a selected subset of the core plugs and the statistics of permeability measurements from a mini-permeameter, both parameters having been overburden-corrected in advance. Next, the fine-scale 3D models of the sed-facies were stacked according to their observed proportions in each log-facies, and several porosity and permeability grids for each log-facies were stochastically generated. Finally, the 3D log-facies models were upscaled, analytically and numerically, to provide effective porosity and permeability statistics, respectively, for use in the property modeling phase of the 3D geological model of the reservoir.
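The cluster-analysis step used to derive log-facies from conventional curves can be illustrated with a simple k-means grouping of standardized log samples. This is a minimal sketch under assumed choices of mine, not the authors' procedure: the paper does not name the specific clustering algorithm, and the farthest-point initialization and example curves (gamma ray, density, neutron) are illustrative.

```python
import numpy as np

def kmeans_logfacies(logs, k, n_iter=100):
    """Cluster standardized wireline-log samples (rows) into k log-facies.

    `logs` is an (n_samples, n_curves) array, e.g. one column each for
    gamma ray, density and neutron readings at successive depths.
    """
    # standardize each curve so no single log dominates the distance
    z = (logs - logs.mean(axis=0)) / logs.std(axis=0)
    # deterministic farthest-point initialization: spread the k seeds out
    centers = [z[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(z - c, axis=1) for c in centers], axis=0)
        centers.append(z[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign each depth sample to the nearest cluster center
        d = np.linalg.norm(z[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers as the mean of each cluster's members
        new = np.array([z[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

Each resulting label would then be mapped to a Net-to-Gross (or Vshale) class by inspecting the cluster statistics against cored intervals.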
This workflow has the great advantage of allowing data acquired at different scales (mini-permeameter, core plugs and wireline logs) to be used in a manner consistent with their volumes of investigation, while also honoring the conceptual sedimentological model of the reservoir.

Introduction

Hydrocarbon production from 'easily characterized and produced' reservoirs has slowly declined worldwide over the last decades; as a consequence, exploration and production targets have progressively shifted towards more challenging environments and/or more 'difficult' reservoirs. Turbidite deposits in deep and ultra-deep offshore waters are a typical example. Besides their overall architectural complexity (amalgamated channels, channel-levee systems, channel-lobe systems), some of these deposits include heterolithic facies consisting of thin, cm- or mm-scale, alternating horizons of sandstone, siltstone and shale. In fact, the overall architecture of deep-water turbidite deposits can be reasonably modeled if the appropriate geophysical-geological information and sedimentological conceptual models are provided as input to any commercially available 3D geo-modeling tool with object-based geostatistical modeling (Bratvold et al., 1994). On the other hand, the petrophysical characterization of heterolithic reservoirs is still a very demanding and challenging task, as the very thin alternations of sandstones, siltstones and shales cannot be resolved by conventional wireline logs, which makes the parameters derived from conventional quantitative log interpretation of little use. Relying on the sparse, often biased, data from core plugs can also be misleading (Scaglioni et al., 2006). Nordahl et al. (2005), Phillips & Wen (2007), Ringrose et al. (2005), Ruvo et al. (2005) and Scaglioni et al. (2006) have shown that improving the characterization of heterolithic reservoirs requires a well-focused data collection and the adoption of non-conventional approaches, such as Process-Oriented Modeling at the core/near-wellbore scale (Wen et al., 1998). Most importantly, all of the above authors point out that the greatest attention must be paid to the meaning of data in terms of their support volume.
A new workflow has been devised to characterize the petrophysical properties of two thin-layered, heterolithic log facies from a turbidite reservoir. The methodology is based on a published modeling technique that enables a highly accurate reconstruction of the fine-scale lithological and sedimentological reservoir heterogeneities and a thorough integration of petrophysical data from core plugs. A large number of fine-scale rock models (geometrical grids) are: (1) stochastically generated to investigate the variability of the sedimentological features observed in cores; and (2) stochastically populated with porosity and permeability values of the pure lithological components (sandstone, siltstone and mudstone) to generate petrophysical grids. The petrophysical grids are subsequently upscaled using analytical and flow-based techniques, thus providing distributions of porosity, horizontal permeability and vertical permeability that are further analysed to characterize the aforementioned log facies. The results obtained using this workflow are exhaustive, in the sense that they implicitly take into account the full range of variation of ‘net-reservoir’ (sandstone and siltstone) and ‘non-net-reservoir’ (mudstone) lithologies. The use of net-to-gross in petrophysical characterization is thus made redundant.
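The analytical upscaling step mentioned above can be sketched for the simplest case, a stack of horizontal layers: porosity and bed-parallel (horizontal) permeability upscale as thickness-weighted arithmetic means, while bed-normal (vertical) permeability upscales as the thickness-weighted harmonic mean. A minimal sketch only; the actual workflow operates on 3D grids and also uses flow-based upscaling, and the function and variable names here are illustrative.

```python
import numpy as np

def upscale_layered(phi, kh, kv, h):
    """Analytically upscale a stack of horizontal layers.

    phi, kh, kv : per-layer porosity and horizontal/vertical permeability
    h           : per-layer thickness
    For flow parallel to layering the effective permeability is the
    thickness-weighted arithmetic mean; for flow across the layering it
    is the thickness-weighted harmonic mean. Porosity upscales as a
    thickness-weighted arithmetic mean.
    """
    phi, kh, kv, h = map(np.asarray, (phi, kh, kv, h))
    w = h / h.sum()                  # thickness weights
    phi_eff = np.sum(w * phi)       # arithmetic mean
    kh_eff = np.sum(w * kh)         # arithmetic mean (parallel flow)
    kv_eff = 1.0 / np.sum(w / kv)   # harmonic mean (series flow)
    return phi_eff, kh_eff, kv_eff
```

For example, two equally thick layers with permeabilities 100 and 1 mD give an effective horizontal permeability of 50.5 mD but an effective vertical permeability of about 2 mD: the low-permeability layer dominates flow across the stack, which is why thin mudstone laminae matter so much in heterolithic facies.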
Summary

Two techniques of preprocessing data from core plugs have been investigated to enhance the quality of synthetic permeability estimation from conventional logs by use of artificial neural networks (ANNs). A first technique consists of "cleaning" the core-plug data set by removing the measurements deemed log-incompatible (i.e., those from plugs corresponding to log measurements affected by shoulder-bed effects or from layers with thickness below the vertical log resolution). The second technique relies on building high-resolution digital models of cored intervals by use of a process-oriented-modeling (POM) approach: the core model is populated with permeability values from core plugs and then upscaled to a log-equivalent support volume. Synthetic permeability curves estimated with these techniques have been compared to synthetic permeability curves estimated without core-data preprocessing, to permeability estimated directly from core plugs, and to properly calibrated permeability curves from a nuclear magnetic resonance (NMR) log tool in a turbidite reservoir, the ground truth being represented by actual dynamic data. Results highlight that core-to-log scale effects play a major role in permeability estimation from conventional logs and show that the proposed preprocessing techniques can be effective in improving permeability prediction, because they significantly reduce cross-scaling problems related to the differences in support volumes. Strengths and weaknesses of the two preprocessing approaches have also been compared. The first technique is faster, but its application is strongly constrained by the statistical and geological representativeness of the selected data set: some lithologies may be so underrepresented as to question the use of estimation tools like ANNs.
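The first ("cleaning") technique can be illustrated by a simple thickness filter: a plug measurement is kept only when the layer hosting it is at least as thick as the vertical log resolution, so that the plug and the log reading at that depth sample comparable volumes. A minimal sketch with an assumed resolution value and illustrative names; the paper's actual criteria also screen for shoulder-bed effects.

```python
import numpy as np

def clean_plug_dataset(plug_depths, plug_k, layer_tops, layer_bases,
                       min_thickness=0.6):
    """Drop core-plug measurements taken in layers thinner than the
    vertical log resolution (min_thickness; 0.6 m is an illustrative
    value, not taken from the paper).

    A plug is kept only if the layer containing it is at least
    min_thickness thick, so its permeability is comparable with the
    log reading at that depth.
    """
    keep = []
    for i, z in enumerate(plug_depths):
        for top, base in zip(layer_tops, layer_bases):
            if top <= z <= base:                 # plug falls in this layer
                if (base - top) >= min_thickness:
                    keep.append(i)               # layer thick enough: keep
                break
    keep = np.array(keep, dtype=int)
    return plug_depths[keep], plug_k[keep]
```

The risk named above follows directly from this filter: thin heterolithic layers are removed wholesale, so their lithologies can vanish from the ANN training set.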
Conversely, the POM preprocessing technique is more time-consuming and needs detailed core descriptions, but it has the great advantage of supplying, starting from core data only, a reliable permeability curve that retains its validity at the log scale.

Introduction

Permeability prediction in hydrocarbon reservoirs is probably the most challenging issue that geologists, petrophysicists, and reservoir engineers have to deal with. In particular, the availability of permeability curves in a large number of wells is one of the most desired targets in a reservoir-characterization study. In recent years, logging techniques such as NMR have been developed that allow permeability curves to be generated along reservoir intervals. Nevertheless, the availability of NMR logs is not the rule: in the majority of wells, especially those from older fields, the only permeability measurements available come from plugs sparsely sampled from bottomhole cores. Bottomhole cores, in turn, are usually available in only a few reservoir intervals and/or wells, whereas conventional log recordings (natural gamma ray, density, and neutron) are available from nearly every well. Attempts to correlate core permeability to porosity and/or other conventional logs using mathematical/statistical tools date back to the early 1960s. Since then, regression analysis has been the most widely used approach for permeability prediction. This approach assumes that the functional relationships between permeability and porosity, or, alternatively, between permeability and conventional logs, can be known in advance; as a matter of fact, such relationships are unknown.
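The classical regression approach referred to above is typically a semilog porosity-permeability transform, log10(k) = a*phi + b, fitted by least squares; its limitation is precisely that this functional form is presupposed. A minimal sketch (function names are illustrative):

```python
import numpy as np

def poro_perm_regression(phi, k):
    """Fit the classical semilog porosity-permeability transform
    log10(k) = a*phi + b by least squares; return the coefficients (a, b).
    phi is fractional porosity, k is permeability (e.g. in mD)."""
    a, b = np.polyfit(phi, np.log10(k), 1)
    return a, b

def predict_perm(phi, a, b):
    """Back-transform the fitted line to a permeability estimate."""
    return 10.0 ** (a * np.asarray(phi) + b)
```

Because permeability enters through its logarithm, small scatter in porosity translates into order-of-magnitude scatter in predicted permeability, which is one reason data-driven estimators such as ANNs were explored instead.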