A model is described for predicting the incidence of high-pH stress corrosion cracking (SCC) on gas pipelines. The model is mechanistic and is based on a film-rupture mechanism of crack growth. Using field pressure-cycle data, the model determines the crack-tip strain rate and the occurrence of film-rupture events at the crack tip during operating pressure cycles. Crack aspect ratios were obtained from field measurements. Probabilistic distribution functions were assigned to the input parameters, and a Monte Carlo method was used to produce probabilistic crack growth rate distributions. The model grows a crack to failure while accounting for the effects of temperature and potential.
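The Monte Carlo step described above can be sketched as follows. This is a minimal illustration only: the rate law, the choice of lognormal/uniform input distributions, and all parameter values are assumptions for demonstration, not the paper's actual model.

```python
import random
import statistics

def crack_growth_rate(strain_rate, temp_factor, potential_factor):
    # Hypothetical film-rupture-style rate law: growth rate scales with
    # crack-tip strain rate, modulated by temperature and potential factors.
    return strain_rate * temp_factor * potential_factor

def monte_carlo_growth_rates(n=10000, seed=1):
    """Sample the input parameters from assumed distributions and
    return the resulting probabilistic growth-rate distribution."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n):
        strain_rate = rng.lognormvariate(-14.0, 0.5)  # crack-tip strain rate, 1/s (assumed)
        temp_factor = rng.uniform(0.5, 1.5)           # temperature effect (assumed)
        pot_factor = rng.uniform(0.8, 1.2)            # potential effect (assumed)
        rates.append(crack_growth_rate(strain_rate, temp_factor, pot_factor))
    return rates

rates = monte_carlo_growth_rates()
# Summarize the distribution rather than a single deterministic rate
median = statistics.median(rates)
p95 = sorted(rates)[int(0.95 * len(rates))]
```

In practice the sampled rates would feed a crack-growth integration that advances the crack to failure; here only the rate-distribution step is shown.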
Integrity management is based on the ability of the pipeline operator to predict the growth of defects detected in inspection programs on an operating pipeline system. Accurate predictions allow targeted interventions to be scheduled in a cost-effective and timely fashion for those defects that pose a high potential risk. In this paper two distinct theories are described for predicting the development of corrosion pits on an operating pipeline. The first theory corresponds to the traditional approach, in which the past growth behaviour of each defect is used to predict the rate of its future development. In this theory each defect is assumed to have its own unique corrosion environment in which only a very limited range of corrosion rates will be seen. In the second approach, this assumption is not made. Instead, any corrosion defect is allowed to grow at any likely rate over any time interval. In this approach an arbitrary selection of corrosion rates, derived from the overall profile of past rates seen for all defects, is applied to each defect over time. Predicted distributions derived by computer simulation of the initiation and growth of corrosion defects according to each theory have been compared to an actual defect depth distribution derived by in-line inspection (ILI) of an operating pipeline. The success of the two models is compared and implications for pipeline integrity management are discussed.
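The distinction between the two theories can be made concrete with a small simulation sketch. Everything here is an illustrative assumption (the lognormal rate distribution, the parameter values, the growth horizon); it is not the paper's model, but it shows why the two theories predict different defect-depth distributions.

```python
import random
import statistics

def simulate_depths(n_defects=1000, n_years=20, per_defect_rate=True, seed=7):
    """Grow defect depths under two competing corrosion-rate theories.

    per_defect_rate=True  -> Theory 1: each defect keeps one rate for life
                             (its own unique corrosion environment).
    per_defect_rate=False -> Theory 2: each year's rate is resampled from
                             the population-wide rate distribution.
    The lognormal rate distribution (mm/yr) is assumed for illustration.
    """
    rng = random.Random(seed)
    depths = []
    for _ in range(n_defects):
        depth = 0.0
        fixed_rate = rng.lognormvariate(-2.0, 0.6)
        for _ in range(n_years):
            rate = fixed_rate if per_defect_rate else rng.lognormvariate(-2.0, 0.6)
            depth += rate
        depths.append(depth)
    return depths

theory1 = simulate_depths(per_defect_rate=True)
theory2 = simulate_depths(per_defect_rate=False)
spread1 = statistics.pstdev(theory1)
spread2 = statistics.pstdev(theory2)
```

Because a single rate compounds over the whole life of a defect under Theory 1, while Theory 2 averages many independent draws, Theory 1 produces a much wider depth distribution for the same mean; comparing each simulated distribution against an ILI-derived depth distribution is what discriminates between the theories.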
Typical risk assessment processes produce risk estimates by multiplying together single-valued, expected failure frequencies and associated consequences. However, a range of consequences can result from an incident, and a more representative estimate of failure frequency is captured by a distributed variable rather than by a single point value. Risk estimates calculated by typical assessment processes are sometimes referred to as “mean” estimates or “cautious best estimates”. This terminology implicitly acknowledges that there is truly a range of possible values. Meta-risk is a potential approach for analyzing risk that captures this uncertainty by using distributions of failure frequency and consequence in place of point estimates. These distributions are combined to form a risk distribution that can then be used more directly in quantified decision making. Meta-risk improves on the principle of “as low as reasonably practicable” (ALARP) by acknowledging that the levels of uncertainty associated with models used in the risk assessment process are not equal. By providing “probability of exceedance” targets relative to defined risk acceptance criteria, the meta-risk approach allows for quantified decision making that addresses both the level of risk and the associated level of uncertainty. This process allows an analyst to compare risks more accurately across multiple hazards whose levels of uncertainty may vary greatly, and to quantify the benefits of integrity management strategies, such as condition monitoring, whose primary effect is to reduce uncertainty rather than to reduce risk directly.
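The combination of frequency and consequence distributions into a risk distribution, and the resulting probability-of-exceedance metric, can be sketched as below. The distributions, units, and the acceptance criterion are invented for illustration and carry no connection to the paper's actual values.

```python
import random

def meta_risk_exceedance(n=20000, risk_criterion=1e-4, seed=3):
    """Combine uncertain failure frequency and consequence into a risk
    distribution by Monte Carlo, then report the probability that the
    sampled risk exceeds a defined acceptance criterion."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n):
        freq = rng.lognormvariate(-11.0, 1.0)        # failures per km-year (assumed)
        consequence = rng.lognormvariate(4.0, 0.8)   # consequence per failure (assumed units)
        risk = freq * consequence                    # one sample from the risk distribution
        if risk > risk_criterion:
            exceed += 1
    return exceed / n

p_exceed = meta_risk_exceedance()
```

The point-estimate approach would report only the single product of two means; the exceedance probability additionally conveys how much of the risk distribution lies beyond the criterion, which is what makes uncertainty-reducing measures such as condition monitoring quantifiable.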
Risk assessment has been used historically in pipeline integrity to provide relative risk rankings based on a mixture of qualitative and quantitative inputs. With the improvement of assessment and data collection techniques and technologies, and the corresponding improvement in hazard and consequence modeling that these techniques have made possible, pipeline operators are now able to calculate risk on an entirely quantitative basis. This improvement allows operators to manage pipeline integrity-related risk within a framework in which levels of risk reduction can be related to integrity costs in comparable terms, and in which the acceptability of residual levels of risk can be measured against responsible and defensible risk acceptance criteria. The framework outlined in this paper, coupled with quantified risk models, enables pipeline operators to identify areas that may require risk reduction, identify preferred risk-reduction methods, provide justification for the project, and monitor residual levels of risk. The introduction of defined risk acceptance criteria also provides operators with a tool to move beyond relative risk prioritization towards the ability to discriminate between pipe operating at acceptable integrity levels and pipe requiring risk mitigation.
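The final discrimination step, separating pipe operating at acceptable risk from pipe requiring mitigation, reduces to comparing each segment's quantified risk against the acceptance criterion. The segment labels, risk values, and criterion below are hypothetical placeholders.

```python
def classify_segments(segment_risks, acceptance_criterion):
    """Split pipe segments into those at acceptable risk and those
    requiring mitigation, per a defined risk acceptance criterion.
    segment_risks maps a segment label to its quantified risk
    (assumed units, e.g. expected cost per km-year)."""
    acceptable = {s: r for s, r in segment_risks.items() if r <= acceptance_criterion}
    needs_mitigation = {s: r for s, r in segment_risks.items() if r > acceptance_criterion}
    return acceptable, needs_mitigation

# Hypothetical kilometre-post segments with quantified risk values
segments = {"KP10-20": 2e-5, "KP20-30": 8e-4, "KP30-40": 5e-5}
ok, fix = classify_segments(segments, acceptance_criterion=1e-4)
```

With an absolute criterion in place, the output is a defensible accept/mitigate decision per segment rather than a relative ranking.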
A common approach to the management of external corrosion in the pipeline industry is to perform an in-line inspection, followed by repairs of defects that fail a deterministic criterion, and then leave the line in service until a prescribed time interval has elapsed, at which point another reinspection is performed. However, many companies have found that, as a result of the uncertainty associated with MFL defect sizing and corrosion growth rates, a deterministic repair and reinspection process may often result in unnecessary maintenance expenditures while occasionally failing to identify and address critical features. When the rare feature ‘slips through’ the deterministic process, companies often respond by adding conservatism to the process, leading to increased spending with little additional benefit. A better approach for evaluating corrosion defects is to view the process as an analysis of a set of stochastic variables instead of deterministic values. Through such an approach, the sensitivity of a defect’s failure probability can be more effectively evaluated, facilitating a decision process that is better able to find the ‘exceptions’ that are not addressed by a deterministic process. This paper outlines an approach to analyzing MFL data with stochastic variables using computer simulation, along with a process for continuously improving the characterization of each variable through a feedback loop. Alternatives to Monte Carlo sampling, such as importance sampling, are briefly outlined to minimize the analysis time required without sacrificing simulation accuracy. Finally, acceptance criteria are required to interpret the calculated failure probability in order to inform maintenance decision making. This is presented in a risk-based context using a previously published risk management framework. Through this process, defect repair decisions and the evaluation of the benefit of MFL re-inspection can be better optimized. Examples are drawn from actual maintenance programs to illustrate this approach.
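The efficiency argument for importance sampling over plain Monte Carlo can be illustrated with a toy limit state. The normal depth distribution, the 80%-of-wall failure threshold, and the shifted sampling distribution are all assumptions chosen for demonstration; the technique (sampling near the failure region and reweighting by the likelihood ratio) is the general one.

```python
import math
import random

def failure(depth_frac):
    # Hypothetical limit state: defect fails when depth exceeds 80% of wall
    return depth_frac > 0.8

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def plain_mc(n, mu=0.4, sigma=0.1, seed=5):
    """Plain Monte Carlo: almost every sample misses the failure region,
    so small failure probabilities need enormous sample counts."""
    rng = random.Random(seed)
    hits = sum(failure(rng.gauss(mu, sigma)) for _ in range(n))
    return hits / n

def importance_sampling(n, mu=0.4, sigma=0.1, mu_is=0.8, seed=5):
    """Importance sampling: draw from a distribution centred on the
    failure region and reweight each sample by the likelihood ratio
    p(x)/q(x) to keep the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_is, sigma)
        w = normal_pdf(x, mu, sigma) / normal_pdf(x, mu_is, sigma)
        total += w * failure(x)
    return total / n

p_mc = plain_mc(10000)             # likely 0: the event is far too rare
p_is = importance_sampling(50000)  # close to the true ~3.2e-5
```

For this limit state the true failure probability is about 3.2 × 10⁻⁵; the importance-sampled estimate converges with far fewer samples than plain Monte Carlo, which is the motivation for using such methods on MFL defect populations.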