A study of film reject and repeat rates was undertaken in the Department of Dental Radiology of King's College School of Medicine and Dentistry over a 6-month period. The aim of the study was to assess the effects of changes implemented after a previous audit, and to carry out a more detailed analysis of the factors influencing the reject and repeat rates using a larger volume of data. The information recorded included the equipment and projection used, and the age of the patient if under 16 years. The overall reject rate was 3.06%, 1.84% less than recorded in the earlier study, and the repeat rate was 0.93%. Positioning errors were the most frequent cause for rejection. Significant differences in reject rates were noted between different projections, and also between qualified staff and those in training. The rejection rate for patients under 16 years was not significantly higher than for patients over 16 years; the most frequent cause of rejection was still positioning faults, but patient movement accounted for a larger proportion of the rejects than was the case in adult patients. The results demonstrate the role of audit in isolating factors leading to additional exposures. The effectiveness of changes implemented following a reject film analysis is also shown.
Otoliths of southern bluefin tuna, Thunnus maccoyii, of between 42 cm and 167 cm F.L., taken from waters off New South Wales, South Australia, and Western Australia were prepared to reveal annual banding. Methods of preparation and examination are detailed. Otolith growth was demonstrated to be directly proportional to fish growth over the size range studied. Sampling over 13 months provided validation of the annual nature of bands for fish in their 3rd, 4th, and 5th years of growth. Band formation for fish in their 2nd, 6th, 7th, 8th, and 9th years of growth also appeared to be annual, though samples were available from an insufficient number of months for confident validation. Von Bertalanffy growth parameters derived from the determined ages-at-length are L∞ = 261.3 cm, k = 0.108, and t₀ = -0.157.
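The reported parameters can be plugged into the standard Von Bertalanffy growth equation, L(t) = L∞(1 − e^(−k(t − t₀))). The short sketch below (interpreting the third parameter as t₀ in years, an assumption based on the standard form of the equation) predicts fork length at age:

```python
import math

def von_bertalanffy_length(t, l_inf=261.3, k=0.108, t0=-0.157):
    """Von Bertalanffy length-at-age: L(t) = L_inf * (1 - exp(-k * (t - t0))).

    Parameter values are those reported in the abstract; t is age in years
    and the result is fork length in cm.
    """
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Predicted fork lengths (cm) at selected ages within the validated range
for age in (3, 4, 5):
    print(f"age {age}: {von_bertalanffy_length(age):.1f} cm")
```

Note that predicted lengths approach but never reach the asymptotic length L∞ = 261.3 cm, consistent with the largest sampled fish (167 cm) lying well below it.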
Summary. When planning, monitoring, or checking the results from a directional surveying program, one must be able to predict the probable errors associated with the different survey tools used. An "instrument performance model" is a mathematical algorithm that, when combined with a set of independently validated parameters describing the particular survey instrument in question and information about the well, enables the directional uncertainty to be computed at any point in a survey run. Randomization caused by axial rotation of the tool significantly affects the size of the predicted errors. The analysis accounts for rotation and incorporates a new depth-measurement-error treatment. Performance models have many practical uses in survey operations management and wellsite quality control. They enable research to be focused accurately and objective comparisons to be made between different instruments. Introduction The main difficulty for drilling engineers concerned with running a directional survey program has been the lack of literature describing a coherent structure that brings together instrument selection, operations planning, data analysis, and performance modeling into a single discipline that can be routinely applied to all wells. Thorogood introduced one approach to the problem of specifying and implementing well surveys. Within that framework, the service contractor is responsible for calibrating, maintaining, and operating survey tools to a level of accuracy and reliability defined by an instrument performance specification. The operator of a well is responsible for creating an environment in which successful surveys can be run. Therefore, the operator must monitor the performances of contractors carefully and control other factors that may significantly affect survey accuracy and instrument reliability.
At each step in the process of managing an operation, it is necessary to be able to predict the behavior of the instruments and to quantify the possible errors resulting from their use. These ends can be achieved with instrument performance models. This paper shows how such models can be constructed and demonstrates their application to directional-surveying-operations management. A new method for the treatment of depth-measurement errors based on Wolff and de Wardt's work is proposed, and the analysis is extended to consider errors that vary randomly between stations. Error-Analysis Background Analytic methods to quantify survey errors were originally developed by Walstrom et al. in the late 1960s and early 1970s. Their model is based on the assumption that errors vary randomly between stations throughout a survey. Errors predicted with this approach are far smaller than differences observed between surveys. Wolff and de Wardt postulated that although measurement errors may vary randomly between surveys, they tend to be systematic within a survey. Predictions made with their method were found to be more consistent with field experience than those calculated by the Walstrom method. In 1981, Warren analyzed survey errors resulting from measurements taken during a relief-well drilling operation. He presented a new method for extracting the random and systematic errors from survey results, confirmed that both systematic and random errors occurred in the data sets, and showed that the systematic errors caused much larger positional errors than the random errors. Most survey tools in routine use during the development of the Wolff-de Wardt method were photomechanical devices comprising noninertial-grade sensors. Errors caused by the instrument could typically be on the order of several tenths of a degree of inclination and several degrees of azimuth.
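The practical difference between station-random and survey-systematic errors can be illustrated with a toy Monte Carlo walk (all parameter values below are illustrative, not taken from the paper): a systematic azimuth bias accumulates lateral position error roughly linearly with along-hole distance, while independent per-station errors partially cancel.

```python
import math
import random

def lateral_spread(systematic, n_trials=2000, stations=50,
                   step_m=30.0, sigma_deg=0.5, seed=1):
    """Toy 2-D walk: accumulate lateral offset from small azimuth errors.

    If systematic is True, one error is drawn per simulated survey and
    applied at every station; otherwise each station gets an independent
    draw. Returns the RMS lateral offset over all trials, in metres.
    """
    rng = random.Random(seed)
    offsets = []
    for _ in range(n_trials):
        bias = rng.gauss(0.0, math.radians(sigma_deg))
        offset = 0.0
        for _ in range(stations):
            err = bias if systematic else rng.gauss(0.0, math.radians(sigma_deg))
            offset += step_m * err  # small-angle lateral displacement per step
        offsets.append(offset)
    return math.sqrt(sum(x * x for x in offsets) / n_trials)

# Systematic errors dominate: ~N*sigma growth vs ~sqrt(N)*sigma for random
print(lateral_spread(systematic=True), lateral_spread(systematic=False))
```

This reproduces, in miniature, Warren's observation that systematic errors cause much larger positional errors than random errors of the same per-station magnitude.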
Errors resulting from external effects (such as axial misalignment of running gear, deflections of the drilling assembly, or uncertainty in the reference direction) could not easily be distinguished from those associated with the directional sensors. For these systems, Wolff and de Wardt's simple empirical formulations were quite appropriate. The new generation of inertial-grade gyroscopic and solid-state magnetic sensors, however, is capable of resolving direction to the order of 0.05 degrees of inclination and 0.1 degrees of azimuth. The same general external error-producing mechanisms still apply and have a much greater impact on the overall performance of the surveying system. Consequently, a more rigorous approach is required to quantify both the external effects and the error characteristics of the new high-accuracy gyroscopic systems and solid-state magnetic devices. Stephenson analyzed sensor systems and showed how their performance is a function of not only borehole deviation but also geographical location. The formulation of the performance model described below enables these dependencies to be considered explicitly. The models' complexity should not deter their use, given the wide availability of computer systems. Instrument Performance Models An instrument performance model is a mathematical description of the error sources specific to a particular survey instrument. The model enables one to calculate the measurement uncertainty for an instrument under specific downhole conditions. In computer terms, an instrument performance model is a subroutine that presents a standard interface to a range of applications. This idea is illustrated in Fig. 1.
The concept of a standard interface is very important because, by decoupling the instrument performance description from its end use, the model can be derived, maintained, modified, or extended independently of the main code within which it is embedded. Incorporation of instrument-specific terms directly into the mathematics for error propagation is a significant practical shortcoming of the Wolff-de Wardt analysis. To compute values of measurement uncertainty, an instrument performance model requires two sets of data: a list of parameters to calibrate the model and specific details of the well at the point of the measurement. The parameter list is a set of constants that describes the magnitude of the different error sources applicable to the particular survey. Representative values of parameters are given for a typical attitude-referencing gyroscopic system (Table 1) and for a magnetic measurement-while-drilling (MWD) device (Table 2). The parameters may vary according to how the tool is run. Consider, for example, geomagnetic uncertainty and drillstring magnetic interference. Where the geomagnetic field is known accurately from direct on-site measurement, drillstring interference-compensation methods can be applied validly. Under these conditions, performance levels approaching those of gyroscopic survey tools can be obtained from magnetic instruments.
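As a rough illustration of the "subroutine with a standard interface" idea, a performance model can be coded as a class that takes a calibration parameter list and returns measurement uncertainties for a given well point. The error terms and parameter names below are invented for illustration only; they are not the paper's validated model, though they echo its observation that magnetic-tool azimuth error grows with inclination and geographic latitude.

```python
import math
from dataclasses import dataclass

@dataclass
class WellPoint:
    """Conditions at a survey station (field names are illustrative)."""
    measured_depth_m: float
    inclination_deg: float
    azimuth_deg: float
    latitude_deg: float

class PerformanceModel:
    """Standard interface: calibration parameters in, uncertainty out."""

    def __init__(self, params):
        self.params = params  # magnitudes of the instrument's error sources

    def uncertainty(self, pt):
        """Return (inclination, azimuth) 1-sigma uncertainties in degrees.

        Purely illustrative error combination: inclination error from
        misalignment plus sensor noise; azimuth error growing with
        inclination and latitude, as is typical of magnetic tools.
        """
        inc = math.radians(pt.inclination_deg)
        d_inc = self.params["misalignment_deg"] + self.params["sensor_inc_deg"]
        lat_factor = max(math.cos(math.radians(pt.latitude_deg)), 0.1)
        d_azi = (self.params["sensor_azi_deg"]
                 + self.params["ref_field_deg"] / lat_factor * math.sin(inc))
        return d_inc, d_azi

# Hypothetical MWD parameter list; applications see only uncertainty()
mwd = PerformanceModel({"misalignment_deg": 0.1, "sensor_inc_deg": 0.05,
                        "sensor_azi_deg": 0.25, "ref_field_deg": 0.3})
print(mwd.uncertainty(WellPoint(1500.0, 45.0, 120.0, 58.0)))
```

The point of the interface is that an error-propagation or collision-scan application calls only `uncertainty()`, so the parameter list and error terms can be recalibrated or replaced without touching the main code.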
Summary Subsurface separation criteria have evolved empirically over the years. They still are based largely on untested assumptions about safety factors, comfort values, and survey tool accuracy. A mathematical analysis of the probability of collision combined with a decision tree describing the consequences provides a method of risk evaluation. The mathematics can be simplified under certain special assumptions, allowing key features of the problem to be illustrated. A flow chart of the directional-drilling tolerance setting procedure shows how the methods described can be used in daily well-planning operations. Introduction Formal methods for planning deviated wells, determining safe interwell separations, and executing drilling programs are poorly described in the literature. Basic geometrical calculations are covered in textbooks, but the more detailed procedures for operating on multiwell platforms have evolved gradually over the years and are largely undocumented. Two approaches commonly are used to establish safe well separations. 1. A set of fixed separation guidelines is defined as a function of depth. This method has the major advantage of simplicity. The rules may be empirical or may have been derived from an analysis of survey errors. The principal difficulty with this method is that there is no way to assess whether the values are conservative. 2. Ellipses of uncertainty can be calculated and separation criteria can be based on a minimum allowable distance between ellipses. While appearing to be more "scientific," many uncertainty models are not formally validated, and the use of confidence intervals appears to be quite arbitrary. Consequently, users are again unable to assess whether the predictions are conservative.
In the face of the twin pressures of safety and cost-effectiveness, neither procedure allows the planner to balance the sizes of tolerances, costs of surveying, efficiency of drilling, loss of production, and probability of collision against the consequences of a collision. Therefore, there is good justification for developing procedures that enable engineers to demonstrate the optimum operational plan when the consequences of undetected errors have been minimized. This problem has five solution components: a set of formally validated models of instrument behavior; a mathematical estimate of the probability of intersection between two wells at a specified separation for a given level of survey uncertainty; a method to establish the maximum tolerable probability of intersection between two wells; a procedure for defining subsurface tolerances based on the intersection criteria; and a management structure for plan execution at the wellsite. The purpose of this paper is to describe a risk-analysis-based solution to the well-collision problem embodying three new ideas: a method to derive maximum tolerable intersection criteria, calculation of intersection probability between wells, and a method to integrate these solutions into the directional-well planning process. Risk Analysis The process of risk analysis involves three steps: devising an event/outcome tree, quantifying the consequences of different branches, and assessing whether the resulting risks are tolerable.
Inspection of the well-intersection problem indicates that the most important considerations are the fluids in the well; the flowing characteristics of the well and its pressure regime; the nature of any barriers to the blowout, such as a blowout preventer (BOP) or subsurface safety valve (SSSV); properties of the drilling well, including mud weight and fracture gradient; and the probability of ignition of the blowout. The problem may be analyzed by means of an event/outcome tree (Fig. 1 and Tables 1 and 2).
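To give a feel for the intersection-probability component, here is a deliberately crude one-dimensional sketch: treat the relative position error between the two wells as a single normally distributed quantity and compute the chance that it exceeds the available clearance. Real collision scans are three-dimensional and use the full covariance of both surveys; the function name, default radius, and all numbers below are illustrative assumptions, not the paper's method.

```python
import math

def collision_probability(separation_m, combined_sigma_m, well_radius_m=0.3):
    """Crude 1-D sketch of intersection probability.

    separation_m: center-to-center distance between the two wellbores.
    combined_sigma_m: 1-sigma of the combined relative position error,
    assumed normally distributed. Returns a two-sided tail probability.
    """
    clearance = separation_m - 2.0 * well_radius_m
    if clearance <= 0.0:
        return 1.0  # nominal paths already overlap
    z = clearance / combined_sigma_m
    # P(|error| > clearance) for a standard normal variate
    return math.erfc(z / math.sqrt(2.0))

# 10 m separation with 3 m combined uncertainty: small but nonzero risk
print(f"{collision_probability(10.0, 3.0):.4f}")
```

Even this toy version shows the key behavior the paper exploits: the probability falls off very rapidly with the ratio of clearance to survey uncertainty, so tolerances can be set against a maximum tolerable probability rather than a fixed distance.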
The minimum curvature method has emerged as the accepted industry standard for the calculation of 3D directional surveys. Using this model, the well's trajectory is represented by a series of circular arcs and straight lines. Collections of other points, lines, and planes can be used to represent features such as adjacent wells, lease lines, geological targets, and faults. The relationships between these objects have simple geometrical interpretations, making them amenable to mathematical treatment. The calculations are now used extensively in 3D imaging and directional collision scans, making them critical for both business and safety. However, references for the calculations are incomplete, scattered in the literature, and have no systematic mathematical treatment. These features make programming a consistent and reliable set of algorithms more difficult. Increased standardization is needed. Investigation shows that iterative schemes have been used in situations in which explicit solutions are possible. Explicit calculations are preferred because they confer numerical predictability and stability. Though vector methods were frequently adopted in the early stages of the published derivations, opportunities for simplification were missed because of premature translation to Cartesian coordinates. This paper contains a compendium of algorithms based on the minimum curvature method (includes coordinate reference frames, toolface, interpolation, intersection with a target plane, minimum and maximum true vertical depth (TVD) in a horizontal section, point closest to a circular arc, survey station to a target position with and without the direction defined, nudges, and steering runs). Consistent vector methods have been used throughout with improvements in mathematical efficiency, stability, and predictability of behavior. The resulting algorithms are also simpler and more cost effective to code and test.
This paper describes the practical context in which each of the algorithms is applied and enumerates some key tests that need to be performed.
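The basic minimum curvature step is compact enough to sketch directly. The function below computes the north/east/TVD increments between two survey stations using the standard dogleg angle and ratio factor; this is the widely published form of the method, though the implementation details (units, the clamp on the arccosine argument, the small-angle cutoff) are choices made here for illustration.

```python
import math

def minimum_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    """One minimum-curvature step between two survey stations.

    Inclinations and azimuths in degrees, measured depths in metres.
    Returns the (north, east, TVD) increments in metres, modeling the
    path between stations as a circular arc.
    """
    i1, a1 = math.radians(inc1), math.radians(azi1)
    i2, a2 = math.radians(inc2), math.radians(azi2)
    # Dogleg angle between the two station direction vectors
    cos_dl = (math.cos(i2 - i1)
              - math.sin(i1) * math.sin(i2) * (1.0 - math.cos(a2 - a1)))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))  # clamp against rounding
    # Ratio factor: arc-to-chord correction; 1 in the straight-line limit
    rf = 1.0 if dl < 1e-9 else (2.0 / dl) * math.tan(dl / 2.0)
    half = 0.5 * (md2 - md1) * rf
    d_north = half * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2))
    d_east = half * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2))
    d_tvd = half * (math.cos(i1) + math.cos(i2))
    return d_north, d_east, d_tvd

# A vertical hole section: all displacement goes to TVD
print(minimum_curvature_step(0.0, 0.0, 0.0, 30.0, 0.0, 0.0))  # → (0.0, 0.0, 30.0)
```

A useful check is a quarter-circle build from vertical to horizontal over 100 m of measured depth: both the northing and the TVD increment should equal the arc radius, 2 × 100/π ≈ 63.66 m.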