A dynamic server migration and consolidation algorithm is introduced. The algorithm is shown to provide substantial improvement over static server consolidation in reducing the amount of required capacity and the rate of service level agreement (SLA) violations. Benefits accrue for workloads that are variable and can be forecast over intervals shorter than the time scale of demand variability. The management algorithm reduces the physical capacity required to support a specified rate of SLA violations for a given workload by as much as 50% compared with a static consolidation approach. At fixed capacity, the rate of SLA violations may be reduced by up to 20%. The results are based on hundreds of production workload traces across a variety of operating systems, applications, and industries.
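The core contrast can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's algorithm: synthetic workloads with staggered daily peaks, first-fit-decreasing packing as the consolidation step, and per-interval re-packing standing in for forecast-driven migration.

```python
import random

random.seed(0)

# Hypothetical synthetic workloads: hourly CPU demand for 20 servers over a
# week, each with a staggered 9-hour daily busy period (illustrative only).
HOURS = 24 * 7
workloads = []
for _ in range(20):
    off = random.randrange(24)
    workloads.append([
        30 + 25 * random.random() + (20 if (h + off) % 24 in range(9, 18) else 0)
        for h in range(HOURS)
    ])
HOST_CAPACITY = 100.0  # CPU units per physical host

def first_fit(demands, capacity):
    """Pack demands onto hosts with first-fit decreasing; return host count."""
    hosts = []
    for d in sorted(demands, reverse=True):
        for i, used in enumerate(hosts):
            if used + d <= capacity:
                hosts[i] += d
                break
        else:
            hosts.append(d)
    return len(hosts)

# Static consolidation: size each placement for the peak demand of each workload.
static_hosts = first_fit([max(w) for w in workloads], HOST_CAPACITY)

# Dynamic consolidation: re-pack every interval using the (assumed perfectly
# forecast) demand for that interval; required capacity is the worst interval.
dynamic_hosts = max(
    first_fit([w[h] for w in workloads], HOST_CAPACITY) for h in range(HOURS)
)

print(static_hosts, dynamic_hosts)
```

Because the per-workload peaks are not simultaneous, packing by interval rather than by peak needs fewer hosts, which is the intuition behind the capacity savings reported above.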
A common problem in experimental data analysis is to locate the position of a signal to an accuracy substantially finer than the signal width itself. By applying maximum likelihood estimation to this problem, this paper derives theoretical limits on the ability to locate signal position. The limiting error in position measurement is shown to be a simple function of the instrument resolution, the density of sample points, and the signal-to-noise ratio of the data. An interesting conclusion is that position information on a much finer scale than the minimum instrument sampling interval is contained in data of modest signal-to-noise ratio. The common procedure of excluding the portion of the data lying below an amplitude threshold, to guard against background fluctuations, is incorporated in the maximum likelihood analysis. It is shown that selection of the optimum amplitude threshold level depends on the type of noise present in the data, and can be an important factor in position accuracy. The analytical results exhibit close agreement with Monte Carlo simulations of position accuracy in the presence of noise.
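The sub-sampling-interval accuracy claim is easy to reproduce numerically. The following is a minimal sketch under assumed conditions (a Gaussian signal of known width and amplitude, unit sample spacing, additive Gaussian noise), not the paper's exact setup; with Gaussian noise the maximum likelihood position fit reduces to least squares, and the result can be compared with the Cramér-Rao lower bound.

```python
import math
import random

random.seed(1)

WIDTH = 2.0   # instrument resolution (Gaussian sigma), in sample units
AMP = 1.0
NOISE = 0.1   # noise standard deviation -> SNR = 10
GRID = [i - 15 for i in range(31)]  # sample points at unit spacing

def signal(x0):
    return [AMP * math.exp(-(x - x0) ** 2 / (2 * WIDTH ** 2)) for x in GRID]

def mle_position(data):
    """ML fit of position alone: for Gaussian noise, minimize squared error
    via a coarse grid search followed by golden-section refinement."""
    def sse(x0):
        return sum((d - m) ** 2 for d, m in zip(data, signal(x0)))
    x_best = min((x / 10 for x in range(-50, 51)), key=sse)
    lo, hi = x_best - 0.1, x_best + 0.1
    for _ in range(40):
        m1 = lo + (hi - lo) * 0.382
        m2 = lo + (hi - lo) * 0.618
        if sse(m1) < sse(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Monte Carlo: RMS position error over many noisy realizations.
true_x, trials, errs = 0.3, 200, []
for _ in range(trials):
    data = [s + random.gauss(0, NOISE) for s in signal(true_x)]
    errs.append(mle_position(data) - true_x)
rms = math.sqrt(sum(e * e for e in errs) / trials)

# Cramer-Rao lower bound for position with known amplitude and width:
# var(x0) >= NOISE^2 / sum_i (d model_i / d x0)^2
deriv_sq = sum(((x - true_x) / WIDTH ** 2 * s) ** 2
               for x, s in zip(GRID, signal(true_x)))
crlb = NOISE / math.sqrt(deriv_sq)

print(f"RMS error {rms:.3f} samples, CRLB {crlb:.3f} samples")
```

At SNR 10 the RMS error comes out well below one sampling interval, consistent with the abstract's conclusion that fine-scale position information survives modest signal-to-noise ratios.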
The present state of high-resolution displacement measuring interferometry is reviewed. Factors which determine the accuracy, linearity and repeatability of nanometre-scale measurements are emphasized. Many aspects of interferometry are discussed, including general metrology and alignment errors, as well as path length errors. Optical mixing and the nonlinear relation between phase and displacement are considered, as well as the influence of diffraction on accuracy. Environmental stability is a major factor in the repeatability and accuracy of measurement. It is difficult to obtain a relative measurement accuracy of 10^-7 when working in air. Several approaches to improving this situation are described, including multiwavelength interferometry. Recent measurements of the short- and long-term frequency stability of lasers are summarized. Optical feedback is a subtle but important source of frequency destabilization, and methods of detection and isolation are reviewed. Calibration of phase measuring electronics used for subfringe interpolation is included. Progress in 'in situ' identification of error sources and methods of validating accuracy are emphasized.
Laser displacement interferometry is used extensively in precision equipment for semiconductor manufacture. In these applications it is often necessary to introduce a high-velocity airflow to the measurement environment to minimize the density of airborne particulate contaminants. The performance of the heterodyne interferometer is degraded by the resulting fluctuations in the index of refraction along the beam path. The magnitude, correlation length, and probability distribution of the optical path length (OPL) fluctuations are measured for several airflow conditions. The data are interpreted in terms of the path length errors for some common interferometric configurations. The OPL fluctuations are generally less significant than the systematic sources of measurement error. A more fundamental limit on the accuracy of the heterodyne Michelson interferometer is the periodic nonlinearity caused by leakage of the frequency components in the beamsplitter. The effect is discussed in detail. A direct observation of the nonlinearity is reported. The magnitude of the effect is approximately λ/64 for the beamsplitters used in this experiment. A simple technique which indicates the presence and magnitude of the nonlinearity is described.
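The scale of such a periodic nonlinearity can be illustrated with a simple first-order model. This is an assumption, not the paper's derivation: leakage of the unwanted frequency component with relative amplitude a is taken to add a term a·sin(φ) to the measured phase, producing a displacement error that cycles once per half-wavelength of motion.

```python
import math

LAMBDA = 633e-9   # He-Ne wavelength in metres (illustrative choice)
a = 0.2           # hypothetical relative leakage amplitude, in radians

def measured_displacement(true_disp):
    phi = 4 * math.pi * true_disp / LAMBDA   # plain Michelson: phi = 4*pi*d/lambda
    phi_meas = phi + a * math.sin(phi)       # first-order periodic nonlinearity
    return phi_meas * LAMBDA / (4 * math.pi)

# Peak error over one fringe (half a wavelength) of mirror motion:
peak = max(abs(measured_displacement(d * LAMBDA / 2000) - d * LAMBDA / 2000)
           for d in range(1000))
print(f"peak periodic error ~ lambda/{LAMBDA / peak:.0f}")
```

In this model the peak displacement error is a·λ/(4π), so a leakage phase amplitude of a few tenths of a radian already produces an error on the order of λ/60, comparable in scale to the λ/64 observation reported above.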
Workflow technology is an emerging paradigm for systematic modeling and orchestration of job flow for enterprise and scientific applications. This paper introduces BPEL4Job, a BPEL-based design for fault handling of job flow in a distributed computing environment. The features of the proposed design include: a two-stage approach for job flow modeling that separates base flow structure from fault-handling policy, a generic job proxy that isolates the interaction complexity between the flow engine and the job scheduler, and a method for migrating flow instances between different flow engines for fault handling in a distributed system. An implementation of the design based on a set of industrial products from IBM is presented and validated using a Montage application.
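The separation described above can be sketched in miniature. The names here (JobProxy, RetryPolicy, the toy scheduler) are illustrative assumptions, not BPEL4Job's actual API: stage one declares only the base flow structure, stage two attaches a fault-handling policy per step, and a proxy mediates between the flow and the scheduler.

```python
# Stage 1: base flow structure only -- no fault handling here.
base_flow = ["stage-in", "compute", "stage-out"]

class RetryPolicy:
    """Stage 2 artifact: fault-handling policy kept separate from the flow."""
    def __init__(self, max_retries):
        self.max_retries = max_retries

policies = {
    "stage-in": RetryPolicy(2),
    "compute": RetryPolicy(5),
    "stage-out": RetryPolicy(2),
}

class JobProxy:
    """Isolates the flow engine from the job scheduler: submits a job and
    applies whatever fault-handling policy is attached to that step."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def run(self, job, policy):
        for attempt in range(policy.max_retries + 1):
            if self.scheduler(job):
                return f"{job}: ok (attempt {attempt + 1})"
        raise RuntimeError(f"{job}: failed after {policy.max_retries + 1} attempts")

calls = {"n": 0}
def flaky_scheduler(job):
    """Deterministic stand-in for a real job scheduler: every third
    submission fails, so retries are exercised."""
    calls["n"] += 1
    return calls["n"] % 3 != 1

proxy = JobProxy(flaky_scheduler)
results = [proxy.run(step, policies[step]) for step in base_flow]
print(results)
```

Because the policies live outside the flow definition, they can be changed, or applied by a different engine after a flow-instance migration, without touching the base flow.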