The international LA-ICP-MS U-(Th-)Pb geochronology community has defined new standards for the determination of U-(Th-)Pb ages. A new workflow defines the appropriate propagation of uncertainties for these data, identifying their random and systematic components. Only uncertainties relating to random error should be used in weighted mean calculations of population ages; uncertainty components for systematic errors are propagated after this stage, preventing their erroneous reduction. Following this improved uncertainty propagation protocol, data can be compared at different uncertainty levels to better resolve age differences. New reference values for commonly used zircon, monazite and titanite reference materials are defined (based on ID-TIMS) after removing corrections for common lead and the effects of excess 230Th. These values more accurately reflect the material sampled during the determination of calibration factors by LA-ICP-MS analysis. We recommend representing data graphically only with uncertainty ellipses at 2s, and reporting or citing validation data alongside sample data in publications. New data-reporting standards are defined to help improve the peer-review process. With these improvements, LA-ICP-MS U-(Th-)Pb data become more robust, accurate, better documented and better quantified, directly contributing to their improved scientific interpretation.
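The two-stage protocol described above can be sketched in a few lines: only the random (per-spot) uncertainties enter the weighted mean, and the shared systematic component is added in quadrature afterwards so averaging cannot erroneously shrink it. The ages and uncertainty values below are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

def weighted_mean_age(ages, random_2s, systematic_2s):
    """Weighted mean of spot ages using only their random (2s) uncertainties;
    the shared systematic component is propagated after averaging, so it is
    not reduced by the number of analyses."""
    weights = [1.0 / s**2 for s in random_2s]
    mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
    sigma_random = math.sqrt(1.0 / sum(weights))                  # shrinks with n
    sigma_total = math.sqrt(sigma_random**2 + systematic_2s**2)   # does not
    return mean, sigma_random, sigma_total

# Hypothetical zircon spot ages (Ma) with 2s random uncertainties,
# plus a shared 0.8 Ma systematic (e.g. calibration) uncertainty
ages = [337.2, 336.8, 337.5, 336.9]
rand = [1.2, 1.1, 1.3, 1.2]
mean, s_rand, s_tot = weighted_mean_age(ages, rand, systematic_2s=0.8)
```

Quoting the result at both uncertainty levels (random-only versus total) is what allows the comparisons at different levels that the workflow recommends.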
The presence of multiple faults in a program can inhibit the ability of fault-localization techniques to locate those faults. This problem occurs for two reasons: when a program fails, the number of faults is, in general, unknown; and certain faults may mask or obfuscate other faults. This paper presents our approach to this problem, which leverages the well-known advantages of parallel workflows to reduce the time-to-release of a program. Our approach consists of a technique that enables more effective debugging in the presence of multiple faults and a methodology that enables multiple developers to debug multiple faults simultaneously. The paper also presents an empirical study demonstrating that our parallel-debugging technique and methodology can yield a dramatic decrease in total debugging time compared to a one-fault-at-a-time, or conventional sequential, approach.
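The abstract does not spell out the clustering mechanism, but the core idea of partitioning failing executions into fault-focused tasks can be illustrated with a minimal sketch: group failing tests whose fault-localization results (here, sets of suspicious statement IDs) overlap strongly, so each group can be handed to a different developer. The greedy Jaccard-similarity grouping, the threshold, and the data below are all hypothetical stand-ins for the paper's actual technique.

```python
def cluster_failures(failing_tests, threshold=0.5):
    """Greedily group failing tests whose suspicious-statement sets overlap
    (Jaccard similarity >= threshold). Each cluster is a candidate
    fault-focused task for one developer. Illustrative only."""
    clusters = []  # list of (representative statement set, [test names])
    for name, stmts in failing_tests.items():
        for rep, members in clusters:
            jaccard = len(stmts & rep) / len(stmts | rep)
            if jaccard >= threshold:
                members.append(name)
                rep |= stmts        # grow the cluster's representative set
                break
        else:
            clusters.append((set(stmts), [name]))
    return [members for _, members in clusters]

# Failing tests mapped to suspicious statement IDs (hypothetical data)
fails = {"t1": {10, 11, 12}, "t2": {10, 11, 13}, "t3": {40, 41}}
groups = cluster_failures(fails)  # t1 and t2 likely share a fault; t3 is separate
```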
High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty have become increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret, without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them through intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for comparison of data with other U-Pb laboratories and with other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
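The two ingredients named in the abstract, linear (matrix-based) uncertainty propagation and its verification by Monte Carlo simulation, can be illustrated on the standard 206Pb*/238U date equation t = ln(1 + r)/lambda. This is a toy single-output case of the paper's full covariance treatment, with hypothetical input values; r and the decay constant are assumed uncorrelated here, so the covariance matrix is diagonal.

```python
import numpy as np

LAMBDA_238 = 1.55125e-10  # 238U decay constant (1/yr), Jaffey et al. value

def date_206_238(r, lam=LAMBDA_238):
    """206Pb*/238U date (yr) from the radiogenic ratio r: t = ln(1 + r)/lam."""
    return np.log(1.0 + r) / lam

# Hypothetical measured ratio and 1-sigma input uncertainties
r, sigma_r = 0.054, 0.00005
sigma_lambda = 0.00083e-10  # decay-constant (systematic) uncertainty

# First-order linear propagation: sigma_t^2 = J Sigma J^T,
# with J the row of partial derivatives and Sigma the input covariance
t = date_206_238(r)
J = np.array([1.0 / (LAMBDA_238 * (1.0 + r)),   # dt/dr
              -t / LAMBDA_238])                  # dt/dlambda
Sigma = np.diag([sigma_r**2, sigma_lambda**2])
sigma_t_linear = np.sqrt(J @ Sigma @ J)

# Monte Carlo verification, mirroring the paper's validation strategy
rng = np.random.default_rng(0)
rs = rng.normal(r, sigma_r, 200_000)
lams = rng.normal(LAMBDA_238, sigma_lambda, 200_000)
sigma_t_mc = np.std(date_206_238(rs, lams))
```

For this well-behaved (nearly linear) function the two estimates agree closely; the full algorithm generalizes J and Sigma to many inputs and outputs, carrying every covariance term through the intermediate calculations.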
In the past decade, major advancements in the precision and accuracy of U-Pb geochronology, stemming from improved sample pretreatment and refined measurement techniques, have revealed previously unresolvable discrepancies among analyses from different laboratories. One way to evaluate and resolve many of these discrepancies is to adopt a common software platform that standardizes data-processing protocols, enabling robust interlaboratory comparison. We present the results of a collaboration to develop cyber infrastructure for high-precision U-Pb geochronology based on analyzing accessory minerals by isotope dilution-thermal ionization mass spectrometry. This cyber infrastructure implements an architecture specifying the workflows of data acquisition, statistical filtering, analysis and interpretation, publication, community-based archiving, and the compilation and comparison of data from different laboratories. The backbone of the cyber infrastructure consists of two open-source software programs: Tripoli and U-Pb_Redux.
A program's behavior is ultimately the collection of all its executions. This collection is diverse, unpredictable, and generally unbounded, and is thus especially suited to statistical analysis and machine learning techniques. The primary focus of this paper is the automatic classification of program behavior using execution data. Prior work on classifiers for software engineering adopts a classical batch-learning approach. In contrast, we explore an active-learning paradigm for behavior classification, in which the classifier is trained incrementally on a series of labeled data elements. Second, we explore the thesis that certain features of program behavior are stochastic processes that exhibit the Markov property, and that the resultant Markov models of individual program executions can be automatically clustered into effective predictors of program behavior. We present a technique that models program executions as Markov models, and a clustering method for Markov models that aggregates multiple program executions into effective behavior classifiers. We evaluate an application of active learning to the efficient refinement of our classifiers by conducting three empirical studies that explore a scenario illustrating automated test-plan augmentation.
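The first step the abstract describes, modeling a single execution as a Markov model, amounts to estimating a transition-probability matrix from the sequence of observed events. A minimal sketch, using hypothetical event traces (the paper's actual features, e.g. branch profiles, and its clustering method are not reproduced here):

```python
from collections import defaultdict

def markov_model(trace):
    """First-order Markov model of one execution: for each event a,
    the empirical probability of each event b observed immediately after a."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
            for a, nexts in counts.items()}

# Two hypothetical event traces; executions with similar transition
# structure could then be clustered into a behavior classifier.
m1 = markov_model(["init", "read", "parse", "read", "parse", "done"])
m2 = markov_model(["init", "read", "error"])
```

Here m1 and m2 disagree sharply on what follows "read" (always "parse" versus always "error"), which is exactly the kind of difference a distance measure between models can exploit when clustering passing and failing behaviors.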