Executive Summary

The Hanford Site in Washington State manages 177 underground storage tanks containing approximately 250,000 m³ of waste generated during past defense reprocessing and waste management operations. These tanks contain a mixture of sludge, saltcake, and supernatant liquids. The insoluble sludge fraction of the waste consists of metal oxides and hydroxides and contains the bulk of many radionuclides, such as the transuranic components and ⁹⁰Sr. The saltcake, generated by extensive evaporation of aqueous solutions, consists primarily of dried sodium salts. The supernates consist of concentrated (5-15 M) aqueous solutions of sodium and potassium salts. The 177 storage tanks include 149 single-shell tanks (SSTs) and 28 double-shell tanks (DSTs).

Ultimately the wastes need to be retrieved from the tanks for treatment and disposal. The SSTs contain minimal amounts of liquid wastes, and the Tank Operations Contractor is continuing a program of moving solid wastes from SSTs to interim storage in the DSTs. The Hanford DST system provides the staging location for waste feed delivery to the Department of Energy (DOE) Office of River Protection's (ORP) Hanford Tank Waste Treatment and Immobilization Plant (WTP). The WTP is being designed and constructed to pretreat and then vitrify a large portion of the wastes in Hanford's 177 underground waste storage tanks.

The retrieval, transport, treatment, and disposal operations involve the handling of a wide range of slurries. Solids in the slurries have a wide range of particle size, density, and chemical characteristics. Depending on the solids concentration, the slurries may exhibit Newtonian or non-Newtonian rheology.

Knowledge of the physical and rheological properties of the wastes is a key component of the successful design and implementation of the waste processing facilities. These properties are used in engineering calculations in facility designs.
Knowledge of the waste properties is also necessary for the development and fabrication of simulants that are used in testing at various scales. The expense and hazards associated with obtaining and using actual wastes dictate that simulants be used at many stages in the testing and scale-up of process equipment. The results presented in this report should be useful for estimating process and equipment performance and provide a technical basis for the development of simulants for testing.

The purpose of this document is to provide an updated summary of the Hanford waste characterization data pertinent to safe storage, retrieval, transport, and processing operations for both the tank farms and the WTP, and thereby to identify gaps in understanding and data. Important waste parameters for these operations are identified by examining examples of relevant mathematical models of selected phenomena. The data sets considered (UDS composition and particle density, UDS primary particle size and shape, UDS particle size distributions [PSDs], and estimated particle size and density distributions [PSDDs]) and Poloski et al. (2007) ...
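The rheological distinction drawn above can be made concrete with a minimal sketch. The following compares a Newtonian fluid with the Bingham-plastic model commonly used for non-Newtonian slurries; all parameter values are arbitrary placeholders for illustration, not Hanford waste data.

```python
# Illustrative Newtonian vs. Bingham-plastic rheology models for slurries.
# Parameter values below are invented placeholders, not measured waste data.

def newtonian_stress(shear_rate, viscosity):
    """Shear stress (Pa) of a Newtonian fluid: tau = mu * gamma_dot."""
    return viscosity * shear_rate

def bingham_stress(shear_rate, yield_stress, plastic_viscosity):
    """Shear stress (Pa) of a flowing Bingham plastic:
    tau = tau_y + kappa * gamma_dot (valid once the yield stress is exceeded)."""
    return yield_stress + plastic_viscosity * shear_rate

if __name__ == "__main__":
    for gamma_dot in (1.0, 10.0, 100.0):  # shear rate, 1/s
        tau_n = newtonian_stress(gamma_dot, viscosity=0.01)        # 10 mPa·s
        tau_b = bingham_stress(gamma_dot, yield_stress=5.0,
                               plastic_viscosity=0.01)             # 5 Pa yield
        print(f"gamma_dot={gamma_dot:6.1f} 1/s  "
              f"Newtonian={tau_n:7.3f} Pa  Bingham={tau_b:7.3f} Pa")
```

The yield-stress term is what makes dilute-slurry correlations fail at high solids loadings, which is why the solids concentration governs which model applies.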
The combination of mass and normalized elution time (NET) of a peptide identified by liquid chromatography-mass spectrometry (LC-MS) measurements can serve as a unique signature for that peptide. However, the specificity of an LC-MS measurement depends upon the complexity of the proteome (i.e., the number of possible peptides) and the accuracy of the LC-MS measurements. In this work, theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity. Accuracy of the LC-MS measurement of mass-NET pairs (on a 0 to 1.0 NET scale) was described by bivariate normal sampling distributions centered on the peptide signatures. Measurement accuracy (i.e., mass and NET standard deviations of ±0.1, 1, 5, and 10 ppm, and ±0.01 and 0.05, respectively) was varied to evaluate improvements in process quality. The spatially localized confidence score, a conditional probability of peptide uniqueness, formed the basis for the peptide identification. Application of this approach to organisms with comparatively small proteomes, such as Deinococcus radiodurans, shows that modest mass and elution time accuracies are generally adequate for confidently identifying most peptides. For more complex proteomes, more accurate measurements are required. However, the study suggests that the majority of proteins for even the human proteome should be identifiable with reasonable confidence by using LC-MS measurements with mass accuracies within ±1 ppm and high efficiency separations having elution time measurements within ±0.01 NET.
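A confidence score of this kind can be sketched as a conditional probability under bivariate normal measurement error. The following is a simplified illustration, not the paper's implementation: the two-peptide "database", the sigma values, and the use of absolute mass units (rather than ppm) are all assumptions made for the example.

```python
import math

def bivariate_normal_pdf(dm, dnet, sigma_m, sigma_net):
    """Density of independent bivariate normal measurement errors at
    mass offset `dm` and NET offset `dnet`."""
    z = (dm / sigma_m) ** 2 + (dnet / sigma_net) ** 2
    return math.exp(-0.5 * z) / (2.0 * math.pi * sigma_m * sigma_net)

def confidence_score(measured, candidates, sigma_m, sigma_net):
    """Conditional probability that `measured` (mass, NET) arose from its
    best-matching candidate, given that it arose from one of `candidates`."""
    dens = [bivariate_normal_pdf(measured[0] - m, measured[1] - n,
                                 sigma_m, sigma_net)
            for m, n in candidates]
    total = sum(dens)
    return max(dens) / total if total > 0.0 else 0.0

# Hypothetical two-peptide database: well-separated signatures yield a
# near-unique match, so the score approaches 1.
db = [(1000.000, 0.30), (1000.500, 0.70)]
score = confidence_score((1000.001, 0.31), db, sigma_m=0.01, sigma_net=0.01)
print(f"confidence score: {score:.4f}")
```

Tightening sigma_m and sigma_net (i.e., improving instrument and separation accuracy) shrinks each peptide's neighborhood and pushes the score toward 1, which is the mechanism behind the accuracy requirements quoted above.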
Comparing a protein's concentrations across two or more treatments is the focus of many proteomics studies. A frequent source of measurements for these comparisons is a mass spectrometry (MS) analysis of a protein's peptide ions separated by liquid chromatography (LC) following its enzymatic digestion. Unfortunately, LC-MS identification and quantification of equimolar peptides can vary significantly due to their unequal digestion, separation, and ionization. This unequal measurability of peptides, the largest source of LC-MS nuisance variation, stymies confident comparison of a protein's concentration across treatments. Our objective is to introduce a mixed-effects statistical model for comparative LC-MS proteomics studies. We describe LC-MS peptide abundance with a linear model featuring pivotal terms that account for unequal peptide LC-MS measurability. We advance fitting this model to an often incomplete LC-MS data set with REstricted Maximum Likelihood (REML) estimation, producing estimates of model goodness-of-fit, treatment effects, standard errors, confidence intervals, and protein relative concentrations. We illustrate the model with an experiment featuring a known dilution series of a protein mixture from the filamentous ascomycete fungus Trichoderma reesei. For 781 of the 1546 T. reesei proteins with sufficient data coverage, the fitted mixed-effects models capably described the LC-MS measurements. The LC-MS measurability terms effectively accounted for this major source of uncertainty. Ninety percent of the relative concentration estimates were within 0.5-fold of the true relative concentrations. Akin to the common ratio method, this model also produced biased estimates, albeit less biased. Bias decreased significantly, both absolutely and relative to the ratio method, as the number of observed peptides per protein increased.
Mixed-effects statistical modeling offers a flexible, well-established methodology for comparative proteomics studies integrating common experimental designs with LC-MS sample processing plans. It favorably accounts for the unequal LC-MS measurability of peptides and produces informative quantitative comparisons of a protein's concentration across treatments with objective measures of uncertainties.
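A fixed-effects simplification of this idea can be sketched on synthetic data. The example below uses ordinary least squares rather than REML (a real analysis would fit random effects with a mixed-model package), with peptide-specific terms standing in for unequal LC-MS measurability; the peptide effects, noise level, and missing observation are all simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pep = 6
true_log2_fc = 1.0                        # 2-fold concentration difference
pep_eff = rng.normal(0.0, 2.0, n_pep)     # unequal peptide LC-MS measurability

rows, y = [], []
for t in (0, 1):                          # two treatments
    for p in range(n_pep):
        if t == 1 and p == 2:
            continue                      # peptide 2 unobserved: incomplete data
        rows.append((t, p))
        y.append(t * true_log2_fc + pep_eff[p] + rng.normal(0.0, 0.05))

# Design: intercept, treatment indicator, peptide dummies (peptide 0 = reference)
X = np.zeros((len(rows), 1 + n_pep))
for i, (t, p) in enumerate(rows):
    X[i, 0] = 1.0
    X[i, 1] = t
    if p > 0:
        X[i, 1 + p] = 1.0
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
print(f"estimated log2 fold change: {beta[1]:.2f} (truth: {true_log2_fc:.2f})")
```

Because the peptide terms absorb the measurability differences, the treatment coefficient recovers the relative concentration even with a missing peptide, which a naive per-peptide ratio average would not.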
A general shock model in which the time intervals between shocks have infinite expectation is considered. Limit theorems are given for the first time at which the magnitude of a shock exceeds a given level a, and for the historical maximum magnitude.
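The setting can be illustrated with a small Monte Carlo sketch, assuming Pareto inter-shock times with alpha < 1 (so the interval expectation is infinite) and standard-exponential shock magnitudes; both distributional choices are illustrative, not those of the paper.

```python
import random

def first_exceedance_time(level, alpha=0.5, n_max=10_000, seed=42):
    """Time of the first shock whose magnitude exceeds `level`, with
    Pareto(alpha) inter-shock times (alpha < 1 gives infinite expected
    intervals) and standard-exponential magnitudes. Illustrative choices."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(n_max):
        t += rng.paretovariate(alpha)     # heavy-tailed inter-shock time
        if rng.expovariate(1.0) > level:  # shock magnitude exceeds the level
            return t
    return float("inf")

if __name__ == "__main__":
    print(f"first exceedance of level 3.0 at t = {first_exceedance_time(3.0):.1f}")
```

The historical maximum magnitude can be tracked in the same loop; with infinite-mean intervals the exceedance time is dominated by a few very long waits, which is what drives the nonstandard limit behavior.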
ProMAT is available at http://www.pnl.gov/statistics/ProMAT. ProMAT requires Java version 1.5.0 and R version 1.9.1 (or more recent versions) and runs on Windows XP or Mac OS X 10.4 (or newer versions of either).
Many international border crossings presently screen cargo for illicit nuclear material using radiation portal monitors (RPMs) that measure the gamma-ray and/or neutron flux emitted by vehicles. The fact that many target sources have a point-like geometry can be exploited to detect subthreshold sources and filter out benign sources, which frequently possess a distributed geometry. This report describes a two-step process, which has the potential to complement other alarm algorithms, for detecting and characterizing point sources. The first step applies a matched filter, whereas the second uses a weighted nonlinear least squares method. In a base-case simulation, matched filtering detected a 250-cps source injected onto a white-noise background with a 95% detection probability and a 0.003 false-alarm probability. For the same simulation, the maximum likelihood estimation technique performed well at source strengths of 250 and 400 cps. These simulations provided a best-case feasibility study for this technique, which will be extended to experimental data that possess false point-source signatures resulting from background shielding caused by vehicle design and cargo distribution.
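The matched-filter step can be sketched as follows, assuming a Gaussian point-source template over a white-noise count profile; the number of samples, template width, source location, and count rates are invented for the illustration and are not the report's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                    # samples across a vehicle profile
background = 1000.0                        # mean background counts per sample

# Unit-energy Gaussian template approximating a point source (sigma = 5 samples)
half = 15
tmpl = np.exp(-0.5 * (np.arange(-half, half + 1) / 5.0) ** 2)
tmpl /= np.linalg.norm(tmpl)

# White-noise background plus an injected point source centered at sample 120
signal = rng.normal(background, np.sqrt(background), n)
signal += 250.0 * np.exp(-0.5 * ((np.arange(n) - 120) / 5.0) ** 2)

# Matched filter: slide the template across the background-subtracted profile
score = np.array([np.dot(signal[i:i + 2 * half + 1] - background, tmpl)
                  for i in range(n - 2 * half)])
peak = int(np.argmax(score)) + half        # center of best-matching window
print(f"matched-filter peak at sample {peak}, score {score.max():.1f}")
```

In the second step, the report's weighted nonlinear least squares fit would then estimate the source strength and position in a neighborhood of this peak.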