Part two of a three-part series, this paper describes the chemical and toxicological results of a comprehensive shoreline ecology program designed to assess recovery in Prince William Sound following the Exxon Valdez oil spill of March 24, 1989. The program is an application of the “sediment quality triad” approach, combining chemical, toxicological, and biological measurements. Other parts of the program are described in Part 1: Study Design and Methods (Page et al., this volume) and Part 3: Biology (Gilfillan et al., this volume). The study was designed so that results could be extrapolated to the entire spill zone in the sound and projected forward in time. It combined one-time sampling of 64 randomly chosen study sites representing four major habitats and four oiling levels (including unoiled reference sites) with periodic sampling at 12 subjectively chosen “fixed” sites. Sediment samples—or, when conditions required, filter-wipes from rock surfaces—were collected in each of three intertidal zones and from subtidal stations at depths up to 30 m. Oil removal was generally rapid: by 1991 the concentration of oil spilled from the Exxon Valdez had been dramatically reduced on the majority of shorelines by both natural processes and cleanup efforts. Moderate concentrations of petroleum residues remain only in limited, localized areas; moreover, most of these residues are highly asphaltic, not readily bioavailable, and not toxic to marine life. Acute sediment toxicity from oil (as measured by standard toxicity tests) was virtually absent by 1990–91, except at a small number of isolated locations; the petroleum residues had degraded below the threshold of acute toxic effects. Measurable polycyclic aromatic hydrocarbon (PAH) levels are, in general, well below those conservatively associated with adverse effects, and biological recovery has been considerably more rapid than the removal of the last chemical remnants.
The remaining residues continue to degrade and are, in general, predicted to become indistinguishable from background hydrocarbon levels by 1993 or 1994. Localized residues of weathered oil will no doubt persist beyond 1994 at certain locations, but their environmental significance will be negligible compared with other ongoing stresses in the sound. Samples of nearshore subtidal sediments showed surprisingly low concentrations of oil residue as an increment to the natural petrogenic hydrocarbon background, and sediment toxicity tests showed that these sediments were essentially nontoxic. It appears that most of the oil leaving the shoreline was swept away and dissipated at sea. It is concluded that long-term ecological effects resulting from shoreline oiling or subtidal contamination are highly unlikely.
Abstract. Mercury (Hg) from Hg mining at Clear Lake, California, USA, has contaminated water and sediments for over 130 years and has the potential to affect human and environmental health. With total mercury (TotHg) concentrations up to 438 mg/kg (dry mass) in surficial sediments and up to 399 ng/L in lake water, Clear Lake is one of the most Hg-contaminated lakes worldwide. Particulate Hg in surface water near the mine ranges from
Clear Lake is the site of an abandoned mercury (Hg) mine (active intermittently from 1873 to 1957), now a U.S. Environmental Protection Agency Superfund Site. Mining activities, including bulldozing waste rock and tailings into the lake, resulted in approximately 100 Mg of Hg entering the lake's ecosystem. This series of papers represents the culmination of approximately 15 years of Hg-related studies on this ecosystem, following Hg from the ore body to the highest trophic levels. A series of physical, chemical, biological, and limnological studies elucidates how ongoing Hg loading to the lake is influenced by acid mine drainage and how wind-driven currents and baroclinic circulation patterns redistribute Hg throughout the lake. Methylmercury (MeHg) production in this system is controlled by both sulfate-reducing bacteria and newly identified iron-reducing bacteria. Sediment cores (dated with dichlorodiphenyldichloroethane [DDD], 210Pb, and 14C) to approximately 250 cm depth (representing up to approximately 3000 years before present) provide a record of total Hg (TotHg) loading to the lake from natural sources and mining, and demonstrate how MeHg remains stable at depth within the sediment column for decades to millennia. Core data also identify other stresses that have influenced the Clear Lake Basin, especially over the past 150 years. Although Clear Lake is one of the most Hg-contaminated lakes in the world, biota do not exhibit MeHg concentrations as high as would be predicted from the gross level of Hg loading. We compare Clear Lake's TotHg and MeHg concentrations with those of other sites worldwide and suggest several hypotheses to explain this discrepancy. Based on our data, together with state and federal water and sediment quality criteria, we predict potential environmental and human health effects and provide data that can assist remediation efforts.
Background
Successfully modeling high-dimensional data involving thousands of variables is challenging. This is especially true for gene expression profiling experiments, given the large number of genes involved and the small number of samples available. Random Forests (RF) is a popular and widely used approach to feature selection for such "small n, large p" problems. However, Random Forests suffers from instability, especially in the presence of noisy and/or unbalanced inputs.

Results
We present RKNN-FS, an innovative feature selection procedure for "small n, large p" problems. RKNN-FS is based on Random KNN (RKNN), a novel generalization of traditional nearest-neighbor modeling. RKNN consists of an ensemble of base k-nearest-neighbor models, each constructed from a random subset of the input variables. To rank the importance of the variables, we define a criterion on the RKNN framework using the notion of support. A two-stage backward model selection method is then developed based on this criterion. Empirical results on microarray data sets with thousands of variables and relatively few samples show that RKNN-FS is an effective feature selection approach for high-dimensional data. RKNN is similar to Random Forests in terms of classification accuracy without feature selection; however, RKNN provides much better classification accuracy than RF when each method incorporates a feature-selection step. Our results show that RKNN is significantly more stable and more robust than Random Forests for feature selection when the input data are noisy and/or unbalanced.
Further, RKNN-FS is much faster than the Random Forests feature selection method (RF-FS), especially for large-scale problems involving thousands of variables and multiple classes.

Conclusions
Given Random KNN's superior classification performance compared with Random Forests, the simplicity and ease of implementation of RKNN-FS, and its advantages in speed and stability, we propose RKNN-FS as a faster and more stable alternative to Random Forests for classification problems involving feature selection on high-dimensional datasets.
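To make the Random KNN idea concrete, the following is a minimal Python sketch, not the authors' implementation: an ensemble of k-nearest-neighbor base models, each built on a random subset of the input variables, with each variable's "support" approximated here as its accuracy-weighted frequency of appearance across base models (the paper's support criterion is more elaborate). All function and parameter names are hypothetical.

```python
import random
from collections import Counter, defaultdict

def knn_predict(train_X, train_y, x, features, k=3):
    """Classify x by majority vote among its k nearest training points,
    using squared Euclidean distance restricted to the given feature subset."""
    dists = []
    for xi, yi in zip(train_X, train_y):
        d = sum((xi[f] - x[f]) ** 2 for f in features)
        dists.append((d, yi))
    dists.sort(key=lambda t: t[0])
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

def random_knn_support(train_X, train_y, test_X, test_y,
                       n_models=200, subset_size=2, k=3, seed=0):
    """Build an ensemble of kNN base models, each on a random feature
    subset, and rank features by a simple support proxy: the sum of the
    accuracies of the base models in which the feature appears."""
    rng = random.Random(seed)
    p = len(train_X[0])
    support = defaultdict(float)
    for _ in range(n_models):
        feats = rng.sample(range(p), subset_size)
        correct = sum(
            knn_predict(train_X, train_y, x, feats, k) == y
            for x, y in zip(test_X, test_y))
        for f in feats:
            support[f] += correct / len(test_X)
    # Features sorted by descending support; a backward selection stage
    # would then iteratively drop the lowest-ranked features.
    return sorted(support.items(), key=lambda t: -t[1])
```

In use, one would rank all p variables with `random_knn_support`, then (as in the two-stage procedure described above) repeatedly refit on shrinking top-ranked subsets and keep the subset with the best accuracy. Because each base model sees only a few variables, noisy features mostly land in low-accuracy models and accumulate little support.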