Accurate estimates of the diets of predators are required in many areas of ecology, but for many species current methods are imprecise, limited to the last meal, and often biased. The diversity of fatty acids and their patterns in organisms, coupled with the narrow limitations on their biosynthesis, properties of digestion in monogastric animals, and the prevalence of large storage reservoirs of lipid in many predators, led us to propose the use of quantitative fatty acid signature analysis (QFASA) to study predator diets. We present a statistical model that provides quantitative estimates of the proportions of prey species in the diets of individual predators using fatty acid signatures. We conducted simulation studies using a database of 28 prey species (n = 954 individuals) from the Scotian Shelf off eastern Canada to investigate properties of the model and to evaluate the reliability with which prey could be distinguished in the model. We then conducted experiments on grey seals (Halichoerus grypus, n = 25) and harp seals (Phoca groenlandica, n = 5) to assess quantitative characteristics of fatty acid deposition and to develop calibration coefficients for individual fatty acids to account for predator lipid metabolism. We then tested the model and calibration coefficients by estimating the diets of experimentally fed captive grey seals (n = 6, switched from herring to a mackerel/capelin diet) and mink kits (Mustela vison, n = 46, switched from milk to one of three oil-supplemented diets). The diets of all experimentally fed animals were generally well estimated using QFASA and were consistent with qualitative and quantitative expectations, provided that appropriate calibration coefficients were used. In a final case, we compared video data of foraging by individual free-ranging harbor seals (Phoca vitulina, n = 23) fitted with Crittercams and QFASA estimates of the diet of those same seals using a complex ecosystem-wide prey database.
Among the 28 prey species in the database, QFASA estimated sandlance to be the dominant prey species in the diet of all seals (averaging 62% of diet), followed primarily by flounders, but also capelin and minor amounts of other species, although there was also considerable individual variability among seals. These estimates were consistent with video data showing sandlance to be the predominant prey, followed by flatfish. We conclude that QFASA provides estimates of diets for individuals at time scales that are relevant to the ecological processes affecting survival, and can be used to study diet variability within individuals over time, which will provide important opportunities rarely possible with other indirect methods. We propose that the QFASA model we have set forth will be applicable to a wide range of predators and ecosystems.
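The core of the approach described above can be illustrated with a minimal sketch: diet proportions are estimated by finding the convex mixture of mean prey fatty acid signatures closest (in a Kullback-Leibler-type distance) to the predator's calibration-adjusted signature. This is not the authors' implementation; the function name, the toy prey signatures, and the omission of calibration details are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def qfasa_estimate(predator_sig, prey_sigs, calib=None):
    """Estimate diet proportions by minimizing a KL-type distance between
    the (calibration-adjusted) predator fatty acid signature and a convex
    mixture of mean prey signatures. Hypothetical sketch, not the
    published QFASA code."""
    y = np.asarray(predator_sig, dtype=float)
    X = np.asarray(prey_sigs, dtype=float)        # rows: prey species
    if calib is not None:                         # divide out predator metabolism
        y = y / np.asarray(calib, dtype=float)
    y = y / y.sum()                               # renormalize to a signature
    k = X.shape[0]

    def kl_dist(p):
        mix = p @ X
        # small epsilon keeps the log finite at the simplex boundary
        return np.sum(y * np.log((y + 1e-10) / (mix + 1e-10)))

    res = minimize(kl_dist, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0},
                   method="SLSQP")
    return res.x

# toy example with invented signatures: predator ate 80% "species 0"
prey = np.array([[0.6, 0.3, 0.1],
                 [0.1, 0.2, 0.7]])
pred = 0.8 * prey[0] + 0.2 * prey[1]
est = qfasa_estimate(pred, prey)   # recovers approximately [0.8, 0.2]
```

With a real prey database, each row of `prey_sigs` would be a species-mean signature over dozens of fatty acids, and the calibration coefficients would be those derived from the captive feeding experiments.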
Summary. Various bootstraps have been proposed for bootstrapping clustered data from one-way arrays. The simulation results in the literature suggest that some of these methods work quite well in practice; the theoretical results are more limited and more mixed in their conclusions. For example, McCullagh reached negative conclusions about the use of non-parametric bootstraps for one-way arrays. The purpose of this paper is to extend our understanding of the issues by discussing the effect of different ways of modelling clustered data and the criteria for successful bootstraps used in the literature, and by extending the theory from functions of the sample mean to functions of the between and within sums of squares, and from non-parametric bootstraps to model-based bootstraps. We determine that the consistency of variance estimates for a bootstrap method depends on the choice of model: the residual bootstrap gives consistency under the transformation model, whereas the cluster bootstrap gives consistent estimates under both the transformation and the random-effect models. In addition, we note that criteria based on the distribution of the bootstrap observations are not really useful in assessing consistency.
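The cluster bootstrap discussed above resamples whole clusters (rows of the one-way array) with replacement, so the within-cluster dependence is carried along intact. A minimal sketch, with invented variance components and a balanced design assumed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_bootstrap(data, n_boot=1000, rng=rng):
    """Cluster (case) bootstrap for a balanced one-way array: resample
    whole clusters (rows) with replacement and return bootstrap
    replicates of the grand mean."""
    data = np.asarray(data, dtype=float)
    g = data.shape[0]
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, g, size=g)   # sample cluster labels with replacement
        reps[b] = data[idx].mean()
    return reps

# one-way array: 20 clusters of 5 observations under a random-effect model
g, m = 20, 5
effects = rng.normal(0.0, 1.0, size=g)[:, None]     # cluster effects
data = 2.0 + effects + rng.normal(0.0, 0.5, size=(g, m))

reps = cluster_bootstrap(data)
se_hat = reps.std()   # bootstrap SE of the grand mean
```

A residual bootstrap would instead fix the fitted cluster means and resample only the within-cluster residuals, which is why its consistency hinges on the transformation model holding.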
State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible: they can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the very condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results, and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
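The kind of check urged above can be sketched for the simplest linear Gaussian SSM, a local-level model, by simulating data with measurement error larger than process noise and maximizing the Kalman-filter likelihood over the two variances. The initialization, parameter values, and optimizer choice here are illustrative assumptions, not the authors' simulation design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def simulate_ssm(n, q, r):
    """Local-level model: x_t = x_{t-1} + eta_t, y_t = x_t + eps_t,
    with process variance q and measurement variance r."""
    x = np.cumsum(rng.normal(0.0, np.sqrt(q), size=n))
    y = x + rng.normal(0.0, np.sqrt(r), size=n)
    return x, y

def neg_loglik(log_params, y):
    """Kalman-filter negative log-likelihood for (log q, log r)."""
    q, r = np.exp(log_params)          # log scale keeps variances positive
    m, p = y[0], r                     # crude start at the first observation
    ll = 0.0
    for t in range(1, len(y)):
        p_pred = p + q                 # predict state variance
        f = p_pred + r                 # innovation variance
        v = y[t] - m                   # innovation
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p_pred / f                 # Kalman gain; update
        m = m + k * v
        p = p_pred * (1 - k)
    return -ll

# measurement error (r = 1.0) ten times the process noise (q = 0.1)
x, y = simulate_ssm(500, q=0.1, r=1.0)
res = minimize(neg_loglik, np.log([1.0, 1.0]), args=(y,), method="Nelder-Mead")
q_hat, r_hat = np.exp(res.x)
```

Repeating this over many simulated series (or checking the curvature of the likelihood surface around the estimate) reveals how poorly `q_hat` and `r_hat` are identified in this regime.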
In the problem of reconstructing full-sib pedigrees from DNA marker data, three existing algorithms and one new algorithm are compared in terms of accuracy, efficiency and robustness using real and simulated data sets. An algorithm based on the exclusion principle and another based on maximization of the Simpson index were very accurate at reconstructing data sets comprising a few large families but had problems with data sets with limited family structure, while a Markov chain Monte Carlo (MCMC) algorithm based on maximization of a partition score showed the opposite behaviour. An MCMC algorithm based on maximizing the full joint likelihood performed best in small data sets comprising several medium-sized families but did not work well under most other conditions. It appears that the likelihood surface may be rough, presenting challenges for the MCMC algorithm in finding the global maximum. This likelihood algorithm also exhibited problems in reconstructing large family groups, possibly due to limits in computational precision. The accuracy of each algorithm improved with an increasing amount of information in the data set, and was very high with eight loci of eight alleles each. All four algorithms were quite robust to deviation from an idealized uniform allelic distribution, to departures from idealized Mendelian inheritance in simulated data sets and to the presence of null alleles. In contrast, none of the algorithms was very robust to the probable presence of errors or mutations in the data. Depending on the type of mutation or error and the algorithm used, between 70% and 98% of the affected individuals were misclassified on average.
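The exclusion principle mentioned above rests on a simple Mendelian test: a putative full-sib group is excluded at a locus if no single parent pair could have produced every genotype in the group. A minimal brute-force sketch of that per-locus check (function names and the tuple-based genotype encoding are invented for illustration; real exclusion algorithms apply this across many loci and search over partitions):

```python
from itertools import combinations_with_replacement, product

def offspring_genotypes(p1, p2):
    """All unordered genotypes a parent pair can produce at one locus."""
    return {tuple(sorted((a, b))) for a, b in product(p1, p2)}

def sibship_consistent(genotypes):
    """Exclusion-principle check at one locus: can some single parent
    pair produce every genotype in the putative full-sib group?
    Brute force over candidate parent genotypes; fine for the small
    allele counts typical of microsatellite loci."""
    alleles = sorted({a for g in genotypes for a in g})
    parents = list(combinations_with_replacement(alleles, 2))
    kids = {tuple(sorted(g)) for g in genotypes}
    return any(kids <= offspring_genotypes(p1, p2)
               for p1, p2 in combinations_with_replacement(parents, 2))

# AB x CD parents can yield AC, AD, BC and BD: consistent
ok = sibship_consistent([("A", "C"), ("B", "D"), ("A", "D")])
# five distinct alleles cannot descend from a single parent pair
bad = sibship_consistent([("A", "B"), ("C", "D"), ("E", "E")])
```

Genotyping errors and mutations defeat exactly this kind of hard test, which is consistent with the poor robustness to error reported above.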