Abstract. The use of presence/absence data in wildlife management and biological surveys is widespread, and there is growing interest in quantifying the sources of error associated with these data. Using simulated data, we show that false-negative errors (failure to record a species when it is in fact present) can have a significant impact on the statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, which uses repeated visits to the same site to estimate the rate of false-negative errors and to correct estimates of the probability of occurrence. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. With three repeated visits the method eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are ≤50%, greater efficiency is gained by adding more sites, whereas when error rates are >50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
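The repeat-visit idea behind the ZIB model can be written down directly: a site is occupied with probability ψ and, if occupied, the species is detected on each of k visits independently with probability p, so an all-zero detection history can be either a "true" absence or a string of false negatives. The sketch below is a minimal illustration of that likelihood and a crude grid-search fit, assuming k is constant across sites; the function names, the simulation settings, and the grid-search approach are illustrative and are not the authors' code.

```python
from math import comb, log

def zib_loglik(psi, p, detections, k):
    """Log-likelihood of the zero-inflated binomial occupancy model.

    Each site with y detections out of k visits contributes
        psi * Binom(y; k, p)          (species present), plus
        (1 - psi) * 1{y == 0}         (a "false zero": site unoccupied).
    """
    ll = 0.0
    for y in detections:
        binom = comb(k, y) * p**y * (1 - p) ** (k - y)
        site_lik = psi * binom + (1 - psi) * (1.0 if y == 0 else 0.0)
        ll += log(site_lik)
    return ll

def fit_zib(detections, k, grid=50):
    """Crude grid-search MLE for (psi, p); adequate for illustration only."""
    best_psi, best_p, best_ll = None, None, float("-inf")
    for i in range(1, grid):
        psi = i / grid
        for j in range(1, grid):
            p = j / grid
            ll = zib_loglik(psi, p, detections, k)
            if ll > best_ll:
                best_psi, best_p, best_ll = psi, p, ll
    return best_psi, best_p
```

Note that the naive occupancy estimate (the fraction of sites with at least one detection) converges to ψ·(1 − (1 − p)^k) rather than ψ, so it is biased low whenever p < 1; the ZIB fit removes that bias at the cost of the extra visits.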
A common feature of ecological data sets is their tendency to contain many zero values. Statistical inference based on such data is likely to be inefficient or wrong unless careful thought is given to how these zeros arose and how best to model them. In this paper, we propose a framework for understanding how zero-inflated data sets originate and for deciding how best to model them. We define and classify the different kinds of zeros that occur in ecological data and describe how they arise: either as 'true zero' or 'false zero' observations. After reviewing recent developments in modelling zero-inflated data sets, we use practical examples to demonstrate how failing to account for the source of zero inflation can reduce our ability to detect relationships in ecological data and, at worst, lead to incorrect inference. The adoption of methods that explicitly model the sources of zero observations will sharpen insights and improve the robustness of ecological analyses.
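A standard way to model a mixture of true and false zeros in count data is the zero-inflated Poisson (ZIP), where a zero can arise either from the inflation component (with probability π) or from the Poisson process itself. As a minimal sketch, assuming π is the extra-zero probability and λ the Poisson mean (the names here are illustrative, not tied to any particular paper's notation):

```python
from math import exp, factorial, log

def zip_loglik(pi, lam, counts):
    """Log-likelihood of a zero-inflated Poisson model.

    P(Y = 0)     = pi + (1 - pi) * exp(-lam)       # extra zeros + Poisson zeros
    P(Y = y > 0) = (1 - pi) * Poisson(y; lam)
    """
    ll = 0.0
    for y in counts:
        pois = exp(-lam) * lam**y / factorial(y)
        lik = pi * (1.0 if y == 0 else 0.0) + (1 - pi) * pois
        ll += log(lik)
    return ll
```

Fitting a plain Poisson to such data forces λ downward to absorb the excess zeros, which is one concrete mechanism by which ignoring the source of zero inflation can mask real relationships.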
Efforts to design monitoring regimes capable of detecting population trends can be thwarted by observational and economic constraints inherent to most biological surveys. Ensuring that limited resources are allocated efficiently requires evaluation of statistical power for alternative survey designs. We simulated the process of data collection on a landscape, where we initiated declines over 3 sample periods in species of varying prevalence and detectability. Changing occupancy levels were estimated using a technique that accounted for effects of false-negative errors on survey data. Declines were identified within a frequentist statistical framework, but the significance level was set at an optimal level rather than adhering to an arbitrary conventional threshold. By varying the number of sites sampled and repeat visits made, we show how managers can design an optimal monitoring regime that maximizes statistical power within fixed budget constraints. Results show that 2 to 3 visits/site are generally sufficient unless occupancy is very high or detectability is low; in both cases, the number of required visits increases. In an example of woodland bird monitoring in the Mt. Lofty Ranges, South Australia, we show that, although the budget required to monitor a relatively rare species of low detectability may be higher than that for a common, easily detectable species, survey design requirements for common species may be more stringent. We discuss implications for multi-species monitoring programs and application of our methods to more complex monitoring problems. JOURNAL OF WILDLIFE MANAGEMENT 69(2):473-482; 2005
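The simulation-based power evaluation described above can be sketched in a few lines: simulate detection data before and after a decline, then count how often the decline is flagged. The version below is a deliberately simplified stand-in, assuming detectability is known and using a one-sided two-proportion z-test on raw detection fractions rather than the paper's detection-corrected occupancy estimator; all parameter names and defaults are illustrative.

```python
import random
from math import sqrt
from statistics import NormalDist

def detected_fraction(n_sites, k, psi, p, rng):
    """Fraction of sites with at least one detection in k visits."""
    hits = 0
    for _ in range(n_sites):
        if rng.random() < psi and any(rng.random() < p for _ in range(k)):
            hits += 1
    return hits / n_sites

def power_to_detect_decline(n_sites, k, psi_start, psi_end, p,
                            alpha=0.1, n_sims=400, seed=0):
    """Monte Carlo power of a one-sided test for an occupancy decline."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(alpha)  # one-sided critical value
    rejections = 0
    for _ in range(n_sims):
        f1 = detected_fraction(n_sites, k, psi_start, p, rng)
        f2 = detected_fraction(n_sites, k, psi_end, p, rng)
        pooled = (f1 + f2) / 2
        se = sqrt(2 * pooled * (1 - pooled) / n_sites)
        if se > 0 and (f2 - f1) / se < z_crit:
            rejections += 1
    return rejections / n_sims
```

Sweeping `n_sites` and `k` under a fixed cost model (e.g. cost = n_sites × k × cost-per-visit) reproduces the kind of trade-off the abstract describes: for reasonably detectable species, extra sites buy more power than extra visits once k reaches 2 to 3.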
Conservation monitoring in Australia has assumed increasing importance in recent years, as societal pressure to actively manage environmental problems has risen. More resources than ever before are being channelled to the task of documenting environmental change. Yet the field remains crippled by a pervasive lack of rigour in analysing, reporting and responding to the results of data collected. Millions of dollars are currently being wasted on monitoring programmes that have no realistic chance of detecting changes in the variables of interest. This is partly because detecting change in ecological systems is a genuinely difficult technical and logistical challenge. However, the failure to plan, fund and execute sophisticated analyses of monitoring data, and then to use the results to improve monitoring methods, can also be attributed to the failure of professional ecologists, conservation practitioners and bureaucrats to work effectively together. In this paper, we offer constructive advice about how all parties involved can help to change this situation. We use three case studies of recent monitoring projects from our own experience to illustrate ways in which the disconnect between science and bureaucracy can be bridged and some obstacles to collecting and analysing ecologically meaningful data sets can be overcome. We urge a continuing discussion on this issue and hope to stimulate a change in the culture of conservation monitoring in Australia.