Aim: Primary forests have high conservation value but are rare in Europe due to historic land use. Yet many primary forest patches remain unmapped, and it is unclear to what extent they are effectively protected. Our aim was to (1) compile the most comprehensive European-scale map of currently known primary forests, (2) analyse the spatial determinants characterizing their location and (3) locate areas where so-far-unmapped primary forests likely occur.

Location: Europe.

Methods: We aggregated data from a literature review, online questionnaires and 32 datasets of primary forests. We used boosted regression trees to explore which biophysical, socio-economic and forest-related variables explain the current distribution of primary forests. Finally, we predicted and mapped the relative likelihood of primary forest occurrence at a 1-km resolution across Europe.

Results: Data on primary forests were frequently incomplete or inconsistent among countries. Known primary forests covered 1.4 Mha in 32 countries (0.7% of Europe's forest area). Most of these forests were protected (89%), but only 46% of them strictly. Primary forests mostly occurred in mountain and boreal areas and were unevenly distributed across countries, biogeographical regions and forest types. Unmapped primary forests likely occur in the least accessible and least populated areas, where forests cover a greater share of land but wood demand has historically been low.

Main conclusions: Despite their outstanding conservation value, primary forests are rare and their current distribution is the result of centuries of land use and forest management. The conservation outlook for primary forests is uncertain, as many are not strictly protected and most are small and fragmented, making them prone to extinction debt and human disturbance.
Predicting where unmapped primary forests likely occur could guide conservation efforts, especially in Eastern Europe where large areas of primary forest still exist but are being lost at an alarming pace.
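The modelling step described in the Methods can be sketched in a few lines. The example below is a minimal, self-contained illustration using synthetic grid-cell data and scikit-learn's gradient boosting (a common stand-in for boosted regression trees); the covariate names and all parameter values are assumptions for illustration, not the study's actual variables or settings.

```python
# Sketch of a boosted-regression-tree workflow on synthetic 1-km grid cells.
# Covariates and coefficients below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000  # hypothetical number of grid cells

# Illustrative covariates: elevation (m), distance to roads (km),
# population density (per km2), forest cover fraction.
X = np.column_stack([
    rng.uniform(0, 2500, n),
    rng.exponential(10, n),
    rng.exponential(50, n),
    rng.uniform(0, 1, n),
])
# Synthetic "primary forest present" label: more likely at high elevation,
# far from roads, with low population density and high forest cover.
logit = 0.002 * X[:, 0] + 0.1 * X[:, 1] - 0.01 * X[:, 2] + 2 * X[:, 3] - 6
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
brt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3).fit(X_tr, y_tr)

# Per-cell relative likelihood of primary forest occurrence, analogous
# to the 1-km prediction map described in the Methods.
likelihood = brt.predict_proba(X_te)[:, 1]
print(brt.score(X_te, y_te))
```

In the actual study the predicted likelihood surface, rather than a hard classification, is what guides the search for unmapped primary forests.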
The collation of citizen science data in open-access biodiversity databases makes temporally and spatially extensive species observation data available to a wide range of users. Such data are an invaluable resource but contain inherent limitations, such as sampling bias in favour of recorder distribution, lack of survey-effort assessment, and incomplete coverage of the distribution of all organisms. Any technical assessment, monitoring program or scientific research applying citizen science data should therefore include an evaluation of the uncertainty of its results. We use 'ignorance' scores, i.e. spatially explicit indices of sampling bias across a study region, to further understand spatial patterns of observation behaviour for 13 reference taxonomic groups. The data are based on voluntary observations made in Sweden between 2000 and 2014. We compared the effect of six geographical variables (elevation, steepness, population density, log population density, road density and footpath density) on the ignorance scores of each group. We found substantial variation among taxonomic groups in the relative importance of different geographic variables for explaining ignorance scores. In general, road access and logged population density were consistently important variables explaining bias in sampling effort, indicating that access at a landscape scale facilitates voluntary reporting by citizen scientists. Small increases in population density can also produce a substantial reduction in ignorance score. However, the between-taxa variation in the importance of geographic variables for explaining ignorance scores demonstrates that different taxa suffer from different spatial biases. We suggest that conservationists and researchers use ignorance scores to acknowledge uncertainty in their analyses and conclusions, because such data may simultaneously include many correlated sources of bias that are difficult to disentangle.
Aims: Primary forests are critical for forest biodiversity and provide key ecosystem services. In Europe, these forests are particularly scarce and it is unclear whether they are sufficiently protected. Here we aim to: (a) understand whether extant primary forests are representative of the range of naturally occurring forest types, (b) identify forest types which host enough primary forest under strict protection to meet conservation targets and (c) highlight areas where restoration is needed and feasible.

Location: Europe.

Methods: We combined a unique geodatabase of primary forests with maps of forest cover, potential natural vegetation, biogeographic regions and protected areas to quantify the proportion of extant primary forest across Europe's forest types and to identify gaps in protection. Using spatial predictions of primary forest locations to account for underreporting, we then highlighted areas where restoration could complement protection.

Results: We found a substantial bias in primary forest distribution across forest types. Of the 54 forest types we assessed, six had no primary forest at all, and in two-thirds of forest types, less than 1% of forest was primary. Although primary forests were generally protected, only ten forest types had more than half of their primary forest strictly protected. Protecting all documented primary forests requires expanding the protected area networks by 1,132 km2 (19,194 km2 when also including predicted primary forests). Encouragingly, large areas of non-primary forest existed inside protected areas for most types, thus presenting restoration opportunities.

Main conclusion: Europe's primary forests are in a perilous state, as also acknowledged by the EU's "Biodiversity Strategy for 2030." Yet there are considerable opportunities for ensuring better protection and restoring primary forest structure, composition and functioning, at least partially.
We advocate integrated policy reforms that explicitly account for the irreplaceable nature of primary forests and ramp up protection and restoration efforts alike.
Background: Open-access biodiversity databases, consisting mainly of citizen science data, make temporally and spatially extensive species observation data available to a wide range of users. Such data have limitations, however, including sampling bias in favour of recorder distribution, lack of survey-effort assessment, and incomplete coverage of the distribution of all organisms. These limitations are not always recorded, yet any technical assessment or scientific research based on such data should include an evaluation of the uncertainty of its source data, and researchers should acknowledge this information in their analyses. The ignorance maps proposed here are a simple, easy-to-implement tool both to explore data quality visually and to filter out unreliable results.

New information: I present simple algorithms to display ignorance maps as a tool to report the spatial distribution of bias and lack of sampling effort across a study region. Ignorance scores are computed solely from raw data in order to rely on as few assumptions as possible; no prediction or estimation is involved. The rationale rests on the assumption that species groups can serve as a surrogate for sampling effort, because an entire group of species observed by similar methods is likely to share similar biases. Simple algorithms then transform raw data into ignorance scores scaled 0-1 that are easily comparable and scalable. Because calculations must be performed over big datasets, simplicity is crucial for web-based implementations on infrastructures for biodiversity information. With these algorithms, any infrastructure for biodiversity information can offer a quality report of the observations accessed through it. Users can specify a reference taxonomic group and a time frame according to the research question.
The potential of this tool lies in the simplicity of its algorithms and in the lack of assumptions made about the bias distribution, giving the user the freedom to tailor analyses to their specific needs.
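A transformation consistent with the description above can be sketched in a few lines: raw observation counts of a reference taxonomic group per grid cell are rescaled to a 0-1 ignorance score, with no prediction or estimation involved. The half-point parameter `o_half` (the count at which the score drops to 0.5) is an illustrative assumption, not necessarily the paper's exact algorithm.

```python
# Minimal sketch of a 0-1 ignorance score from raw observation counts.
# o_half is a hypothetical tuning parameter: the observation count at
# which ignorance falls to 0.5.

def ignorance_score(n_obs, o_half=1.0):
    """Score in (0, 1]: 1 means no observations (total ignorance);
    the score approaches 0 as observation counts grow."""
    return o_half / (n_obs + o_half)

# Observation counts of the reference group in four hypothetical cells.
counts = [0, 1, 10, 100]
scores = [ignorance_score(n) for n in counts]
print(scores)  # monotonically decreasing, starting at 1.0
```

Because the score depends only on raw counts, it can be computed on the fly for any reference group and time frame a user selects, which is what makes it practical for web-based biodiversity infrastructures.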
Binomial N-mixture models are commonly applied to analyse population survey data. By estimating detection probabilities, N-mixture models aim to extract information about abundances in absolute, not just relative, numbers. This separation of detection probability and abundance relies on parametric assumptions about the distribution of individuals among sites and of detections of individuals among repeat visits to sites. Current methods for checking these assumptions are limited, and their computational complexity has hindered evaluations of their performance. We use simulations and a case study to assess the sensitivity of binomial N-mixture models to overdispersion in abundance and in detection, develop computationally efficient graphical goodness-of-fit checks to detect it, and evaluate the ability of the checks to identify overdispersion. The simulations show that if the parametric assumptions are not exact, the bias in estimated abundances can be severe: underestimation if there is overdispersion in abundance relative to the fitted model, and overestimation if there is overdispersion in detection. Our goodness-of-fit checks performed well in detecting lack of fit when the abundance distribution was overdispersed, but struggled to detect lack of fit when detections were overdispersed. We show that this inability is caused by a fundamental similarity between N-mixture models with beta-binomial detections and N-mixture models with negative binomial abundances. The strong biases that can occur in the binomial N-mixture model when the distribution of individuals among sites, or the detection model, is mis-specified imply that checking goodness of fit is essential for sound inference about abundance. To check the assumptions, we provide computationally efficient goodness-of-fit checks in the R package nmixgof.
However, even when a binomial N‐mixture model appears to fit the data well, estimates are not robust in the presence of overdispersion. We show that problems can occur even when estimated detection probabilities are high, and that previously reported problems with negative binomial models cannot always be diagnosed by checking the sensitivity of abundance estimates to numerical cutoff values used in likelihood computations.
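The generative structure of the binomial N-mixture model, and the beta-binomial mis-specification the abstract warns about, can be illustrated with a short simulation. This is a sketch with illustrative parameter values, not the paper's analysis code (the actual goodness-of-fit checks live in the R package nmixgof).

```python
# Sketch of the binomial N-mixture data-generating process:
# site abundances N_i ~ Poisson(lambda), repeat-visit counts
# y_ij ~ Binomial(N_i, p). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_visits = 1000, 4
lam, p = 5.0, 0.6  # mean abundance and per-individual detection prob.

N = rng.poisson(lam, n_sites)                         # latent abundances
y = rng.binomial(N[:, None], p, (n_sites, n_visits))  # observed counts

# Under the model, E[y_ij] = lambda * p, so naive counts underestimate
# true abundance -- this is why estimating p matters.
print(y.mean(), lam * p)

# Overdispersed detection (beta-binomial): visit-level detection
# probabilities drawn from a Beta with the same mean, 0.6. The counts
# keep the same mean but their variance is inflated.
a, b = 1.5, 1.0  # Beta parameters with mean a / (a + b) = 0.6
p_ij = rng.beta(a, b, (n_sites, n_visits))
y_od = rng.binomial(N[:, None], p_ij)
print(y.var(), y_od.var())  # overdispersed counts show larger variance
```

Comparing the two simulated datasets makes the diagnostic problem concrete: both have the same mean count, so only their dispersion distinguishes the well-specified model from the mis-specified one.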