Natural-history collections in museums contain data critical to decisions in biodiversity conservation. Collectively, these specimen-based data describe the distributions of known taxa in time and space. As the most comprehensive, reliable source of knowledge for most described species, these records are potentially available to answer a wide range of conservation and research questions. Nevertheless, these data have shortcomings, notably geographic gaps, resulting mainly from the ad hoc nature of collecting effort. This problem has been frequently cited but rarely addressed in a systematic manner. We have developed a methodology to evaluate museum collection data, in particular the reliability of distributional data for narrow-range taxa. We included only those taxa for which there were an appropriate number of records, expert verification of identifications, and acceptable locality accuracy. First, we compared the available data for the taxon of interest to the "background data," comprising records for those organisms likely to be captured by the same methods or by the same collectors as the taxon of interest. The "adequacy" of background sampling effort was assessed by calculating statistics describing the separation, density, and clustering of points, and by generating a sampling-density contour surface. Geographic information systems (GIS) technology was then used to model predicted distributions of species based on abiotic (e.g., climatic and geological) data. The robustness of these predicted distributions can be tested iteratively or by bootstrapping. Together, these methods provide an objective means of assessing how likely it is that distributions obtained from museum collection records represent true distributions. Potentially, they could be used to evaluate any point data to be collated in species maps, biodiversity assessments, or similar applications requiring distributional information.
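The point-pattern statistics described above can be illustrated with a small sketch. The Clark-Evans nearest-neighbour ratio is one standard measure of the separation and clustering of sampling points; the function below (written for illustration, not taken from the study) computes it from a list of (x, y) coordinates and the area of the study region.

```python
import math

def clark_evans_index(points, area):
    """Clark-Evans nearest-neighbour ratio R for (x, y) points.

    R is the observed mean nearest-neighbour distance divided by the
    distance expected under complete spatial randomness (CSR).
    R ~ 1: sampling effort consistent with randomness;
    R < 1: clustered effort (geographic gaps likely);
    R > 1: evenly dispersed effort.
    """
    n = len(points)
    nn_dists = []
    for i, (xi, yi) in enumerate(points):
        # distance from point i to its nearest neighbour
        nearest = min(
            math.hypot(xi - xj, yi - yj)
            for j, (xj, yj) in enumerate(points)
            if j != i
        )
        nn_dists.append(nearest)
    observed = sum(nn_dists) / n
    expected = 0.5 / math.sqrt(n / area)  # CSR expectation
    return observed / expected
```

A regular survey grid yields R well above 1, while records piled around a few collecting localities yield R near 0, flagging background sampling that is too clustered to trust absences.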
Introduction

Technologies that allow computers and machines to perform tasks normally requiring human intelligence are often referred to as artificial intelligence (AI). These technologies allow machines to complete tasks with traits or capabilities ordinarily associated with human cognition, such as reasoning, problem solving, common-sense knowledge management, planning, learning, translation, perception, vision, speech recognition, and social intelligence (Kaplan and Haenlein 2019). Research in AI is increasing rapidly, as indicated by comparing the annual publishing rate of papers focused on AI between 1996 and 2017 against the publishing rates of papers on any topic, or of papers in the field of computer science (see the growth of annually published papers by topic in Shoham et al. [2018, p. 9]). This growth in AI publications has prompted researchers to critically explore the potential promises and risks of AI (Scherer 2016; Webb 2019; Yudkowsky 2008) as well as its ethics and responsibilities (Miller 2019; Cowls and Floridi 2018; Scherer 2016; Dawson et al. 2019). AI has been used in citizen science projects for about 20 years. It was first used in this context in 2000, in collaborative AI databases such as the Generic Artificial Consciousness (GAC)/Mindpixel Digital Mind Modeling Project (McKinstry 2009) and the Open Mind Common Sense project (Singh et al. 2002). In these models, user-submitted propositions were meant to create a database of common-sense knowledge that could function as a kind of digital brain. This relationship between collective knowledge and algorithmic processing has evolved in many directions and, in 2019, is predominantly represented by machine learning, especially as applied to computer vision, which comprises diverse methods of automatically identifying objects from digital photographs.
For example, the iNaturalist platform, a citizen science project and online social network, is designed to enable citizen scientists and ecologists alike to upload observations from the natural world, such as images of animals and plants (Van Horn et al. 2018). The platform is one among many (Wäldchen et al. 2018) that include an automated species-identification feature.
2005. Using high-resolution multi-spectral imagery to estimate habitat complexity in open-canopy forests: can we predict ant community patterns? Ecography 28: 495-504.

The structure and composition of arthropod assemblages are strongly associated with habitat complexity. Accurate, time-efficient estimates of habitat complexity may provide insights for biodiversity management in natural systems. We obtained high-resolution (0.7 m pixel) multi-spectral aerial imagery of National Parks 20 km north and 20 km south of Sydney, Australia. We explored both the Normalised Difference Vegetation Index (NDVI) and the standard deviation of reflectance in the near-infrared spectrum (stdevR_NIR) as indicators of low and high habitat complexity in sandstone forests north of Sydney. We then tested predictions of ant community patterns described in a previous study, using sites selected from high-resolution multi-spectral imagery in sandstone forests south of Sydney. Ground-scored habitat complexity was positively correlated with NDVI and, to a lesser extent, stdevR_NIR values in sandstone forests north of Sydney. As predicted, ant species richness was negatively correlated with NDVI in forests to the south of Sydney. Ant species composition also differed between areas with contrasting NDVI values. The ant species driving these composition differences responded to habitat complexity in a similar way in forests to the north and south of Sydney. The strong association we detected between NDVI and habitat complexity most likely reflects the relatively exposed nature of the vegetative layers in the forests we sampled. Remote sensing, integrated with quantitative research testing predicted faunal responses to vegetation structure and biomass at landscape scales, may provide an efficient means of estimating biodiversity for management in particular habitats.
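Both image-derived indicators used above reduce to simple per-pixel band arithmetic: NDVI is the normalised difference of the near-infrared and red bands, and stdevR_NIR is the standard deviation of NIR reflectance over a window of pixels. A minimal sketch (the band values are illustrative, not taken from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Per-pixel Normalised Difference Vegetation Index.

    NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; higher values
    indicate denser green vegetation, used here as a proxy for
    habitat complexity. eps guards against division by zero.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

def stdev_nir(nir_window):
    """Standard deviation of NIR reflectance over a pixel window,
    the second heterogeneity indicator explored in the study."""
    return float(np.std(np.asarray(nir_window, dtype=float)))
```

Healthy vegetation reflects strongly in NIR and absorbs red light, so a vegetated pixel (e.g. NIR 0.5, red 0.08) scores much higher than bare sandstone (e.g. NIR 0.2, red 0.18).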
Genomic samples of non-model organisms are becoming increasingly important in a broad range of studies, from developmental biology and biodiversity analyses to conservation. Genomic sample definitions, descriptions, quality, voucher information, and metadata all need to be digitized and disseminated across scientific communities. This information needs to be concise and consistent in today's ever-expanding bioinformatic era so that complementary data aggregators can easily map databases to one another. To facilitate the exchange of information on genomic samples and their derived data, the Global Genome Biodiversity Network (GGBN) Data Standard is intended to provide a platform, based on a documented agreement, that promotes the efficient sharing and usage of genomic sample material and associated specimen information in a consistent way. The new data standard presented here builds upon existing standards commonly used within the community, extending them with the capability to exchange data on tissue, environmental, and DNA samples as well as sequences. The GGBN Data Standard will reveal and democratize the hidden contents of biodiversity biobanks for the convenience of everyone in the wider biobanking community. Technical tools exist for data providers to easily map their databases to the standard.

Database URL: http://terms.tdwg.org/wiki/GGBN_Data_Standard
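In practice, mapping a provider database to such a standard largely reduces to re-keying local field names with the standard's terms. The sketch below is illustrative only: the dwc-prefixed Darwin Core terms are real, but the local field names and the ggbn-prefixed term are placeholders; the published GGBN vocabulary at the URL above is the authoritative term list.

```python
# Hypothetical local column names (keys) mapped to standard terms
# (values). The dwc: terms are genuine Darwin Core; the ggbn: term
# is a placeholder standing in for the real GGBN vocabulary.
FIELD_MAP = {
    "species":     "dwc:scientificName",
    "latitude":    "dwc:decimalLatitude",
    "longitude":   "dwc:decimalLongitude",
    "sample_kind": "ggbn:materialSampleType",  # placeholder term
}

def to_standard(record):
    """Re-key a local genomic-sample record using FIELD_MAP,
    passing unmapped fields through unchanged."""
    return {FIELD_MAP.get(key, key): value for key, value in record.items()}
```

This is the kind of one-time mapping the "technical tools" mentioned above automate for data providers.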