Background
Much of the extensive research on malaria transmission is underpinned by mathematical modelling. Compartmental models, which focus on interactions and transitions between population strata, have been a mainstay of such modelling for more than a century. However, modellers are increasingly adopting agent-based approaches, which model hosts, vectors and/or their interactions at the individual level. One reason for the increasing popularity of such models is their potential for enhanced realism: system-level behaviours emerge as a consequence of accumulated individual-level interactions, as occurs in real populations.

Methods
A systematic review of 90 articles published between 1998 and May 2018 was performed, characterizing agent-based models (ABMs) relevant to malaria transmission. The review provides an overview of approaches used to date, assesses the advantages of these approaches, and proposes ideas for progressing the field.

Results
The rationale for ABM use over other modelling approaches centres on three points: the need to accurately represent increased stochasticity in low-transmission settings; the benefits of high-resolution spatial simulations; and heterogeneities in drug and vaccine efficacies due to individual patient characteristics. The success of these approaches provides avenues for further exploration of agent-based techniques for modelling malaria transmission. Potential extensions include varying elimination strategies across spatial landscapes, extending the size of spatial models, incorporating human movement dynamics, and developing increasingly comprehensive parameter estimation and optimization techniques.

Conclusion
Collectively, the literature covers an extensive array of topics, including the full spectrum of transmission and intervention regimes. Bringing these elements together under a common framework may enhance knowledge of, and guide policies towards, malaria elimination. However, because of the diversity of available models, endorsing a standardized approach to ABM implementation may not be possible. Instead, it is recommended that model frameworks be contextually appropriate and sufficiently described. One key recommendation is to develop enhanced parameter estimation and optimization techniques. Extensions of current techniques will provide the robust results required to enhance current elimination efforts.

Electronic supplementary material
The online version of this article (10.1186/s12936-018-2442-y) contains supplementary material, which is available to authorized users.
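The stochasticity argument above can be made concrete with a minimal sketch of an agent-based transmission model. This is an illustrative toy, not any model from the reviewed literature: every parameter name and value below is a hypothetical placeholder. Each human and mosquito is an individual agent, and infection events are Bernoulli trials, so repeated runs in low-transmission settings can show stochastic fadeout that a deterministic compartmental model cannot.

```python
import random

# Toy agent-based malaria transmission sketch (illustrative only; all
# parameters are hypothetical placeholders, not from any reviewed model).
def simulate(n_humans=200, n_mosquitoes=1000, days=365,
             bite_rate=0.3, p_h2m=0.1, p_m2h=0.1,
             human_recovery=0.01, mosquito_death=0.1,
             init_infected=5, seed=0):
    rng = random.Random(seed)
    humans = [i < init_infected for i in range(n_humans)]  # True = infectious
    mosquitoes = [False] * n_mosquitoes
    prevalence = []
    for _ in range(days):
        for m in range(n_mosquitoes):
            # Each mosquito may bite one randomly chosen human per day;
            # infection can pass in either direction during the bite.
            if rng.random() < bite_rate:
                h = rng.randrange(n_humans)
                if humans[h] and rng.random() < p_h2m:
                    mosquitoes[m] = True
                elif mosquitoes[m] and rng.random() < p_m2h:
                    humans[h] = True
            # Mosquito death, replaced by a susceptible adult.
            if rng.random() < mosquito_death:
                mosquitoes[m] = False
        # Infectious humans recover independently each day.
        humans = [h and rng.random() > human_recovery for h in humans]
        prevalence.append(sum(humans) / n_humans)
    return prevalence
```

Running `simulate` with many seeds and low transmission parameters yields a distribution of outcomes, some of which reach zero prevalence, which is exactly the population-level stochasticity the reviewed ABMs are designed to capture.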
Eradication of an invasive species can provide significant environmental, economic, and social benefits, but eradication programs often fail. Constant and careful monitoring improves the chance of success, but an invasion may seem to be in decline even when it is expanding in abundance or spatial extent. Determining whether an invasion is in decline is a challenging inference problem for two reasons. First, it is typically infeasible to regularly survey the entire infested region owing to high cost. Second, surveillance methods are imperfect and fail to detect some individuals. These two factors also make it difficult to determine why an eradication program is failing. Agent-based methods enable inferences to be made about the locations of undiscovered individuals over time to identify trends in invader abundance and spatial extent. We develop an agent-based Bayesian method and apply it to Australia's largest eradication program: the campaign to eradicate the red imported fire ant (Solenopsis invicta) from Brisbane. The invasion was deemed to be almost eradicated in 2004 but our analyses indicate that its geographic range continued to expand despite a sharp decline in number of nests. We also show that eradication would probably have been achieved with a relatively small increase in the area searched and treated. Our results demonstrate the importance of inferring temporal and spatial trends in ongoing invasions. The method can handle incomplete observations and takes into account the effects of human intervention. It has the potential to transform eradication practices.

Keywords: Bayesian models | spread models | Markov chain Monte Carlo

Invasive species can cause economic, social, and environmental losses (1), and eradication is therefore desirable. The duration of successful eradication programs varies depending on biological and management factors (2).
If the invasion is detected early while it is confined to a small area, eradication can potentially be achieved almost immediately by treating the entire area. Black-striped mussels (Mytilopsis sallei) were eradicated soon after being discovered in a northern Australian marina by applying a highly toxic chemical to the entire marina (3). Such "brute-force" treatment methods may not be available when invasions have spread over a larger area owing to unacceptable impacts on nontarget species or human health or because of financial constraints. When large areas are potentially infested and surveillance is required to determine where to apply treatment, eradication can take many years. Some areas that might be infested are not regularly surveyed owing to the high cost of monitoring (observations are "incomplete"), and some individuals in surveyed locations are missed because surveillance methods are imperfect (4). These two factors create uncertainty about whether eradication efforts will succeed. An invasion may seem to be in decline but in fact be expanding in spatial extent and/or abundance, or declining more slowly than estimated, with a high risk of "escaping" to unman...
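The imperfect-detection problem described above has a simple Bayesian core that can be sketched independently of the authors' full agent-based MCMC machinery. The function below is a hypothetical illustration (the names `prior`, `detect_prob` are placeholders): it updates the probability that a site is still infested after a series of surveys that each fail to find a nest.

```python
def posterior_infested(prior, detect_prob, n_negative_surveys):
    """Posterior probability a site remains infested after repeated
    independent surveys, each of which would detect an existing nest
    with probability detect_prob, all of which came back negative.
    Simple Bayes' rule: P(infested | k misses) is proportional to
    prior * (1 - detect_prob)**k."""
    miss = (1 - detect_prob) ** n_negative_surveys
    return prior * miss / (prior * miss + (1 - prior))

# Three negative surveys with 60% per-survey detection shrink a 50%
# prior belief of infestation to about 6%.
# posterior_infested(0.5, 0.6, 3)  -> ~0.060
```

Because detection is imperfect, the posterior never reaches exactly zero, which is why "almost eradicated" declarations based on negative surveys alone can be premature.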
Due to advances in computational power, improved technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, the availability of large data sets brings inherent challenges for statistical analysis and modeling. Because a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and with a Bayesian logistic regression with stochastic search variable selection.
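The core idea of logic regression, searching for Boolean combinations of binary genotype indicators that predict a phenotype, can be sketched in miniature. The toy below is not the LR algorithm itself (which searches logic trees of arbitrary depth via simulated annealing); it exhaustively scores only two-variable AND/OR combinations by classification accuracy, and all data and names are hypothetical.

```python
import itertools

# Toy illustration of the idea behind logic regression: find a Boolean
# combination of binary predictors that explains a binary phenotype.
# (Real LR fits logic trees within a regression model via simulated
# annealing; here we only scan pairwise AND/OR terms by accuracy.)
def logic_and_or_search(X, y):
    """X: list of rows of 0/1 genotype indicators; y: 0/1 phenotypes.
    Returns (best_accuracy, expression_string)."""
    n_features = len(X[0])
    best = None
    for i, j in itertools.combinations(range(n_features), 2):
        for name, op in (("AND", lambda a, b: a and b),
                         ("OR", lambda a, b: a or b)):
            preds = [int(op(row[i], row[j])) for row in X]
            acc = sum(p == t for p, t in zip(preds, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f"X{i} {name} X{j}")
    return best
```

On data where the phenotype is driven by an interaction, say `y = X0 AND X1`, the search recovers that epistatic term directly, which is the kind of structure a main-effects-only regression would miss.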
Advanced data analysis tools such as mathematical optimisation, Bayesian inference and machine learning have the capability to revolutionise the field of quantitative voltammetry.
Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a "metachain" to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely.
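The "metachain" strategy above, pooling several short, independently started runs instead of one long run, can be sketched on a simple continuous target. This is an illustrative Metropolis sampler on a standard normal, not MrBayes' tree-space sampler; all function names and settings are hypothetical.

```python
import math
import random

def metropolis(n_steps, start, rng, log_target=lambda x: -0.5 * x * x):
    """Random-walk Metropolis sampler for a 1-D log-density
    (default: standard normal up to a constant)."""
    x, samples = start, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0, 1)
        # Accept with probability min(1, target(prop)/target(x)).
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

def metachain_mean(n_chains=5, n_steps=2000, seed=0):
    """Pool samples from several short runs with overdispersed random
    starting points (the 'metachain') and estimate the posterior mean."""
    rng = random.Random(seed)
    pooled = []
    for _ in range(n_chains):
        pooled += metropolis(n_steps, rng.uniform(-10, 10), rng)
    return sum(pooled) / len(pooled)
```

Replicated overdispersed starts are also what make convergence assessment honest: if the short chains disagree with one another, the pooled estimate is not yet trustworthy, which mirrors the role of the delta and epsilon statistics described in the abstract.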
Summary
1. Many questions in ecology and evolutionary biology consider response variables that are functions (e.g. species-abundance distributions) rather than a single scalar value (e.g. species richness). Although methods for analysing function-valued data have been available for several decades, ecological and evolutionary applications are rare.
2. We outline methods for regression when the response variable is a function ('function regression') and introduce the R package FREE, which focuses on straightforward implementation and interpretation of function regression analyses. Several computational methods are implemented, including machine learning and several Bayesian methods. We compare different methods using simulated data and real ecological data on individual-size distributions (ISDs) of fish and trees.
3. No single method performed best overall, with several performing equally well for a given data set. Which method to use depends on sample sizes and the questions being considered; in many cases, a consensus approach should be used to combine or compare fitted models.
4. Function regression allows the direct modelling of many function-valued data (e.g. species-abundance distributions) rather than having to reduce those functions to a single scalar response variable (e.g. species diversity or functional diversity indices). Our ecological examples using ISD data show that larger rivers support more-even fish-size distributions than smaller rivers and that low initial planting densities lead to more-even tree-size distributions than high initial planting densities. Function regression provided more informative and intuitive interpretations of these data than conventional non-function-valued approaches.
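The simplest form of function-on-scalar regression can be sketched without any of the machinery in FREE: fit an independent simple linear regression at each point of the grid on which the response curves are observed. This "pointwise" approach is a minimal illustration only (FREE's methods share information across the grid, which this sketch deliberately does not), and the data shapes assumed here are hypothetical.

```python
# Minimal pointwise function-on-scalar regression sketch (illustrative;
# not the FREE package's methodology, which smooths across grid points).
def pointwise_function_regression(x, curves):
    """x: list of scalar predictors, one per subject.
    curves: list of equal-length lists; curves[i][t] is subject i's
    response function evaluated at grid point t.
    Returns (intercept_curve, slope_curve), each of length len(curves[0])."""
    n, grid = len(x), len(curves[0])
    mean_x = sum(x) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    intercepts, slopes = [], []
    for t in range(grid):
        yt = [c[t] for c in curves]          # responses at grid point t
        mean_y = sum(yt) / n
        b = sum((xi - mean_x) * (yi - mean_y)
                for xi, yi in zip(x, yt)) / sxx
        slopes.append(b)
        intercepts.append(mean_y - b * mean_x)
    return intercepts, slopes
```

The returned slope curve is itself a function of the grid, showing how the predictor's effect varies along the response function, which is the interpretability benefit the abstract highlights over scalar summaries.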