Correlation across species between two quantitative traits, or between a trait and a habitat property, can suggest that a trait value is effective in sustaining populations in some contexts but not others. It is widely held that such correlations should be controlled for phylogeny, via phylogenetically independent contrasts (PICs) or phylogenetic generalised least squares (PGLS). Two weaknesses of this idea are discussed. First, the phylogenetically conservative share of the correlation ought not to be excluded from consideration as potentially ecologically functional. Second, PGLS does not yield a complete or accurate breakdown of A–B covariation, because it corresponds to a generating model where B predicts variation in A but not the reverse. Multi-response mixed models using phylogenetic covariance matrices can quantify conservative trait correlation (CTC), a share of covariation between traits A and B that is phylogenetically conservative. Because the evidence is from correlative data, it is not possible to split CTC into causation by phylogenetic history versus causation by continuing reciprocal selection between A and B. Moreover, it is quite likely biologically that the two influences have acted in concert, through phylogenetic niche conservatism. Synthesis: The CTC concept treats phylogenetic conservatism as a conjoint interpretation alongside ongoing influence of other traits. CTC can be quantified via multi-response phylogenetic mixed models.
Information‐theoretic approaches to model selection, such as Akaike's information criterion (AIC) and cross validation, provide a rigorous framework to select among candidate hypotheses in ecology, yet the persistent concern of overfitting undermines the interpretation of inferred processes. A common misconception is that overfitting is due to the choice of criterion or model score, despite research demonstrating that selection uncertainty associated with score estimation is the predominant influence. Here we introduce a novel selection rule that identifies a parsimonious model by directly accounting for estimation uncertainty, while still retaining an information‐theoretic interpretation. The new rule, which is a modification of the existing one‐standard‐error rule, mitigates overfitting and reduces the likelihood that spurious effects will be included in the selected model, thereby improving its inferential properties. We present the rule and illustrative examples in the context of maximum‐likelihood estimation and Kullback‐Leibler discrepancy, although the rule is applicable in a more general setting, including Bayesian model selection and other types of discrepancy.
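The existing one-standard-error rule that the proposed rule modifies can be sketched as follows; the function name and the example scores are illustrative, and lower scores are taken to be better (e.g., estimated Kullback–Leibler discrepancy):

```python
def one_se_rule(models):
    """Classic one-standard-error selection rule (illustrative sketch).

    models: list of (n_params, mean_cv_score, se) tuples, where lower
    scores are better. Returns the most parsimonious model whose score
    lies within one standard error of the best-scoring model.
    """
    # score and SE of the best (lowest-scoring) model
    best_score, best_se = min((m[1], m[2]) for m in models)
    # all models whose score is within one SE of the best
    candidates = [m for m in models if m[1] <= best_score + best_se]
    # among those, pick the model with the fewest parameters
    return min(candidates, key=lambda m: m[0])

# Illustrative data: (n_params, mean CV score, SE of the score).
models = [(1, 2.10, 0.30), (3, 1.95, 0.25), (6, 1.90, 0.20)]
one_se_rule(models)  # selects the 1-parameter model: 2.10 <= 1.90 + 0.20
```

The modified rule described in the abstract replaces this fixed one-SE margin with a calibrated treatment of score-estimation uncertainty; the sketch above shows only the baseline rule it builds on.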
Specifying, assessing, and selecting among candidate statistical models is fundamental to ecological research. Commonly used approaches to model selection are based on predictive scores and include information criteria such as Akaike's information criterion, and cross validation. Based on data splitting, cross validation is particularly versatile because it can be used even when it is not possible to derive a likelihood (e.g., many forms of machine learning) or count parameters precisely (e.g., mixed‐effects models). However, much of the literature on cross validation is technical and spread across statistical journals, making it difficult for ecological analysts to assess and choose among the wide range of options. Here we provide a comprehensive, accessible review that explains important—but often overlooked—technical aspects of cross validation for model selection, such as: bias correction, estimation uncertainty, choice of scores, and selection rules to mitigate overfitting. We synthesize the relevant statistical advances to make recommendations for the choice of cross‐validation technique and we present two ecological case studies to illustrate their application. In most instances, we recommend using exact or approximate leave‐one‐out cross validation to minimize bias, or otherwise k‐fold with bias correction if k < 10. To mitigate overfitting when using cross validation, we recommend calibrated selection via our recently introduced modified one‐standard‐error rule. We advocate for the use of predictive scores in model selection across a range of typical modeling goals, such as exploration, hypothesis testing, and prediction, provided that models are specified in accordance with the stated goal. We also emphasize, as others have done, that inference on parameter estimates is biased if preceded by model selection and instead requires a carefully specified single model or further technical adjustments.
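The mean score and standard error that such selection rules consume can be sketched with a minimal k-fold routine; this is a plain version with no bias correction, and the function names and interleaved fold scheme are illustrative:

```python
import statistics

def kfold_cv(fit, score, xs, ys, k=10):
    """Plain k-fold cross validation (no bias correction); illustrative only.

    fit(xs, ys) -> fitted model; score(model, xs, ys) -> mean out-of-sample
    score on the held-out data (lower is better). Returns the mean fold
    score and its standard error, the two quantities used by selection
    rules such as the one-standard-error rule.
    """
    n = len(xs)
    # interleaved folds: fold i holds indices i, i+k, i+2k, ...
    folds = [list(range(i, n, k)) for i in range(k)]
    fold_scores = []
    for held_out in folds:
        train = [i for i in range(n) if i not in held_out]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        fold_scores.append(
            score(model, [xs[i] for i in held_out], [ys[i] for i in held_out])
        )
    mean = statistics.fmean(fold_scores)
    se = statistics.stdev(fold_scores) / len(fold_scores) ** 0.5
    return mean, se
```

For example, with `fit` returning the training-set mean and `score` the held-out mean squared error, the routine yields the (mean, SE) pair for an intercept-only model. Setting k equal to the sample size gives leave-one-out cross validation, the low-bias option recommended above, though the SE is then computed over n single-observation scores.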
We study certain Z₂-graded, finite-dimensional polynomial algebras of degree 2, a special class of deformations of Lie superalgebras which we call quadratic Lie superalgebras. Starting from the formal definition, we discuss the generalised Jacobi relations in the context of the Koszul property, and give a proof of the PBW basis theorem. We give several concrete examples of quadratic Lie superalgebras in low-dimensional cases, and discuss aspects of their structure constants for the 'type I' class. We derive the equivalent of the Kac module construction for typical and atypical modules, and a related direct construction of irreducible modules due to Gould. We investigate in detail one specific case, the quadratic generalisation gl₂(n/1) of the Lie superalgebra sl(n/1). We formulate the general atypicality conditions at level 1, and present an analysis of zero- and one-step atypical modules for a certain family of Kac modules.
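For context, the standard Lie-superalgebra identities that the generalised Jacobi relations deform are the graded antisymmetry and graded Jacobi identity (a textbook reminder, not taken from the paper; here |x| denotes the Z₂-degree of a homogeneous element):

\[
[x,y] = -(-1)^{|x||y|}\,[y,x], \qquad
[x,[y,z]] = [[x,y],z] + (-1)^{|x||y|}\,[y,[x,z]].
\]

Schematically, and consistent with the abstract's description of degree-2 polynomial algebras, a quadratic deformation allows the bracket of generators z_a to close on degree-two as well as degree-one terms; the structure-constant symbols f and g below are illustrative notation only:

\[
[z_a, z_b] = \sum_c f_{ab}^{\;c}\, z_c \;+\; \sum_{c,d} g_{ab}^{\;cd}\, z_c z_d .
\]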