Darwin's classic image of an "entangled bank" of interdependencies among species has long suggested that it is difficult to predict how the loss of one species affects the abundance of others. We show that for dynamical models of realistically structured ecological networks in which pairwise consumer-resource interactions scale allometrically to the 3/4 power, as suggested by metabolic theory, the effect of losing one species on another can be predicted well by simple functions of variables easily observed in nature. By systematically removing individual species from 600 networks ranging from 10 to 30 species, we analyzed how the strength of 254,032 possible pairwise species interactions depended on 90 stochastically varied species, link, and network attributes. We found that the interaction strength between a pair of species is predicted well by simple functions of the two species' biomasses and the body mass of the species removed. On average, prediction accuracy increases with network size, suggesting that greater web complexity simplifies predicting interaction strengths. Applied to field data, our model successfully predicts interactions dominated by trophic effects and illuminates the sign and magnitude of important nontrophic interactions.
Pandemic politics highlight how predictions need to be transparent and humble to invite insight, not blame. The COVID-19 pandemic illustrates perfectly how the operation of science changes when questions of urgency, stakes, values and uncertainty collide, in the 'post-normal' regime. Well before the coronavirus pandemic, statisticians were debating how to prevent malpractice such as p-hacking, particularly when it could influence policy [1]. Now, computer modelling is in the limelight, with politicians presenting their policies as dictated by 'science' [2]. Yet there is no substantial aspect of this pandemic for which any researcher can currently provide precise, reliable numbers. Known unknowns include the prevalence, fatality and reproduction rates of the virus.
<p>Student evaluations of teaching (SET) are widely used in academic personnel decisions as a measure of teaching effectiveness. We show:</p><ul> <li>SET are biased against female instructors by an amount that is large and statistically significant</li> <li>the bias affects how students rate even putatively objective aspects of teaching, such as how promptly assignments are graded</li> <li>the bias varies by discipline and by student gender, among other things</li> <li>it is not possible to adjust for the bias, because it depends on so many factors</li> <li>SET are more sensitive to students' gender bias and grade expectations than they are to teaching effectiveness</li> <li>gender biases can be large enough to cause more effective instructors to get lower SET than less effective instructors.</li></ul><p>These findings are based on nonparametric statistical tests applied to two datasets: 23,001 SET of 379 instructors by 4,423 students in six mandatory first-year courses in a five-year natural experiment at a French university, and 43 SET for four sections of an online course in a randomized, controlled, blind experiment at a US university.</p>
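The abstract above does not spell out the nonparametric tests it relies on. As a minimal sketch of one standard tool in this family, here is a two-sample permutation test for a difference in mean ratings between two groups of instructors; the function name and the toy data in the usage note are hypothetical, not taken from the study:

```python
import random

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sample permutation test for a difference in mean ratings.

    Returns a two-sided p-value: the fraction of random relabelings of the
    pooled ratings whose absolute difference in group means is at least as
    large as the observed difference.
    """
    rng = random.Random(seed)
    n_a, n_b = len(group_a), len(group_b)
    observed = abs(sum(group_a) / n_a - sum(group_b) / n_b)
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # random relabeling of ratings
        perm_a, perm_b = pooled[:n_a], pooled[n_a:]
        diff = abs(sum(perm_a) / n_a - sum(perm_b) / n_b)
        if diff >= observed:
            hits += 1
    return hits / n_perm
```

For example, `permutation_test([5, 5, 5, 4], [3, 2, 3, 3])` returns a small p-value because almost no random relabeling separates the pooled ratings that cleanly. Because the test conditions on the observed ratings and uses no distributional model, it is appropriate for ordinal SET scores.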
Splitting of the sun's global oscillation frequencies by large-scale flows can be used to investigate how rotation varies with radius and latitude within the solar interior. The nearly uninterrupted observations by the Global Oscillation Network Group (GONG) yield oscillation power spectra with high duty cycles and high signal-to-noise ratios. Frequency splittings derived from GONG observations confirm that the variation of rotation rate with latitude seen at the surface carries through much of the convection zone, at the base of which is an adjustment layer leading to latitudinally independent rotation at greater depths. A distinctive shear layer just below the surface is discernible at low to mid-latitudes.
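The inference described above rests on the standard first-order relation between internal rotation and frequency splitting; as a sketch (the notation is generic, not taken from this abstract):

```latex
% First-order rotational splitting of a solar p-mode with radial order n,
% degree l, and azimuthal order m:
\[
  \nu_{nlm} - \nu_{nl0}
    \;=\; \frac{m}{2\pi}
      \int_0^{R_\odot}\!\!\int_0^{\pi}
        K_{nlm}(r,\theta)\,\Omega(r,\theta)\,\mathrm{d}r\,\mathrm{d}\theta ,
\]
% where \Omega(r,\theta) is the internal angular velocity and the kernels
% K_{nlm}, built from the mode eigenfunctions and density profile, are
% normalized to unit integral.
```

Because different modes sample different regions of the interior, measured splittings for many (n, l, m) can be inverted for the rotation profile Ω(r, θ) as a function of radius and latitude.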
Student ratings of teaching have been used, studied, and debated for almost a century. This article examines student ratings of teaching from a statistical perspective. The common practice of relying on averages of student teaching evaluation scores as the primary measure of teaching effectiveness for promotion and tenure decisions should be abandoned for substantive and statistical reasons: There is strong evidence that student responses to questions of "effectiveness" do not measure teaching effectiveness. Response rates and response variability matter. And comparing averages of categorical responses, even if the categories are represented by numbers, makes little sense. Student ratings of teaching are valuable when they ask the right questions, report response rates and score distributions, and are balanced by a variety of other sources and methods to evaluate teaching.

Since 1975, course evaluations at the University of California, Berkeley, have asked: "Considering both the limitations and possibilities of the subject matter and course, how would you rate the overall teaching effectiveness of this instructor? 1 (not at all effective), 2, 3, 4 (moderately effective), 5, 6, 7 (extremely effective)."

Among faculty, student evaluations of teaching (SET) are a source of pride and satisfaction, and of frustration and anxiety. High-stakes decisions including tenure and promotions rely on SET. Yet it is widely believed that they are primarily a popularity contest, that it is easy to "game" ratings, that good teachers get bad ratings and vice versa, and that rating anxiety stifles pedagogical innovation and encourages faculty to water down course content. What is the truth? We review statistical issues in analyzing and comparing SET scores, problems defining and measuring teaching effectiveness, and pernicious distortions that result from using SET scores as a proxy for teaching quality and effectiveness.
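The point that averaging categorical responses "makes little sense" can be made concrete with a toy illustration (the ratings below are hypothetical, not data from the article): two very different rating patterns can share exactly the same average, so reporting only the mean discards the distributional information that matters.

```python
from statistics import mean

# Hypothetical 7-point ratings for two courses of 40 students each.
polarized = [1] * 20 + [7] * 20   # half the class rates 1, half rates 7
consensus = [4] * 40              # everyone rates 4

# The averages are identical...
assert mean(polarized) == mean(consensus) == 4.0

# ...but the score distributions are entirely different.
def distribution(ratings, scale=range(1, 8)):
    """Fraction of responses in each category of the rating scale."""
    return {k: ratings.count(k) / len(ratings) for k in scale}

print(distribution(polarized))  # mass only at 1 and 7
print(distribution(consensus))  # mass only at 4
```

Reporting the full distribution (and the response rate), as the article recommends, distinguishes these two situations; the mean alone cannot.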
We argue here, and the literature shows, that students are in a good position to evaluate some aspects of teaching, but SET are at best tenuously connected to teaching effectiveness (defining and measuring teaching effectiveness are knotty problems in themselves; we discuss this below). Other ways of evaluating teaching can be combined with student comments to produce a more reliable and meaningful composite. We make recommendations regarding the use of SET and discuss new policies implemented at
What mathematicians, scientists, engineers and statisticians mean by ‘inverse problem’ differs. For a statistician, an inverse problem is an inference or estimation problem. The data are finite in number and contain errors, as they do in classical estimation or inference problems, and the unknown typically is infinite dimensional, as it is in nonparametric regression. The additional complication in an inverse problem is that the data are only indirectly related to the unknown. Canonical abstract formulations of statistical estimation problems subsume this complication by allowing probability distributions to be indexed in more-or-less arbitrary ways by parameters, which can be infinite dimensional. Standard statistical concepts, questions and considerations such as bias, variance, mean-squared error, identifiability, consistency, efficiency and various forms of optimality apply to inverse problems. This paper discusses inverse problems as statistical estimation and inference problems, and points to the literature for a variety of techniques and results. It shows how statistical measures of performance apply to techniques used in practical inverse problems, such as regularization, maximum penalized likelihood, Bayes estimation and the Backus–Gilbert method. The paper generalizes results of Backus and Gilbert characterizing parameters in inverse problems that can be estimated with finite bias. It also establishes general conditions under which parameters in inverse problems can be estimated consistently.
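As a concrete illustration of the regularization idea mentioned above, here is a minimal sketch of Tikhonov-regularized estimation for a discretized linear inverse problem, assuming NumPy; the Gaussian forward operator, grid size, noise level, and function name are all invented for illustration, not taken from the paper:

```python
import numpy as np

# Discretized ill-posed linear inverse problem: data d = G m + noise,
# where G is a smoothing (Gaussian blur) operator, so the data are only
# indirectly related to the unknown m and naive least squares amplifies noise.
rng = np.random.default_rng(0)
n = 50
x = np.linspace(0, 1, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
G /= G.sum(axis=1, keepdims=True)          # each datum averages nearby values

m_true = np.sin(2 * np.pi * x)             # the unknown "model"
d = G @ m_true + 0.01 * rng.standard_normal(n)   # indirect, noisy data

def tikhonov(G, d, alpha):
    """Estimate minimizing ||G m - d||^2 + alpha ||m||^2 (ridge penalty)."""
    k = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(k), G.T @ d)

m_naive = tikhonov(G, d, alpha=1e-12)      # essentially unregularized
m_reg = tikhonov(G, d, alpha=1e-3)         # regularized

err_naive = np.linalg.norm(m_naive - m_true)
err_reg = np.linalg.norm(m_reg - m_true)
print(f"naive error {err_naive:.2f} vs regularized error {err_reg:.2f}")
```

The penalty introduces bias but sharply reduces variance, so the regularized estimate is far closer to the truth than the naive one; the statistical framing in the paper is exactly about quantifying this bias-variance trade-off for such estimators.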