SciPy is an open-source scientific computing library for the Python programming language. Since its initial release in 2001, SciPy has become a de facto standard for leveraging scientific algorithms in Python, with over 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories and millions of downloads per year. In this work, we provide an overview of the capabilities and development practices of SciPy 1.0 and highlight some recent technical developments.
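To make the scope of the library concrete, here is a minimal sketch (not drawn from the paper itself) showing two of SciPy's core capabilities, numerical integration and local optimization, via `scipy.integrate.quad` and `scipy.optimize.minimize`:

```python
import numpy as np
from scipy import integrate, optimize

# Integrate sin(x) over [0, pi]; quad returns (value, error estimate).
value, err = integrate.quad(np.sin, 0, np.pi)

# Minimize a simple quadratic; minimize returns an OptimizeResult
# whose .x attribute holds the located minimizer.
res = optimize.minimize(lambda x: (x[0] - 3.0) ** 2 + 1.0, x0=[0.0])

print(value)     # close to 2.0
print(res.x[0])  # close to 3.0
```

Both functions shown are part of SciPy's public API; the specific integrand and objective are illustrative choices only.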
Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves1 and in the first imaging of a black hole2. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis.
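The "simple and powerful programming paradigm" the abstract refers to rests on vectorization and broadcasting. As a brief illustration (the arrays and shapes here are arbitrary examples, not taken from the paper), a column vector and a row vector combine elementwise into a full table without any explicit Python loop:

```python
import numpy as np

# Broadcasting: a (3, 1) column against a (1,)-compatible row gives (3, 4).
a = np.arange(3).reshape(3, 1)   # [[0], [1], [2]]
b = np.arange(4)                 # [0, 1, 2, 3]
table = a * b                    # outer product via broadcasting

# Vectorized reduction: one call replaces a loop over rows.
row_sums = table.sum(axis=1)

print(table.shape)   # (3, 4)
print(row_sums)      # [0 6 12]
```

The same two mechanisms, elementwise operations over broadcast shapes and axis-wise reductions, underlie most array-programming idioms in the scientific Python stack.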
The affiliation for Evgeni Burovski was given as Higher School of Economics; the correct affiliation is National Research University, Higher School of Economics. In Box 1, "SciPy is an open-source package that builds on the strengths of Python and Numeric, providing a wide range of fast scientific and numeric functionality" was used as the box title; this has been moved to the beginning of the box text and a new title has been provided: "Excerpt from the SciPy 0.1 release announcement (typos corrected), posted 20 August 2001 on the Python-list mailing list." From the original first sentence of this box, "(text following the % symbol indicates that a typo in the original text has been corrected in the version reproduced here)" has been deleted, and "% hanker to Hankel" and "% Netwon to Newton" have been deleted from the ends of the special functions row and the optimization row, respectively. In the first sentence of the ndimage section of Box 2, "nonlinear filter" has been changed to plural. At the end of the first paragraph of the section "SciPy matures," "The library was expanded carefully, with the patience affordable in open-source projects and via best practices common in industry" has been changed to "The library was expanded carefully, with the patience affordable in open-source projects and via best practices, which are increasingly common in the scientific Python ecosystem and industry." In Table 2, "Inequality constraint" has been changed to plural. In the "Nonlinear optimization: global minimization" section, "scipy.optimize.differentialevolution" has been changed to "scipy.optimize.differential_evolution". In the first sentence of the section "Maintainers and contributors," "SciPy developer guide" has been changed to "SciPy contributor guide" and the URL has been changed from
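The corrected name `scipy.optimize.differential_evolution` refers to SciPy's stochastic global minimizer. A minimal sketch of its use (the objective function and bounds below are illustrative assumptions, not taken from the corrected paper):

```python
import numpy as np
from scipy.optimize import differential_evolution

# A multimodal test objective (1-D Rastrigin): many local minima,
# global minimum of 0 at x = 0.
def rastrigin(x):
    return x[0] ** 2 - 10 * np.cos(2 * np.pi * x[0]) + 10

# Differential evolution searches within box bounds; seed fixes the
# random population for reproducibility.
result = differential_evolution(rastrigin, bounds=[(-5.12, 5.12)], seed=0)

print(result.x)    # near [0.0]
print(result.fun)  # near 0.0
```

Because the search is bounded and population-based, it can escape the local minima that would trap a purely local method started from a poor initial guess.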
During the last decade, Python (an interpreted, high-level programming language) has arguably become the de facto standard for exploratory, interactive, and computationally driven scientific research. This issue discusses the advantages of Python for scientific research and presents several of the core Python libraries and tools used in scientific research. While the articles in the present issue are self-contained, they nicely complement the articles in the May/June 2007 special issue of CiSE titled "Python: Batteries Included."
Computational neuroscience is a subfield of neuroscience that develops models to integrate complex experimental data in order to understand brain function. To constrain and test computational models, researchers need access to a wide variety of experimental data. Much of these data are not readily accessible because neuroscientists fall into separate communities that study the brain at different levels and have not been motivated to provide data to researchers outside their community. To foster sharing of neuroscience data, a workshop was held in 2007, bringing together experimental and theoretical neuroscientists, computer scientists, legal experts and governmental observers. Computational neuroscience was recommended as an ideal field for focusing data sharing, and specific methods, strategies and policies were suggested for achieving it. A new funding area in the NSF/NIH Collaborative Research in Computational Neuroscience (CRCNS) program has been established to support data sharing, guided in part by the workshop recommendations. The new funding area is dedicated to the dissemination of high-quality data sets with maximum scientific value for computational neuroscience. The first round of the CRCNS data sharing program supports the preparation of data sets which will be publicly available in 2008. These include electrophysiology and behavioral (eye movement) data described towards the end of this article.
Peer-reviewed publications are the primary mechanism for sharing scientific results. The current peer-review process is, however, fraught with many problems that undermine the pace, validity, and credibility of science. We highlight five salient problems: (1) reviewers are expected to have comprehensive expertise; (2) reviewers do not have sufficient access to methods and materials to evaluate a study; (3) reviewers are neither identified nor acknowledged; (4) there is no measure of the quality of a review; and (5) reviews take a lot of time, and once submitted cannot evolve. We propose that these problems can be resolved by making the following changes to the review process. Distributing reviews to many reviewers would allow each reviewer to focus on portions of the article that reflect the reviewer's specialty or area of interest and place less of a burden on any one reviewer. Providing reviewers materials and methods to perform comprehensive evaluation would facilitate transparency, greater scrutiny, and replication of results. Acknowledging reviewers makes it possible to quantitatively assess reviewer contributions, which could be used to establish the impact of the reviewer in the scientific community. Quantifying review quality could help establish the importance of individual reviews and reviewers as well as the submitted article. Finally, we recommend expediting post-publication reviews and allowing for the dialog to continue and flourish in a dynamic and interactive manner. We argue that these solutions can be implemented by adapting existing features from open-source software management and social networking technologies. We propose a model of an open, interactive review system that quantifies the significance of articles, the quality of reviews, and the reputation of reviewers.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.