Summary Understanding the detailed dynamics of neuronal networks will require the simultaneous measurement of spike trains from hundreds of neurons (or more). Currently, approaches to extracting spike times and labels from raw data are time consuming, lack standardization, and involve manual intervention, making it difficult to maintain data provenance and assess the quality of scientific results. Here, we describe an automated clustering approach and associated software package that addresses these problems and provides novel cluster quality metrics. We show that our approach has accuracy comparable to or exceeding that achieved using manual or semi-manual techniques, with desktop CPU runtimes faster than acquisition time for up to hundreds of electrodes. Moreover, a single choice of parameters in the algorithm is effective for a variety of electrode geometries and across multiple brain regions. This algorithm has the potential to enable reproducible and automated spike sorting of larger scale recordings than is currently possible.
Integral equation methods for the solution of partial differential equations, when coupled with suitable fast algorithms, yield geometrically flexible, asymptotically optimal and well-conditioned schemes in either interior or exterior domains. The practical application of these methods, however, requires the accurate evaluation of boundary integrals with singular, weakly singular or nearly singular kernels. Historically, these issues have been handled either by low-order product integration rules (computed semi-analytically), by singularity subtraction/cancellation, by kernel regularization and asymptotic analysis, or by the construction of special-purpose "generalized Gaussian quadrature" rules. In this paper, we present a systematic, high-order approach that works for any singularity (including hypersingular kernels), based only on the assumption that the field induced by the integral operator is locally smooth when restricted to either the interior or the exterior. Discontinuities in the field across the boundary are permitted. The scheme, denoted QBX (quadrature by expansion), is easy to implement and compatible with fast hierarchical algorithms such as the fast multipole method. We include accuracy tests for a variety of integral operators in two dimensions on smooth and corner domains.
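The core QBX idea can be illustrated in a toy setting (this is a minimal sketch, not the paper's scheme): to evaluate a layer potential at a point on the boundary, where the integrand is singular, one instead forms a local expansion of the potential about a center pushed a small distance off the surface, where all quadrature nodes are well separated, and then evaluates that expansion back at the on-surface target. The setting below, a Laplace single-layer potential on the unit circle with density cos(t) and kernel log|x−y|, is an assumption chosen because its on-surface value at angle 0 is known analytically to be −π; the offset 0.2 and expansion order 8 are illustrative choices.

```python
import numpy as np

# QBX toy: on-surface evaluation of a 2D Laplace single-layer potential.
# Boundary: unit circle; density sigma(t) = cos(t); kernel log|x - y|.
# The classical Fourier identity log|e^{it} - e^{is}| = -sum_n cos(n(t-s))/n
# gives the exact on-surface value -pi*cos(s); at s = 0 this is -pi.
N, p = 400, 8                      # quadrature nodes, expansion order
t = 2*np.pi*np.arange(N)/N
zeta = np.exp(1j*t)                # source nodes on the boundary, as complex numbers
w = (2*np.pi/N)*np.cos(t)          # trapezoid weights times the density
c = 0.8 + 0j                       # expansion center, pushed distance 0.2 inside
targ = 1.0 + 0j                    # on-surface target (naive quadrature blows up here)

# Local expansion about c, from log(z - zeta) = log(c - zeta)
#   + sum_{k>=1} (-1)^(k+1)/k * ((z - c)/(c - zeta))^k
a = np.empty(p + 1, dtype=complex)
a[0] = np.sum(w*np.log(c - zeta))
for k in range(1, p + 1):
    a[k] = np.sum(w*(-1)**(k + 1)/k*(c - zeta)**(-k))

u_qbx = np.sum(a*(targ - c)**np.arange(p + 1)).real   # close to -pi
```

Note that the direct trapezoid sum is useless here, since the target coincides with a quadrature node and log|targ − zeta| diverges; the off-surface expansion sidesteps the singularity entirely.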
Summary The brain is a massive neuronal network, organized into anatomically distributed sub-circuits, with functionally relevant activity occurring at timescales ranging from milliseconds to months. Current methods to monitor neural activity, however, lack the necessary conjunction of anatomical spatial coverage, temporal resolution, and long-term stability to measure this distributed activity. Here we introduce a large-scale, multi-site, extracellular recording platform that integrates polymer electrodes with a modular stacking headstage design supporting up to 1024 recording channels in freely behaving rats. This system can support months-long recordings from hundreds of well-isolated units across multiple brain regions. Moreover, these recordings are stable enough to track large numbers of single units for over a week. This platform enables large-scale electrophysiological interrogation of the fast dynamics and long-timescale evolution of anatomically distributed circuits, and thereby provides a new tool for understanding brain activity.
The method of fundamental solutions (MFS) is a popular tool to solve Laplace and Helmholtz boundary value problems. Its main drawback is that it often leads to ill-conditioned systems of equations. In this paper, we investigate for the interior Helmholtz problem on analytic domains how the singularities (charge points) of the MFS basis functions have to be chosen such that approximate solutions can be represented by the MFS basis in a numerically stable way. For Helmholtz problems on the unit disc we give a full analysis which includes the high frequency (short wavelength) limit. For more difficult and nonconvex domains such as crescents we demonstrate how the right choice of charge points is connected to how far into the complex plane the solution of the boundary value problem can be analytically continued, which in turn depends on both domain shape and boundary data. Using this we develop a recipe for locating charge points which allows us to reach error norms of typically 10⁻¹¹ on a wide variety of analytic domains. At high frequencies, only 3 points per wavelength are needed, which compares very favorably to boundary integral methods.
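The MFS mechanics can be sketched in the simplest case (a toy Laplace example, not the paper's Helmholtz recipe): represent the solution as a sum of fundamental solutions centered at charge points placed on a curve outside the domain, and fit the Dirichlet data by least squares. The charge radius R = 1.8, the point counts, and the test data Re(z²) below are illustrative assumptions.

```python
import numpy as np

# Toy MFS for the interior Laplace Dirichlet problem on the unit disc:
# u(x) = sum_k c_k * log|x - y_k|, with charge points y_k on a circle of
# radius R > 1 outside the domain. Despite the ill-conditioning of A,
# least squares recovers the field to high accuracy when the data is smooth.
M, N, R = 120, 60, 1.8
bdry = np.exp(2j*np.pi*np.arange(M)/M)     # collocation points on the unit circle
chg = R*np.exp(2j*np.pi*np.arange(N)/N)    # charge points outside the domain

g = (bdry**2).real                         # Dirichlet data of the harmonic u = Re(z^2)
A = np.log(np.abs(bdry[:, None] - chg[None, :]))
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

z = 0.3 + 0.2j                             # interior evaluation point
u = np.log(np.abs(z - chg)) @ coef         # exact value: Re(z^2) = 0.05
```

Pushing the charge points further out (larger R) improves the convergence rate but worsens the conditioning of A; managing that trade-off, especially for Helmholtz and nonconvex domains, is precisely the subject of the paper.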
The nonuniform fast Fourier transform (NUFFT) generalizes the FFT to off-grid data. Its many applications include image reconstruction, data analysis, and the numerical solution of differential equations. We present FINUFFT, an efficient parallel library for type 1 (nonuniform to uniform), type 2 (uniform to nonuniform), or type 3 (nonuniform to nonuniform) transforms, in dimensions 1, 2, or 3. It uses minimal RAM, requires no precomputation or plan steps, and has a simple interface to several languages. We perform the expensive spreading/interpolation between nonuniform points and the fine grid via a simple new kernel, the "exponential of semicircle" e^{β√(1−x²)} on x ∈ [−1, 1], in a cache-aware load-balanced multithreaded implementation. The deconvolution step requires the Fourier transform of the kernel, for which we propose efficient numerical quadrature. For types 1 and 2, rigorous error bounds asymptotic in the kernel width approach the fastest known exponential rate, namely that of the Kaiser-Bessel kernel. We benchmark against several popular CPU-based libraries, showing favorable speed and memory footprint, especially in three dimensions when high accuracy and/or clustered point distributions are desired.
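The two kernel-related ingredients mentioned above can be sketched directly (a minimal illustration, not FINUFFT's implementation): evaluating the compactly supported "exponential of semicircle" kernel, and computing its Fourier transform by numerical quadrature, here plain Gauss-Legendre, since no closed form is known. The shape parameter β = 16 is an arbitrary illustrative value; in the library, β is chosen from the kernel width and requested tolerance.

```python
import numpy as np

beta = 16.0   # illustrative shape parameter (FINUFFT derives it from the kernel width)

def es_kernel(x, beta=beta):
    # "exponential of semicircle" spreading kernel, supported on [-1, 1]
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(beta*np.sqrt(1 - x[inside]**2))
    return out

def es_kernel_ft(k, n=100, beta=beta):
    # phi_hat(k) = int_{-1}^{1} phi(x) e^{-ikx} dx via Gauss-Legendre quadrature;
    # the kernel is even, so its Fourier transform is real (cosine transform)
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w*es_kernel(x)*np.cos(k*x))
```

The kernel peaks at e^β at the origin and decays to 1 at the support edge, so the relative jump at ±1 is exponentially small in β; this near-compact smoothness is what allows its error rate to approach that of the Kaiser-Bessel kernel.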
Abstract. Boundary integral equations and Nyström discretization provide a powerful tool for the solution of Laplace and Helmholtz boundary value problems. However, often a weakly singular kernel arises, in which case specialized quadratures that modify the matrix entries near the diagonal are needed to reach a high accuracy. We describe the construction of four different quadratures which handle logarithmically-singular kernels. Only smooth boundaries are considered, but some of the techniques extend straightforwardly to the case of corners. Three are modifications of the global periodic trapezoid rule, due to Kapur-Rokhlin, to Alpert, and to Kress. The fourth is a modification to a quadrature based on Gauss-Legendre panels due to Kolm-Rokhlin; this formulation allows adaptivity. We compare in numerical experiments the convergence of the four schemes in various settings, including low- and high-frequency planar Helmholtz problems, and 3D axisymmetric Laplace problems. We also find striking differences in performance in an iterative setting. We summarize the relative advantages of the schemes.
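The three global schemes all build on the periodic trapezoid rule, whose key property, spectral accuracy for smooth periodic integrands, is what the corrections aim to preserve in the presence of the log singularity. A minimal demonstration of the unmodified rule (the integrand and node count are illustrative choices, not from the paper):

```python
import numpy as np

def ptr(f, n):
    # periodic trapezoid rule on [0, 2*pi): equal weights at n equispaced nodes
    t = 2*np.pi*np.arange(n)/n
    return (2*np.pi/n)*np.sum(f(t))

# int_0^{2pi} exp(cos t) dt = 2*pi*I_0(1), with I_0 the modified Bessel function
exact = 2*np.pi*1.2660658777520084
approx = ptr(lambda t: np.exp(np.cos(t)), 20)   # machine accuracy with only 20 nodes
```

For a smooth singularity-free integrand like this, 20 nodes already reach machine precision; a log-singular kernel destroys this rate, which is why the Kapur-Rokhlin, Alpert, and Kress modifications of the near-diagonal weights are needed.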
This paper presents a direct solution technique for the scattering of time-harmonic waves from a bounded region of the plane in which the wavenumber varies smoothly in space. The method constructs the interior Dirichlet-to-Neumann (DtN) map for the bounded region via bottom-up recursive merges of (discretizations of) certain boundary operators on a quadtree of boxes. These operators take the form of impedance-to-impedance (ItI) maps. Since ItI maps are unitary, this formulation is inherently numerically stable, and is immune to problems of artificial internal resonances. The ItI maps on the smallest (leaf) boxes are built by spectral collocation on tensor-product grids of Chebyshev nodes. At the top level the DtN map is recovered from the ItI map and coupled to a boundary integral formulation of the free space exterior problem, to give a provably second kind equation. Numerical results indicate that the scheme can solve challenging problems 70 wavelengths on a side to 9-digit accuracy with 4 million unknowns, in under 5 minutes on a desktop workstation. Each additional solve corresponding to a different incident wave (right-hand side) then requires only 0.04 seconds.
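The leaf-level building block, spectral collocation on Chebyshev grids, can be illustrated in one dimension (a sketch of the ingredient only, not the 2D solver or the ItI merge machinery): a Chebyshev differentiation matrix turns a Helmholtz-type equation u'' + k²u = f into a dense linear system, with boundary rows replaced by Dirichlet conditions. The wavenumber k = 2 and the manufactured solution sin(3x) are assumptions for this example.

```python
import numpy as np

def cheb(n):
    # Chebyshev differentiation matrix and nodes on [-1, 1]
    # (standard construction, cf. Trefethen, "Spectral Methods in MATLAB")
    x = np.cos(np.pi*np.arange(n + 1)/n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0])*(-1)**np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    D = np.outer(c, 1/c)/(X - X.T + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))        # diagonal from exact differentiation of 1
    return D, x

n, k = 24, 2.0
D, x = cheb(n)
A = D @ D + k**2*np.eye(n + 1)         # collocated Helmholtz operator
f = (k**2 - 9.0)*np.sin(3*x)           # manufactured RHS for u = sin(3x)
A[0, :] = 0;  A[0, 0] = 1;   f[0] = np.sin(3*x[0])     # Dirichlet row at x = 1
A[-1, :] = 0; A[-1, -1] = 1; f[-1] = np.sin(3*x[-1])   # Dirichlet row at x = -1
u = np.linalg.solve(A, f)
err = np.max(np.abs(u - np.sin(3*x)))  # spectral accuracy from ~24 nodes
```

The 2D leaf computation is the tensor-product analogue of this solve; the paper's contribution lies in combining such leaves through unitary ItI maps rather than in the collocation itself.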
In the three decades since its introduction, resource selection analysis (RSA) has become a widespread method for analyzing spatial patterns of animal relocations obtained from telemetry studies. Recently, mechanistic home range models have been proposed as an alternative framework for studying patterns of animal space-use. In contrast to RSA models, mechanistic home range models are derived from underlying mechanistic descriptions of individual movement behavior and yield spatially explicit predictions for patterns of animal space-use. In addition, their mechanistic underpinning means that, unlike RSA, mechanistic home range models can also be used to predict changes in space-use following perturbation. In this paper, we develop a formal reconciliation between these two methods of home range analysis, showing how differences in the habitat preferences of individuals give rise to spatially explicit patterns of space-use. The resulting unified framework combines the simplicity of resource selection analysis with the spatially explicit and predictive capabilities of mechanistic home range models.