A model for the development of spatiotemporal receptive fields of simple cells in the visual cortex is proposed. The model is based on the 1990 hypothesis of Saul and Humphrey that the convergence of four types of input onto a cortical cell, viz. non-lagged ON and OFF inputs and lagged ON and OFF inputs, underlies the spatial and temporal structure of the receptive fields. It therefore explains both orientation and direction selectivity of simple cells. The response properties of the four types of input are described by the product of linear spatial and temporal response functions. Extending the 1994 model of one of the authors (K.D. Miller), we describe the development of spatiotemporal receptive fields as a Hebbian learning process that takes into account not only spatial but also temporal correlations between the different inputs. We derive the correlation functions that drive the development both before and after eye-opening, and demonstrate how the joint development of orientation and direction selectivity can be understood in the framework of correlation-based learning. Our investigation is split into two parts that are presented in two papers. In the first, the model for the response properties and for the development of direction-selective receptive fields is presented. In the second paper we present simulation results that are compared with experimental data, and also provide a first analysis of our model.
In Part I of this article, a correlation-based model for the developmental process of spatiotemporal receptive fields has been introduced. In this model the development is described as an activity-dependent competition between four types of input from the lateral geniculate nucleus onto a cortical cell, viz. non-lagged ON and OFF and lagged ON and OFF inputs. In the present paper, simulation results and a first analysis are presented for this model. We study the developmental process both before and after eye-opening and compare the results with experimental data from reverse correlation measurements. The outcome of the developmental process is determined mainly by the spatial and the temporal correlations between the different inputs. In particular, if the mean correlation between non-lagged and lagged inputs is weak, receptive fields with a widely varying degree of direction selectivity emerge. Moreover, spatiotemporal receptive fields may show rotation of their preferred orientation as a function of response delay. Even if the mean correlation between two types of temporal input is not weak, direction-selective receptive fields may emerge because of an intracortical interaction between different cortical maps. In an environment of moving lines or gratings, direction-selective receptive fields develop only if the distribution of the directions of motion presented during development shows some anisotropy. In this case, a continuous map of preferred direction is also shown to develop.
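The activity-dependent competition described above can be illustrated with a minimal sketch of linear correlation-based Hebbian development (this is an illustration of the general technique, not the authors' implementation; the correlation matrix and all parameters below are hypothetical). Weights grow according to dw/dt = Cw, where C is the input correlation matrix, so under multiplicative normalization the receptive field converges toward the principal eigenvector of C:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # number of geniculate inputs onto one cortical cell (assumed)

# Hypothetical symmetric, positive semi-definite correlation matrix
# standing in for the spatial/temporal input correlations.
A = rng.normal(size=(N, N))
C = A @ A.T / N

w = rng.normal(size=N)
eta = 0.05                      # learning rate (assumed)
for _ in range(2000):
    w += eta * C @ w            # Hebbian growth driven by correlations
    w /= np.linalg.norm(w)      # multiplicative normalization

# The weights align with the leading eigenvector of C.
v = np.linalg.eigh(C)[1][:, -1]
alignment = abs(w @ v)
print(round(alignment, 3))
```

In the full model, the structure of C (weak vs. strong correlation between lagged and non-lagged inputs) is what determines whether the dominant eigenvector corresponds to a direction-selective receptive field.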
Abstract. For current surgical navigation systems, optical tracking is state of the art. The accuracy of these tracking systems is currently determined statically, for the case of full visibility of all tracking targets. We propose a dynamic determination of the accuracy based on the visibility and geometry of the tracking setup. This real-time estimation of accuracy has a multitude of applications. For multiple-camera systems it allows reducing line-of-sight problems and guaranteeing a certain accuracy. The visualization of these accuracies allows surgeons to perform procedures taking the tracking accuracy into account. It also allows engineers to design tracking setups interactively while guaranteeing a certain accuracy. Our model is an extension of the state-of-the-art models of Fitzpatrick et al. [1] and Hoff et al. [2]. We model the error in the camera sensor plane. The error is propagated using the internal camera parameters, camera poses, tracking target poses, target geometry and marker visibility, in order to estimate the final accuracy of the tracked instrument.
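The propagation step can be sketched as standard first-order covariance propagation (a minimal illustration of the general technique, not the authors' model; the Jacobian entries and noise level below are hypothetical). A zero-mean sensor-plane error with covariance Σ_in is pushed through the Jacobian J of the measurement function, giving the covariance of the estimated target position as Σ_out = (Jᵀ Σ_in⁻č J)⁻č for a weighted least-squares estimate:

```python
import numpy as np

sigma_px = 0.1  # assumed sensor-plane noise, pixels

# Hypothetical stacked Jacobian: two cameras, each contributing two
# image coordinates, differentiated w.r.t. the 3D target position.
J = np.array([[800.0,   0.0, -50.0],
              [  0.0, 800.0, -30.0],
              [790.0,  10.0,  60.0],
              [ -5.0, 805.0,  40.0]])

Sigma_in = sigma_px**2 * np.eye(4)              # sensor-plane covariance
Sigma_out = np.linalg.inv(J.T @ np.linalg.inv(Sigma_in) @ J)

# RMS positional uncertainty of the tracked point.
rms = float(np.sqrt(np.trace(Sigma_out)))
print(rms)
```

Dropping rows of J for occluded markers is what makes the accuracy estimate dynamic: fewer visible markers means a worse-conditioned Jacobian and a larger Σ_out.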
This paper presents the current state of a computational steering system for interactive computational fluid dynamics (CFD) simulations, allowing engineers to interactively simulate indoor climate and to evaluate human comfort. The tools presented support cooperative planning and design by providing means for interactively adding, removing and modifying geometry and boundary conditions online during a CFD simulation. To ensure interactivity and short-latency updates even for high-resolution runs, the parallel Lattice-Boltzmann-based simulation kernel is optimized for high-performance computing systems. Emphasis is placed on the computational steering architecture, connecting a supercomputer with one or more visualization workstations. In particular, we show how high-performance communication between the simulation and the visualization or steering front-end can be achieved while preserving a flexible mechanism for on-the-fly attachment of multiple cooperation clients.
In most close-range photogrammetry applications, the cameras are modelled as imaging systems with perspective projection combined with the lens distortion correction proposed by Brown in 1971. In the 1980s, the calibration of video cameras received considerable attention. This required compensation for further systematic effects caused by the digitization of the analogue image signal. Modelling the imaging process in that manner has become the widely applied standard since then. To take advantage of the increased field of view of individual cameras, the use of wide-angle as well as fisheye lenses became common in computer vision and close-range photogrammetry, again requiring appropriate modelling of the imaging process to ensure high accuracies. A.R.T. provides real-time tracking systems with infra-red cameras, which are in some cases equipped with short focal length lenses for the purpose of increased fields of view, resulting in larger trackable object volumes. Unfortunately, the lens distortion of these cameras reaches magnitudes which cannot be sufficiently modelled with the customary Brown model, as the computed correction is no longer applicable, mainly at high eccentricities such as the image corners. Considerations to avoid modelling these lenses as fisheye projections led to an alternative and rather pragmatic approach, in which the distortion model is extended by a fourth radial distortion coefficient. Due to numeric instabilities, a stepwise camera calibration is required to achieve convergence in the bundle adjustment process. This paper presents the modified lens distortion model, describes the stepwise calibration procedure and compares results with respect to the conventional approach. The results are also compared to the approach wherein the camera lens is modelled as a fisheye projection.
The introduction of a fourth radial lens distortion parameter allows the correction of lens distortion effects over the full sensor area of wide angle lenses, which increases the usable field of view of that specific camera and therefore the size of the trackable observed object volume. The approaches with the extended lens distortion model and the fisheye projection were successfully implemented and tested, and are on target to become part of the A.R.T. product range.
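The extended radial model described above can be sketched as the conventional Brown polynomial with one additional term (an illustrative sketch: the coefficient values below are hypothetical, not calibration results from this work):

```python
# Extended Brown radial distortion: the conventional coefficients
# (k1, k2, k3) are augmented by a fourth (k4), extending the valid
# correction range toward the image corners.
def radial_distortion(x, y, k1, k2, k3, k4):
    """Apply the radial distortion polynomial to normalized image
    coordinates (x, y) and return the corrected coordinates."""
    r2 = x**2 + y**2
    factor = 1.0 + k1*r2 + k2*r2**2 + k3*r2**3 + k4*r2**4
    return x * factor, y * factor

# Hypothetical coefficients: near the image centre the correction is
# tiny; at high eccentricities (large r) the k3 and k4 terms dominate.
xc, yc = radial_distortion(0.01, 0.01, -0.2, 0.05, -0.01, 0.002)
xe, ye = radial_distortion(0.9, 0.9, -0.2, 0.05, -0.01, 0.002)
print(round(xc, 6), round(xe, 4))
```

The high powers of r² in the k3 and k4 terms make the normal equations poorly conditioned, which is why the stepwise calibration procedure mentioned above is needed for the bundle adjustment to converge.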