Most studies of neural correlates of spatial navigation are restricted to small arenas (≤ 1 m) because of the limits imposed by recording cables. New wireless recording systems have a larger recording range; however, these systems lack the ability to track animals over large areas, constraining the size of the arena. We developed and benchmarked an open-source, scalable multi-camera tracking system based on low-cost hardware. This "Picamera system" was used in combination with a wireless recording system to characterize neural correlates of space in environments up to 16.5 m in size. The Picamera system showed substantially better temporal accuracy than a popular commercial system. An explicit comparison of one camera from the Picamera system with a camera from the commercial system showed improved accuracy in estimating the spatial firing characteristics and head direction tuning of neurons. This improved temporal accuracy is crucial for accurately aligning videos from multiple cameras in large spaces and for characterizing spatially modulated cells in a large environment.
The vertical intensities of mesons penetrating 5.25 and 30 cm of lead have been measured to an altitude of 15,000 feet and of those penetrating 20 cm of lead to an altitude of 32,000 feet (275 millibars pressure) by coincidence counter telescopes sent up in an aeroplane from Bangalore, magnetic latitude 3.3°N. A comparison of our results with those of Schein, Jesse, and Wollan indicates that the latitude effect between 3.3°N and 52°N of the vertical intensity of mesons shows no marked increase even to altitudes corresponding to pressures of 275 millibars. This is in striking contrast with the total intensity, which shows a very pronounced increase of latitude effect to these heights.
Understanding how the brain learns throughout a lifetime remains a long-standing challenge. In artificial neural networks (ANNs), incorporating novel information too rapidly results in catastrophic interference, i.e., abrupt loss of previously acquired knowledge. Complementary Learning Systems Theory (CLST) suggests that new memories can be gradually integrated into the neocortex by interleaving new memories with existing knowledge. This approach, however, has been assumed to require interleaving all existing knowledge every time something new is learned, which is implausible because it is time-consuming and requires a large amount of data. We show that deep, nonlinear ANNs can learn new information by interleaving only a subset of old items that share substantial representational similarity with the new information. By using such similarity-weighted interleaved learning (SWIL), ANNs can learn new information rapidly with a similar accuracy level and minimal interference, while using a much smaller number of old items presented per epoch (fast and data-efficient). SWIL is shown to work with various standard classification datasets (Fashion-MNIST, CIFAR10, and CIFAR100), deep neural network architectures, and in sequential learning frameworks. We show that data efficiency and speedup in learning new items increase roughly proportionally to the number of nonoverlapping classes stored in the network, which implies an enormous possible speedup in the human brain, given the large number of separate categories it encodes. Finally, we propose a theoretical model of how SWIL might be implemented in the brain.
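The core of the selection step described above can be sketched in a few lines: pick the stored items whose representations are most similar to the new class, and interleave only those. This is a minimal illustrative sketch, not the authors' code; the function name, the use of cosine similarity, and the top-k selection rule are assumptions for illustration.

```python
import numpy as np

def similarity_weighted_subset(old_reps, old_labels, new_reps, k):
    """Select the k stored items most similar (cosine) to the new-class centroid.

    Sketch of the selection step in similarity-weighted interleaved learning
    (SWIL): instead of replaying the entire old dataset, only this subset is
    interleaved with the new items during training.
    """
    centroid = new_reps.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    norms = np.linalg.norm(old_reps, axis=1, keepdims=True)
    sims = (old_reps / norms) @ centroid           # cosine similarity to new class
    top = np.argsort(sims)[::-1][:k]               # indices of most similar old items
    return old_reps[top], old_labels[top]

# Toy usage: two old clusters; new items overlap with cluster 1,
# so the selected replay subset should come mostly from class 1.
rng = np.random.default_rng(0)
old = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(5, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
new = rng.normal(5, 1, (10, 8))
subset_x, subset_y = similarity_weighted_subset(old, labels, new, k=20)
```

In a full sequential-learning loop, `subset_x`/`subset_y` would be mixed into each training epoch alongside the new items, shrinking the replay budget per epoch.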
With the advent of neural networks and their subfields, such as deep neural networks and convolutional neural networks, it is possible to make text classification predictions with high accuracy. Among the many variants of naive Bayes, multinomial naive Bayes is used for text classification. Many attempts have been made to develop an algorithm that retains the simplicity of multinomial naive Bayes while also incorporating feature dependency. One such effort is structure extended multinomial naive Bayes, which uses one-dependence estimators to incorporate dependencies: a one-dependence estimator takes one of the attributes as a parent and treats all other attributes as its children. This chapter proposes self structure extended multinomial naive Bayes, a hybrid model combining multinomial naive Bayes and structure extended multinomial naive Bayes. It aims to correctly classify the instances that structure extended multinomial naive Bayes misclassifies when there is no direct dependency between attributes.
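The base classifier the chapter builds on, multinomial naive Bayes, can be written compactly for word-count features. This is a minimal sketch assuming Laplace smoothing; the structure-extended variant described above would additionally condition each attribute on one parent attribute (the one-dependence estimator), which is omitted here, and the class and variable names are illustrative.

```python
import numpy as np

class MultinomialNB:
    """Minimal multinomial naive Bayes over word-count feature vectors."""

    def fit(self, X, y, alpha=1.0):
        self.classes = np.unique(y)
        self.log_prior = np.log([np.mean(y == c) for c in self.classes])
        # Per-class word counts with Laplace (add-alpha) smoothing.
        counts = np.array([X[y == c].sum(axis=0) for c in self.classes])
        smoothed = counts + alpha
        self.log_like = np.log(smoothed / smoothed.sum(axis=1, keepdims=True))
        return self

    def predict(self, X):
        # log P(c) + sum_w count(w) * log P(w | c), argmax over classes.
        scores = X @ self.log_like.T + self.log_prior
        return self.classes[np.argmax(scores, axis=1)]

# Toy usage: two classes with distinctive vocabulary (columns = words).
X = np.array([[5, 1, 0], [4, 2, 0], [0, 1, 5], [0, 2, 4]])
y = np.array([0, 0, 1, 1])
model = MultinomialNB().fit(X, y)
pred = model.predict(np.array([[3, 0, 0], [0, 0, 3]]))  # → [0 1]
```

The conditional-independence assumption is visible in `predict`: each word contributes its log-likelihood independently, which is exactly the limitation the one-dependence extension targets.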