Within the space of this collective image/text article, 18 photographic imagemakers and 4 respondents consider deeply and dialogically a quote from William Ayers' 2016 book Teaching with Conscience in an Imperfect World: An Invitation. The resulting constellation of images and words (1) realizes a space within which works of art, specifically photographs, operate as centers of meaning to generate educational implications, and (2) theorizes a pedagogy that resists unilateral prescriptions and is instead anchored in openness, expansion, and individualization. The paper begins with a few short pieces from Sarah Pfohl, including an overview of Ayers' book and ideas drawn from writings on progressive education, object-based teaching and learning, and close/slow looking, which position works of art as sites of rich meaning. While contemporary schooling often drives toward monolithic, numerical representations of the learners in its care, the article employs postdigital gestures to argue that learners have more in common with works of art than with numbers, and thus that attention to artworks can open valuable implications for teaching and learning. The diverse group of images that follows offers an emerging portrait of teaching practice as a set of constantly shifting constellations moving across deep time and space, from the intensely specific to the wide. Four response texts then think further about schools, education, and art. Finally, there is a postscript from Bill Ayers himself.
Reservoir computing is a machine learning paradigm that harnesses the transient dynamics of high-dimensional nonlinear systems to process time-series data. Although the paradigm was initially proposed to model information processing in the mammalian cortex, it remains unclear how nonrandom network architectures of the cortex, such as modular architecture, integrate with the biophysics of living neurons to characterize the function of biological neuronal networks (BNNs). Here, we used optogenetics and calcium imaging to record the multicellular responses of cultured BNNs and employed the reservoir computing framework to decode their computational capabilities. Micropatterned substrates were used to embed the modular architecture in the BNNs. We first show that the dynamics of modular BNNs in response to static inputs can be classified with a linear decoder and that the modularity of the BNNs positively correlates with classification accuracy. We then used a timer task to verify that BNNs possess a short-term memory of several hundred milliseconds, and finally show that this property can be exploited for spoken digit classification. Interestingly, BNN-based reservoirs allow categorical learning, wherein a network trained on one dataset can be used to classify separate datasets of the same category. Such classification was not possible when the inputs were decoded directly by a linear decoder, suggesting that BNNs act as a generalization filter that improves reservoir computing performance. Our findings pave the way toward a mechanistic understanding of information representation within BNNs and motivate the future realization of physical reservoir computing systems based on BNNs.