Originally inspired by neurobiology, deep neural network models have become a powerful tool of machine learning and artificial intelligence. They can approximate functions and dynamics by learning from examples. Here we give a brief introduction to neural network models and deep learning for biologists. We introduce feedforward and recurrent networks and explain the expressive power of this modeling framework and the backpropagation algorithm for setting the parameters. Finally, we consider how deep neural network models might help us understand brain computation.
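The ideas above can be sketched concretely. Below is a minimal illustrative example, not taken from the text: a tiny feedforward network trained with backpropagation to fit the XOR function. The architecture (one hidden layer of 8 tanh units), learning rate, and squared-error loss are arbitrary choices for demonstration.

```python
import numpy as np

# Minimal sketch: a feedforward network trained with backpropagation on XOR.
# All sizes and hyperparameters are illustrative choices, not from the text.
rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for step in range(3000):
    # Forward pass: compute predictions layer by layer
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: propagate the error gradient through each layer
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)        # derivative of the sigmoid output
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)       # derivative of tanh

    # Gradient-descent update of all parameters
    W2 -= lr * (h.T @ dz2); b2 -= lr * dz2.sum(axis=0)
    W1 -= lr * (X.T @ dz1); b1 -= lr * dz1.sum(axis=0)
```

The same forward/backward pattern scales to deep networks; modern frameworks compute the backward pass automatically, but the underlying chain-rule bookkeeping is the same as in this sketch.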
Allicin (diallyl thiosulfinate) is the best-known biologically active component of freshly crushed garlic extract. We developed a novel, simple method to isolate active allicin, which yielded a compound stable in aqueous solution and amenable to use in in vitro and in vivo studies. We focused on the in vitro effects of allicin on cell proliferation in the colon cancer cell lines HCT-116, LS174T, HT-29, and Caco-2 and assessed the underlying mechanisms. This allicin preparation exerted a time- and dose-dependent cytostatic effect on these cells at concentrations ranging from 6.2 to 310 μM. Treatment with allicin resulted in apoptotic death of HCT-116 cells, as demonstrated by enhanced hypodiploid DNA content, decreased levels of B-cell lymphoma 2 (Bcl-2), increased levels of Bax, and enhanced release of cytochrome c from mitochondria to the cytosol. Allicin also induced translocation of NF-E2-related factor-2 (Nrf2) to the nuclei of HCT-116 cells. A luciferase reporter gene assay showed that allicin induces Nrf2-mediated luciferase transactivation activity. siRNA knockdown of Nrf2 significantly reduced the capacity of allicin to inhibit HCT-116 proliferation. These results suggest that Nrf2 mediates the allicin-induced apoptotic death of colon cancer cells.
We hardly notice our eye blinks, yet an externally generated retinal interruption of a similar duration is perceptually salient. We examined the neural correlates of this perceptual distinction using intracranially measured ECoG signals from the human visual cortex in 14 patients. In early visual areas (V1 and V2), the disappearance of the stimulus due to either invisible blinks or salient blank video frames ('gaps') led to a similar drop in activity level, followed by a positive overshoot beyond baseline, triggered by stimulus reappearance. Ascending the visual hierarchy, the reappearance-related overshoot gradually subsided for blinks but not for gaps. By contrast, the disappearance-related drop did not follow the perceptual distinction; it was actually slightly more pronounced for blinks than for gaps. These findings suggest that blinks' limited visibility compared with gaps is correlated with suppression of blink-related visual activity transients, rather than with "filling-in" of the occluded content during blinks. DOI: http://dx.doi.org/10.7554/eLife.17243.001
Distinct scientific theories can make similar predictions. To adjudicate between theories, we must design experiments for which the theories make distinct predictions. Here we consider the problem of comparing deep neural networks as models of human visual recognition. To efficiently compare models’ ability to predict human responses, we synthesize controversial stimuli: images for which different models produce distinct responses. We applied this approach to two visual recognition tasks, handwritten digits (MNIST) and objects in small natural images (CIFAR-10). For each task, we synthesized controversial stimuli to maximize the disagreement among models which employed different architectures and recognition algorithms. Human subjects viewed hundreds of these stimuli, as well as natural examples, and judged the probability of presence of each digit/object category in each image. We quantified how accurately each model predicted the human judgments. The best-performing models were a generative analysis-by-synthesis model (based on variational autoencoders) for MNIST and a hybrid discriminative–generative joint energy model for CIFAR-10. These deep neural networks (DNNs), which model the distribution of images, performed better than purely discriminative DNNs, which learn only to map images to labels. None of the candidate models fully explained the human responses. Controversial stimuli generalize the concept of adversarial examples, obviating the need to assume a ground-truth model. Unlike natural images, controversial stimuli are not constrained to the stimulus distribution models are trained on, thus providing severe out-of-distribution tests that reveal the models’ inductive biases. Controversial stimuli therefore provide powerful probes of discrepancies between models and human perception.
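The core idea of a controversial stimulus can be illustrated with a toy sketch, not the paper's actual procedure: starting from a random input, run gradient ascent so that two fixed models disagree maximally. The two linear-softmax classifiers below are hypothetical stand-ins for the paper's DNNs, and the 4-dimensional "image" is an illustrative simplification.

```python
import numpy as np

# Toy sketch of synthesizing a "controversial stimulus": adjust an input x
# so that two fixed models give contradictory answers. The linear-softmax
# models and 4-pixel stimulus are illustrative stand-ins, not the paper's.
rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

Wa = rng.normal(size=(4, 2))   # weights of toy model A
Wb = rng.normal(size=(4, 2))   # weights of toy model B

def disagreement(x):
    # High when model A asserts class 0 while model B asserts class 1
    return softmax(Wa.T @ x)[0] * softmax(Wb.T @ x)[1]

x = rng.normal(size=4)         # initial random stimulus
d0 = disagreement(x)
eps, lr = 1e-4, 0.1
for step in range(500):
    # Numerical gradient of the disagreement score w.r.t. the stimulus
    g = np.array([(disagreement(x + eps * e) - disagreement(x - eps * e))
                  / (2 * eps) for e in np.eye(4)])
    x += lr * g                # ascend toward greater disagreement
d1 = disagreement(x)
```

Showing the optimized stimulus to human subjects then reveals which model's response pattern better matches human perception, with no ground-truth model assumed.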
Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause of this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks, and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage was greater under flanked than under non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection among groups of distracters.
In the absence of stimulus or task, the cortex spontaneously generates rich and consistent functional connectivity patterns (termed resting state networks) which are evident even within individual cortical areas. We and others have previously hypothesized that habitual cortical network activations during daily life contribute to the shaping of these connectivity patterns. Here we tested this hypothesis by comparing, using blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging, the connectivity patterns that spontaneously emerge during rest in retinotopic visual areas to the patterns generated by naturalistic visual stimuli (repeated movie segments). These were then compared with connectivity patterns produced by more standard retinotopic mapping stimuli (polar and eccentricity mapping). Our results reveal that the movie-driven patterns were significantly more similar to the spontaneously emerging patterns, compared with the connectivity patterns of either eccentricity or polar mapping stimuli. Intentional visual imagery of naturalistic stimuli was unlikely to underlie these results, since they were duplicated when participants were engaged in an auditory task. Our results suggest that the connectivity patterns that appear during rest better reflect naturalistic activations rather than controlled, artificially designed stimuli. The results are compatible with the hypothesis that the spontaneous connectivity patterns in human retinotopic areas reflect the statistics of cortical coactivations during natural vision.
Research into visual neural activity has focused almost exclusively on onset- or change-driven responses, and little is known about how information is encoded in the brain during sustained periods of visual perception. We used intracranial recordings in humans to determine the degree to which the presence of a visual stimulus is persistently encoded by neural activity. The correspondence between stimulus duration and neural response duration was strongest in early visual cortex and gradually diminished along the visual hierarchy, such that it was weakest in inferior-temporal category-selective regions. A similar posterior-anterior gradient was found within inferior temporal face-selective regions, with posterior but not anterior sites showing persistent face-selective activity. The results suggest that regions that appear uniform in terms of their category selectivity are dissociated by how they temporally represent a stimulus in support of ongoing visual perception, and delineate a large-scale organizing principle of the ventral visual stream.