Using de Wit-Nicolai D = 4 N = 8 SO(8) supergravity as an example, we show how modern Machine Learning software libraries such as Google's TensorFlow can be employed to greatly simplify the analysis of high-dimensional scalar sectors of some M-Theory compactifications. We provide detailed information on the location, symmetries, particle spectra and charges of 192 critical points on the scalar manifold of SO(8) supergravity, including one newly discovered N = 1 vacuum with SO(3) residual symmetry, one new potentially stabilizable non-supersymmetric solution, and examples of "Galois conjugate pairs" of solutions, i.e. pairs of solutions that share the same gauge-group embedding into SO(8) and the same minimal polynomial for the cosmological constant. Where feasible, we give analytic expressions for solution coordinates and cosmological constants. Since our aspiration is to present the discussion in a form that is accessible to both the Machine Learning and String Theory communities, and that allows our methods to be adopted for the study of other models, we provide an introductory overview of the relevant Physics and Machine Learning concepts, including short pedagogical code examples. In particular, we show how to formulate a requirement for residual Supersymmetry as a Machine Learning loss function and effectively guide the numerical search towards supersymmetric critical points. Numerical investigations suggest that there are no further supersymmetric vacua beyond this newly discovered fifth solution.

"At the moment, the N = 8 Supergravity Theory is the only candidate in sight. There are likely to be a number of crucial calculations within the next few years that have the possibility of showing that the theory is no good. If the theory survives these tests, it will probably be some years more before we develop computational methods that will enable us to make predictions and before we can account for the initial conditions of the universe as well as the local physical laws. These will be the outstanding problems for theoretical physics in the next twenty years or so. But to end on a slightly alarmist note, they may not have much more time than that. At present, computers are a useful aid in research, but they have to be directed by human minds. If one extrapolates their recent rapid rate of development, however, it would seem quite possible that they will take over altogether in theoretical physics. So, maybe the end is in sight for theoretical physicists, if not for theoretical physics."

S. Hawking, conclusion of his 1981 inaugural lecture "Is the end in sight for theoretical physics?" [1]
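The core numerical idea of the abstract above, finding critical points of a scalar potential by minimizing a "stationarity" loss |∇V|² with a Machine Learning optimizer, can be sketched in a few lines of TensorFlow. The potential below is a simple toy polynomial chosen purely for illustration, not the actual 70-dimensional de Wit-Nicolai potential; the same pattern extends to it (and, per the paper, supersymmetry conditions can be added as further loss terms).

```python
import tensorflow as tf

# Toy stand-in potential V(x). The real de Wit-Nicolai potential is a
# complicated function of 70 scalars; this simple polynomial (with
# critical points at x_i in {0, +1, -1}) only illustrates the strategy.
def potential(x):
    return tf.reduce_sum(x**4 - 2.0 * x**2)

x = tf.Variable(tf.random.normal([3], seed=0))
opt = tf.keras.optimizers.Adam(learning_rate=0.05)

for step in range(500):
    with tf.GradientTape() as outer:
        with tf.GradientTape() as inner:
            v = potential(x)
        grad_v = inner.gradient(v, x)      # dV/dx, tracked by the outer tape
        loss = tf.reduce_sum(grad_v**2)    # vanishes exactly at critical points
    opt.apply_gradients([(outer.gradient(loss, x), x)])

print("stationarity loss:", float(loss))   # close to zero at a critical point
print("location:", x.numpy(), " V =", float(potential(x)))
```

Note that the loss targets stationary points rather than minima of V, which is what a supergravity vacuum scan needs: saddle points of the potential are legitimate solutions too.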
The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be successfully trained on noisy Boolean logic tasks and on the MNIST dataset encoded in time. We show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. The spiking network spontaneously discovers two operating modes, mirroring the accuracy-speed trade-off observed in human decision-making: a highly accurate but slow regime, and a fast but slightly lower-accuracy regime. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual spikes. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient, state-based and more complex biologically-inspired neural architectures.
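The forward pass of such a first-spike-time classifier can be sketched with numpy. Everything numeric below is an illustrative assumption (the time constant, threshold, spike times and weights are invented, not the paper's trained values), and the alpha kernel is written in one common parameterization, t·exp(-t/τ); the point is only to show how membrane potentials accumulate alpha-shaped responses and how "first output neuron to cross threshold" decides the class.

```python
import numpy as np

TAU, THRESHOLD = 1.0, 0.5  # illustrative constants, not tuned values

def membrane_potential(t, spike_times, weights):
    """Sum of alpha-kernel responses: an input spike at t_i contributes
    w_i * (t - t_i) * exp(-(t - t_i)/TAU) for t >= t_i, and 0 before."""
    dt = t[:, None] - spike_times[None, :]           # shape (time, inputs)
    kernel = np.where(dt > 0, dt * np.exp(-dt / TAU), 0.0)
    return kernel @ weights                          # shape (time,)

def first_spike_time(t, u):
    """Time of the first threshold crossing, or np.inf if none occurs."""
    above = np.nonzero(u >= THRESHOLD)[0]
    return t[above[0]] if above.size else np.inf

t = np.linspace(0.0, 10.0, 2001)
inputs = np.array([0.5, 1.0])             # hypothetical input spike times
w_a = np.array([1.2, 1.0])                # weights into output neuron A
w_b = np.array([0.8, 0.9])                # weights into output neuron B

t_a = first_spike_time(t, membrane_potential(t, inputs, w_a))
t_b = first_spike_time(t, membrane_potential(t, inputs, w_b))
print("winner:", "A" if t_a < t_b else "B")  # earliest spike = predicted class
```

Because the kernel is differentiable in the presynaptic spike times, the crossing times admit the exact local derivatives that make backpropagation through spike timing possible.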
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.
As we fall asleep, our brain traverses a series of gradual changes at physiological, behavioural and cognitive levels, which are not yet fully understood. The loss of responsiveness is a critical event in the transition from wakefulness to sleep. Here we seek to understand the electrophysiological signatures that reflect the loss of capacity to respond to external stimuli during drowsiness using two complementary methods: spectral connectivity and EEG microstates. Furthermore, we integrate these two methods for the first time by investigating the connectivity patterns captured during individual microstate lifetimes. While participants performed an auditory semantic classification task, we allowed them to become drowsy and unresponsive. As they stopped responding to the stimuli, we report the breakdown of alpha networks and the emergence of theta connectivity. Further, we show that the temporal dynamics of all canonical EEG microstates slow down during unresponsiveness. We identify a specific microstate (D) whose occurrence and duration are prominently increased during this period. Employing machine learning, we show that the temporal properties of microstate D, particularly its prolonged duration, predict the response likelihood to individual stimuli. Finally, we find a novel relationship between microstates and brain networks as we show that microstate D uniquely indexes significantly stronger theta connectivity during unresponsiveness. Our findings demonstrate that the transition to unconsciousness is not linear, but rather consists of an interplay between transient brain networks reflecting different degrees of sleep depth. Electronic supplementary material: The online version of this article (10.1007/s10548-018-0689-9) contains supplementary material, which is available to authorized users.
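The prediction step, using a temporal property of one microstate to predict whether a participant responds to a stimulus, amounts to fitting a simple classifier. The sketch below is entirely hypothetical: it generates synthetic per-stimulus data mimicking the reported trend (longer microstate-D durations, lower response likelihood) and fits a plain logistic regression by gradient descent; it is not the study's actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy data (NOT from the study): per-stimulus mean duration of
# microstate D (ms) and a binary "responded" label whose probability
# decreases with duration, mimicking the reported trend.
duration = rng.uniform(40.0, 120.0, size=400)
p_respond = 1.0 / (1.0 + np.exp(0.1 * (duration - 80.0)))
responded = (rng.uniform(size=400) < p_respond).astype(float)

# Logistic regression fitted by gradient descent on the cross-entropy.
x = (duration - duration.mean()) / duration.std()
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - responded) * x)
    b -= 0.1 * np.mean(p - responded)

print("fitted weight:", w)  # negative: longer microstate D, lower response odds
```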
An update on the JPEG XL standardization effort: JPEG XL is a practical approach focused on scalable web distribution and efficient compression of high-quality images. It will provide various benefits compared to existing image formats: significantly smaller size at equivalent subjective quality; fast, parallelizable decoding and encoding configurations; features such as progressive, lossless, animation, and reversible transcoding of existing JPEG; support for high-quality applications including wide gamut, higher resolution/bit depth/dynamic range, and visually lossless coding. Additionally, a royalty-free baseline is an important goal. The JPEG XL architecture is traditional block-transform coding with upgrades to each component. We describe these components and analyze decoded image quality.
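The generic block-transform idea underlying the architecture described above can be illustrated in a few lines of numpy. This is only a textbook sketch (an orthonormal 8x8 DCT-II plus a uniform quantizer with an invented step size), not JPEG XL itself, which uses variable block sizes and many further refinements; it shows why a smooth block survives coarse quantization with few nonzero coefficients.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis, the workhorse of block-transform codecs."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

D = dct_matrix()
block = np.outer(np.linspace(0, 255, 8), np.ones(8))  # smooth toy 8x8 block

coeffs = D @ block @ D.T           # forward 2D transform
q = 16.0                           # illustrative uniform quantizer step
quantized = np.round(coeffs / q)   # most high-frequency coefficients become 0
restored = D.T @ (quantized * q) @ D

print("nonzero coefficients:", int(np.count_nonzero(quantized)), "of 64")
print("max abs pixel error:", float(np.abs(block - restored).max()))
```

The energy compaction shown here is what the entropy coder then exploits; JPEG XL's upgrades to each stage (transform selection, quantization, prediction, entropy coding) refine this same pipeline.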