“…For example, single-neuron models abound in neuroscience, and can include a large number of biophysical effects with relatively few approximations. Many such models have also been used to study networks of interconnected neurons with varying degrees of idealization, thereby yielding a huge number of insights [9–11]. However, several key problems arise as network size grows: (i) the computational resources required become prohibitive, meaning that simulations can often only be carried out in physiologically unrealistic scenarios, typically with idealized neurons, which may be quantitatively and/or qualitatively inappropriate for the real brain; (ii) it is increasingly difficult to measure and assign biophysical parameters to the individual neurons—e.g., individual connectivities, synaptic strengths, or morphological features—so large groups of neurons are typically assigned identical parameters, thereby partly removing the specificity of such simulations; (iii) analysis and interpretation of results, such as large collections of time series of individual soma voltages, become increasingly difficult and demanding in terms of storage and postprocessing; (iv) emergence of collective network-level phenomena can be difficult to recognize; (v) the scales of these simulations are well suited to single-neuron measurements and microscopic pieces of brain tissue, but are distant from those of noninvasive imaging modalities such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG) [12–14], which detect signals that result from the aggregate activity of large numbers of neurons; and (vi) inputs from other parts of the brain are neglected, meaning that such models tend to represent isolated pieces of neural tissue.…”
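To make point (i) concrete, the sketch below gives a back-of-envelope estimate of the arithmetic cost of simulating a spiking network as a function of network size. All numbers here (operations per state update, synapse counts, firing rate, timestep) are illustrative assumptions, not values from the text; the point is only that the cost grows at least linearly with neuron and synapse counts, quickly becoming prohibitive at brain-like scales.

```python
# Back-of-envelope cost model for a spiking-network simulation,
# illustrating point (i). All parameter values are assumed for
# illustration only.

def ops_per_biological_second(n_neurons, synapses_per_neuron,
                              dt=1e-4,                 # integration timestep (s), assumed
                              ops_per_state_update=100, # cost of one neuron state update, assumed
                              firing_rate_hz=5,         # mean firing rate, assumed
                              ops_per_spike_delivery=10):  # cost per synaptic event, assumed
    """Rough floating-point operation count needed to simulate
    one second of biological time."""
    steps = 1.0 / dt
    # Per-step cost of integrating each neuron's state variables.
    state_ops = n_neurons * ops_per_state_update * steps
    # Event-driven cost of delivering spikes across all synapses.
    spike_ops = (n_neurons * firing_rate_hz
                 * synapses_per_neuron * ops_per_spike_delivery)
    return state_ops + spike_ops

# From roughly a cortical column toward whole-cortex neuron counts.
for n in (10**4, 10**6, 10**8):
    total = ops_per_biological_second(n, synapses_per_neuron=10_000)
    print(f"{n:.0e} neurons: ~{total:.1e} FLOPs per biological second")
```

Under these assumptions, even 10^8 neurons (far below the human brain's ~10^11) already demands on the order of 10^14 operations per simulated second, before accounting for memory traffic or detailed neuron morphologies.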