We present a unifying framework to study consciousness based on algorithmic information theory (AIT). We take as a premise that "there is experience" and focus on the requirements for structured experience (S): the spatial, temporal, and conceptual organization of our first-person experience of the world and of ourselves as agents in it. Our starting point is the insight that access to good models, succinct and accurate generative programs of world data, is crucial for homeostasis and survival. We hypothesize that the successful comparison of such models with data provides the structure to experience. Building on the concept of Kolmogorov complexity, we can associate the qualitative aspects of S with the algorithmic features of the model, including its length, which reflects the structure discovered in the data. Moreover, a modeling system tracking structured data will display dimensionality-reduction and criticality features that can be used empirically to quantify the structure of the program run by the agent. AIT provides a consistent framework to define the concepts of life and agent, and allows for the comparison between artificial agents and S-reporting humans to provide an educated guess about agent experience. A first challenge is to show that a human agent has S to the extent they run encompassing and compressive models tracking world data. For this, we propose to study the relation between the structure of neurophenomenological, physiological, and behavioral data. The second is to endow artificial agents with the means to discover good models and to study their internal states and behavior. We relate the algorithmic framework to other theories of consciousness and discuss some of its epistemological, philosophical, and ethical aspects.
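The compression-based view above can be made concrete. Kolmogorov complexity itself is uncomputable, but the length of a compressed description gives an upper bound; the sketch below (the function name and the toy data are illustrative assumptions, not part of the framework) shows that structured "world data" admits a far shorter model than incompressible data:

```python
import os
import zlib

def complexity_estimate(data: bytes) -> int:
    """Upper bound on Kolmogorov complexity: length of a zlib-compressed description."""
    return len(zlib.compress(data, level=9))

structured = b"ab" * 500        # regular data: a short generative program suffices
random_like = os.urandom(1000)  # incompressible: no model shorter than the data itself

short = complexity_estimate(structured)   # small: structure was discovered
long_ = complexity_estimate(random_like)  # near 1000 bytes: nothing to compress
```

In the terms of the abstract, a low `complexity_estimate` relative to the raw data length is a crude empirical proxy for having found a good model.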
CERN has launched a study phase to evaluate the feasibility of a new high-intensity beam dump facility at the CERN Super Proton Synchrotron (SPS) accelerator, with the primary goal of exploring Hidden Sector models and searching for Light Dark Matter, but which also offers opportunities for other fixed-target flavour physics programs such as rare tau lepton decays and tau neutrino studies. The new facility will require, among other infrastructure, a target complex in which a dense target/dump will be installed, capable of absorbing the entire energy of the beam extracted from the SPS accelerator. In theory, the target/dump could produce very weakly interacting particles, to be investigated by a suite of particle detectors located downstream of the target complex. As part of the study, a development design of the target complex has been produced, taking into account the handling and remote-handling operations needed throughout the lifetime of the facility. Two different handling concepts have been studied, and both resulting designs are presented.
Work in the last two decades has shown that neural mass models (NMM) can realistically reproduce and explain epileptic seizure transitions as recorded by electrophysiological methods (EEG, SEEG). In previous work, advances were achieved by increasing excitation and heuristically varying network inhibitory coupling parameters in the models. Based on these early studies, we provide a laminar NMM capable of realistically reproducing the electrical activity recorded by SEEG in the epileptogenic zone during interictal to ictal states. With the exception of the external noise input into the pyramidal cell population, the model dynamics are autonomous. By setting the system at a point close to bifurcation, seizure-like transitions are generated, including pre-ictal spikes, low voltage fast activity, and ictal rhythmic activity. A novel element in the model is a physiologically motivated algorithm for chloride dynamics: the gain of GABAergic post-synaptic potentials is modulated by the pathological accumulation of chloride in pyramidal cells due to high inhibitory input and/or dysfunctional chloride transport. In addition, in order to simulate SEEG signals for comparison with real seizure recordings, the NMM is embedded first in a layered model of the neocortex and then in a realistic physical model. We compare modeling results with data from four epilepsy patient cases. By including key pathophysiological mechanisms, the proposed framework captures succinctly the electrophysiological phenomenology observed in ictal states, paving the way for robust personalization methods based on NMMs.
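The chloride mechanism described above can be caricatured in a few lines. The sketch below is not the paper's model: it is a toy Jansen-Rit-style excitatory/inhibitory loop in which an assumed chloride variable `cl` accumulates with activity and degrades the effective GABAergic gain; all parameter values and the modulation rule are illustrative.

```python
import numpy as np

def sigmoid(v, e0=2.5, r=0.56, v0=6.0):
    """Population firing rate (Hz) from mean membrane potential (mV)."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def simulate(T=2.0, dt=1e-4, A=3.25, a=100.0, B=22.0, b=50.0, seed=0):
    """Toy pyramidal/interneuron loop with chloride-dependent GABA gain."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    ye = ze = yi = zi = 0.0  # excitatory / inhibitory PSP states (2nd-order kinetics)
    cl = 0.0                 # normalized intracellular chloride load
    tau_cl = 0.5             # chloride accumulation/extrusion time constant (s)
    out = np.empty(n)
    for k in range(n):
        v = ye - yi                                # pyramidal potential proxy
        p = 120.0 + 30.0 * rng.standard_normal()   # external noise input (Hz)
        # chloride builds with activity; a high load weakens the GABAergic gain
        cl += dt * (sigmoid(v) / 10.0 - cl) / tau_cl
        b_eff = B * max(0.0, 1.0 - cl)             # illustrative modulation rule
        # second-order synaptic kinetics, explicit Euler integration
        ze += dt * (A * a * (sigmoid(v) + p) - 2.0 * a * ze - a * a * ye)
        ye += dt * ze
        zi += dt * (b_eff * b * sigmoid(ye) - 2.0 * b * zi - b * b * yi)
        yi += dt * zi
        out[k] = v
    return out, cl

trace, cl = simulate()
```

The design point illustrated is the feedback: inhibition drives chloride accumulation, which in turn weakens inhibition, the kind of pathological loop the abstract associates with seizure transitions.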
Cortical function emerges from the interactions of multi-scale networks that may be studied at a high level using neural mass models (NMM), which represent the mean activity of large numbers of neurons. In order to properly reproduce experimental data, these models require the addition of further elements. Here we provide a framework integrating conduction physics that can be used to simulate cortical electrophysiology measurements, in particular those obtained from multicontact laminar electrodes. This is achieved by endowing NMMs with basic physical properties, such as the average laminar location of the apical and basal dendrites of pyramidal cell populations. We call this framework laminar NMM, or LaNMM for short. We then employ this framework to infer the location of oscillatory generators from laminar-resolved data collected from the prefrontal cortex in the macaque monkey. We define, based on the literature of columnar connectivity, a minimal neural mass model capable of generating amplitude- and phase-coupled slow (alpha, 4–22 Hz) and fast (gamma, 30–250 Hz) oscillations. The synapse layer locations of the two pyramidal cell populations are treated as optimization parameters, together with two more LaNMM-specific parameters, to compare the models with the multicontact recordings. We rank the candidate models using an optimization function that evaluates the match between the functional connectivity (FC) of the model and data, where the FC is defined by the covariance between bipolar voltage measurements at different cortical depths. The family of best solutions reproduces the FC of the observed electrophysiology while selecting locations of pyramidal cells and their synapses that result in the generation of fast activity at superficial layers and slow activity across most depths, in line with recent literature proposals.
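The optimization target described above, the covariance between bipolar voltage measurements across depths, is straightforward to compute. The sketch below uses synthetic stand-in data and illustrative function names; it is not the paper's optimization pipeline:

```python
import numpy as np

def bipolar(voltages):
    """Bipolar montage: difference between adjacent contacts along depth."""
    return np.diff(voltages, axis=0)

def functional_connectivity(voltages):
    """FC matrix: covariance between bipolar measurements at different depths."""
    return np.cov(bipolar(voltages))

# Synthetic stand-in for 16 laminar contacts x 1000 time samples
rng = np.random.default_rng(0)
voltages = rng.standard_normal((16, 1000))
fc = functional_connectivity(voltages)  # 15 x 15 covariance matrix
```

Ranking candidate models then reduces to comparing the model-derived `fc` matrix against the one computed from the recordings.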
Objective: Stereotactic EEG (SEEG) and scalp EEG recordings can be modeled using mesoscale neural mass population models (NMM). However, the relationship between those mathematical models and the physics of the measurements is unclear. In addition, it is challenging to represent SEEG data by combining NMMs and volume conductor models, due to the intermediate spatial scale represented by these measurements.
Approach: We provide a framework combining the multi-compartmental modeling formalism and a detailed geometrical model to simulate the transmembrane currents that appear in layer 3, 5, and 6 pyramidal cells due to a synaptic input. With this approach, it is possible to realistically simulate the current source density (CSD) depth profile inside a cortical patch due to inputs localized in a single cortical layer, and the consequent voltage measured by two SEEG contacts using a volume conductor model. Based on this approach, we built a framework to connect the activity of an NMM with a volume conductor model, and we simulated an example as a proof of concept.
Main results: CSD depends strongly on the distribution of the synaptic inputs onto the different cortical layers, and the equivalent current dipole strengths display considerable differences (of up to a factor of four in magnitude in our example). Thus, the inputs coming from different neural populations do not contribute equally to the electrophysiological recordings. A direct consequence of this is that the raw output of neural mass models is not a good proxy for electrical recordings. We also show that the simplest CSD model that can accurately reproduce SEEG measurements can be constructed from discrete monopolar sources (one per cortical layer).
Significance: Our results highlight the importance of including a physical model in NMMs to represent measurements.
We provide a framework connecting microscale neuron models with the neural mass formalism and with physical models of the measurement process that can improve the accuracy of predicted electrophysiological recordings.
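The discrete-monopole CSD model suggested by these results can be sketched with the standard solution for a point current source in an infinite homogeneous medium, V = I/(4*pi*sigma*r). The source positions, current values, and conductivity below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

SIGMA = 0.33  # assumed cortical conductivity (S/m)

def potential(contact, sources, currents, sigma=SIGMA):
    """Potential at a contact from discrete monopolar current sources in an
    infinite homogeneous medium: V = sum_i I_i / (4*pi*sigma*r_i)."""
    r = np.linalg.norm(sources - contact, axis=1)
    return float(np.sum(currents / (4.0 * np.pi * sigma * r)))

# One monopole per cortical layer along depth z (meters); net current must be zero
depths = np.array([0.2, 0.6, 1.0, 1.4, 1.8, 2.2]) * 1e-3
sources = np.column_stack([np.zeros(6), np.zeros(6), depths])
currents = np.array([1.0, -2.0, 3.0, -1.0, -0.5, -0.5]) * 1e-9  # amperes
contact = np.array([0.5e-3, 0.0, 1.0e-3])  # hypothetical SEEG contact, 0.5 mm off-axis
v = potential(contact, sources, currents)
```

Note the zero-sum constraint on `currents`: transmembrane return currents must balance the synaptic sink, which is why a monopole per layer can reproduce the CSD profile.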
The Beam Dump Facility (BDF) project is a proposed general-purpose facility at CERN, dedicated to beam dump and fixed target experiments. In its initial phase, the facility is foreseen to be exploited by the Search for Hidden Particles (SHiP) experiment. Physics requirements call for a pulsed 400 GeV/c proton beam as well as the highest possible number of protons on target (POT) each year of operation (4.0·10^13/year), in order to search for feebly interacting particles. The target/dump assembly lies at the heart of the facility, with the aim of safely absorbing the full high-intensity Super Proton Synchrotron (SPS) beam, while maximizing the production of charmed and beauty mesons. High-Z materials are required for the target/dump, in order to have the shortest possible absorber and reduce muon background for the downstream experiment. The design of the production target is one of the most challenging aspects of the facility design, due to the high energy and power density deposition that are reached during operation, and the resulting thermomechanical loads. The nature of the beam pulse induces very high temperature excursions between pulses (up to 100 °C), leading to considerable thermally-induced stresses and long-term fatigue considerations. The high average power deposited on target (305 kW) creates a challenge for heat removal. During the BDF facility Comprehensive Design Study (CDS), launched by CERN in 2016, extensive studies have been carried out in order to define and assess the target assembly design. These studies are described in the present contribution, which details the proposed design of the BDF production target, as well as the material selection process and the optimization of the target configuration and beam dilution. One of the specific challenges and novelties of this work is the need to consider new target materials, such as a molybdenum alloy (TZM) as core absorbing material and Ta2.5W as cladding.
Thermo-structural and fluid dynamics calculations have been performed to evaluate the reliability of the target and its cooling system under beam operation. In the framework of the target comprehensive design, a preliminary mechanical design of the full target assembly has also been carried out, assessing the feasibility of the whole target system.