2020
DOI: 10.1016/j.jtbi.2020.110352
Using statistical methods to model the fine-tuning of molecular machines and systems

Cited by 16 publications (12 citation statements)
References 59 publications
“…The population of bacteria learns S at time k if there is a phase transition such that up to (G_{k−1}, D_{k−1}) the probability of the population acting as a unit is null, whereas it becomes positive for (G_k, D_k). This is closely related to the fine-tuning of biological systems (Thorvaldsen & Hössjer, 2020). As for the direction of causation from cognition to code, Kolmogorov complexity, which measures the complexity of an outcome as the shortest code that produces it, can be used in place of or jointly with active information to measure learning (Ewert et al., 2015).…”
Section: Discussion
confidence: 99%
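The citation above contrasts Kolmogorov complexity with active information as measures of learning. As a minimal sketch (not taken from the cited papers), active information is commonly defined as the log-ratio of an achieved target probability to a blind-search baseline probability; the numbers below are purely hypothetical:

```python
import math

def active_information(p_target: float, p_baseline: float) -> float:
    """Active information I+ = log2(p_target / p_baseline): the gain,
    in bits, of a search or learning process over blind baseline search.
    Positive values mean the process outperforms blind search."""
    return math.log2(p_target / p_baseline)

# Hypothetical figures: blind search hits the target with probability
# 2**-20, while the learning process achieves probability 0.25.
print(active_information(0.25, 2**-20))  # 18.0 bits
```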
“…According to Definition 1, the upper part of (15) represents a maximum amount of learning, that is, P(A) = 1; whereas the lower part corresponds to a maximal amount of false belief about S when x_0 ∈ A, that is, P(A) = 0 (see also …). Suppose S is the proposition that a certain entity or machine functions; then −log P_0(A) is the functional information associated with the event A of observing such a functioning entity (Szostak, 2003; Thorvaldsen & Hössjer, 2020). Consequently, in our context, functional information corresponds to the maximal amount of learning about S when the machine works (f(x_0) = 1).…”
Section: Active Information, Learning and Knowledge
confidence: 99%
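The quoted passage identifies functional information with −log P_0(A), the negative log of the baseline probability that a randomly drawn configuration functions. A minimal sketch of that definition follows; the 1-in-1024 probability is a hypothetical example, not a value from the cited work:

```python
import math

def functional_information(p0_A: float) -> float:
    """Functional information in bits: -log2 of the baseline probability
    P0(A) that a random configuration x0 functions, i.e. f(x0) = 1.
    Rarer function implies higher functional information."""
    return -math.log2(p0_A)

# Hypothetical: 1 in 1024 random sequences performs the function.
print(functional_information(1 / 1024))  # 10.0 bits
```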
“…Through this lens, a crucial part of the analysis of tuning is mathematical modeling, and cosmological fine-tuning is one particular instantiation. Every mathematical model, whether developed for theoretical or applied purposes, will require parameters, and those parameters can always be analyzed from a fine-tuning perspective [48,49].…”
Section: Determining the Constraints
confidence: 99%
“…The paper [37] uses the traditional clustering model as the teacher model. Moreover, the teacher model T is also called the pre-trained model in the fine-tuning method [44]. However, unlike the fine-tuning method, we try to learn knowledge from the pre-trained model instead of adjusting it.…”
Section: Guided Learning
confidence: 99%