2018
DOI: 10.1016/j.heliyon.2018.e00596
An introduction to the maximum entropy approach and its application to inference problems in biology

Abstract: A cornerstone of statistical inference, the maximum entropy framework is being increasingly applied to construct descriptive and predictive models of biological systems, especially complex biological networks, from large experimental data sets. Both its broad applicability and the success it has obtained in different contexts hinge upon its conceptual simplicity and mathematical soundness. Here we try to concisely review the basic elements of the maximum entropy principle, starting from the notion of ‘entropy’, an…
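The principle summarized in the abstract, choosing the distribution that maximizes entropy subject to the constraints imposed by the data, can be illustrated with a small numerical sketch. Everything below (function name, the fair-die example) is an illustrative assumption, not from the paper: among all distributions on a finite set with a prescribed mean, the entropy maximizer is an exponential ("Boltzmann-like") distribution, whose Lagrange multiplier can be found by bisection.

```python
import math

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution over `values` with a fixed mean.

    Among all distributions p on the finite set `values` satisfying
    sum_i p_i * values[i] == target_mean, the entropy maximizer has the
    exponential form p_i ~ exp(lam * values[i]); we locate the Lagrange
    multiplier lam by bisection (the tilted mean is increasing in lam).
    """
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(wi * v for wi, v in zip(w, values)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Fair-die example: with target mean 3.5 the constraint is already met
# by the uniform distribution, so lam ~ 0 and each p_i ~ 1/6.
p = maxent_distribution([1, 2, 3, 4, 5, 6], 3.5)
print([round(pi, 3) for pi in p])
```

Raising the target mean above 3.5 tilts the distribution toward the larger faces, which is the standard "loaded die" illustration of the principle.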

Cited by 88 publications (80 citation statements)
References 129 publications (151 reference statements)
“…It is clear that a layer only connects to the previous layer. FFNN applications are classified into two groups: control of dynamical systems [32, 33], and settings where classic machine learning techniques are applied [34]. NNs with two or more hidden layers are called deep networks, since the network becomes more complex with more than one hidden layer.…”
Section: Main Text
confidence: 99%
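The quoted description of feed-forward networks (each layer receives input only from the previous layer; two or more hidden layers make the network "deep") can be made concrete with a small sketch. All weights, layer sizes, and the tanh activation below are illustrative assumptions, not taken from the cited works:

```python
import math

def forward(x, layers):
    """Forward pass of a feed-forward network: each layer receives input
    only from the previous layer (no skip or recurrent connections)."""
    a = x
    for W, b in layers:  # one (weight matrix, bias vector) pair per layer
        z = [sum(wi * ai for wi, ai in zip(row, a)) + bi
             for row, bi in zip(W, b)]
        a = [math.tanh(zi) for zi in z]  # tanh activation (an assumption)
    return a

# Two hidden layers, hence a (very small) "deep" network in the quoted sense.
layers = [
    ([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),  # hidden layer 1: 2 -> 2
    ([[0.7, 0.4]], [-0.1]),                   # output layer: 2 -> 1
]
print(forward([1.0, 2.0], layers))
```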
“…As MDF turned out to be significantly worse than LSF (section ), the full details of the MDF method are given in the supporting information (section S2: derivation of the objective function, expressions for the gradient and Hessian), and here we give only the expression for the objective function minimized in MDF:

$$L = \sum_{a} Z_a \sum_{i} q_i^{\mathrm{obs}}(\mathrm{P}_a)\,\log q_i(\mathrm{P}_a) = \sum_{a} Z_a\, \mathbf{q}^{\mathrm{obs}}(\mathrm{P}_a)\,\log \mathbf{q}^{T}(\mathrm{P}_a)$$

where $q(\mathrm{P}_a)$ denotes the calculated quantity, be it $v(\mathrm{P}_a)$ or $x(\mathrm{P}_a)$, $q^{\mathrm{obs}}(\mathrm{P}_a)$ is the corresponding experimental quantity, and $Z_a$ represents the size of the $\mathrm{P}_a$ sample from which the experimental isotopic distribution is obtained (the number of detected $\mathrm{P}_a$ molecules). Let us note that the objective function in the maximum entropy method is almost identical, only without the weighting factor $Z_a$.…”
Section: Theory
confidence: 99%
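The weighted objective quoted above can be transcribed directly into code. This is an illustrative transcription only; the function name and argument layout are assumptions, not from the cited paper. Each sample P_a contributes its size Z_a times the cross term between the observed and calculated distributions, and dropping the Z_a weights recovers the near-identical maximum-entropy objective the quote mentions.

```python
import math

def mdf_objective(q_obs, q_calc, Z):
    """Objective L = sum_a Z_a * sum_i q_obs[a][i] * log(q_calc[a][i]).

    q_obs[a], q_calc[a]: observed and calculated isotopic distributions
    for sample P_a; Z[a]: sample size (number of detected P_a molecules).
    Setting every Z[a] = 1 gives the unweighted maximum-entropy form.
    """
    return sum(
        Za * sum(o * math.log(c) for o, c in zip(obs, calc))
        for obs, calc, Za in zip(q_obs, q_calc, Z)
    )
```

For a single sample with observed and calculated distributions both equal to [0.5, 0.5] and Z = 10, the objective evaluates to 10·log(0.5).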
“…Unlike most machine learning methods, such as logistic regression, support vector machines, random forests, k-nearest neighbours and artificial neural networks, which use presence–absence datasets for training, MaxEnt uses a presence–background dataset for training. MaxEnt is based on the principle that the probability distribution which maximizes entropy, for the current state of knowledge and subject to the constraints of the features, is the best-fit model for the phenomenon under consideration (De Martino and De Martino 2018). It is popular primarily because it makes ‘minimum assumptions’ when selecting a probability distribution (Warton 2013).…”
Section: Introduction
confidence: 99%
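The "minimum assumption" property invoked in this quote can be checked numerically: with no constraints beyond normalization, the uniform distribution has the highest Shannon entropy, and any more committal (skewed) distribution scores lower. A minimal sketch, with the example distributions chosen by us for illustration:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(shannon_entropy(uniform))  # log(4), the maximum over 4 outcomes
print(shannon_entropy(skewed))   # strictly lower: the skew is an extra assumption
```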