2003
DOI: 10.1016/s0740-0020(02)00104-1
Comparison of logistic regression and neural network-based classifiers for bacterial growth

Cited by 72 publications (31 citation statements) · References 17 publications
“…A logistic regression model relates the probability (p) of occurrence of an event (Y) conditional on a vector (x) of independent variables (11). The key quantity (called the "conditional mean") is the mean value of the dependent variable (Y), given the value of the independent variable (x), when the logistic distribution is used.…”
Section: Methods (mentioning, confidence: 99%)
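The conditional mean described in the statement above can be sketched directly. This is a minimal illustration, not code from the cited paper; the coefficient values are arbitrary and purely hypothetical.

```python
import numpy as np

def logistic_conditional_mean(x, beta):
    """Conditional mean E[Y | x] under a logistic regression model:
    p(Y=1 | x) = 1 / (1 + exp(-(beta0 + beta1*x1 + ... + betak*xk))).
    `x` is the vector of independent variables; `beta` holds the
    coefficients, intercept first."""
    eta = beta[0] + np.dot(beta[1:], x)   # linear predictor
    return 1.0 / (1.0 + np.exp(-eta))     # logistic (sigmoid) link

# Hypothetical example: two predictors (e.g., temperature, pH)
p = logistic_conditional_mean(np.array([25.0, 6.5]),
                              np.array([-8.0, 0.2, 0.4]))
```

Because the logistic link maps the linear predictor into (0, 1), the conditional mean can be read directly as the probability of the event.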
“…A procedure of forward or backward stepwise regression with some criteria to include or reject dummy or quantitative explanatory variables, their quadratic terms, or their interactions is included in most of the standard statistical software packages (11,12).…”
(mentioning, confidence: 99%)
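A greedy forward-selection loop of the kind referenced above can be sketched as follows. This is a simplified stand-in: standard packages typically use likelihood-ratio tests or AIC as the inclusion criterion, whereas here training log-loss is used for brevity, and the data in the usage example are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def forward_stepwise(X, y, max_terms=3):
    """Forward stepwise selection for a logistic model: at each step,
    add the column that most reduces the training log-loss; stop when
    no column improves the fit or `max_terms` is reached."""
    selected, remaining = [], list(range(X.shape[1]))
    best_loss = np.inf
    while remaining and len(selected) < max_terms:
        scores = []
        for j in remaining:
            cols = selected + [j]
            model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            scores.append((log_loss(y, model.predict_proba(X[:, cols])), j))
        loss, j = min(scores)           # best candidate this round
        if loss >= best_loss:           # no improvement: stop
            break
        best_loss = loss
        selected.append(j)
        remaining.remove(j)
    return selected

# Synthetic data: only column 0 actually drives the response
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
selected = forward_stepwise(X, y)
```

Backward stepwise selection works symmetrically, starting from the full model and deleting the least useful term at each step.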
“…The foundations of the approach have been known since the 1960s; however, the method was not in widespread use until recently because of the lack of sufficient computational power [28]. Donald Specht [29] introduced probabilistic neural networks in 1990, demonstrating how the Bayes-Parzen classifier could be broken up into a large number of simple processes, each of which could be run independently in parallel as a layer of a multilayer neural network.…”
Section: Probabilistic Neural Networks (PNNs) (mentioning, confidence: 99%)
“…The activation of each summation neuron is computed by applying the remaining part of Parzen's estimator equation (e.g., the constant multiplier in Eq. (13)) to obtain the estimated probability density function value for the population of a particular class [28]. If the misclassification costs and prior probabilities are equal between the two classes, and the classes are mutually exclusive (i.e., no case can be classified into more than one class) and exhaustive (i.e., the training set covers all classes fairly), the activations of the summation neurons will equal the posterior probability of each class.…”
Section: Probabilistic Neural Networks (PNNs) (mentioning, confidence: 99%)
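The pattern, summation, and decision layers described in the statements above can be sketched in a few lines. This is a minimal illustration under the stated assumptions (equal priors, equal misclassification costs, a shared Gaussian kernel width); the bandwidth `sigma` and the toy data are hypothetical.

```python
import numpy as np

def pnn_posteriors(x, train_X, train_y, sigma=0.5):
    """Minimal probabilistic neural network (after Specht, 1990).
    Pattern layer: one Gaussian Parzen kernel per training example.
    Summation layer: per-class average of kernel activations, scaled
    by the constant multiplier of the Parzen estimator, giving a
    class-conditional density estimate.
    Decision layer: with equal priors and costs, normalizing the
    densities yields the posterior probability of each class."""
    d = train_X.shape[1]
    const = 1.0 / ((2 * np.pi) ** (d / 2) * sigma ** d)  # Parzen constant
    densities = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        k = np.exp(-np.sum((Xc - x) ** 2, axis=1) / (2 * sigma ** 2))
        densities[c] = const * k.mean()   # summation neuron for class c
    total = sum(densities.values())
    return {c: v / total for c, v in densities.items()}

# Toy usage: two well-separated classes in 2-D
train_X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
train_y = np.array([0, 0, 1, 1])
post = pnn_posteriors(np.array([0.1, 0.0]), train_X, train_y)
```

Each kernel evaluation is independent of the others, which is exactly the property that lets the Bayes-Parzen classifier be parallelized across simple neurons.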
“…The number of datasets should be much larger than the number of model parameters (McBride 2006). In addition, statistical models have difficulty dealing with highly nonlinear functions (Hajmeer and Basheer 2003) and can have only one output variable (Kahane 2001). Furthermore, a set of hypotheses is normally needed to develop a statistical model.…”
Section: Statistical Models (mentioning, confidence: 99%)