2012
DOI: 10.1016/j.jfoodeng.2011.10.024

Artificial neural network model for prediction of cold spot temperature in retort sterilization of starch-based foods

Cited by 36 publications (18 citation statements)
References 18 publications
“…When comparing the whole profile, the mean of the absolute values of the relative error (RE) was 1.02%, with a standard deviation (SD_RE) of 0.43%. The method for evaluating the error was described in detail by Llave et al. (2012). This led to the conclusion that the surface temperature profile of the grilled fish could be determined accurately using the IR thermographic camera.…”
Section: Construction of the Surface Temperature Distribution
Mentioning confidence: 99%
“…In this paper, we employed the feed-forward neural network (FNN) because it does not need information related to the probability distribution [42], nor the a priori probabilities of the different classes [43]. Figure 4 illustrates the general one-hidden-layer (OHL) FNN.…”
Section: Feed-Forward Neural Network
Mentioning confidence: 99%
“…In this paper, we employed the feed-forward neural network (FNN) because it does not need information related to the probability distribution [42], nor the a priori probabilities of the different classes [43]. Building the FNN is equivalent to training the weights/biases of all neurons in the FNN, which is treated as an optimization problem; i.e., we need to obtain the optimal weights/biases that minimize the mean-squared error (MSE) between the real outputs and the target outputs.…”
Section: Feed-Forward Neural Network
Mentioning confidence: 99%
“…We chose the FNN because (1) it has been widely used in pattern classification, and (2) it does not need any a priori information about the probability distribution [25]. The common one-hidden-layer FNN model is shown in Figure 3.…”
Section: Feed-Forward Neural Network
Mentioning confidence: 99%