2022
DOI: 10.1016/j.ijheatmasstransfer.2022.123217
Reliable predictions of bubble departure frequency in subcooled flow boiling: A machine learning-based approach

Cited by 11 publications (6 citation statements)
References 48 publications
“…CatBoost is trained by minimizing the expected loss function through gradient descent

$$h_t = \underset{h \in H}{\arg\min}\; \mathbb{E}\left( -\left.\frac{\partial L(y,s)}{\partial s}\right|_{s = F_{t-1}(x)} - \sum_{j=1}^{J} b_j \,\mathbb{1}_{\{x \in R_j\}} \right)^{2}$$

where $L$ is a smooth loss function, $h$ is a gradient step function selected from $H$, $R_j$ denotes the disjoint regions corresponding to the leaves of the tree, $b_j$ is the predictive value of the region, and $\mathbb{E}$ and $\mathbb{1}$ are the expectation and indicator functions. Further details on these ML methods can be found in previous references.…”
Section: Boosting Machine Learning Models
confidence: 99%
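The training step quoted above, fitting a tree to the negative gradient of the loss over the leaf regions $R_j$, can be sketched in plain NumPy. This is a generic gradient-boosting step with squared loss and a one-split regression stump standing in for the tree $h(x) = \sum_j b_j \mathbb{1}\{x \in R_j\}$; it is an illustrative assumption, not CatBoost's actual ordered-boosting procedure or its oblivious trees:

```python
import numpy as np

def boosting_step(X, y, F_prev, learning_rate=0.1):
    """One gradient-boosting step for squared loss L(y, s) = (y - s)^2 / 2.

    The negative gradient -dL/ds evaluated at s = F_{t-1}(x) is the
    residual y - F_prev. A one-split stump (two leaf regions R_1, R_2
    with leaf values b_1, b_2) is fitted to it by least squares,
    mirroring the argmin in the quoted equation.
    """
    residual = y - F_prev                     # -dL/ds at s = F_{t-1}(x)
    best = None
    for threshold in np.unique(X):
        left = X <= threshold                 # region R_1
        right = ~left                         # region R_2
        if not left.any() or not right.any():
            continue
        b1, b2 = residual[left].mean(), residual[right].mean()
        pred = np.where(left, b1, b2)         # h(x) = sum_j b_j * 1{x in R_j}
        sse = ((residual - pred) ** 2).sum()  # empirical E(...)^2
        if best is None or sse < best[0]:
            best = (sse, threshold, b1, b2)
    _, threshold, b1, b2 = best
    h = np.where(X <= threshold, b1, b2)
    return F_prev + learning_rate * h

# Tiny 1-D example: fit a step function by repeated boosting steps.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
F = np.zeros_like(y)
for _ in range(50):
    F = boosting_step(X, y, F, learning_rate=0.3)
```

Because the stump can split the data perfectly at $x = 2$, the residual shrinks geometrically and the ensemble prediction `F` converges to `y`.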
“…Xu et al [23] found that, in a long horizontal pipe with an inner diameter of 51 mm, the fully developed turbulent gas–liquid slug flow and the slug translation velocity depended on the local maximum velocity adjacent to the trailing edge of the Taylor bubbles. Many methods have been employed to investigate the bubble velocity during flow boiling in minichannels [24], [25], [26], [27], [28]. Pavlov et al [24] used a high-speed camera to trace the motion of a single bubble in a rectangular minichannel.…”
Section: Introduction
confidence: 99%
“…There is much research on gas hold-up prediction in bubble columns, some of it based on machine learning. Based on an ANN model, a useful prediction model was developed by Hazare et al. 28 He et al. also achieved reliable prediction of bubble departure frequency in subcooled flow boiling using XGBoost. 29 In the region of microflows, Su et al. constructed a neural network (NN) to predict the inertial lift in microchannels. 30 Zhou et al. established a way to predict the flow condensation heat transfer coefficient in microchannels.…”
Section: Introduction
confidence: 99%