
Fuel 324 (2022) 124778

Contents lists available at ScienceDirect

Fuel
journal homepage: www.elsevier.com/locate/fuel

Full Length Article

Data-driven approach to predict the flow boiling heat transfer coefficient of liquid hydrogen aviation fuel
Yichuan He, Chengzhi Hu *, Bo Jiang *, Zhehao Sun, Jing Ma, Hongyang Li, Dawei Tang
Key Laboratory of Ocean Energy Utilization and Energy Conservation of Ministry of Education, Dalian University of Technology, Dalian 116023, China

A R T I C L E  I N F O

Keywords:
Machine learning
Neural networks
Hydrogen flow boiling
Heat transfer coefficient

A B S T R A C T

The airline industry targets net-zero carbon emissions by 2050. Hydrogen, a fuel with high energy density and clean combustion products, is poised to substitute kerosene as a new kind of future aviation fuel. Nonetheless, hydrogen aviation fuel is extremely sensitive to heat leakage during transportation from the fuel tank to the engine on account of liquid hydrogen flow boiling. Up to now, a reliable tool to predict the hydrogen flow boiling heat transfer coefficient with high accuracy has remained inaccessible. Herein, we propose a data-driven machine-learning model to predict this coefficient via a non-linear regression approach. To this end, we first collected 864 data points with hydraulic diameters ranging from 4 mm to 6 mm and flow velocities ranging from 1.33 m/s to 11.56 m/s. Then, eight machine learning regression models are developed and compared with empirical correlations. Among them, the Extra Tree model exhibited the best prediction performance, with an MSE < 0.01% and an R² = 0.9933, significantly outperforming previously reported generalized prediction correlations. Finally, we found that the Ja number, indispensable among the input parameters, served as a fundamental descriptor in the accurate prediction of the hydrogen flow boiling heat transfer coefficient. The machine learning-based technique provides a potent tool to predict hydrogen nucleate flow boiling heat transfer coefficients with great precision.

1. Introduction

Before the outbreak of the pandemic, commercial airlines accounted for about 2.5% of global carbon dioxide emissions, already exceeding the emissions of Germany as a whole (2.2%) [1]. In light of this, the aviation industry has pledged to achieve net-zero emissions by 2050, an ambitious target that will require large-scale decarbonization and sustainable aviation fuels [2]. Hydrogen, with the advantages of high energy density and clean waste products, is now gaining serious traction as a candidate sustainable aviation fuel, and tests are already underway to prove its effectiveness [3,4]. It is expected that the direct combustion of hydrogen in a gas turbine engine is capable of reducing the impact of aircraft emissions on climate by 50%-75% [5]. Nonetheless, hydrogen poses a well-documented challenge as an aviation fuel. Liquid hydrogen is reported to be extremely sensitive to heat leakage during transportation between the fuel tank and the engine, causing liquid hydrogen flow boiling. This is because of its low saturation temperature of roughly 20 K at atmospheric pressure and the low wall superheat at the Onset of Nucleate Boiling (ONB) of just 0.1 K [6,7]. Therefore, a better understanding of the liquid hydrogen flow boiling process will help estimate the effects of heat leakage, and the flow boiling coefficient is considered the most indispensable parameter for investigating this process [8-10].

Despite the significance of the flow boiling coefficient, there have been only a few investigations of liquid hydrogen, on account of the dangers of handling hydrogen. Walters et al. [11] first conducted an experimental study on the liquid hydrogen boiling heat transfer coefficient using a horizontal copper tube, with Reynolds numbers ranging from 22,000 to 66,000. These experimental results demonstrate that the heat transfer coefficient of hydrogen boiling is undoubtedly susceptible to the vapor quality and the Reynolds number, but saturation pressure plays a decisive role in the heat transfer. Nevertheless, the obtained experimental data were at a saturation state, and the influence of subcooling on the coefficient remained elusive. Shirai et al. [12,13] subsequently investigated the nucleate boiling heat transfer coefficient and heat flux of two-phase hydrogen, under experimental conditions of subcooling from 0 to 8 K and flow velocities from 1.33 to 12.90 m/s at 6.9 atm. They discovered that flow boiling can be triggered by slight wall superheat and that the critical heat flux occurs at low vapor quality [14]. Although the heat transfer coefficient of hydrogen flow boiling has been experimentally studied, it shows a strong dependence on experimental conditions. This means that experimental methods could not

* Corresponding authors.
E-mail addresses: huchengzhi@dlut.edu.cn (C. Hu), bjiang@dlut.edu.cn (B. Jiang).

https://doi.org/10.1016/j.fuel.2022.124778
Received 7 April 2022; Received in revised form 8 May 2022; Accepted 2 June 2022
Available online 9 June 2022
0016-2361/© 2022 Elsevier Ltd. All rights reserved.

Nomenclature

Bd    Bond number
Bo    Boiling number
Cp    Specific heat [J/(kg K)]
Dh    Hydraulic diameter [m]
F     = exp(−0.53 (l/d)/Re^0.4)
g     Gravitational acceleration [m/s²]
h     Heat transfer coefficient [W/(m² K)]
hfg   Latent heat [J/kg]
Ja    Jakob number
l     Loss function / channel length [m]
Pr    Prandtl number
q     Heat flux [W/cm²]
Re    Reynolds number
T     Temperature [K]
v     Flow speed [m/s]
We    Weber number

Greek symbols
λ     Thermal conductivity [W/(m K)]
σ     Surface tension [N/m]
μ     Viscosity [Pa s]
ρ     Density [kg/m³]

Subscripts
l     liquid phase
sat   saturated
w     wall

provide enough data to investigate the effect of heat leakage on liquid hydrogen transportation. Apart from experiments, prediction via empirical correlations affords an alternative for obtaining flow boiling heat transfer coefficients over broad conditions [15-18]. To this end, researchers have collected experimental flow boiling heat transfer coefficient data from the public literature and fitted correlations to them. However, these correlations can predict the heat transfer coefficient of liquid hydrogen only to a limited accuracy, within about 30%, as many of them were developed for room-temperature fluids rather than cryogens [20].

Recently, big data and artificial intelligence technologies have paved the way for a new technological revolution. Using artificial intelligence to analyze a huge quantity of experimental data contributes to uncovering hydrogen flow boiling. In particular, machine learning has been demonstrated to predict heat transfer coefficients with high accuracy [20-23]. For instance, Kuang et al. [7] employed an Artificial Neural Network (ANN) to identify the key parameters affecting liquid hydrogen boiling heat transfer. Based on the identified key parameters, they proposed a concise model for predicting hydrogen flow boiling heat transfer coefficients with an overall MAE of 12.2%. Then, Hughes et al. [19] compared different machine learning algorithms to screen the most powerful model for predicting boiling heat transfer coefficients of water, and they found that the random forest model achieved an absolute average deviation of only about 4%. Yet despite much progress in machine learning-assisted prediction, there is still little research devoted to the flow boiling heat transfer coefficient of liquid hydrogen.

Herein, we intend to predict hydrogen flow boiling Nusselt numbers, and therefore the heat transfer coefficient, by evaluating the performance of several machine learning algorithms. Furthermore, the best machine learning model will be compared with generalized prediction correlations to highlight how the machine learning methodology stacks up against conventional methods. Finally, utilizing the selected model, we shed light on the influence of key parameters on the hydrogen flow boiling heat transfer coefficient. The study provides a potent tool for investigating the effect of heat leakage on liquid hydrogen transportation, contributing to the safe usage of hydrogen aviation fuel.

2. Methods

2.1. Consolidated datasets

Table 1 summarizes the essential information from the aggregated datasets. Three sources [12,13,24] have been combined to provide a consolidated dataset with 864 data points for the hydrogen flow boiling Nusselt numbers. Shirai et al. [12] investigated the forced convection heat transfer characteristics at a pressure of 0.7 MPa under quasi-steadily increasing heat inputs; the inlet temperatures Tin were varied from 21 K to around the saturated temperature Tsat, and the flow velocities from 1.55 m/s up to 12.7 m/s. Tatsumoto et al. [24] investigated pressures of 0.4, 0.7, and 1.1 MPa under saturated conditions, where the saturated temperatures Tsat correspond to 26.0, 29.0, and 31.9 K, respectively. Yoneda et al. [13] measured heat transfer coefficients for inlet temperatures from 20.6 K up to each saturated temperature Tsat at 0.4, 0.7, and 1.1 MPa, with the flow velocity varied from about 0.3 to 15 m/s. The three databases provide liquid hydrogen flow boiling characteristics at three different hydraulic diameters. The second database, from Tatsumoto et al., has the widest experimental conditions and provides the most data points. All the data are collected from hydrogen vertical-upflow experiments with a tube heater.

Table 1
Experimental data for hydrogen flow boiling.

Sources          Dh (mm)      Tsat (K)  v (m/s)     Data points
Shirai [12]      5.95         29        1.33-8.65   121
Tatsumoto [24]   4.00, 6.00   26, 29    1.52-11.56  590
Yoneda [13]      5.965        29        1.37-7.78   153

In general, we cannot utilize all the data to train the model, since we would then not have enough data to verify the model and evaluate its predictive efficacy. As a result, we split the consolidated dataset into two parts, a training dataset and a testing dataset, for cross-validation. Typically, a 70-30%, 75-25%, or 80-20% division between training and testing datasets is used. Because the model performance depends on the training dataset volume, a fair rule of thumb is that the training dataset should be 3-4 times larger than the testing dataset. Accordingly, these two datasets make up 81% (700 data points) and 19% (164 data points), respectively, of the whole dataset.

2.2. Assessment of the neural network

The measures for the regression models are mainly: mean squared error (MSE), mean absolute error (MAE), R-squared (R²), and adjusted R² (R²ad).

MAE represents the mean of the absolute differences between the target value and the predicted value. Its value ranges from 0 to positive infinity regardless of direction, and it measures only the average magnitude of the prediction error:

MAE = (1/n) Σ_{i=1}^{n} |f_pre,i − f_exp,i|    (1)

MSE is the mean of the squared distances between the actual and predicted values:


MSE = (1/n) Σ_{i=1}^{n} (f_pre,i − f_exp,i)²    (2)

R² reflects the proportion of the variation in the dependent variable that the independent variables can explain through the regression relationship:

R² = 1 − [Σ_{i=1}^{n} (f_pre,i − f_exp,i)²] / [Σ_{i=1}^{n} (f_exp,i − f̄_exp)²]    (3)

The adjusted R² corrects for the spurious increase of R² when additional predictors are added to the model, and it is also used to assess the predictability of the model:

R²ad = 1 − (1 − R²) (n − 1)/(n − p − 1)    (4)

2.3. Machine learning models

Python version 3.8.10 (https://www.python.org/), Scikit-learn version 0.24.2 (https://scikit-learn.org/stable/), and CoolProp version 6.4.1 (http://www.coolprop.org/) are used to create the machine learning models.

2.3.1. Decision tree
The decision tree model is built on the known probabilities of different situations: it obtains the probability that the expected net present value is greater than or equal to zero, evaluates project risks, and judges feasibility. It is a graphical method for the intuitive use of probability analysis. The branching structure is called a decision tree because it is drawn like the branches of a tree.

2.3.2. AdaBoost
AdaBoost is adaptive: subsequent weak learners are adjusted in favor of the instances wrongly classified by previous classifiers. AdaBoost is sensitive to noisy data and outliers, although on some problems it can be less susceptible to overfitting than other learning algorithms. Individual learners may be weak, but as long as each learner performs slightly better than random guessing, the final ensemble can be proven to converge to a strong learner.

2.3.3. Random forest
Random forest is an ensemble learning method used for classification, regression, and other tasks. Its working principle is to build multiple decision trees during training and to output the modal class (classification) or the average prediction (regression) of the individual trees. Random forests correct the decision tree's habit of overfitting to its training set.

2.3.4. Extremely randomized trees
The Extremely randomized trees (Extra Tree) algorithm is very similar to the random forest algorithm, and both are composed of many decision trees. However, Extra Tree uses all the samples and randomizes only the feature splits; because the split thresholds are chosen at random, the results are, to some extent, better than those of random forest.

2.3.5. Gradient boosting
Gradient boosting is a machine learning technique used for regression and classification tasks. It gives a prediction model as an ensemble of weak prediction models (usually decision trees); the resulting algorithm is called a gradient-boosted tree. When the decision tree is the weak learner, it generally outperforms random forest.

2.3.6. eXtreme Gradient Boosting
Both eXtreme Gradient Boosting (XGBoost) and Gradient Boosting are boosting methods. Apart from some differences in implementation and problem-solving, the most significant difference is the definition of the objective function.

2.3.7. k-NearestNeighbor
The core idea of the k-NearestNeighbor (KNN) algorithm is that if most of the K nearest samples to a given sample in feature space belong to a certain category, then the sample also belongs to that category and shares the characteristics of samples in that category. In making the classification decision, this method determines the category of the sample to be classified only according to the categories of the one or several nearest samples.

2.3.8. Bootstrap aggregation
Bootstrap aggregation (Bagging) is a technique that reduces generalization error by combining several models. The main idea is to train several different models separately and then let all the models vote on the output for each test instance.

Dimensionless parameters are used as the input parameters to characterize the flow boiling heat transfer process, and the physical properties within the dimensionless parameters come from the NIST database. The Bo, We, Pr, Fr, Re, Pe, Bd, and Ja numbers and the aspect ratio l/d are chosen as the input parameters, which are widely used in the universal correlations describing flow boiling. In particular, the dimensionless parameter F (= exp(−0.53 (l/d)/Re^0.4)) [25], which comes from the universal correlations, is also chosen as an input parameter. It gives a more specific description of the heated surface, i.e., the heating surface's aspect ratio together with the Re number. The final hyperparameters used in this study for the ML models are summarized in Table 2.

Table 2
Model parameters selected in this study.

Model              Parameter            Value
Decision tree      Criterion            MSE
                   Splitter             best
                   Min samples split    2
                   Min samples leaf     1
AdaBoost           N estimators         50
                   Learning rate        1
                   Loss                 linear
Random Forest      Estimators number    100
                   Min samples split    2
                   Min samples leaf     1
Extra Tree         Estimators number    100
                   Min samples split    2
                   Min samples leaf     1
Gradient Boosting  Loss                 Least squares regression
                   Learning rate        0.1
                   Estimators number    100
                   Subsample            1
                   Min samples split    2
                   Max depth            4
XGBoost            Learning rate        0.3
                   Max depth            6
                   Min child weight     1
                   Max delta step       0
KNN                Neighbors number     5
                   Weights              Uniform weights
                   Leaf size            30
                   P                    2
                   Metric               Minkowski
Bagging            Estimators number    10
                   Max samples          1
                   Max features         1
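As a sketch of how such dimensionless inputs can be assembled, the snippet below computes the standard definitions of the groups named above. The numeric property values are illustrative placeholders only, not hydrogen data from the NIST/CoolProp database the study actually uses:

```python
import math

# Illustrative placeholder properties and operating conditions (NOT real data;
# in the paper these come from the NIST/CoolProp property database).
rho_l, rho_g = 70.8, 1.3       # liquid/vapour density [kg/m^3]
mu_l = 1.3e-5                  # liquid viscosity [Pa s]
cp_l = 9.7e3                   # liquid specific heat [J/(kg K)]
lam_l = 0.10                   # liquid thermal conductivity [W/(m K)]
sigma = 1.9e-3                 # surface tension [N/m]
h_fg = 4.46e5                  # latent heat [J/kg]
g = 9.81                       # gravitational acceleration [m/s^2]
D_h, L = 5.0e-3, 0.2           # hydraulic diameter / heated length [m]
v, q, dT = 5.0, 5.0e4, 2.0     # flow speed [m/s], heat flux [W/m^2], superheat [K]

G = rho_l * v                              # mass flux [kg/(m^2 s)]
Re = G * D_h / mu_l                        # Reynolds number
Pr = cp_l * mu_l / lam_l                   # Prandtl number
We = G ** 2 * D_h / (rho_l * sigma)        # Weber number
Bo = q / (G * h_fg)                        # Boiling number
Bd = g * (rho_l - rho_g) * D_h ** 2 / sigma  # Bond number
Ja = cp_l * dT / h_fg                      # Jakob number (superheat/latent heat)
F = math.exp(-0.53 * (L / D_h) / Re ** 0.4)  # surface factor F from the text
```

All groups evaluate to plain floats, so the whole feature vector for one experimental point can be built this way before being handed to the regressors.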
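The error metrics of Eqs. (1)-(4) in Section 2.2 can be computed directly; a minimal sketch (the helper name `regression_metrics` is ours, not from the paper):

```python
import numpy as np

def regression_metrics(f_exp, f_pre, p):
    """Return MAE, MSE, R^2 and adjusted R^2 as defined in Eqs. (1)-(4).

    f_exp: experimental (target) values, f_pre: predicted values,
    p: number of predictors used by the model (for the adjusted R^2)."""
    f_exp = np.asarray(f_exp, dtype=float)
    f_pre = np.asarray(f_pre, dtype=float)
    n = f_exp.size
    mae = np.abs(f_pre - f_exp).mean()                 # Eq. (1)
    mse = ((f_pre - f_exp) ** 2).mean()                # Eq. (2)
    ss_res = ((f_pre - f_exp) ** 2).sum()
    ss_tot = ((f_exp - f_exp.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot                         # Eq. (3)
    r2_ad = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)   # Eq. (4)
    return mae, mse, r2, r2_ad
```

A perfect prediction yields (0, 0, 1, 1), matching the training-set behaviour of the Decision Tree model reported later in Table 3.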

3. Results and discussion

3.1. Data distribution and correlations

The swarm plot of the target variable is shown in Fig. 1. The swarm plot shows all results and the underlying distribution in the database; the x-axis is the classification of databases, in which hydrogen is the only working fluid in this manuscript. We use a swarm plot instead of a box plot or violin plot to show the distribution of the Nusselt number. Except for the outliers, which are depicted as dots, the box describes the quartiles of the distribution, and the whisker extension covers the rest of the distribution.

Fig. 1. Swarm plot of the target variable.

The features of the supervised learning model are selected on both physical and data-driven criteria. Considering the strong dependence of the regression models on the basic assumptions in each machine learning model, the model should only deal with relevant and non-redundant features [26]. Usually, the Pearson correlation coefficient is used to measure the correlation between two variables, with a value r between −1 and 1. When r > 0, the two variables are positively correlated; otherwise, the two variables are negatively correlated. Consequently, the larger the absolute value, the stronger the correlation. Therefore, the Pearson correlation coefficient is evaluated for each pair of parameters in the training database to determine its correlation with the heat transfer coefficient.

Fig. 2 shows a heat map of Pearson's rank coefficient, showing the monotonic relationships between all variables in the dataset. Because it does not rely on any underlying distribution, this is a non-parametric technique. The heatmap demonstrates that the dimensionless parameters have a significant association (>0.5) with Ja. Unfortunately, the generalized correlations do not take the parameter Ja into account.

Fig. 2. Pearson rank coefficient heatmap for all variables in datasets.

3.2. Model performance comparison

The prediction performance of eight ML models was investigated using the full heat transfer coefficient data for hydrogen flow boiling gathered in this study. The dimensionless parameters are taken as the input parameters for the ML models, whereas the Nusselt numbers are the target parameters. Choosing the input parameters carefully eliminates needless duplication in the ML model, attains the prediction accuracy, and encourages the model's scalability.

The comparison of the predicted and experimental Nusselt numbers under the different ML models is shown in Fig. 3, and the performance characteristics of the ML models are listed in Table 3. Surprisingly, with MSE/MAE of 0 and R²/R²ad of 1, the Decision Tree model performs flawlessly on the training dataset. However, its performance on the testing dataset is not satisfactory, which indicates that the model has been overfitted [27].


The findings indicate that the Decision Tree model may not be appropriate for this task. In addition, the AdaBoost, Gradient Boosting, and KNN models did not do well in terms of prediction: high MSE/MAE and low R²/R²ad are not the desired result in the ML regression process. The remaining four ML models (Random Forest, Extra Tree, XGBoost, Bagging) have shown excellent performance, especially on the training dataset; all of them have a high R²/R²ad (>0.99). On the testing dataset, the primary performance difference can be seen. Although the R²/R²ad of these ML models is >0.96 on the testing dataset, the R²/R²ad of the Extra Tree model is 0.99. This demonstrates that the Extra Tree model performs admirably on both the training and testing datasets. Furthermore, the MAE/MSE of the Extra Tree model is extremely low, accounting for less than half of the MAE/MSE of the other three models.

Fig. 3. Comparison of machine learning-predicted hydrogen flow boiling Nusselt number data with experimentally obtained data. Models include: (a) Decision Tree; (b) AdaBoost; (c) Random Forest; (d) Extra Tree; (e) XGBoost; (f) Gradient Boosting; (g) KNN; (h) Bagging.
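The fit-and-score loop behind this comparison can be sketched with scikit-learn's `ExtraTreesRegressor` using the Table 2 hyperparameters. The data below are synthetic stand-ins, not the paper's consolidated hydrogen dataset; only the 700/164 split mirrors Section 2.1:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the (dimensionless inputs -> Nu) regression task.
X = rng.uniform(size=(864, 8))                              # 8 dimensionless inputs
y = 100 + 400 * X[:, 0] ** 0.6 * X[:, 2] + 50 * X[:, 7]     # smooth illustrative target

# ~81/19 split as in Section 2.1 (700 training / 164 testing points).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=164, random_state=0)

model = ExtraTreesRegressor(n_estimators=100, min_samples_split=2,
                            min_samples_leaf=1, random_state=0)  # Table 2 values
model.fit(X_tr, y_tr)

r2_train = model.score(X_tr, y_tr)   # near-perfect on the training data
r2_test = model.score(X_te, y_te)    # the figure that actually matters
```

Because Extra Trees grows fully expanded trees on the complete sample (no bootstrapping by default), the training R² is essentially 1, so the test score is the meaningful comparison quantity, as Table 3 illustrates.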


Table 3
Performance of different model predictions.

Model              Metric  Train       Test
Decision Tree      MSE     0           12344.513
                   MAE     0           67.9352
                   R²      1           0.972197
                   R²ad    1           0.970572
AdaBoost           MSE     39481.73    45689.40
                   MAE     172.9216    176.4117
                   R²      0.893896    0.897096
                   R²ad    0.892512    0.891082
Random Forest      MSE     1242.022    8614.083
                   MAE     18.70457    54.39256
                   R²      0.996662    0.980599
                   R²ad    0.996618    0.979465
Extra Tree         MSE     1.287e−24   3508.639
                   MAE     6.800e−13   31.5238
                   R²      0.999999    0.99209
                   R²ad    0.999999    0.99163
XGBoost            MSE     1587.33     9214.42
                   MAE     20.5916     55.0708
                   R²      0.99573     0.97924
                   R²ad    0.99567     0.97803
Gradient Boosting  MSE     4756.681    13107.13
                   MAE     40.8283     68.8140
                   R²      0.987216    0.970479
                   R²ad    0.987050    0.968754
KNN                MSE     18170.54    35011.79
                   MAE     69.9971     102.675
                   R²      0.951168    0.921145
                   R²ad    0.950531    0.916536
Bagging            MSE     4467.329    15055.93
                   MAE     39.15408    85.72513
                   R²      0.994176    0.979675
                   R²ad    0.993643    0.96661

Table 4
Extra Tree model predictions for fixed input parameters.

Case  Input dimensionless numbers                  Training R²  Training R²ad  Testing R²  Testing R²ad
1     Bo, We, Prl, Frl, Rel, Pel, Bd, Ja, l/d, F   0.99999      0.99999        0.99182     0.99134
2     Bo, We, Prl, Frl, Rel, Pel, Bd, Ja           0.99999      0.99999        0.99277     0.99240
3     Bo, We, Prl, Frl, Rel, Pel, Bd               0.96938      0.96907        0.75260     0.74150
4     Bo, We, Prl, Frl, Rel, Pel                   0.96938      0.96912        0.75562     0.74629
5     Bo, We, Prl, Rel, Bd, Ja                     0.99999      0.99999        0.99186     0.99155
6     Bo, We, Prl, Rel, Ja                         0.99999      0.99999        0.99163     0.99136
7     Bo, Prl, Rel, Ja                             0.99999      0.99999        0.99030     0.99005
8     Bo, Prl, Rel                                 0.96938      0.96925        0.75187     0.74722
9     Prl, Rel                                     0.24774      0.24559        0.26231     0.25314
10    Bo, Prl                                      0.96692      0.96682        0.09526     0.08402
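The input-parameter ablation of Table 4 amounts to re-fitting the model on feature subsets and comparing test scores. A sketch with synthetic stand-in data (the target function and the two subset labels are illustrative assumptions, not the paper's cases):

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
names = ["Bo", "We", "Prl", "Rel", "Ja"]

# Synthetic stand-in data: the target leans heavily on the "Ja" column.
X = rng.uniform(size=(864, len(names)))
y = 200 * X[:, 4] + 80 * X[:, 0] + 10 * X[:, 3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=164, random_state=0)

# Re-fit the same model on different input subsets, as done for Table 4.
cases = {"all": [0, 1, 2, 3, 4], "without Ja": [0, 1, 2, 3]}
scores = {}
for label, cols in cases.items():
    m = ExtraTreesRegressor(n_estimators=100, random_state=0)
    m.fit(X_tr[:, cols], y_tr)
    scores[label] = m.score(X_te[:, cols], y_te)
```

With a target that depends strongly on the dropped feature, the test R² collapses when it is removed, mirroring the cliff-like decline the paper reports between the Ja-containing and Ja-free cases.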

Extra Tree consists of many decision trees; it uses all the samples, but the features are randomly selected and the splits themselves are random, so to some extent better results are obtained. When training each decision tree, the random forest provided by Scikit-learn does not look for the optimal partition over all features at each node, but over a random subset of features, which increases the randomness of the ensemble's sub-models. The more the sub-models differ, the more conducive this is to ensemble learning, and one way to create differences is to increase randomness. The random forest encapsulated by Scikit-learn looks for the optimal partition on a random feature subset, which further increases the randomness. The second-best performing model, XGBoost, can usually outperform many other ML algorithms since it uses Newton's method, which employs a higher-order alternative for optimization problems [28]. The decision tree is a nonparametric supervised learning method that can summarize decision rules from a series of data with features and labels and present these rules in the structure of a tree graph to solve classification and regression problems. Each internal node in the decision tree represents a judgment on an attribute, and each branch represents the output of a judgment result; finally, each leaf node represents a classification result. However, the decision tree model easily produces overfitting [29]. In addition, tiny changes in the data can change the shape of the whole tree, and the method is unfriendly to unbalanced data. Combining PCA, ICA, or another dimensionality reduction algorithm in the feature selection module can prevent overfitting [30,31].

The model performance for fixed input parameters is examined using the Extra Tree model, as shown in Table 4, to compare the effect of changing the input parameters. We employed various combinations of dimensionless parameters to construct ten cases as inputs. As can be observed, Case 1's performance is inferior to Case 2's, indicating that l/d and F are ineffective input parameters. If there are too many input parameters, multiple higher-order terms must be formed, resulting in an excess of features, learning parameters, and complexity [32]. When Ja is not utilized as an input parameter, as in Cases 3 and 4, the ML model's prediction accuracy is considerably lowered. As a result, Ja helps increase the ML model's prediction accuracy. Cases 6 and 7 demonstrate that We has a moderate impact on accuracy, with the more significant effect reflected in the testing set. The Bo number, on the other hand, is crucial: when comparing Cases 8, 9, and 10, it can be seen that R²/R²ad exhibits a cliff-like decline for Case 9 without the Bo input.

Fig. 4. Feature selection results obtained by Extra Tree Model.
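The impurity-based feature ranking behind Fig. 4 can be sketched as follows. The data are synthetic stand-ins deliberately constructed so that the columns labelled "Ja" and "Bo" dominate, purely to illustrate the mechanics of `feature_importances_`:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
names = ["Bo", "We", "Prl", "Frl", "Rel", "Pel", "Bd", "Ja"]

# Synthetic stand-in data in which two features dominate the target.
X = rng.uniform(size=(800, len(names)))
y = 300 * X[:, 7] + 100 * X[:, 0] + 5 * rng.normal(size=800)

model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importances, averaged over the randomized trees; they sum to 1.
ranking = sorted(zip(names, model.feature_importances_),
                 key=lambda t: t[1], reverse=True)
```

Averaging the per-tree importance estimates over many randomized trees is exactly the variance-reduction argument made above, which is why the ranking is stable enough to use for feature selection.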


Table 5
The mean absolute relative errors between predictions of models and experimental results.

Authors                      Correlations                                                                    MAE      R²
Fang et al. [38]             Nu = Ff M^−0.18 Bo^0.98 Fr_lo^0.48 Bd^0.72 (ρl/ρg)^0.29 [ln(μlf/μlw)]^−1 Y (a)   6.30%    0.8897
Gungor and Winterton [39]    Nu = (E hl + S hpool) Dh/λl (b)                                                 11.59%   0.7913
Liu and Winterton [40]       Nu = sqrt[(E hl)² + (S hpool)²] Dh/λl (c)                                       22.49%   0.7930
Kim and Mudawar [25]         Nu = sqrt(hnb² + hcb²) Dh/λl (d)                                                61.70%   0.4460
Kandlikar [15]               Nu = hl (C1 Co^C2 + C3 Bo^C4 Ffl) Dh/λl (e)                                     539.1%   0.2321
Warrier et al. [11]          Nu = E hl Dh/λl (f)                                                             100%     0.3489
Oh and Son [41]              Nu = 0.034 Re_l^0.8 Pr_l^0.3 [1.58 (1/Xtt)^0.87]                                446.3%   0.2600
Kuang and Wang [7]           Nu_tp = 427.13 Bo^0.6223 · 0.023 Re_fo^0.8 Pr_l^0.4                             13.2%    0.9641
Extra Tree Model             –                                                                               <0.01%   0.9933

(a) Y = 1 for pr ≤ 0.43; Y = 1.38 − pr^1.15 for pr > 0.43. Ff = 1715 for nitrogen; not listed for hydrogen.
(b) E = 1 + 24000 Bo^1.16 + 1.37 (1/Xtt)^0.86, hl = 0.023 Re_l^0.8 Pr_l^0.4 (λl/Dh), S = [1 + 1.15×10^−6 E² Re_l^1.17]^−1, hpool = 55 pr^0.12 (−log10 pr)^−0.55 M^−0.5 q^0.67
(c) hl = 0.023 Re_fo^0.8 Pr_l^0.4 (λl/Dh), E = [1 + x Prl (ρl/ρg − 1)]^0.35, hpool = 55 pr^0.12 (−log10 pr)^−0.55 M^−0.5 q^0.67, S = (1 + 0.055 E^0.1 Re_fo^0.16)^−1
(d) hnb = [2345 (Bo PH/PF)^0.70 pr^0.38 (1 − x)^−0.51] · 0.023 Re_l^0.8 Pr_l^0.4 (λl/Dh), hcb = [5.2 (Bo PH/PF)^0.08 We_fo^−0.54 + 3.5 (1/Xtt)^0.94 (ρg/ρl)^0.25] · 0.023 Re_l^0.8 Pr_l^0.4 (λl/Dh)
(e) Co = ((1 − x)/x)^0.8 (ρg/ρl)^0.5, hl = 0.023 Re_fo^0.8 Pr_l^0.4 (λl/Dh)
(f) hl = 0.023 Re_fo^0.8 Pr_l^0.4 (λl/Dh), E = 1.0 + 6.0 Bo^(1/16) − 5.3 (1 − 855 Bo) x^0.65
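As an illustration, Kuang and Wang's hydrogen-specific correlation from Table 5 can be evaluated directly; the dimensionless input values and the property values used to convert Nu into h below are placeholders, not experimental conditions:

```python
# Kuang and Wang's correlation from Table 5 (illustrative input values only).
Bo, Re_fo, Pr_l = 2.0e-4, 1.2e5, 1.26

# Nu_tp = 427.13 * Bo^0.6223 * 0.023 * Re_fo^0.8 * Pr_l^0.4
Nu_tp = 427.13 * Bo ** 0.6223 * 0.023 * Re_fo ** 0.8 * Pr_l ** 0.4

# Back out the heat transfer coefficient h = Nu * lambda_l / D_h
# (placeholder lambda_l = 0.10 W/(m K), D_h = 5 mm).
h = Nu_tp * 0.10 / 5.0e-3
```

This is the kind of closed-form evaluation that the empirical correlations permit, and that the ML model replaces with a learned regression over the same dimensionless groups.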

Fig. 5. Hydrogen flow boiling Nusselt number variations with (a) system pressure; (b) flow speed; (c) heat flux.


The majority of the input samples use the characteristics at the top of the tree in the final prediction decision. As a result, the predicted contribution of the models may be used to evaluate the relative relevance of the features. By averaging these projected activity rates over numerous random trees, the estimated variance may be decreased and used for feature selection. The feature selection results of Extra Tree are depicted in Fig. 4. It can be seen that the top five factors are Ja, Bo, Prl, We, and Rel. Kim and Mudawar's general correlation can further confirm these most important features [25] (except Ja). Also, some machine learning works do not include the Ja number in the input parameters for predicting heat transfer processes [21,33]. However, the Ja number is a measure of liquid superheat and latent heat in the process of liquid phase change. The Ja number is often employed as the benchmark of the bubble departure process in boiling [34], demonstrating its significance in the boiling process. To sum up, the Ja number is recommended as one fundamental descriptor.

3.3. Understanding predictions

The ability of the machine learning model (Extra Tree model) on the consolidated dataset was tested in this part, and the results were compared to generalized correlations for heat transfer coefficient predictions, as illustrated in Table 5. The value Ff for hydrogen is not investigated in Fang's work; thus, we adopt the number 1315 that Kuang recommends [7]. Fang's correlation forecasting findings have an MAE of 6.30%, which appears to be good. This correlation proposes an adjustment for wall-surface liquid viscosity, making the correlation more suited for low-temperature fluids [14]. However, Fang's correlation is not recommended, since it is very dependent on the choice of Ff. The heat transfer Nusselt number was separated into two portions by the Gungor-Winterton and Liu-Winterton correlations: the enhanced single-phase liquid convection heat transfer and the suppressed pool boiling heat transfer. Previous research has demonstrated that these correlations are well adapted to room-temperature fluids [35]. The Kim-Mudawar correlation can resolve the heat transfer ability of a system dominated by nucleate boiling and convection boiling and has good accuracy, particularly in micro/mini channels [25]. Despite considering several parameters, this correlation does not function well for heat transfer prediction of low-temperature fluids; this might be because it was created using room-temperature fluids. The Kandlikar, Warrier, and Oh-Son correlations are notoriously over-predicting, and their scattered predictions indicate that the stability of these correlations is not satisfactory. Hence, they are not recommended. Since it is a correlation created exclusively for hydrogen, Kuang's correlation outperforms the others. However, even the most powerful predictive correlations cannot match the predictive potential of a machine learning model. Large amounts of data may aid machine learning, and its tremendous capacity aids in resolving a variety of issues [36,37]. Our original intention is to use it to predict the heat transfer coefficient of hydrogen flow boiling.

3.4. Physical results validation

On both the training and testing datasets, the ML model performed well. However, one drawback of machine learning models is that they might provide unrealistic or unphysical outcomes in various situations [42]. We therefore test the "physical" performance of the ML model in this part. The variation of the hydrogen flow boiling Nusselt number with system pressure, flow speed, and heat flux is shown in Fig. 5. The greater the system pressure, the larger the Nu number, and hence the higher the heat transfer coefficient, as seen in Fig. 5(a). The bubble departure size will shrink as system pressure rises, and the bubble departure frequency will increase; small bubbles are more prone to detach, which leads to a higher heat transfer coefficient. Nu rises initially and subsequently drops as the heat flux increases, as shown in Fig. 5(c). When the heat load is large, the number of vaporization cores on the inner wall of the evaporation tube grows dramatically, and the production speed of vapor bubbles surpasses the separation speed, forming a vapor film, also known as the departure from nucleate boiling (DNB). It is worth mentioning that the ML model's prediction error for the critical heat flux at DNB is <5% compared with the experimental value. Sound physical reasons thus support the predicted results of the ML model.

4. Conclusion

In this study, the machine learning approach is utilized to explore the flow boiling heat transfer coefficient of hydrogen aviation fuel, and an approach for determining hydrogen nucleate flow boiling heat transfer coefficients is presented. With the aid of the consolidated dataset, the Extra Tree model is used to investigate the prediction performance. The following are the primary conclusions:

1. Liquid hydrogen nucleate flow boiling heat transfer coefficients are compiled into a consolidated dataset. The dataset was collected from three sources and contains 864 data points.
2. The Extra Tree model outperforms the other seven machine learning-based models in prediction, and the input parameters are also compared to find a suitable set. For this dataset, the relevant input parameters are Bo, We, Prl, Frl, Rel, Pel, Bd, and Ja, and the Ja number is indispensable among them as one fundamental descriptor.
3. Compared with the generalized correlations, the ML model outperformed even very reliable generalized prediction correlations and did not provide unrealistic or unphysical findings when predicting liquid hydrogen nucleate flow boiling heat transfer coefficients. A reliable prediction approach is unlocked.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. 52176058, 52106078, 51806028, 51876027, 52106226), the Fundamental Research Funds for the Central Universities (DUT20RC(3)095, DUT22RC(3)003), and the Natural Science Foundation of Liaoning Province of China (2021-BS-068).

References

[1] Eregowda T, Chatterjee P, Pawar DS. Impact of lockdown associated with COVID19 on air quality and emissions from transportation sector: case study in selected Indian metropolitan cities. Environ Syst Decis 2021;41:401-12. https://doi.org/10.1007/S10669-021-09804-4/TABLES/6.
[2] Dincer I, Acar C. A review on potential use of hydrogen in aviation applications. Int J Sustain Aviat 2016;2:74. https://doi.org/10.1504/IJSA.2016.076077.
[3] Ma J, Jiang B, Li L, Yu K, Zhang Q, Lv Z, et al. A high temperature tubular reactor with hybrid concentrated solar and electric heat supply for steam methane reforming. Chem Eng J 2022;428:132073. https://doi.org/10.1016/J.CEJ.2021.132073.
[4] Khandelwal B, Karakurt A, Sekaran PR, Sethi V, Singh R. Hydrogen powered aircraft: The future of air transport. Prog Aerosp Sci 2013;60:45-59. https://doi.org/10.1016/J.PAEROSCI.2012.12.002.
[5] Zhang C, Chen L, Ding S, Zhou X, Chen R, Zhang X, et al. Mitigation effects of alternative aviation fuels on non-volatile particulate matter emissions from aircraft
wall slippage, accelerating boundary layer rupture. The higher flow gas turbine engines: A review. Sci Total Environ 2022;820:153233. https://doi.
velocity accelerates the bubble’s exit and causes it to slide against the org/10.1016/J.SCITOTENV.2022.153233.
[6] Kuang Y, Han F, Sun L, Zhuan R, Wang W. Modeling and numerical investigation of
wall in Fig. 5(b). Convective heat transfer can also be improved by hydrogen nucleate flow boiling heat transfer. Int J Hydrogen Energy 2021;46:
increasing flow speed. Finally, a more significant flow speed may lead to 19617–32. https://doi.org/10.1016/J.IJHYDENE.2021.03.084.

[7] Kuang Y, Han F, Sun L, Zhuan R, Wang W. Saturated hydrogen nucleate flow boiling heat transfer coefficients study based on artificial neural network. Int J Heat Mass Transf 2021;175:121406. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2021.121406.
[8] Darr SR, Hu H, Glikin N, Hartwig JW, Majumdar AK, Leclair AC, et al. An experimental study on terrestrial cryogenic tube chilldown II. Effect of flow direction with respect to gravity and new correlation set. Int J Heat Mass Transf 2016;103:1243–60.
[9] Darr SR, Hu H, Glikin NG, Hartwig JW, Majumdar AK, Leclair AC, et al. An experimental study on terrestrial cryogenic transfer line chilldown I. Effect of mass flux, equilibrium quality, and inlet subcooling. Int J Heat Mass Transf 2016;103:1225–42.
[10] Tang X, Pu L, Shao X, Lei G, Li Y, Wang X. Dispersion behavior and safety study of liquid hydrogen leakage under different application situations. Int J Hydrogen Energy 2020;45:31278–88. https://doi.org/10.1016/J.IJHYDENE.2020.08.031.
[11] Walters HH. Single-tube heat transfer tests with liquid hydrogen. Adv Cryog Eng 1961:509–16. https://doi.org/10.1007/978-1-4757-0534-8_53.
[12] Shirai Y, Tatsumoto H, Shiotsu M, Hata K, Kobayashi H, Naruo Y, et al. Forced flow boiling heat transfer of liquid hydrogen for superconductor cooling. Cryogenics (Guildf) 2011;51(6):295–9.
[13] Yoneda K, Shirai Y, Shiotsu M, Oura Y, Horie Y, Matsuzawa T, et al. Forced flow boiling heat transfer properties of liquid hydrogen for manganin plate pasted on one side of a rectangular duct. Phys Procedia 2015;67:637–42.
[14] Dittus FW, Boelter LMK. Heat transfer in automobile radiators of the tubular type. Int Commun Heat Mass Transf 1985;12:3–22. https://doi.org/10.1016/0735-1933(85)90003-X.
[15] Kandlikar SG. A general correlation for saturated two-phase flow boiling heat transfer inside horizontal and vertical tubes. J Heat Transfer 1990;112:219–28. https://doi.org/10.1115/1.2910348.
[16] Ganesan V, Patel R, Hartwig J, Mudawar I. Review of databases and correlations for saturated flow boiling heat transfer coefficient for cryogens in uniformly heated tubes, and development of new consolidated database and universal correlations. Int J Heat Mass Transf 2021;179:121656. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2021.121656.
[17] Zhang J, Ma Y, Wang M, Zhang D, Qiu S, Tian W, et al. Prediction of flow boiling heat transfer coefficient in horizontal channels varying from conventional to small-diameter scales by genetic neural network. Nucl Eng Technol 2019;51(8):1897–904.
[18] Lee SK, Lee Y, Brown NR, Terrani KA. Elucidating the impact of flow on material-sensitive critical heat flux and boiling heat transfer coefficients: An experimental study with various materials. Int J Heat Mass Transf 2020;158:119970. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2020.119970.
[19] Hughes MT, Fronk BM, Garimella S. Universal condensation heat transfer and pressure drop model and the role of machine learning techniques to improve predictive capabilities. Int J Heat Mass Transf 2021;179:121712. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2021.121712.
[20] Qiu Y, Garg D, Kim S-M, Mudawar I, Kharangate CR. Machine learning algorithms to predict flow boiling pressure drop in mini/micro-channels based on universal consolidated data. Int J Heat Mass Transf 2021;178:121607.
[21] Zhu G, Wen T, Zhang D. Machine learning based approach for the prediction of flow boiling/condensation heat transfer performance in mini channels with serrated fins. Int J Heat Mass Transf 2021;166:120783. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2020.120783.
[22] Kwon B, Ejaz F, Hwang LK. Machine learning for heat transfer correlations. Int Commun Heat Mass Transf 2020;116:104694. https://doi.org/10.1016/J.ICHEATMASSTRANSFER.2020.104694.
[23] He Y, Hu C, Li H, Jiang B, Hu X, Wang K, et al. A flexible image processing technique for measuring bubble parameters based on a neural network. Chem Eng J 2022;429:132138. https://doi.org/10.1016/J.CEJ.2021.132138.
[24] Tatsumoto H, Shirai Y, Shiotsu M, Hata K, Naruo Y, Kobayasi H, et al. Forced convection heat transfer of saturated liquid hydrogen in vertically-mounted heated pipes. AIP Conf Proc 2015;1573:44. https://doi.org/10.1063/1.4860681.
[25] Kim SM, Mudawar I. Review of databases and predictive methods for heat transfer in condensing and boiling mini/micro-channel flows. Int J Heat Mass Transf 2014;77:627–52. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2014.05.036.
[26] Van Dijck G, Van Hulle MM. Speeding up the wrapper feature subset selection in regression by mutual information relevance and redundancy analysis. Lect Notes Comput Sci 2006;4131:31–40. https://doi.org/10.1007/11840817_4.
[27] Ying X. An overview of overfitting and its solutions. J Phys Conf Ser 2019;1168:022022. https://doi.org/10.1088/1742-6596/1168/2/022022.
[28] Nielsen D. Tree boosting with XGBoost: why does XGBoost win "every" machine learning competition? Norwegian University of Science and Technology; n.d.
[29] Helmbold DP, Schapire RE. Predicting nearly as well as the best pruning of a decision tree. Mach Learn 1997;27:51–68.
[30] Sun W, Chen J, Li J. Decision tree and PCA-based fault diagnosis of rotating machinery. Mech Syst Signal Process 2007;21:1300–17. https://doi.org/10.1016/J.YMSSP.2006.06.010.
[31] Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, et al. Scikit-learn: machine learning in Python. J Mach Learn Res 2011;12:2825–30.
[32] Domingos P. A few useful things to know about machine learning. Commun ACM 2012;55:78–87. https://doi.org/10.1145/2347736.2347755.
[33] Qiu Y, Garg D, Zhou L, Kharangate CR, Kim SM, Mudawar I. An artificial neural network model to predict mini/micro-channels saturated flow boiling heat transfer coefficient based on universal consolidated data. Int J Heat Mass Transf 2020;149:119211. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2019.119211.
[34] Zhou P, Huang R, Huang S, Zhang Yu, Rao X. Experimental investigation on bubble contact diameter and bubble departure diameter in horizontal subcooled flow boiling. Int J Heat Mass Transf 2020;149:119105.
[35] Kuang YW, Wang W, Zhuan R, Yi CC. Simulation of boiling flow in evaporator of separate type heat pipe with low heat flux. Ann Nucl Energy 2015;75:158–67. https://doi.org/10.1016/J.ANUCENE.2014.08.008.
[36] Vinyals O, Babuschkin I, Czarnecki WM, Mathieu M, Dudzik A, Chung J, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 2019;575(7782):350–4.
[37] Castelvecchi D. DeepMind's AI helps untangle the mathematics of knots. Nature 2021;600:202. https://doi.org/10.1038/D41586-021-03593-1.
[38] Fang X, Wu Q, Yuan Y. A general correlation for saturated flow boiling heat transfer in channels of various sizes and flow directions. Int J Heat Mass Transf 2017;107:972–81. https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2016.10.125.
[39] Gungor KE, Winterton RHS. A general correlation for flow boiling in tubes and annuli. Int J Heat Mass Transf 1986;29:351–8. https://doi.org/10.1016/0017-9310(86)90205-X.
[40] Liu Z, Winterton RHS. A general correlation for saturated and subcooled flow boiling in tubes and annuli, based on a nucleate pool boiling equation. Int J Heat Mass Transf 1991;34:2759–66. https://doi.org/10.1016/0017-9310(91)90234-6.
[41] Oh HK, Son CH. Evaporation flow pattern and heat transfer of R-22 and R-134a in small diameter tubes. Heat Mass Transf 2011;47:703–17. https://doi.org/10.1007/S00231-011-0761-4/FIGURES/17.
[42] Pilozzi L, Farrelly FA, Marcucci G, Conti C. Machine learning inverse problem for topological photonics. Commun Phys 2018;1:1–7. https://doi.org/10.1038/s42005-018-0058-8.