
34th Symposium on Naval Hydrodynamics

Washington, USA, 26 June - 1 July 2022

Data-driven Underwater Radiated Noise Modelling of Cavitating Marine Propellers

M. Kalikatzarakis1, A. Coraddu2, M. Atlar1, S. Gaggero3, G. Tani3, L. Oneto3
(1 University of Strathclyde, UK; 2 Delft University of Technology, the Netherlands; 3 University of Genoa, Italy)

ABSTRACT

The potential impact of ships' underwater radiated noise (URN) on marine fauna has become an important issue. The most dominant noise source on a propeller-driven vessel is propeller cavitation, and the accurate prediction of its noise signature is fundamental for the design process. In this work, we investigate the potential of using Machine Learning methods for the prediction of URN from cavitating marine propellers that can be conveniently implemented within the design process. We compare computational and experimental results on a subset of the Meridian standard propeller series, behind different severities of axial wake, for a total of 432 experiments. The results on an interpolation and two extrapolation tasks support the validity and effectiveness of the proposed approach.

1 INTRODUCTION

Underwater Radiated Noise (URN) is a subject of increased interest in naval architecture, primarily because of its negative consequences on marine life and, secondly, because of considerations about the comfort of crew and passengers (IMO, 2012, 2014). A seagoing vessel has a number of noise sources: the main and auxiliary engines, electric motors, flow noise due to turbulence in the boundary layer, the wake of the hull and appendages, and the noise due to wave breaking. The propeller is the most significant noise source, generating the highest noise levels at frequencies below 200 Hz (Hildebrand, 2009). Unfortunately, this low-frequency band overlaps with the audible range of several marine mammals, thus affecting their fundamental living activities (Hildebrand, 2005). In addition, when cavitation occurs, propeller noise may dominate the acoustic signature of the ship even at higher frequencies. While the understanding of the various noise generation mechanisms is still an ongoing area of research, the development of different methods to accurately predict ship-generated noise becomes ever more important and urgent (Brooker and Humphrey, 2016; Ianniello et al., 2013, 2014; Li et al., 2018). Computational tools for URN prediction can be divided into two main categories, characterised by different complexity and capability to model the underlying physics (Li et al., 2018): empirical and semi-empirical models, and Computational Fluid Dynamics (CFD) methods.

Empirical and semi-empirical models were the first attempts at predicting cavitation noise and have been investigated by several researchers (Bosschers, 2017, 2018; Lafeber and Bosschers, 2016; Wittekind, 2014; Wittekind and Schuster, 2016). Most simplified approaches for very initial predictions utilise fully empirical formulas based on curve fitting to available measurement data; although these approaches model only limited parts of the underlying physical phenomena, they are utilised extensively in the initial design stage due to their limited computational cost (Bosschers et al., 2017). More accurate results can be achieved with sophisticated CFD models, although in this case, too, achieving accurate and reliable results is not trivial. The commonly used approach involves coupling hydrodynamic solvers, which detect the hydrodynamic sources of sound for cavitating propellers, with the Ffowcs Williams-Hawkings (FWH) acoustic analogy (Bensow and Liefvendahl, 2016; Ianniello et al., 2013, 2014; Lidtke, 2017; Sezen et al., 2021a,b). However, URN prediction with most CFD methods is still a computationally expensive procedure, even with the crude simplification of splitting the acoustic and hydrodynamic parts. This makes their use impractical in a conventional propeller design loop (Aktas et al., 2016; van Wijngaarden, 2005).

As a consequence, it is of interest to replace these efforts with a computationally cheap method that is able to accurately predict URN from only information which is readily available at design stage, such
as the propeller geometry, the inflow wake, and an approximate estimate of cavitation from numerical results of a Boundary Element Method (BEM), which is realistically available during the design stage. For this reason, in this work we propose to employ Machine Learning (ML) approaches (Györfi et al., 2002; Vapnik, 1998), in which models rely on robust statistical inference procedures and historical data to make predictions about previously unseen cases at a very small fraction of the cost of CFD methods. Three different algorithms are utilised to solve this problem: Extreme Learning Machines (ELM), Kernel Regularised Least Squares (KRLS), and Random Forests (RF). We consider a dedicated feature engineering process to extract meaningful information from the available experimental data and several additional quantities computed by a BEM (Gaggero and Villa, 2018; Gaggero et al., 2017), mindful to keep the cardinality of the input space as low as possible. The algorithms are trained and tested utilising 432 experiments conducted in the Emerson Cavitation Tunnel (Atlar, 2011), performed on a small, but commercially representative, subset of the Meridian standard series propeller models (Aktas, 2017) behind different severities of axial wake that were created using three two-dimensional wake screens. Models are tested in real-world scenarios, aiming to predict the URN spectra in groups of experiments that lie outside the experiment space utilised to develop them. The advantages of the current approach are analysed in terms of generalisation capabilities.

The rest of the paper is organised as follows. Section 2 details the experimental data used to evaluate the accuracy of the proposed methods. Section 3 presents the set of computational methods used to evaluate flow features and the URN for the considered propellers and wake fields. Section 4 reports comparative results between the numerical methods and the experimental data, and Section 5 collects the conclusions of the paper.

2 AVAILABLE DATA

For the scope of this work, we utilise the dataset generated in the measurement campaign of (Aktas, 2017; Aktas et al., 2018). The same dataset has also been utilised in our previous work (Kalikatzarakis et al., 2021), but a brief description is provided for the sake of completeness. The authors conducted systematic cavitation tunnel tests at the Emerson Cavitation Tunnel of Newcastle University (Atlar, 2011), with 6 members of the Meridian standard propeller series (Emerson, 1978) and 3 wakefields. A full factorial experimental design was conducted with a constant inflow velocity of 3 [m/s], 3 different levels of tunnel vacuum conditions (atmospheric, 150 [mmHg], 300 [mmHg]) and 8 propeller rotational speeds, for a total of 432 experiments.

The Meridian propeller series, derived from the proprietary propeller designs of Stone Manganese Marine Ltd., is a unique standard series based solely on practical propeller designs with standardised variations in pitch-to-diameter ratio (p/D), blade area ratio (AE/AO) and number of blades Z. Currently there are 60 propellers in the series (Carlton, 2018), from which we utilise the following models: KCD 65, KCD 74, KCD 129, KCD 191, KCD 192, and KCD 193, presented in Figure 1.

Figure 1: Visual impression of the propellers in the dataset. Panels: (a) KCD 65, (b) KCD 74, (c) KCD 129, (d) KCD 191, (e) KCD 192, (f) KCD 193.

Figure 2: Contour plots of axial velocity distributions of the wakefields in the dataset. Panels: (a) W1, (b) W2, (c) W3.

The authors selected the wakefields based on the criteria suggested by (Angelopoulos et al., 1988; Konno et al., 2002; Odabaşi and Fitzsimmons, 1978). According to these studies, wakefields with steeper velocity changes produce higher tonal amplitudes of pressure fluctuations, as well as high-frequency contributions from increased dynamic cavity collapses, both away from and on the blade surface. Based on these observations, the wake non-uniformity, mean wake, half-wake width and wake depth were controlled to generate 3 wakefields, referred to as W1, W2, and W3, that induce variations in the inflow velocities of varying severity. These changes will subse-
quently induce the formation of unsteady cavitation from the collapse and rebound of cavity volumes at the exit of the wake peak region. Figure 2 provides a visual impression of these wakefields.

Table 1: Quantities available in the dataset.

Symbol    Description                      Size*       Units

Propeller geometry
Dp        Propeller diameter                           [m]
Z         Number of blades                             [-]
AE/AO     Blade area ratio                             [-]
p         Pitch                            1x8         [-]
c         Chord                            1x8         [-]
iT        Total rake                       1x8         [-]
tmax      Max. thickness                   1x8         [-]
fmax      Max. camber                      1x8         [-]
θs        Skew angle                       1x8         [deg]

Operating conditions
np        Rotational speed                             [rpm]
Va        Advance velocity                             [m/s]
prel      Tunnel pressure                              [mbar]
wt        Axial wakefield                  22x60       [-]
J         Advance coefficient                          [-]
Kt        Thrust coefficient                           [-]
10Kq      Torque coefficient                           [-]
ηo        Propeller efficiency                         [-]
σv        Cav. index ref. on Va                        [-]
σn        Cav. index ref. on np                        [-]
σn,tip    Cav. index ref. on np (blade tip)            [-]
αG        Angle of attack                  22x60       [deg]
f         Frequency vector                 1x31        [Hz]
RNL       RNLs in 1/3 octave               1x31        [dB]

Quantities estimated by BEM simulations
Γ         Blade circulation                22x60       [-]
Cpn       Pressure coefficient             44x22x60    [-]

* Empty field indicates scalar quantity.

URN was measured by one hydrophone placed in the tunnel test section, and was acquired in the form of pressure time traces. These time traces were subsequently converted into 1/3 octave bands, corrected for background noise, and converted to the standard measuring distance of 1 [m] according to the recommendations of (ITTC Specialist Committee on Hydrodynamic Noise, 2017). To provide a richer representation of the underlying phenomena for the ML models, we also estimated 2 tensor quantities with a computationally cheap BEM numerical code that is extensively described in our preliminary work (Kalikatzarakis et al., 2021) and verified in (Gaggero and Brizzolara, 2009; Gaggero and Villa, 2017, 2018; Gaggero et al., 2010, 2013, 2014, 2016, 2019). The estimated quantities include the distribution of the pressure coefficient Cpn for 44 chord-wise locations, 22 radial sections and 60 angular positions, as well as the unsteady radial circulation distribution Γ for the same radial sections and angular positions. Cpn is directly related to the occurrence of blade surface cavitation and can provide an estimation of cavitation inception. Γ is related to the forces acting on the hydrofoil, and in particular to the lift, according to the well-known Kutta-Joukowski theorem; it correlates with the load acting on the blades and its distribution, which in turn is strictly related to the strength of the shed vortices and the occurrence of tip vortex cavitation. By considering all these quantities we can fully characterise the dynamic pressure acting on the whole surface of the blades during their rotation in any wakefield (Miglianti et al., 2020). Table 1 lists the full set of quantities available in our dataset.

3 NUMERICAL METHODS

The problem of predicting the URN spectra can be straightforwardly mapped into a typical Machine Learning (ML) regression framework, which corresponds to estimating the relation between propeller geometry, inflow conditions and URN spectra. In this framework (Vapnik, 1998), a set of data Dn = {(x1, y1), ..., (xn, yn)}, where xi ∈ X ⊆ R^d are the inputs and yi ∈ Y ⊆ R^q are the outputs, needs to be available. The goal is to identify the unknown model S : X → Y which maps inputs to outputs, through an algorithm A_H which chooses a model h : X → Y in a set of models F_H defined by some hyperparameters H. The accuracy of h in representing the unknown system S is measured with a prescribed loss function ℓ : Y × Y → [0, ∞). In this context, y corresponds to the Radiated Noise Levels (RNLs) at the frequencies defined by f, whereas x consists of all other quantities of Table 1, or any quantities derived from them.

Kernel Regularised Least Squares

As far as the algorithm A_H is concerned, one of the approaches considered is Kernel Regularised Least Squares (KRLS) (Caponnetto and De Vito, 2007; Györfi et al., 2002; Hainmueller and Hazlett, 2014; Pollard, 1990), one of the best-known and most effective approaches from the ML Kernel Methods family (Hainmueller and Hazlett, 2014). Kernel methods are a family of ML techniques which exploit the "kernel trick" in order to extend linear techniques to the solution of non-linear problems (Cristianini and Shawe-Taylor, 2000), and they provide a flexible and expressive learning framework that has been successfully applied to a wide range of real-world problems.

As stated, during the learning phase the quality of the regressor h(x) is measured according to a loss function ℓ(h(x), y) which quantifies the discrepancy between the true and the estimated output, y and ŷ = h(x), respectively. Then the generalisation error of h, namely the true
error of h, can be defined as

L(h) = E_(x,y) ℓ(h(x), y).    (1)

Obviously L(h) cannot be computed, but its empirical estimator, the empirical error, can be derived from the discrepancy between the true output y and the estimated output h(x) observed over Dn, as

L̂(h) = (1/n) ∑_{i=1}^{n} ℓ(h(xi), yi).    (2)

A simple criterion for selecting the final model during the training phase could then be to choose the approximating function that minimises the empirical error L̂(h). This approach is known as Empirical Risk Minimisation (ERM) (Vapnik, 1998). However, ERM is usually avoided in ML as it leads to severe overfitting of the model on the training dataset. As a matter of fact, in this case the training process could choose a model complicated enough to perfectly describe all the training samples (including the noise which afflicts them). In other words, ERM implies memorisation of data rather than learning from them.

A more effective approach is to minimise a cost function which achieves a trade-off between accuracy on the training data and a measure of the complexity of the selected model (Tikhonov and Arsenin, 1979), implementing Occam's razor principle

h* : min_h L̂(h) + λ C(h).    (3)

In other words, the best approximating function h* is chosen as the one that is complicated enough to learn from the data without overfitting them. This approach is known as Structural Risk Minimisation (SRM) (Vapnik, 1998). In particular, C(·) is a complexity measure: depending on the exploited ML approach, different measures are realised. Instead, λ ∈ [0, ∞) is a hyperparameter that must be set a priori and is not obtained as an output of the optimisation procedure: it regulates the trade-off between the overfitting tendency, related to the minimisation of the empirical error, and the underfitting tendency, related to the minimisation of C(·). The optimal value for λ is problem-dependent, and tuning this hyperparameter is a non-trivial task (Anguita et al., 2011), as will be discussed later in this section. In KRLS, approximation functions are defined as

h(x) = w^T ϕ(x),    (4)

where ϕ : R^d → R^D, D ≫ d, is an a-priori defined Feature Mapping (FM) (Shalev-Shwartz and Ben-David, 2014), which strongly depends on the particular problem under examination and will be described later in this section, allowing to keep the structure of h(x) linear, and w refers to the set of weights describing the regressor. The complexity of the approximation function is measured as

C(h) = ‖w‖₂²,    (5)

i.e. the Euclidean norm of w, which is a standard complexity measure in ML (Shalev-Shwartz and Ben-David, 2014). Since the considered problem is a regression one, the most suited loss function is the squared one, ℓ(h(xi), yi) = [h(xi) − yi]² (Rosasco et al., 2004). Consequently, the training problem can be reformulated as

w* : min_w ∑_{i=1}^{n} [w^T ϕ(xi) − yi]² + λ ‖w‖₂².    (6)

By exploiting the Representer Theorem (Schölkopf et al., 2001), the solution h* of the RLS problem of Eq. (6) can be expressed as a linear combination of the samples projected in the space defined by ϕ

h*(x) = ∑_{i=1}^{n} αi ϕ(xi)^T ϕ(x).    (7)

It is worth underlining that, according to the kernel trick, it is possible to reformulate h*(x) without explicit knowledge of ϕ, and consequently avoid the curse of dimensionality of computing ϕ, by using a proper kernel function K(xi, x) = ϕ(xi)^T ϕ(x)

h*(x) = ∑_{i=1}^{n} αi K(xi, x).    (8)

Several kernel functions can be retrieved from the literature (Cristianini and Shawe-Taylor, 2000; Scholkopf, 2001), each one with particular properties that can be exploited based on the problem under exam. The KRLS problem of Eq. (6) can be reformulated by exploiting kernels as

α* : min_α ‖Qα − y‖₂² + λ α^T Qα,    (9)

where y = [y1, ..., yn]^T, α = [α1, ..., αn]^T, Q is the matrix such that Q_{i,j} = K(xj, xi), and I ∈ R^{n×n} is the identity matrix. By setting the gradient w.r.t. α equal to zero, it is possible to state that

(Q + λ I) α* = y,    (10)

which is a linear system for which effective solvers have been developed over the years, allowing one to cope with even very large datasets (Young, 2003).

A problem we still face is how to choose ϕ and the kernel K, and how to set the hyperparameter λ. It is possible to start by setting ϕ and the kernel K. Usually the Gaussian kernel is exploited in real-world applications because of the theoretical reasons described in (Keerthi and Lin, 2003; Scholkopf, 2001) and because of its effectiveness (Fernández-Delgado et al., 2014; Wainberg
et al., 2016). It is characterised by a single hyperparameter γ ∈ R+, according to

K(xi, x) = e^{−γ ‖xi − x‖₂²},    (11)

where γ regulates the non-linearity of the solution (Oneto et al., 2015) and must be set a priori, similarly to λ. Small values of γ lead the optimisation to converge to simpler functions h(x) (note that for γ → 0 the optimisation converges to a linear regressor), while high values of γ allow higher complexity of h(x). Basically, the Gaussian kernel is able to implicitly create an infinite-dimensional ϕ and, because of that, KRLS is able to learn any possible function (Keerthi and Lin, 2003). The last problem is how to tune the hyperparameter set H_KRLS = {γ, λ}, which will take place during the Model Selection (MS) phase (Oneto, 2018), as discussed later in this section.

Extreme Learning Machine

Another approach considered for A_H is the Extreme Learning Machine (ELM) (Cambria et al., 2013; Huang et al., 2015, 2004). ELMs were introduced to overcome problems posed by the back-propagation training algorithm (Ridella et al., 1997; Rumelhart et al., 1986), such as potentially slow convergence rates, critical tuning of optimisation parameters, and the presence of local minima that call for multi-start and re-training strategies. ELMs were originally developed for single-hidden-layer feed-forward neural networks (Huang et al., 2006, 2004)

h(x) = ∑_{i=1}^{m} wi gi(x),    (12)

where gi : R^d → R, i ∈ {1, ..., m}, is the hidden-layer output corresponding to the input sample x ∈ R^d, and w ∈ R^m is the output weight vector between the hidden layer and the output layer. In this case, the input layer has d neurons and connects to the hidden layer (having m neurons) through a set of weights W and a nonlinear activation function ϕ : R → R. In this work the tanh function was adopted, as suggested in the original work of (Huang et al., 2004); nevertheless, using other activation functions, such as the sigmoid, does not really affect the final performance.

Thus, the i-th hidden neuron response to an input stimulus x is

gi(x) = ϕ( W_{i,0} + ∑_{j=1}^{d} W_{i,j} xj ).    (13)

Note that Eq. (13) can be further generalised to include a wider class of functions (Bisio et al., 2015; Huang et al., 2006, 2004); therefore, the response of a neuron to an input stimulus x can be generally represented by any nonlinear piece-wise continuous function characterised by a set of parameters. A vector of weighted links, w ∈ R^m, connects the hidden neurons to the output neuron without any bias. The overall output function h(x) of the network is:

h(x) = ∑_{i=1}^{m} wi ϕ( W_{i,0} + ∑_{j=1}^{d} W_{i,j} xj ) = ∑_{i=1}^{m} wi ϕi(x).    (14)

It is convenient to define an activation matrix, A ∈ R^{n×m}, such that the entry A_{i,j} is the activation value of the j-th hidden neuron for the i-th input pattern. The A matrix is:

A = [ ϕ1(x1) ··· ϕm(x1) ;  ⋮ ⋱ ⋮ ;  ϕ1(xn) ··· ϕm(xn) ].    (15)

In the ELM model the weights W are set randomly and are not subject to any adjustment; the quantity w in Eq. (14) is the only degree of freedom. Hence, the training problem reduces to the minimisation of the convex cost

w* = arg min_w ‖Aw − y‖²,    (16)

for which a matrix pseudo-inversion yields the unique L2 solution (Huang et al., 2006, 2011)

w* = A⁺ y.    (17)

Despite the simplicity of the approach, even random weights in the hidden layer endow the network with notable representation ability. Moreover, the theory derived in (Huang et al., 2011) proves that regularisation strategies can further improve the approach's generalisation performance. As a result, the cost function of Eq. (16) is augmented with a regularisation factor (Huang et al., 2011). A common approach is to utilise the L2 regulariser

w* = arg min_w ‖Aw − y‖² + λ ‖w‖₂²,    (18)

where λ ∈ [0, ∞) is a hyperparameter that must be tuned during the Model Selection (MS) phase (Oneto, 2018). Consequently, the vector of weights w* is obtained as

w* = (A^T A + λ I)⁺ A^T y,    (19)

where I ∈ R^{m×m} is the identity matrix and (·)⁺ is the Moore-Penrose pseudo-inverse. Note that m, the number of hidden neurons, is another hyperparameter that needs to be tuned based on the problem under consideration, similarly to λ. The problem we still face is how to tune the hyperparameter set H_ELM = {m, λ}, which we will revisit later in this section.

Random Forest

The final approach considered for A_H is the Random Forest (RF). RFs were first introduced in (Breiman, 2001), in an attempt to optimise the generalisation performance of a model that combines several classifiers, and are
known to be one of the state-of-the-art algorithms in classification (Fernández-Delgado et al., 2014). RFs combine bagging and random subset feature selection. In bagging, each tree is independently constructed using a bootstrap sample of the dataset (Efron, 1992). RFs add an additional layer to bagging: in addition to constructing each tree using a different bootstrap sample of the data, RFs change how the trees are constructed. In standard trees, each node is split using the best division among all variables. In RFs, each node is split using the best among a subset of predictors randomly chosen at that node. In (Breiman, 2001) it was shown that the predictive power of the final model depends primarily on three different factors: the number of trees composing the RF, the predictive power of each tree, and the correlation between them. Moreover, it was shown that the predictive power of the RF converges to a limit as the number of trees composing it increases, and that it rises as the predictive power of each tree increases, as well as when the correlation between trees decreases. RFs' counter-intuitive learning strategy turns out to perform very well compared to many other approaches, and is robust against overfitting (Breiman, 2001; Fernández-Delgado et al., 2014).

The learning phase of each of the nt trees composing the RF is simple (Oneto et al., 2016): from Dn, ⌊bn⌋ samples are sampled with replacement and D'_⌊bn⌋ is generated. A tree is constructed utilising D'_⌊bn⌋, but the best split is chosen among a subset of nv predictors over the possible d ones, randomly chosen at each node, and each tree is grown until its terminal nodes contain at most nl samples. During the prediction phase for a previously unseen x, each tree assigns a value ŷi, i ∈ {1, ..., nt}, to each y ∈ Y, and the final response is the unweighted average of all ŷi. In the original work (Breiman, 2001), b = 1 and nv = √d for regression problems, while nt can be chosen according to some consistency result (Hernández-Lobato et al., 2013) or based on the out-of-bag estimate (Breiman, 2001).

The hyperparameters characterising the RF include the weights p_{i∈{1,...,nt}}, the number of trees nt, the number of samples to extract during the bootstrap procedure ⌊bn⌋, the number of samples in a terminal node of the BDT nl, and the number of predictors nv utilised in each subset during the growth of each tree. Several other hyperparameters exist, but they are set to default values, since they are not as influential, according to some recent work in the field (Oneto et al., 2017; Orlandi et al., 2016). Furthermore, we consider p_{i∈{1,...,nt}} = 1/nt as in (Breiman, 2001), and we need to tune the hyperparameter set H_RF = {b, nv, nl, nt} during the MS phase.

Model Selection & Error Estimation

Model Selection (MS) and Error Estimation (EE) techniques address the problem of finding a suitable H and evaluating the performance of A_H (Oneto, 2020). Among the available approaches, we exploit resampling techniques, which represent the state-of-the-art MS approaches in real-world applications (Oneto, 2020). These rely on a simple idea: Dn is resampled one or many (nr) times, with or without replacement, to build three independent datasets called learning (L^r_l), validation (V^r_v) and test (T^r_t) sets, containing l, v, and t experiments respectively, with L^r_l ∩ V^r_v = ∅, L^r_l ∩ T^r_t = ∅, V^r_v ∩ T^r_t = ∅, and L^r_l ∪ V^r_v ∪ T^r_t = Dn, for all r ∈ {1, ..., nr}.

To select the best combination of hyperparameters H* in a set of possible ones H = {H1, H2, ...} for the algorithm A_H, i.e. to perform the MS phase, we apply the following procedure

H* : arg min_{H ∈ H} ∑_{r=1}^{nr} M(A_H(L^r_l), V^r_v),    (20)

where A_H(L^r_l) is a model developed with the algorithm A and the set of hyperparameters H utilising the data L^r_l, and M(·, ·) is an appropriate error metric. Since the data sets L^r_l and V^r_v are independent, H* should be the set of hyperparameters that allows the model to achieve a small error on a dataset that is independent from the training set.

To evaluate the performance of the optimal model h*_A = A_{H*}(Dn), i.e. to perform the EE phase, a separate set of data T_t is needed, since the error that h*_A commits over Dn would be optimistically biased, as Dn has been used to learn it. For this reason we compute

M(A_{H*}(L^r_l ∪ V^r_v), T^r_t).    (21)

Since the data in L^r_l ∪ V^r_v are independent from the ones in T^r_t, the metric of Equation (21) is an unbiased estimator of the true performance of the final model (Oneto, 2020).

If nr = 1, and l, v, and t are aprioristically set such that n = l + v + t, and the resampling procedure is performed without replacement, the hold-out method is obtained (Oneto, 2020). To implement the complete nested k-fold cross validation, the following must be set

nr ≤ (n choose n/k) (n − n/k choose n/k),  l = (k − 2) n/k,  v = n/k,  t = n/k,    (22)

and resampling must be done without replacement (Kohavi, 1995).

Finally, for the implementation of the nested non-parametric bootstrap, l = n and L^r_l must be sampled with replacement from Dn, while V^r_v and T^r_t are sampled without replacement from the sample of Dn that has not been sampled in L^r_l (Kohavi, 1995). Note that for the bootstrap procedure, nr ≤ (2n − 1 choose n). In this work, we utilise the complete nested k-fold cross validation as it represents the state-of-the-art approach (Kohavi, 1995; Oneto, 2020).
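The MS/EE loop above can be sketched in a few lines, using KRLS as the algorithm A_H being tuned. This is a minimal illustration, not the authors' code: the synthetic data, the hyperparameter grid, and the single hold-out resampling round (nr = 1) are assumptions made for the example; the paper itself uses the complete nested k-fold cross validation and the full feature set.

```python
import numpy as np

def gaussian_kernel(A, B, gamma):
    # Gaussian kernel: K(x, x') = exp(-gamma * ||x - x'||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def krls_fit(X, y, gamma, lam):
    # Training: solve the linear system (Q + lambda * I) alpha = y
    Q = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(Q + lam * np.eye(len(X)), y)

def krls_predict(X_train, alpha, gamma, X_new):
    # Prediction: h*(x) = sum_i alpha_i K(x_i, x)
    return gaussian_kernel(X_new, X_train, gamma) @ alpha

def mae(y_true, y_pred):
    # Mean Absolute Error, used here as the metric M
    return np.abs(y_true - y_pred).mean()

# Illustrative synthetic regression data (the paper's real inputs differ).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(120, 3))
y = np.sin(X.sum(axis=1)) + 0.05 * rng.standard_normal(120)

# One resampling round (hold out): disjoint learning / validation / test sets.
idx = rng.permutation(len(X))
L, V, T = idx[:80], idx[80:100], idx[100:]

# MS phase: pick H* = (gamma, lambda) minimising the validation error.
grid = [(g, l) for g in (0.1, 1.0, 10.0) for l in (1e-3, 1e-1, 1.0)]
H_star = min(grid, key=lambda H: mae(
    y[V], krls_predict(X[L], krls_fit(X[L], y[L], *H), H[0], X[V])))

# EE phase: retrain on L ∪ V and report the error on the unseen test set T.
LV = np.concatenate([L, V])
alpha = krls_fit(X[LV], y[LV], *H_star)
print(H_star, mae(y[T], krls_predict(X[LV], alpha, H_star[0], X[T])))
```

Swapping `krls_fit`/`krls_predict` for an ELM or RF, or the hold-out split for k-fold resampling, changes only the inner calls; the selection rule of the MS phase and the unbiased estimate of the EE phase stay the same.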
Regardless of the adopted MS and EE technique, the error that a model commits on T^r_t needs to be measured w.r.t. different error metrics that can fully characterise its quality. Assuming that y ∈ Y is scalar, we make use of the Mean Absolute Percentage Error (MAPE), computed as the absolute loss value of h over T^r_t in percentage,

MAPE(h, T^r_t) = (1/t) ∑_{i=1}^{t} |(y^t_i − h(x^t_i)) / y^t_i|,    (23)

the Mean Absolute Error (MAE), computed as the absolute loss of h over T^r_t,

MAE(h, T^r_t) = (1/t) ∑_{i=1}^{t} |y^t_i − h(x^t_i)|,    (24)

and the Pearson Product-Moment Correlation Coefficient (PPMCC), which measures the linear dependency between h(x^t_i) and y^t_i, given by

PPMCC(h, T^r_t) = ∑_{i=1}^{t} (y^t_i − ȳ)(h(x^t_i) − h̄) / [ √(∑_{i=1}^{t} (y^t_i − ȳ)²) √(∑_{i=1}^{t} (h(x^t_i) − h̄)²) ],    (25)

where ȳ = (1/t) ∑_{i=1}^{t} y^t_i and h̄ = (1/t) ∑_{i=1}^{t} h(x^t_i).

Other state-of-the-art error measures exist but, from a physical point of view, the ones already reported give a complete description of the quality of the model; therefore we only report these.

However, Y ⊆ R^q is the vector representing the measured URN spectra of Table 1, rather than a scalar quantity. As such, we redefine the error metrics of Equations (23)-(25) as the average metrics among the predicted and measured RNLs that compose the URN spectra.

Feature Engineering

ML approaches are very effective, but only under a strict assumption: Dn should contain information that is rich enough to allow A_H to find a good approximation of S, but it should also be characterised by an input space whose cardinality is not too high w.r.t. n (Goodfellow et al., 2016; Shalev-Shwartz and Ben-David, 2014). This means that utilising the tensor quantities outlined in Table 1 directly would produce an exploding number of input features for the models, which would result in poor performance. To mitigate this problem, we need to manually generate new input features of lower cardinality to replace the high-dimensional tensors wt, αG, Γ, Cpn, while maintaining a representation rich enough to capture the necessary information about the desired output.

For what concerns wt, we define the extracted features partially following (Odabaşi and Fitzsimmons, 1978), as they have been shown to provide a rich representation of the propeller inflow conditions (Carlton, 2018; Miglianti et al., 2019, 2020). These features include the average volumetric axial wake w̄t,

w̄t = [ ∫_{rh}^{R} ∫_{0}^{2π} wt(r, θ) dθ dr ] / [ π (R² − rh²) ],    (26)

where rh corresponds to the hub radius and R to the propeller radius, and several quantities on two radial sections, namely r/R = 0.7 and r/R = 0.9. We choose these sections since they are relevant both for sheet cavitation and TVC (Carlton, 2018). The quantities specific to these sections are illustrated in Figure 3, and they include

• the maximum and minimum derivatives of wt w.r.t. the angular position, Dθw|{+,−}_{07,09}, which represent the rate of variation of blade loading during one revolution,
• the wake width wwd_{07,09}, which is the angular sector where the wake fraction is greater than 0.05, i.e. the sector where the axial velocity on the propeller plane is reduced by at least 5%,
• and the wake depth wmax_{07,09}, which corresponds to the maximum value of wt for a given radial section.

Figure 3: Wake features defined by (Odabaşi and Fitzsimmons, 1978) and utilised in this work.

To further enrich the representation of wt, we utilise Fourier's theorem to decompose the total fluctuating component at any radial section into a finite set of sinusoidal components of various harmonic orders. We use 4 components as they are sufficient to accurately describe wt for the available experimental data. Using this basis, the general approximation of wt at a particular propeller radius is given by

wt(θ)|r = ∑_{k=0}^{4} [ aw,k|r cos(kθ/2π) + bw,k|r sin(kθ/2π) ],    (27)

with aw,k|r, bw,k|r being the Fourier coefficients of order k ∈ {1, ..., 4} that have been utilised as additional features. We also extract the same features from αG.

Cpn can provide a good approximation regarding the presence of cavitation under the assumption that it occurs when the opposite of the local pressure coefficient is
higher than the cavitation index at a given operating con- θmin Γ095 , as well as the 3rd order Fourier coefficients
dition, meaning that the local pressure is lower than the (a, b)|Γ095 . It should be noted that the features computed
vapour pressure (Miglianti et al., 2020). With this consid- with BEM can also be enhanced with their corresponding
eration in mind, for each of the 60 angular positions and errors between computational results and actual experi-
for both suction and pressure sides of the blade, we com- mental data if the latter is available. These errors could
pute the blade areas Ac |(s) , Ac |(p) for which the pressure also be processed in the same manner as Cpn and Γ and
is lower than the vapour pressure, as an estimation of the used as features for the ML algorithms. Nevertheless, this
region where true cavitation starts. Subsequently, from process has not been followed in this work, not only to
these two vectors we further compute simplify the modelling effort, but primarily because such
th extensive experimental data is usually not readily avail-
• the 4 order Fourier coefficients (a, b)|Ac |(s) ,
able during the early design stage.
(a, b)|Ac |(p) ,
From this process, we are able to generate a
• the minimum and maximum areas encountered dataset D of acceptable cardinality, from which the ML
n
Amin c |(s) , Amin c |(p) , Amax c |(s) , Amax c |(p) , models will be developed. Each experiment in Dn is fully
• and their corresponding angular positions θmin Ac |(s) , characterised by an input space of d = 207 scalar quanti-
θmin Ac |(p) , θmax Ac |(s) , θmax Ac |(p) . ties, and an output space of q = 31 RNLs that compose the
Moreover, we split each side of the blade into the measured URN spectra at a reference pressure of 1 [µPa],
following 4 panels, according to Figure 4. corrected for a measuring distance of 1 [m], at various
frequencies in the 1/3 octave band.
• Panel 1 (P1): From 70% of the propeller radius to the
tip of the blade, and from the leading edge to 20% of 4 RESULTS
the chord,
• Panel 2 (P2): From 70% of the propeller radius to the In this section the performance of the models of Section 3
tip of the blade, and from 20% to 60% of the chord. is tested utilising the data described in Section 2 and the
• Panel 3 (P3): From blade root to 70% of the propeller performance measures defined in Section 3, in several sce-
radius, and from the leading edge to 20% of the chord, narios commonly encountered in practice. These scenar-
• Panel 4 (P4): From blade root to 70% of the propeller ios differ only in the way Dn is split on Ll and Tt at
r r

radius, and from 20% to 60% of the chord. each repetition, and they consist of:

This subdivision was chosen in order to indicate the oc- • Scenario I: Extrapolation on the propeller geometry, in
currence of sheet cavitation near the leading edge of the which Tt r consists of the group of experiments con-
blade, and bubble cavitation round the mid-chord region ducted on one specific propeller from the Meridian stan-
of the blade. For panels P1, P2 on the pressure side of the dard series. Given that Dn consists of 6 propeller ge-
blade, and panels P1 - P4 on the suction side, we evalu- ometries, this scenario has been repeated nr = 6 times,
ate the minimum value of Cpn for each angular position each with experiments of a different propeller in Tt r .
th
of the key blade, and subsequently compute the 4 order • Scenario II: Extrapolation on the wakefield, where Tt r
Fourier coefficients. contains all experiments involving a specific wakefield,
for a total of nr = 3 repetitions.
• Scenario III: Extrapolation on the rotational speed of
the propeller, with Tt r containing all experiments con-
ducted with a specific rotational speed, with a total of
nr = 2 repetitions. Note that, to design an extrapolation
scenario of adequate complexity, we decided to reduce
the size of the dataset by approximately 50%, retain-
ing only experiments conducted with the following ro-
tational speeds: {600, 1200, 1400, 2000} [rpm]. Based
on this reduced dataset, we tested the ability of each
Figure 4: Blade subdivision in panels. model to predict the URN spectra with Tt r containing
all experiments with rotational speed of 600 [rpm] or
Finally, from Γ, we evaluate the strength of the 2000 [rpm].
vortex shed in the wake Γ0.95 at r/R = 0.95, which is pro-
portional to the cavitating tip vortex occurrence, for ev- Based on the considered scenario, Dn was divided into
ery angular position of the key blade. Subsequently, we different subsets for the MS and EE procedures, as re-
compute the corresponding minimum and maximum val- ported in Section 3. For each ML algorithm we tested the
ues Γmin 095 , Γmax 095 and their angular positions θmax Γ095 , following list of hyperparameters for the MS procedure:
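A harmonic fit of the kind used in Equation (27) can be sketched as a least-squares problem (illustrative names and sampling; we use the standard angular basis cos(kθ), sin(kθ), while the paper's normalisation of the argument may differ):

```python
import numpy as np

def wake_harmonics(theta, w, n_harmonics=4):
    """Least-squares fit of w(theta) = sum_k a_k cos(k*theta) + b_k sin(k*theta).

    theta: angular positions [rad]; w: sampled wake fraction at one radial section.
    Returns (a, b), with a[0] the mean term and b[0] fixed to 0.
    """
    theta = np.asarray(theta, dtype=float)
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(k * theta))
        cols.append(np.sin(k * theta))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(w, dtype=float), rcond=None)
    a = np.concatenate(([coef[0]], coef[1::2]))  # cosine coefficients a_0..a_4
    b = np.concatenate(([0.0], coef[2::2]))      # sine coefficients b_1..b_4
    return a, b

# 60 equispaced angular positions, as used for the blade features in this work
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
w = 0.3 + 0.1 * np.cos(theta) + 0.05 * np.sin(2 * theta)
a, b = wake_harmonics(theta, w)
```

The resulting (a, b) pairs per radius would then be stacked into the feature vector alongside the blade-area and circulation features.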
4 RESULTS

In this section, the performance of the models of Section 3 is tested utilising the data described in Section 2 and the performance measures defined in Section 3, in several scenarios commonly encountered in practice. These scenarios differ only in the way Dn is split into the learning and test sets Lr and Tr at each repetition, and they consist of:

• Scenario I: Extrapolation on the propeller geometry, in which Tr consists of the group of experiments conducted on one specific propeller from the Meridian standard series. Given that Dn consists of 6 propeller geometries, this scenario has been repeated nr = 6 times, each with experiments of a different propeller in Tr.
• Scenario II: Extrapolation on the wakefield, where Tr contains all experiments involving a specific wakefield, for a total of nr = 3 repetitions.
• Scenario III: Extrapolation on the rotational speed of the propeller, with Tr containing all experiments conducted with a specific rotational speed, for a total of nr = 2 repetitions. Note that, to design an extrapolation scenario of adequate complexity, we decided to reduce the size of the dataset by approximately 50%, retaining only experiments conducted at the following rotational speeds: {600, 1200, 1400, 2000} [rpm]. Based on this reduced dataset, we tested the ability of each model to predict the URN spectra with Tr containing all experiments with a rotational speed of 600 [rpm] or 2000 [rpm].

Based on the considered scenario, Dn was divided into different subsets for the MS and EE procedures, as reported in Section 3. For each ML algorithm we tested the following list of hyperparameters for the MS procedure:

• KRLS: The set of hyperparameters is HKRLS = {(γ, λ): γ ∈ Gγ, λ ∈ Gλ}, with Gγ = 10^{−5, −4.8, ···, 3} and Gλ = 10^{−5, −4.8, ···, 3}.
• ELM: The set of hyperparameters is HELM = {(m, λ): m ∈ Gm, λ ∈ Gλ}, chosen in Gm = 10^{1, 1.4, ···, 4} and Gλ = 10^{−5, −4.6, ···, 4}.
• RF: The set of hyperparameters is HRF = {(b, nv, nl, nt): b ∈ Gb, nv ∈ Gnv, nl ∈ Gnl, nt ∈ Gnt}, with Gb = {0.20, 0.24, ···, 1}, Gnv = d^{0, 0.04, ···, 1}, Gnl = n · {0.01, 0.03, ···, 0.5} + 1, Gnt = 10^{2, 2.2, ···, 4}.

We have utilised the Python scikit-learn library (Pedregosa et al., 2011; https://scikit-learn.org/stable/index.html) for the RF, whereas for KRLS and ELM custom Matlab (https://www.mathworks.com/products/matlab.html) implementations have been developed. For each scenario, the performance of each model is measured according to the error metrics described in Section 3, and for each experiment we report average results together with their Student's t 95% confidence intervals in Table 2. Given the high number of experiments present in the dataset, we do not report individual results. However, Figure 5 gives an illustration of the predictions of all three approaches in a single example, for each of the three scenarios.

From Table 2 it is possible to observe that all approaches are able to approximate the URN spectra successfully, with the RF achieving the lowest errors for all scenarios, followed by KRLS and ELM. For what regards Scenario I, the average errors of the RF equal 4.4 ± 0.5 [dB], 4.2 ± 0.5 [%], and 0.89 ± 0.06 [–], for the MAE, MAPE, and PPMCC respectively, with very small deviations w.r.t. the propeller model considered. KRLS commits slightly higher average errors of 4.5 ± 0.7 [dB], 4.3 ± 0.6 [%], and 0.88 ± 0.05 [–], which increase further to 5.2 ± 0.8 [dB], 4.8 ± 0.8 [%], and 0.87 ± 0.06 [–] for the ELM. However, the confidence intervals reported in Table 2 do not allow us to draw statistically significant conclusions w.r.t. the relative performance of the three approaches. Nevertheless, errors of this magnitude can be considered acceptable, especially during the early stages of design.

Scenario II is similar to Scenario I when considering the relative performance of the approaches utilised in this work: RF commits the lowest errors on average (4.6 ± 0.9, 4.4 ± 0.8, 0.91 ± 0.07) [dB, %, –], followed by KRLS (5.1 ± 1.4, 4.7 ± 1.3, 0.86 ± 0.07) [dB, %, –], and ELM (6.4 ± 1.6, 6.2 ± 1.6, 0.84 ± 0.07) [dB, %, –]; however, we cannot state that one approach consistently outperforms the rest. Compared to Scenario I, all three approaches commit higher errors when extrapolating on a new wakefield instead of a new propeller geometry. This can be partially explained by the smaller number of wakefields present in our dataset (only 3), compared to the number of propellers available (6). Another factor is the choice of the wakefields themselves. More specifically, these were selected by the authors of (Aktas, 2017; Aktas et al., 2018) based on the criteria suggested by (Angelopoulos et al., 1988; Konno et al., 2002; Odabaşi and Fitzsimmons, 1978). The wake non-uniformity, mean wake, half-wake width, and wake depth were controlled to generate 3 highly different wakefields, contributing to the difficulty of this scenario. Nevertheless, errors ranging between 4% and 6% can still be considered acceptable for early stage designs.

In Scenario III, all approaches commit the highest errors observed in this work, due to the reduction we conducted on the dataset, as explained in the beginning of this section. Once again, the RF commits the lowest average errors (7.8 ± 1.7, 7.0 ± 1.4, 0.93 ± 0.03) [dB, %, –], followed by KRLS (8.6 ± 2.1, 7.7 ± 1.8, 0.91 ± 0.04) [dB, %, –], and the ELM (10.0 ± 2.2, 9.3 ± 1.8, 0.89 ± 0.07) [dB, %, –], for the MAE, MAPE, and PPMCC, respectively. Irrespective of the model considered, higher MAE and MAPE and lower PPMCC values are observed at 2000 [rpm] w.r.t. the errors committed at 600 [rpm]. This occurs due to the significant deviation of the predictions at relatively high frequencies (> 200 [Hz]), which is not observed at 600 [rpm], as illustrated in Figure 5c.
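A mean with a Student-t 95% confidence half-width, of the kind reported in Table 2, can be computed with a sketch like the following (the function name and the per-group error values are illustrative, not taken from the paper's data):

```python
import numpy as np
from scipy import stats

def mean_t_ci(errors, confidence=0.95):
    """Mean and Student-t confidence half-width of per-repetition errors."""
    e = np.asarray(errors, dtype=float)
    n = e.size
    # two-sided t quantile with n - 1 degrees of freedom
    half = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * e.std(ddof=1) / np.sqrt(n)
    return e.mean(), half

# e.g. six per-propeller MAEs from a geometry-extrapolation run (illustrative)
maes = [4.6, 4.7, 5.2, 4.6, 3.9, 3.9]
m, h = mean_t_ci(maes)  # reported as m ± h [dB]
```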
Table 2: KRLS, ELM, and RF performance measured with the MAE, MAPE, and PPMCC on all scenarios.
MAE [dB] MAPE [%] PPMCC [–]
KRLS ELM RF KRLS ELM RF KRLS ELM RF
Scenario I: Propeller Geometry Extrapolation
KCD 65 4.6±0.7 4.9 ± 0.7 4.2±0.4 4.3±0.7 4.7±0.7 4.0±0.4 0.87 ± 0.05 0.87 ± 0.05 0.89 ± 0.05
KCD 74 4.7±0.7 5.2 ± 0.8 4.5±0.6 4.2±0.6 4.7±0.7 4.2±0.5 0.88 ± 0.05 0.84 ± 0.05 0.91 ± 0.05
KCD 129 5.2±0.8 6.9 ± 1.2 5.0±0.8 4.8±0.7 6.5±1.1 4.7±0.7 0.87 ± 0.06 0.85 ± 0.06 0.87 ± 0.06
KCD 191 4.6±0.7 6.1 ± 1.0 3.8±0.6 4.2±0.7 5.8±1.1 3.5±0.6 0.88 ± 0.05 0.87 ± 0.05 0.89 ± 0.05
KCD 192 3.9±0.7 4.0 ± 0.7 3.7±0.4 3.6±0.7 3.8±0.8 3.5±0.4 0.90 ± 0.05 0.90 ± 0.05 0.91 ± 0.04
KCD 193 3.9±0.6 3.8 ± 0.6 3.9±0.4 3.5±0.6 3.5±0.7 3.6±0.5 0.88 ± 0.07 0.87 ± 0.07 0.89 ± 0.07
all 4.5±0.7 5.2 ± 0.8 4.4±0.5 4.3±0.6 4.8±0.8 4.2±0.5 0.88 ± 0.05 0.87 ± 0.06 0.89 ± 0.06
Scenario II: Wakefield Extrapolation
W1 5.9±1.3 7.0 ± 1.3 4.9±0.9 5.3±1.2 6.8±1.3 4.6±0.8 0.85 ± 0.07 0.81 ± 0.07 0.89 ± 0.08
W2 4.6±1.4 6.3 ± 2.0 4.3±0.8 4.3±1.3 6.1±2.0 4.1±0.7 0.87 ± 0.06 0.87 ± 0.06 0.93 ± 0.07
W3 4.9±1.4 5.9 ± 1.4 4.4±0.9 4.6±1.4 5.8±1.6 4.2±0.9 0.86 ± 0.07 0.85 ± 0.07 0.92 ± 0.08
all 5.1±1.4 6.4 ± 1.6 4.6±0.9 4.7±1.3 6.2±1.6 4.4±0.8 0.86 ± 0.07 0.84 ± 0.07 0.91 ± 0.07
Scenario III: Rotational Speed Extrapolation
600 rpm 7.1±1.9 9.0 ± 2.0 6.9±1.6 6.9±1.4 8.7±1.6 6.6±1.2 0.92 ± 0.04 0.90 ± 0.07 0.94 ± 0.03
2000 rpm 9.9±2.3 11.1±2.4 8.9±1.8 8.3±2.0 9.7±2.1 7.5±1.5 0.90 ± 0.05 0.88 ± 0.08 0.92 ± 0.04
all 8.6±2.1 10.0±2.2 7.8±1.7 7.7±1.8 9.3±1.8 7.0±1.4 0.91 ± 0.04 0.89 ± 0.07 0.93 ± 0.03
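The per-group rows of Table 2 correspond to leave-one-group-out splits (one propeller, wakefield, or speed held out per repetition). With scikit-learn, which this work uses for the RF, such an evaluation can be sketched as follows (synthetic data; array sizes and hyperparameters are ours, not the grids reported above):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.random((60, 8))                                        # stand-in for the d = 207 features
Y = X @ rng.random((8, 5)) + 0.01 * rng.normal(size=(60, 5))   # stand-in RNL spectra
groups = np.repeat(np.arange(6), 10)                           # e.g. one group per propeller geometry

maes = []
for train, test in LeaveOneGroupOut().split(X, Y, groups):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[train], Y[train])
    maes.append(np.mean(np.abs(model.predict(X[test]) - Y[test])))
# one MAE per held-out group, analogous to the per-propeller rows of Table 2
```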

(a) Scenario I: KCD193, W1   (b) Scenario II: KCD74, W2   (c) Scenario III: KCD129, W2

Figure 5: Representative examples of measured and predicted URN spectra.

5 CONCLUSIONS

In this work, a methodology to estimate propeller cavitation noise utilising ML models has been proposed. To this aim, a dataset containing model scale measurements in a cavitation tunnel has been utilised, combined with several quantities estimated by Boundary Element Method calculations to better characterise the hydrodynamic flow around the propeller. The models are developed strictly utilising quantities that can be obtained during the early design stages, to accommodate the limited amount of information that is usually available.

To validate the proposed modelling approaches, a set of cavitation tunnel tests has been exploited, consisting of 432 experiments with 6 propeller models and 3 wakefields at various operating conditions. It must be noted that, despite the effort required to generate this dataset, it still accounts for a rather limited number of propellers and wakefields, preventing the opportunity to verify the performance of the method on fully unseen cases.

The performance of all models on three real-world scenarios has been investigated:

• Extrapolation on the propeller geometry, in which we evaluate the ability of the proposed models to estimate the URN spectra for new, previously unseen propeller geometries,
• Extrapolation on the wakefield, in which the models are tested with previously unseen wakefields, and
• Extrapolation on the rotational speed of the propeller, in which we test the capabilities of the models to predict URN at rotational speeds different from the ones utilised to develop them.

In all cases, the ML approaches considered have shown sufficient capabilities in predicting the URN spectra, with errors that are certainly acceptable during the early stage design process, even though the variance of the results does not allow us to draw statistically significant conclusions w.r.t. the relative performance of each approach. For this reason, our future activities will include the enlargement of the existing dataset with additional experiments that cover a broader set of propellers and operating conditions. This will allow us not only to evaluate more thoroughly the capabilities of the proposed approaches, but also to validate them in additional scenarios encountered in practice.

ACKNOWLEDGEMENTS

The authors would like to express their gratitude to Dr. Batuhan Aktas, who generated and kindly provided the experimental data used in the present work.
REFERENCES

B. Aktas. A systematic experimental approach to cavitation noise prediction of marine propellers. PhD thesis, Newcastle University, 2017.

B. Aktas, M. Atlar, S. Turkmen, W. Shi, R. Sampson, E. Korkut, and P. Fitzsimmons. Propeller cavitation noise investigations of a research vessel using medium size cavitation tunnel tests and full-scale trials. Ocean Engineering, 120:122–135, 2016.

B. Aktas, M. Atlar, P. Fitzsimmons, and W. Shi. An advanced joint time-frequency analysis procedure to study cavitation-induced noise by using standard series propeller data. Ocean Engineering, 170:329–350, 2018.

A. Angelopoulos, P. Fitzsimmons, and A. Odabasi. A semi-empirical method for propeller broad-band noise. Technical report, British Maritime Technology Limited, 1988.

D. Anguita, A. Ghio, L. Oneto, and S. Ridella. In-sample model selection for support vector machines. In International Joint Conference on Neural Networks, pages 1154–1161. IEEE, 2011.

M. Atlar. Recent upgrading of marine testing facilities at Newcastle University. In Second International Conference on Advanced Model Measurement Technology for the EU Maritime Industry, pages 4–6, 2011.

R. Bensow and M. Liefvendahl. An acoustic analogy and scale-resolving flow simulation methodology for the prediction of propeller radiated noise. In 31st Symposium on Naval Hydrodynamics, pages 11–16, 2016.

F. Bisio, P. Gastaldo, R. Zunino, and E. Cambria. A learning scheme based on similarity functions for affective common-sense reasoning. In International Joint Conference on Neural Networks, pages 1–6. IEEE, 2015.

J. Bosschers. A semi-empirical method to predict broadband hull pressure fluctuations and underwater radiated noise by cavitating tip vortices. In 5th International Symposium on Marine Propulsors, 2017.

J. Bosschers. A semi-empirical prediction method for broadband hull-pressure fluctuations and underwater radiated noise by propeller tip vortex cavitation. Journal of Marine Science and Engineering, 6(2):49, 2018.

J. Bosschers, G. Choi, H. Hyundai, K. Farabee, D. Fréchou, E. Korkut, K. Sato, et al. Specialist committee on hydrodynamic noise. Final report and recommendations to the 28th International Towing Tank Conference, 45, 2017.

L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.

A. Brooker and V. Humphrey. Measurement of radiated underwater noise from a small research vessel in shallow water. Ocean Engineering, 120:182–189, 2016.

E. Cambria, G. Huang, C. Kasun, H. Zhou, et al. Extreme learning machines [trends & controversies]. IEEE Intelligent Systems, 28(6):30–59, 2013.

A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.

J. Carlton. Marine Propellers and Propulsion. Butterworth-Heinemann, 2018.

N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.

B. Efron. Bootstrap methods: another look at the jackknife. In Breakthroughs in Statistics, pages 569–593. Springer, 1992.

A. Emerson. Propeller design and model experiments. Transactions of the North-East Coast Institution of Engineers and Shipbuilders, 944:199–234, 1978.

M. Fernández-Delgado, E. Cernadas, S. Barro, and D. Amorim. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.

S. Gaggero and S. Brizzolara. A panel method for trans-cavitating marine propellers. In 7th International Symposium on Cavitation, 2009.

S. Gaggero and D. Villa. Steady cavitating propeller performance by using OpenFOAM, StarCCM+ and a boundary element method. Proceedings of the Institution of Mechanical Engineers, Part M: Journal of Engineering for the Maritime Environment, 231(2):411–440, 2017.

S. Gaggero and D. Villa. Cavitating propeller performance in inclined shaft conditions with OpenFOAM: PPTC 2015 test case. Journal of Marine Science and Application, 17(1):1–20, 2018.

S. Gaggero, D. Villa, and S. Brizzolara. RANS and panel method for unsteady flow propeller analysis. Journal of Hydrodynamics, Ser. B, 22(5):564–569, 2010.

S. Gaggero, M. Viviani, G. Tani, F. Conti, P. Becchi, and F. Valdenazzi. Comparison of different approaches for the design and analysis of ducted propellers. In 5th International Conference on Computational Methods in Marine Engineering, pages 723–736. CIMNE, 2013.

S. Gaggero, D. Villa, and M. Viviani. An investigation on the discrepancies between RANSE and BEM approaches for the prediction of marine propeller unsteady performances in strongly non-homogeneous wakes. In International Conference on Offshore Mechanics and Arctic Engineering, 2014.

S. Gaggero, J. Gonzalez-Adalid, and M. Sobrino. Design of contracted and tip loaded propellers by using boundary element methods and optimization algorithms. Applied Ocean Research, 55:102–129, 2016.

S. Gaggero, D. Villa, G. Tani, M. Viviani, and D. Bertetta. Design of ducted propeller nozzles through a RANSE-based optimization approach. Ocean Engineering, 145:444–463, 2017.

S. Gaggero, G. Dubbioso, D. Villa, R. Muscari, and M. Viviani. Propeller modeling approaches for off-design operative conditions. Ocean Engineering, 178:283–305, 2019.

I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning, volume 1. MIT Press, Cambridge, 2016.

L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression, volume 1. Springer, 2002.

J. Hainmueller and C. Hazlett. Kernel regularized least squares: Reducing misspecification bias with a flexible and interpretable machine learning approach. Political Analysis, 5(2):143–168, 2014.

D. Hernández-Lobato, G. Martínez-Muñoz, and A. Suárez. How large should ensembles of classifiers be? Pattern Recognition, 46(5):1323–1336, 2013.

J. Hildebrand. Impacts of anthropogenic sound. Marine Mammal Research: Conservation Beyond Crisis, pages 101–124, 2005.

J. Hildebrand. Anthropogenic and natural sources of ambient noise in the ocean. Marine Ecology Progress Series, 395:5–20, 2009.

G. Huang, L. Chen, and C. Siew. Universal approximation using incremental constructive feed-forward networks with random hidden nodes. IEEE Transactions on Neural Networks, 17(4):879–892, 2006.

G. Huang, H. Zhou, X. Ding, and R. Zhang. Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42(2):513–529, 2011.

G. Huang, G. Huang, S. Song, and K. You. Trends in extreme learning machines: A review. Neural Networks, 61:32–48, 2015.

G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew. Extreme learning machine: a new learning scheme of feedforward neural networks. In IEEE International Joint Conference on Neural Networks, 2004.

S. Ianniello, R. Muscari, and A. Di Mascio. Ship underwater noise assessment by the acoustic analogy. Part I: nonlinear analysis of a marine propeller in a uniform flow. Journal of Marine Science and Technology, 18(4):547–570, 2013.

S. Ianniello, R. Muscari, and A. Di Mascio. Ship underwater noise assessment by the acoustic analogy, Part III: measurements versus numerical predictions on a full-scale ship. Journal of Marine Science and Technology, 19(2):125–142, 2014.

IMO. Resolution MSC.337(91) (XII) - Code on noise levels on board ships, 2012.

IMO. Guidelines for the reduction of underwater noise from commercial shipping to address adverse impacts on marine life, 2014.

ITTC Specialist Committee on Hydrodynamic Noise. Model scale cavitation test. In Recommended Procedures and Guidelines 7.5-01-02-05, International Towing Tank Conference, 2017.

M. Kalikatzarakis, A. Coraddu, M. Atlar, G. Tani, S. Gaggero, D. Villa, and L. Oneto. Computational prediction of propeller cavitation noise. In 9th Conference on Computational Methods in Marine Engineering, 2021.

S. S. Keerthi and C. J. Lin. Asymptotic behaviors of support vector machines with Gaussian kernel. Neural Computation, 15(7):1667–1689, 2003.

R. Kohavi. A study of cross-validation and bootstrap for accuracy estimation and model selection. In International Joint Conference on Artificial Intelligence, 1995.

A. Konno, K. Wakabayashi, H. Yamaguchi, M. Maeda, N. Ishii, S. Soejima, and K. Kimura. On the mechanism of the bursting phenomena of propeller tip vortex cavitation. Journal of Marine Science and Technology, 6(4):181–192, 2002.

F. Lafeber and J. Bosschers. Validation of computational and experimental prediction methods for the underwater radiated noise of a small research vessel. In Proceedings of PRADS, 2016.

D. Li, J. Hallander, and T. Johansson. Predicting underwater radiated noise of a full scale ship with model testing and numerical methods. Ocean Engineering, 161:121–135, 2018.

A. Lidtke. Predicting radiated noise of marine propellers using acoustic analogies and hybrid Eulerian-Lagrangian cavitation models. PhD thesis, University of Southampton, 2017.

F. Miglianti, F. Cipollini, L. Oneto, G. Tani, and M. Viviani. Model scale cavitation noise spectra prediction: Combining physical knowledge with data science. Ocean Engineering, 178:185–203, 2019.

L. Miglianti, F. Cipollini, L. Oneto, G. Tani, S. Gaggero, A. Coraddu, and M. Viviani. Predicting the cavitating marine propeller noise at design stage: A deep learning based approach. Ocean Engineering, 209:107481, 2020.

Y. Odabaşi and P. Fitzsimmons. Alternative methods for wake quality assessment. International Shipbuilding Progress, 25(282):34–42, 1978.

L. Oneto. Model selection and error estimation without the agonizing pain. WIREs Data Mining and Knowledge Discovery, 2018.

L. Oneto. Model Selection and Error Estimation in a Nutshell. Springer, 2020.

L. Oneto, A. Ghio, S. Ridella, and D. Anguita. Support vector machines and strictly positive definite kernel: The regularization hyperparameter is more important than the kernel hyperparameters. In International Joint Conference on Neural Networks, pages 1–4. IEEE, 2015.

L. Oneto, E. Fumeo, G. Clerico, R. Canepa, et al. Advanced analytics for train delay prediction systems by including exogenous weather data. In IEEE International Conference on Data Science and Advanced Analytics, pages 458–467. IEEE, 2016.

L. Oneto, A. Coraddu, P. Sanetti, O. Karpenko, F. Cipollini, T. Cleophas, and D. Anguita. Marine safety and data analytics: Vessel crash stop maneuvering performance prediction. In International Conference on Artificial Neural Networks, pages 385–393. Springer, 2017.

I. Orlandi, L. Oneto, and D. Anguita. Random forests model selection. In 24th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pages 441–446, 2016.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.

D. Pollard. Empirical processes: theory and applications. In NSF-CBMS Regional Conference Series in Probability and Statistics, pages i–86. JSTOR, 1990.

S. Ridella, S. Rovetta, and R. Zunino. Circular backpropagation networks for classification. IEEE Transactions on Neural Networks, 8(1):84–97, 1997.

L. Rosasco, E. De Vito, A. Caponnetto, M. Piana, and A. Verri. Are loss functions all the same? Neural Computation, 16(5):1063–1076, 2004.

D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.

B. Scholkopf. The kernel trick for distances. Advances in Neural Information Processing Systems, pages 301–307, 2001.

B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In Computational Learning Theory, 2001.

S. Sezen, M. Atlar, and P. Fitzsimmons. Prediction of cavitating propeller underwater radiated noise using RANS & DES-based hybrid method. Ships and Offshore Structures, pages 1–13, 2021a.

S. Sezen, T. Cosgun, A. Yurtseven, and M. Atlar. Numerical investigation of marine propeller underwater radiated noise using acoustic analogy Part I: The influence of grid resolution. Ocean Engineering, 220:108448, 2021b.

S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.

A. N. Tikhonov and V. Y. Arsenin. Methods for Solving Ill-Posed Problems. Nauka, Moscow, 1979.

E. van Wijngaarden. Recent developments in predicting propeller-induced hull pressure pulses. In Proceedings of the 1st International Ship Noise and Vibration Conference, pages 1–8, 2005.

V. N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.

M. Wainberg, B. Alipanahi, and B. J. Frey. Are random forests truly the best classifiers? The Journal of Machine Learning Research, 17(1):3837–3841, 2016.

D. Wittekind. A simple model for the underwater noise source level of ships. Journal of Ship Production & Design, 30(1), 2014.

D. Wittekind and M. Schuster. Propeller cavitation noise and background noise in the sea. Ocean Engineering, 120:116–121, 2016.

D. M. Young. Iterative Solution of Large Linear Systems. Dover Publications, 2003.
