
Design space exploration for resonant metamaterials using physics guided neural networks

N.G.R. Melo Filho 1,3, A. Angeli 1,3, S. van Ophem 1,3, B. Pluymers 1,3, C. Claeys 1,3, E. Deckers 2,3, W. Desmet 1,3

1 KU Leuven, Department of Mechanical Engineering - Division LMSD,
Celestijnenlaan 300, B-3001, Heverlee, Belgium

2 KU Leuven, Campus Diepenbeek, Department of Mechanical Engineering - Division LMSD,
Wetenschapspark 27, B-3590 Diepenbeek, Belgium

3 DMMS lab, Flanders Make

Abstract
Data-driven models have been increasingly used in recent years, but their application to the exploration of engineering design spaces has only recently attracted attention. These design spaces are generally complex, which suits data-driven models such as neural networks. In this paper, a neural network is built and trained as a surrogate model that enables the design of a resonator for a resonant metamaterial. To increase the training efficiency of the neural network, physical relations are embedded in its training. The trained neural network is used in the design optimisation of a resonant metamaterial and benchmarked against an optimisation using finite elements. The optimisation using the neural network is shown to be computationally cheaper and yields a better resonator design than the optimisation with finite elements. Moreover, the data dependency of neural networks is studied. This work thus shows the potential of neural networks to explore engineering design spaces in which multiple design tunings are required for the same geometry.

1 Introduction

Engineering design spaces are generally large and highly complex due to their large number of degrees of freedom. This can make design optimisation a time-consuming and complex process, which generally does not lead to an optimum due to resource constraints. Recently, machine learning algorithms have again attracted attention due to the increase in computational power [1, 2, 3, 4, 5]. These algorithms allow the construction of black-box models which do not require the explicit mathematical relations between inputs and outputs to be implemented. They are obtained through a training procedure using the data available from the case to be modelled. This training procedure is generally computationally very expensive; however, the resulting trained model has negligible computational cost when used, which can enable a more time-efficient design space exploration when repetitive design of a similar geometry is required. Among the different machine learning algorithms, neural networks have been demonstrated to be able to deal with highly complex problems [1, 5], such as the design spaces present in engineering.
In this paper, neural networks are used to explore the design space of resonant metamaterials [6, 7]. For the
studied case, only the resonator that composes the resonant metamaterial solution is considered to reduce
the design space and consequently reduce the required training dataset size, which is generated from finite
element (FE) simulations. To increase the training efficiency and accuracy of the neural network, known
physical relations are embedded in its training [5]. The trained neural network is then used to design a
resonator through optimisation, in which interpolation w.r.t. the training data is required. This optimisation
is then benchmarked with a commonly used finite element optimisation procedure [8]. Furthermore, the
data dependency of the neural network is evaluated by re-training it with a smaller dataset and by requesting

2504 PROCEEDINGS OF ISMA2020 AND USD2020

extrapolation w.r.t. the training data. It is shown that the neural network optimisation yields a better design
in a shorter time, while its accuracy is strongly dependent on the training data available.
This paper is organized as follows. Section 2 defines the important characteristics to be considered in a resonator design in view of a resonant metamaterial solution. Section 3 discusses the database used to train the neural network model, whose design, training and embedding of physical relations are discussed in Section 4. Next, Section 5 discusses the use of the neural network model to design a resonator through optimisation and also shows the limitations of using such a model. Finally, this paper ends with conclusions.

2 Problem definition

Resonant metamaterials have recently become a potential noise, vibration and harshness (NVH) solution for the hard-to-address low-frequency region [7]. They are created by adding resonators on a sub-wavelength scale onto a host structure. This creates a frequency region of improved noise and vibration insulation performance around the tuned frequency of the resonator, called a stop band [6, 7]. The design of resonant metamaterials
for practical applications is mainly based on FE modelling, which can be time-consuming [9]. Moreover, a
rather complex design space can be found for this application [10]. Consequently, the use of neural networks
to allow an efficient design exploration can be effective in reducing the design time of a resonator that
composes a resonant metamaterial.

Figure 1: First out-of-plane mode of a cantilever beam-like resonator used to create a resonance-based stop band.

Resonant metamaterials design mainly entails the design of the resonator to be added onto a host structure
since the latter generally is defined by the application case. Many geometries can be used as a resonator
to compose a resonant metamaterial [11, 12, 13]. This paper focuses on a cantilever beam-like resonator
design, which was previously demonstrated to result in a resonance-based stop band at its first out-of-plane mode [14, 15] (Fig. 1). Two main resonator parameters influence the stop band frequency region and width: the resonance frequency and the mass. The former determines the frequency region, and the latter is proportional to the bandwidth [7]. Since the bandwidth of the stop band is only influenced by the portion
of the resonator mass that vibrates during the first out-of-plane mode, the modal effective mass [16] of the
out-of-plane mode should be increased for an efficient increase of the mass ratio. Consequently, a neural
network relating the resonance frequency and modal effective mass (outputs) to a given resonator geometry
and materials (inputs), would facilitate the design of a cantilever beam-like resonator to be used in a resonant
metamaterial solution.

3 Data generation

The data used to train the neural network is obtained through a design surface screening using HEEDS MDO 2019. This is done using a geometrically parametrised FE model of the resonator for which Polymethyl methacrylate (PMMA) material properties are considered (Tab. 1). The considered parametrised dimensions are shown in Figure 2, and the surface screening is carried out by simulating all combinations of the defined maximum, medium and minimum values of each dimension (Tab. 2). This results in a database of

6930 possible resonator designs, which took 2 days to complete running on a personal computer with a 2.8 GHz CPU and 16 GB of RAM. Resonant metamaterials are especially promising as a low-frequency NVH solution; therefore, resonator designs with a resonance frequency higher than 2000 Hz are removed from the database, resulting in a training database of 3608 resonator designs.
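The surface screening above amounts to a full-factorial enumeration of the dimension levels of Table 2. A minimal illustrative sketch of such an enumeration (hypothetical code, not the actual HEEDS setup; note that three levels over eight dimensions gives 3^8 = 6561 combinations, slightly fewer than the 6930 designs reported, so the actual screening presumably included a few additional values):

```python
from itertools import product

# Levels per parametrised dimension (Tab. 2): [min, medium, max].
levels = {
    "A": [2, 5, 8],      # mm
    "B": [2, 4, 6],      # mm
    "C": [2, 6, 10],     # mm
    "D": [2, 9, 16],     # mm
    "E": [2, 4, 6],      # mm
    "F": [2, 11, 20],    # mm
    "G": [2, 11, 20],    # mm
    "H": [50, 75, 100],  # % (offset)
}

# Every combination of levels is one candidate resonator design.
designs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
# Each design would then be simulated with the parametrised FE model, and
# designs with a resonance frequency above 2000 Hz would be discarded.
```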

Table 1: PMMA material properties [14].

           Young's modulus (E)   Poisson's ratio (ν)   Density (ρ)
PMMA       4850 MPa              0.31                  1188.38 kg/m³

Table 2: Dimension values available in the training data.

Dimension    Values
A            [2, 5, 8] mm
B            [2, 4, 6] mm
C            [2, 6, 10] mm
D            [2, 9, 16] mm
E            [2, 4, 6] mm
F            [2, 11, 20] mm
G            [2, 11, 20] mm
H (offset)   [50, 75, 100] %

Figure 2: Parametrised dimensions of the resonator design.

4 Neural network

In this section, the neural network is designed, and physical knowledge is embedded in it for the studied case. Next, its training performance is evaluated by comparing it with a neural network designed without physical knowledge consideration.

4.1 Neural network design

Neural networks are composed of an input layer, one or more hidden layers and an output layer (Fig. 3). The input layer is defined by the set of inputs given to the neural network, from which the neural network predicts the requested set of outputs. The hidden and output layers are composed of weights, which multiply the inputs of the layer; a bias; a net input function, which is generally the summation of all the weighted inputs and the bias; and neurons, in which activation functions are present.
The weights and biases are the training variables, updated according to a defined loss function through an optimisation procedure. The values obtained from the net input function are transformed by the activation functions present in each neuron of the layer. The number of neurons in the hidden layers can vary; in the output layer, however, it has to equal the number of outputs. In this way, the main parameters that define a neural network design can be identified: the set of inputs and outputs, the optimisation algorithm that updates the weights and biases, the number of hidden layers and neurons in each hidden layer, and the activation function of the neurons.
The inputs are defined as the 8 dimensions (Fig. 2) and the total static mass of the resonator. As outputs, the
resonance frequency and modal effective mass are selected.
Many optimisation procedures are available, and they generally depend on the software used to implement the neural network. In this paper, Python is used with the TensorFlow package [17] since it is free and allows easy customisation of the loss function. In TensorFlow, one of the most used optimisation algorithms to train neural networks is Adam, a stochastic optimisation method [18], which is therefore used in this work.

Figure 3: Schematic representation of a neural network.

The optimiser also needs a loss function to be minimised. For regression problems, the mean square error (MSE) is generally used:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2, \qquad (1)$$

where n is the number of samples, $y_i$ is the known value, and $\hat{y}_i$ is the predicted value.
There are no strict rules to define the number of hidden layers and the number of neurons in each layer, which generally depend on the dataset available; however, some guidelines can be found [19]. For mapping approximation, 2 hidden layers can drastically decrease the number of neurons required in the neural network [20, 19, 21]. Moreover, considering a 2 hidden layer neural network, the number of neurons can be defined for the first layer as:

$$Neurons_{1st} = \sqrt{N(m+2)} + 2\sqrt{\frac{N}{m+2}}, \qquad (2)$$

and for the second layer as:

$$Neurons_{2nd} = m\sqrt{\frac{N}{m+2}}, \qquad (3)$$

where N is the number of training samples and m is the number of outputs [19, 21]. It is important to keep in mind that these are guidelines, and optimisation procedures could be applied to define these numbers [1].
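As an illustration, the guideline formulas of Eqs. (2)-(3) can be evaluated for the dataset at hand (N = 3608 samples, m = 2 outputs); a minimal sketch:

```python
import math

def hidden_layer_sizes(N, m):
    """Two-hidden-layer sizing guideline of Eqs. (2)-(3)."""
    first = math.sqrt(N * (m + 2)) + 2 * math.sqrt(N / (m + 2))
    second = m * math.sqrt(N / (m + 2))
    return round(first), round(second)

# N = 3608 training samples and m = 2 outputs (resonance frequency and
# modal effective mass) give roughly 180 and 60 neurons, the layer sizes
# used for the network designed in this section.
n1, n2 = hidden_layer_sizes(3608, 2)
```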
The activation functions of the neurons also have to be defined. The commonly used sigmoid activation function, $g(x) = 1/(1 + e^{-x})$, is applied in this work to the neurons of the 2 hidden layers. At the output layer, linear activation functions are generally used and are therefore also applied in this work. The final neural network design without physical knowledge consideration is shown in Figure 4. It has 180 neurons in the first hidden layer, 60 neurons in the second hidden layer and 2 neurons in the output layer.
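A forward pass through this 9-180-60-2 architecture can be sketched in plain NumPy (an illustrative stand-in for the TensorFlow implementation; the weights below are random placeholders, not trained values):

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation used in the two hidden layers: g(x) = 1/(1 + e^{-x}).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, params):
    """One forward pass: 9 inputs (8 dimensions + total static mass),
    two sigmoid hidden layers of 180 and 60 neurons, linear output layer
    for the 2 outputs (resonance frequency and modal effective mass)."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return h2 @ W3 + b3          # linear activation at the output layer

rng = np.random.default_rng(0)
params = (rng.normal(size=(9, 180)), np.zeros(180),
          rng.normal(size=(180, 60)), np.zeros(60),
          rng.normal(size=(60, 2)), np.zeros(2))
y = forward(rng.normal(size=(1, 9)), params)   # shape (1, 2)
```

Once trained, each such evaluation costs next to nothing, which is what makes the surrogate cheap to query during optimisation.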

Figure 4: Schematic representation of the designed neural network.



4.2 Embedding physical knowledge

Physical knowledge is embedded in the neural network by adding a penalty term to the loss function [5]. For the case of a resonator design, a known physical relation is that the modal effective mass cannot be larger than the total mass of the resonator. Consequently, the loss function used to train the neural network can be defined as:

$$\mathrm{Loss} = \mathrm{MSE} + \lambda \frac{1}{n}\sum_{i=1}^{n} \left[\mathrm{ReLU}(\hat{m}_{MEM,i} - m_{T,i})\right]^2, \qquad (4)$$

where λ = 1000 is a weight for the penalty term, needed since the difference in mass can be too small to be significant for the optimiser. ReLU is an activation function which is linear for positive values and returns zero for negative values, such that the penalty is only applied when the total mass of a given case $m_{T,i}$ is smaller than the predicted modal effective mass $\hat{m}_{MEM,i}$.
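A NumPy sketch of the loss of Eq. (4); the column layout (frequency first, modal effective mass second) is an assumption for illustration only:

```python
import numpy as np

def pgnn_loss(y_true, y_pred, m_total, lam=1000.0):
    """Eq. (4): MSE plus a weighted physical-consistency penalty that fires
    only when the predicted modal effective mass exceeds the total mass."""
    mse = np.mean((y_true - y_pred) ** 2)
    mem_hat = y_pred[:, 1]                            # assumed MEM column
    violation = np.maximum(mem_hat - m_total, 0.0)    # ReLU(m̂_MEM - m_T)
    return mse + lam * np.mean(violation ** 2)
```

A physically consistent prediction leaves the loss equal to the plain MSE; a prediction of, for instance, 1.5 g modal effective mass for a 1.0 g resonator adds λ · 0.25 to it, steering the optimiser away from impossible designs.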

4.3 Training

The accuracy of the neural networks with and without embedded physical knowledge is evaluated on the training data, since no data is left unseen by the neural network to compose a validation data set due to the small database available. The evaluation is done by calculating the error as:

$$E = \frac{1}{n}\sum_{i=1}^{n} \frac{y_i - \hat{y}_i}{y_i}, \qquad \sigma = \sqrt{\frac{\sum_{i=1}^{n} \left(\frac{y_i - \hat{y}_i}{y_i} - E\right)^2}{n}}, \qquad (5)$$

where E and σ are the average error and standard deviation, respectively; they are calculated separately for the natural frequency and the modal effective mass. Moreover, the two neural networks are trained for the same number of $10^5$ iterations, defined by the stabilisation of the neural network error. The training takes six minutes using Google Colab cloud computers.
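The metrics of Eq. (5) amount to the mean and standard deviation of the relative prediction error; a minimal sketch:

```python
import numpy as np

def relative_error_stats(y, y_hat):
    """Average relative error E and standard deviation sigma of Eq. (5)."""
    rel = (y - y_hat) / y           # per-sample relative error
    E = rel.mean()
    sigma = np.sqrt(((rel - E) ** 2).mean())
    return E, sigma
```

For example, predictions that over- and under-shoot by 10% give E = 0 with σ = 0.1, which is why both numbers are reported side by side in Figure 5.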

(a) Natural frequency (b) Modal effective mass

Figure 5: Average error and standard deviation w.r.t. the training data for resonator natural frequency (a) and
modal effective mass (b) prediction for a neural network designed without physical knowledge consideration
(NN) and for a physics guided neural network (PGNN). The error bars represent the calculated standard
deviation.

Figure 5 shows that the physics guided neural network has an error of 0.3% ± 0.7% and 5.8% ± 20.2% for the natural frequency and modal effective mass, respectively, while the neural network without physical knowledge consideration presents an error of 0.2% ± 0.5% and 4.7% ± 14.7% for the same outputs. Hence, the physics guided neural network has a slightly smaller average error and a considerably smaller standard deviation for the two output quantities than the neural network without physical knowledge consideration.

Moreover, both neural networks predict the natural frequency of the resonators more accurately than their modal effective mass. This might be because the natural frequency is more sensitive to a change in geometry than the modal effective mass.

5 Designing a resonator using neural networks


In this section, the trained physics guided neural network is used as a surrogate model to design a resonator through optimisation, and the results are compared with those of an optimisation using FE to model the resonator. Next, limitations on the use of neural networks are demonstrated: first, by assessing the sensitivity of the accuracy to the size of the training database, and second, by requesting the neural network to extrapolate.

5.1 Design optimisation

The trained neural network is used to design a resonator through an optimisation procedure described in [8]. The optimisation objective function is:

$$\{Mode_{MEM}\}_{max}, \quad 450\,\mathrm{Hz} < Mode_{freq} < 550\,\mathrm{Hz}, \quad 0.7\,\mathrm{g} < m_T < 1.3\,\mathrm{g}, \qquad (6)$$

where $Mode_{MEM}$ and $Mode_{freq}$ are the modal effective mass and resonance frequency of the targeted first out-of-plane mode of the cantilever beam-like resonator. The optimisation is done using genetic algorithms, and the optimiser is allowed to change the parametrised dimensions shown in Figure 2 according to Table 3. The design space thus allows the creation of more designs in the optimisation procedure, within the training data maximum and minimum bounds, by letting the parametrised dimensions vary in smaller steps compared to the training data (Tab. 2).
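The constrained search of Eq. (6) can be illustrated with a simplified selection loop (a sketch only; the actual optimisation uses genetic algorithms in HEEDS, and `surrogate` and `total_mass` are hypothetical stand-ins for the trained network and the mass computation):

```python
def best_feasible(candidates, surrogate, total_mass):
    """Among candidate designs, keep the one with the largest predicted
    modal effective mass that satisfies the constraints of Eq. (6):
    450 Hz < frequency < 550 Hz and 0.7 g < total mass < 1.3 g."""
    best, best_mem = None, float("-inf")
    for design in candidates:
        freq, mem = surrogate(design)   # negligible cost once trained
        if 450.0 < freq < 550.0 and 0.7 < total_mass(design) < 1.3 \
                and mem > best_mem:
            best, best_mem = design, mem
    return best, best_mem
```

Because one surrogate evaluation is essentially free compared to an FE solve, such a loop can screen thousands of candidates in the time a single FE simulation takes, which underlies the large number of designs evaluated in this section.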

Table 3: Dimension bounds and steps used during the optimisation.

Dimension    min - step - max
A            2 - 2 - 8 mm
B            2 - 0.5 - 6 mm
C            2 - 0.5 - 10 mm
D            2 - 0.5 - 16 mm
E            2 - 0.5 - 6 mm
F            2 - 0.5 - 20 mm
G            2 - 0.5 - 20 mm
H (offset)   50 - 10 - 100 %

The optimisation is run until convergence, which happens after 5680 designs and 3 hours of calculation time on the same computer used to generate the database. The resonance frequency and modal effective mass of the six best resonator designs retained from the optimisation are compared with their FE counterparts in Figure 6. The average error among the six best designs is 1.2% for the resonance frequency and 3.8% for the modal effective mass. The dimensions of the best resonator design are shown in Figure 7; only two of the eight dimensions are present in the training data (dimensions B and C).
Following the same objective function (Eq. 6) and the same bounds (Tab. 3), the optimisation is performed again, now using a parametrised FE model, to benchmark the optimisation using the trained neural network model. The comparison of the two optimisations shows that the optimisation using the neural network model can evaluate 21 times more designs in a similar time than the optimisation using FE modelling (Fig. 8a), allowing the former to find a resonator design with a 20% higher modal effective mass (Fig. 8b). The optimisation using FE modelling is run until the completion criterion determined by the software, taking 13 hours, hence more than 4 times longer than the optimisation using the neural network; it still evaluates 5 times fewer designs, and the best resonator design obtained still has a 6% lower modal effective mass (Fig. 8).

5.2 Neural network limitations

To study the sensitivity of the accuracy of the neural network model to the training database size, 25% of
the data is randomly removed from the original training database, the neural network is re-trained, and the

(a) Natural frequency (b) Modal effective mass (MEM)

Figure 6: Resonance frequency (a) and modal effective mass (b) of the six best resonator designs obtained through the optimisation procedure using the trained neural network, compared to their FE counterparts.

Figure 7: Resonator dimensions of the optimal resonator design obtained through the optimisation procedure
using the neural network model. The dimensions are shown in millimetres.

optimisation is re-run. The optimisation results are again evaluated as before. As shown in Figure 9, the error between the neural network and its FE model counterpart increases on average by a factor of 5 for both outputs when compared to the error obtained for the neural network trained with the complete database (Fig. 6). This shows that the available data set is already limited and that the accuracy of the neural network is very sensitive to the amount of data available for training.
A known problem of neural networks is that they lose accuracy when they have to extrapolate instead of interpolate. This is verified by re-running the optimisation using the neural network trained with the full database but with the following objective function:

$$\{Mode_{MEM}\}_{max}, \quad 3000\,\mathrm{Hz} < Mode_{freq} < 3010\,\mathrm{Hz}, \quad 0.7\,\mathrm{g} < m_T < 1.3\,\mathrm{g}. \qquad (7)$$

As the data set is limited to resonators with a maximum resonance frequency of 2000 Hz, the neural network has to extrapolate to reach the requested resonance frequency. Figure 10 shows that, due to the extrapolation, the average error of the neural network among the 6 best resonator designs is 2 times and 8.5 times higher for the resonance frequency and modal effective mass, respectively, when compared to the error obtained in the interpolation case. This shows that it is important to know the training data when using these kinds of models for design space exploration.

(a) Designs evaluated (b) Modal effective mass (MEM)

Figure 8: Comparison over time of the optimisation using the neural network model and the optimisation using the FE model, considering (a) the number of designs evaluated and (b) the maximum obtained modal effective mass.


(a) Natural frequency (b) Modal effective mass (MEM)

Figure 9: Resonance frequency (a) and modal effective mass (b) of the six best resonator designs obtained through the optimisation procedure using the neural network trained with only 75% of the available data set, compared to their FE model counterparts.

6 Conclusions

The potential of physics guided neural networks for the design space exploration of resonant metamaterials is investigated. To that aim, a neural network is designed that can predict the two main characteristics that drive the performance of a resonator to create a stop band: the resonance frequency and the modal effective mass. As inputs, the 8 geometrical dimensions that determine the design and the total mass of the resonator are selected. The training database is generated by FE model simulations. To improve the training efficiency of the neural network, known physical relations are embedded in the loss function used to train it. This creates a more accurate neural network as compared to a neural network in which these physical relations are not embedded. The trained neural network is then used to design a resonator through optimisation. The optimisation result shows that the neural network yields accurate predictions of the requested outputs, even though the obtained result needs to be interpolated from the training data. The optimisation result is also compared to an optimisation using a parametrised FE model directly, and it is shown that the neural network allows the evaluation of more designs in a considerably shorter time, resulting in a better resonator design. Furthermore, two limitations on the use of neural networks are demonstrated. Firstly, it is shown that the neural network accuracy depends on the size of the training database. Secondly, it is confirmed that extrapolation leads to an increased inaccuracy of the neural network model, showing the importance


(a) Natural frequency (b) Modal effective mass (MEM)

Figure 10: Resonance frequency (a) and modal effective mass (b) of the six best resonator designs obtained through the optimisation procedure using the fully trained neural network following the objective function of Eq. 7, compared to their FE model counterparts.

of knowing the training database used for an accurate design space exploration. Therefore, this paper shows that neural networks can be used for design space exploration; however, they have limitations, mostly dependent on the training database available.

Acknowledgements

Elke Deckers is a postdoctoral researcher of the Fund for Scientific Research Flanders (F.W.O.). The Research Fund KU Leuven is gratefully acknowledged for its support. The AI impulse program of the Flemish Government is gratefully acknowledged for its support. This research was partially supported by Flanders Make, the strategic research centre for the manufacturing industry.

References
[1] K. M. Hamdia, X. Zhuang, and T. Rabczuk, “An efficient optimization approach for designing machine
learning models based on genetic algorithm,” Neural Computing and Applications, 2020.

[2] Y.-C. Chan, F. Ahmed, L. Wang, and W. Chen, “Metaset: Exploring shape and property spaces for
data-driven metamaterials design,” arXiv preprint arXiv:2006.02142, 2020.

[3] B. Warsito, R. Santoso, H. Yasin et al., “Cascade forward neural network for time series prediction,” in
Journal of Physics: Conference Series, vol. 1025, no. 1. IOP Publishing, 2018, p. 012097.

[4] R. Stewart and S. Ermon, “Label-free supervision of neural networks with physics and domain knowl-
edge,” in Thirty-First AAAI Conference on Artificial Intelligence, 2017.

[5] A. Karpatne, W. Watkins, J. Read, and V. Kumar, “Physics-guided neural networks (pgnn): An appli-
cation in lake temperature modeling,” arXiv preprint arXiv:1710.11431, 2017.

[6] Z. Liu, X. Zhang, Y. Mao, Y. Zhu, Z. Yang, C. T. Chan, and P. Sheng, “Locally resonant sonic materials,” Science, vol. 289, no. 5485, pp. 1734–1736, 2000.

[7] C. Claeys, K. Vergote, P. Sas, and W. Desmet, “On the potential of tuned resonators to obtain low-
frequency vibrational stop bands in periodic panels,” Journal of Sound and Vibration, vol. 332, no. 6,
pp. 1418–1436, 2013.

[8] N. Rocha de Melo Filho, C. Claeys, E. Deckers, and W. Desmet, “Optimised thermoformed metama-
terial panel design with a foam core for improved noise insulation performance,” in Proceedings of
ISMA2020 International Conference on Noise and Vibration Engineering and USD2020 International
Conference on Uncertainty in Structural Dynamics, Leuven, Belgium, Sep. 2020.

[9] N. G. Rocha de Melo Filho, M. Clasing Villanueva, L. Sangiuliano, C. Claeys, E. Deckers, and
W. Desmet, “Optimisation based design of a metamaterial foam core sandwich panel with in-core res-
onators,” in INTER-NOISE 2020 Congress and Conference Proceedings, vol. Accepted, Seoul, South
Korea, Aug. 2020.

[10] M. Clasing Villanueva, C. Claeys, N. G. Rocha de Melo Filho, E. Deckers, K. Geurts, I. Van de Weyen-
berg, P. Campestrini, B. Pluymers, and W. Desmet, “Design tool for realisable vibro-acoustic meta-
materials based on their nvh performance,” in Proceedings of International Conference on Noise and
Vibration Engineering (ISMA2018)/International Conference on Uncertainty in Structural Dynamics
(USD2018). KU Leuven, Dept. Werktuigkunde, 2018, pp. 3125–3134.

[11] N. Rocha de Melo Filho, C. Claeys, E. Deckers, and W. Desmet, “Realisation of a thermoformed vibro-
acoustic metamaterial for increased stl in acoustic resonance driven environments,” Applied Acoustics,
vol. 156, pp. 78–82, 2019.

[12] C. Claeys, E. Deckers, B. Pluymers, and W. Desmet, “A lightweight vibro-acoustic metamaterial demonstrator: Numerical and experimental investigation,” Mechanical Systems and Signal Processing, vol. 70, pp. 853–880, 2016.

[13] L. Van Belle, C. Claeys, E. Deckers, and W. Desmet, “On the impact of damping on the dispersion
curves of a locally resonant metamaterial: Modelling and experimental validation,” Journal of Sound
and Vibration, vol. 409, pp. 1–23, 2017.

[14] C. Claeys, N. G. R. de Melo Filho, L. Van Belle, E. Deckers, and W. Desmet, “Design and validation of
metamaterials for multiple structural stop bands in waveguides,” Extreme Mechanics Letters, vol. 12,
pp. 7–22, 2017.

[15] N. Rocha de Melo Filho, C. Claeys, E. Deckers, and W. Desmet, “Metamaterial foam core sandwich
panel designed to attenuate the mass-spring-mass resonance sound transmission loss dip,” Mechanical
Systems and Signal Processing, vol. 139, p. 106624, 2020.

[16] J. J. Wijker, “Modal effective mass,” Spacecraft Structures, pp. 247–263, 2008.

[17] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., “Tensorflow: A system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265–283.

[18] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint
arXiv:1412.6980, 2014.

[19] D. Stathakis, “How many hidden layers and nodes?” International Journal of Remote Sensing, vol. 30,
no. 8, pp. 2133–2147, 2009. [Online]. Available: https://doi.org/10.1080/01431160802549278

[20] K.-I. Funahashi, “On the approximate realization of continuous mappings by neural networks,” Neural
networks, vol. 2, no. 3, pp. 183–192, 1989.

[21] G.-B. Huang, “Learning capability and storage capacity of two-hidden-layer feedforward networks,”
IEEE transactions on neural networks, vol. 14, no. 2, pp. 274–281, 2003.
