
Article

A Neural-Network-Based Methodology for the Evaluation of the Center of Gravity of a Motorcycle Rider
Francesco Carputo 1 , Danilo D’Andrea 2 , Giacomo Risitano 2 , Aleksandr Sakhnevych 1 , Dario Santonocito 2
and Flavio Farroni 1, *

1 Department of Industrial Engineering, University of Naples Federico II, 80125 Naples, Italy;
francesco.carputo@unina.it (F.C.); ale.sak@unina.it (A.S.)
2 Department of Engineering, University of Messina, Contrada di Dio (S. Agata), 98166 Messina, Italy;
dandread@unime.it (D.D.); giacomo.risitano@unime.it (G.R.); dsantonocito@unime.it (D.S.)
* Correspondence: flavio.farroni@unina.it

Abstract: A correct reproduction of a motorcycle rider's movements during driving is a crucial aspect, indeed the most influential one, of the entire motorcycle–rider system. The rider performs significant variations in terms of body configuration on the vehicle in order to optimize the management of the motorcycle in all the possible dynamic conditions, comprising cornering and braking phases. The aim of the work is to develop a technique to estimate the body configurations of a high-performance driver in completely different situations, starting from publicly available videos, collecting them by means of image acquisition methods, and employing machine learning and deep learning techniques. The technique allows us to calculate the center of gravity (CoG) of the driver's body in the acquired video and therefore the CoG of the entire driver–vehicle system, correlating it to commonly available vehicle dynamics data, so that the force distribution can be properly determined. As an additional feature, a specific function correlating the relative displacement of the driver's CoG towards the vehicle body and the vehicle roll angle has been determined, starting from the data acquired and processed with the machine and deep learning techniques.

Keywords: motorcycle driver; multibody co-simulation; machine learning; deep learning

Citation: Carputo, F.; D'Andrea, D.; Risitano, G.; Sakhnevych, A.; Santonocito, D.; Farroni, F. A Neural-Network-Based Methodology for the Evaluation of the Center of Gravity of a Motorcycle Rider. Vehicles 2021, 3, 377–389. https://doi.org/10.3390/vehicles3030023

Academic Editor: Chen Lv

Received: 1 June 2021; Accepted: 9 July 2021; Published: 15 July 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Human body movement is an object of crucial interest, especially in the biomedical field [1,2]. Technological evolution has allowed considerable progress, especially in motor rehabilitation techniques in sports, in the study of motor problems related to behavioral pathologies, and in the analysis of dynamic systems in which a person interacts with the surrounding environment, both in real and virtual situations.

The fields of application of such a discipline can be various, and most of the related issues concern the study of the balance characteristics of a human body and the determination of its center of gravity during motion, which is fundamental for a proper calculation of the inertial components and for evaluating the load distribution in motion phases [3,4]. In particular, the vehicular simulation field nowadays lacks robust and usable methodologies to account for the driver/rider position in the vehicle, and for the two-wheel domain the problem is even deeper, due to the significant influence that the rider's mass has on the overall rider + vehicle system [5].

Due to the difficulty of sensing the rider with specific sensors aimed at measuring the CoG position, the optimal method to acquire data on the rider's position is based on image processing. For this reason, an analysis aimed at obtaining a preliminary comprehension of the state of the art in such a field has been carried out.

In recent years, motion analysis has evolved substantially, alongside major technological advances, and there is growing demand for faster and more sophisticated techniques
for capturing motion in a wide range of contexts, ranging from clinical gait assessment [6]
to videogame animation.
Biomechanical tools have developed greatly, from manual image annotation to marker-based optical trackers, inertial sensor-based systems, and marker-free systems using sophisticated human body models, dual-energy X-ray absorptiometry (DXA) [7], machine vision, and machine learning algorithms. With such scope, the use of sophisticated sensors based on physical markers applied to the human body allows one to measure the physical quantities (force, speed, acceleration and displacement) linked to the different movements made by the body, which, for example, in the sports field, allows one to carry out studies aiming at the improvement of the athlete's performance [8,9].
An alternative method, markerless motion capture, based on the use of video acquisitions processed by machine learning techniques, aims to identify the positions of various key points belonging to the human body starting from a single video frame or image, with no need for uncomfortable and impractical physical markers [10,11].
The major difficulty of this technique is that some body parts occlude others during the movement or in some given postures. As a result, the automatic and markerless identification of body segments faces many difficulties that make it a complex problem [12].
In recent years, thanks to the evolution of image-processing tools, the interest in
marker-free motion capture systems has significantly increased and different software
methods allowing one to automatically identify the anatomical landmarks have been
developed, among them the OpenPose software [13]. The OpenPose package is capable of performing real-time skeleton tracking on a large number of subjects by analyzing 2D images [14].
Starting from the research output of the collaboration between the University of
Messina and the University of Naples Federico II [15], based on the employment of the
OpenPose software aiming to predict the center of gravity (CoG) of a human subject posing
in a specific set of 2D images, the present work focuses on a motorcycle rider adopting the
images acquired from a motorcycle simulation game, MotoGP19.
The work aims to develop a technique employing neural network technology
for correlation with vehicle data, applied in a deep learning environment, which, starting
from a partial capture of the driver position acquired in each video frame, allows one to
determine the key points not visible from the camera and corresponding to the entirety of
the driver's body. Starting from the information collected, the CoG of the driver's body is evaluated by adopting deep learning technology. The neural network
technique is then employed to determine the correlation between the relative displacement
of the driver’s center of gravity and the motorcycle’s body roll angle. The continuous
availability of information on the position of the driver/rider’s CoG is fundamental in the
motorcycle industry, both for racing and safety applications, due to the need to consider
the influence of the human body on the rider + vehicle system in design and simulation
activities [16,17].

2. Materials and Methods


The presented work aims to illustrate a methodological approach, developed by using data acquired from a reference scenario, that will eventually be substituted by real video data. Each rider moves in a different way, as the driving styles of racing riders demonstrate, and once the methodology is validated, the developed algorithms can be trained for each different rider, reproducing their typical motion and style, with the final aim of determining the continuous position of the center of gravity, which is fundamental for simulation activities and usually hard to determine for motorcycles. In order to obtain a reliable dataset with repeatable and robust data, an approach based on the use of the MotoGP19 simulation videogame has been chosen in the acquisition phase, in which several runs have been captured. The video frames acquired in the various dynamic conditions of the motorcycle and the driver's body
configuration constitute a suitable and repeatable dataset, on which the OpenPose software processing has been employed to calculate all the necessary body markers' positions and, therefore, the center of gravity of the driver's body. The particular choice to start with the videogame is motivated by the fact that modern simulation and sports games are generally very faithful to the real movements of the athletes and drivers, since their modelling is based on the extrapolation of Active Marker Mocap real data [18–20]. As a result, the simulation output reproduces all the athlete's movements in a realistic way and, in the particular case under analysis, it allows one to have a great coherence with the real movements assumed by motorcycle riders during the operation of the motorcycle, even in extreme dynamic conditions. Eventual distortions and inaccuracies of the virtual camera did not represent a particular issue, due to the methodologic spirit of the study, whose data can be progressively improved in terms of quality in the following activities, keeping the value of the demonstrated feasibility.

Modern gyroscopic cameras, represented in Figure 1a, installed on racing motorcycles provide only a partial shape of the driver's body, being limited to acquiring the movement information regarding only the upper part of the rider's body (as shown in Figure 1b). The missing part, mainly comprising the legs and the bottom part of the torso, becomes necessary to correctly evaluate the CoG of the rider's body [21–24].

Figure 1. (a) Gyroscopic camera; (b) rear shot with gyroscopic camera.

2.1. Acquisition of the Video Frames
A series of simulations were carried out via MotoGP19 on different tracks in order to explore as many dynamic conditions as possible concerning the motorcycle behavior and the driver's body configurations.

For each track under analysis, the best track lap was selected as a reference and was recorded in two different video acquisitions using the two different points of view available in the simulator game:
• Rear view camera 1 (Figure 2a);
• Rear view camera 2 (Figure 2b).

Figure 2. (a) Rear view camera 1; (b) rear view camera 2.

The video was subsequently processed via the OpenPose package as described later.

2.2. OpenPose Processing

OpenPose is an open source software package for the detection of multiperson key points in real time, starting from video frame acquisitions. It is able to jointly detect the key points of the human body, namely the hands, face and feet, on individual images, up to a total of 135 key points [25].
It is capable of processing, in real time, single frames or direct videos in the input, providing, in output, the same images and videos with a further level of physics represented by the key points detected and added to the input frames.

For the recognition of key points, OpenPose uses a pretrained convolutional neural network (CNN) called VGGNet [26,27]. The network accepts a color image as input (Figure 3a), returning the 2D positions of the key points for each person in the frame (Figure 3b). The additional processing layer, representing the extracted points, is shown in Figure 3c.

Figure 3. (a) Original frame; (b) postprocessed frame by OpenPose; (c) plot of the key points.

3. Training Dataset

The output data obtained by means of the OpenPose package were divided into two key point sets, depending on their positions on the human body, to constitute suitable input and target datasets, indispensable for the machine learning training. In particular, the processed key points have been split as follows:
• Input: 10 points for the upper body part (Figure 4a);
• Target: 10 points for the lower body part (Figure 4b).

The neural network is set to evaluate 10 points relative to the lower part of the body, starting from the 10 key points of the upper part acquired through camera 2 (Figure 2b).
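As a concrete illustration of this preparation step, the sketch below (in Python; the helper names, the unit-height normalization, and the index lists are our assumptions, since the paper only states that the shapes were rescaled to the same proportions) parses one OpenPose frame, as written by its `--write_json` option, and splits the keypoints into input and target sets:

```python
import numpy as np

def pose_from_frame(frame):
    """Extract the first person's 2D keypoints from one OpenPose frame dict
    (loaded from a --write_json file), where 'pose_keypoints_2d' is a flat
    [x1, y1, c1, x2, y2, c2, ...] list of coordinates and confidences."""
    flat = frame["people"][0]["pose_keypoints_2d"]
    return np.asarray(flat, dtype=float).reshape(-1, 3)[:, :2]  # drop confidences

def normalize_shape(pts):
    """Center the shape on its centroid and scale it to unit height, so that
    shapes acquired with differently framed cameras share the same proportions."""
    pts = pts - pts.mean(axis=0)
    height = np.ptp(pts[:, 1])
    return pts / height if height else pts

def split_input_target(pts, upper_idx, lower_idx):
    """Upper-body points as network input, lower-body points as target."""
    return pts[upper_idx], pts[lower_idx]
```

Stacking the x and z columns of these arrays over all acquired frames would then yield the per-coordinate training matrices.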
Due to the different framings between the two chosen cameras, it was necessary to scale the shapes obtained from the two different sets of key points to the same proportions.

Furthermore, the data were divided into x and z coordinates because it is necessary to train two distinct neural networks for the x and z coordinates, respectively, to optimize the calibration algorithm response. In the y-direction, the hypothesis of a fixed CoG coordinate has been introduced, due to the low variability of the rider's motion in said direction. Four distinct matrices of points have thus been prepared for the training procedure:
• x coordinates of the upper points;
• z coordinates of the upper points;
• x coordinates of the lower points;
• z coordinates of the lower points.

Figure 4. (a) The top 10 points as network input; (b) the bottom 10 points as a network target.

3.1. Assessment of the Centre of Gravity of the Driver's Body

Assuming that the body density is constant and all the body parts can be described by simple geometrical features, the calculation of the center of gravity can be easily achieved geometrically [28].

However, it has to be taken into account that the human body is characterized by a nonuniform distribution of density between each couple of defined markers. With such reference, specific methodologies and more accurate methods for calculating the center of gravity could be employed [29].

Schematizing the human body as a series of discrete mass parts, the center of gravity can be assessed as follows:

x_G = (∑ m_i x_i) / M,    z_G = (∑ m_i z_i) / M    (1)

where:
• m_i is the mass of the i-th element;
• M is the mass of the body (including clothing).

In order to employ the described equivalent system method, it is necessary to individuate the position of the CoG of each individual part of the human body, as described in Table 1, obtained by processing data available from the literature. It was designed to describe body segment mass as a proportion of total body mass and the location k of each segment's center of mass as a proportion of segment length, in the three transverse, sagittal and longitudinal planes. Among the papers that provide such data, obtainable through experimental tests, this work adopts the approach described by Zatsiorsky et al. in [30], modified by de Leva in [31,32].
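Equation (1) is simply a mass-weighted average of the segment positions; a minimal sketch (the function name is ours):

```python
def body_cog(segments):
    """Equation (1): CoG of a body modelled as discrete mass parts.

    `segments` is a list of (m_i, x_i, z_i) tuples, one per body part;
    returns the coordinates (x_G, z_G) of the whole-body center of gravity.
    """
    M = sum(m for m, _, _ in segments)          # total mass, including clothing
    x_g = sum(m * x for m, x, _ in segments) / M
    z_g = sum(m * z for m, _, z in segments) / M
    return x_g, z_g
```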
Table 1. Percentage values of mass and position of the center of gravity in adult men and women [30,31] (copyright has been permitted).

Segment       | Mass (% mass)  | CM (% length)  | Sagittal k (% length) | Transverse k (% length) | Longitudinal k (% length)
              | Female  Male   | Female  Male   | Female  Male          | Female  Male            | Female  Male
Head          | 6.68    6.94   | 58.94   59.76  | 30.1    33.2          | 32.7    34.5            | 28.8    28.6
Trunk         | 42.57   43.46  | 41.51   44.86  | 34.6    36            | 33.2    33.3            | 16.2    18.1
Upper Trunk   | 15.45   15.96  | 20.77   29.99  | 60      60.57         | 41.1    38.7            | 58.6    55.9
Mid Trunk     | 14.65   16.33  | 45.12   45.02  | 43.3    48.2          | 35.4    38.3            | 41.5    46.8
Lower Trunk   | 12.47   11.17  | 49.2    61.15  | 43.3    61.5          | 40.2    55.1            | 44.4    58.7
Upper Arm     | 2.55    2.71   | 57.54   57.72  | 27.8    28.5          | 26      26.9            | 14.8    15.8
Forearm       | 1.38    1.62   | 45.59   45.74  | 26.2    27.7          | 25.8    26.6            | 9.45    12.15
Hand          | 0.56    0.61   | 74.74   79     | 35.4    45.2          | 32.7    36.9            | 23.4    29
Thigh         | 14.78   14.16  | 36.12   40.95  | 36.9    32.9          | 36.4    32.9            | 16.2    14.9
Shank         | 4.81    4.33   | 44.16   44.59  | 27.1    25.4          | 26.8    24.2            | 9.3     10.3
Foot          | 1.29    1.37   | 40.14   44.15  | 29.9    25.7          | 27.9    24.5            | 13.9    12.4

In particular, two procedures to evaluate the CoG are compared: the geometric method and the kinematic method [13].
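The CM column of Table 1 locates each segment's own CoG along the line joining its two endpoint key points; a sketch of that interpolation, whose outputs feed Equation (1) as the (m_i, x_i, z_i) terms (the segment names, the proximal-to-distal convention, and the function name are our assumptions):

```python
# Fractions taken from Table 1 (male column): mass as a fraction of total body
# mass, CM as a fraction of segment length measured from the proximal endpoint.
TABLE1_MALE = {
    # segment: (mass_fraction, cm_fraction)
    "thigh": (0.1416, 0.4095),
    "shank": (0.0433, 0.4459),
    "foot":  (0.0137, 0.4415),
}

def segment_term(name, proximal, distal, body_mass):
    """Return (m_i, x_i, z_i) for one segment given its two endpoint key points
    (x, z) and the rider's total body mass."""
    mass_frac, cm_frac = TABLE1_MALE[name]
    # The segment CoG lies at cm_frac of the way from the proximal endpoint.
    x = proximal[0] + cm_frac * (distal[0] - proximal[0])
    z = proximal[1] + cm_frac * (distal[1] - proximal[1])
    return body_mass * mass_frac, x, z
```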
The overall procedure consists, therefore, of the following steps:
1. Capture of video frames from the MotoGP19 simulator (rear camera 2);
2. Data processing with the OpenPose software to evaluate the center of gravity of the individual body elements;
3. Evaluation of the center of gravity of the whole body using the data in Table 1.
3.2. Application on the Acquired Data

The recorded video was processed with the OpenPose software for both rear cameras available in the MotoGP19 simulator, as illustrated in Figure 5 (rear camera 1, including data regarding the bottom part of the rider's body) and in Figure 6 (rear camera 2, simulating the capabilities of a common onboard camera):
Figure 5. OpenPose postprocessing of MotoGP19 rear camera 1.
Figure 6. OpenPose postprocessing of MotoGP19 rear camera 2.
The processing of the acquisitions performed with camera 1 has a good quality, with little presence of corrupt frames and undetected key points. On the contrary, the processing of the acquisition with camera 2 presents several corrupt frames, in which the algorithm is not able to recognize parts of the body shape, as illustrated in Figure 7.
Figure 7. Errors in the OpenPose processing of the MotoGP19 frame regarding rear camera 2.
3.3. Machine Learning Technique
The data used to train the neural network consist of point arrays from the OpenPose processing. Machine learning algorithms employing the MATLAB neural fitting tool [33] have been used to train the neural network. In particular, 20 different runs, each one comprising about 10 laps, for a global acquired time of 20,000 s with an acquisition frequency of 20 Hz, have been used to build the global dataset. The data have been organized into 10 input points of the upper body and 10 target points of the lower body, while the hidden and the output layers of the neural network have dimensions of 6 and 10 neurons, respectively [33,34], as reported in Figure 8.

Figure 8. Neural network layout.

The designed neural network is a two-layered feed-forward network with six hidden
neurons based on the sigmoid (activation function of the nonlinear “neuron”) and with 10
linear output neurons (linear regression output function).
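The layout described above can be sketched as a plain forward pass (an assumption-laden sketch: the class, weight initialization, and shapes are placeholders, and the backpropagation training is omitted; the paper uses the MATLAB Neural Network Toolbox rather than this code):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class TwoLayerNet:
    """Two-layered feed-forward network: sigmoid hidden layer (6 neurons)
    and linear output layer (10 neurons), mirroring the Figure 8 layout."""
    def __init__(self, n_in=10, n_hidden=6, n_out=10, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        h = sigmoid(X @ self.W1 + self.b1)  # nonlinear hidden layer
        return h @ self.W2 + self.b2        # linear regression output

net = TwoLayerNet()
out = net.forward(np.zeros((4, 10)))
print(out.shape)  # (4, 10)
```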
The training process of the neural networks is substantially based on a trial-and-error
approach. Therefore, it is usually necessary to train the network several times varying its
parameters until converging to the desired results. The dataset was divided into training,
validation, and testing sets, assigning 60%, 35% and 5% of data to the three subsets,
respectively, obtaining the datasets shown in Figure 9. The figure shows the results of
the training process, highlighting the convergence obtained for both x and z coordinates of
the target points belonging to the lower part of the driver’s body.
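The 60/35/5 partition can be sketched as follows (a hypothetical helper; the original MATLAB toolbox performs this subdivision internally):

```python
import numpy as np

def split_dataset(n_samples, fractions=(0.60, 0.35, 0.05), seed=0):
    """Shuffle frame indices and split them into training, validation,
    and testing subsets with the fractions quoted in the text."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(fractions[0] * n_samples)
    n_val = int(fractions[1] * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_dataset(1000)
print(len(train), len(val), len(test))  # 600 350 50
```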
The neural network outputs in terms of the entire body representation are illustrated
in Figure 10, focusing in particular on the validation of the lower points calculated by
means of the described technique, which are in good agreement with the ones acquired
in the same frames from a different point of view. The plot reports the best performance
obtained, defined as the lowest validation error.

Figure 9. Best training performance: (a) x coordinates; (b) z coordinates.

Two distinct acquisitions were made, relating to the same lap, through the use of the
two cameras:
• Acquisition 1 with camera 1: number of frames acquired 3596 (Figure 5);
• Acquisition 2 with camera 2: number of frames acquired 3583 (Figure 6).
Using the formulations described in Equation (1), examples of the point positions, in
terms of the center of gravity, obtained thanks to the OpenPose processing and to the
machine learning techniques (starting from the OpenPose estimated data), are
represented in Figures 11 and 12.
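Equation (1) itself is not reproduced in this excerpt, but the operation behind it, a segment-mass-weighted average of key-point coordinates, can be sketched as follows (the mass fractions below are placeholders in the spirit of de Leva's anthropometric tables [31], not the paper's actual values):

```python
import numpy as np

def body_cog(points, mass_fractions):
    """Center of gravity as the segment-mass-weighted average of key-point
    coordinates in the (x, z) image plane."""
    w = np.asarray(mass_fractions, dtype=float)
    w = w / w.sum()  # normalize to unit total mass
    return w @ np.asarray(points, dtype=float)

# two key points in the (x, z) plane with equal weights: the CoG is the midpoint
print(body_cog([[0.0, 0.0], [2.0, 4.0]], [1.0, 1.0]))  # [1. 2.]
```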
In such figures, the confidence ellipse (or sway area) is depicted. It represents the
surface that contains (with 86% probability) the positions of the calculated centers of
gravity [35].
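Under a bivariate Gaussian assumption, the sway-area ellipse for a chosen probability can be computed from the covariance of the CoG positions (a sketch only; the data are synthetic, with standard deviations of the same order as Table 2, and the closed-form chi-square quantile is a property of the 2-degree-of-freedom case):

```python
import numpy as np

def confidence_ellipse(xy, prob=0.86):
    """Semi-axes and orientation of the confidence (sway-area) ellipse
    containing the CoG positions with the given probability [35]. For two
    degrees of freedom the chi-square quantile is -2*ln(1 - prob)."""
    cov = np.cov(np.asarray(xy).T)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    scale = -2.0 * np.log(1.0 - prob)
    semi_axes = np.sqrt(eigvals * scale)
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # major-axis tilt
    return semi_axes, angle

rng = np.random.default_rng(0)
xy = rng.normal(scale=[5.4, 8.2], size=(4000, 2))  # cm, Table 2 order of magnitude
(semi_minor, semi_major), angle = confidence_ellipse(xy)
print(semi_minor, semi_major, angle)
```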
Table 2 further evidences the similarity, in terms of the standard deviation of the x and
z coordinates, between the OpenPose calculations and the results of the machine learning
techniques, starting from the same OpenPose raw dataset.
Figure 10. Comparison between estimated and acquired key points in different body configurations.
Table 2. Standard deviation value comparison highlighting consistency between the points acquired
and processed by means of OpenPose and estimated by means of neural network.

                  Standard Deviation [cm]
                    X           Z
OpenPose           5.43        8.19
Neural Network     5.01        9.1

Figure 11. Dispersion of points obtained thanks to OpenPose software compared with the neural
network results (for acquisition 1).

Figure 12. Dispersion of points obtained thanks to OpenPose software compared with the neural
network results (for acquisition 2).

3.4. Correlation with Roll Angle

The capacity to predict the motorcycle rider’s behavior becomes crucial when it
comes to correctly define and design the dynamic characteristics of the entire
motorcycle–rider system [36,37]. The center of gravity of a motorcycle body can be
determined through geometric and dynamic parameters, usually already available during
the design phase and partly extrapolated through data acquisition systems [38,39].

Regarding the driver inertia system, it varies instant by instant depending on the
driving style and the specific dynamic maneuver [40]. For this reason, one of the aims of
this work is to understand if there is any correlation between the driver’s configuration
and the main telemetry channels, the rolling angle being among them. The study has
regarded the relative movement between the motorcycle and rider systems, calculated as
the minimum distance “d” between the driver’s center of gravity and the vehicle rolling
axle, the straight line belonging to the symmetric geometrical plane ISO-xz of the moving
frame of the vehicle, as illustrated in Figure 13.

Figure 13. Distance d between the driver’s center of gravity and the vehicle’s rolling axle.

The explicit equation of the distance “d” is defined in Equation (2). The value of d was
calculated as the minimum distance between a point and a straight line, using the
equation in explicit form per each video frame:

d(P, r) = |z_P − (m·x_P + q)| / √(1 + m²)    (2)
where:
• x_P, z_P represent the coordinates of point P;
• m is the angular coefficient of the straight line r;
• q is the intercept on the ordinate.
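Equation (2) translates directly into code (a minimal sketch; the function name is hypothetical):

```python
import math

def point_line_distance(x_p, z_p, m, q):
    """Minimum distance between point P = (x_p, z_p) and the rolling axle
    described in explicit form as z = m*x + q, per Equation (2)."""
    return abs(z_p - (m * x_p + q)) / math.sqrt(1.0 + m * m)

# point (0, 1) against the line z = x (m = 1, q = 0)
print(point_line_distance(0.0, 1.0, 1.0, 0.0))  # 0.7071067811865475
```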
Figure 14a shows the trend of the quantity “d” as a function of the roll angle relative to
the points acquired and processed by OpenPose. Figure 14b, in analogy, shows the trend of
the quantity “d” as a function of the roll angle relative to the points estimated by means of
the machine learning technique. In both cases, steady state conditions have been selected
and reproduced with a third order polynomial to fit the main trends, highlighting similar
shapes. The low availability of acquired vehicle channels did not allow us to produce
a clear fitting or further correlations, but the qualitative results encourage follow-up
studies involving other variables and a wider dataset.
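The third-order polynomial fit of the steady-state trends can be sketched with `numpy.polyfit` (the data below are synthetic; the real acquired channels are not available in this excerpt):

```python
import numpy as np

# Third-order polynomial fit of the distance "d" against the roll angle,
# as used for the steady-state trends of Figure 14 (invented trend here).
roll = np.linspace(-45.0, 45.0, 200)       # roll angle, deg
d = 0.02 + 1e-4 * roll + 2e-6 * roll**3    # distance d, m
coeffs = np.polyfit(roll, d, deg=3)        # highest power first
d_fit = np.polyval(coeffs, roll)
print(np.max(np.abs(d_fit - d)) < 1e-6)    # exact cubic is recovered
```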
The negative and positive values of the roll angle represent the vehicle cornering on
the left and on the right, respectively.
Performing a specific data processing technique, consisting of removing the nonphysical
outliers and transient stages with thresholds on “d” at 50 cm and on the roll angle
derivative, filtering the data with a 1 Hz low-pass filter, reporting the rolling angle values
in the positive quadrant, and performing a linear regression, a preliminary trend of the
distance d with the roll angle could be pointed out, as highlighted in Figure 15.
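The chain of operations described above can be sketched as follows (a sketch with synthetic signals: the 50 cm threshold and 1 Hz cutoff come from the text, while the function names, filter order, and test data are assumptions; the transient-stage rejection on the roll-angle derivative is omitted for brevity):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(d, roll, fs=20.0, d_max=0.5, cutoff=1.0):
    """Threshold nonphysical "d" values (> 50 cm), low-pass filter at 1 Hz,
    fold the roll angle into the positive quadrant, then fit a linear
    regression d = slope*|roll| + intercept."""
    b, a = butter(2, cutoff / (fs / 2.0))   # 1 Hz Butterworth low-pass
    d_f = filtfilt(b, a, d)
    roll_f = np.abs(filtfilt(b, a, roll))   # positive quadrant
    keep = d_f < d_max                      # drop nonphysical outliers
    slope, intercept = np.polyfit(roll_f[keep], d_f[keep], 1)
    return slope, intercept

t = np.linspace(0.0, 100.0, 2001)           # 20 Hz acquisition
roll = 30.0 * np.sin(0.1 * np.pi * t)       # deg, slow cornering cycles
d = 0.003 * np.abs(roll) + 0.02             # m, invented linear trend
slope, intercept = preprocess(d, roll)
print(slope, intercept)
```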
It can be clearly seen how, in order to better tackle the corners, the riders move
their bodies towards the inside of the curve to achieve a lower center of gravity of the
entire driver–motorcycle inertial system, therefore allowing the vehicle to achieve a
greater forward speed during cornering for a given roll angle. All this is strictly related
to the travel speed and to the longitudinal and lateral forces exchanged between the tire
and the asphalt, which ensure the balance of the motorcycle when cornering.
Finally, the points where the roll angle values are high and the “d” values are very
small are due to the intrinsic characteristics of the rider’s movements during the direction
change maneuvers. Points (0,0) of the diagrams, expected to be physical (the rider position
should be symmetrical at zero roll angle) in Figure 15, do not belong to the linear regression
because it interpolates the data giving priority to the linear part of the dataset at roll
angles > 3°.


Figure 14. Correlation of roll angle vs. “d” with Fourier fitting curve: OpenPose (a) and neural
network (b).
Figure 15. Linear regression line: OpenPose (on left) and neural network (on right).
4. Conclusions

The objective of determining the motion of a motorcycle rider using machine learning
algorithms processing images through CNN motion capture techniques has been pursued
in this paper, due to the complexity of developing and running physical-based algorithms,
and the difficulty in parameterization, vehicle design and performance optimization
applications.

The CoG parameter plays, in fact, a fundamental role in vehicle dynamics simulations
and in the design phase of the motorcycle, since the rider and the motorcycle are not two
separate systems, but fully integrated bodies whose deep understanding is a starting point
to achieve maximum performance both in terms of safety and racing competitiveness.

The application of a technique based on the use of neural networks has made it possible
to identify the position of several key points belonging to the human body, starting from
the video frames acquired at the rear edge of a motorcycle from a gaming simulator.
Such a choice was made because the reliability of the video data is not a main focus of the
work, which aims to set a methodology that will be then replicated with real vehicle video data.
The quality of the results obtained is closely linked to the potential of the OpenPose
software, which, as illustrated, can have significant limits in the recognition of key
points in particular positions. Despite this, the presented activity establishes a
methodological approach that could be further improved in terms of data quality, given a
more reliable acquisition system, while retaining its feasibility.
The training of a neural network, even applied to frames reproducing partial visibility
of the driver, allowed us to determine the key points not visible to the camera, thus also
guaranteeing the calculation of the center of gravity in conditions in which such a task
could hardly be achievable.
Finally, a preliminary function, expressing the relative displacement of the driver’s center
of gravity towards the vehicle rolling axis as a function of the roll angle, has been proposed.
The determination of the driver’s center of gravity plays a fundamental role in the
overall dynamics of the system. Video analysis techniques represent a novel and still-developing
discipline, through which it will be increasingly possible to better understand
the motorcycle–rider relationship.
The practical implications of the presented study will involve the use of the developed
algorithms in activities regarding vehicle design and motorsport analysis, for which the
continuous and correct information on the rider’s CoG is an element of crucial interest as
concerns the effect of the body motion on vehicle dynamics and the ride/handling attitude
of the vehicle to be virtually prototyped.

Author Contributions: Data curation, F.C.; Funding acquisition, G.R.; Methodology, D.S.; Software,
D.D.; Supervision, F.F.; Validation, A.S. All authors have read and agreed to the published version of
the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Kourtzi, Z.; Shiffrar, M. Dynamic representations of human body movement. Perception 1999, 28, 49–62. [CrossRef] [PubMed]
2. Kanda, T.; Ishiguro, H.; Imai, M.; Ono, T. Body movement analysis of human-robot interaction. IJCAI 2003, 3, 177–182.
3. Panjan, A.; Sarabon, N. Review of Methods for the Evaluation of Human Body Balance. Sport Sci. Rev. 2012, 19, 131. [CrossRef]
4. Catena, R.D.; Chen, S.H.; Chou, L.S. Does the anthropometric model influence whole-body center of mass calculations in gait? J.
Biomech. 2017, 59, 23–28. [CrossRef] [PubMed]
5. Cheli, F.; Mazzoleni, P.; Pezzola, M.; Ruspini, E.; Zappa, E. Vision-based measuring system for rider’s pose estimation during
motorcycle riding. Mech. Syst. Signal Process. 2013, 38, 399–410. [CrossRef]
6. Cimolin, V.; Galli, M. Summary measures for clinical gait analysis: A literature review. Gait Posture 2014, 39, 1005–1010. [CrossRef]
7. Durkin, J.L.; Dowling, J.J.; Andrews, D.M. The measurement of body segment inertial parameters using dual energy X-ray
absorptiometry. J. Biomech. 2002, 35, 1575–1580. [CrossRef]
8. Munoz, F.; Rougier, P.R. Estimation of centre of gravity movements in sitting posture: Application to trunk backward tilt. J.
Biomech. 2011, 44, 1771–1775. [CrossRef]
9. Jaffrey, M.A. Estimating Centre of Mass Trajectory and Subject-Specific Body Segment Parameters Using Optimisation Approaches; Victoria
University: Melbourne, Australia, 2008; pp. 1–389.
10. Mündermann, L.; Corazza, S.; Andriacchi, T.P. The evolution of methods for the capture of human movement leading to
markerless motion capture for biomechanical applications. J. Neuro Eng. Rehabil. 2006, 3, 1–11. [CrossRef]
11. Hasler, N.; Rosenhahn, B.; Thormählen, T.; Wand, M.; Gall, J.; Seidel, H.P. Markerless motion capture with unsynchronized
moving cameras. IEEE Conf. Comput. Vis. Pattern Recognit. 2009, 224–231. [CrossRef]
12. Bakhtiari, A.; Bahrami, F.; Araabi, B.N. Real Time Estimation and Tracking of Human Body Center of Mass Using 2D Video
Imaging. In Proceedings of the 1st Middle East Conference on Biomedical Engineering 2011, Sharjah, United Arab Emirates,
21–24 February 2011. [CrossRef]
Vehicles 2021, 3 389

13. Cronin, N.J.; Rantalainen, T.; Ahtiainen, J.P.; Hynynen, E.; Waller, B. Markerless 2D kinematic analysis of underwater running:
A deep learning approach. J. Biomech. 2019, 87, 75–82. [CrossRef]
14. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 21–26 July 2017; pp. 1302–1310. [CrossRef]
15. D’Andrea, D.; Cucinotta, F.; Farroni, F.; Risitano, G.; Santonocito, D.; Scappaticci, L. Development of Machine Learning Algorithms
for the Determination of the Centre of Mass. Symmetry 2021, 13, 401. [CrossRef]
16. Rice, R.S. Rider skill influences on motorcycle maneuvering. SAE Trans. 1978. [CrossRef]
17. Liu, T.S.; Wu, J.C. A Model for a Rider-Motorcycle System Using Fuzzy Control. IEEE Trans. Syst. Man Cybern. 1993, 23, 267–276.
[CrossRef]
18. Wang, Q.; Kurillo, G.; Ofli, F.; Bajcsy, R. Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft
Kinect. In Proceedings of the 2015 International Conference on Healthcare Informatics, Dallas, TX, USA, 21–23 October 2015.
[CrossRef]
19. Kirk, A.G.; O’Brien, J.F.; Forsyth, D.A. Skeletal Parameter Estimation from Optical Motion Capture Data. In Proceedings of the
IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June
2005. [CrossRef]
20. Zordan, V.B.; van der Horst, N.C. Mapping Optical Motion Capture Data to Skeletal Motion Using a Physical Model. In Proceedings
of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, CA, USA, 26–27 July 2003.
21. Cossalter, V.; Lot, R.; Massaro, M. Motorcycle Dynamics. In Modelling, Simulation and Control of Two-Wheeled Vehicles; Wiley &
Sons: London, UK, 2014.
22. Boniolo, I.; Savaresi, S.M.; Tanelli, M. Roll angle estimation in two-wheeled vehicles. IET Control Theory Appl. 2009, 3, 20–32.
[CrossRef]
23. Schlipsing, M.; Schepanek, J.; Salmen, J. Video-Based Roll Angle Estimation for Two-Wheeled Vehicles. In Proceedings of the 2011
IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011. [CrossRef]
24. Farroni, F.; Mancinelli, N.; Timpone, F. A real-time thermal model for the analysis of tire/road interaction in motorcycle
applications. Appl. Sci. 2020, 10, 1604. [CrossRef]
25. Czart, W.R.; Robaszkiewicz, S. Openpose. Acta Phys. Pol. A 2004. [CrossRef]
26. Martinez, G.H. Single-Network Whole-Body Pose Estimation. In Proceedings of the IEEE/CVF International Conference on
Computer Vision, Seoul, Korea, 28 October 2019. [CrossRef]
27. Osokin, D. Real-time 2D multi-person pose estimation on CPU: Lightweight OpenPose. arXiv 2018, arXiv:1811. [CrossRef]
28. Frosali, G.; Minguzzi, E. Meccanica Razionale per l’Ingegneria; Esculapio: Lucca, Italy, 2015.
29. Yoganandan, N.; Pintar, F.A.; Zhang, J.; Baisden, J.L. Physical properties of the human head: Mass, center of gravity and moment
of inertia. J. Biomech. 2009, 42, 1177–1192. [CrossRef]
30. Zatsiorsky, V.M.; King, D.L. An algorithm for determining gravity line location from posturographic recordings. J. Biomech. 1997,
31, 161–164. [CrossRef]
31. de Leva, P. Adjustments to zatsiorsky-seluyanov’s segment inertia parameters. J. Biomech. 1996, 29, 1223–1230. [CrossRef]
32. Bova, M.; Massaro, M.; Petrone, N. A three-dimensional parametric biomechanical rider model for multibody applications. Appl.
Sci. 2020, 10, 4509. [CrossRef]
33. Demuth, H.; Beale, M. Neural Network Toolbox—For Use with MATLAB. MathWorks 2002. [CrossRef]
34. Pan, J.; Sayrol, E.; Giro, I.; Nieto, X.; McGuinness, K.; O’connor, N.E. Shallow and Deep Convolutional Networks for Saliency
Prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 30 June 2016.
[CrossRef]
35. Schubert, P.; Kirchner, M. Ellipse area calculations and their applicability in posturography. Gait Posture 2014, 39, 518–522.
[CrossRef]
36. Sforza, A.; Lenzo, B.; Timpone, F. A state-of-the-art review on torque distribution strategies aimed at enhancing energy efficiency
for fully electric vehicles with independently actuated drivetrains. Int. J. Mech. Control 2019, 20, 3–15.
37. Sharifzadeh, M.; Farnam, A.; Timpone, F.; Senatore, A. Stabilizing a Vehicle Platoon with the Unidirectional Distributed Adaptive
Sliding Mode Control. Int. Conf. Mechatron. Technol. ICMT 2019. [CrossRef]
38. Pleß, R.; Will, S.; Guth, S.; Hofmann, M.; Winner, H. Approach to a Holistic Rider Input Determination for a Dynamic Motorcycle
Riding Simulator. In Proceedings of the Bicycle and Motorcycle Dynamics Conference, Milwaukee, WI, USA, 21–23 September 2016.
39. Cossalter, V.; Doria, A.; Fabris, D.; Maso, M. Measurement and identification of the vibration characteristics of motorcycle riders.
In Proceedings of the Noise and Vibration Engineering: Proceedings of ISMA 2006, Leuven, Belgium, 18–20 September 2006.
40. Nagasaka, K.; Ichikawa, K.; Yamasaki, A.; Ishii, H. Development of a Riding Simulator for Motorcycles; SAE Technical Paper; SAE
International: Warrendale, PA, USA, 2018. [CrossRef]
