Modeling and Prediction of Competitive Performance in Swimming
Edelmann-Nusser, Hohmann, and Henneberg
Introduction
The analysis of training processes is one of the most important issues of training
science with respect to assisting coaches in elite sports to monitor training and peak
athletic performances in crucial competitions. The performance in swimming is
closely connected to physiological adaptations that are induced by the athlete’s
training program. Several studies focused on adaptation in swimming (1–5, 8, 12,
14) are based on linear mathematical concepts like linear differential equations or
regression analysis. Biological adaptation, however, is a complex non-linear problem:
the adaptation of a biological system changes the system itself, so its adaptive
behavior can itself change over time. Moreover, doubling the training input does
not double the performance output. Linear models can therefore approximate the
non-linear adaptive behavior only within a very small range of the modeled
performance output.
The purpose of this paper is to demonstrate that the adaptive behavior of an elite
female swimmer can be modeled by means of the non-linear mathematical method
of artificial neural networks. The developed model was used to predict the competi-
tive performance (200-m backstroke) at the Olympic Games in Sydney in 2000.
Methods
Data Collection
The training process lasted a total of 95 weeks, from week 01/1998 to week 39/2000.
According to the system of Fry, Morton, and Keast (9), the training process was
divided into different preparation macrocycles, including final competitions. The
macrocycles consisted of 6–14 weeks (microcycles) of training preparation and 1–3
weeks of competitions.
The data consisted of 19 competitive performances (200-m backstroke) and
documented training loads in three zones of swim training intensity and two catego-
ries of dryland training. The three zones of training intensity were controlled by
frequent lactate testing in the course of the training process. Table 1 shows the
documented categories of training. For each week and each category, the training
input was quantified according to the third column of Table 1.
Data Analysis
Three analyses were conducted:
• The first analysis determined the influence of the 2-week taper phase prior to
the 19 competitions (model A, see Figure 1). The function of the taper is to
allow the athlete to recover from the preceding high training loads and to peak
her performance.
• The second analysis determined the influence of the high load training phase 3
and 4 weeks prior to the 19 competitions (model B, see Figure 1). This “crash”
cycle normally contains very intense and exhaustive training, and functions to
create a state of slight overreaching (14) in the athlete. That state of transient
fatigue allows the athlete to reach an accumulated, and thus optimal,
supercompensation after the later taper.
• The third analysis resulted in an overall model to determine the influence of a
4-week phase prior to the 19 competitions (overall model).
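The two input windows can be sketched as follows, assuming a weekly training-load matrix with one row per week and one column per documented category; the array names, data, and window offsets shown here are illustrative, not the study's actual records:

```python
import numpy as np

# Hypothetical weekly training loads: 95 weeks x 5 documented categories
# (three swim-intensity zones plus two dryland categories, as in Table 1).
rng = np.random.default_rng(42)
loads = rng.uniform(0, 10, size=(95, 5))

def model_a_input(comp_week):
    """Taper: training loads of the 2 weeks prior to the competition."""
    return loads[comp_week - 2:comp_week].ravel()

def model_b_input(comp_week):
    """Crash cycle: training loads 3 and 4 weeks prior to the competition."""
    return loads[comp_week - 4:comp_week - 2].ravel()

x_a = model_a_input(40)   # 2 weeks x 5 categories -> 10 input values
x_b = model_b_input(40)
print(x_a.shape, x_b.shape)   # (10,) (10,)
```

The overall model then covers the full 4-week window by combining both partial models, as described under Validation.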
Validation
All three models were validated by the “leave-one-out” procedure (see Figure 3).
The procedure used 18 of the 19 data sets of the participant in the Olympic Games to
train the network. Hence, during pre-training, 46 data sets (28 + 18) were used; after
pre-training, 18 data sets were used. The athlete’s training input of the remaining
data set was then used to “predict”/model the competitive performance of that data
set. Afterwards, the “predicted”/modeled competitive performance was compared
with the real competitive performance, and the error was computed (error = modeled
performance – real performance). This procedure was applied to each data set and
each model, so 2 × 19 modeled performances were computed to validate the neural
models. The overall model was then computed as the mean value of the correspond-
ing modeled performances of model A and model B. Table 3 shows the mean error
and standard deviation of the modeled performances in this procedure.
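The leave-one-out loop can be sketched as below. The model here is a trivial mean predictor just to exercise the loop; in the study the neural network (and, later, the regression) would plug in via the `fit`/`predict` arguments. All data and names are synthetic:

```python
import numpy as np

def leave_one_out(X, y, fit, predict):
    """For each of the n data sets, train on the other n-1, model the held-out
    performance, and record error = modeled performance - real performance."""
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i              # leave data set i out
        model = fit(X[keep], y[keep])
        errors[i] = predict(model, X[i]) - y[i]
    return errors.mean(), errors.std()

# Synthetic stand-in data: 19 competitions, performances in LEN points.
rng = np.random.default_rng(0)
X = rng.normal(size=(19, 10))
y = 850 + rng.normal(scale=10, size=19)

mean_err, sd_err = leave_one_out(
    X, y,
    fit=lambda X, y: y.mean(),                # stand-in model: mean predictor
    predict=lambda m, x: m,
)
print(round(mean_err, 3), round(sd_err, 3))
```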
Finally, the results of Table 3 were compared to the results of multiple linear
regression analyses.
According to the leave-one-out method, multiple linear regression analyses
were used to compute each of the 19 competition performances for model A as well
as model B. Eighteen of the 19 data sets were used to compute the coefficients of the
equation of Figure 4. Each time, the coefficients were used to compute the left-out
competition performance. Then the overall model was computed as the mean value
of the corresponding modeled performances of model A and model B. Table 4
shows the mean error and standard deviation of the modeled performances upon
multiple linear regression analysis. It is evident that the results derived from the
neural networks (see Table 3) are much better. After this validation, the Olympic
competitive performance was predicted.
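The regression variant of the leave-one-out procedure can be sketched as follows. The least-squares fit stands in for the equation of Figure 4 (which is not reproduced in this excerpt); the number of predictors and all data are illustrative:

```python
import numpy as np

def regress(X, y):
    """Least-squares coefficients for y ~ X.b + intercept."""
    A = np.c_[X, np.ones(len(X))]             # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x):
    return np.append(x, 1.0) @ coef

# Leave-one-out with multiple linear regression on synthetic stand-in data.
rng = np.random.default_rng(1)
X = rng.normal(size=(19, 4))                  # 19 competitions, 4 load predictors
y = 860 + X @ np.array([3.0, -2.0, 1.0, 0.5]) + rng.normal(scale=2.0, size=19)

errors = []
for i in range(19):
    keep = np.arange(19) != i                 # 18 data sets fit the coefficients
    coef = regress(X[keep], y[keep])
    errors.append(predict(coef, X[i]) - y[i]) # error on the left-out competition
errors = np.asarray(errors)
print(round(errors.mean(), 3), round(errors.std(), 3))
```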
For model A, a neural network was trained for 5000 training steps with all 19
data sets of the Olympic participant and 28 data sets of the other swimmer (pre-
training). Then the neural network was trained with the 19 data sets of the Olympic
participant for the 5000 remaining training steps (main-training). Next, the Olympic
competitive performance was predicted on the basis of the training loads of the taper
phase before the Olympic competition (prediction of model A).
For model B, a neural network was trained for 1000 training steps with all 19
data sets of the Olympic participant and 28 data sets of the other swimmer (pre-
training). Then the neural network was trained with the 19 data sets of the Olympic
participant for the 9000 remaining training steps (main-training). Next, the Olympic
competitive performance was predicted on the basis of the training loads of the crash
cycle before the Olympic competition (prediction of model B).

Table 3 Mean Error and Standard Deviations of the Error of the Modeling of the
19 Competitive Performances (LEN-Points) Upon Neural Networks

Note. The mean error of 12.02 of the overall model is equivalent to differences of +0.62 s
or –0.61 s in the mean time of all nineteen 200-m backstroke races of 2:12.94 min:s.

Table 4 Mean Error and Standard Deviations of the Error of the Modeling of the
19 Competitive Performances (LEN-Points) Upon Multiple Linear Regression
Analyses
In the final step, the prediction of the overall model was computed as the mean
value of the prediction of model A and the prediction of model B.
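The two-phase training schedule (here model B's: 1000 pre-training steps on the pooled 47 data sets, then 9000 main-training steps on the 19 data sets of the Olympic participant) can be sketched with a minimal one-hidden-layer network. The architecture, learning rate, and data below are synthetic assumptions; the study's actual network parameters are those of Table 2:

```python
import numpy as np

def train(net, X, y, steps, lr=1e-3):
    """Plain gradient descent on mean-squared error for a one-hidden-layer net."""
    W1, b1, W2, b2 = net
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)              # hidden layer
        pred = h @ W2 + b2                    # linear output
        err = pred - y[:, None]
        # backpropagation of the MSE gradient
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)        # tanh' = 1 - tanh^2
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

rng = np.random.default_rng(1)
n_in, n_hid = 10, 4
net = (rng.normal(scale=0.1, size=(n_in, n_hid)), np.zeros(n_hid),
       rng.normal(scale=0.1, size=(n_hid, 1)), np.zeros(1))

# Synthetic stand-ins: 47 pooled data sets (19 + 28), of which the first 19
# represent the Olympic participant.
X_pool = rng.normal(size=(47, n_in)); y_pool = rng.normal(size=47)
X_own, y_own = X_pool[:19], y_pool[:19]

net = train(net, X_pool, y_pool, steps=1000)  # pre-training (model B schedule)
net = train(net, X_own, y_own, steps=9000)    # main-training
```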
Results
Figure 5 compares real competitive performances and modeled competitive perfor-
mances of the overall model. The overall model predicted an Olympic competitive
performance of 2:12.59 min:s (871.24 LEN-points), while the real competitive
performance was 2:12.64 min:s (870 LEN-points).
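In raw time, the prediction missed the real result by only 0.05 s, which a quick conversion confirms:

```python
def to_seconds(t):
    """Convert a min:s string such as '2:12.59' to seconds."""
    m, s = t.split(":")
    return 60 * int(m) + float(s)

predicted = to_seconds("2:12.59")   # overall-model prediction
real = to_seconds("2:12.64")        # real Olympic performance
print(round(real - predicted, 2))   # -> 0.05
```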
Discussion
The results demonstrate that neural networks are excellent at modeling and predict-
ing competitive performances on the basis of training data. The problem that neural
networks generally require many data sets for training was overcome by using
data sets of another athlete. However, this was only possible because the documented
training loads were the same for both athletes. It is necessary that the adaptive
behavior of both athletes is similar. We assumed this, but we did not know it before
the modeling; hence, there is no guarantee that such modeling and prediction can be
done with data sets of any other athletes. The validation procedure must be con-
ducted for each athlete and, for each set of results, one must decide whether the
neural network is a good or poor model of the adaptive behavior of the athlete. To
establish a good model, it may also be necessary to change some of the training
parameters (see Table 2). Currently, however, there are no rules for changing these
parameters, apart from trial and error.
A good model is not only able to predict competitive performance; it may also
be used to simulate the prospective performance responses of the athlete under a
slightly changed structure of training loads. Thus, after some training analysis,
the trained neural network allows the coach to simulate
References
1. Banister EW. 1982. Modeling elite athletic performance. In: MacDougall JD, Wenger
HW, Green HJ, editors. Physiological testing of elite athletes. Champaign, IL: Human
Kinetics. p. 403-25.
2. Banister EW, Calvert TW. 1980. Planning for future performance: implications for long
term training. Canadian Journal of Applied Sport Sciences 5:170-76.
Acknowledgment
This study was funded by the Federal Institute of Sports Science (Bundesinstitut für
Sportwissenschaft; reference no.: VF 0407/16/02/2000), Bonn, Germany.