
European Journal of Sport Science, vol. 2, issue 2
©2002 by Human Kinetics Publishers and the European College of Sport Science

Modeling and Prediction of Competitive Performance in Swimming Upon Neural Networks

Jürgen Edelmann-Nusser, Andreas Hohmann, and Bernd Henneberg

Jürgen Edelmann-Nusser <juergen.edelmann-nusser@gse-w.uni-magdeburg.de> is with the Department of Sports Science at Otto-von-Guericke-University Magdeburg, 39104 Magdeburg, Germany. Andreas Hohmann is with the Department of Sports Science at the University of Potsdam, 14469 Potsdam, Germany. Bernd Henneberg is with SC Magdeburg, Brandenburgerstr. 6, 39104 Magdeburg, Germany.
The purpose of the paper is to demonstrate that the performance of an elite
female swimmer in the finals of the 200-m backstroke at the Olympic Games
2000 in Sydney can be predicted by means of the nonlinear mathematical
method of artificial neural networks (Multi-Layer Perceptrons). The data con-
sisted of the performance output of 19 competitions (200-m backstroke) prior to
the Olympics and the training input data of the last 4 weeks prior to each
competition. Multi-Layer Perceptrons with 10 input neurons, 2 hidden neurons,
and 1 output neuron were used. Since the data of 19 competitions are insufficient
to train such networks, the training input and competition data of another
athlete were used to pre-train the neural networks. The neural models were
validated by the “leave-one-out” method and then used to predict the Olympic competitive
performance. The results show that the modeling was very precise; the error of
the prediction was only 0.05 s, with a total swim time of 2:12.64 min:s.
Key Words: swimming, forecasting, physiological adaptation, Multi-Layer
Perceptron
Key points:
• The Olympic competitive performance of a single female elite swimmer is
modeled very precisely using neural networks.
• The problem of a small number of data sets is overcome by pre-training with
data sets of another swimmer.
• The results support a synergetic approach to training adaptation.

Introduction
The analysis of training processes is one of the most important issues in training
science with respect to assisting coaches in elite sports in monitoring training and in
peaking athletic performance for crucial competitions. The performance in swimming is
closely connected to physiological adaptations that are induced by the athlete’s
training program. Several studies focused on adaptation in swimming (1–5, 8, 12,

Jürgen Edelmann-Nusser <juergen.edelmann-nusser@gse-w.uni-magdeburg.de> is


with the Department of Sports Science at Otto-von-Guericke-University Magdeburg, 39104
Magdeburg, Germany. Andreas Hohmann is with the Department of Sports Science at the
University of Potsdam, 14469 Potsdam, Germany. Bernd Henneberg, SC Magdeburg,
Brandenburgerstr. 6, 39104 Magdeburg, Germany.

1
2 / Edelmann-Nusser, Hohmann, and Henneberg

14) are based on linear mathematical concepts like linear differential equations or
regression analysis. But biological adaptation is a complex non-linear problem
because the adaptation of a biological system leads to changes in the system itself—
that is, the adaptive behavior can change. Further, it is commonly known that doubling
the training input does not double the performance output. Hence, linear models can
only approximate the non-linear adaptive behavior in a very small range of the
modeled performance output.
The purpose of this paper is to demonstrate that the adaptive behavior of an elite
female swimmer can be modeled by means of the non-linear mathematical method
of artificial neural networks. The developed model was used to predict the competi-
tive performance (200-m backstroke) at the Olympic Games in Sydney in 2000.

Methods
Data Collection
The training process lasted a total of 95 weeks from week 01/1998 to week 39/2000.
According to the system of Fry, Morton, and Keast (9), the training process was
divided into different preparation macrocycles, including final competitions. The
macrocycles consisted of 6–14 weeks (microcycles) of training preparation and 1–3
weeks of competitions.
The data consisted of 19 competitive performances (200-m backstroke) and
documented training loads in three zones of swim training intensity and two catego-
ries of dryland training. The three zones of training intensity were controlled by
frequent lactate testing in the course of the training process. Table 1 shows the
documented categories of training. For each week and each category, the training
input was quantified according to the third column of Table 1.

Table 1 Documented Categories of Training

Category of training | Abbreviation | Quantification
Compensation and maintenance aerobic endurance training at and slightly above the aerobic threshold (2–3 mmol/L blood lactate) | End I | Kilometers per week
Developmental and overload aerobic endurance training at and slightly above the anaerobic threshold (4–6 mmol/L blood lactate) | End II | Kilometers per week
Anaerobic power training, speed training, and competitions (6–20 mmol/L blood lactate) | End III | Kilometers per week
Dryland strength training | Strength h | Hours of training
Dryland general conditioning training | Conditioning h | Hours of training

The competitive performances in the 200-m backstroke events were transformed
into LEN-points according to the point system of the Ligue Européenne de Natation.
For this purpose, we used the LEN-point table for 1997–2000, which ranges
from 1 to 1200 points and in which the then-current World Record (e.g., in the women's 200-m
backstroke: 2:06.62 min:s) serves as the reference value of 1000 points.
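For illustration (not part of the original study), the conversion can be sketched in Python. The paper does not give the LEN formula; the sketch assumes the cubic time-to-points relation used by today's swimming point scores, with the World Record of that time as the 1000-point reference. The actual 1997–2000 LEN table may differ slightly, but this assumption approximately reproduces the values quoted in the Results below.

```python
# Sketch under a cubic time-to-points assumption; the exact LEN table may differ.
RECORD_S = 126.62      # women's 200-m backstroke World Record at the time (2:06.62)
BASE_POINTS = 1000.0   # points assigned to the record time

def swim_points(time_s: float) -> float:
    """LEN-point score for a swim time in seconds (cubic assumption)."""
    return BASE_POINTS * (RECORD_S / time_s) ** 3

def points_to_time(points: float) -> float:
    """Invert the cubic relation: point score back to a time in seconds."""
    return RECORD_S / (points / BASE_POINTS) ** (1.0 / 3.0)

print(round(swim_points(2 * 60 + 12.64)))   # ~870 points for 2:12.64 min:s
print(round(points_to_time(871.24), 2))     # ~132.57 s, close to the 2:12.59 min:s quoted below
```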

Data Analysis
Three analyses were conducted:
• The first analysis determined the influence of the 2-week taper phase prior to
the 19 competitions (model A, see Figure 1). The function of the taper is to
allow the athlete to recover from the preceding high training loads and to reach peak
performance.
• The second analysis determined the influence of the high load training phase 3
and 4 weeks prior to the 19 competitions (model B, see Figure 1). This “crash”
cycle normally contains very intense and exhaustive training, and functions to
create a state of slight overreaching (14) in the athlete. That state of transient
fatigue allows the athlete to reach an accumulated, and thus optimal,
supercompensation after the later taper.
• The third analysis resulted in an overall model to determine the influence of a
4-week phase prior to the 19 competitions (overall model).

For models A and B, Multi-Layer Perceptrons consisting of 10 input neurons,


2 hidden neurons, and 1 output neuron were used (see Figure 2). Ten input neurons
were necessary because, for both models, each of the 2 weeks is described by 5 training loads.
The overall model was computed as the mean of models A and B.
To train the Multi-Layer Perceptron of Figure 2, at least 40 data sets, each
consisting of one competitive performance and the accompanying 10 training loads
of 2 weeks, are necessary. But only 19 were available; therefore, data of a second
athlete were used: For a second elite female swimmer (400-m freestyle) who is no
longer active, 28 equivalent data sets were available. These data sets were used to
pre-train the neural networks.
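To illustrate the structure of one such data set (a sketch with hypothetical names, not taken from the paper), the 10-dimensional input vector simply concatenates the 5 documented loads of the 2 relevant weeks:

```python
# Hedged sketch of how one input vector might be assembled from the weekly
# loads of Table 1; the dict layout and variable names are hypothetical.
CATEGORIES = ["End I", "End II", "End III", "Strength h", "Conditioning h"]

def input_vector(weekly_loads, weeks):
    """weekly_loads: {week_index: {category: load}}; weeks: the pair (i, ii) of Figure 2."""
    return [weekly_loads[w][c] for w in weeks for c in CATEGORIES]

# Model A uses the 2 taper weeks, model B the 2 "crash" weeks before them:
# x_A = input_vector(loads_before_competition, weeks=(1, 2))
# x_B = input_vector(loads_before_competition, weeks=(3, 4))
```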
The training parameters of the neural nets are shown in Table 2.

Figure 1 — Temporal relationships of models A, B, and competition. Both models are used to compute the competitive performance on the basis of the training input of 2 weeks. Hence, for each model, we compiled 19 data sets, each consisting of 10 training loads of 2 weeks and 1 competitive performance.

Figure 2 — Multi-Layer Perceptron with 10 input neurons, 2 hidden neurons, and 1 output neuron (circles: neurons; lines: connections between the neurons). Corresponding to the data, each input neuron represents one documented training load of 1 week. For model A, i substitutes 1 and ii substitutes 2; for model B, i substitutes 3 and ii substitutes 4. Since the competitive performance shall be computed, the output layer consists of one neuron that represents the competitive performance.

Table 2 Training Parameters of the Multi-Layer Perceptrons

Variable | Model A | Model B
Transfer function, input layer | Linear | Linear
Transfer function, hidden layer | Tanh (hyperbolic tangent function) | Tanh (hyperbolic tangent function)
Transfer function, output layer | Linear | Linear
Total number of training steps | 10,000 | 10,000
Pre-training, number of training steps | 5000 | 1000
Initialization of synapses | Randomized [–0.1 . . . +0.1] | Randomized [–0.1 . . . +0.1]
Learning rate | 0.1 | 0.1
Decay of learning rate (per step) | 0.999999 | 0.999999
Presentation of training data sets | Randomized | Randomized
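As an illustration (a minimal sketch, not the authors' original software), the network of Figure 2 with the parameters of Table 2 can be written in a few lines of Python/NumPy. Bias terms, the squared-error loss, and the plain online gradient-descent update are our assumptions; the arrays X_own, y_own, X_other, y_other are hypothetical placeholders for the non-public training data.

```python
import numpy as np

rng = np.random.default_rng(0)

class MLP:
    """10-2-1 Multi-Layer Perceptron as in Figure 2: linear input, tanh hidden
    layer, linear output. Reconstructed from Table 2; bias terms and the plain
    online gradient-descent update are assumptions, not taken from the paper."""

    def __init__(self, n_in=10, n_hidden=2):
        # Synapses initialized randomly in [-0.1 ... +0.1] (Table 2).
        self.W1 = rng.uniform(-0.1, 0.1, size=(n_hidden, n_in))
        self.b1 = rng.uniform(-0.1, 0.1, size=n_hidden)
        self.W2 = rng.uniform(-0.1, 0.1, size=n_hidden)
        self.b2 = rng.uniform(-0.1, 0.1)

    def forward(self, x):
        h = np.tanh(self.W1 @ x + self.b1)   # hidden layer: hyperbolic tangent
        return self.W2 @ h + self.b2, h      # output layer: linear

    def train(self, X, y, steps, lr=0.1, lr_decay=0.999999):
        """Online training with randomized presentation of the data sets (Table 2)."""
        for _ in range(steps):
            i = rng.integers(len(X))
            out, h = self.forward(X[i])
            err = out - y[i]                          # gradient of 0.5 * (out - y)^2
            grad_h = err * self.W2 * (1.0 - h ** 2)   # backpropagated to the hidden layer
            self.W2 -= lr * err * h
            self.b2 -= lr * err
            self.W1 -= lr * np.outer(grad_h, X[i])
            self.b1 -= lr * grad_h
            lr *= lr_decay                            # decay of the learning rate per step
        return self

# Model A schedule (Table 2): 5000 pre-training steps on the combined data
# (own data sets plus 28 data sets of the second swimmer), then 5000
# main-training steps on the athlete's own data. X_* are (n, 10) load matrices,
# y_* are LEN-point scores; the names are hypothetical.
# net = MLP().train(np.vstack([X_own, X_other]), np.hstack([y_own, y_other]), steps=5000)
# net.train(X_own, y_own, steps=5000)
```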

Validation
All three models were validated by the “leave-one-out” procedure (see Figure 3).
The procedure used 18 of the 19 data sets of the participant in the Olympic Games to
train the network. Hence, during pre-training, 46 data sets (28 + 18) were used; after
pre-training, 18 data sets were used. The athlete’s training input of the remaining
data set was then used to “predict”/model the competitive performance of that data set.
Afterwards, the “predicted”/modeled competitive performance was compared with
the real competitive performance, and the error was computed (error = modeled per-
formance – real performance). This procedure was repeated for each data set and each
model; that is, 2 × 19 modeled performances were computed to validate the neural
models. Then the overall model was computed as the mean value of the correspond-
ing modeled performances of model A and model B. Table 3 shows the mean error
and standard deviation of the modeled performances in this procedure.
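A hedged sketch of this leave-one-out loop is given below. Since the authors' implementation is not available, scikit-learn's MLPRegressor (2 tanh hidden neurons, stochastic gradient descent) serves as a stand-in, with warm_start used to emulate the pre-training/main-training split; the per-step learning-rate decay of Table 2 is not reproduced, and all array names are hypothetical.

```python
# Hedged sketch of the leave-one-out validation; MLPRegressor is a stand-in,
# not the authors' software. X_own/y_own (19 sets) and X_other/y_other (28 sets)
# are hypothetical arrays. Call once per model, e.g. model A: 5000 + 5000 steps,
# model B: 1000 + 9000 steps.
import numpy as np
from sklearn.neural_network import MLPRegressor

def leave_one_out_errors(X_own, y_own, X_other, y_other,
                         pre_steps=5000, main_steps=5000):
    errors = []
    for k in range(len(X_own)):                      # 19 runs per model
        keep = np.arange(len(X_own)) != k            # leave data set k out
        net = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                           solver="sgd", learning_rate_init=0.1,
                           max_iter=pre_steps, warm_start=True, tol=0.0)
        # Pre-training: 18 own data sets plus 28 data sets of the second swimmer.
        net.fit(np.vstack([X_own[keep], X_other]),
                np.hstack([y_own[keep], y_other]))
        # Main training: the 18 own data sets only.
        net.max_iter = main_steps
        net.fit(X_own[keep], y_own[keep])
        predicted = net.predict(X_own[k:k + 1])[0]
        errors.append(predicted - y_own[k])          # error = modeled - real
    return np.array(errors)                          # mean/SD as reported in Table 3
```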
Finally, the results of Table 3 were compared to the results of multiple linear
regression analyses.
According to the leave-one-out method, multiple linear regression analyses
were used to compute each of the 19 competition performances for model A as well
as model B. Eighteen of the 19 data sets were used to compute the coefficients of the
equation of Figure 4. Each time, the coefficients were used to compute the remaining (19th)
competition performance. Then the overall model was computed as the mean value
of the corresponding modeled performances of model A and model B. Table 4
shows the mean error and standard deviation of the modeled performances upon
multiple linear regression analysis.

Figure 3 — “Leave-one-out” procedure. For model A as well as model B, 19 runs were performed, with each model “predicting” the 19 competitive performances.
It is evident that the results derived from the
neural networks (see Table 3) are much better. After this validation, the Olympic
competitive performance was predicted.
For model A, a neural network was trained for 5000 training steps with all 19
data sets of the Olympic participant and 28 data sets of the other swimmer (pre-
training). Then the neural network was trained with the 19 data sets of the Olympic
participant for the 5000 remaining training steps (main-training). Next, the Olympic
competitive performance was predicted on the basis of the training loads of the taper
phase before the Olympic competition (prediction of model A).
For model B, a neural network was trained for 1000 training steps with all 19
data sets of the Olympic participant and 28 data sets of the other swimmer (pre-
training). Then the neural network was trained with the 19 data sets of the Olympic
participant for the 9000 remaining training steps (main-training).

Table 3 Mean Error and Standard Deviations of the Error of the Modeling
of the 19 Competitive Performances (LEN-Points) Upon Neural Networks

Variable | Model A: “taper cycle” | Model B: “crash cycle” | Overall model
Mean error | 14.78 | 20.16 | 12.02
Standard deviation | 15.76 | 17.73 | 15.82

Note. The mean error of 12.02 of the overall model is equivalent to differences of +0.62 s or –0.61 s in the mean time of all nineteen 200-m backstroke races of 2:12.94 min:s.

Figure 4 — Linear equation to compute competitive performance. For model A, i substitutes 1 and ii substitutes 2; for model B, i substitutes 3 and ii substitutes 4.
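Figure 4 itself is not reproduced in this text version. Assuming one regression coefficient per documented training load (this notation is ours, not taken from the figure), the equation presumably has the form

\[
  \hat{p} = b_0 + \sum_{w \in \{\mathrm{i},\,\mathrm{ii}\}} \sum_{c=1}^{5} b_{w,c}\, x_{w,c},
\]

where \(\hat{p}\) is the competitive performance in LEN-points, \(x_{w,c}\) is the training load of week \(w\) in category \(c\) (End I, End II, End III, Strength h, Conditioning h), and the coefficients \(b_0\) and \(b_{w,c}\) are estimated from the 18 remaining data sets.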

Table 4 Mean Error and Standard Deviations of the Error of the Modeling of
the 19 Competitive Performances (LEN-Points) Upon Multiple Linear Regres-
sion Analyses

Variable | Model A: “taper cycle” | Model B: “crash cycle” | Overall model
Mean error | 39.55 | 37.22 | 34.19
Standard deviation | 29.93 | 25.72 | 18.72

Figure 5 — Comparison of the real competitive performances, the modeled performances (overall model), and the prediction of the Olympic competitive performance. The error of the prediction is 1.24 points and 0.05 s.

Next, the Olympic competitive performance was predicted on the basis of the training loads of the crash
cycle before the Olympic competition (prediction of model B).
In the final step, the prediction of the overall model was computed as the mean
value of the prediction of model A and the prediction of model B.

Results
Figure 5 compares real competitive performances and modeled competitive perfor-
mances of the overall model. The overall model predicted an Olympic competitive
performance of 2:12.59 min:s (871.24 LEN-points), while the real competitive
performance was 2:12.64 min:s (870 LEN-points).

Discussion
The results demonstrate that neural networks are excellent at modeling and predict-
ing competitive performances on the basis of training data. The problem of neural
networks generally requiring many data sets for training was overcome by using
data sets of another athlete. But this was only possible because the documented
training loads were the same for both athletes. It is necessary that the adaptive
behavior of both athletes is similar. We assumed this, but we did not know it before
the modeling; hence, there is no guarantee that such modeling and prediction can be
done with data sets of any other athletes. The validation procedure must be con-
ducted for each athlete and, for each set of results, one must decide whether the
neural network is a good or poor model of the adaptive behavior of the athlete. To
establish a good model, it may also be necessary to change some of the training
parameters (see Table 2). Currently, however, there are no rules for changing these
parameters, apart from trial and error.
A good model is not only able to predict competitive performance; it may also
be used to simulate the prospective performance responses of the
athlete under the influence of a slightly changed structure of training loads. Thus,
after some training analysis, the trained neural network allows the coach to simulate
the effects of certain modifications of the training program on the competitive
performance of the athlete. This makes the planning and monitoring of a training
process more effective.
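To make the idea concrete, such a what-if simulation could look like the following sketch (an illustration under the stand-in model used above; all names are hypothetical):

```python
# Hedged sketch of the "what-if" use of a trained model: modify one weekly
# training load and re-predict the point score. `trained_net` is any fitted
# regressor with a predict() method (e.g., the MLPRegressor stand-in above);
# `planned_loads` is the 10-element vector of the 5 documented loads for each
# of the 2 relevant weeks, in the same order as in the earlier data sketch.
import numpy as np

def simulate_variant(trained_net, planned_loads, week, category, new_value):
    """Return (baseline, variant) predictions after changing one training load."""
    baseline = trained_net.predict(np.asarray(planned_loads).reshape(1, -1))[0]
    variant_loads = np.asarray(planned_loads, dtype=float).copy()
    variant_loads[week * 5 + category] = new_value   # e.g. week=0, category=0 -> End I of the first week
    variant = trained_net.predict(variant_loads.reshape(1, -1))[0]
    return baseline, variant
```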
To investigate the applicability and limitations of neural networks to model
and predict competitive performances, we used Multi-Layer Perceptrons (Figure 2)
to model 28 competitive performances of another athlete who was no longer active
(see 6). This neural modeling resulted in a mean error of 12.8 LEN-
points. As this is very close to the mean error of 12.02 LEN-points in Table 3, we can
assume that our modeling would be successful for other athletes, provided there are
at least 40 data sets and the documented categories of training are comparable. To
reduce mean error, more information about the training process (i.e., more docu-
mented categories of training as shown in Table 1) is necessary. But increasing the
number of categories in Table 1 also means increasing the number of input neurons
in Figure 2: A sixth category increases the number of input neurons to 12 and would
require at least 50 data sets for the training of the net. Another way to decrease the
error is to increase the number of neurons in the hidden layer, but this requires many
more data sets. For instance, a third neuron in the hidden layer would require at least
60 data sets. (The minimum number of data sets is about double the number of
connections between neurons; the 10-2-1 network of Figure 2 has 10 × 2 + 2 × 1 = 22
connections, which roughly corresponds to the 40 data sets stated above.) More single case studies using larger data sets are
necessary to gain experience about this method.
The accurate results of the neural modeling, compared with the poor results of
the linear regression analysis, emphasize a very important aspect of training sci-
ence: The adaptive behavior of the system athlete is quite a complex, non-linear
problem. This supports a synergetic approach to training adaptation. The synergetic
approach is to be seen as a metaphor for the adaptive behavior of the system athlete
in which the athlete enters a certain stable state of performance (the attractor) in a
self-organized way under the influence of the training load as a control parameter.
To analyze the synergetic behavior of a complex dynamic system, tools are needed
that predict what states of the system are more or less likely to occur. This question is
precisely addressed through neural networks, because neural networks are comput-
ing devices that are able to recognize or distinguish different kinds of input and
output patterns of the behavior of interest. Hence, from a synergetic point of view, a
successful neural modeling may be (but is not required to be) interpreted as a
representation of deviations of the different states of the system from equi-probabil-
ity, in our case the identification of stable states of the athletic performance (see 7,
pp. 211-12). This synergetic point of view is a very interesting, but still hypothetical,
aspect of modeling competitive performance with neural networks, because the non-linear
dynamical systems perspective is rapidly emerging as one of the dominant
metatheories in the natural sciences (10, 11), and there is reason to think that it will
provide integrative understanding in training science as well.

References
1. Banister EW. 1982. Modeling elite athletic performance. In: MacDougall JD, Wenger
HW, Green HJ, editors. Physiological testing of elite athletes. Champaign, IL: Human
Kinetics. p. 403-25.
2. Banister EW, Calvert TW. 1980. Planning for future performance: implications for long
term training. Canadian Journal of Applied Sport Sciences 5:170-76.
3. Busso T, Häkkinen K, Pakarinen A, Carasso C, Lacour J-R, Komi PV, Kauhanen H. 1990. A systems model of training responses and its relationship to hormonal responses in elite weight lifters. Eur J Appl Physiol 61:48-54.
4. Busso T, Denis C, Bonnefoy R, Geyssant A, Lacour JR. 1997. Modelling of adaptations to physical training by using a recursive least squares algorithm. J Appl Physiol 82:1685-93.
5. Chatard JC, Mujika IT. 1999. Training load and performance in swimming. In: Keskinen
KL, Komi PV, Hollander AP, editors. Biomechanics and medicine in swimming VIII.
Jyväskylä: University Press (Gummerus Printing). p. 429-34.
6. Edelmann-Nusser J, Hohmann A, Henneberg B. 2001. Modellierung von
Wettkampfleistung im Schwimmen mittels neuronaler Netze. In: Perl J, editor. Sport
und informatik VIII. Cologne: Sport und Buch Strauss. p. 11-20.
7. Eiser JR. 1994. Toward a dynamic conception of attitude consistency and change. In:
Vallacher RR, Nowak A, editors. Dynamical systems in social psychology. San Diego,
CA: Academic Press. p. 197-218.
8. Fitz-Clarke JR, Morton RH, Banister EW. 1991. Optimizing athletic performance by
influence curves. J Appl Physiol 71:1151-58.
9. Fry AC, Morton RH, Keast D. 1991. Overtraining in athletes: an update. Sports Medicine
12:32-65.
10. Haken H. 1983. Synergetics. An introduction. Nonequilibrium phase transitions in phys-
ics, chemistry and biology. Berlin: Springer.
11. Haken H. 1995. Erfolgsgeheimnisse der Natur. Reinbek bei Hamburg: RoRoRo.
12. Hohmann A. 1992. Analysis of delayed training effects in the preparation of the West
German water polo team for the Olympic Games 1988. In: MacLaren D, Reilly T, Lees A,
editors. Swimming science VI. London: E & F Spon. p. 213-17.
13. Hooper SL, Mackinnon LT. 1999. Monitoring regeneration in elite swimmers. In:
Lehmann M, Foster C, Gastmann U, Keizer H, Steinacker JM, editors. Overload, performance incompetence, and regeneration in sport. New York: Kluwer Academic/Plenum.
p. 139-48.
14. Kreider RB, Fry AC, O’Toole ML. 1998. Overtraining in sports: terms, definitions, and
prevalence. In: Kreider RB, Fry AC, O’Toole ML, editors. Overtraining in sport.
Champaign, IL: Human Kinetics. p. vii-ix.
15. Mujika IT, Busso T, Geyssant A, Chatard JC, Lacoste L, Barale F. 1996. Modeling the
effects of training in competitive swimming. In: Troup JP, Hollander AP, Strasse D,
Trappe SW, Cappaert JM, Trappe TA, editors. Biomechanics and medicine in swim-
ming VII. London: E&F Spon. p. 221-28.

Acknowledgment
This study was funded by the Federal Institute of Sports Science (Bundesinstitut für
Sportwissenschaft; reference no.: VF 0407/16/02/2000), Bonn, Germany.

About the Authors


Dr. Jürgen Edelmann-Nusser studied sports science and electrical engineering. He
worked at the University of Stuttgart in the Department of Sports Science (1995–
1998) and at the Department of Parallel and Distributed High Performance Systems
(1998–1999). In Magdeburg, his main fields of research are computer science in sports and sports equipment.
Prof. Dr. Andreas Hohmann is with the Institute of Sport Science at the University of
Potsdam, Germany. He was an assistant coach of the German water-polo national team at the
Olympic Games 1988 in Seoul. Today, he is a member of the scientific advisory board for the
German Swimming Federation.
Bernd Henneberg is a coach at SC Magdeburg and assistant coach of the German
national swim team.
