
A Survey of Nature Inspired Optimization Algorithms Applied to Cooperative Strategies in Robot Soccer

Asma S. Larik and Sajjad Haider
Artificial Intelligence Lab, Faculty of Computer Science,
Institute of Business Administration, Garden Road, Karachi-74400, Pakistan
asma.sanam@khi.iba.edu.pk, sajjad.haider@khi.iba.edu.pk

Abstract— Nature-inspired optimization algorithms are known for their inherent ability to solve complex problems in which multiple agents interact to perform a task at hand. This paper presents a survey of how these optimization algorithms have been applied to determine cooperative strategies for a team of soccer-playing agents. The survey discusses the contributions made by researchers and makes an effort to map nature-inspired optimization algorithms to the type of cooperative strategy evolved. A categorization of cooperative strategy is also presented, specifically in the domain of the RoboCup Soccer Simulation League, where agents cooperate in a virtual environment to develop a strategy without any issue of physical wear and tear. This study should serve as a starting point for teams participating in RoboCup competitions to enhance their strategies using ideas from nature-inspired algorithms.

Keywords— RoboCup Soccer, optimization, nature inspired, strategy, evolutionary approach, swarm approach

I. INTRODUCTION

Nature is full of instances where flocks of birds, groups of ants, swarms of honey bees, etc. exhibit cooperative strategies to execute their daily tasks. Humans have taken ideas from nature to solve real-life problems where cooperation is the key to success. These meta-heuristics [1] have made significant progress in various domains [2] and have proven successful in finding optimal solutions. Robot soccer is one of the many application fields where nature-inspired algorithms have been applied successfully. This survey focuses on the RoboCup Soccer Simulation Leagues [3], organized under the umbrella of the RoboCup competitions [4]. In these leagues, a group of agents cooperates in an adversarial environment.

The term cooperative strategy varies from scenario to scenario and depends on the context in which it is applied. In some studies [5]–[8], strategy denotes learning a set of low-level behaviors exhibited by each agent, while in others strategy takes the form of high-level roles [9]–[14] that are associated with agents in the field. The main contributions of this paper are as follows: (1) a categorization of nature-inspired algorithms, (2) a discussion of seminal works and recent research in the domain, (3) a categorization of cooperative strategies in the context of robot soccer, and (4) a mapping of the applications of nature-inspired optimization algorithms for evolving cooperative strategies in the domain of the RoboCup simulation leagues.

The organization of the paper is as follows: Section 2 provides a brief technical background on the design and classification of various nature-inspired optimization algorithms, while Section 3 provides a brief overview of the RoboCup domain. The types of cooperative strategy deployed in the RoboCup Soccer domain are discussed in Section 4. Section 5 presents a comprehensive study of the mapping of approaches proposed in the literature along with their limitations. Lastly, Section 6 concludes the paper and provides future research directions.

II. NATURE-INSPIRED OPTIMIZATION ALGORITHMS AND THEIR CATEGORIZATIONS

Nature-inspired optimization algorithms come under the umbrella of computational intelligence [15]. A broader categorization is given in Fig. 1.

Fig. 1. Categorization of nature-inspired algorithms

This family of algorithms uses an iterative approach over a population, guided by heuristics, and applies trial-and-error methods to optimize a given problem. They are further categorized into various classes based on the model, the problem representation and implementation considerations.

Evolutionary Algorithms [16] are based on the theory of evolution, which describes how species adapt over time to survive in their environment. In evolutionary algorithms, a population of individuals is used to find a solution for a given problem. Each individual has a number of genes that represent a possible solution. From the current population, offspring are created using mutation and crossover: mutation adds noise to the genes of an individual, modifying it, while crossover combines the genes of two individuals. The fitness of each individual is calculated by a fitness function. Evolution Strategies, Genetic Programming and Genetic Algorithms are all examples of algorithms that use this biological mechanism, from population to survival; the difference between them lies in their implementation [17].
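To make this template concrete, the following minimal sketch (our illustration, not drawn from any of the surveyed systems) evolves a population of bit-string genomes with truncation selection, one-point crossover and bit-flip mutation, using a toy "count the 1-bits" fitness:

import random

def evolve(fitness, genome_len=20, pop_size=30, generations=50, mutation_rate=0.05):
    """Minimal generational evolutionary loop: selection, crossover, mutation."""
    # Initial population: random bit-string genomes (candidate solutions).
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]               # truncation selection
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            # Mutation adds noise to individual genes, modifying the child.
            child = [g ^ 1 if random.random() < mutation_rate else g for g in child]
            offspring.append(child)
        pop = offspring
    return max(pop, key=fitness)

# Toy fitness ("OneMax"): the more 1-bits, the fitter the individual.
best = evolve(fitness=sum)
print(best, sum(best))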
Swarm Intelligence [1] is the collective, unsupervised behavior of animals that interact with each other as well as with the environment. There are numerous swarm-intelligence-based algorithms, namely Ant Colony Optimization [18], Particle Swarm Optimization [19], the Bee Colony Algorithm [1], the Bacterial Foraging Algorithm [1], the Firefly Algorithm [1], etc. In the context of robot soccer, we discuss two of them, PSO and ACO, which have been used for learning cooperative strategies. PSO imitates the behavior of a flock of birds searching for food [19]. For each particle (bird), a record is kept of its position, its velocity, the best position it has found so far, and the overall best position found by the flock. Instead of using the evolutionary operators of mutation and crossover, the velocity of each particle is adjusted according to its own flying experience and the other particles' flying experience. ACO is widely used by computer scientists to solve path problems; it models the foraging behavior of ants.
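The canonical PSO update can be sketched as follows (an illustrative toy minimization, not a robot-soccer controller); note the absence of mutation and crossover, replaced by a velocity rule blending inertia, personal experience and the swarm's experience:

import random

def pso(objective, dim=2, swarm_size=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Canonical PSO: each particle tracks position, velocity and personal best;
    the swarm tracks the global best position found so far."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=objective)[:]         # global best position
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity blends inertia, own experience and neighbours' experience.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy objective: sphere function, minimum at the origin.
print(pso(lambda p: sum(x * x for x in p)))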
Cultural Algorithms [20] are a class of algorithms derived from the process of cultural evolution in nature. Their significant feature is a knowledge component, separate from the population, known as the belief space. Actions are performed by individuals in the population space, and each individual is evaluated using the objective function. After the individuals' fitness values are scored, an acceptance function determines which of them should update the belief space. The experiences of the accepted individuals are then added to the belief space contents.
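A skeleton of this population-space/belief-space interplay might look as follows (our sketch; here the belief space holds only normative knowledge, i.e. per-dimension value ranges, whereas full cultural algorithms track several knowledge sources):

import random

def cultural_algorithm(objective, dim=2, pop_size=30, generations=60, accept_frac=0.2, lo=-5.0, hi=5.0):
    """Skeleton CA: a population space plus a belief space. Accepted (top)
    individuals update the belief space, which biases how new individuals
    are generated (the influence step)."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    belief = {"lo": [lo] * dim, "hi": [hi] * dim}   # normative knowledge: promising ranges
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        accepted = scored[:max(1, int(accept_frac * pop_size))]
        # Acceptance: the best individuals' experience tightens the believed ranges.
        for d in range(dim):
            belief["lo"][d] = min(ind[d] for ind in accepted)
            belief["hi"][d] = max(ind[d] for ind in accepted)
        # Influence: sample new individuals inside the believed promising region.
        pop = accepted + [[random.uniform(belief["lo"][d], belief["hi"][d]) for d in range(dim)]
                          for _ in range(pop_size - len(accepted))]
    return min(pop, key=objective)

print(cultural_algorithm(lambda p: sum(x * x for x in p)))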
Artificial Immune Systems [21] are a subfield of nature-inspired optimization algorithms motivated by immunology. They mimic the adaptive nature of biological immune functions. Their ability to adapt to varying pathogens makes such systems a suitable choice for various robotic applications, but they are not a part of this study.

III. ROBOCUP SOCCER DOMAIN

RoboCup Soccer [22] is a scientific venture that serves as an exciting platform for the advancement of research in artificial intelligence as well as robotics. The competition has the goal that, by the middle of the 21st century, a team of robots will defeat a team of humans at soccer. The competition is held every year, and several leagues have been designed to cater for different problems and challenges in robot coordination, locomotion, etc. There are two simulation leagues in this event, which do not involve physical robots; instead, simulated robots play soccer on a virtual field. In the simulation 2D league, the ball and the players are represented by circles on the plane of the soccer field, while in the simulation 3D league, the players are represented as articulated rigid bodies with 22 hinges. In the 2D league, commands such as move, dash, turn and kick are available; in 3D these commands do not exist, and locomotion is a major challenge. Fig. 2 depicts the 2D and 3D leagues, from left to right respectively.

Fig. 2. Two teams playing RoboCup Soccer Simulation: 2D league (left) and 3D league (right)

IV. COOPERATIVE STRATEGIES IN ROBOCUP SOCCER AND THEIR CATEGORIZATION

There have been numerous studies on the application of nature-inspired optimization algorithms in RoboCup Soccer for motion control, including walk and kick optimization [23]–[25], and for path planning [26], [27]. However, there is a growing trend of applying them at the level of strategy building [5], [6], [12], [13], [28]–[30]. The novelty of this survey is its discussion of techniques that apply nature-inspired algorithms to build coordination among a team of agents. This coordination is the cooperative strategy employed by teams. In some cases, the cooperative strategy is the high-level decision to shoot, kick or pass the ball to a teammate, while other works treat it as a set of low-level points each player needs to approach. Thus there is no consensus on a single definition, and the notion of strategy varies significantly from one application to another. In Fig. 3 we propose a loosely coupled classification of cooperative strategy based on the literature survey.

Fig. 3. Categorization of cooperative strategy

A. Strategy governed by a centralized coach

In this approach, the strategy is the particular region where an agent needs to be at a particular time instance [5], [30]. The strategy is communicated by a coach agent that sends the respective message to all the agents in the field, and only the intended agent executes the action. This approach has the limitations of requiring a coach agent to be developed and of the extra communication overhead involved. It also imposes a division of the entire field into sub-regions and the computation of low-level points for each agent.

B. Strategy as team formation (STF)

A strategy based on team formation denotes the placement of the agents other than the goalkeeper in the field [6], [28]. The formation can be either attacking or defensive depending upon the scenario faced. A formation like 4-3-3 denotes an attacking formation with four defenders, three midfielders and three attackers, while 4-5-1 is a very defensive formation in which only a single player attacks. The limitations of these approaches lie in deciding when to switch from one formation to another and how frequently to do so.

C. Strategy as a set of high-level roles (SHR)

This is a distributed approach in which strategy denotes assigning roles to the agents in the field [29]. These roles are computed depending upon the position of the ball. The player closest to the ball becomes the attack-leading player, and all the others take up their positions with respect to that player. The roles are dynamic and are allocated via the algorithm proposed by P. Stone and M. Veloso [14]. The bottleneck of this approach is that each player has a role that needs to be communicated in a timely manner. Secondly, role switching needs to be controlled, as frequent switching could lead to a chaotic situation in the field.

D. Strategy as a set of high-level behaviors (SHB)

In this category, a cooperative strategy is considered a set of high-level behaviors exhibited by the agents in the field, such as shooting the ball, passing the ball, or dribbling with the ball.

Some approaches [13] have taken strategy to be only the successful execution of a good pass from one player to a teammate. They train the agents to exhibit sets of passes in different scenarios and conclude that this is critical for team success. The limitations of this approach are that pass prediction is affected by noise in the simulated environment, and in many cases a shot at goal can be more fruitful than a pass.

In more recent studies, proposed by Chen et al. [12], behavior selection is made by learning a shooting model, a passing model and a dribbling model separately via a nature-inspired algorithm. This approach has the advantage of being robust, and real-time strategy execution is performed instantaneously using the available parameters.

V. LITERATURE REVIEW MAPPING NATURE-INSPIRED OPTIMIZATION CATEGORIZATION TO COOPERATIVE STRATEGIES

This section gives a comprehensive literature review of the studies in light of the above categorizations of cooperative strategies and nature-inspired models.

A. Evolving cooperative strategies via evolutionary algorithms

T. Nakashima et al. [5], [31] proposed the use of Genetic Algorithms to learn a team strategy in the 2D simulation league. They divided the field into 48 sub-regions, and the 10 players of each team, excluding the goalkeeper, could exist in any of the regions. The base code comprised two action modes: a ball-handling mode and a positioning mode. If the player was near the ball, the ball-handling mode was invoked; otherwise, the positioning mode was used. They created a repository of action rules, and these action rules represented the strategy. The action rules were defined as:

Rj: If the agent is in area Aj and the nearest opponent is in Bj, then the action is Cj, for j = 1, …, N

where Rj denotes the rule index, Aj is the index of the area, Bj is the index of the opponent, Cj is the consequent action, and N is the total number of action rules.

The chromosome comprised the action to be executed in each situation, and the action was propagated via a centralized coach agent. The fitness of a strategy was computed by running simulated matches and taking the average goal difference. They learnt the strategy against a single fixed opponent.
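Such a rule base could be encoded along the following lines (our paraphrase of the scheme in [5]; the action set and the discretization of the nearest opponent are assumptions for illustration):

import random

AREAS = range(48)                                  # field divided into 48 sub-regions [5]
ACTIONS = ["shoot", "pass", "dribble", "clear"]    # illustrative consequent actions Cj
OPPONENT_BINS = range(4)                           # assumed discretization of nearest opponent Bj

def random_strategy():
    """One chromosome = one consequent action per (area, opponent) situation,
    i.e. the rule  Rj: if agent in area Aj and nearest opponent in Bj, do Cj."""
    return {(a, b): random.choice(ACTIONS) for a in AREAS for b in OPPONENT_BINS}

def decide(strategy, area, opponent_bin):
    # Look up the consequent action Cj for the current situation.
    return strategy[(area, opponent_bin)]

strategy = random_strategy()
print(decide(strategy, area=10, opponent_bin=2))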
M. Lekavy [13] applied an evolutionary algorithm to evolve passes in the 2D simulation league. A chromosome denoted the sequence of actions to be carried out by a single player as a reaction to some standard situation; the sequence of actions was passes to teammates. The chromosome comprised three parts, namely the number of the player to pass to, the offset of the pass, and the activity time. Only the passes were controlled by the chromosome; when an agent did not have the ball, it reacted according to the default logic. For the population, two approaches were used: classical evolutionary and co-evolutionary. In the first approach, each individual had 11 chromosomes, one for each agent, and each individual was evaluated as a whole. In the latter approach, one individual was equal to one chromosome. New individuals were created by copying a randomly selected parent and mutating it. For the standard evolutionary approach, one individual was selected from the population and the game was played for 45 cycles. For the co-evolutionary approach, one individual from each of the 11 populations was selected, jointly representing a game strategy. The fitness function was a combination of the number of cycles for which the team controlled the ball and the distance between the opponent's goal and the last ball position. Each strategy was evaluated twice, and the final fitness was the minimum of the two. Experiments with learning strategies were carried out against a single fixed opponent.
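A sketch of this pessimistic, twice-evaluated fitness is given below (the run_match hook and the relative weighting of the two terms are our assumptions; [13] combines them in its own detail):

import random

def strategy_fitness(run_match, strategy):
    """Lekavy-style composite fitness [13]: reward ball-control time and
    proximity of the last ball position to the opponent goal; evaluate the
    strategy twice and keep the minimum (a noise-robust, pessimistic score).
    `run_match` is a hypothetical simulator hook returning
    (cycles_in_control, dist_last_ball_to_goal)."""
    scores = []
    for _ in range(2):
        control_cycles, dist_to_goal = run_match(strategy)
        # The 0.5 weight is illustrative only; the exact combination is not reproduced here.
        scores.append(control_cycles - 0.5 * dist_to_goal)
    return min(scores)

# Toy stand-in for the simulator, so the sketch runs end-to-end.
fake_match = lambda s: (random.randint(0, 45), random.uniform(0.0, 60.0))
print(strategy_fitness(fake_match, strategy=None))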
B. Evolving cooperative strategies via swarm algorithms

H. Okada et al. [6] utilized Particle Swarm Optimization to evolve team formations in the RoboCup Soccer Simulation 2D league. The Cartesian coordinates of the ten players at 15 possible positions of the ball were modeled as the elements of an individual solution, and an initial population of N particles was generated randomly. The authors used both single-objective and two-objective PSO. In the former, the fitness function was the total number of goals scored (or conceded); in the latter, the two objectives were to score more goals and to concede fewer. The results showed how formations for various team styles (e.g., offensive, defensive, balanced) could be obtained automatically, and the offensive formation of the teams was improved using two-objective PSO.
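The encoding and the two-objective evaluation might be sketched as follows (the coordinate range and the run_match simulator hook are our assumptions; [6] plugs such an evaluation into the PSO update described in Section II):

import random

N_PLAYERS, N_BALL_POSITIONS = 10, 15   # per [6]: ten field players, 15 reference ball positions

def random_formation():
    """One particle: an (x, y) pair for each of the 10 players at each of the
    15 reference ball positions, flattened into one real-valued vector."""
    return [random.uniform(-52.5, 52.5)            # pitch-sized coordinate range (assumed)
            for _ in range(N_PLAYERS * N_BALL_POSITIONS * 2)]

def two_objectives(run_match, formation):
    """Two-objective evaluation as in [6]: maximize goals scored, minimize
    goals conceded. `run_match` is a hypothetical simulator hook."""
    scored, conceded = run_match(formation)
    return scored, -conceded                       # both expressed as 'larger is better'

fake_match = lambda f: (random.randint(0, 5), random.randint(0, 5))
print(two_objectives(fake_match, random_formation()))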
Geetha et al. [28] presented the idea of mapping ant behaviors, such as nest protection and pheromone foraging, to the generation of team strategies in robot soccer. The nest-protecting behavior exhibited by ants was applied to the goalkeeper's strategy, while the pheromone-following technique was used as indirect signaling for cooperation among a group of players: a player moving towards the ball lays pheromone for its neighboring players to take up relative positions, thus maximizing the chances of scoring a goal. The authors developed an evolved team, PUTeam, using the Java-based application TeamBots. Experiments were conducted by playing matches of PUTeam against 20 other teams, and the evolved team won 16 of the 20 matches played. The strategy learnt was the formation of the team at a particular time instance.
A Multi-group Ant Colony Optimization (MACO) algorithm was proposed by Chen et al. [12], using ant intelligence to learn offensive strategies in the 2D soccer simulation league. They created three models, namely a shooting model, a passing model and a dribbling model. In order to deal with continuous data, they divided the field into regions and used the success of the ants' foraging behavior as an effective cooperative strategy. The mechanism of pheromone evaporation was used to compute the preference value of each attack in a tree structure called the Attack Information Tree. The decision module of the attacker then selected the best attack action according to the preference value. The learning process starts when an agent receives a set of scene information and translates it into equivalent states. The agent then searches the dataset of state-action preference values, selecting the action with the most similar state and the highest preference value. Finally, the agent performs the selected action, evaluates its result after a given time, and sends the result to the training dataset as a new sample. They simulated the attack environment with two attackers, two defenders and one goalkeeper, the positions of all five players being set randomly, and training was conducted up to 15000 times in a single match. To evaluate the proposed strategy, matches were conducted between the baseline team and the evolved team, concluding that the evolved MACO team enjoyed a 100% winning advantage. Experiments were also conducted to compare this new technique with existing value-based and reinforcement-learning-based techniques, and the results demonstrated, as a proof of concept, that the evolved team performed better.
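The flavor of such pheromone-based preferences can be sketched as follows (our simplified stand-in, not the exact Attack Information Tree of [12]; the deposit and evaporation constants are illustrative):

class PreferenceTable:
    """ACO-flavored state-action preferences with evaporation, in the spirit
    of (but not identical to) the Attack Information Tree in [12]: successful
    attacks deposit 'pheromone' on the state-action pair used, and all
    entries decay each cycle so stale preferences fade."""

    def __init__(self, evaporation=0.05, deposit=1.0):
        self.tau = {}                    # (state, action) -> preference value
        self.evaporation = evaporation
        self.deposit = deposit

    def update(self, state, action, success):
        # Evaporation: every preference decays a little each update cycle.
        for key in self.tau:
            self.tau[key] *= (1.0 - self.evaporation)
        if success:
            self.tau[(state, action)] = self.tau.get((state, action), 0.0) + self.deposit

    def best_action(self, state, actions):
        # Pick the action with the highest remaining preference for this state.
        return max(actions, key=lambda a: self.tau.get((state, a), 0.0))

table = PreferenceTable()
table.update("near_goal", "shoot", success=True)
table.update("near_goal", "pass", success=False)
print(table.best_action("near_goal", ["shoot", "pass", "dribble"]))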

C. Evolving cooperative strategies via cultural algorithms

Cultural Algorithms (CA) were applied by Salhieh et al. [30] in the 2D league to find the best action to execute depending upon the position of an agent in the field and its relation to the nearest opponent. A chromosome contained the action of each agent in a region, representing an action rule. The strategy learnt was the action rule governing two choices: when to dribble the ball and when to pass it to a teammate. This was a centralized approach in which a coach agent sent messages to all the agents and only the intended agent executed the action. Goal difference was used as the evaluation function. The authors reported improvement during the first five generations, and the simulation results also showed enhancements in terms of goals scored.

M. Ali et al. [29] contributed a simplified, adaptive version of CA to develop defensive and offensive plays in simulated robot soccer. The agents were able to develop the most suitable team formation by utilizing a set of finite state machines (FSMs) in the field. The number of goals scored was used as the fitness, and evolutionary programming was used to model the solution space: for each individual in the population a set of actions and a set of regions were defined, and at the start of the CA all individuals were initialized randomly. The goal was to train the team to choose the best states depending upon the scenario faced. In the case of offense, different regions dictated the role a player would acquire, which consequently led to a particular state for that player. The belief space comprised general behaviors associated with the overall plan of the team. Experiments were conducted to test the effectiveness of the approach over 150 generations. After every 15 generations, statistics indicating the best regions and best states were computed and recorded for use in future generations, serving as a starting point for online team coordination. Learning was carried out in two stages, the output of the first stage becoming the input of the second, and ten runs of 150 generations each were conducted. For performance evaluation, matches were played against multiple opponent teams that played either offensively or defensively, and the team using CA played better in terms of both defense and offense. Table 1 presents a comparison of all the approaches.

Table 1. Comparison of evolved cooperative strategies

Review Paper           | Strategy Learnt                     | Fitness            | Opponent | Strategy Type
1. T. Nakashima [5]    | Action rules                        | Average goals      | Single   | Centralized
2. M. Lekavy [13]      | Pass model                          | Composite          | Single   | SHB
3. H. Okada [6]        | Team formation                      | Goals scored/faced | Single   | STF
4. Geetha et al. [28]  | Team formation                      | Goals scored       | Multiple | STF
5. Chen et al. [12]    | Pass model, shoot model, kick model | Preference value   | Multiple | SHB
6. Salhieh et al. [30] | Dribble/pass                        | Goal difference    | Single   | Centralized
7. M. Ali et al. [29]  | Team formation                      | Goals scored       | Multiple | SHR

VI. CONCLUSION

This paper presented a survey of various applications of nature-inspired optimization algorithms to devising cooperative strategies. Three categories of optimization algorithms were discussed: algorithms inspired by the process of evolution, algorithms inspired by swarms, and algorithms inspired by cultural evolution. Recently, many works have been reported on designing cooperative strategies in the domain of the RoboCup Soccer Simulation League, and this survey focused specifically on the seminal literature that discusses strategy evolution. The paper contributed a categorization of cooperative strategy based on centralized vs. distributed approaches. Moreover, a comparative study of the different nature-inspired models in each category, along with their limitations, was provided. To the best of our knowledge, such a study has not been conducted before, and it could serve as a benchmark for teams developing their strategies based upon the discussed categorization.

This survey also suggests some future research directions in cooperative strategy evolution. Many approaches have treated strategy at the level of team formation rather than at the level of behavior models, and the latter area needs to be explored. Secondly, the survey shows that most teams have evolved strategies under basic assumptions such as a fixed opponent or a small team size; optimal strategy evolution for a large team of agents thus remains a challenging task. Lastly, although we presented the simulation leagues as a whole, the 2D league is the simpler domain and most of the work has been done there; to the best of our knowledge, there is no application of nature-inspired models for evolving cooperative strategy in the 3D simulation league. Thus, there is a growing need to develop cooperative strategies for the 3D league that cater to the issues of humanoid locomotion and localization along with strategy evolution.

REFERENCES
[1] X.-S. Yang, Nature-Inspired Optimization Algorithms. Oxford: Elsevier, 2014.
[2] R. K. Arora, Optimization: Algorithms and Applications. CRC Press, 2015.
[3] "RoboCup 3D Soccer Simulation League," Wikipedia, the free encyclopedia, 09-Aug-2016.
[4] M. A. D. Darab and M. Ebrahimi, "RoboCup 3D Soccer Simulation Server: A Progressing Testbed for AI Researchers," in Applications and Innovations in Intelligent Systems XIV, Springer London, 2007, pp. 228–232.
[5] T. Nakashima, M. Takatani, M. Udo, and H. Ishibuchi, "An evolutionary approach for strategy learning in RoboCup soccer," in 2004 IEEE International Conference on Systems, Man and Cybernetics, 2004, vol. 2, pp. 2023–2028.
[6] H. Okada, T. Wada, and A. Yamashita, "Evolving RoboCup Soccer player formations by particle swarm optimization," in Proceedings of SICE Annual Conference (SICE), 2011, pp. 1950–1953.
[7] V. Svatoň, J. Martinovič, K. Slaninová, and V. Snášel, "Improving Rule Selection from Robot Soccer Strategy with Substrategies," in Computer Information Systems and Industrial Management, K. Saeed and V. Snášel, Eds. Springer Berlin Heidelberg, 2014, pp. 77–88.
[8] R. P. Sałustowicz, M. A. Wiering, and J. Schmidhuber, "Learning Team Strategies: Soccer Case Studies," Machine Learning, vol. 33, no. 2–3, pp. 263–282, Nov. 1998.
[9] K. Yasui, K. Kobayashi, K. Murakami, and T. Naruse, "Analyzing and Learning an Opponent's Strategies in the RoboCup Small Size League," in RoboCup 2013: Robot World Cup XVII, S. Behnke, M. Veloso, A. Visser, and R. Xiong, Eds. Springer Berlin Heidelberg, 2013, pp. 159–170.
[10] G. C. Luh, C. Y. Wu, and W. W. Liu, "Artificial Immune System based Cooperative Strategies for Robot Soccer Competition," in 2006 International Forum on Strategic Technology, 2006, pp. 76–79.
[11] C. H. Messom and M. G. Walker, "Evolving cooperative robotic behaviour using distributed genetic programming," in 7th International Conference on Control, Automation, Robotics and Vision (ICARCV 2002), 2002, vol. 1, pp. 215–219.
[12] S. Chen, G. Lv, and X. Wang, "Offensive Strategy in the 2D Soccer Simulation League Using Multi-group Ant Colony Optimization," International Journal of Advanced Robotic Systems, vol. 13, p. 1, Feb. 2016.
[13] M. Lekavy, "Optimising Multi-agent Cooperation using Evolutionary Algorithm," in Proceedings of IIT, Bratislava, 2011, pp. 49–56.
[14] P. Stone and M. Veloso, "Task Decomposition, Dynamic Role Assignment, and Low-bandwidth Communication for Real-time Strategic Teamwork," Artificial Intelligence, vol. 110, no. 2, pp. 241–273, Jun. 1999.
[15] A. Atyabi and S. Nefti-Meziani, Applications of Computational Intelligence to Robotics and Autonomous Systems, 2016.

[16] T. Bäck, Evolutionary Algorithms in Theory and Practice:
Evolution Strategies, Evolutionary Programming, Genetic
Algorithms. Oxford University Press, 1996.
[17] A. P. Engelbrecht, Computational Intelligence: An Introduction, 2nd ed. Wiley, 2007.
[18] M. Dorigo and T. Stützle, Ant Colony Optimization. MIT Press, 2004.
[19] J. Brownlee, "Particle Swarm Optimization," in Clever Algorithms: Nature-Inspired Programming Recipes. [Online]. Available: http://www.cleveralgorithms.com/nature-inspired/swarm/pso.html. [Accessed: 29-Dec-2016].
[20] R. G. Reynolds and B. Peng, “Cultural algorithms:
modeling of how cultures learn to solve problems,” in 16th IEEE
International Conference on Tools with Artificial Intelligence, 2004,
pp. 166–172.
[21] J. Brownlee, "Immune Algorithms," in Clever Algorithms: Nature-Inspired Programming Recipes. [Online]. Available: http://www.cleveralgorithms.com/nature-inspired/immune.html. [Accessed: 12-May-2017].
[22] H. Kitano et al., “The RoboCup synthetic agent challenge
97,” in RoboCup-97: Robot Soccer World Cup I, H. Kitano, Ed.
Springer Berlin Heidelberg, 1998, pp. 62–73.
[23] T. Uchitane and T. Hatanaka, “Applying evolution
strategies for biped locomotion learning in RoboCup 3D Soccer
Simulation,” in 2011 IEEE Congress on Evolutionary Computation
(CEC), 2011, pp. 179–185.
[24] P. MacAlpine, S. Barrett, D. Urieli, V. Vu, and P. Stone, "Design and Optimization of an Omnidirectional Humanoid Walk: A Winning Approach at the RoboCup 2011 3D Simulation Competition," in AAAI, 2012.
[25] S. Haider, S. R. Abidi, and M. Williams, “On evolving a
dynamic bipedal walk using Partial Fourier Series,” in 2012 IEEE
International Conference on Robotics and Biomimetics (ROBIO),
2012, pp. 8–13.
[26] Y. Xiang, L. Zhiwei, W. Zhipeng, and C. Xuanyu,
“Dynamic path planning in RoboCup rescue simulation competition,”
in Control and Decision Conference (CCDC), 2015 27th Chinese,
2015, pp. 4341–4344.
[27] H. Burchardt and R. Salomon, “Implementation of Path
Planning using Genetic Algorithms on Mobile Robots,” in IEEE
Congress on Evolutionary Computation, 2006. CEC 2006, 2006, pp.
1831–1836.
[28] R. Geetha, R. Subramanian, and P. Viswanath, “Genetic
Programming Method of Evolving the Robotic Soccer Player
Strategies with Ant Intelligence,” Int. J. Adv. Robot. Syst., p. 1, 2009.
[29] M. Z. Ali, A. Morghem, J. Albadarneh, R. Al-Gharaibeh, P.
N. Suganthan, and R. G. Reynolds, “Cultural Algorithms applied to
the evolution of robotic soccer team tactics: A novel perspective,” in
2014 IEEE Congress on Evolutionary Computation (CEC), 2014, pp.
2180–2187.
[30] A. Salhieh, A. Mostafa, I. Mahmoud, and R. Reynolds,
“Evolving Effective Multi-Robot Coordination Strategies for
Dynamic Environments Using Cultural Algorithms,” in Proceedings
of the 12th WSEAS Internaional Conference on System Theory and
Scientific Computation, Istanbul, Turkey, 2012.
[31] T. Nakashima, M. Takatani, H. Ishibuchi, and M. Nii, “The
Effect of Using Match History on the Evolution of RoboCup Soccer
Team Strategies,” in 2006 IEEE Symposium on Computational
Intelligence and Games, 2006, pp. 60–66.

