
Genetic Algorithms and Sudoku

Nilang Shah, Balajiganapathi Senthilnathan, Murukesh Mohanan November 28, 2013


Abstract

Sudoku is one of the world's most popular puzzles. It can be computationally solved in a number of ways. Here we attempt to solve it using genetic algorithms. Further, we use genetic algorithms to tune the sudoku solver.

Introduction

In a genetic algorithm, a population of candidate solutions (called individuals or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered. In our case, for the sudoku solver, the chromosomes are all the numbers currently in the puzzle. For tuning the solver, we use various parameters of the solver as entries in the chromosome.

The evolution usually starts from a population of randomly generated individuals and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved, which we call the score of that individual. For sudoku, we have set the score as a representation of how far off the solution is from the correct one, i.e., how many conflicts and unfilled cells are present, and so on. For the tuner, we consider the number of generations the solver takes to obtain a correct solution as the score.

The fitter individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. Here we wish to minimize the score: the correct solution has a score of zero, and a finely tuned solver should solve the puzzle in as few generations as possible.
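The score described above can be sketched as follows. This is a minimal illustration, not the paper's actual code: we assume a grid represented as a flat list of 81 integers with 0 marking an unfilled cell, and we weight every conflict and unfilled cell equally.

```python
def score(grid):
    """Score a 9x9 grid (flat list of 81 ints, 0 = unfilled): lower is
    better. Counts unfilled cells plus duplicate conflicts in every row,
    column, and 3x3 box. A correct, complete solution scores 0."""
    s = sum(1 for v in grid if v == 0)  # unfilled cells

    def conflicts(cells):
        vals = [v for v in cells if v != 0]
        return len(vals) - len(set(vals))  # duplicates within one unit

    for i in range(9):
        s += conflicts(grid[9 * i:9 * i + 9])  # row i
        s += conflicts(grid[i::9])             # column i
    for br in range(0, 9, 3):                  # 3x3 boxes
        for bc in range(0, 9, 3):
            box = [grid[(br + r) * 9 + bc + c]
                   for r in range(3) for c in range(3)]
            s += conflicts(box)
    return s
```

A GA individual's fitness is then simply this score, with selection favouring lower values.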

Metaheuristics

In computer science and mathematical optimization, a metaheuristic is a higher-level procedure or heuristic designed to find, generate, or select a lower-level procedure or heuristic (partial search algorithm) that may provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information or limited computation capacity. Metaheuristics may make few assumptions about the optimization problem being solved, and so they may be usable for a variety of problems.

Compared to optimization algorithms and iterative methods, metaheuristics do not guarantee that a globally optimal solution can be found on some class of problems. Many metaheuristics implement some form of stochastic optimization, so that the solution found is dependent on the set of random variables generated. By searching over a large set of feasible solutions, metaheuristics can often find good solutions with less computational effort than exact algorithms, iterative methods, or simple heuristics. As such, they are useful approaches for optimization problems.

Several books and survey papers have been published on the subject. Most literature on metaheuristics is experimental in nature, describing empirical results based on computer experiments with the algorithms, but some formal theoretical results are also available, often on convergence and the possibility of finding the global optimum. A great many metaheuristic methods have been published with claims of novelty and practical efficacy. Unfortunately, many of the publications have been of poor quality; flaws include vagueness, lack of conceptual elaboration, poor experiments, and ignorance of previous literature. The field also features high-quality research.
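The idea can be illustrated with one of the simplest metaheuristics, random-restart hill climbing (our illustration; the solver in this paper uses a genetic algorithm instead): a greedy lower-level search is wrapped in a higher-level loop that restarts it from random points and keeps the overall best result, trading extra computation for robustness against local optima.

```python
import random

def hill_climb(objective, neighbor, start, iters=500):
    """Lower-level procedure: greedy local search, accepting only
    strictly improving moves."""
    best, best_val = start, objective(start)
    for _ in range(iters):
        cand = neighbor(best)
        v = objective(cand)
        if v < best_val:
            best, best_val = cand, v
    return best, best_val

def random_restart(objective, neighbor, sample, restarts=20):
    """Higher-level (meta) procedure: rerun the local search from
    several random starting points and keep the overall best."""
    best, best_val = None, float("inf")
    for _ in range(restarts):
        cand, v = hill_climb(objective, neighbor, sample())
        if v < best_val:
            best, best_val = cand, v
    return best, best_val
```

Note that neither level guarantees a global optimum; the restarts merely make a good solution more likely, which is exactly the trade-off the definition above describes.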

Conclusion

It was previously suggested that genetic algorithms are not well suited for sudoku. However, with a few optimizations, we have managed to build a reasonable solver using GA. Further, the metaheuristic tuning allowed us to create a reasonably fast solver. We observed that the solver and the tuner have similar yet opposing characteristics:

1. The solver tended to do well with high elitism and mutation rates, whereas the tuner did well with low elitism.
2. Without elitism, neither could do well; they tended to oscillate wildly between good states and poor states.
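The role elitism plays in the second observation can be sketched as a generation step (a generic illustration with hypothetical parameter names, not the paper's implementation): because the top individuals are copied unchanged, the best score can never degrade between generations, which suppresses the oscillation seen without elitism.

```python
import random

def next_generation(pop, score, crossover, mutate,
                    elite_frac=0.1, mutation_rate=0.05):
    """One GA generation with elitism: the top elite_frac of the
    population survives unchanged; the rest is rebuilt from parents
    drawn from the fitter half, with optional mutation."""
    pop = sorted(pop, key=score)                 # best (lowest score) first
    n_elite = max(1, int(elite_frac * len(pop)))
    new_pop = pop[:n_elite]                      # elites survive unchanged
    while len(new_pop) < len(pop):
        a, b = random.sample(pop[:len(pop) // 2], 2)  # parents: fitter half
        child = crossover(a, b)
        if random.random() < mutation_rate:
            child = mutate(child)
        new_pop.append(child)
    return new_pop
```

With `elite_frac = 0`, a lucky best individual can be lost to crossover and mutation in the very next generation, which matches the wild swings between good and poor states we observed.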

[Plot: panel "Gen #0"; y-axis 0-40000, x-axis 0-9]

Figure 1: Tuning of the solver, without elitism and with a low value of mutation.

[Plot: panel "Gen #0"; y-axis 0-40000, x-axis 0-30]

Figure 2: Tuning of the solver, without elitism and with a high value of mutation.

[Plot: panel "Gen #0"; y-axis 0-30000, x-axis 0-5]

Figure 3: Tuning of the solver, without elitism and with a medium value of mutation.
