INTRODUCTION TO ARTIFICIAL INTELLIGENCE
Unit 3: Randomized Search and Emergent Systems
[Iterated Hill Climbing, Simulated Annealing, Genetic Algorithms, The Travelling Salesman Problem, Neural Networks, Emergent Systems]

Iterated Hill Climbing

Hill Climbing is a technique to solve certain optimization problems. In this technique, we start with a sub-optimal solution and the solution is improved repeatedly until some condition is maximized. The idea of starting with a sub-optimal solution is compared to starting from the base of a hill, improving the solution is compared to walking up the hill, and finally maximizing some condition is compared to reaching the top of the hill.

Hence, the hill climbing technique can be considered as the following phases:
1. Constructing a sub-optimal solution obeying the constraints of the problem
2. Improving the solution step by step
3. Improving the solution until no more improvement is possible

The Hill Climbing technique is mainly used for solving computationally hard problems. It looks only at the current state and the immediate future state. Hence, this technique is memory efficient, as it does not maintain a search tree.

Algorithm: Hill Climbing
    Evaluate the initial state.
    Loop until a solution is found or there are no new operators left to be applied:
        Select and apply a new operator.
        Evaluate the new state:
            if it is a goal --> quit
            if it is better than the current state --> it becomes the new current state

(A small code sketch of iterated, i.e. random-restart, hill climbing is given below, after the complexity discussion.)

Iterative Improvement
In the iterative improvement method, the optimal solution is approached by making progress towards a better solution in every iteration. However, this technique may encounter local maxima; in that situation, there is no nearby state that gives a better solution. This problem can be avoided by different methods. One of these methods is simulated annealing.

Random Restart
This is another method of solving the problem of local optima. The technique conducts a series of searches, each time starting from a randomly generated initial state. An optimal or nearly optimal solution can then be obtained by comparing the solutions of the searches performed.

Problems of the Hill Climbing Technique
o Local Maxima: If the heuristic is not convex, Hill Climbing may converge to a local maximum instead of the global maximum.
o Ridges and Alleys: If the target function creates a narrow ridge, the climber can only ascend the ridge or descend the alley by zig-zagging. In this scenario, the climber needs to take very small steps, requiring more time to reach the goal.
o Plateau: A plateau is encountered when the search space is flat enough that the value returned by the target function is indistinguishable from the values of nearby regions, due to the precision the machine uses to represent the value.

Complexity of the Hill Climbing Technique
This technique does not suffer from space-related issues, as it looks only at the current state; previously explored paths are not stored. For most problems, the random-restart Hill Climbing technique reaches an optimal solution in polynomial time. For NP-complete problems, however, the computational time can be exponential, depending on the number of local maxima.
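To make the iterated (random-restart) hill climbing idea concrete, here is a minimal Python sketch. The function names, the bit-string state representation, and the single-bit-flip neighbourhood are illustrative assumptions made for this example; they are not part of the original notes.

```python
import random

def hill_climb(evaluate, state, neighbours, max_steps=1000):
    """Greedy hill climbing: keep moving to a better neighbour while one exists."""
    for _ in range(max_steps):
        best_neighbour = max(neighbours(state), key=evaluate, default=None)
        if best_neighbour is None or evaluate(best_neighbour) <= evaluate(state):
            return state  # no improving neighbour: a local maximum has been reached
        state = best_neighbour
    return state

def iterated_hill_climb(evaluate, random_state, neighbours, restarts=20):
    """Random-restart hill climbing: run several climbs and keep the best result."""
    best = None
    for _ in range(restarts):
        candidate = hill_climb(evaluate, random_state(), neighbours)
        if best is None or evaluate(candidate) > evaluate(best):
            best = candidate
    return best

# Toy usage: maximise the number of 1s in a 16-bit string (illustrative problem only).
def bit_flip_neighbours(s):
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

best = iterated_hill_climb(
    evaluate=sum,
    random_state=lambda: tuple(random.randint(0, 1) for _ in range(16)),
    neighbours=bit_flip_neighbours,
)
print(best, sum(best))
```

Each restart climbs greedily until no neighbour improves the evaluation, and the best of the local maxima found across restarts is returned; this is exactly how random restarts mitigate the local-maximum problem described above.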
Applications of the Hill Climbing Technique
The Hill Climbing technique can be used to solve many problems where the current state allows for an accurate evaluation function, such as Network Flow, the Travelling Salesman Problem, the 8-Queens problem, Integrated Circuit design, etc. Hill Climbing is also used in inductive learning methods, and in robotics for coordination among multiple robots in a team. There are many other problems where this technique is used.

Example
This technique can be applied to the Travelling Salesman Problem. First, an initial solution is determined that visits all the cities exactly once. This initial solution is not optimal in most cases and can even be very poor. The Hill Climbing algorithm starts with such an initial solution and improves it iteratively. Eventually, a much shorter route is likely to be obtained.

Simulated Annealing

Simulated Annealing is a stochastic global search optimization algorithm. This means that it makes use of randomness as part of the search process, which makes the algorithm appropriate for nonlinear objective functions where other local search algorithms do not operate well. Like the stochastic hill climbing local search algorithm, it modifies a single solution and searches the relatively local area of the search space until a local optimum is located. Unlike the hill climbing algorithm, it may accept worse solutions as the current working solution.

The likelihood of accepting worse solutions starts high at the beginning of the search and decreases as the search progresses, giving the algorithm the opportunity to first locate the region of the global optimum, escaping local optima, and then hill climb to the optimum itself.

Step 1: Start with an initial solution s = s0. This can be any solution that fits the criteria for an acceptable solution. Also start with an initial temperature t = t0.

Step 2: Set up a temperature reduction function alpha. Each reduction rule reduces the temperature at a different rate, and each method is better at optimizing a different type of model. (For the third common rule, beta is an arbitrary constant.)

Step 3: Starting at the initial temperature, loop through n iterations of Step 4 and then decrease the temperature according to alpha. Repeat this loop until the termination conditions are reached. The termination conditions could be reaching some end temperature, reaching some acceptable threshold of performance for a given set of parameters, etc. The mapping of time to temperature, and how fast the temperature decreases, is called the Annealing Schedule.

Step 4: Given the neighbourhood of solutions N(s), pick one of the solutions and calculate the difference in cost between the old solution and the new neighbour solution. The neighbourhood of a solution is the set of all solutions that are close to it. For example, a neighbour of a set of 5 parameters is obtained by changing one of the five parameters while keeping the remaining four the same.

Step 5: If the difference in cost between the old and new solution is greater than 0 (the new solution is better), accept the new solution. If the difference in cost is less than 0 (the old solution is better), generate a random number between 0 and 1 and accept the new solution only if that number is below the acceptance probability given by the energy-magnitude equation, typically p = exp(-|delta cost| / t).
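The steps above can be combined into a short routine. The following is a minimal Python sketch, assuming a geometric cooling schedule (t = alpha * t) and the standard Metropolis acceptance probability exp(-delta / t); the function names and parameter values are illustrative assumptions, not prescriptions from these notes. Here delta is computed as (new cost - old cost), so an improvement corresponds to delta < 0, which is equivalent to the "difference greater than 0" wording of Step 5.

```python
import math
import random

def simulated_annealing(cost, initial_solution, random_neighbour,
                        t0=100.0, alpha=0.95, t_end=1e-3, iters_per_temp=50):
    """Minimise `cost`, starting from `initial_solution` (minimisation form)."""
    current = best = initial_solution
    t = t0
    while t > t_end:                              # termination: end temperature reached
        for _ in range(iters_per_temp):           # Steps 3-4: n iterations per temperature
            candidate = random_neighbour(current)
            delta = cost(candidate) - cost(current)
            # Step 5: always accept improvements; accept worse moves with prob. exp(-delta/t)
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha                                # temperature reduction (annealing schedule)
    return best

# Toy usage: minimise f(x) = x^2 over the real line (illustrative only).
result = simulated_annealing(
    cost=lambda x: x * x,
    initial_solution=10.0,
    random_neighbour=lambda x: x + random.uniform(-1.0, 1.0),
)
print(result)
```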
Genetic Algorithms

A genetic algorithm is an adaptive heuristic search algorithm inspired by Darwin's theory of natural evolution. It is used to solve optimization problems in machine learning, and it is valuable because it helps solve complex problems that would otherwise take a long time to solve. Genetic Algorithms are widely used in different real-world applications, for example designing electronic circuits, code-breaking, image processing, and artificial creativity.

In this topic, we explain the Genetic algorithm in detail, including the basic terminologies used in genetic algorithms, how they work, their advantages and limitations, etc.

Before looking at the Genetic algorithm itself, let us first understand the basic terminologies:
o Population: The population is the subset of all possible or probable solutions that can solve the given problem.
o Chromosome: A chromosome is one of the solutions in the population for the given problem, and a collection of genes generates a chromosome.
o Gene: A chromosome is divided into different genes; a gene is an element of the chromosome.
o Allele: An allele is the value given to a gene within a particular chromosome.
o Fitness Function: The fitness function is used to determine an individual's fitness level in the population, i.e. the ability of an individual to compete with other individuals. In every iteration, individuals are evaluated based on their fitness function.
o Genetic Operators: In a genetic algorithm, the best individuals mate to produce offspring better than the parents. Genetic operators change the genetic composition of the next generation.
o Selection: Selection is the process of choosing individuals from the population, based on their fitness, to act as parents for reproduction.

The genetic algorithm works on an evolutionary generational cycle to generate high-quality solutions. These algorithms use different operations that either enhance or replace the population in order to give an improved, fitter solution.

It basically involves five phases to solve complex optimization problems, which are given below:
1. Initialization
2. Fitness Assignment
3. Selection
4. Reproduction
5. Termination

1. Initialization
The process of a genetic algorithm starts by generating a set of individuals, called the population. Each individual is a solution to the given problem. An individual is characterized by a set of parameters called genes. Genes are combined into a string to generate a chromosome, which is a candidate solution to the problem. One of the most popular techniques for initialization is the use of random binary strings.

[Figure: individual genes make up a chromosome, and a collection of chromosomes forms the population.]

2. Fitness Assignment
The fitness function is used to determine how fit an individual is, i.e. the ability of the individual to compete with other individuals. In every iteration, individuals are evaluated based on their fitness function. The fitness function provides a fitness score to each individual, and this score determines the probability of being selected for reproduction: the higher the fitness score, the greater the chance of being selected for reproduction.
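As a small illustration of the initialization and fitness-assignment phases, the sketch below builds a population of random binary-string chromosomes and scores each one. The chromosome length, the population size, and the OneMax-style fitness function (count the 1-genes) are assumptions made for this example only.

```python
import random

CHROMOSOME_LENGTH = 12   # genes per chromosome (illustrative value)
POPULATION_SIZE = 6      # individuals in the population (illustrative value)

def random_chromosome(length=CHROMOSOME_LENGTH):
    """Initialization: a chromosome represented as a random binary string of genes."""
    return [random.randint(0, 1) for _ in range(length)]

def fitness(chromosome):
    """Fitness assignment: here simply the number of 1-genes (a toy OneMax problem)."""
    return sum(chromosome)

population = [random_chromosome() for _ in range(POPULATION_SIZE)]
for individual in population:
    print(individual, "fitness =", fitness(individual))
```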
3. Selection
The selection phase involves selecting individuals for the reproduction of offspring. The selected individuals are arranged in pairs, and these individuals transfer their genes to the next generation. There are three common types of selection methods:
o Roulette wheel selection
o Tournament selection
o Rank-based selection

4. Reproduction
After the selection process, the creation of children occurs in the reproduction step. In this step, the genetic algorithm uses two variation operators that are applied to the parent population:

o Crossover: Crossover plays the most significant role in the reproduction phase of the genetic algorithm. A crossover point is selected at random within the genes, and the crossover operator swaps the genetic information of two parents from the current generation to produce a new individual, the offspring.

[Figure: two parent chromosomes exchange genes at a crossover point to produce two offspring.]

The genes of the parents are exchanged among themselves until the crossover point is reached, and the newly generated offspring are added to the population. This process is also called recombination. Common types of crossover are:
o One-point crossover
o Two-point crossover
o Uniform crossover

o Mutation: The mutation operator inserts random genes into the offspring (the new child) to maintain diversity in the population. It can be done by flipping some bits in the chromosome. Mutation helps in solving the issue of premature convergence and enhances diversification.

[Figure: a chromosome before and after mutation, with one bit flipped.]

Common types of mutation are:
o Flip-bit mutation
o Gaussian mutation
o Exchange/Swap mutation

5. Termination
After the reproduction phase, a stopping criterion is applied as the basis for termination. The algorithm terminates when a threshold fitness solution is reached, and it identifies the final solution as the best solution in the population. (A compact sketch that puts these five phases together is given after the comparison with traditional algorithms below.)

Advantages of Genetic Algorithms
o The parallel capabilities of genetic algorithms are among their strongest features.
o They help in optimizing various problems such as discrete functions, multi-objective problems, and continuous functions.
o They provide a solution that improves over time.
o A genetic algorithm does not need derivative information.

Limitations of Genetic Algorithms
o Genetic algorithms are not efficient for solving simple problems.
o They do not guarantee the quality of the final solution to a problem.
o Repetitive calculation of fitness values may create computational challenges.

Difference between Genetic Algorithms and Traditional Algorithms
o The GA search space is the set of all possible solutions to the problem. A traditional algorithm maintains only one set of solutions, whereas a genetic algorithm works with several sets of solutions in the search space.
o Traditional algorithms need more information in order to perform a search, whereas genetic algorithms need only an objective function to calculate the fitness of an individual.
o Traditional algorithms cannot work in parallel, whereas genetic algorithms can (calculating the fitness of the individuals is independent).
o Rather than operating directly on candidate solutions, genetic algorithms operate on their representations (or encodings), frequently referred to as chromosomes; a traditional algorithm works directly on the candidate solutions themselves.
o Traditional algorithms generate only one result in the end, whereas genetic algorithms can generate multiple near-optimal results from different generations.
o Genetic algorithms do not guarantee a globally optimal result, but there is a good possibility of obtaining an optimal or near-optimal result for a problem, because they use genetic operators such as crossover and mutation.
o Traditional algorithms are deterministic in nature, whereas genetic algorithms are probabilistic and stochastic in nature.
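To tie the five phases together, here is a compact Python sketch of a genetic algorithm for the toy OneMax problem used above (maximise the number of 1-genes). Tournament selection, one-point crossover, and flip-bit mutation are used; all names, rates, and sizes are illustrative assumptions rather than prescriptions from these notes.

```python
import random

GENES, POP, GENERATIONS = 20, 30, 50
CROSSOVER_RATE, MUTATION_RATE = 0.9, 0.02

def fitness(chrom):                         # fitness assignment: count the 1-genes
    return sum(chrom)

def tournament(pop, k=3):                   # selection: best of k randomly chosen individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(p1, p2):                      # reproduction: one-point crossover
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, GENES - 1)
        return p1[:point] + p2[point:], p2[:point] + p1[point:]
    return p1[:], p2[:]

def mutate(chrom):                          # reproduction: flip-bit mutation
    return [1 - g if random.random() < MUTATION_RATE else g for g in chrom]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]  # initialization
for generation in range(GENERATIONS):
    next_generation = []
    while len(next_generation) < POP:
        child1, child2 = crossover(tournament(population), tournament(population))
        next_generation += [mutate(child1), mutate(child2)]
    population = next_generation[:POP]
    if max(map(fitness, population)) == GENES:   # termination: threshold fitness reached
        break

best = max(population, key=fitness)
print(generation, best, fitness(best))
```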
Neural Networks

The nine types of neural networks are:
o Perceptron
o Feed Forward Neural Network
o Multilayer Perceptron
o Convolutional Neural Network
o Radial Basis Function Neural Network
o Recurrent Neural Network
o LSTM (Long Short-Term Memory)
o Sequence-to-Sequence Models
o Modular Neural Network

Neural networks represent deep learning using artificial intelligence. Certain application scenarios are too heavy or out of scope for traditional machine learning algorithms to handle, and neural networks step in for such scenarios and fill the gap.

Artificial neural networks are inspired by the biological neurons within the human body, which activate under certain circumstances, resulting in a related action performed by the body in response. Artificial neural nets consist of various layers of interconnected artificial neurons powered by activation functions that help switch them ON/OFF. As with traditional machine learning algorithms, there are certain values that neural nets learn in the training phase.

Briefly, each neuron receives a multiplied version of the inputs and random weights, which is then added to a static bias value (unique to each neuron layer); this is then passed to an appropriate activation function, which decides the final value to be given out of the neuron. There are various activation functions available, chosen according to the nature of the input values. Once the output is generated from the final neural-net layer, the loss function (input vs output) is calculated and backpropagation is performed, in which the weights are adjusted to minimize the loss. Finding optimal values of the weights is what the overall operation focuses on. Refer to the following for a better understanding:

Weights are numeric values that are multiplied by the inputs. In backpropagation, they are modified to reduce the loss. In simple words, weights are machine-learned values from neural networks. They self-adjust depending on the difference between the predicted outputs and the expected (training) outputs. An Activation Function is a mathematical formula that helps the neuron switch ON/OFF.

[Figure: a feed-forward neural network with an input layer, a hidden layer, and an output layer.]

o Input layer: represents the dimensions of the input vector.
o Hidden layer: represents the intermediary nodes that divide the input space into regions with (soft) boundaries. It takes in a set of weighted inputs and produces output through an activation function.
o Output layer: represents the output of the neural network.
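As a concrete illustration of the forward pass just described (weighted inputs plus a bias, passed through an activation function), here is a minimal Python sketch. The 3-4-1 layer sizes, the sigmoid activation, and the function names are assumptions made for this example only.

```python
import math
import random

def sigmoid(x):
    """Activation function: squashes the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One neuron: multiply inputs by weights, add the bias, apply the activation."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

def layer(inputs, weight_matrix, biases):
    """A layer is several neurons fed the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A tiny 3-4-1 network with random initial weights (training would adjust them via backpropagation).
x = [0.5, -1.2, 3.0]                                    # input layer: the input vector
w_hidden = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]
b_hidden = [random.uniform(-1, 1) for _ in range(4)]
w_out = [[random.uniform(-1, 1) for _ in range(4)]]
b_out = [random.uniform(-1, 1)]

hidden = layer(x, w_hidden, b_hidden)                   # hidden layer activations
output = layer(hidden, w_out, b_out)                    # output layer
print(output)
```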
Types of Neural Networks
There are many types of neural networks available, and more are in development. They can be classified depending on their structure, data flow, the neurons used and their density, the number of layers and their depth, activation filters, etc.

A. Perceptron

[Figure: the Perceptron (P).]

The Perceptron model, originally proposed by Rosenblatt and later analysed by Minsky and Papert, is one of the simplest and oldest models of a neuron. It is the smallest unit of a neural network that does certain computations to detect features or business intelligence in the input data. It accepts weighted inputs and applies the activation function to obtain the output as the final result. The perceptron is also known as a TLU (threshold logic unit).

The perceptron is a supervised learning algorithm that classifies data into two categories; thus, it is a binary classifier. A perceptron separates the input space into two categories by the decision rule: output 1 if w . x + b > 0, and output 0 otherwise.

B. Feed Forward Neural Networks
In a feed forward neural network, data travels in one direction only, from the input layer towards the output layer; there is no backpropagation. A simple step (threshold) activation function is typically used: the neuron produces 1 as output if the weighted input is above the threshold (usually 0), and is considered not activated (-1) otherwise. These networks are fairly simple and can deal with data that contains a lot of noise.

Advantages of Feed Forward Neural Networks
1. Less complex, easy to design and maintain
2. Fast and speedy [one-way propagation]
3. Highly responsive to noisy data

Disadvantages of Feed Forward Neural Networks
1. Cannot be used for deep learning [due to the absence of dense layers and backpropagation]

C. Multilayer Perceptron

[Figure: a multilayer perceptron with fully connected layers.]

A multilayer perceptron has an input layer, one or more hidden layers of densely (fully) connected neurons, and an output layer. Inputs are multiplied by weights and fed to the activation function, and in backpropagation the weights are modified to reduce the loss.

Advantages of Multi-Layer Perceptron
1. Can be used for deep learning [due to the presence of dense, fully connected layers and backpropagation]

Disadvantages of Multi-Layer Perceptron
1. Comparatively complex to design and maintain
2. Comparatively slow (depends on the number of hidden layers)

Try it out by yourself: Apply the Iterated Hill Climbing search.

Important Subjective Questions
1. Explain the Iterated Hill Climbing algorithm with a suitable example.
2. Explain the Simulated Annealing algorithm with a suitable example.
3. Explain the concepts of Genetic Algorithms with an example.
4. List and explain the types of Neural Networks with their advantages and disadvantages.

END OF UNIT

Part of this content is adapted from the book "A First Course in Artificial Intelligence" by Prof. Deepak Khemani (TMH), with thanks. - Niladri Dey, BVRIT
