UNIT V

Code Optimization

Syllabus : Principal Sources of Optimization - Peep-hole Optimization - DAG - Optimization of Basic Blocks - Global Data Flow Analysis - Efficient Data Flow Algorithm - Recent Trends in Compiler Design.

Contents

5.1  Introduction to Code Optimization . . . Dec.-09, Marks 2
5.2  Classification of Optimization . . . May-06
5.3  Principal Sources of Optimization . . . May-06, 10, 11, 16, Dec.-06, 09, 10, 13, 14, 17, 22, Marks 16
5.4  Peep-hole Optimization . . . May-04, 07, 10, 11, 12, Dec.-08, 10, 11, 12, 13, 14, 22, Marks 16
5.5  Basic Blocks and Flow Graphs . . . Dec.-04, 07, 09, 10, May-05, 06, 08, 10, 11, 12, 15, 19, Marks 16
5.6  DAG Representation . . . May-04, 05, 07, 08, 17, 19, Dec.-06, 07, 09, 13, 17, Marks 16
5.7  DAG - Optimization of Basic Blocks . . . May-07, 11, 14, Dec.-07, 10, 11, 13, 14, 18, Marks 16
5.8  Loops in Flow Graph . . . May-08, 11, 18, 19, Dec.-16, Marks 16
5.9  Global Data Flow Analysis . . . May-08, Dec.-13, Marks 16
5.10 Data Flow Properties . . . May-07, 10, 13, 14, Dec.-09, 11, Marks 16
5.11 Efficient Data Flow Algorithms . . . Dec.-09, May-13, Marks 16
5.12 Recent Trends in Compiler Design
5.13 Two Marks Questions with Answers

5.1 Introduction to Code Optimization

Code optimization is a technique required to produce an efficient target code. There are two important issues that need to be considered while applying the techniques for code optimization and those are -
1. The semantic equivalence of the source program must not be changed.
2. The improvement over the program efficiency must be achieved without changing the algorithm of the program.

The code becomes inefficient because of two factors; one is the programmer and the other factor is the compiler. In this chapter we will discuss the various transformations that can be applied on the code to improve the performance of the program.

Sometimes the question arises where exactly the code optimization can be applied - before the code generation phase or after the code generation phase ? Here is a block diagram which itself suggests the places where code optimization techniques can be applied.

[Fig. 5.1.1 Code optimization : Source program -> Front end -> Intermediate code -> Code generation -> Target program. The programmer should make use of an efficient algorithm; optimization of the intermediate code applies transformations on loops, procedure calls and address calculations; optimization of the target code uses registers and performs peephole optimization.]

Thus it is the responsibility of both the programmer and the compiler to generate efficient target code.

Review Question

1. What is meant by code optimization ?

5.2 Classification of Optimization

The classification of optimization can be done in two categories : machine dependent optimization and machine independent optimization.

The machine dependent optimization is based on the characteristics of the target machine, that is, the instruction set and the addressing modes used for the instructions, to produce the efficient target code.

The machine independent optimization is based on the characteristics of the programming language, that is, appropriate programming structure and usage of efficient arithmetic properties, in order to reduce the execution time.

The machine dependent optimization can be achieved using the following criteria -
1. Allocation of a sufficient number of resources to improve the execution efficiency of the program.
2.
Using immediate instructions wherever necessary 3. The use of intermix instructions. The intermixing of instruction along with the data increases the speed of execution. The machine independent optimization can be achieved using following criteria - 1. The code should be analyzed completely and use alternative equivalent sequence of source code that will produce a minimum amount of target code. fe the efficiency of target Rv Use appropriate program structure in order to improv code. 3, From the source program eliminate the unreachable code at one place and make use of the result 4. Move two or more identical computations instead of each time computing the expressions. Properties of Optimizing Compilers 1. The source code should be such that it should produce minimum amount of target code. ny unreachable code. 2. There should not be ai moved from source language, 3. Dead code should be completely re! fs should apply code improving transformations lers should apply 4. The optimizing, compil following, on source language: an up-thrust for knowledge Compiler Design ba Code Optimization 1) Common subexpression elimination ii) Dead code elimination iii) Code movement iv) Strength reduction uae) id 1. Lyplain the properties of machine dependant and machine independent code optimization [2 What are the properties of optimizing compilers ? CE EX] Principal Sources of Optimization io es aor The optimization can be done locally or globally. If the transformation is applied on the same basic block then that kind of transformation is done locally otherwise done globally. Generally the local transformations are done first. transformation While applying the optimizing transformations the semantics of the source program should not be changed Function preserving transformations There are a number of ways in which a compiler can improve a program without changing the function it computes. Common subexpression elimination, copy propagation, dead-code elimination, and constant folding are common examples of such function-preserving transformations. Compile Time Evaluation Compile time evaluation means shifting of computations from run time to compilation time. There are two methods used to obtain the compile time evaluation. 1. Folding In the folding technique the computation of constant is done at compile time instead of execution time. For example : length = (22/7) «d Here folding is implied by performing the computation of 22/7 at compile time instead of execution time. 2. Constant propagation In this technique the value of variable is replaced and computation of an expressio" is done at the compilation time. TECHNICAL PUBLICATIONS” - an up-thrust for knowledge exe Code Optimization pi*rer Here at the compilation time the value of ‘pi is replaced by 3.14 and r by 5 then computation of 3.14 * r*r is done during compilation [EFI Common Sub Expres: The common sub expression is an expression appearing repeatedly in the program which is computed previously. Then if the operands of this sub expression do not get changed at all then result of such sub expression is used instead of recomputing, it each mn Elimination time. For example : ths 44i tb = alt] ty = 44j tye 4a n; bltg]+ts, The above code can be optimized using the common sub expression elimination as, 4ei alt] 4xj n; bit ]+ts 4 * i is eliminated as its computation is already in The common sub expression ty t, and value of i is not been changed from definition to use EEE] copy Propagation Variable propagation means use of one variable instead of another. 
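How a compiler can actually carry out this replacement on three-address code is sketched below. This is only a minimal illustration and not the algorithm given in this book: the quad structure, the copy_of table and the function names are assumptions made for the sketch. Within a single basic block it remembers each copy statement x := y, substitutes y for later uses of x, and forgets the record as soon as either variable is redefined. The source-level example that follows shows the same idea on program variables.

    #include <string.h>

    #define NVARS 32              /* small fixed symbol table, sketch only        */

    /* one three-address quad:  result := arg1 op arg2 ('=' marks a plain copy)   */
    struct quad { char op; int result, arg1, arg2; };

    static int copy_of[NVARS];    /* copy_of[x] = y when the last def was x := y  */

    static void invalidate(int v) /* v was redefined: forget copies involving v   */
    {
        copy_of[v] = -1;
        for (int i = 0; i < NVARS; i++)
            if (copy_of[i] == v) copy_of[i] = -1;
    }

    /* replace an operand by the variable it copies, if one is recorded           */
    static int propagate(int v)
    {
        return (v >= 0 && copy_of[v] >= 0) ? copy_of[v] : v;
    }

    /* local copy propagation over one basic block of n quads                     */
    void copy_propagate_block(struct quad *q, int n)
    {
        memset(copy_of, -1, sizeof copy_of);     /* no copies known at block entry */
        for (int i = 0; i < n; i++) {
            q[i].arg1 = propagate(q[i].arg1);    /* use the source, not the copy   */
            q[i].arg2 = propagate(q[i].arg2);
            invalidate(q[i].result);             /* the result gets a new value    */
            if (q[i].op == '=')                  /* remember the copy x := y       */
                copy_of[q[i].result] = q[i].arg1;
        }
    }

After such a pass the copy statements themselves are often no longer needed and can be removed by dead code elimination, which is discussed later in this chapter.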
For example : =e Dy area = x er # y; an up-thrust for knowledge TECHNICAL PUBLICA ions” Compiler Design — a 5-6 Code Optinisation The optimization using variable propagation can be done as follows, area = pi wr % Here the variable y is eliminated. Here the necessary condition is that a variable mus, be assigned to another variable or some constant Code Movement There are two basic goals of code movement - 1. To reduce the size of the code i.e. to obtain the space complexity 2. To reduce the frequency of execution of code i.e. to obtain the time complexity, For example : k=z + 50; } The code for y * 5 will be generated only once. And simply the result of that computation is used wherever necessary. Loop invariant computation The loop invariant optimization can be obtained by moving some amount of code outside the loop and placing it just before entering in the loop. This method is alse called code motion. For example : while(i<=Max-1) { sum= sum+afi } The above code can be optimized by removing the computation of Max-I outside the loop. The optimized code can be, TECHNICAL PUBLICATIONS - an up-thrust for knowledge 5. 7 Code Optimization { sum=sum+ali); } po Strength Reduction The strength a certain operators is higher than others. For instance strength of higher than +. In strength reduction technique the higher strength operators ca replaced by lower strength operators. ft is in be For example = forli=1;i<=50i+ +) { count = ix 7; ; Here we get the count values as 7, 14, 21 and so on upto less than 50. This code can be replaced by using strength reduction as follows - temp=7 forli=1;i<=50;i+ +) { count = temp; temp = temp+7; } Thus we get th The induction variable is v = v +constant e value of count as 7, 14, 21 and so on upto less than 50. integer scalar identifier used in the form of t, Here v is a induction variable. The strength reduction is not applied to the floating point expressions because such a use may yield different results. EEA bead Code Elimination e live in a program if the ariable is said to be dead at a point in a program 'n used, The code containing such a variable 1 be performed by eliminating such A variable is said to bt value contained into it is used subsequently. On the other hand, the v if the value contained into it is never bee supposed to be a dead code. And an optimization can a dead code. = an up-thrist for knowledge PUBLICA TIONS” TECHNICAL eae Code Optim For example : x=i+10; The optimization can be performed by eliminating the This assignment statement is called dead assignment. gnment statement i= j Another example a=xt5, } Here if statement is a dead cod: + this condition will never get satistied hence, if statement can be eliminated:and optimization can be done. Loop Optimization The code optimization can be significantly done in loops of the program, Spe: ly inner loop is a place where program spends large amount of time. Hence if number of instructions are less in inner loop then the running time of the program will get decreased to a large extent. Hence loop optimization is a technique in which code optimization performed on inner loops. The loop optimization is carried out by following methods - 1. Code motion. 2. Induction variable and strength reduction 3. Loop invariant method. 4. Loop unrolling, 5. Loop fusion. Let us discu: ss these methods with the help of example. 1. Code motion Code motion is a technique which moves the code outside the loop. Hence is the name. 
If there lies some expression in the loop whose result remains unchanged even after executing the loop for several times, then such an expression should be placed just before the loop (ie. outside the loop). Here before the loop means at the entry of the loop. For example : while (i<=Max-1) { } sum=sum+talij; TECHNICAL PUBLICATIONS® - an up-thrust for knowledge - compier DESIO” 5 he above code can be opt “9 Code Optimization can zed by removing the computation of Max-1 outside the pop: Hence the optimized code can be - : n = Max while (i<=n) { iu sum=sum+aliJ; 2. induction variables and reduction in strength ‘A variable x is called an induction variable of loop L if the value of variable gets changed every time. It is either decremented or incremented by some constant. For example: Consider the block given below. if tz < 10 goto B1 In above code the values of i and t) are in locked state. That is, when value of i gets incremented by 1 then t; gets incremented by 4. Hence i and ty are induction variables. When there are two or more induction variables in a loop, it may be possible to get rid of all but one. Reduction in strength The strength of certain operators is higher than others. For example : Strength of + is higher than +. In strength reduction technique the higher strength operators can be replaced by lower strength operators. For example : Here we get values of count as 7, 14, 21 and so on upto 50. This code can be replaced by using strength reduction as follows - temp=7 for(i=1; i<=50; i++) { count = temp temp = temp +7; } TECHNICAL PUBLICATIONS® - an up-thrust for knowledge ‘pier Design 5-10 ode Optimization The replacement of multiplication by addition will speed up the object code Thus the rength of operation is reduced without changing the meaning of above code The strength reduction is not applied to the floating point expressions because such a se may yield different results. © Loop invariant method In this optimization technique the computation inside the loop is avoided and thereby > computation overhead on compiler is avoided. For example : for i: to 10 do begin i+a/b; can be written as 4, Loop unrolling In this method the number of jumps and tests can be reduced by writing the code Iwo times For example : int i=1; while(i<=50) Can be written as { alil=blil; i++; a 5. Loop fusion or Loop Jamming In loop fusion method several loops are merged to one loop. TECHNICAL PUBLICATIONS” - an up-thrust for knowledge 5-14 Code Optinnzet for i:=1 to nm do Can be written a ittenas aii): =10 pee accu’ Construct basic blocks and data flow graph and identify loop invari ‘statements : Solution : The loop invariant statement is, t while ( < =n) while (j < =n) { { A=B*C/p —Oetimized , A=B*t } By f isiet ) ed uy Rese Fig. 5.3.1 ig. 5. CEE) optimize the following code using various optimization techniques : i=1s fori=1ji<=3;i+ +) for j=Uij<=3iitr) ¢ [il i) = Fil Gil + 4 il Uj + 6 UL: TECHNICAL PUBLICATIONS an up-thrust for knowledge Compiler Design eae Code Optimization Solution : In the given code, there exists a two dimensional arrays. Hence we assume array of order 10 x 20. That is nl = 10 and n2 = 20. Before optimizing this code we need to generate 3 address code for corresponding code fragment. 
L2:-j:=1 Li: t=4ri thi=th+j tr=tlt4 13: = alt2] /* 8 = ali] Gil */ t4:=i* 20 Histtj sated t6 : = bit5) /* 6 = bli] i) */ 17: =13 +16 /*17 =a fi] il +b GG) */ 8: =i*20 18: = 18 +j 9: = 1844 t10 : = cft9] /* 10 = efi] fj] */ tll: =t7 + 10 cli] ff}: = #11 jisitl if j < =3 goto L1 i+] if i < = 3 goto L2 Now using common subexpression elimination we can optimize above code. After optimization we will get - 5:0 i:1 TECHNICAL PUBLICATIONS® - an up-hrust for knowledge geese 5-13 ‘ode Optimization wt : pineal use) pesttd 13: = alt2] tH: = bIt2] i: =t+t to: = ¢ [t2] 7 = th +06 ¢ Li] fil = 7 j:sitl ifj<=3 goto Ll i:si+l ifi<=3 © goto L2 Pee corakncabad 1. Discuss about the following : Copy propagation, Dead code elimination and code motion. Give the techniques used in loop optimization ES Explain - principle sources of optimization. _ {ORES SE ATES a SETS Explain briefly about the principal sources of optimization OSs rae eC Give any trvo examples for strength reduction What is dead code elimination ? What are principle sources of optimization ? appropriate examples. LOR AOE If we apply the statement by statement code generati target Bet code may contain many redundant instru Poor. To optimize such a target code target Bet code. These transformations ultimately improve, Provement over the running, time or space req © TECHNICAL PUBLICATIONS ERTS Oey AU : Dec.-06, 10, May-11. Marks 2 Explain the local optimization strategies with LO OSE 12, Dec.-05, 10, 11, 12, 13. 14, 22, Marks 16 on strategy then the generated ctions. The quality of such code is very certain transformations need to be applied on the will result in getting the significant juirement of the target progran an up-thrust for knowledge Compiler Design nae Code Optimization Definition Peephole optimization is a simple and effective technique for locally improving target code. This technique is applied to improve the performance of the target program by examining the short sequence of target instructions and replacing these instructions by shorter or faster sequence. [Note that : a peephole - a small window is moved over the target code and transformations can be made. And hence is the name!] Characteristics of Peephole Opti The peephole optimization can be applied on the target code using following characteristic, 1. Redundant instruction elimination. 2. Flow of control optimization. 3. Algebraic simplification. 4, Use of machine idioms. Let us discuss each one by one. 1, Redundant instruction elimination Especially the redundant loads and stores can be eliminated in this type of transformations, For example : MOV RO,x MOV x,RO We can eliminate the second instruction since x is already in RO. But if (MOV x, RO) is a label statement then we cannot remove it. We can eliminate the unreachable instructions. For example, following is a piece of C code. sum=0 if(sum) printf(""%d",sum); Now this if statement will never get executed hence we can eliminate such a unreachable code. Similarly int fun(int aint b) { c=atb; retum c; printf("% } 2. 
Flow of control optimization ©); /* unreachable code and hence can be eliminated «/ Using peephole optimization unnecessary jumps on jumps can be eliminated TECHNICAL PUBLICATIONS” - an up-thrust for knowledge y complet gor example, 5-15 Code Optimization snus we reduce one jump by this transformation, ea goto done test: goto done test: goto done sone done: Another example could be if acb goto test test: goto done done: 3, Algebraic simplification Peephole optimization is an effective technique for algebraic simplification. The statements such as x:=xt+0 or xisxel can be eliminated by peephole optimization, 4. Reduction in strength Certain machine instructions are cheaper than the other. In order to improve the performance of the intermediate code we can replace these instructions by equivalent cheaper instruction. For example, x° is cheaper than x * x. Similarly addition and subtraction is cheaper than multiplication and division. So we can effectively use equivalent addition and subtraction for multiplication and division. 5. Machine idioms The target instructions have equivalent machine instructions for performing some perations. Hence we can replace these target instructions by equivalent machine instructions in order to improve the efficiency. For example, some machines have auto-increment or auto-decrement addressing modes that are used to perform add or s ubtract operations. TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Compiler Design 5-18 Code Optimization w Q\ oid 1 Fyplan the characteristics of peephole optimization. 2. Explain peephole optimization and various code improving transformations. CORDS aTs 3. Explain the techniques: used in peephole optimization, COR OM CREE Me a eT) What is peephole ? Explain the optimizations that can be performed on a peephole. COR ed EE] Basic Blocks and Flow Graphs AU : Dec.-04, 07, 09, 10, May-05, 06, 08, 10, 11, 12, 15, 19, Marks 16 Definition : The basic block is a sequence of consecutive statements in which flow of control enters at the beginning and leaves at the end without halt or possibility of branching An example of the basic block is as shown below. GERI some Terminologies used in Basic Blocks * Define and use - The three address statement a use b and c. * Live and dead - The name in the basic block is said to be live at a given point if its value is used after that point in the program. And the name (variable) in the basic block is said to be dead at a given point if its value is never used after that point in the program. Algorithm for Partitioning into Blocks Any given program can be partitioned into basic blocks by using following algorithm. We assume that an intermediate code is already generated for the given program b+e is said to define a and to 1. First determine the leaders by u: (a) The first statement is a leader. (b) Any target statement of conditional or unconditional goto is a leader (co) Any statement that immediately follow a goto or unconditional goto is leader. ig following rules 2. The basic block is formed starting at the leader statement and ending just bet the next leader statement appearing - TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Le sion , a . <7 Code Optimization ia gs oo Consider the following program code , | Sa db of length 10 and partition it nto Me for computing dot produc of 100 prod =0; iets do { prod = prod + alil * bit); i+ }while(i< =10); gation First we will write the equivalent three address code for the above program 4 y= alty] /* computation of ali] */ 2 3. 4. 5. 4a 6. 7. 
in * ty 8. ty = prodtts 9, prod := te 10. ty = +1 Miss ty 12. if i< = 10 goto (3) According to the algorithm Statement 1 is a leader by rule 1(a)- Statement 3 is also leader by rule 1(b). Hence, statement 1 and 2 form the basic block. Similarly statement 3 to 12 form another basic block. Block 4 1. prod:=0 2. i=1 Compiler Design ame Code Optimization Block 2 12. if i<=10 goto (3) Flow Graph Definition : A flow graph is a directed graph in which the flow control information | is added to the basic blocks. ‘© The nodes to the flow graph are represented by basic blocks. ‘© The block whose leader is the first statement is called initial block. © There is a directed edge from block B, to block B, if B, immediately follows B, in the given sequence. We can say that B, is a predecessor of B. For example, consider the three address code as. 1. prod := 0 [t,] /* computation of afi] */ 4 5. tps 4ei 6. ty := blts] /* computation of b[i] */ 7. ty = ty * ty 8. ty = prodtts 9. prod := ts 12. if i<=10 goto (3) TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Me pier D059” aw Cade OplinniZagon co aph for the above code can be » flow Bt drawn as follows inthis Flow graph block By is a initial block Loop is a collection of nodes in the flow graph such that, LooP * i) All such nodes are strongly connected. ‘That means always there is a pall from any node to any other node hin that loop. ii) The collection of nodes has u jue entry. That means there is only one path from a node outside the loop to the node inside the loop. iii) The loop that contains no other loop is called inner loop. What are basic blocks and flow graphs ? LORI YA ne | What are the advantages of DAG representation ? Give example. (XOELETSESCITCTR How can you find leaders in the basic block ? URE ed Define the basic block. Write an algorithm to partition a sequence of three address statements into base blocks CO What is a basic block ? i May-05, 10; May.06, Mi Explain in detail the basic blocks and flow graphs roith example 6 7. Explain the terms with reference to basic blocks - i) Define and use ii) Live 8 Write and explain the algorithm for construction of basic bloks. CURIS Ce EXd DAG Representation COR Pe The directed acyclic graph is used to apply transformations on the basic block. To apply the transformations on basic block a DAG is constructed from three address statement. ADAG can be constructed for the following type of labels on nodes 1) Leaf nodes are labeled by identifiers or variable names or constants. Generally leaves represent r-values. 2) Interior nodes store operator values. The DAG and flow graphs are two different pictorial representations, Each node of flow basic bi graph can be represented by DAG because each node of the flow graph is a lock. © TECHNICAL PUBLICATIONS” - an up-thrust for knowledge Example for Understanding Construct DAG for - sum = 0; for (i = O;i<=10;i++) sum = sum+ali); Solution : The three address code for above code is (1) sum :=0 (2) @) t= 44 (4) b= alt] (5) ty := sum+t, (6) 7) (8) i (9) if i<=10 goto (3) Compiler Design 5-20 We can partition above code into basic blocks as follows : Now let us consider block B, for construction of DAG by numbering it. Fig. 5.6.1 Basic blocks TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Code Optimization 5- Ed ee Code Optimization 7) if i<10 goto (1) Continuing in this fashion, * 4 / \o 4 J \ = att) e i a ay /\ 4 i ‘At node <= we have specified the goto 1 condition by mentioning statement number (1). sum O” 4 i Fig. 
5.6.2 DAG for block Bz CREED construct the DAG for the following Basic block i 2.12: = alt] : 4,14: = (03) > 6. 16 : = prod+t5 : 8.17: = i+] Ce 10. if i < = 20 goto (1) ® TECHNICAL PUBLICATIONS” - an up-thrust for knowledge Compiler Design Code Optimization Solution : Step 1: Fig. 5.6.3 Step 3: Xe \ ne aA Fig. 5.6.5 Step 5: 8 Code Optimization Step 8: 16 Prod (us 1 4 i 1 Fig. 5.6.10 [ESI Algorithm for Construction of DAG We assume the three address statement could of following types Case (i) x=y op Z Case (ii) x=op y Case (iii) With the help of following steps the DAG can be constructed. step 1: If y is undefined then create node(y). Similarly if 2 is undefined create a node(2). step 2: For the case(i) create a node(op) whose left child is node(y) and node(z) will be the right child. Also check for any common subexpressions For the case(ii) determine OO ——— = — — = node n will be node(y). Step 3: Delete x from list of ident identifiers for node n found in 2. ifiers for node(x). Append x to the list of attached EE¥ Applications of DAG The DAG is used in 1. determining the common sub-expressions (expressions computed more than once). 2. determining which names are used inside the block and computed outside the k could have their computed value determining which statements of the bloc outside the block by eliminating the common sub-expressions and simplifying the list of quadruple+ form x:=y unless and until it is a must Not performing the assignment of the TECHNICAL PUBLICATIONS” an up-thrust for knowledge Compiler Design Ae Code Optimization DAG based Local Optimization As mentioned earlier, a DAG can be constructed for a block and certain transformations such as common subexpression elimination; dead code elimination can be applied for performing the local optimization. For example : Construct the DAG for the following block bre de=b es dtc be f=bec gifted Step 1: ‘Step 2: ‘Step 3: KR 8 ‘ ve} vw ) ) © (OS) Fig. 5.6.11 Construction of DAG The optimized code can be generated by traversing the DAG. The local optimization on the above block can be done 1. A common subexpression e=d*c which is actually bte(since d:=b) is eliminated * 2. A dead code for b=e is eliminated ZN The optimized code and basic block is . dsb bd i" fate Fig. 5.6.12 Optimized f '9.5de and DAG gis f+d TECHNICAL PUBLICATIONS® - an up-thrust for knowledge 5-25 Code Optimization How would you represent the following equation using the DAG ? c+be /™. () JN \ ‘e Fig, 5.6.13 DAG ems Construct the DAG for the following basic block CEA solution ; The DAG can be constructed in following steps - Step 2: /\ LSS oe LN WSS AL is required DAG. Step 1: Step 3 : ‘an up-thrust for knowledge ® TECHNICAL PUBLICATIONS Code Opt, Compiler Design 5:26 2 Optimization i nd list EEEEIEEER cremate DAG representation of the follwing code ard list out applications of DAG representation : LET l_ gener Solution : Before constructing DAG for given code ee wn e oes an intermediate code for given code fragment. In this piece of code there a two dimensional array, For generating 3-address code for 2-D array we as: nl = 10, n2=20 ice. a [10] [20] array The lower bounds low; and low = 1. The computation of base address constant c is using following formula. 
© base — [(( low, * nj) + low2) x 4] base - (1 * 20) +1) *4 ¢ = base - 84 Now a [i] [i] can be converted into equivalent 3-address code form as : tli * 20 /n2=20*/ th:=tl+i isc 1 * base ~ 84 */ t3:=tl*4 4: = 22 [8] Hence 3 address code for the given code fragment will be 1) i 2) 3) 4) 5) 6) 7) 8) 9) : 10) t6:=i41 Wi t6 12) if i < 10 goto (3) ee TECHNICAL PUBLICATIONS‘ {an up-thrust for knowledge 5-27 Code Optimization | aw DAG can be constructed as follows - | Fig. 5.6.14 DAG on How would you represent the following equation using the DAG ? btc+b*-c The DAG can be saaton + Fig. 5.6.15 Construct DAG for the following basic block A+B T):=C+D E-T, 1:=1-13 Solution : The DAG will be - ZO KC ie ON T, a ON Cy Dg Fig. 5.6.16 __ >? TECHNICAL PUBLICATIONS® - an ups er knowledge Code Optimiza Compiler Design 5-28 = eG Draw the DAG for the statement a = (a*b +c) -(a*b +0). Solution ; oN “> aN vw Fig, 5.6.17 ; Construct the DAG for the following basic block x= ali] alil = y z= ali] Solution : Step 1: Step 2: fF =tx us a i a iy . Step 3: Fig. 5.6.18 DAG representation eur 1. What are the applications of DAG ? Dea 2. Explain the DAG representation of the basic block with an example Oe ea TECHNICAL PUBLICA Tions® > an up-thrust for knowledge ee comer DO? ed Cade Optinuzation gw DAG-Optimization of Ba: Da ar e flow of DRA shnow that the basic block is a sequence of statements in whieh thy J without halt or possibility of As we my enters at the beginning and leaves at the cont ranching: TWO ty 1, structure preserving pes of optimizations that can be done on basic blocks, nsformations 2 Use of algebraic identities tet us discuss these optimization techniques in detail 4, Structure preserving transformations © The structure preserving transformations canbe applied by applying seme ation, variable and principle techniques such ax common sub expression elim constant propagation, code movement, dead code elimination ‘The structure preserving transformation is a DAG based transformation. That ‘a DAG is constructed for the basic block then the above said me transformations can be applied, The common subexpression can be easily detected by observing the DAG for corresponding basic block. For example : Consider , m=n*p v=m+q pent p g=méq if we assume the values of nel then the expressions becomes mn * p — =1+2 m=2 = pnp — TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Code Optimization Compiler Design (Note that the values are given for understanding of common sub expressions), Thus the above sequence of instructions contain two common sub expressions such as n * p and m+q. But for the common sub expression n * p the value of n gets changed when it reappears. Hence n * p is not a common sub nq expression. But the expression m qo m+q gives the same result in repetitive appearance and values of m and q are consistent each time. Therefore m+q is supposed to be the common sub expression. Ultimately n and q are the same. The DAG is constructed and the common expressions can be identified. For us the common sub expressions means the expressions that are guaranteed to compute the same value. The DAG construction method for optimization is effective for dead code elimination. We can delete any node from the DAG that has no ancestors. This results in dead code elimination. "9 Po Fig. 5.7.1 DAG for identifying common sub expression 2. Use of algebraic identities * Algebraic identities are used in peephole optimization techniques. Also some simple transformations can be applied in order to. optimize the code. 
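A peephole pass can apply such identities mechanically by pattern matching on three-address statements. The fragment below is an illustrative sketch only; the quad layout and the function name are assumed here, not taken from the text. It rewrites x := y + 0 and x := y * 1 into plain copies and turns x := y * 2 into an addition (a strength reduction), leaving the resulting copies to the other transformations already described. The identities listed next are the kind of patterns such a pass looks for.

    /* one three-address quad: result := arg1 op arg2; is_const marks a constant  */
    /* right operand whose value is cval                                           */
    struct quad { char op; int result, arg1, arg2; int is_const; long cval; };

    /* Rewrite one quad in place using simple algebraic identities.
       Returns 1 if the quad was simplified.                                       */
    int simplify(struct quad *q)
    {
        if (!q->is_const) return 0;             /* only handle "var op constant"   */

        if (q->op == '+' && q->cval == 0) {     /* x := y + 0   ->  x := y         */
            q->op = '=';  return 1;
        }
        if ((q->op == '*' || q->op == '/') && q->cval == 1) {
            q->op = '=';  return 1;             /* x := y * 1, x := y / 1 -> x := y */
        }
        if (q->op == '*' && q->cval == 2) {     /* x := y * 2   ->  x := y + y      */
            q->op = '+';  q->is_const = 0;  q->arg2 = q->arg1;  return 1;
        }
        return 0;
    }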
For example : at+0=a a*l=a a/l=a Thus corresponding identities can be applied on corresponding algebraic expressions. * The algebraic transformation can be* obtained using the strength reduction technique. nor DOP a code optimization For example = instead of using 2* a we can use a +a. instead of US hus use of lower strength operator instead of higher strength operator makes the ing a/2 we can use a ¥0.5 cde efficient | The constant folding technique can be applied to achieve the algebraic transformations. gor erample : Instead of using a= 2+ 54 we can use @ = 108. This saves the _ afcompiler in doing computations. w the use of common sub expressions elimination, use of associativity and commutativity is to apply algebraic transformations on basic blocks. For example : Consider a block Here commutative law can be applied as y * z = 2 * y and from second expression we can replace z * y by x hence optimized block becomes xy *Z t Review Questions 1. Give the primary structure preserving transformations on basic blocks. EE Te eee eee er Ae nization of basic blocks with example. 2. Describe in detail about optim Ae ae 3. Write short notes on structure preserving transformation of basic blocks. OSs example. COSC oe ME) EBY Loops in Flow Graph OEE CCC common terminologies being used for loops in flow 4. Illustrate optimization of basic blocks with an Let us get introduced with some Braph A. 1 ee ators: In a flow graph, a node d do I node goes through d only. This can be d dom jemates all the remaining, nodes in the flow minates n if every path to node n from noted as ‘d dom n’. Every initial node graph, Similarly every node dominates ee ® TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Compiler Design 5-92 Code Optimization For example : In flow graph as shown in Fig. 5.8.1, Node 1 is initial node and it dominates every node as it is a initial node. Node 2 dominates 3, 4 and 5 as there exists only one path from initial node to node 2 which is going, through 2 (it is 1-2-3). Similarly for node 4 the only path exists is 1-2-4 and for node 5 the only path existing is 1-2-4-5 or 1-2-3-5 which is going through Fig. 5.8.1 Flow graph node 2. Node 3 dominates itself similarly node 4 dominates itself. The reason is that for going to remaining node 5 we have another path 1-4-5 which is not through 3 (hence 3 dominates itself). Similarly for going to node 5 there is another path 1-3-5 which is not through 4 (hence 4 dominates itself). Node 5 dominates no node. 2. Natural loops Loop in a flow graph can be denoted by n> d such that d dom n. These edges are called back edges and for a loop there can be more than one back edges. If there is p > q then q is a head and p is a tail. And head dominates tail. For example : The loops in the graph shown in Fig. 5.8.2 can be denoted by 4 > 1 ie. 1 dom 4, Similarly 5 = 4 ie. 4 dom 5. The natural loop can be defined by a back edge nd such that there exists a collection of all the nodes that can reach to n without going through d and at the same time d also can be added to this collection. For example : 6 — 1 is a natural loop because we can reach to all the remaining nodes from 6. By observing all the predecessors of node 6 we can obtain a natural loop. Hence 2-3-4-5-6-1 is a collection in natural loop. 
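The collection described above can be computed directly from a back edge. The C sketch below is one possible rendering, not the book's algorithm verbatim; the predecessor arrays, the MAXN limit and the function name are assumptions made here. Given a back edge n -> d it first marks the header d, then walks backwards over predecessors starting from n, adding every node that can reach n without passing through d, which is exactly the natural loop of the back edge.

    #define MAXN 64                   /* assumed upper bound on flow graph nodes  */

    int npred[MAXN];                  /* number of predecessors of each node      */
    int pred[MAXN][MAXN];             /* pred[v][k] = k-th predecessor of node v  */
    int in_loop[MAXN];                /* result: 1 if the node is in the loop     */

    static int worklist[MAXN], top;

    static void insert_node(int m)    /* add m to the loop and schedule a visit   */
    {
        if (!in_loop[m]) { in_loop[m] = 1; worklist[top++] = m; }
    }

    /* natural loop of the back edge n -> d, where d dominates n                  */
    void natural_loop(int n, int d)
    {
        for (int v = 0; v < MAXN; v++) in_loop[v] = 0;
        top = 0;
        in_loop[d] = 1;               /* header is in the loop, but its            */
                                      /* predecessors are not walked through      */
        insert_node(n);
        while (top > 0) {             /* backward walk over predecessors          */
            int m = worklist[--top];
            for (int k = 0; k < npred[m]; k++)
                insert_node(pred[m][k]);
        }
    }

For the back edge 6 -> 1 discussed above, this walk marks the collection 1-2-3-4-5-6 given in the example.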
TECHNICAL PUBLICA’ Tions® ~ an up-thrust for knowledge ra pr 0" 9-33 Coro Optimization jnnor HOOPS se inne foop i Toop that contains no other loop yp the inner Toop is 4 > 2 that means edge given by 23-4 1 pre-header the preheader is a new block created such that successor of NX i ays block is the header block, All the computations that can be == 0) jade before the header block can be made before the fig, 5.8.4 Flow graph for rebeader block. inner loop The pre-header can be as shown in \ / Fig, 585. Preheader s, Reducible flow graphs \ A I header header The reducible graph is a flow graph in which there are two types of edges forward edges and backward vw) By Bp edges. These edges have following properties, ° Fig, 5.8.5 Pre-header i) The forward edge form an acyclic graph. iy The back edges are such edges whose head dominates their tail. The flow graph is reducible as shown in Fig. 586. We can reduce this graph by removing the back edge from 3 to 2 edge. Similarly by removing the backedge from 5 to | we can reduce the above flow graph. And the resultant graph is a cyclic graph. The program structure in which there is exclusive use of ifthen, while-do or goto Statements generates a flow graph which is always reducible. Fig, 5.8.6 Flow graph TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Compiler Design 5-34 Code Optimization Se ney GERD) consiter the ftowoing fow graph and find 4) Dominators for each basic block. &) Detect all the loops in the graph. For each loop find the back edge and header block information. Fig. 5.8.7 Solution : In a flow graph node ‘d' dominates n if every path to node ‘n’ from initial node goes through ‘d’ node (Block) only. Every node (Block) dominates itself. B1 = As it is an initial node it dominates all the nodes. B2 = B3, B4, BS, B6, B7, B8, B9, B10, B11 B3 = B9 B4 = BS, B7, BB B5 = B7, BB B6 = Dominates itself because there exist another path also to go to other remaining nodes TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Code Optimization 87 = BS gs = Dominates itself pe = Dominates itself p10 = Dominates node B11 because to reach to B11 we have to g° through B10. ii = Dominates no node. jack edge : BO - BZ ea For the following program segment construct the flow graph and apply the zessible optimizations. i= i do sum := prod+All] * Bil ; health while I< = 20; Solution = Step 1: We will write a three address code. sum = 0 i=l Ly:t) = itd t, = addr (A) -4 2 (th) t, = addr (B)- 4 ts = ty lhl TECHNICAL PUBL! ust for knowledge ICATIONS” - an up-thr Code Optimization Compiler Design 5-36 = ifi<=20 goto Ly Step 2: The flow graph will be SHED = t= adara)—4 = addr(®)—4 i 1<=20got0B, Fig. 5.8.8 Step 3: We can eliminate ty, ts variables. t= addr(a)—4 = addr(B)— 4 yaied t= tlh) 's= 4h) fe brs sum = prod + tg izivt it <= 20 goto B, TECHNICAL PUBLICATIONS® oe Code Optimization jor DOS? minated. Similarly 4 * + Juction The variable i is induction variable, Hence it ean L can be reduced by 3 ce it can be can ble and stre! y addition operation, Hence after elimination of inc variable and strength reduction the optimized code will be sum =0 i=o t= addr(A) — 4 y= addr(B) - 4 Gah td eth ts = ty [th] tert ts sum = prod + ts atts ift <= 20 goto By Fig. 5.8.10 What are the sources of redundancy in code ? Give examples using flow graphs. When is a flow graph said to be ss reducible ? What are the properties of natural loops ? For the flow graph shown in Fig. 
5.8.11, i) Compute the dominator relation ii) Find the immediate dominator of each node iii) Construct the dominator tree iv) Find one depth first ordering of the 10. 2=12+r1 11. 16 = (78 12teMn+d flow graph Is the flow graph reducible ? 2) Find the natural loops of the flow ree 16 graph Solution ; a subexpressions, loop - invariant expressions The commo : dundancy in code Re : ‘edundancy in code - e sources of re and Partially redundant code are th UBLICATIONS TECHNICAL PI ‘an up-thrust for knowledge Cote Optimasten Compiter Design 5-38 : — For example - “ invari Partially redundant Common subexpressions Loop-invariant : expression expression, In a given flow graph is reducible when there are exits two groups forward and backward edges with following properties in a flow O graph- 1. The forward edge form an acyclic graph | 2. The back edges having heads that dominates tail. Refer Fig. 5.8.12. | Eas Natural loops - Refer section 5.8 (2). o) We can draw the flow graph for the Fig. 5.8.11 \ i) The dominator relationship - \ Xs Node 1 dominates every node Node 2 dominates 3, 4 and 5. a) 123 b) 124, o) 12 “ dominates 6. a) 5-6 Fig. 5.8.12 ii) Immediate dominator 5 or 124-5 Node 5 © a) Node 2 is immediate dominator (idom) of 3 b) Node 2 is idom of 4 ©) Node 5 is idom of 6. TECHNICAL PUBLICATIONS’ > pesign 5-39 Code Optimization noe! ji) Dominator tree iv) Depth first ordering, OD 25-56 ” 8.13 © ® 2) Reducible g b > Q @ Fig. 5.8.15 Natural loops : 5 > 2 ie. 2 dom 5 Determine the basic blocks of instructions. Control Flow Graph (CFG) and the CFG dominator tree for the following code. o1 a=1 02 0 03 LO: a=a+l o4 bape 05 if (a > b) 06 LI a=3 07 if (b > a) goto L2 08 babel 09 goto L1 10 12 a=b nN b=p+q R if (a > b) goto LO B13: Heptg m=M+b return (2 LAU : May-18, Marks 7 | TECHNICAL PUBLICATIONS” - an up-thrust for knowledge Compiler Design 5-40 Code Optimization Bt 82 b=pe if (a>b) goto L3 Urs 83 if (b>a) goto L2 =o Jay Fig. 5.8.16 The CFG can be drawn as shown in Fig. 5.8.17. Here 1 dominates all the nodes i.e. 2, 3, 4, 5, 6. r® 2 dominates 3, 4, 5, 6 3 dominates 4 @) 4 dominates no node | 5 dominates no node ©) | 6 dominates no node L@ 6 Fig. 5.8.17 : TECHNICAL PUBLICATIONS” an up-thrust for knowledge > jer Desion 5-41 Code Optimization dom tree can be constructed as follows - pence the Fig. 5.8.18 BLED 4 simple matrix multiplication program is given below : for (i = 0, in; i++) for (j=0, jem; j+#) c li] [jl = 00; for (i=0, icn; i++) for (j=0, jen; j++) {for (k=0, ken; k++) cliJlj] = clil{j] + aliltk] * IKI]: ram into’ three-address statements. Assume the matrix entries are ’) Translate the prog’ row-major order. numbers that require 8 bytes, and that matrics are stored in ii) Construct the flow graph for the code from 1. iti) Identify the loops in the flow graph from 2. ‘AU : May-19, Marks 7 + 6 + 2 Solution : (i) Three address code for above given code is as follows - iso 2) ifi >= n goto(13) 3) j=0 4) if} >= n goto(11) Stony 6) Rau4j 7 B=2+R ae TECHNICAL PUBLICATIONS® - an upthnust for knowledge Compiler Design 5-42 Code Optimizatc 8) c{t3] = 0.0 9 j=j+l 10) goto(4) VW) isi+d 12) goto(2) 13) i=0 14) if i >= n goto(40) 15) j=0 16) if | >= n goto(38) 17) k=0 18) if k >= n goto(36) 19) t#=n*i 20) 5 =th4j 21) t6=t5*8 22) 7 = cf{t6] 23) t8=n*i 24) 19 =t8 +k 25) t10=19*8 26) t11 = a[ti0] 27) tl2=n*k 28) t13 = t12 +j 29) t14 = t13*8 30) 15 = b[ti4) 31) t16 = tll * t15 32) 17 = t7 + 16 33) c[t6] = t17 34) k=k41 35) goto(18) 36) j=jt+l TECHNICA! biics i... 
—_ - Design nor Das an gorott6) wizitl yy goto(l4) «iy The basic blocks can be allocated as follo ws = po oD i=0 p22) if i >= n goto(13) Bp 3) j=0 BE 4) if} >= n goto(11) BS 5)tl=n*i 6) t=th+j 7) 3 =t2*8 8) c{t3] = 0.0 9 j=j+l 10) goto(4) Be 11) +1 12). goto(2) By 13) i=0 BB 14) if i >= n goto(40) 15) j=0 8 1016) if j >= n goto(38) ‘N17 k=0 © TECHNICAL PUBLICATIONS’ ‘an up-thrust for knowledge Code Optimization Compiler Design B12 B13 Bi4 B15 18) if k >= n goto(36) 19) thenti 20) 21) 22) 23) 24) 19 25) 26) 27) 28) 29) 30) 31) 32) 33) 34) 35) = t+ j to=t5 "8 17 = c{t6] t=nti =t8+k to = 19 *8 t11 = a[ti0] tl2=n*k t13 = t2+j tl4 = t13*8 t15 = b[ti4] t16 = tl * t15 tl7 = t7 + t16 c[t6] = t17 k=k+1 goto(18) 36) j=j+l 37) goto(16) 38) i=it+] 39) goto(14) Code Optimization , ne vw graph can be constructed as follows - » flo re 5-45 Code Optimization pesia”” ENTRY | Bt EXIT Fig. 5.18.19 Compiler Design 46 _Code Optimization (ii) Loops in Flow graph are (B2, B3, B4, Bo} (Bd, BS) (BS, B9, B10, BIS) (B10, BLL, B12, Bld) © (B12, B13) Deroy 1, Write an algorithm to construct the natural loop of a back edge. AU : May-11, Marks 6, Dec.-16, Marks 8 2. Write short note on global data flow analysis. CO Sse 3. Discuss in detail about global data flow analysis. EX] Global Data Flow Analysis * The local optimization has a very restricted scope on the other hand the global optimization is applied over a broad scope such as procedure or function body. * For a global optimization a program is represented in the form of program flow graph. The program flow graph is a graphical representation in which each node represents the basic block and edges represent the flow of control from one block to another. © There are two types of analysis performed for global optimizations : control flow analysis and data flow analysis. [> Control flow analysis Global optimization (+ Data flow analysis EEX] contro! and Data Flow Analysis The control flow analysis determines the information regarding arrangement of graph nodes (basic blocks), presence of loops, nesting of loops and nodes visited befo! execution of a specific node. Using this analysis optimization can be performed. carefull Thus in control flow analysis the analysis is made on the flow of control b: examining the program flow graph. In data flow analysis the anal is is made on the data flow. That is the data flo} analysis determines the information regarding the d. ition and use of the data in # TECHNICAL PUBLICA’ TIONS® ~ an up-thrust for knowledge pier DESHI" in 5-47 Code Optimization on, Using this kind of analysis optin ‘sis mization can b ‘an be done. The data flow analysis is progr vy a process in which the values alues are computed using data fl data flow properties. ee Build the control flow fesro a ’ flow graph for following code L c.zd-b satb if (d>i) goto E f=b-4 ki =d-2 pizatf if(di) gotoE Fig. 5.9.4 1h ieATIONS”. ~ an up-thrust for knowledge Compiler Design 5M Cale Opinion LED yyy tet conte ctimination tothe input code given tn (CEG) conteal flaw graph 80 Liaseke2 co=d-b bowatb i (d > i) goto E a1 B2 Fig. 5.9.2 (a) Solution : In the given flow graph various expressions that we come across are - a+b, k+2, d-b, b-d, d-2, at+f, e+k, d+b In block BI we get c : = d~b but c is not used further. In block B3 we use c but in that block we freshly obtain c : = d + b. 
Hence we can eliminate a dead code c:=d~ b, Similarly in block B3, computing of f : = e + k has no meaning because there is not link from B3, further to any other block and value of 'f' is not used in B3 ate f: =e + k, Finally after eliminating the dead code we get a flow graph as shown in Fig. 5.9.2 (b) at all. Hence we can eli Liar=k+2 (40>) gotoE B3 Fig, 5.9.2 (b) TECHNICAL PUBLICATIONS” re - «an up-thrust for knowledge 5 wv Code Optimization Write short note on global data low analysis. | 4 prscuss in detail about glob data flow analysis rite short note on global data flow analysis. tail about global data flow analysis. CE po Data Flow Properties discussing the data flow properties consider some basic terminologies tl Before hat will gsed while giving the data flow property 4 program point containing the definition is called definition . point. |A program point at which a reference to a data item is made is called reference point +A program point at which some evaluating expression evaluation point. For example : is given is called Definition point Reference point Evaluation point Fig. 5.10.1 Program points 1. Available expression : An expression + '§ available at a program point w if and only if along, all paths are reaching to is said to be avail be available s its last evaluation alon; fore their use. 1. The expression x+y Jable at its evaluation point if no definition of any operand of ig the path. In other word, 2. The expression vty is said to en (here either x or y) follow if neither of the two operands get modified bel Compiier Design 5-50 Code Optinnzation For example : Fig. 5.10.2 The expression 4 * i is the available expression for B2, B3 and B4 because this expression is not been changed by any of the block before appearing in B4. Advantage of available expression * The use of available expression is to eliminate common sub expressions. 2. Reaching definitions Fig. 5.10.3 A definition D reaches at point p if there is a path from D to p if there is a pa from D to p along which D is not killed. A definition D of variable x is killed when there is a redefinition of x. For example : Fig. 5.10.4 The definition dl is said to a reaching definition for block B2. But the definition a is not a reaching definition in block B3, because it is killed by definition d2 in block B2 TECHNICAL PUBLICATIONS® - an up-thrust for knowledge pesia? we a 5-51 Code Optimization reaching definition | ae | we Reaching definitions are used in constant and variable propagation ve variable 5 a variable X is live at come point p if there is a path from p to the exit, along which pe ator of SS used before it is redefined. Otherwise the variable is said to be dead at ot point For example = xis live at block B1, B3, B4 but killed at B6 87 Fig. 5.10.5 Live variables Advantages of live variables + Live variables are useful in register allocation. * Live variables are useful for dead code elimination. 4. Busy expression An expression ¢ is said to be a busy 2 ealuation of e exists along some pat! te ore its evaluation along the path. 
expression along some path pj...p; if and only if h pp) and no definition of any operand exists ean expression ets evaluated ® TECHNICAL PUBLICATIONS™ - ant up-thrust for Knowledge Corde Optirn Compiler Design 5-52 ation Advantage of busy expression + Busy expressions are useful in performing code movement optimization CELE 2 prvmine da foo anatysis we got ene expressions, reaching definitions, live variables and busy variables and these are useful for applying various expression elimination, copy global optimizing transformations such as global common sub propagations, cade movement optimization, elimination of induction variables. Fea as caed | 1. Which variable is to be made live and why ? Write about data flow analysis of structural programs, Explain the dataflow analysis concept with suitable example. RE | 4. Write about data flow analysis of structural programs. 5. We have already discussed "what is common subexpression ?" we have also seen how to eliminate such an expression while performing local optimization. In this section’ we will learn how to perform global optimization using transformations such as common subexpression elimination, copy propagation and induction variable elimination The available expressions allow us to determine if an expression at point p in a flow graph is common subexpression. Using following algorithm we can eliminate common subexpressions. Algorithm - Algorithm for global common expression elimination. Input - A flow graph with available expression. Output - A flow graph after eliminating common subexpression. Method - For every statement s of the form a: = b+c such that b + ¢ is available expression (ie. b+c is available at the beginning of block containing s and neither b not is defined prior to statement s) then perform following steps - 1, Discover the evaluations of b + c that reach in the block containing statement 2. Create a new variable m TECHNICAL PUBLICATIONS® - an up-thust for knowledge What is data flow analysis ? Explain data flow abstraction with examples. EREI Efficient Data Flow Algorithms EREEI Redundant Common Subexpression Elimination 5 Code Optimization copy statement TECHNICAL PUBLICATIONS® - an up-thrust for knowledge Code Optimization compiler Design = 5-55 [BIE Induction Variable A variable i is called an induction variable of loop L if every time the variable i ges values i.e. everytime i either gets incremented or decremented. chan For example - jis an induction variable. For a for loop for i: = 1 to 10. While eliminating induction variables first of all we have to identify all the induction variables. Generally induction variables come in following forms a: =i*b a: = b*i itb fae Dia Where b is a constant and i is an induction variable, basic or otherwise. If b is a basic then a is in the family of j. The a depends on the definition of i. For example - a : = i * b then the triple for a is (j, b, 0). We will understand this concept of writing triple with the help of block. In the block B2 has basic induction variable i because i gets incremented each time of loop L by 1. The family of i contains t because there is an assignment t, : = 4 * i. Hence the triple for tp is Gi 4, 0) B2 if tz < 10 goto B2 L + Induction Matching with variable constant b. Let us see an algorithm for elimination of induction variables. Algorithm - Elimination of induction variables. - A loop L with reaching definition information, loop invariant Input computation and live variable information. 
Output - A flow graph without induction variables TECHNICAL PUBLICATIONS? = an up-thrust for knowledge os Compiler Design 5-56 20 nti ay Method - ‘ 1) Find the induction variable i with triple (i, c, d). Consider a test form if i relop x goto B where i is an induction variable and x is not an induction variahj. ti = c#x tis ttd if j relop t goto B. Where t is a new temporary. Finally delete all assignments to the eliminated induction variables from the loop because these induction variables will be useless. For example Here iin B2 and k in B3 are two induction variables because their values get changed at each iteration, we will create new temporary variables 1 and rp to which induction variables iiand k are assigned 124%k t= ally) + 10 ifts =k goto BE Fig. 5.14.4 TECHNICAL PUBLICATIONS® - an upnrut or 5-87 Code Optimization 81 Note these newly introduced variables B2___Note that now we can simply perform 11 + 40 got effect of next value of ti | 83 _ Note that now we can simply perform 124 to goteeect of next value of 4°k 86 ath reduction Fig, 5.11.2 Flow graph with strent fe eos Te CULL} sum = 0 do 10i= 1" 10 sum = sum + ali) * ali) Write the following (i) Three address code (iii) Local common subexpression (o) Reduction in strength. (ii) Control flow graph elimination (CSE) (iv) Invariant code motion ‘Compiler Design Solution : i) Three address code : sum = 0 i=l Lit) = 4%i ty = addr (a) -4 & = ttt t= 4ti ts = addr (a)-4 te = ts Th) by = ty tt, tg = sum +t, sum = ty tg = itl i= ty ifi <=n GOTO Ly Control flow graph : iti<=nGoTOB, Fig. 5.14.3, Local common subexpression elimination : ts = addr (a) - 4 and 4.* 1. are common subexpression TECHNICAL PUBLICATIONS® . an up-thrust for knowledge 5-59 Code Optimization Jiminate them. The 3 address code will th ss code will then be Hence we iv) Invariant code motion : There is no loop invariant code motion. But we can optimize the code by eliminating ts and ty, sum = 0 ya4ei heb teat tts sum = sum + ty 4 nn GOTO By Fig. 5.11.5 is induction variable. nd the operation 4 * Fig. 5.11.6 y) Reduction in strength : Variable i i i will be replaced by We will eliminate In addition operation. The optimiz duction variable a ed code will be Weber sum = sum + ty itt, <= GOTO Be Fig. 5.11.6 TECHNICA! pp-thnust for knowledge iL PUBLICATIONS” ~ an uj 5-60 | 1. Explain the concept of induction variable 2. Write an algorithm for eliminating the induction variable, Write global common subexpression elimination algorithin with exaniple. What do you understand by the terms copy propagation ? Write an algorithm for copy propagation ERE Recent Trends in Compil 1. Just In Time Compilation (JIT) : r Design * Just in Time compilation is a jynamic translation technique which is also called a, run Hime compilation. In this technique the task of compilation is done during thy execution of a program, JIT compiler has access to dynamic runtime information enabling, it to make beter optimizations. Examples of JIT compilers are - JVM(ava Virtual Machine) in Java, and CLR(Common Language Runtime) in C4. 2. Machine learr 1g based optimization : Machine learning is a branch of artificial intelligence (Al) and compute! which focuses on the use of data and algorithms to imitate the lear, gradually improving its accuracy. 
Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions.

Machine learning techniques are used to optimize code generation and performance efficiently and automatically. In the machine learning based optimization process, training models are created to learn the patterns of the code; using these patterns the code generation is possible.

3. Parallel compilation :

Modern computers make use of multicore CPUs. This facilitates the modern compilers to parallelize the process of compilation.

The effect of parallel compilation is fast execution of larger projects. Examples of parallel compilers are - Paradigm compiler, Polaris compiler.

4. Language-independent compiler infrastructure :

The support for multiple programming languages makes the modern compilers language-independent. This allows the developers to use the same compilation tools to compile and optimize the code generation across different languages.

5. Advanced compiler analysis and optimization techniques :

Today, as we embrace new technology with great changes in architectural design (we now have even smaller devices with embedded processors and micro-chips), highly parallel processing, register allocation, cache hierarchies and high processing speed demands, new compiler analysis and optimization techniques need to be adopted in order to match the increasing demand for better compilers.

Cross Linking Optimization : This method is commonly used in search engine optimization. Today, this method has also been applied in compiler optimization. Cross-linking can be applied locally or globally to functions that contain switch statements with similar tail codes. Since recent computer architectures are focused on code reduction, cross-linking has this as its main goal. When tail codes are spotted in a switch statement, the cross-linking optimization algorithm is used to factor out these codes, thereby reducing the actual size of the code.

5.13 Two Marks Questions with Answers

Q.1 What is the need of code optimization ? OR Why is compiler optimization essential ?
Ans. : The code optimization is used to produce the efficient target code.

Q.2 What are the criteria that need to be considered while applying the code optimization techniques ?
Ans. : There are two important issues that need to be considered while applying the techniques for code optimization and those are,
1. The semantic equivalence of the source program must not be changed. In simple words the meaning of the program must not be changed.
2. The improvement over the program efficiency must be achieved without changing the algorithm of the program.

Q.3 Give the criteria for achieving machine dependant optimization.
Ans. : The machine dependant optimization is obtained using the following criteria -
1) Allocation of a sufficient number of resources to improve the execution efficiency of the program.
