COMPUTER LOGIC DESIGN

M. MORRIS MANO
Professor of Engineering
California State University

PRENTICE-HALL, INC., Englewood Cliffs, New Jersey

Library of Congress Cataloging in Publication Data
MANO, M. MORRIS
Computer logic design
(Prentice-Hall series in automatic computation)
Includes bibliographies.
1. Switching theory. 2. Electronic digital computers--Circuits. I. Title.
TK7868.S9M275 72-668
ISBN 0-13-165472-1

© 1972 by PRENTICE-HALL, INC., Englewood Cliffs, New Jersey

All rights reserved. No part of this book may be reproduced in any form or by any means without permission in writing from the publisher.

10 9 8 7 6 5 4 3 2 1

Prentice-Hall International, Inc., London
Prentice-Hall of Australia, Pty. Ltd., Sydney
Prentice-Hall of Canada, Ltd., Toronto
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo

to my wife

CONTENTS

PREFACE

1 BINARY SYSTEMS 1
1-1 Digital Computers and Digital Systems
1-2 Binary Numbers
1-3 Number Base Conversions
1-4 Octal and Hexadecimal Numbers
1-5 Complements
1-6 Binary Codes
1-7 Binary Storage and Registers
1-8 Binary Logic

2 BOOLEAN ALGEBRA 33
2-1 Basic Definitions
2-2 Axiomatic Definition of Boolean Algebra
2-3 Basic Theorems and Properties of Boolean Algebra
2-4 Boolean Functions
2-5 Canonical Forms
2-6 Other Binary Operators

3 SIMPLIFICATION OF BOOLEAN FUNCTIONS 57
3-1 The Map Method
3-2 Two- and Three-Variable Maps
3-3 Four-Variable Map
3-4 Five- and Six-Variable Maps
3-5 Product of Sums Simplification
3-6 Don't-Care Conditions
3-7 The Tabulation Method
3-8 Determination of Prime-Implicants
3-9 Selection of Prime-Implicants
3-10 Concluding Remarks

4 COMBINATIONAL LOGIC 87
4-1 Introduction to Combinational Logic Circuits
4-2 Design Procedure
4-3 Adders
4-4 Subtractors
4-5 Code Conversion
4-6 Comparators
4-7 Analysis Procedure
4-8 Decoders and Encoders
4-9 Multiplexers and Demultiplexers

5 GATE IMPLEMENTATION 120
5-1 Integrated Circuits
5-2 Digital Logic Gates
5-3 Positive and Negative Logic
5-4 Simplification Criteria
5-5 NAND Logic
5-6 NOR Logic
5-7 Other Two-Level Implementations
5-8 Exclusive-OR and Equivalence Functions

6 SEQUENTIAL LOGIC 165
6-1 Introduction
6-2 Flip-Flops
6-3 Triggering of Flip-Flops
6-4 Analysis of Clocked Sequential Circuits
6-5 Counters
6-6 Shift Registers

7 DESIGN OF CLOCKED SEQUENTIAL CIRCUITS 200
7-1 Introduction
7-2 The State Reduction and Assignment Problems
7-3 Flip-Flop Excitation Tables
7-4 Design Procedure
7-5 Design Examples
7-6 Design of Counters

8 OPERATIONAL AND STORAGE REGISTERS 237
8-1 Register Transfer
8-2 Register Operations
8-3 The Memory Unit
8-4 Examples of Random-Access Memories
8-5 Data Representation in Registers

9 BINARY ADDITION AND THE ACCUMULATOR REGISTER 268
9-1 Introduction
9-2 Parallel Addition
9-3 Carry Propagation
9-4 Serial Addition
9-5 Parallel Versus Serial Operations
9-6 Elementary Operations
9-7 Accumulator Design
9-8 Complete Accumulator
9-9 Accumulator Operations
9-10 Concluding Remarks

10 COMPUTER ORGANIZATION 310
10-1 Stored Program Concept
10-2 Organization of a Simple Digital Computer
10-3 Large General Purpose Computers
10-4 Classes of Machine Instructions
10-5 Character Mode
10-6 Addressing Techniques
11 COMPUTER DESIGN 355
11-1 Introduction
11-2 System Configuration
11-3 Machine Instructions
11-4 Timing and Control
11-5 Execution of Instructions
11-6 Register Operations
11-7 Derivation of Boolean Functions
11-8 Computer Console
11-9 Concluding Remarks

12 LOGIC DESIGN 390
12-1 Introduction
12-2 Decimal Adder
12-3 Control Logic Design
12-4 Asynchronous Transfer
12-5 Design Algorithms
12-6 Binary Multiplication

APPENDIX: Answers to Selected Problems 426

INDEX

PREFACE

A digital system is a system that processes discrete elements of information. The best known example of a digital system is the general purpose digital computer. Logic design is concerned with the interconnections between digital components and modules and is a term used to denote the design of digital systems. The purpose of this book is to present the basic concepts used in the design and analysis of digital systems and to introduce the principles of digital computer organization. The viewpoint of the book is primarily tutorial. It provides various methods and techniques from which the reader hopefully will develop a design philosophy applicable to any digital system problem.

Electronic circuits used in digital systems are invariably manufactured in integrated circuit form. Many integrated circuit packages available commercially contain a large number of interconnected components within a single silicon chip and provide a specific and complete digital function. It is very important that the logic designer be familiar with the various digital functions commonly encountered in integrated circuit packages.
For this reason, many such circuits are introduced throughout the book and their logical function fully explained.

Logic design and switching circuit theory are two interrelated branches of study, and the two names are interchangeable when simple digital systems are considered. Both logic design and switching circuit theory are concerned with the analysis and synthesis of combinational and sequential circuits. Complex digital systems, however, have two different widely used interpretations. From the switching circuit theory point of view, a digital system is described by a state system which is a direct extension of the definition given to a sequential circuit. This point of view asserts that, at any instant of time, a digital system is capable of being in one of many possible abstract states. The behavior of the system is then specified by a transition function that determines the next state from the current state and the inputs. A large system such as a digital computer may be represented, in principle, as a state system, but the number of states is far too large to make this representation practical. When dealing with the logic design of complex digital systems, it is convenient to formulate and define the system by means of register transfers. The components in this representation are computer registers and modules together with a set of functional transfers between registers. With this representation, the digital system is decomposed into register subunits and specified by the operations executed in each register.

Chapters 1 to 7 of this book contain a practical approach to the subject which is sometimes classified under the heading of basic switching circuits. What makes this book different from a switching circuit theory book is the subject matter covered in Chapters 8 through 12, where the register transfer concept is developed and used to describe and design digital systems.
Chapter 1 introduces various binary systems such as binary numbers, binary codes, binary storage, and binary logic. Chapter 2 covers Boolean algebra, and Chapter 3 presents the map and tabulation methods for simplifying Boolean functions. The first three chapters serve as background information and contain basic material needed for the rest of the book.

Chapter 4 presents design and analysis procedures for combinational circuits. Various functions that are available in integrated circuit packages, such as adders, comparators, decoders, and multiplexers, are introduced as design or analysis examples. Chapter 5 starts with a discussion of integrated circuits and their impact on logic design. The chapter continues with a presentation of techniques for NOR and NAND logic implementation and of various other gate implementations.

Chapter 6 on sequential logic starts by presenting the properties of various types of flip-flops such as RS, JK, T, and D. The tools for analyzing clocked sequential circuits (the state diagram, state table, and state equations) are then presented. The logic diagrams of some common clocked sequential circuits, such as binary and decimal counters and registers, are shown and their function explained. The examples given in this analysis chapter are chosen from those that have specific functions and, because of their widespread use, are commonly available in integrated circuit packages.

Chapter 7 introduces procedures for the design of clocked sequential circuits. The topics of state reduction and state assignment are mentioned briefly without introducing any manipulative methods. The emphasis in this chapter is on the methods by which the Boolean functions for the inputs of flip-flops are derived from the state table and simplified by means of maps. Numerous examples are given, including the design of counters.

Chapter 8 introduces a symbolic notation for register transfer operations.
The memory unit is explained in terms of transfers among a collection of storage registers, and the types of data represented in registers are discussed.

The first half of Chapter 9 introduces methods for binary addition. The design of a general purpose accumulator register is then undertaken, starting with a specified set of elementary operations and culminating in a logic diagram. A discussion of various accumulator operations is then presented to emphasize the concept of a general purpose register and its place in large-scale integration design.

Chapter 10 on computer organization presents a description of the most common type of digital system, i.e., the stored program digital computer. The implementation of a simple general purpose computer is explained in terms of register transfer operations. The rest of the chapter discusses many important concepts found in general purpose computers.

A simple digital computer is designed in Chapter 11, starting from a specified set of machine instructions and hardware registers and culminating in a list of Boolean functions that specify the interconnections between the various digital components. The system description is formalized by register transfer symbology, and a procedure is mechanized for translating the register transfer relations into Boolean functions that represent the combinational networks among the registers.

Chapter 12 contains examples that demonstrate the design of control logic and the development of algorithms for the implementation of digital functions.

In summary, the book covers the following main topics. Chapter 1 gives an introduction to binary systems. Chapters 2-5 are concerned with combinational circuits. Chapters 6-7 deal with synchronous clocked sequential circuits. Chapters 8-9 introduce the concept of registers, their function and design.
Chapters 10-11 cover the organization and design of general purpose digital computers, and Chapter 12 is a summary of the various design techniques introduced throughout the book.

Answers to most of the problems appear in the back of the book to provide an aid for the student and to help the independent reader. A solutions manual is available for the instructor from the publisher.

The book is suitable for a two-term course in computer logic design introducing the methods and techniques of analysis and design of digital systems. It can be used in an Electrical Engineering or in a Computer Science curriculum. The book is also suitable for self study by engineers or computer scientists who need to acquire a knowledge of logic design.

The book can also be used in a one-term course in a variety of ways. (1) As a first course in logic design or switching circuits covering Chapters 1 through 8. (2) As a course in computer organization and design, with a prerequisite of a course in basic switching circuit theory, by covering Chapters 8 through 12 with a review of Chapters 6 and 7. (3) As a course in computer organization in a Computer Science Department by covering Chapters 1 through 10 and omitting some of the design sections in Chapters 4, 5, 7, and 9. In fact, this book covers about 80% of the material of the computer organization course number 13 recommended by the report "ACM Curriculum 68" published in the March 1968 issue of the Communications of the ACM.

I wish to express my thanks to Dean Eugene Kopp for his encouragement. Thanks also to Mrs. Ernestine Wellenstein and Mrs. Pat Anderson for typing parts of the manuscript. My greatest thanks go to my wife Sandra for editing the entire manuscript, for typing most of it, and for her encouragement and support of the entire project.
M. MORRIS MANO

1 BINARY SYSTEMS

1-1 DIGITAL COMPUTERS AND DIGITAL SYSTEMS

Digital computers have made possible many scientific, industrial, and commercial advances that would have been unattainable otherwise. Our space program would have been impossible without real time, continuous computer monitoring, and many business enterprises function efficiently only with the aid of automatic data processing. Computers are used in scientific calculations, commercial and business data processing, air traffic control, space guidance, the educational field, and many other areas.

The most striking property of a digital computer is its generality. It can follow a sequence of instructions, called a program, that operates on given data. The user can specify and change programs and/or data according to the specific need. As a result of this flexibility, general purpose digital computers can perform a wide variety of information processing tasks.

The general purpose digital computer is the best known example of a digital system. Other examples include: telephone switching exchanges, digital voltmeters, frequency counters, calculating machines, and teletype machines. Characteristic of a digital system is its manipulation of discrete elements of information. Such discrete elements may be electric impulses, the decimal digits, the letters of an alphabet, arithmetic operations, punctuation marks, or any other set of meaningful symbols. The juxtaposition of discrete elements of information represents a quantity of information. For example, the letters d, o, and g form the word dog. The digits 237 form a number. Thus, a sequence of discrete elements forms a language: that is, a discipline that conveys information. Early digital computers were used mostly for numerical computations. In this case the discrete elements used are the digits. From this application, the term digital computer has emerged.
A more appropriate name for a digital computer would be a "discrete information processing system." Discrete elements of information are represented in a digital system by physical quantities called signals. Electrical signals such as voltages and currents are the most common. The signals in all present-day electronic digital systems have only two discrete values and are said to be binary. The digital system designer is restricted to the use of binary signals because of the lower reliability of many-valued electronic circuits. In other words, a circuit with ten states, using one discrete voltage value for each state, can be designed, but it would possess a very low reliability of operation. In contrast, a transistor circuit that is either on or off has two possible signal values and can be constructed to be extremely reliable. Because of this physical restriction of components, and because human logic tends to be binary, digital systems that are constrained to take discrete values are further constrained to take binary values.

Discrete quantities of information emerge either from the nature of the process or may be purposely quantized from a continuous process. For example, a payroll schedule is an inherently discrete process that contains employee names, social security numbers, weekly salaries, income taxes, etc. An employee's paycheck is processed using discrete data values such as letters of the alphabet (names), digits (salary), and special symbols such as $. On the other hand, a research scientist may observe a continuous process but record only specific quantities in tabular form. The scientist is thus quantizing his continuous data. Each number in his table is a discrete element of information.

Many physical systems can be described mathematically by differential equations whose solutions as a function of time give the complete mathematical behavior of the process. An analog computer performs a direct simulation of a physical system.
Each section of the computer is the analog of some particular portion of the process under study. The variables in the analog computer are represented by continuous signals, usually electric voltages that vary with time. The signal variables are considered analogous to those of the process and behave in the same manner. Thus measurements of the analog voltage can be substituted for variables of the process. The term analog signal is sometimes substituted for continuous signal because "analog computer" has come to mean a computer that manipulates continuous variables.

To simulate a physical process in a digital computer, the quantities must be quantized. When the variables of the process are represented by real-time continuous signals, the latter are quantized by an analog-to-digital conversion device. A physical system whose behavior is described by mathematical equations is simulated in a digital computer by means of numerical methods. When the problem to be processed is inherently discrete, as in commercial applications, the digital computer manipulates the variables in their natural form.

Both analog and digital computers have their advantages. Analog computers are used when problems require fast solutions with limited accuracy or when large numbers of repetitive calculations are required with variations of parameters. A digital computer is used when the data are in discrete form; when high accuracy, logical decision, and control capabilities are required; and when a general purpose machine is advantageous. A hybrid computer is an interconnection of both an analog and a digital computer. This combination possesses the advantages of both computers and is very useful for simulation studies of physical systems.

A block diagram of a small digital computer is shown in Fig. 1-1. The memory unit stores programs as well as input, output, and intermediate data.
The arithmetic unit performs the required processing tasks on data obtained from the memory unit. The control unit supervises the flow of information between the various units. The input and output devices accept programs and data (either prepared by the user or generated by the computer) and transfer them to and from the memory unit. The input and output devices are special purpose digital systems driven by an electromechanical mechanism and controlled by electronic digital circuits. Since they are special purpose machines, they will not be covered in this book. The operational characteristics of a memory unit are explained in Sec. 8-3.

[Figure 1-1 Block diagram of a small digital computer: processor (arithmetic unit), memory (storage) unit, control unit, and input and output devices and control]

The organization of a small digital computer, viewed especially from the operation of the processor and control units, is presented in some detail in Ch. 10. The processor, memory, and control units of a large digital computer are essentially the same as those shown in Fig. 1-1. On the other hand, a large computer has a wide range of input and output equipment and uses special processors to control the information flow between the memory unit and external devices. Such a computer is introduced and discussed in more detail in Sec. 10-3.

It has already been mentioned that a digital computer manipulates discrete elements of information and that these elements are represented in the binary form. Operands used for calculations may be expressed in the binary number system. Other discrete elements, including the decimal digits, are represented in binary codes. Data processing is carried out by means of binary logic elements using binary signals. Quantities are stored in binary storage elements.
The purpose of this chapter is to introduce the various binary concepts as a frame of reference for further detailed study in the succeeding chapters.

1-2 BINARY NUMBERS

A decimal number such as 7392 represents a quantity equal to 7 thousands, plus 3 hundreds, plus 9 tens, plus 2 units. The thousands, hundreds, etc., are powers of 10 implied by the position of the coefficients. To be more exact, 7392 should be written as:

7 x 10^3 + 3 x 10^2 + 9 x 10^1 + 2 x 10^0

However, the convention is to write only the coefficients and from their position deduce the necessary powers of 10. In general, a number with a decimal point is represented by a series of coefficients as follows:

a5 a4 a3 a2 a1 a0 . a-1 a-2 a-3

The a coefficients are one of the 10 digits (0, 1, 2, ..., 9), and the subscript value j gives the place value and, hence, the power of 10 by which the coefficient must be multiplied:

10^5 a5 + 10^4 a4 + 10^3 a3 + 10^2 a2 + 10^1 a1 + 10^0 a0 + 10^-1 a-1 + 10^-2 a-2 + 10^-3 a-3

The decimal number system is said to be of base, or radix, 10 because it uses 10 digits and the coefficients are multiplied by powers of 10. The binary system is a different number system. The coefficients of the binary number system have two possible values: 0 and 1. Each coefficient aj is multiplied by 2^j. For example, the decimal equivalent of the binary number 11010.11 is 26.75, as shown from the multiplication of the coefficients by powers of 2:

1 x 2^4 + 1 x 2^3 + 0 x 2^2 + 1 x 2^1 + 0 x 2^0 + 1 x 2^-1 + 1 x 2^-2 = 26.75

In general, a number expressed in a base-r system has coefficients multiplied by powers of r:

an x r^n + an-1 x r^(n-1) + ... + a2 x r^2 + a1 x r + a0 + a-1 x r^-1 + a-2 x r^-2 + ... + a-m x r^-m

The coefficients aj range in value from 0 to r - 1. To distinguish between numbers of different bases, we enclose the coefficients in parentheses and write a subscript equal to the base used (except sometimes for decimal numbers, where the context makes it obvious that it is decimal). An example of a base 5 number is:

(4021.2)5 = 4 x 5^3 + 0 x 5^2 + 2 x 5^1 + 1 x 5^0 + 2 x 5^-1 = (511.4)10

Note that coefficient values for base 5 can be only 0, 1, 2, 3, and 4. It is customary to borrow the needed r digits for the coefficients from the decimal system when the base of the number is less than 10. The letters of the alphabet are used to supplement the 10 decimal digits when the base of the number is greater than 10. For example, in the hexadecimal (base 16) number system, the first 10 digits are borrowed from the decimal system. The letters A, B, C, D, E, and F are used for the digits 10, 11, 12, 13, 14, and 15, respectively. An example of a hexadecimal number is:

(B65F)16 = 11 x 16^3 + 6 x 16^2 + 5 x 16^1 + 15 x 16^0 = (46687)10

The first 16 numbers in the decimal, binary, octal, and hexadecimal systems are listed in Table 1-1.

Table 1-1 Numbers with Different Bases

Decimal    Binary    Octal     Hexadecimal
(base 10)  (base 2)  (base 8)  (base 16)
   00       0000       00         0
   01       0001       01         1
   02       0010       02         2
   03       0011       03         3
   04       0100       04         4
   05       0101       05         5
   06       0110       06         6
   07       0111       07         7
   08       1000       10         8
   09       1001       11         9
   10       1010       12         A
   11       1011       13         B
   12       1100       14         C
   13       1101       15         D
   14       1110       16         E
   15       1111       17         F

Arithmetic operations with numbers in base r follow the same rules as for decimal numbers. When a base other than the familiar base 10 is used, one must be careful to use only the r allowable digits. Examples of addition, subtraction, and multiplication of two binary numbers are shown below:

    augend:   101101      minuend:    101101      multiplicand:   1011
    addend:   100111      subtrahend: 100111      multiplier:      101
                                                                  ----
    sum:     1010100      difference: 000110                      1011
                                                                 0000
                                                                1011
                                                  product:      110111

The sum of two binary numbers is calculated by the same rules as in decimal, except that the digits of the sum in any significant position can be only 0 or 1. Any "carry" obtained in a given significant position is used by the pair of digits one significant position higher. The subtraction is slightly more complicated. The rules are still the same as in decimal, except that the "borrow" in a given significant position adds 2 to a minuend digit.
(A borrow in the decimal system adds 10 to a minuend digit.) Multiplication is very simple. The multiplier digits are always 1 or 0. Therefore, the partial products are equal either to the multiplicand or to 0.

1-3 NUMBER BASE CONVERSIONS

A binary number can be converted to decimal by forming the sum of the powers of 2 of those coefficients whose value is 1. For example:

(1010.011)2 = 2^3 + 2^1 + 2^-2 + 2^-3 = (10.375)10

The binary number has four 1's, and the decimal equivalent is found from the sum of four powers of 2. Similarly, a number expressed in base r can be converted to its decimal equivalent by multiplying each coefficient by the corresponding power of r and adding. The following is an example of octal-to-decimal conversion:

(630.4)8 = 6 x 8^2 + 3 x 8^1 + 4 x 8^-1 = (408.5)10

The conversion from decimal to binary or to any other base-r system is more convenient if the number is separated into an integer part and a fraction part and the conversion of each part done separately. The conversion of an integer from decimal to binary is best explained by example.

EXAMPLE 1-1. Convert decimal 41 to binary. First, 41 is divided by 2 to give an integer quotient of 20 and a remainder of 1. The quotient is again divided by 2 to give a new quotient and remainder. This process is continued until the integer quotient becomes 0. The coefficients of the desired binary number are obtained from the remainders as follows:

41 / 2 = 20, remainder 1    (a0 = 1)
20 / 2 = 10, remainder 0    (a1 = 0)
10 / 2 =  5, remainder 0    (a2 = 0)
 5 / 2 =  2, remainder 1    (a3 = 1)
 2 / 2 =  1, remainder 0    (a4 = 0)
 1 / 2 =  0, remainder 1    (a5 = 1)

ANSWER: (41)10 = (a5 a4 a3 a2 a1 a0)2 = (101001)2

The arithmetic process can be manipulated more conveniently as follows:

integer    remainder
   41
   20          1
   10          0
    5          0
    2          1
    1          0
    0          1     read upward: 101001 = answer

The conversion from decimal integers to any base-r system is similar to the above example, except that division is done by r instead of 2.

EXAMPLE 1-2. Convert decimal 153 to octal. The required base r is 8. First, 153 is divided by 8 to give an integer quotient of 19 and a remainder of 1.
Then 19 is divided by 8 to give an integer quotient of 2 and a remainder of 3. Finally, 2 is divided by 8 to give a quotient of 0 and a remainder of 2. This process can be conveniently manipulated as follows:

153 / 8 = 19, remainder 1
 19 / 8 =  2, remainder 3
  2 / 8 =  0, remainder 2

ANSWER: (153)10 = (231)8

The conversion of a decimal fraction to binary is accomplished by a method similar to that used for integers. However, multiplication is used instead of division, and integers are accumulated instead of remainders. Again the method is best explained by example.

EXAMPLE 1-3. Convert (0.6875)10 to binary. First, 0.6875 is multiplied by 2 to give an integer and a fraction. The new fraction is multiplied by 2 to give a new integer and a new fraction. This process is continued until the fraction becomes 0 or until the number of digits has sufficient accuracy. The coefficients of the binary number are obtained from the integers as follows:

                 integer    fraction    coefficient
0.6875 x 2  =       1     +  0.3750      a-1 = 1
0.3750 x 2  =       0     +  0.7500      a-2 = 0
0.7500 x 2  =       1     +  0.5000      a-3 = 1
0.5000 x 2  =       1     +  0.0000      a-4 = 1

ANSWER: (0.6875)10 = (0.a-1 a-2 a-3 a-4)2 = (0.1011)2

In converting a decimal fraction to a number expressed in base r, the procedure is similar. Multiplication is by r instead of 2, and the coefficients found from the integers may range in value from 0 to r - 1 instead of 0 and 1.

EXAMPLE 1-4. Convert (0.513)10 to octal.

0.513 x 8 = 4.104
0.104 x 8 = 0.832
0.832 x 8 = 6.656
0.656 x 8 = 5.248
0.248 x 8 = 1.984
0.984 x 8 = 7.872

The answer, to six significant figures, is obtained from the integer part of the products:

(0.513)10 = (0.406517...)8

The conversion of decimal numbers with both integer and fraction parts is done by converting the integer and fraction separately and then combining the two answers. Using the results of Examples 1-1 and 1-3, we obtain:

(41.6875)10 = (101001.1011)2

From Examples 1-2 and 1-4, we have:

(153.513)10 = (231.406517...)8
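The two mechanical procedures of Examples 1-1 through 1-4, repeated division for the integer part and repeated multiplication for the fraction part, can be sketched in Python. This is a modern illustration of the method, not part of the original text, and the function names are chosen for this sketch:

```python
DIGITS = "0123456789ABCDEF"

def int_to_base(n, r):
    """Convert a non-negative integer to base r by repeated division,
    collecting the remainders (least significant digit first)."""
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        n, rem = divmod(n, r)   # quotient feeds the next step; remainder is a digit
        out = DIGITS[rem] + out
    return out

def frac_to_base(f, r, places):
    """Convert a fraction 0 <= f < 1 to base r by repeated multiplication,
    collecting the integer parts (most significant digit first)."""
    out = ""
    for _ in range(places):
        f *= r
        d = int(f)       # the integer part is the next digit
        out += DIGITS[d]
        f -= d           # keep only the new fraction
    return out

print(int_to_base(41, 2))           # 101001  (Example 1-1)
print(int_to_base(153, 8))          # 231     (Example 1-2)
print(frac_to_base(0.6875, 2, 4))   # 1011    (Example 1-3)
print(frac_to_base(0.513, 8, 6))    # 406517  (Example 1-4)
```

Note that the integer digits come out in reverse order (remainders are read upward in the hand method), which is why `int_to_base` prepends each new digit while `frac_to_base` appends.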
1-4 OCTAL AND HEXADECIMAL NUMBERS

The conversion from and to binary, octal, and hexadecimal plays an important part in digital computers. Since 2^3 = 8 and 2^4 = 16, each octal digit corresponds to three binary digits and each hexadecimal digit corresponds to four binary digits. The conversion from binary to octal is easily accomplished by partitioning the binary number into groups of three digits each, starting from the binary point and proceeding to the left and to the right. The corresponding octal digit is then assigned to each group. The following example illustrates the procedure:

(10 110 001 101 011 . 111 100 000 110)2 = (26153.7406)8
  2   6   1   5   3     7   4   0   6

Conversion from binary to hexadecimal is similar, except that the binary number is divided into groups of four digits:

(10 1100 0110 1011 . 1111 0010)2 = (2C6B.F2)16
  2    C    6    B      F    2

The corresponding hexadecimal (or octal) digit for each group of binary digits is easily remembered after studying the values listed in Table 1-1. Conversion from octal or hexadecimal to binary is done by a procedure reverse to the above. Each octal digit is converted to its three-digit binary equivalent. Similarly, each hexadecimal digit is converted to its four-digit binary equivalent. This is illustrated in the following examples:

(673.124)8 = (110 111 011 . 001 010 100)2
  6   7   3     1   2   4

(306.D)16 = (0011 0000 0110 . 1101)2
  3    0    6      D

Binary numbers are difficult to work with because they require three or four times as many digits as their decimal equivalents. For example, the binary number 111111111111 is equivalent to decimal 4095. However, digital computers use binary numbers, and it is sometimes necessary for the human operator or user to communicate directly with the machine by means of binary numbers.
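The grouping rule just described (three bits per octal digit, four per hexadecimal digit) can be sketched in Python for binary integer strings; the helper name is illustrative, not from the text:

```python
DIGITS = "0123456789ABCDEF"

def bin_to_base(bits, group):
    """Convert a binary integer string to octal (group=3) or hexadecimal
    (group=4) by padding on the left to a multiple of `group` and then
    converting each group of bits independently."""
    pad = (-len(bits)) % group          # zeros needed to complete the leftmost group
    bits = "0" * pad + bits
    return "".join(DIGITS[int(bits[i:i + group], 2)]
                   for i in range(0, len(bits), group))

print(bin_to_base("111111111111", 3))   # 7777  (twelve 1's in octal)
print(bin_to_base("111111111111", 4))   # FFF   (twelve 1's in hexadecimal)
print(bin_to_base("10110001101011", 3)) # 26153 (integer part of the example above)
```

Fractional parts would be handled the same way, except the padding zeros go on the right of the fraction rather than on the left.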
One scheme that retains the binary system in the computer but reduces the number of digits the human must consider utilizes the relationship between the binary number system and the octal or hexadecimal systems. By this method, the human thinks in terms of octal or hexadecimal numbers and performs the required conversion by inspection when direct communication with the machine is necessary. Thus the binary number 111111111111 has 12 digits and is expressed in octal as 7777 (four digits) or in hexadecimal as FFF (three digits). During communication between people (about binary numbers in the computer), the octal or hexadecimal representation is more desirable because it can be expressed more compactly with 1/3 or 1/4 the number of digits required for the equivalent binary number. When the human communicates with the machine (through console switches or indicator lights, or by means of programs written in machine language), the conversion from octal or hexadecimal to binary and vice versa is done by inspection by the human user.

1-5 COMPLEMENTS

Complements are used in digital computers for simplifying the subtraction operation and for logical manipulations. There are two types of complements for each base-r system: (a) the r's complement and (b) the (r - 1)'s complement. When the value of the base is substituted, the two types receive the names 2's and 1's complement for binary numbers, or 10's and 9's complement for decimal numbers.

The r's Complement

Given a positive number N in base r with an integer part of n digits, the r's complement of N is defined to be r^n - N for N not equal to 0, and 0 for N = 0. The following numerical examples will help clarify the definition.

The 10's complement of (52520)10 is 10^5 - 52520 = 47480. The number of digits in the number is n = 5.

The 10's complement of (0.3267)10 is 1 - 0.3267 = 0.6733. There is no integer part, so 10^n = 10^0 = 1.

The 10's complement of (25.639)10 is 10^2 - 25.639 = 74.361.

The 2's complement of (101100)2 is (2^6)10 - (101100)2 = (1000000 - 101100)2 = 010100.

The 2's complement of (0.0110)2 is (1 - 0.0110)2 = 0.1010.

From the definition and the examples, it is clear that the 10's complement of a decimal number can be formed by leaving all least significant zeros unchanged, subtracting the first nonzero least significant digit from 10, and then subtracting all other higher significant digits from 9. The 2's complement can be formed by leaving all least significant zeros and the first nonzero digit unchanged, and then replacing 1's by 0's and 0's by 1's in all other higher significant digits. A third, simpler method for obtaining the r's complement is given after the definition of the (r - 1)'s complement.

The (r - 1)'s Complement

Given a positive number N in base r with an integer part of n digits and a fraction part of m digits, the (r - 1)'s complement of N is defined to be r^n - r^-m - N. Some numerical examples follow:

The 9's complement of (52520)10 is (10^5 - 1 - 52520) = 99999 - 52520 = 47479. There is no fraction part, so 10^-m = 10^0 = 1.

The 9's complement of (0.3267)10 is (1 - 10^-4 - 0.3267) = 0.9999 - 0.3267 = 0.6732. There is no integer part, so 10^n = 10^0 = 1.

The 9's complement of (25.639)10 is (10^2 - 10^-3 - 25.639) = 99.999 - 25.639 = 74.360.

The 1's complement of (101100)2 is (2^6 - 1) - (101100) = (111111 - 101100)2 = 010011.
The 1's complement of (0.0110)2 is (1 - 2^-4) - (0.0110)2 = (0.1111 - 0.0110)2 = 0.1001.

From the examples, we see that the 9's complement of a decimal number is formed simply by subtracting every digit from 9. The 1's complement of a binary number is even simpler to form: the 1's are changed to 0's and the 0's to 1's. Since the (r - 1)'s complement is very easily obtained, it is sometimes convenient to use it when the r's complement is desired. From the definitions, and from a comparison of the results obtained in the examples, it follows that the r's complement can be obtained from the (r - 1)'s complement by adding r^-m, i.e., a 1 in the least significant position. For example, the 2's complement of 10110100 is obtained from the 1's complement 01001011 by adding 1, to give 01001100.

It is worth mentioning that the complement of the complement restores the number to its original value. The r's complement of N is r^n - N, and the complement of (r^n - N) is r^n - (r^n - N) = N; and similarly for the 1's complement.

Subtraction with r's Complement

The direct method of subtraction taught in elementary schools uses the borrow concept. By this method, we borrow a 1 from a higher significant position when the minuend digit is smaller than the corresponding subtrahend digit. This seems easiest when people perform subtraction with paper and pencil. When subtraction is implemented by means of digital components, this method is found to be less efficient than the method that uses complements and addition, as stated below.

The subtraction of two positive numbers (M - N), both of base r, may be done as follows:

(1) Add the minuend M to the r's complement of the subtrahend N.
(2) Inspect the result obtained in step (1) for an end carry:
    (a) If an end carry occurs, discard it.
    (b) If an end carry does not occur, take the r's complement of the number obtained in step (1) and place a negative sign in front.
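The complement rules and the two-step subtraction method can be sketched in Python for binary integers. This is an illustrative sketch of my own, not from the text; the function names are invented, and numbers are handled as bit strings:

```python
def ones_complement(bits):
    # (r - 1)'s complement in base 2: change 1's to 0's and 0's to 1's
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    # r's complement = (r - 1)'s complement plus 1 in the least significant digit
    n = len(bits)
    return format((int(ones_complement(bits), 2) + 1) % (1 << n), "0{}b".format(n))

def subtract(m, n):
    # Step 1: add the minuend M to the 2's complement of the subtrahend N.
    width = max(len(m), len(n))
    m, n = m.zfill(width), n.zfill(width)
    total = int(m, 2) + int(twos_complement(n), 2)
    # Step 2: if an end carry occurs, discard it; otherwise take the
    # 2's complement of the result and place a negative sign in front.
    if total >= (1 << width):
        return format(total - (1 << width), "0{}b".format(width))
    return "-" + twos_complement(format(total, "0{}b".format(width)))

print(subtract("1010100", "1000100"))   # prints 0010000
print(subtract("1000100", "1010100"))   # prints -0010000
```

The two calls reproduce the two cases of Example 1-7 below: an end carry that is discarded, and a missing end carry that forces a complement-and-negate step.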
The following examples illustrate the procedure:

EXAMPLE 1-5. Using 10's complement, subtract 72532 - 3250.

    M = 72532
    N = 03250,  10's complement of N = 96750
    72532 + 96750 = 1 69282
    Discard the end carry.
    ANSWER: 69282

EXAMPLE 1-6. Subtract (3250 - 72532)10.

    M = 03250
    N = 72532,  10's complement of N = 27468
    03250 + 27468 = 30718 (no end carry)
    ANSWER: -69282 = -(10's complement of 30718)

EXAMPLE 1-7. Use 2's complement to perform M - N with the given binary numbers.

(a) M = 1010100
    N = 1000100,  2's complement of N = 0111100
    1010100 + 0111100 = 1 0010000
    Discard the end carry.
    ANSWER: 10000

(b) M = 1000100
    N = 1010100,  2's complement of N = 0101100
    1000100 + 0101100 = 1110000 (no end carry)
    ANSWER: -10000 = -(2's complement of 1110000)

The proof of the procedure is as follows. The addition of M to the r's complement of N gives (M + r^n - N). For numbers having an integer part of n digits, r^n is equal to a 1 in the (n + 1)th position (what has been called the "end carry"). Since both M and N are assumed to be positive, then:

(a) (M + r^n - N) >= r^n if M >= N, or
(b) (M + r^n - N) < r^n if M < N.

[Figure 1-6 Symbols for digital logic circuits: (a) two-input AND gate; (b) two-input OR gate; (c) NOT gate or inverter; (d) three-input AND gate F = ABC; (e) four-input OR gate G = A + B + C + D]

The input signals x and y in the two-input gates of Fig. 1-6 may exist in one of four possible states: 00, 10, 11, or 01. These input signals are shown in Fig. 1-7, together with the output signals for the AND and OR gates. The timing diagrams of Fig. 1-7 illustrate the response of each circuit to each of the four possible input binary combinations. The reason for the name inverter for the NOT gate is apparent from a comparison of the signal x (input of the inverter) and that of x' (output of the inverter).

[Figure 1-7 Input-output signals for gates (a), (b), and (c) of Figure 1-6]

AND and OR gates may have more than two inputs.
An AND gate with three inputs and an OR gate with four inputs are shown in Fig. 1-6. The three-input AND gate responds with a logic-1 output if all three input signals are logic-1; its output produces a logic-0 signal if any input is logic-0. The four-input OR gate responds with a logic-1 when any input is a logic-1; its output becomes logic-0 only if all input signals are logic-0.

The mathematical system of binary logic is better known as Boolean, or switching, algebra. This algebra is conveniently used to describe the operation of complex networks of digital circuits. Designers of digital systems use Boolean algebra to transform circuit diagrams into algebraic expressions and vice versa. Chapters 2 and 3 are devoted to the study of Boolean algebra, its properties, and its manipulative capabilities. We shall return to the subject of logic gates in Ch. 4 and show how Boolean algebra may be used to express mathematically the interconnections among networks of gates.

PROBLEMS

1-1. Write the first 20 decimal numbers in base 3.

1-2. Add and multiply the following numbers in the given base without converting to decimal.
(a) (1230)4 and (23)4
(b) (135.4)6 and (43.2)6
(c) (367)8 and (715)8
(d) (296)12 and (57)12

1-3. Convert the decimal number 250.5 to base 3, base 4, base 7, base 8, and base 16.

1-4. Convert the following decimal numbers to binary: 12.0625, 10^4, 673.23, and 1998.

1-5. Convert the following binary numbers to decimal: 1010001, 1011100101, 1110101.110, 1101101.111.

1-6. Convert the following numbers from the given base to the required bases.
(a) decimal 225.225 to binary, octal, and hexadecimal
(b) binary 11010111.110 to decimal, octal, and hexadecimal
(c) octal 623.77 to decimal, binary, and hexadecimal
(d) hexadecimal 24C5.D to decimal, octal, and binary

1-7. Convert the following numbers to decimal.
(a) (1001001011)2    (e) (0.342)6
(b) (12121)3         (f) (50)7
(c) (1032.2)4        (g) (8.3)9
(d) (4310)5          (h) (198)12
1-8. Obtain the 1's and 2's complements of the following binary numbers: 1010101, 0111000, 0000001, 10000, 00000.

1-9. Obtain the 9's and 10's complements of the following decimal numbers: 13579, 09900, 90090, 10000, 00000.

1-10. Find the 10's complement of (935)11.

1-11. Perform the subtraction with the following decimal numbers using (1) 10's complement and (2) 9's complement. Check the answer by straight subtraction.
(a) 5250 - 321    (c) 753 - 864
(b) 3570 - 2100   (d) 20 - 1000

1-12. Perform the subtraction with the following binary numbers using (1) 2's complement and (2) 1's complement. Check the answer by straight subtraction.
(a) 11010 - 1101    (c) 10010 - 10011
(b) 11010 - 10000   (d) 100 - 110000

1-13. Prove the procedure stated in Sec. 1-5 for the subtraction of two numbers with (r - 1)'s complement.

1-14. For the weighted codes (a) 3,3,2,1 and (b) 4,4,3,-2 for the decimal digits, determine all possible tables so that the 9's complement of each decimal digit is obtained by changing 1's to 0's and 0's to 1's.

1-15. Represent the decimal number 8620 (a) in BCD, (b) in excess-3 code, (c) in 2,4,2,1 code, and (d) as a binary number.

1-16. A binary code uses 10 bits to represent each of the 10 decimal digits. Each digit is assigned a code of nine 0's and a 1. The code for digit 6, for example, is 0001000000. Determine the binary code for the remaining decimal digits.

1-17. Obtain the weighted binary code for the base 12 digits using weights of 5421.

1-18. Determine the odd parity bit generated when the message is the 10 decimal digits in the 8,4,-2,-1 code.

1-19. Determine two other combinations for a reflected code other than the one shown in Table 1-4.

1-20. Obtain a binary code to represent all base 6 digits so that the 5's complement is obtained by replacing 1's by 0's and 0's by 1's in the bits of the code.

1-21. Assign a binary code in some orderly manner to the 52 playing cards. Use the minimum number of bits.
1-22. Write your first name, middle initial, and last name in an eight-bit code made up of the seven ASCII bits of Table 1-5 and an even parity bit in the most significant position. Include blanks between names and a period after the middle initial.

1-23. Show the bit configuration of a 24-cell register when its content represents (a) the number (295)10 in binary, (b) the decimal number 295 in BCD, (c) the characters XYS in EBCDIC.

1-24. The state of a 12-cell register is 010110010111. What is its content if it represents (a) three decimal digits in BCD, (b) three decimal digits in excess-3 code, (c) three decimal digits in 2,4,2,1 code, (d) two characters in the internal code of Table 1-5?

1-25. Show the contents of all registers in Fig. 1-3 if the two binary numbers added have the decimal equivalents of 257 and 1050.

1-26. Express the following switching circuit in binary logic notation.

[Figure: switching circuit with switches A and B and a voltage source]

1-27. Show the signals (by means of a diagram similar to Fig. 1-7) of the outputs F and G in Fig. 1-6. Use arbitrary binary signals for the inputs A, B, C, and D.

BOOLEAN ALGEBRA

2-1 BASIC DEFINITIONS

Boolean algebra, like any other deductive mathematical system, may be defined with a set of elements, a set of operators, and a number of unproved axioms or postulates. A set of elements is any collection of objects having a common property. If S is a set, and x and y are certain objects, then x ∈ S denotes that x is a member of the set S, while y ∉ S denotes that y is not an element of S. A set with a denumerable number of elements is specified by curly brackets: A = {1, 2, 3, 4}; i.e., the elements of set A are the numbers 1, 2, 3, and 4. A binary operator defined on a set S of elements is a rule that assigns to each pair of elements from S a unique element from S. As an example, consider the relation a * b = c; we say that * is a binary operator if it specifies a rule for finding c from the pair (a, b) and also if a, b, c ∈ S.
However, * is not a binary operator if a, b ∈ S while the rule finds c ∉ S.

The postulates of a mathematical system form the basic assumptions from which it is possible to deduce the rules, theorems, and properties of the system. The most common postulates used to formulate various algebraic structures are:

1. Closure. A set S is closed with respect to a binary operator if, for every pair of elements of S, the binary operator specifies a rule for obtaining a unique element of S. For example, the set of natural numbers N = {1, 2, 3, 4, ...} is closed with respect to the binary operator plus (+) by the rules of arithmetic addition, since for any a, b ∈ N we obtain a unique c ∈ N by the operation a + b = c. The set of natural numbers is not closed with respect to the binary operator minus (-) by the rules of arithmetic subtraction, because 2 - 3 = -1, and 2, 3 ∈ N while (-1) ∉ N.

2. Associative law. A binary operator * on a set S is said to be associative whenever

    (x * y) * z = x * (y * z)    for all x, y, z ∈ S

3. Commutative law. A binary operator * on a set S is said to be commutative whenever

    x * y = y * x    for all x, y ∈ S

4. Identity element. A set S is said to have an identity element with respect to a binary operation * on S if there exists an element e ∈ S with the property

    e * x = x * e = x    for every x ∈ S

Example: The element 0 is an identity element with respect to the operation + on the set of integers I = {..., -3, -2, -1, 0, 1, 2, 3, ...}, since

    x + 0 = 0 + x = x    for any x ∈ I

The set of natural numbers N has no identity element, since 0 is excluded from the set.

5. Inverse. A set S having the identity element e with respect to a binary operator * is said to have an inverse whenever, for every x ∈ S, there exists an element y ∈ S such that

    x * y = e

Example: In the set of integers I with e = 0, the inverse of an element a is (-a), since a + (-a) = 0.

6. Distributive law.
If * and · are two binary operators on a set S, * is said to be distributive over · whenever

    x * (y · z) = (x * y) · (x * z)

An example of an algebraic structure is a field. A field is a set of elements, together with two binary operators, each having properties 1 to 5, and both operators combining to give property 6. The set of real numbers, together with the binary operators + and ·, forms the field of real numbers. The field of real numbers is the basis for arithmetic and ordinary algebra. The operators and postulates have the following meanings:

The binary operator + defines addition.
The additive identity is 0.
The additive inverse defines subtraction.
The binary operator · defines multiplication.
The multiplicative identity is 1.
The multiplicative inverse of a is 1/a and defines division; i.e., a · (1/a) = 1.
The only distributive law applicable is that of · over +:

    a · (b + c) = (a · b) + (a · c)

2-2 AXIOMATIC DEFINITION OF BOOLEAN ALGEBRA

George Boole (1) in 1854 introduced a systematic treatment of logic and developed for this purpose an algebraic system now called Boolean algebra. C. E. Shannon (2) in 1938 introduced a two-valued Boolean algebra called switching algebra, in which he demonstrated that the properties of bistable electrical switching circuits can be represented by this algebra. For the formal definition of Boolean algebra, we shall employ the postulates formulated by E. V. Huntington (3) in 1904. These postulates or axioms are not unique for defining Boolean algebra; other sets of postulates have been used.* Boolean algebra is an algebraic structure defined on a set of elements B together with two binary operators + and ·, provided the following (Huntington) postulates are satisfied:

1(a) Closure with respect to the operator +.
 (b) Closure with respect to the operator ·.
2(a) An identity element with respect to +, designated by 0: x + 0 = 0 + x = x.
 (b) An identity element with respect to ·, designated by 1: x · 1 = 1 · x = x.
3(a) Commutative with respect to +: x + y = y + x.
 (b) Commutative with respect to ·: x · y = y · x.
4(a) · is distributive over +: x · (y + z) = (x · y) + (x · z).
 (b) + is distributive over ·: x + (y · z) = (x + y) · (x + z).
5. For every element x ∈ B, there exists an element x' ∈ B (called the complement of x) such that (a) x + x' = 1 and (b) x · x' = 0.
6. There exist at least two elements x, y ∈ B such that x ≠ y.

Comparing Boolean algebra with arithmetic and ordinary algebra (the field of real numbers), we note the following differences:

*See, for example, Birkhoff and Bartee (4), Chap. 5.

(a) Huntington postulates do not include the associative law. However, this law holds for Boolean algebra and can be derived (for both operators) from the other postulates.
(b) The distributive law of + over ·, i.e., x + (y · z) = (x + y) · (x + z), is valid for Boolean algebra but not for ordinary algebra.
(c) Boolean algebra does not have additive or multiplicative inverses; therefore, there are no subtraction or division operations.
(d) Postulate 5 defines an operator called complement which is not available in ordinary algebra.
(e) Ordinary algebra deals with the real numbers, which constitute an infinite set of elements. Boolean algebra deals with the as yet undefined set of elements B, but in the two-valued Boolean algebra defined below (and of interest in our subsequent use of this algebra), B is defined to be a set with only two elements, 0 and 1.

Boolean algebra resembles ordinary algebra in some respects. The choice of the symbols + and · is intentional, to facilitate Boolean algebraic manipulations by persons already familiar with ordinary algebra.
Although one can use some of his knowledge from ordinary algebra to deal with Boolean algebra, the beginner must be careful not to substitute the rules of ordinary algebra where they are not applicable.

It is important to distinguish between the elements of the set of an algebraic structure and the variables of an algebraic system. For example, the elements of the field of real numbers are numbers, while variables such as a, b, c, etc., used in ordinary algebra, are symbols that stand for real numbers. Similarly, in Boolean algebra one defines the elements of the set B, and variables such as x, y, z are merely symbols that represent the elements.

At this point, it is important to realize that in order to have a Boolean algebra, one must show:

(a) the elements of the set B,
(b) the rules of operation for the two binary operators, and
(c) that the set of elements B, together with the two operators, satisfies the six Huntington postulates.

One can formulate many Boolean algebras, depending on the choice of elements of B and the rules of operation.* In our subsequent work, we shall deal only with a two-valued Boolean algebra; i.e., one with only two elements. Two-valued Boolean algebra has applications in set theory (the algebra of classes) and in propositional logic. Our interest here is with the application of Boolean algebra to gate-type circuits.

*See, for example, Hohn (6), Whitesitt (7), or Birkhoff and Bartee (4).

Two-Valued Boolean Algebra

A two-valued Boolean algebra is defined on a set of two elements, B = {0, 1}, with rules for the two binary operators + and · as shown in the following operator tables (the rule for the complement operator is for verification of postulate 5):

    x y | x · y        x y | x + y        x | x'
    0 0 |   0          0 0 |   0          0 | 1
    0 1 |   0          0 1 |   1          1 | 0
    1 0 |   0          1 0 |   1
    1 1 |   1          1 1 |   1

These rules are exactly the same as the AND, OR, and NOT operations, respectively, defined in Table 1-6. We must now show that the Huntington postulates are valid for the set B = {0, 1} and the two binary operators defined above.
1. Closure is obvious from the tables, since the result of each operation is either 1 or 0 and 1, 0 ∈ B.

2. From the tables we see that
(a) 0 + 0 = 0;  0 + 1 = 1 + 0 = 1;
(b) 1 · 1 = 1;  1 · 0 = 0 · 1 = 0;
which establishes the two identity elements, 0 for + and 1 for ·, as defined by postulate 2.

3. The commutative laws are obvious from the symmetry of the binary operator tables.

4. (a) The distributive law x · (y + z) = (x · y) + (x · z) can be shown to hold true from the operator tables by forming a truth table of all possible values of x, y, and z. For each combination, we derive x · (y + z) and show that the value is the same as (x · y) + (x · z):

    x y z | y + z | x · (y + z) | x · y | x · z | (x · y) + (x · z)
    0 0 0 |   0   |      0      |   0   |   0   |        0
    0 0 1 |   1   |      0      |   0   |   0   |        0
    0 1 0 |   1   |      0      |   0   |   0   |        0
    0 1 1 |   1   |      0      |   0   |   0   |        0
    1 0 0 |   0   |      0      |   0   |   0   |        0
    1 0 1 |   1   |      1      |   0   |   1   |        1
    1 1 0 |   1   |      1      |   1   |   0   |        1
    1 1 1 |   1   |      1      |   1   |   1   |        1

(b) The distributive law of + over · can be shown to hold true by means of a truth table similar to the one above.

5. From the complement table it is easily shown that
(a) x + x' = 1, since 0 + 0' = 0 + 1 = 1 and 1 + 1' = 1 + 0 = 1;
(b) x · x' = 0, since 0 · 0' = 0 · 1 = 0 and 1 · 1' = 1 · 0 = 0;
which verifies postulate 5.

6. Postulate 6 is satisfied because the two-valued Boolean algebra has two distinct elements, 1 and 0, with 1 ≠ 0.

We have just established a two-valued Boolean algebra having a set of two elements, 1 and 0, two binary operators with operation rules equivalent to the AND and OR operations, and a complement operator equivalent to the NOT operator. Thus, Boolean algebra has been defined in a formal mathematical manner and has been shown to be equivalent to the binary logic presented heuristically in Sec. 1-8. The heuristic presentation is helpful in understanding the application of Boolean algebra to gate-type circuits. The formal presentation is necessary for developing the theorems and properties of the algebraic system. The two-valued Boolean algebra defined in this section is also called "switching algebra" by engineers.
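The exhaustive table checks used in this verification are easy to mechanize. The following Python sketch is my own illustration (not from the book); bitwise operators stand in for the + and · operator tables, and every postulate is checked over all combinations drawn from B = {0, 1}:

```python
from itertools import product

B = (0, 1)
OR  = lambda x, y: x | y    # operator table for +
AND = lambda x, y: x & y    # operator table for .
NOT = lambda x: 1 - x       # complement table

# Postulate 2: identity elements 0 for + and 1 for .
assert all(OR(x, 0) == x and AND(x, 1) == x for x in B)

# Postulate 3: commutativity of both operators
assert all(OR(x, y) == OR(y, x) and AND(x, y) == AND(y, x)
           for x, y in product(B, repeat=2))

# Postulate 4: each operator distributes over the other
for x, y, z in product(B, repeat=3):
    assert AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))
    assert OR(x, AND(y, z)) == AND(OR(x, y), OR(x, z))

# Postulate 5: complements satisfy x + x' = 1 and x . x' = 0
assert all(OR(x, NOT(x)) == 1 and AND(x, NOT(x)) == 0 for x in B)

print("Huntington postulates 2-5 hold over B = {0, 1}")
```

Closure (postulate 1) is implicit, since each lambda returns only 0 or 1; postulate 6 holds because 0 ≠ 1.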
In order to emphasize the similarities between two-valued Boolean algebra and other binary systems, this algebra was called "binary logic" in Sec. 1-8. From here on, we shall drop the adjective "two-valued" from Boolean algebra in subsequent discussions.

2-3 BASIC THEOREMS AND PROPERTIES OF BOOLEAN ALGEBRA

Duality

The Huntington postulates have been listed in pairs and designated by part (a) and part (b). One part may be obtained from the other if the binary operators and the identity elements are interchanged. This important property of Boolean algebra is called the duality principle. It states that every algebraic expression deducible from the postulates of Boolean algebra remains valid if the operators and identity elements are interchanged. In a two-valued Boolean algebra, the identity elements and the elements of the set B are the same: 1 and 0. The duality principle has many applications. If the dual of an algebraic expression is desired, we simply interchange OR and AND operators and replace 1's by 0's and 0's by 1's.

Basic Theorems

Table 2-1 lists six theorems of Boolean algebra and four of its postulates.

Table 2-1 Postulates and Theorems of Boolean Algebra

    Postulate 2                 (a) x + 0 = x                    (b) x · 1 = x
    Postulate 5                 (a) x + x' = 1                   (b) x · x' = 0
    Theorem 1                   (a) x + x = x                    (b) x · x = x
    Theorem 2                   (a) x + 1 = 1                    (b) x · 0 = 0
    Theorem 3, involution           (x')' = x
    Postulate 3, commutative    (a) x + y = y + x                (b) xy = yx
    Theorem 4, associative      (a) x + (y + z) = (x + y) + z    (b) x(yz) = (xy)z
    Postulate 4, distributive   (a) x(y + z) = xy + xz           (b) x + yz = (x + y)(x + z)
    Theorem 5, De Morgan        (a) (x + y)' = x'y'              (b) (xy)' = x' + y'
    Theorem 6, absorption       (a) x + xy = x                   (b) x(x + y) = x

The notation is simplified by omitting the · whenever this does not lead to
The proofs of the theorems with one variable are presented below. At the right-hand side of the page is listed the number of the postulate which justifies each step of the proof. THEOREM I(a): x +x xtx (x +x) + xx" x+0 ex = @& + xyx +x’) x 1 by postulate: 2(b) sta) 4(b) S(b) 2(a) THEOREM 1(b): x + x = x xox =xxt0 xx tax! by postulate: 2(a) 5(b) 4@) Sta) 2b) Note that theorem 1(b) is the dual of theorem I(a) and that each step of the proof in part (b) is the dual of part (a). Any dual theorem can be similarly derived from the proof of its corresponding pair THEOREM 2a): x +1 = 1 x+1=1+@+1) by postulate: 2(b) (& +x) +1) 5(a) +x 4(b) xtx’ 2b) S(a) 40 BOOLEAN ALGEBRA Chap. 2 THEOREM 2(b): x - 0 = 0 by duality. THEOREM 3: (') = x From postulate 5, we have x + x' = 1 and x + x’ = 0, which defines the complement of x. The complement of x’ is x and is also (x’). Therefore, since the complement is unique, we have that (x'' = x. ‘The theorems involving two or three variables may be proven algebrai- cally from the postulates and the theorems which have already been proven. Take for example the absorption theorem. THEOREM 6(a): x + xy = x xtxy=x-1txy by postulate: 2(b) x+y) 4(a) x@ +1) 3G) xed by theorem: 2(a) x by postulate: 2(b) THEOREM 6(b): x (x + y) = x by duality. The theorems of Boolean algebra can be shown to hold true by means of truth tables. In truth tables, both sides of the relation are checked to yield identical results for all possible combinations of variables involved. The following truth table verifies the first absorption theorem. A peereeeeeotee cera The algebraic proofs of the associative law and De Morgan’s theorem are long and will not be shown here. However, their validity is easily shown with truth tables. For example, the truth table for the first De Morgan's theorem (x + y)' = x'y' is shown below. 
Operator Precedence

The operator precedence for evaluating Boolean expressions is: (1) parentheses, (2) NOT, (3) AND, (4) OR. In other words, the expression inside the parentheses must be evaluated before all other operations. The next operation that holds precedence is the complement; then follows the AND, and finally the OR. As an example, consider the truth table for De Morgan's theorem. The left side of the expression is (x + y)'. Therefore, the expression inside the parentheses is evaluated first and the result is then complemented. The right side of the expression is x'y'. Therefore, the complement of x and the complement of y are both evaluated first, and the results are then ANDed. Note that in ordinary arithmetic the same precedence holds (except for the complement) when multiplication and addition are replaced by AND and OR, respectively.

Venn Diagram

A helpful illustration that may be used to visualize the relationships among the variables of a Boolean expression is the Venn diagram. This diagram consists of a rectangle, such as shown in Fig. 2-1, inside of which are drawn overlapping circles, one for each variable. Each circle is labeled by a variable. We designate all points inside a circle as belonging to the named variable and all points outside a circle as not belonging to the variable. Take, for example, the circle labeled x. If we are inside the circle, we say that x = 1; when outside, we say x = 0. Now, with two overlapping circles, there are four distinct areas inside the rectangle: the area not belonging to either x or y (x'y'), the area inside circle y but outside x (x'y), the area inside circle x but outside y (xy'), and the area inside both circles (xy).

[Figure 2-1 Venn diagram for two variables x and y]

Venn diagrams may be used to illustrate the postulates of Boolean algebra or to show the validity of theorems.
Figure 2-2, for example, illustrates that the area belonging to xy is inside the circle x and, therefore, x + xy = x. Figure 2-3 illustrates the distributive law x(y + z) = xy + xz. In this diagram we have three overlapping circles, one for each of the variables x, y, and z. It is possible to distinguish eight distinct areas in a three-variable Venn diagram. For this particular example, the distributive law is demonstrated by noting that the area intersecting the circle x with the area enclosing y or z is the same area belonging to xy or xz.

[Figure 2-2 Venn diagram illustration of x + xy = x]

[Figure 2-3 Venn diagram illustration of the distributive law]

2-4 BOOLEAN FUNCTIONS

A binary variable can take the value of 0 or 1. A Boolean function is an expression formed with binary variables, the two binary operators OR and AND, the unary operator NOT, parentheses, and an equal sign. For a given value of the variables, the function can be either 0 or 1. Consider, for example, the Boolean function

    F1 = xyz'

The function F1 is equal to 1 if x = 1 and y = 1 and z' = 1; otherwise F1 = 0. The above is an example of a Boolean function represented as an algebraic expression. A Boolean function may also be represented in a truth table. To represent a function in a truth table, we need a list of the 2^n combinations of 1's and 0's of the n binary variables, and a column showing the combinations for which the function is equal to 1 or 0. As shown in Table 2-2, there are eight possible distinct combinations for assigning bits to three variables. The column labeled F1 is either a 0 or a 1 for each of these combinations.

Table 2-2 Truth Tables for F1 = xyz', F2 = x + y'z, F3 = x'y'z + x'yz + xy', and F4 = xy' + x'z

    x y z | F1 | F2 | F3 | F4
    0 0 0 | 0  | 0  | 0  | 0
    0 0 1 | 0  | 1  | 1  | 1
    0 1 0 | 0  | 0  | 0  | 0
    0 1 1 | 0  | 0  | 1  | 1
    1 0 0 | 0  | 1  | 1  | 1
    1 0 1 | 0  | 1  | 1  | 1
    1 1 0 | 1  | 1  | 0  | 0
    1 1 1 | 0  | 1  | 0  | 0

The table shows that the function F1 is equal to 1 only when x = 1, y = 1, and z = 0. It is equal to 0 otherwise. (Note that the statement z' = 1 is equivalent to saying that z = 0.)
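A truth table such as Table 2-2 can be generated mechanically by enumerating the 2^n input combinations. The following Python sketch is my own illustration (the lambda encodings of F1 through F4 are not from the text); 1 - v serves as the complement of a single bit v:

```python
from itertools import product

# The four functions of Table 2-2, written with bitwise operators
F1 = lambda x, y, z: x & y & (1 - z)                                      # xyz'
F2 = lambda x, y, z: x | ((1 - y) & z)                                    # x + y'z
F3 = lambda x, y, z: ((1-x) & (1-y) & z) | ((1-x) & y & z) | (x & (1-y))  # x'y'z + x'yz + xy'
F4 = lambda x, y, z: (x & (1 - y)) | ((1 - x) & z)                        # xy' + x'z

print("x y z   F1 F2 F3 F4")
for x, y, z in product((0, 1), repeat=3):
    print(x, y, z, " ", F1(x, y, z), F2(x, y, z), F3(x, y, z), F4(x, y, z))

# F3 and F4 agree on every row, so they are equal Boolean functions
assert all(F3(*v) == F4(*v) for v in product((0, 1), repeat=3))
```

The closing assertion anticipates the point made below: two different algebraic expressions (here F3 and F4) can specify the same function, which is detected by comparing their truth-table columns.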
Consider now the function

    F2 = x + y'z

F2 = 1 if x = 1, or if y = 0 while z = 1. In Table 2-2, x = 1 in the last four rows, and yz = 01 in rows 001 and 101. The latter combination applies also for x = 1. Therefore, there are five combinations that make F2 = 1. As a third example, the function

    F3 = x'y'z + x'yz + xy'

is shown in Table 2-2 with four 1's and four 0's. F4 is the same as F3 and is considered below.

Any Boolean function can be represented in a truth table. The number of rows in the table is 2^n, where n is the number of binary variables in the function. The 1's and 0's combinations for each row are easily obtained from the binary numbers by counting from 0 to 2^n - 1. For each row of the table, there is a value for the function equal to either 1 or 0. The question now arises: is an algebraic expression of a given Boolean function unique? In other words, is it possible to find two algebraic expressions that specify the same function? The answer to this question is yes. As a matter of fact, the manipulation of Boolean algebra is applied mostly to the problem of finding simpler expressions for the same function. Consider, for example, the function

    F4 = xy' + x'z

From Table 2-2, we find that F4 is the same as F3, since both have identical 1's and 0's for each combination of values of the three binary variables. In general, two functions of n binary variables are said to be equal if they have the same value for all possible 2^n combinations of the n variables.

A Boolean function may be transformed from an algebraic expression into a logic diagram composed of AND, OR, and NOT gates. The implementation of the four functions introduced in the previous discussion is
There is an AND gate for each term in the expression, and an OR gate is used to combine two or more terms. From the diagrams it is obvious that the implementation of Fy requires less gates and less inputs than F3. Since F, and F3 are equal Boolean functions, it is more economical to implement the Fy form than the F3 form. To find simpler circuits, one must know how to manipulate Boolean functions to obtain equal and simpler expressions. What constitutes the best form of a Boolean function depends on the particular application. In this section, consideration is given to the criterion of equipment minimization, a, Fy @) Fy =a wo) Rpexty: © yet aye tay 4 eS ae > )__ @) Fy =a tee Figure 2-4 Implementation of Boolean functions with gates Sec. 2-4 BOOLEAN FUNCTIONS 45 Algebraic Manipulation A literal is a primed or unprimed variable. When a Boolean function is implemented with logic gates, each literal in the function designates an input to a gate, and each term is implemented with a gate, The minimiza- tion of the number of literals and the number of terms will result in a circuit with less equipment. It is not always possible to minimize both simultaneously; usually, further criteria must be available. At the moment, we shall narrow the minimization criterion to literal minimization. We shall discuss other criteria in Ch. 5. The number of literals in a Boolean function can be minimized by algebraic manipulations. Unfortunately, there are no specific rules to follow that will guarantee the final answer. The only ‘method available is a cutand-try procedure employing the postulates, the basic theorems, and any other manipulation method which becomes familiar with use. The following examples illustrate this procedure. EXAMPLE 2-1. Simplify the following Boolean functions to a minimum. number of literals. Lxtxy=(tx)@ +y)=1-@ tyexty 2.x @! ty)=ar' ay =0 tay 239 3. 
x'y'z + x'yz + xy' = x'z(y' + y) + xy' = x'z + xy'

4. xy + x'z + yz = xy + x'z + yz(x + x')
                 = xy + x'z + xyz + x'yz
                 = xy(1 + z) + x'z(1 + y)
                 = xy + x'z

5. (x + y)(x' + z)(y + z) = (x + y)(x' + z), by duality from 4.

Functions 1 and 2 are the duals of each other and use dual expressions in corresponding steps. Function 3 shows the equality of the functions F3 and F4 discussed previously. The fourth illustrates the fact that an increase in the number of literals sometimes leads to a final simpler expression. Function 5 is not minimized directly but can be derived from the dual of the steps used to derive 4.

Complement of a Function

The complement of a function F is F' and is obtained from an interchange of 0's for 1's and 1's for 0's in the truth table. The complement of a function may be derived algebraically through De Morgan's theorem. This pair of theorems is listed in Table 2-1 for two variables. De Morgan's theorems can be extended to three or more variables. The three-variable form of the first De Morgan's theorem is derived below. The postulates and theorems are those listed in Table 2-1.

    (A + B + C)' = (A + X)'          let B + C = X
                 = A'X'              by theorem 5(a) (De Morgan)
                 = A' · (B + C)'     substitute B + C = X
                 = A' · (B'C')       by theorem 5(a) (De Morgan)
                 = A'B'C'            by theorem 4(b) (associative)

De Morgan's theorems for any number of variables resemble in form the two-variable case and can be derived by successive substitutions similar to the method used in the above derivation. These theorems can be generalized as follows:

    (A + B + C + D + ... + F)' = A'B'C'D' ... F'
    (ABCD ... F)' = A' + B' + C' + D' + ... + F'

The generalized form of De Morgan's theorem states that the complement of a function is obtained by interchanging AND and OR operators and complementing each literal.

EXAMPLE 2-2. Find the complement of the functions F1 = x'yz' + x'y'z and F2 = x(y'z' + yz). Applying De Morgan's theorem as many times as necessary, the complements are obtained as follows:

    F1' = (x'yz' + x'y'z)' = (x'yz')' · (x'y'z)' = (x + y' + z)(x + y + z')

    F2' = [x(y'z' + yz)]' = x'
F2' = [x(y'z' + yz)]' = x' + (y'z' + yz)' = x' + (y'z')'·(yz)'
    = x' + (y + z)(y' + z')

A simpler procedure for deriving the complement of a function is to take the dual of the function and complement each literal. This method follows from the generalized De Morgan's theorem. Remember that the dual of a function is obtained from the interchange of AND and OR operators and 1's and 0's.

EXAMPLE 2-3. Find the complements of the functions F1 and F2 of Example 2-2 by taking their duals and complementing each literal.

1. F1 = x'yz' + x'y'z.
   The dual of F1 is: (x' + y + z')(x' + y' + z).
   Complement each literal: (x + y' + z)(x + y + z') = F1'.
2. F2 = x(y'z' + yz).
   The dual of F2 is: x + (y' + z')(y + z).
   Complement each literal: x' + (y + z)(y' + z') = F2'.

2-5 CANONICAL FORMS

Minterms and Maxterms

A binary variable may appear either in its normal form (x) or in its complement form (x'). Now consider two binary variables x and y combined with an AND operation. Since each variable may appear in either form, there are four possible combinations: x'y', x'y, xy', and xy. Each of these four AND terms represents one of the distinct areas in the Venn diagram of Fig. 2-1 and is called a minterm, or a standard product. In a similar manner, n variables can be combined to form 2^n minterms. The 2^n different minterms may be determined by a method similar to the one shown in Table 2-3 for three variables. The binary numbers from 0 to 2^n - 1 are listed under the n variables. Each minterm is obtained from an AND term of the n variables, with each variable being primed if the corresponding bit of the binary number is a 0 and unprimed if a 1.

Table 2-3 Minterms and Maxterms for Three Binary Variables

              Minterms              Maxterms
 x y z     Term   Designation    Term           Designation
 0 0 0     x'y'z'     m0         x + y + z          M0
 0 0 1     x'y'z      m1         x + y + z'         M1
 0 1 0     x'yz'      m2         x + y' + z         M2
 0 1 1     x'yz       m3         x + y' + z'        M3
 1 0 0     xy'z'      m4         x' + y + z         M4
 1 0 1     xy'z       m5         x' + y + z'        M5
 1 1 0     xyz'       m6         x' + y' + z        M6
 1 1 1     xyz        m7         x' + y' + z'       M7
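The construction rule behind Table 2-3 is mechanical, so the table can be regenerated by a short program. The Python sketch below is ours, not the text's; the helper names `minterm` and `maxterm` are invented for illustration.

```python
from itertools import product

# Rebuild Table 2-3 from the stated rule: a minterm primes a variable
# where the corresponding bit is 0; a maxterm primes it where the bit is 1.
def minterm(bits, names="xyz"):
    return "".join(v if b else v + "'" for v, b in zip(names, bits))

def maxterm(bits, names="xyz"):
    return " + ".join(v + "'" if b else v for v, b in zip(names, bits))

for j, bits in enumerate(product((0, 1), repeat=3)):
    print(f"m{j} = {minterm(bits):8}  M{j} = {maxterm(bits)}")
```

Running it lists the eight m_j / M_j pairs in the same order as Table 2-3, and makes the complement relation between corresponding rows easy to see.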
A symbol for each minterm is also shown in the table and is of the form mj, where j denotes the decimal equivalent of the binary number of the minterm designated.

In a similar fashion, n variables forming an OR term, with each variable being primed or unprimed, provide 2^n possible combinations, called maxterms, or standard sums. The eight maxterms for three variables, together with their symbolic designations, are listed in Table 2-3. Any 2^n maxterms for n variables may be determined similarly. Each maxterm is obtained from an OR term of the n variables, with each variable being unprimed if the corresponding bit is a 0 and primed if a 1.* Note that each maxterm is the complement of its corresponding minterm, and vice versa.

*Some books define a maxterm as an OR term of the n variables, with each variable being unprimed if the bit is a 1 and primed if a 0. The definition adopted in this book is preferable as it leads to simpler conversions between maxterm and minterm type functions.

A Boolean function may be expressed algebraically from a given truth table by forming a minterm for each combination of the variables which produces a 1 in the function, and then taking the OR of all those terms. For example, the function f1 in Table 2-4 is determined by expressing the combinations 001, 100, and 111 as x'y'z, xy'z', and xyz, respectively. Since each one of these minterms results in f1 = 1, we should have

f1 = x'y'z + xy'z' + xyz = m1 + m4 + m7

Similarly, it may be easily verified that

f2 = x'yz + xy'z + xyz' + xyz = m3 + m5 + m6 + m7

These examples demonstrate an important property of Boolean algebra: any Boolean function can be expressed as a sum of minterms (by "sum" is meant the ORing of terms).

Now consider the complement of a Boolean function. It may be read from the truth table by forming a minterm for each combination that produces a 0 in the function and then ORing those terms. The complement of f1 is read as:

f1' = x'y'z' + x'yz'
+ x'yz + xy'z + xyz'

If we take the complement of f1', we obtain the function f1:

f1 = (x + y + z)(x + y' + z)(x + y' + z')(x' + y + z')(x' + y' + z)
   = M0·M2·M3·M5·M6

Table 2-4 Functions of Three Variables

 x y z    f1    f2
 0 0 0    0     0
 0 0 1    1     0
 0 1 0    0     0
 0 1 1    0     1
 1 0 0    1     0
 1 0 1    0     1
 1 1 0    0     1
 1 1 1    1     1

Similarly, it is possible to read the expression for f2 from the table:

f2 = (x + y + z)(x + y + z')(x + y' + z)(x' + y + z) = M0·M1·M2·M4

These examples demonstrate a second important property of Boolean algebra: any Boolean function can be expressed as a product of maxterms (by "product" is meant the ANDing of terms). The procedure for obtaining the product of maxterms directly from the truth table is as follows. Form a maxterm for each combination of the variables which produces a 0 in the function, and then form the AND of all those maxterms. Boolean functions expressed as a sum of minterms or product of maxterms are said to be in canonical form.

Sum of Minterms

It was previously stated that for n binary variables, one can obtain 2^n distinct minterms, and that any Boolean function can be expressed as a sum of minterms. The minterms whose sum defines the Boolean function are those that give the 1's of the function in a truth table. Since the function can be either 1 or 0 for each minterm, and since there are 2^n minterms, one can calculate the possible functions that can be formed with n variables to be 2^(2^n). It is sometimes convenient to express the Boolean function in its sum of minterms form. If not in this form, it can be made so by first expanding the expression into a sum of AND terms. Each term is then inspected to see if it contains all the variables. If it misses one or more variables, it is ANDed with an expression such as x + x', where x is one of the missing variables. The following example clarifies this procedure.

EXAMPLE 2-4. Express the Boolean function F = A + B'C in a sum of minterms. The function has three variables A, B, and C.
The first term A is missing two variables, therefore

A = A(B + B') = AB + AB'

This is still missing one variable:

A = AB(C + C') + AB'(C + C') = ABC + ABC' + AB'C + AB'C'

The second term B'C is missing one variable:

B'C = B'C(A + A') = AB'C + A'B'C

Combining all terms, we have:

F = A + B'C = ABC + ABC' + AB'C + AB'C' + AB'C + A'B'C

But AB'C appears twice, and according to theorem 1 (x + x = x), it is possible to remove one of them. Rearranging the minterms in ascending order, we finally obtain:

F = A'B'C + AB'C' + AB'C + ABC' + ABC = m1 + m4 + m5 + m6 + m7

It is sometimes convenient to express the Boolean function, when in its sum of minterms, in the following short notation:

F(A, B, C) = Σ(1, 4, 5, 6, 7)

The summation symbol Σ stands for the ORing of terms; the numbers following it are the minterms of the function. The letters in parentheses with F form a list of the variables in the order taken when the minterm is converted to an AND term.

Product of Maxterms

Each of the 2^(2^n) functions of n binary variables can also be expressed as a product of maxterms. To express the Boolean function as a product of maxterms, it must first be brought into a form of OR terms. This may be done by using the distributive law x + yz = (x + y)(x + z). Then any missing variable x in each OR term is ORed with xx'. This procedure is clarified by the following example.

EXAMPLE 2-5. Express the Boolean function F = xy + x'z in a product of maxterm form. First convert the function into OR terms using the distributive law:

F = xy + x'z = (xy + x')(xy + z)
  = (x + x')(y + x')(x + z)(y + z)
  = (x' + y)(x + z)(y + z)

The function has three variables: x, y, and z. Each OR term is missing one variable, therefore

x' + y = x' + y + zz' = (x' + y + z)(x' + y + z')
x + z = x + z + yy' = (x + y + z)(x + y' + z)
y + z = y + z + xx' = (x + y + z)(x' + y + z)

Combining all the terms and removing those that appear more than once, we finally obtain:

F = (x + y + z)(x + y' + z)(x' + y + z)(x' + y + z') = M0·M2·M4·M5

A convenient way to express this function is as follows:

F(x, y, z) = Π(0, 2, 4, 5)
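Both canonical lists can also be read off a truth table by machine: the minterm list collects the row numbers where the function is 1, and the maxterm list collects the rows where it is 0. A minimal Python sketch of this (the names are ours, not the text's), applied to Example 2-4's F = A + B'C:

```python
from itertools import product

# Read F = A + B'C off its truth table; j runs over rows ABC = 000 ... 111.
def F(a, b, c):
    return a or (not b and c)

rows = list(product((0, 1), repeat=3))
minterms = [j for j, v in enumerate(rows) if F(*v)]
maxterms = [j for j, v in enumerate(rows) if not F(*v)]
print(minterms)   # → [1, 4, 5, 6, 7]   i.e. F = Σ(1, 4, 5, 6, 7)
print(maxterms)   # → [0, 2, 3]         i.e. F = Π(0, 2, 3)
```

The Σ list agrees with the algebraic expansion above, and the Π list is simply the set of indices missing from it.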
The product symbol Π denotes the ANDing of maxterms; the numbers following it are the maxterms of the function.

Conversion Between Canonical Forms

The complement of a function expressed as the sum of minterms equals the sum of minterms missing from the original function. This is because the original function is expressed by those minterms that make the function equal to 1, while its complement is a 1 for those minterms for which the function is a 0. As an example, the function

F(A, B, C) = Σ(1, 4, 5, 6, 7)

has a complement that can be expressed as

F'(A, B, C) = Σ(0, 2, 3) = m0 + m2 + m3

Now, if we take the complement of F' by De Morgan's theorem, we obtain F back in a different form:

F = (m0 + m2 + m3)' = m0'·m2'·m3' = M0·M2·M3 = Π(0, 2, 3)

The last conversion follows from the definition of minterms and maxterms as shown in Table 2-3. From the table, it is clear that the following relation holds true:

mj' = Mj

That is, the maxterm with subscript j is the complement of the minterm with the same subscript j, and vice versa.

The last example has demonstrated the conversion between a function expressed in sum of minterms and its equivalent in product of maxterms. A similar argument will show that the conversion between the product of maxterms and the sum of minterms is similar. We now state a general conversion procedure. To convert from one canonical form to another, interchange the symbols Σ and Π and list those numbers missing from the original form. As another example, the function

F(x, y, z) = Π(0, 2, 4, 5)

is expressed in the product of maxterms form. Its conversion to sum of minterms is:

F(x, y, z) = Σ(1, 3, 6, 7)

Note that in order to find the missing terms, one must realize that the total number of minterms or maxterms is 2^n, where n is the number of binary variables in the function.

Standard Forms

The two canonical forms of Boolean algebra are basic forms that one obtains from reading a function from the truth table.
These forms are very seldom the ones with the least number of literals. This is because each minterm or maxterm must contain, by definition, all the variables, either primed or unprimed. A Boolean function expressed with terms containing any number of literals is said to be in a standard form. There are two standard forms: the sum of products (disjunctive form) and the product of sums (conjunctive form). An example of a function expressed as a sum of products is:

F1 = y' + xy + x'yz'

The product terms are AND terms of one, two, or any number of literals. The sum denotes the ORing of these terms. An example of a function expressed as a product of sums is:

F2 = x(y' + z)(x' + y + z')

Again, the sum terms are OR terms of one or more variables, and the product denotes the ANDing of these terms.

2-6 OTHER BINARY OPERATORS

When the binary operators AND and OR are placed between two variables, they form two Boolean functions, x·y and x + y, respectively. We have stated previously that there are 2^(2^n) functions for n variables. Therefore, these functions are only two out of a total of sixteen possible functions formed with two variables x and y. All 16 functions are listed in Table 2-5. In this table, each of the columns represents the truth table of one function Fj, for j = 0 to 15. Some of the functions are shown with an operator symbol and are also known by a characteristic name, as indicated below. The following list of 16 functions is expressed algebraically. When applicable, the function is also written with its special symbol and its characteristic name mentioned.

Table 2-5 Truth Tables for the Sixteen Functions of Two Binary Variables

 x y | F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15
 0 0 |  0  0  0  0  0  0  0  0  1  1  1   1   1   1   1   1
 0 1 |  0  0  0  0  1  1  1  1  0  0  0   0   1   1   1   1
 1 0 |  0  0  1  1  0  0  1  1  0  0  1   1   0   0   1   1
 1 1 |  0  1  0  1  0  1  0  1  0  1  0   1   0   1   0   1

Operator symbols: F1 (·), F2 (/), F4 (/), F6 (⊕), F7 (+), F8 (↓), F9 (⊙), F10 ('), F11 (⊂), F12 ('), F13 (⊃), F14 (↑)
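The column pattern of Table 2-5 is systematic: reading column Fj from the xy = 00 row down to the 11 row spells out j in binary, most significant bit first. The Python sketch below (ours, not the text's) exploits this to generate any of the sixteen functions from its index and spot-check the named operators:

```python
# F_j of Table 2-5: the output for inputs (x, y) is bit (3 - (2x + y)) of j,
# so the column read top-to-bottom is j written in binary, MSB first.
def F(j, x, y):
    return (j >> (3 - (2 * x + y))) & 1

# Spot-check some named operators against their definitions.
for x in (0, 1):
    for y in (0, 1):
        assert F(1, x, y) == (x & y)          # F1  = AND
        assert F(6, x, y) == (x ^ y)          # F6  = EXCLUSIVE-OR
        assert F(8, x, y) == 1 - (x | y)      # F8  = NOR
        assert F(14, x, y) == 1 - (x & y)     # F14 = NAND
print("columns of Table 2-5 match the named operators")
```

This also makes the count of 2^(2^2) = 16 functions concrete: one function per 4-bit column.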
F0 = 0                                zero, or null function
F1 = x·y                AND           x and y
F2 = xy' = x/y          INHIBITION    x but not y
F3 = x                                the function is equal to x
F4 = x'y = y/x          INHIBITION    y but not x
F5 = y                                the function is equal to y
F6 = x'y + xy' = x ⊕ y  EXCLUSIVE-OR  x or y but not both
F7 = x + y              OR            x or y
F8 = (x + y)' = x ↓ y   NOR           not-OR
F9 = x'y' + xy = x ⊙ y  EQUIVALENCE   x equals y
F10 = y'                COMPLEMENT    not y
F11 = x + y' = x ⊂ y    IMPLICATION   if y then x
F12 = x'                COMPLEMENT    not x
F13 = x' + y = x ⊃ y    IMPLICATION   if x then y
F14 = (xy)' = x ↑ y     NAND          not-AND
F15 = 1                               one, or identity function

By checking Table 2-5 and the list of functions, we see that the 16 functions of two variables define a total of eight different binary operators and one unary operator. The binary operators are: AND (·), INHIBITION (/), EXCLUSIVE-OR (⊕), OR (+), NOR (↓), EQUIVALENCE (⊙), IMPLICATION (⊃), and NAND (↑). The unary operator is the COMPLEMENT ('). The AND, OR, and COMPLEMENT operators have been used previously to define Boolean algebra. Four other functions have frequent use in computer logic. The EXCLUSIVE-OR is similar to OR but excludes the combination of both x and y. The EQUIVALENCE is a function that is a 1 when both variables are equal. Note that the EXCLUSIVE-OR and EQUIVALENCE functions are the complements of each other. The NOR function is the complement of OR, and the NAND is the complement of AND. Both these functions are of particular interest because of the frequent use of transistor circuits that implement them (see Ch. 5).

Boolean algebra has been defined in Sec. 2-2 to have two binary operators, which we have called AND and OR, and a unary operator, NOT (complement). From the definitions, we have deduced a number of properties of these operators and have now defined other binary operators in terms of them. There is nothing unique about this procedure. We could have just as well started with the operator NOR (↓), for example, and later
defined AND, OR, and NOT in terms of it. There are, nevertheless, good reasons for introducing Boolean algebra in the way it has been introduced. The concepts of "and," "or," and "not" are familiar and are used by people to express everyday logical ideas. Moreover, the Huntington postulates reflect the dual nature of the algebra, emphasizing the symmetry of (+) and (·) with respect to each other.

PROBLEMS

2-1. Which of the six basic laws (closure, associative, commutative, identity, inverse, and distributive) are satisfied for the pair of binary operators + and · defined over the set of elements {0, 1, 2} by the tables below?

2-2. Show that the set of three elements {0, 1, 2} and the two binary operators + and · as defined by the above tables is not a Boolean algebra. State which of the Huntington postulates is not satisfied.

2-3. Demonstrate by means of truth tables the validity of the following theorems of Boolean algebra.
(a) The associative laws.
(b) De Morgan's theorems for three variables.
(c) The distributive law of + over ·.

2-4. Repeat Prob. 2-3 using Venn diagrams.

2-5. Simplify the following Boolean functions to a minimum number of literals.
(a) xy + xy'
(b) (x + y)(x + y')
(c) xyz + x'y + xyz'
(d) zx + zx'y
(e) (A + B)'(A' + B')'
(f) y(wz' + wz) + xy

2-6. Reduce the following Boolean expressions to the required number of literals.
(a) ABC + A'B'C + A'BC + ABC' + A'B'C' to five literals
(b) BC + AC' + AB + BCD to four literals
(c) [(CD)' + A]' + A + CD + AB to three literals
(d) (A + C + D)(A + C + D')(A + C' + D)(A + B') to four literals

2-7. Find the complement of the following Boolean functions and reduce them to a minimum number of literals.
(a) (BC' + A'D)(AB' + CD')
(b) B'D + A'BC' + ACD + A'BC
(c) [(AB)'A][(AB)'B]
(d) AB' + CD' + A'CD' + DC'(AB + A'B') + DB(AC' + A'C)

2-8. Given the functions below, determine the functions F1 + G1 and F2 + G2 and reduce them to a minimum number of literals.
(a) F1 = D + ABC + A'C
    G1 = D'(A + B + C)(A' + C)
(b) F2 = AB + A'B' + CD + A'CD'
    G2 = C + A'B'C + AB'C + B'CD'
2-9. Obtain the truth table for F1 + G1 and F2 + G2 of Prob. 2-8.

2-10. Implement the simplified Boolean functions from Prob. 2-6 with logic gates.

2-11. Given the Boolean function:

F = xy + x'y' + y'z

(a) Implement it with AND, OR, and NOT gates.
(b) Implement it with only OR and NOT gates.
(c) Implement it with only AND and NOT gates.

2-12. Simplify the functions T1 and T2 to a minimum number of literals.

2-13. Express the following functions in a sum of minterms and a product of maxterms.
(a) F(A, B, C, D) = D(A' + B) + B'D
(b) F(w, x, y, z) = y'z + wxy' + wxz' + w'x'z
(c) F(A, B, C, D) = (A + B' + C)(A + B')(A' + C' + D')(A' + B + C + D)(B + C' + D)
(d) F(A, B, C) = (A' + B)(B' + C)
(e) F(x, y, z) = 1
(f) F(x, y, z) = (xy + z)(y' + xz)

2-14. Convert the following to the other canonical form.
(a) F(x, y, z) = Σ(1, 3, 7)
(b) F(A, B, C, D) = Σ(0, 2, 6, 11, 13, 14)
(c) F(x, y, z) = Π(1, 3, 6, 7)
(d) F(A, B, C, D) = Π(0, 1, 2, 3, 4, 6, 12)

2-15. What is the difference between canonical form and standard form? Which form is preferable when implementing a Boolean function with gates? Which form is obtained when reading a function from a truth table?

2-16. The sum of all minterms of a Boolean function of n variables is 1.
(a) Prove the above statement for n = 3.
(b) Suggest a procedure for a general proof.

2-17. The product of all maxterms of a Boolean function of n variables is 0.
(a) Prove the above statement for n = 3.
(b) Suggest a procedure for a general proof. Can we use the duality principle after proving (b) of Prob. 2-16?

2-18. Show that the INHIBITION operation is not commutative.

2-19. Show that the NOR and NAND operations are not associative.

2-20. Show that the dual of the EXCLUSIVE-OR is equal to its complement.

REFERENCES

1. Boole, G., An Investigation of the Laws of Thought. New York: Dover Pub., 1954.
2. Shannon, C. E., "A Symbolic Analysis of Relay and Switching Circuits," Trans. of the AIEE, Vol. 57 (1938), 713-23.
3. Huntington, E. V., "Sets of Independent Postulates for the Algebra of Logic," Trans. Am. Math.
Soc., Vol. 5 (1904), 288-309.
4. Birkhoff, G., and T. C. Bartee, Modern Applied Algebra. New York: McGraw-Hill Book Co., 1970.
5. Birkhoff, G., and S. MacLane, A Survey of Modern Algebra, 3rd ed. New York: The Macmillan Co., 1965.
6. Hohn, F. E., Applied Boolean Algebra, 2nd ed. New York: The Macmillan Co., 1966.
7. Whitesitt, J. E., Boolean Algebra and its Applications. Reading, Mass.: Addison-Wesley Pub. Co., 1961.

3 SIMPLIFICATION OF BOOLEAN FUNCTIONS

3-1 THE MAP METHOD

The complexity of the digital logic gates that implement a Boolean function is directly related to the complexity of the algebraic expression from which the function is implemented. Although the truth-table representation of a function is unique, expressed algebraically it can appear in many different forms. Boolean functions may be simplified by algebraic means as discussed in Sec. 2-4. However, this procedure of minimization is awkward because it lacks specific rules to predict each succeeding step in the manipulative process. The map method provides a simple, straightforward procedure for minimizing Boolean functions. This method may be regarded either as a pictorial form of a truth table or as an extension of the Venn diagram. The map method, first proposed by Veitch (1) and slightly modified by Karnaugh (2), is also known as the "Veitch diagram" or the "Karnaugh map."

The map is a diagram made up of squares. Each square represents one minterm. Since any Boolean function can be expressed as a sum of minterms, it follows that a Boolean function is recognized graphically in the map from the area enclosed by those squares whose minterms are included in the function. In fact, the map presents a visual diagram of all possible ways a function may be expressed in a standard form. By recognizing various patterns, the user can derive alternate algebraic expressions for the same function, from which he can select the simplest one.
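Because the truth-table representation is unique, any claimed simplification in this chapter can be confirmed by exhaustive enumeration. A small Python checker (ours, not the text's), applied here to the simplification x'y'z + x'yz + xy' = x'z + xy' from Sec. 2-4:

```python
from itertools import product

# Two algebraic forms denote the same function exactly when they agree
# on every row of the truth table.
def equivalent(f, g, n):
    return all(f(*v) == g(*v) for v in product((False, True), repeat=n))

f = lambda x, y, z: (not x and not y and z) or (not x and y and z) or (x and not y)
g = lambda x, y, z: (not x and z) or (x and not y)
print(equivalent(f, g, 3))   # → True
```

The same `equivalent` call can be reused against any map simplification in the examples that follow.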
We shall assume that the simplest algebraic expression is any one in a sum of products or product of sums that has a minimum number of literals. (This expression is not necessarily unique.)

3-2 TWO- AND THREE-VARIABLE MAPS

A two-variable map is shown in Fig. 3-1. There are four minterms for two variables; hence the map consists of four squares, one for each minterm. The map is redrawn in (b) to show the relation between the squares and the two variables. The 0's and 1's marked for each row and each column designate the values of variables x and y, respectively. Notice that x appears primed in row 0 and unprimed in row 1. Similarly, y appears primed in column 0 and unprimed in column 1.

        y                      y
     0     1               0      1
 0 | m0 | m1 |      x' | x'y' | x'y |
 1 | m2 | m3 |      x  | xy'  | xy  |
     (a)                   (b)

Figure 3-1 Two-variable map

If we mark the squares whose minterms belong to a given function, the two-variable map becomes another useful way of representing any one of the 16 Boolean functions of two variables. As an example, the function xy is shown in Fig. 3-2(a). Since xy is equal to m3, a 1 is placed inside the square that belongs to m3. Similarly, the function x + y is represented in the map of Fig. 3-2(b) by three squares marked with 1's. These squares are found from the minterms of the function:

x + y = x'y + xy' + xy = m1 + m2 + m3

The three squares could also have been determined from the intersection of variable x in the second row and variable y in the second column, which encloses the area belonging to x or y.

      y              y
   |   |   |      |   | 1 |
 x |   | 1 |    x | 1 | 1 |
  (a) xy        (b) x + y

Figure 3-2 Representation of functions in the map

A three-variable map is shown in Fig. 3-3. There are eight minterms for three binary variables. Therefore, a map consists of eight squares. Note that the minterms are arranged, not in a binary sequence, but in a sequence similar to the reflected code listed in Table 1-4.
The characteristic of this sequence is that only one bit changes from 1 to 0 or from 0 to 1 in the listing sequence. The map drawn in part (b) is marked with numbers in each row and each column to show the relation between the squares and the three variables. For example, the square assigned to m5 corresponds to row 1 and column 01. When these two numbers are concatenated, they give the binary number 101, whose decimal equivalent is 5. Another way of looking at square m5 = xy'z is to consider it to be in the row marked x and the column belonging to y'z (column 01). Note that there are four squares where each variable is equal to 1 and four where each is equal to 0. The variable appears unprimed in those four squares where it is equal to 1 and primed in those squares where it is equal to 0. For convenience, we write the variable with its letter symbol under the four squares where it is unprimed.

        yz                               y
 x   00   01   11   10      x    00      01     11     10
 0 | m0 | m1 | m3 | m2 |    0 | x'y'z' | x'y'z | x'yz | x'yz' |
 1 | m4 | m5 | m7 | m6 |    1 | xy'z'  | xy'z  | xyz  | xyz'  |
        (a)                               (b)

Figure 3-3 Three-variable map

In order to understand the usefulness of the map for simplifying Boolean functions, we must recognize the basic property possessed by adjacent squares. Any two adjacent squares in the map differ by only one variable, which is primed in one square and unprimed in the other. For example, m5 and m7 lie in two adjacent squares. Variable y is primed in m5 and unprimed in m7, while the other two variables are the same in both squares. From the postulates of Boolean algebra, it follows that the sum of two minterms in adjacent squares can be simplified to a single AND term consisting of only two literals. To clarify this, consider the sum of two adjacent squares such as m5 and m7:

m5 + m7 = xy'z + xyz = xz(y' + y) = xz

Here the two squares differ by the variable y, which can be removed when the sum of the two minterms is formed. Thus any two minterms in adjacent squares that are ORed together will cause a removal of the different variable.
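This one-variable removal can be mechanized: minterm numbers of adjacent squares differ in exactly one bit, and the combined term keeps the remaining variables. A Python sketch of the idea (`combine` is our helper name, not the text's):

```python
# Combine two adjacent minterms of a map into a single AND term.
# Adjacency means the minterm numbers differ in exactly one bit.
def combine(j, k, names="xyz"):
    diff = j ^ k
    assert diff != 0 and diff & (diff - 1) == 0, "minterms are not adjacent"
    term = ""
    for i, v in enumerate(names):
        bit = 1 << (len(names) - 1 - i)
        if bit & diff:
            continue                     # drop the variable that changes
        term += v if j & bit else v + "'"
    return term

print(combine(5, 7))   # m5 + m7 = xy'z + xyz → 'xz'
print(combine(0, 2))   # m0 + m2 = x'y'z' + x'yz' → "x'z'"
```

The second call anticipates the m0/m2 pair discussed next: numerically they are adjacent even though the squares sit on opposite edges of the drawn map.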
The following example explains the procedure to be used when minimizing a Boolean function with a map.

EXAMPLE 3-1. Simplify the Boolean function

F = x'yz + x'yz' + xy'z' + xy'z

First, a 1 is marked in each square as needed to represent the function, as shown in Fig. 3-4. This can be accomplished in two ways: either by converting each minterm to a binary number and then marking a 1 in the corresponding square, or by obtaining the coincidence of the variables in each term. For example, the term x'yz has the corresponding binary number 011 and represents minterm m3 in square 011. The second way of recognizing the square is by the coincidence of variables x', y, and z, which is found in the map by observing that x' belongs to the four squares in the first row, y belongs to the four squares in the two right columns, and z belongs to the four squares in the two middle columns. The area that belongs to all three literals is the single square in the first row and third column. In a similar manner, the other three squares belonging to the function F are marked by 1's in the map. The function is thus represented by an area containing four squares, each marked with a 1, as shown in Fig. 3-4. The next step is to subdivide the given area into adjacent squares. These are indicated in the map by two rectangles, each enclosing two 1's. The upper right rectangle represents the area enclosed by x'y; the lower left, the area enclosed by xy'. The sum of these two terms gives the answer:

F = x'y + xy'

Figure 3-4 Map for Example 3-1; x'yz + x'yz' + xy'z' + xy'z = x'y + xy'

Next consider the two squares labeled m0 and m2 in Fig. 3-3(a), or x'y'z' and x'yz' in Fig. 3-3(b). These two minterms also differ by one variable, y, and their sum can be simplified to a two-literal expression:

x'y'z' + x'yz' = x'z'

Consequently, we must modify the definition of adjacent squares to include this and other similar cases.
This is done by considering the map as being drawn on a surface where the right and left edges touch each other to form adjacent squares.

EXAMPLE 3-2. Simplify the Boolean function

F = x'yz + xy'z' + xyz + xyz'

The map for this function is shown in Fig. 3-5. There are four squares marked with 1's, one for each minterm of the function. Two adjacent squares are combined in the third column to give a two-literal term yz. The remaining two squares with 1's are also adjacent by the new definition and are shown in the diagram enclosed by half rectangles. These two squares, when combined, give the two-literal term xz'. The simplified function becomes:

F = yz + xz'

Figure 3-5 Map for Example 3-2; x'yz + xy'z' + xyz + xyz' = yz + xz'

Consider now any combination of four adjacent squares in the three-variable map. Any such combination represents the ORing of four adjacent minterms and results in an expression of only one literal. As an example, the sum of the four adjacent minterms m0, m2, m4, and m6 reduces to the single literal z' as shown:

x'y'z' + x'yz' + xy'z' + xyz' = x'z'(y' + y) + xz'(y' + y) = x'z' + xz' = z'(x' + x) = z'

EXAMPLE 3-3. Simplify the Boolean function

F = A'C + A'B + AB'C + BC

The map to simplify this function is shown in Fig. 3-6. Some of the terms in the function have fewer than three literals and are represented in the map by more than one square. For example, to find the squares corresponding
The function in this example has five minterms, as indicated by the five squares marked with 1’s. It is simplified by combining four squares in the center to give the literal C. The remaining single square marked with a 1 in O10 is combined with an adjacent square that has already been used once. This is permissible and even desirable since the combination of the two squares gives the term A’B while the single min- term represented by the square gives the three-variable term A'BC’. The simplified function is F=Ct+AB EXAMPLE 3-4. Simplify the Boolean function Foy 2=20,2,4,5,6 Here we are given the minterms by their decimal numbers. The corre- sponding squares are marked by 1’s as shown in Fig. 3-7. From the map we obtain the simplified function Foz! +xy" Figure 3-7. foxy.2) = © 245.6) = 2 + xy" Sec. 3.3 FOUR-VARIABLE MAP 63 33 FOUR-VARIABLE MAP ‘The map for Boolean functions of four binary variables is shown in Fig. 3-8. In part (a) are listed the 16 minterms and the squares assigned to each. In (b) the map is redrawn to show the relation with the four variables. The rows and columns are numbered in a reflected code sequence with only one digit changing value between two adjacent rows or columns. The minterm corresponding to each square can be obtained from the concatenation of the row number with the column number. For example, the number of the third row (11) and the second column (01), when concatenated, give the binary number 1101, the binary equivalent of dec- imal 13. Thus, the square in the third row and second column represents minterm m3 The map minimization of four-variable Boolean functions is similar to the method used to minimize three-variable functions. Adjacent squares are defined to be squares next to each other. In addition, the map is con- sidered to lie on a surface with the top and bottom edges, as well as the right and left edges, touching each other to form adjacent squares. For example, mo and m, form adjacent squares, as do ms and m,,. 
The combination of adjacent squares that is useful during the simplification process is easily determined from inspection of the four-variable map:

One square represents one minterm, giving a term of four literals.
Two adjacent squares represent a term of three literals.
Four adjacent squares represent a term of two literals.
Eight adjacent squares represent a term of one literal.
Sixteen adjacent squares represent the function equal to 1.

No other combination of squares can simplify the function. The following two examples show the procedure used to simplify four-variable Boolean functions.

EXAMPLE 3-5. Simplify the Boolean function

F(w, x, y, z) = Σ(0, 1, 2, 4, 5, 6, 8, 9, 12, 13, 14)

Since the function has four variables, a four-variable map must be used. The minterms listed in the sum are marked by 1's in the map of Fig. 3-9. Eight adjacent squares marked with 1's can be combined to form the one-literal term y'. The remaining three 1's on the right cannot be combined together to give a simplified term. They must be combined as two or four adjacent squares. The larger the number of squares combined, the smaller the number of literals in the term. In this example, the top two 1's on the right are combined with the top two 1's on the left to give the term w'z'. Note that it is permissible to use the same square more than once. We are now left with a square marked by 1 in the third row and fourth column (square 1110). Instead of taking this square alone (which will give a term of four literals), we combine it with squares already used to form an area of four adjacent squares. These squares comprise the two middle rows and the two end columns, giving the term xz'. The simplified function is:

F = y' + w'z' + xz'

EXAMPLE 3-6. Simplify the Boolean function

F = A'B'C' + B'CD' + A'BCD'
+ AB'C'

Figure 3-9 Map for Example 3-5; F(w, x, y, z) = Σ(0, 1, 2, 4, 5, 6, 8, 9, 12, 13, 14) = y' + w'z' + xz'

The area in the map covered by this function consists of the squares marked with 1's in Fig. 3-10. The function has four variables and, as expressed, consists of three terms, each with three literals, and one term of four literals. Each term of three literals is represented in the map by two squares. For example, A'B'C' is represented in squares 0000 and 0001. The function can be simplified in the map by taking the 1's in the four corners to give the term B'D'. This is possible because these four squares are adjacent when the map is drawn on a surface with the top and bottom or left and right edges touching one another. The two left-hand 1's in the top row are combined with the two 1's in the bottom row to give the term B'C'. The remaining 1 may be combined in a two-square area to give the term A'CD'. The simplified function is:

F = B'D' + B'C' + A'CD'

Figure 3-10 Map for Example 3-6; A'B'C' + B'CD' + A'BCD' + AB'C' = B'D' + B'C' + A'CD'

3-4 FIVE- AND SIX-VARIABLE MAPS

Maps of more than four variables are not as simple to use. The number of squares becomes excessively large and the geometry for combining adjacent squares becomes more involved. The number of squares is always equal to the number of minterms. For five-variable maps, we need 32 squares; for six-variable maps, we need 64 squares. Maps with seven or more variables need too many squares and are impractical to use. The five- and six-variable maps are shown in Figs. 3-11 and 3-12, respectively. The rows and columns are numbered in a reflected code sequence; the minterm assigned to each square is read from these numbers. In this way, the square in the third row (11) and second column (001) in the five-variable map is numbered 11001, the equivalent of decimal 25. Therefore, this square
Therefore, this square represents minterm m₂₅.

[Figure 3-11 Five-variable map]

[Figure 3-12 Six-variable map]

The letter symbol of each variable is marked along those squares where the corresponding bit value of the reflected code number is a 1. For example, in the five-variable map, the variable A is a 1 in the last two rows; B is a 1 in the middle two rows. The reflected numbers in the columns show variable C with a 1 in the rightmost four columns, variable D with a 1 in the middle four columns, and the 1's for variable E not physically adjacent but split in two parts. The variable assignment in the six-variable map is determined similarly.

The definition of adjacent squares for the maps of Figs. 3-11 and 3-12 must be modified again to take into account the fact that some variables are split in two parts. The five-variable map must be thought of as consisting of two four-variable maps, and the six-variable map as consisting of four four-variable maps. Each of these four-variable maps is recognized from the double lines in the center of the map; each retains the previously defined adjacency when taken individually. In addition, the center double line must be considered as the center of a book, with each half of the map being a page. When the book is closed, two adjacent squares will fall one on top of the other.
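Adjacency in these maps, including the mirror adjacency across a double line, amounts to a difference of exactly one bit between minterm numbers. As a quick sketch of mine (not part of the text), flipping each of the n bits of a minterm number enumerates all of its neighbors:

```python
# Sketch (mine, not the book's): every adjacency in an n-variable map,
# including the mirror adjacency across the double lines, is a difference
# of exactly one bit between minterm numbers.
def neighbors(m, n):
    """Return the set of minterms adjacent to minterm m in an n-variable map."""
    return {m ^ (1 << bit) for bit in range(n)}

# Minterm 31: its five-variable neighbors are 30, 15, 29, 23, and 27;
# the six-variable map adds minterm 63.
assert neighbors(31, 5) == {30, 15, 29, 23, 27}
assert neighbors(31, 6) == {30, 15, 29, 23, 27, 63}
```

The two assertions reproduce the neighbor lists for minterm 31 discussed next.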
In other words, the center double line is like a mirror, with each square being adjacent not only to its four neighboring squares, but also to its mirror image. For example, minterm 31 in the five-variable map is adjacent to minterms 30, 15, 29, 23, and 27. The same minterm in the six-variable map is adjacent to all these minterms plus minterm 63.

From inspection, and taking into account the new definition of adjacent squares, it is possible to show that any 2^k adjacent squares, for k = 0, 1, 2, ..., n, in an n-variable map will represent an area that gives a term of n - k literals. For the above statement to have any meaning, n must be larger than k. When n = k, the entire area of the map is combined to give the identity function. Table 3-1 shows the relation between the number of adjacent squares and the number of literals in the term. For example, eight adjacent squares combine an area in the five-variable map to give a term of two literals.

Table 3-1 The Relation between the Number of Adjacent Squares and the Number of Literals in the Term

Number of adjacent    Number of literals in a term
squares 2^k           in an n-variable map
k     2^k     n = 2    n = 3    n = 4    n = 5
0      1        2        3        4        5
1      2        1        2        3        4
2      4        0        1        2        3
3      8                 0        1        2
4     16                          0        1
5     32                                   0

EXAMPLE 3-7. Simplify the Boolean function

F(A, B, C, D, E) = Σ(0, 2, 4, 6, 9, 11, 13, 15, 17, 21, 25, 27, 29, 31)

The five-variable map of this function is shown in Fig. 3-13. Each minterm is converted to its equivalent binary number and the 1's are marked in their corresponding squares. It is now necessary to find combinations of adjacent squares that will result in the largest possible area. The four squares in the center of the right-half map are reflected across the double line and are combined with the four squares in the center of the left-half map to give eight allowable adjacent squares equivalent to the term BE. The two 1's in the bottom row are the reflection of each other about the center double line.
By combining them with the other two adjacent squares, the term AD'E is obtained. The four 1's in the top row are all adjacent and can be combined to give the term A'B'E'. All the 1's are now included. The simplified function is

F = BE + AD'E + A'B'E'

[Figure 3-13 Map for Example 3-7; F(A, B, C, D, E) = Σ(0, 2, 4, 6, 9, 11, 13, 15, 17, 21, 25, 27, 29, 31) = BE + AD'E + A'B'E']

3-5 PRODUCT OF SUMS SIMPLIFICATION

The minimized Boolean functions derived from the map in all previous examples were expressed in the sum of products form. With a minor modification, the product of sums form can be obtained. The procedure for obtaining a minimized function in product of sums follows from the basic properties of Boolean functions. The 1's placed in the squares of the map represent the minterms of the function. The minterms not included in the function denote the complement of the function. From this we see that the complement of a function is represented in the map by the squares not marked by 1's. If we mark the empty squares by 0's and combine them into valid adjacent squares, we obtain a simplified expression of the complement of the function, i.e., of F'. The complement of F' gives us back the function F. Because of the generalized De Morgan's theorem, the function so obtained is automatically in the product of sums form. The best way to show this is by example.

EXAMPLE 3-8. Simplify the following Boolean function in (a) sum of products and (b) product of sums.

F(A, B, C, D) = Σ(0, 1, 2, 5, 8, 9, 10)

The 1's marked in the map of Fig. 3-14 represent all the minterms of the function. The squares marked with 0's represent the minterms not included in F and, therefore, denote the complement of F.
Combining the squares with 1's gives the simplified function in sum of products:

(a) F = B'D' + B'C' + A'C'D

If the squares marked with 0's are combined, as shown in the diagram, one obtains the simplified complemented function:

F' = AB + CD + BD'

Applying De Morgan's theorem (by taking the dual and complementing each literal, as described in Sec. 2-4), we obtain the simplified function in product of sums:

(b) F = (A' + B')(C' + D')(B' + D)

[Figure 3-14 Map for Example 3-8; F(A, B, C, D) = Σ(0, 1, 2, 5, 8, 9, 10) = B'D' + B'C' + A'C'D = (A' + B')(C' + D')(B' + D)]

The implementation of the simplified expressions obtained in Ex. 3-8 is shown in Fig. 3-15. The sum of products expression is implemented in (a) with a group of AND gates, one for each AND term. The outputs of the AND gates are connected to the inputs of a single OR gate. The same function is implemented in (b) in its product of sums form with a group of OR gates, one for each OR term. The outputs of the OR gates are connected to the inputs of a single AND gate. In each case it is assumed that the input variables are directly available in their complemented form, so inverters are not needed.

[Figure 3-15 Gate implementation of the function of Example 3-8]

The configuration pattern established in Fig. 3-15 is the general form by which any Boolean function is implemented when expressed in one of the standard forms. AND gates are connected to a single OR gate when in sum of products; OR gates are connected to a single AND gate when in product of sums. Either configuration forms two levels of gates. Thus, the implementation of a function in a standard form is said to be a two-level implementation.

Example 3-8 has shown the procedure for obtaining the product of sums simplification when the function is originally expressed in the sum of minterms canonical form. The procedure is also valid when the function is originally expressed in the product of maxterms canonical form.
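Map results like those of Ex. 3-8 are easy to verify exhaustively by machine. The following sketch (my illustration, not part of the text) confirms that the product of sums form covers exactly the minterms of the original sum:

```python
# Sketch (mine): verify Example 3-8 by exhaustive enumeration. The
# product-of-sums result (A' + B')(C' + D')(B' + D) must cover exactly
# the minterms of the original sum, Σ(0, 1, 2, 5, 8, 9, 10).
MINTERMS = {0, 1, 2, 5, 8, 9, 10}

def f_pos(a, b, c, d):
    # (A' + B')(C' + D')(B' + D), written with lowercase names
    return ((not a) or (not b)) and ((not c) or (not d)) and ((not b) or d)

covered = set()
for m in range(16):
    a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    if f_pos(a, b, c, d):
        covered.add(m)

assert covered == MINTERMS
```

The same loop, applied to the sum of products form, yields the identical set, which is what "algebraically equal" means minterm by minterm.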
Consider, for example, the truth table that defines the function F in Table 3-2. In sum of minterms, this function is expressed as

F(x, y, z) = Σ(1, 3, 4, 6)

Table 3-2 Truth Table of Function F

x y z | F
0 0 0 | 0
0 0 1 | 1
0 1 0 | 0
0 1 1 | 1
1 0 0 | 1
1 0 1 | 0
1 1 0 | 1
1 1 1 | 0

In product of maxterms, it is expressed as

F(x, y, z) = Π(0, 2, 5, 7)

In other words, the 1's of the function represent the minterms, while the 0's represent the maxterms. The map for this function is drawn in Fig. 3-16. One can start simplifying this function by first marking a 1 for each minterm where the function is a 1. The remaining squares are marked by 0's. If, on the other hand, the product of maxterms is initially given, one can start by marking 0's in those squares listed in the function; the remaining squares are then marked by 1's. Once the 1's and 0's are marked, the function can be simplified in either one of the standard forms. For the sum of products, we combine the 1's to obtain

F = x'z + xz'

For the product of sums, we combine the 0's to obtain the simplified complemented function:

F' = xz + x'z'

which shows that the exclusive-or function is the complement of the equivalence function (Sec. 2-6). Taking the complement of F', we obtain the simplified function in product of sums:

F = (x' + z')(x + z)

To enter a function expressed in product of sums into the map, take the complement of the function and from it find the squares to be marked by 0's. For example, the function

F = (A' + B' + C)(B + D)

can be entered in the map by first taking its complement

F' = ABC' + B'D'

and then marking 0's in the squares representing the minterms of F'. The remaining squares are marked with 1's.

[Figure 3-16 Map for the function of Table 3-2]

3-6 DON'T-CARE CONDITIONS

The 1's and 0's in the map signify the combination of variables that makes the function equal to 1 or 0, respectively.
The combinations are usually obtained from a truth table that lists the conditions under which the function is a 1. The function is assumed equal to 0 under all other conditions. This assumption is not always true, since there are applications where certain combinations of input variables never occur. A four-bit decimal code, for example, has six combinations which are not used. Any digital circuit using this code operates under the assumption that these unused combinations will never occur as long as the system is working properly. As a result, we don't care what the function output is to be for these combinations of the variables, because they are guaranteed never to occur. These don't-care conditions can be used on a map to provide further simplification of the function.

It should be realized that a don't-care combination cannot be marked with a 1 on the map, because that would require the function to always be a 1 for such an input combination. Likewise, putting a 0 in the square requires the function to be 0. To distinguish the don't-care conditions from 1's and 0's, an X will be used. When choosing adjacent squares to simplify the function in the map, the X's may be assumed to be either 0 or 1, whichever gives the simplest expression. In addition, an X need not be used at all if it does not contribute to covering a larger area. In each case, the choice depends only on the simplification that can be achieved.

EXAMPLE 3-9. Simplify the Boolean function

F(w, x, y, z) = Σ(1, 3, 7, 11, 15)

with the don't-care conditions

d(w, x, y, z) = Σ(0, 2, 5)

The minterms of F are the variable combinations that make the function equal to 1. The minterms of d are the don't-care combinations known never to occur. The minimization is shown in Fig. 3-17. The minterms of F are marked by 1's, those of d are marked by X's, and the remaining squares are filled with 0's. In (a), the 1's and X's are combined in any convenient manner so as to enclose the maximum number of adjacent squares.
It is not necessary to include all or any of the X's, only those useful for simplifying a term. One combination that gives a minimum function encloses one X and leaves two out. This results in the simplified sum of products function

F = w'z + yz

[Figure 3-17 Example with don't-care conditions: (a) combining 1's and X's, F = w'z + yz; (b) combining 0's and X's, F = z(w' + y)]

In (b), the 0's are combined with any X's convenient to simplify the complement of the function. The best results are obtained if we enclose the two X's as shown. The complement function is simplified to

F' = z' + wy'

Complementing again, we obtain the simplified product of sums function

F = z(w' + y)

The two expressions obtained in Ex. 3-9 give two functions which can be shown to be algebraically equal. This is not always the case when don't-care conditions are involved. As a matter of fact, if an X is used as a 1 when combining the 1's and again as a 0 when combining the 0's, the two resulting functions will not yield algebraically equal answers. The selection of the don't-care condition as a 1 in the first case and as a 0 in the second results in different minterm expressions and thus different functions. This can be seen from Ex. 3-9. In the solution of this example, the X chosen to be a 1 was not chosen to be a 0. Now, if in Fig. 3-17(a) we choose the term w'x' instead of w'z, we still obtain a minimized function

F = w'x' + yz

But it is not algebraically equal to the one obtained in product of sums, because the same X's are used as 1's in the first minimization and as 0's in the second. This example also demonstrates that an expression with the minimum number of literals is not necessarily unique.
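Because the X's may be assigned freely, a minimized function with don't-cares is checked against three cases rather than two. A small sketch of mine for Ex. 3-9:

```python
# Sketch (mine): with don't-care conditions, F = w'z + yz must be 1 on
# every minterm of F, 0 on every combination outside F and d, and may
# take either value on the don't-care combinations.
REQUIRED  = {1, 3, 7, 11, 15}   # minterms of F
DONT_CARE = {0, 2, 5}           # minterms of d

def f_min(w, x, y, z):
    return bool(((not w) and z) or (y and z))   # w'z + yz

for m in range(16):
    w, x, y, z = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    if m in REQUIRED:
        assert f_min(w, x, y, z)        # must be 1
    elif m not in DONT_CARE:
        assert not f_min(w, x, y, z)    # must be 0
    # don't-care minterms are unconstrained
```

Running the same check on w'x' + yz also passes, which is exactly why two valid minimizations need not be algebraically equal.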
Sometimes the designer is confronted with a choice between two terms with an equal number of literals, with either choice resulting in a minimized expression.

3-7 THE TABULATION METHOD

The map method of simplification is convenient as long as the number of variables does not exceed five or six. As the number of variables increases, the excessive number of squares prevents a reasonable selection of adjacent squares. The obvious disadvantage of the map is that it is essentially a trial-and-error procedure which relies on the ability of the human user to recognize certain patterns. For functions of six or more variables, it is difficult to be sure that the best selection has been made.

The tabulation method overcomes this difficulty. It is a specific step-by-step procedure that is guaranteed to produce a simplified standard-form expression for a function. It can be applied to problems with many variables and has the advantage of being suitable for machine computation. However, it is quite tedious for human use and is prone to mistakes because of its routine, monotonous process. The tabulation method was first formulated by Quine (3) and later improved by McCluskey (4). It is also known as the Quine-McCluskey method.

The tabular method of simplification consists of two parts. The first is to find, by an exhaustive search, all the terms that are candidates for inclusion in the simplified function. These terms are called prime-implicants. The second operation is to choose among the prime-implicants those that give an expression with the least number of literals.

3-8 DETERMINATION OF PRIME-IMPLICANTS

The starting point of the tabulation method is the list of minterms that specify the function. The first tabular operation is to find the prime-implicants by using a matching process. This process compares each minterm with every other minterm. If two minterms differ in only one variable, that variable is removed and a term with one less literal is found.
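The matching operation at the heart of the tabulation method is simple to state in code. The sketch below is my own illustration (the book carries the procedure out by hand): two terms combine when they differ in exactly one position, and that position becomes a dash.

```python
# Sketch (mine): the matching step of the tabulation method. Terms are
# strings over '0', '1', '-'; two terms merge when they differ in
# exactly one position and neither has a dash there.
def combine(t1, t2):
    diff = [i for i in range(len(t1)) if t1[i] != t2[i]]
    if len(diff) == 1 and t1[diff[0]] != '-' and t2[diff[0]] != '-':
        i = diff[0]
        return t1[:i] + '-' + t1[i + 1:]
    return None          # no match: both terms stay as candidates

# Minterms 0 (0000) and 1 (0001) give (000-), i.e. the term w'x'y':
assert combine('0000', '0001') == '000-'
# Minterms 0 and 8 give (-000):
assert combine('0000', '1000') == '-000'
# Terms differing in more than one position cannot match:
assert combine('0001', '0010') is None
```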
This process is repeated for every minterm until the exhaustive search is completed. The matching-process cycle is then repeated for the new terms just found. Third and further cycles are continued until a single pass through a cycle yields no further elimination of literals. The remaining terms, and all the terms that did not match during the process, comprise the prime-implicants. This tabulation method is illustrated by the following example.

EXAMPLE 3-10. Simplify the following Boolean function by using the tabulation method:

F = Σ(0, 1, 2, 8, 10, 11, 14, 15)

Step 1: Group the binary representations of the minterms according to the number of 1's contained, as shown in Table 3-3, column (a). This is done by grouping the minterms into five sections separated by horizontal lines. The first section contains the number with no 1's in it. The second section contains those numbers that have only one 1. The third, fourth, and fifth sections contain those binary numbers with two, three, and four 1's, respectively. The decimal equivalents of the minterms are also carried along for identification.

Step 2: Any two minterms which differ from each other by only one variable can be combined, and the unmatched variable removed. Two minterm numbers fit in this category if they both have the same bit value in all positions except one. The minterms in one section are compared only with those of the next section down, because two terms differing by more than one bit cannot match. The minterm in the first section is compared with each of the three minterms in the second section. If any two numbers are the same in every position but one, a check is placed to the right of both minterms to show that they have been used. The resulting term, together with the decimal equivalents, is listed in column (b) of the table.
The variable eliminated during the matching is remembered by inserting a dash in its original position. In this case, m₀ (0000) combines with m₁ (0001) to form (000-). This combination is equivalent to the algebraic operation

m₀ + m₁ = w'x'y'z' + w'x'y'z = w'x'y'

Minterm m₀ also combines with m₂ to form (00-0) and with m₈ to form (-000). The result of this comparison is entered into the first section of column (b). The minterms of sections two and three of column (a) are next compared to produce the terms listed in the second section of column (b). All other sections of (a) are similarly compared and subsequent sections formed in (b). This exhaustive comparing process results in the four sections of (b).

Table 3-3 Determination of Prime-Implicants for Example 3-10

Column (a):        Column (b):         Column (c):
0   0000 ✓         0,1   000-          0,2,8,10    -0-0
1   0001 ✓         0,2   00-0 ✓        0,8,2,10    -0-0
2   0010 ✓         0,8   -000 ✓        10,11,14,15 1-1-
8   1000 ✓         2,10  -010 ✓        10,14,11,15 1-1-
10  1010 ✓         8,10  10-0 ✓
11  1011 ✓         10,11 101- ✓
14  1110 ✓         10,14 1-10 ✓
15  1111 ✓         11,15 1-11 ✓
                   14,15 111- ✓

Step 3: The terms of column (b) have only three variables. A 1 under a variable means it is unprimed, a 0 means it is primed, and a dash means the variable is not included in the term. The searching and comparing process is repeated for the terms in column (b) to form the two-variable terms of column (c). Again, terms in each section need to be compared only if they have dashes in the same position. Note that the term (000-) does not match with any other term; therefore, it has no check mark on its right. The decimal equivalents are written on the left-hand side of each entry for identification purposes. The comparing process should be carried out again in column (c) and in subsequent columns as long as proper matching is encountered. In the present example, the operation stops at the third column.

Step 4: The unchecked terms in the table form the prime-implicants.
In this example we have the term w'x'y' (000-) in column (b), and the terms x'z' (-0-0) and wy (1-1-) in column (c). Note that each term in column (c) appears twice in the table; as long as the term forms a prime-implicant, it is unnecessary to use the same term twice. The sum of the prime-implicants gives a simplified expression for the function. This is because each checked term in the table has been taken into account by an entry of a simpler term in a subsequent column. Therefore, the unchecked entries (prime-implicants) are the terms left to formulate the function. For the present example, the sum of the prime-implicants gives the minimized function in sum of products:

F = w'x'y' + x'z' + wy

It is worth comparing this answer with that obtained by the map method. Figure 3-18 shows the map simplification of this function. The combinations of adjacent squares give the three prime-implicants of the function. The sum of these three terms is the simplified expression in sum of products.

[Figure 3-18 Map for the function of Example 3-10; F = w'x'y' + x'z' + wy]

It is important to point out that Ex. 3-10 was purposely chosen to give the simplified function from the sum of prime-implicants. In most other cases, the sum of prime-implicants does not necessarily form the expression with the minimum number of terms. This is demonstrated in Ex. 3-11.

The tedious manipulation that one must undergo when using the tabulation method is reduced if the comparing is done with decimal numbers instead of binary. A method will now be shown that uses subtraction of decimal numbers instead of the comparing and matching of binary numbers. We note that each 1 in a binary number represents a coefficient multiplied by a power of two. When two minterms are the same in every position except one, the minterm with the extra 1 must be larger than the number of the other minterm by a power of two.
Therefore, two minterms can be combined if the number of the first minterm differs by a power of two from a second, larger number in the next section down the table. We shall illustrate this procedure by repeating Ex. 3-10.

As shown in Table 3-4, column (a), the minterms are arranged in sections as before, except that now only the decimal equivalents of the minterms are listed. The process of comparing minterms is as follows: inspect every two decimal numbers in adjacent sections of the table. If the number in the section below is greater than the number in the section above by a power of two (that is, 1, 2, 4, 8, 16, etc.), check both numbers to show that they have been used, and write them down in column (b). The pair of numbers transferred to column (b) includes a third number in parentheses that designates the power of two by which the pair of numbers differ. The number in parentheses tells us the position of the dash in the binary notation. The result of all comparisons of column (a) is shown in column (b).

Table 3-4 Determination of Prime-Implicants of Example 3-10 with Decimal Notation

Column (a):      Column (b):          Column (c):
0  ✓             0,1 (1)              0,2,8,10 (2,8)
1  ✓             0,2 (2) ✓            0,8,2,10 (2,8)
2  ✓             0,8 (8) ✓            10,11,14,15 (1,4)
8  ✓             2,10 (8) ✓           10,14,11,15 (1,4)
10 ✓             8,10 (2) ✓
11 ✓             10,11 (1) ✓
14 ✓             10,14 (4) ✓
15 ✓             11,15 (4) ✓
                 14,15 (1) ✓

The comparison between adjacent sections in column (b) is carried out in a similar fashion, except that only those terms with the same number in parentheses are compared. The pair of numbers in one section must differ by a power of two from the pair of numbers in the next section, and the numbers in the section below must be greater for the combination to take place. In column (c), we write all four decimal numbers, with the two numbers in parentheses designating the positions of the dashes. A comparison of Tables 3-3 and 3-4 may be helpful in understanding the derivations in Table 3-4.
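The decimal rule lends itself to a short sketch (my own illustration). One caution: a power-of-two difference alone is not sufficient; the smaller minterm must also have a 0 in the differing position. The grouping by number of 1's guarantees this in the hand procedure, while the code below checks it explicitly:

```python
# Sketch (mine): decimal test for combining two minterms, smaller first.
# The difference must be a power of two AND the smaller number must have
# a 0 bit in that position (the hand method gets this for free from the
# grouping of minterms by their number of 1's).
def decimal_match(small, large):
    diff = large - small
    is_power_of_two = diff > 0 and (diff & (diff - 1)) == 0
    return is_power_of_two and (small & diff) == 0

# Entries of Table 3-4: 0 and 2 combine as 0,2 (2); 0 and 8 as 0,8 (8).
assert decimal_match(0, 2) and decimal_match(0, 8)
assert decimal_match(10, 14)          # 10,14 (4)
# 9 (1001) and 10 (1010) differ by 1 yet differ in two bit positions:
assert not decimal_match(9, 10)
```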
The prime-implicants are those terms not checked in the table. These are the same as before, except that they are given in decimal notation. To convert from decimal notation to binary, convert all decimal numbers in the term to binary and then insert a dash in those positions designated by the numbers in parentheses. Thus 0, 1 (1) is converted to binary as 0000, 0001; a dash in the first position of either number results in (000-). Similarly, 0, 2, 8, 10 (2, 8) is converted to the binary notation from 0000, 0010, 1000, and 1010, and a dash inserted in positions 2 and 8 results in (-0-0).

EXAMPLE 3-11. Determine the prime-implicants of the function

F(w, x, y, z) = Σ(1, 4, 6, 7, 8, 9, 10, 11, 15)

The minterm numbers are grouped in sections as shown in Table 3-5, column (a). The binary equivalent of each minterm is included for the purpose of counting the number of 1's. The binary numbers in the first section have only one 1; those in the second section, two 1's; etc. The minterm numbers are compared by the decimal method, and a match is found if the number in the section below is greater than that in the section above. If the number in the section below is smaller than the one above, a match is not recorded even if the two numbers differ by a power of two. The exhaustive search in column (a) results in the terms of column (b), with all minterms in column (a) being checked. There are only two matches of terms in column (b). Each gives the same two-literal term recorded in column (c).

Table 3-5 Determination of Prime-Implicants for Example 3-11

Column (a):        Column (b):        Column (c):
0001  1 ✓          1,9 (8)            8,9,10,11 (1,2)
0100  4 ✓          4,6 (2)            8,9,10,11 (1,2)
1000  8 ✓          8,9 (1) ✓
0110  6 ✓          8,10 (2) ✓
1001  9 ✓          6,7 (1)
1010 10 ✓          9,11 (2) ✓
0111  7 ✓          10,11 (1) ✓
1011 11 ✓          7,15 (8)
1111 15 ✓          11,15 (4)

Prime-Implicants:

Decimal               Binary    Term
1, 9 (8)              -001      x'y'z
4, 6 (2)              01-0      w'xz'
6, 7 (1)              011-      w'xy
7, 15 (8)             -111      xyz
11, 15 (4)            1-11      wyz
8, 9, 10, 11 (1, 2)   10--      wx'
The prime-implicants consist of all the unchecked terms in the table. The conversion from the decimal to the binary notation is shown under the table. The prime-implicants are found to be x'y'z, w'xz', w'xy, xyz, wyz, and wx'. The sum of the prime-implicants gives a valid algebraic expression for the function. However, this expression is not necessarily the one with the minimum number of terms. This can be demonstrated from inspection of the map for the function of Ex. 3-11. As shown in Fig. 3-19, the minimized function is found to be

F = x'y'z + w'xz' + xyz + wx'

which consists of the sum of four out of the six prime-implicants derived in Ex. 3-11. The tabular procedure for selecting the prime-implicants that give the minimized function is the subject of the next section.

[Figure 3-19 Map for the function of Example 3-11; F = x'y'z + w'xz' + xyz + wx']

3-9 SELECTION OF PRIME-IMPLICANTS

The selection of the prime-implicants that form the minimized function is made from a prime-implicant table. In this table, each prime-implicant is represented in a row and each minterm in a column. Crosses are placed in each row to show the composition of the minterms that make up the prime-implicants. A minimum set of prime-implicants is then chosen that covers all the minterms in the function. This procedure is illustrated in Ex. 3-12.

EXAMPLE 3-12. Minimize the function of Ex. 3-11.

The prime-implicant table for this example is shown in Table 3-6. There are six rows, one for each prime-implicant (derived in Ex. 3-11), and nine columns, each representing one minterm of the function. Crosses are placed in each row to indicate the minterms contained in the prime-implicant of that row. For example, the two crosses in the first row indicate that minterms 1 and 9 are contained in the prime-implicant x'y'z.
It is advisable to include the decimal equivalents of the prime-implicant in each row, as they conveniently give the minterms contained in it. After all the crosses have been marked, we proceed to select a minimum number of prime-implicants.

Table 3-6 Prime-Implicant Table for Example 3-12

                        1   4   6   7   8   9   10  11  15
✓  x'y'z   1, 9         x                   x
✓  w'xz'   4, 6             x   x
   w'xy    6, 7                 x   x
   xyz     7, 15                    x                   x
   wyz     11, 15                                   x   x
✓  wx'     8, 9, 10, 11                 x   x   x   x
                        ✓   ✓   ✓       ✓   ✓   ✓   ✓

The completed prime-implicant table is inspected for columns containing only a single cross. In this example, there are four minterms whose columns have a single cross: 1, 4, 8, and 10. Minterm 1 is covered by prime-implicant x'y'z; i.e., the selection of prime-implicant x'y'z guarantees that minterm 1 is included in the function. Similarly, minterm 4 is covered by prime-implicant w'xz', and minterms 8 and 10 by prime-implicant wx'. Prime-implicants that cover minterms with a single cross in their column are called essential prime-implicants. To enable the final simplified expression to contain all the minterms, we have no alternative but to include the essential prime-implicants. A check mark is placed in the table next to the essential prime-implicants to indicate that they have been selected.

Next we check each column whose minterm is covered by the selected essential prime-implicants. For example, the selected prime-implicant x'y'z covers minterms 1 and 9. A check is inserted at the bottom of these columns. Similarly, prime-implicant w'xz' covers minterms 4 and 6, and wx' covers minterms 8, 9, 10, and 11. Inspection of the prime-implicant table shows that the selection of the essential prime-implicants covers all the minterms of the function except 7 and 15. These two minterms must be included by the selection of one or more prime-implicants. In this example, it is clear that prime-implicant xyz covers both minterms and is, therefore, the one to be selected.
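The column scan just described mechanizes directly. In this sketch of mine (using the prime-implicants of Ex. 3-11), a prime-implicant is essential when some column holds only its cross, and the essential set leaves exactly minterms 7 and 15 uncovered:

```python
# Sketch (mine) of the column scan of Example 3-12, using the
# prime-implicants found in Example 3-11.
PRIME_IMPLICANTS = {
    "x'y'z": {1, 9},
    "w'xz'": {4, 6},
    "w'xy":  {6, 7},
    "xyz":   {7, 15},
    "wyz":   {11, 15},
    "wx'":   {8, 9, 10, 11},
}
MINTERMS = {1, 4, 6, 7, 8, 9, 10, 11, 15}

# A prime-implicant is essential when some column holds only its cross.
essential = set()
for m in MINTERMS:
    rows = [t for t, cover in PRIME_IMPLICANTS.items() if m in cover]
    if len(rows) == 1:
        essential.add(rows[0])

assert essential == {"x'y'z", "w'xz'", "wx'"}

# The essentials cover everything except minterms 7 and 15,
# so xyz (which covers both) completes the selection.
covered = set().union(*(PRIME_IMPLICANTS[t] for t in essential))
assert MINTERMS - covered == {7, 15}
```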
We have thus found the minimum set of prime-implicants whose sum gives the required minimized function:

F = x'y'z + w'xz' + wx' + xyz

The simplified expressions derived in the preceding examples were all in the sum of products form. The tabulation method can be adapted to give a simplified expression in product of sums. As in the map method, we have to start with the complement of the function by taking the 0's as the initial list of minterms. This list contains those minterms not included in the original function, which are numerically equal to the maxterms of the function. The tabulation process is carried out with the 0's of the function and terminates with a simplified expression in sum of products of the complement of the function. By taking the complement again, one obtains the simplified product of sums expression.

A function with don't-care conditions can be simplified by the tabulation method after a slight modification. The don't-care terms are included in the list of minterms when the prime-implicants are determined. This allows the derivation of prime-implicants with the least number of literals. The don't-care terms are not included in the list of minterms when setting up the prime-implicant table, because don't-care terms do not have to be covered by the selected prime-implicants.

3-10 CONCLUDING REMARKS

Two methods of Boolean function simplification were introduced in this chapter. The criterion for simplification was taken to be the minimization of the number of literals in sum of products or product of sums expressions. Both the map and the tabulation methods are restricted in their capabilities, since they are useful only for simplifying Boolean functions expressed in the standard forms. Although this is a disadvantage of the methods, it is not very critical. Most applications prefer the standard forms over any other form. We have seen from Fig.
3-15 that the gate implementation of expressions in standard form consists of no more than two levels of gates. Expressions not in the standard form are implemented with more than two levels. Humphrey (5) shows an extension of the map method that produces simplified multilevel expressions.

One should recognize that the reflected code sequence chosen for the maps is not unique. It is possible to draw a map and assign a binary reflected code sequence to the rows and columns different from the sequence employed here. As long as the binary sequence chosen produces a change in only one bit between adjacent squares, it will produce a valid and useful map. Any map that looks different from the one used in this book, or is called by a different name, should be recognized as merely a variation of reflected code assignment.

As evident from Exs. 3-10 and 3-11, the tabulation method has the drawback that errors will inevitably occur in trying to compare numbers over long lists. The map method would seem to be preferable, but for more than five variables we cannot be certain that the best simplified expression has been found. The real advantage of the tabulation method lies in the fact that it consists of specific step-by-step procedures that guarantee an answer. Moreover, this formal procedure is suitable for computer mechanization.

It was stated in Sec. 3-8 that the tabulation method always starts with the minterm list of the function. If the function is not in this form, it must be converted. In most applications, the function to be simplified comes from a truth table, from which the minterm list is readily available. Otherwise, the conversion to minterms adds considerable manipulative work to the problem. However, an extension of the tabulation method exists for finding prime-implicants from arbitrary sum of products expressions. See, for example, McCluskey (7).
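As a concrete illustration of that suitability for machine computation, here is a compact sketch (my own; the chapter's tables are the hand version) of the prime-implicant search of Sec. 3-8. For brevity it compares every pair of terms rather than only adjacent sections; the extra comparisons simply fail to match.

```python
# Sketch (mine) of the repeated matching cycles of Sec. 3-8: combine
# terms differing in one position until no further matches occur; the
# terms that never matched are the prime-implicants.
def prime_implicants(minterms, n):
    terms = {format(m, '0{}b'.format(n)) for m in minterms}
    primes = set()
    while terms:
        used, next_terms = set(), set()
        for t1 in terms:
            for t2 in terms:
                diff = [i for i in range(n) if t1[i] != t2[i]]
                if len(diff) == 1 and '-' not in (t1[diff[0]], t2[diff[0]]):
                    used.update((t1, t2))
                    next_terms.add(t1[:diff[0]] + '-' + t1[diff[0] + 1:])
        primes |= terms - used       # unmatched terms are prime-implicants
        terms = next_terms
    return primes

# Example 3-10 again: w'x'y' (000-), x'z' (-0-0), and wy (1-1-).
assert prime_implicants({0, 1, 2, 8, 10, 11, 14, 15}, 4) == {'000-', '-0-0', '1-1-'}
```

A production minimizer would add the section grouping for speed and follow this with the prime-implicant-table selection, but the core cycle is no more than what is shown here.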
In this chapter, we have considered the simplification of functions with many input variables and a single output variable. However, some digital circuits have more than one output. Such circuits are described by a set of Boolean functions, one for each output variable. A circuit with multiple outputs may sometimes have common terms among the various functions which can be utilized to form common gates during the implementation. This results in further simplification not taken into consideration when each function is simplified separately. There exists an extension of the tabulation method for multiple output circuits (6, 7). However, this method is too specialized and very tedious for human manipulation. It is of practical importance only if a computer program based on this method is available to the user.

PROBLEMS

3-1. Obtain the simplified expressions in sum of products for the following Boolean functions:
(a) F(x, y, z) = Σ(2, 3, 6, 7)
(b) F(A, B, C, D) = Σ(7, 13, 14, 15)
(c) F(A, B, C, D) = Σ(4, 6, 7, 15)
(d) F(w, x, y, z) = Σ(2, 3, 12, 13, 14, 15)

3-2. Obtain the simplified expressions in sum of products for the following Boolean functions:
(a) xy + x'y'z' + xyz'
(b) A'B + BC + B'C'
(c) a' + bc + abc'
(d) xy'z + xyz' + x'yz + xyz

3-3. Obtain the simplified expressions in sum of products for the following Boolean functions:
(a) D(A' + B) + B'(C + AD)
(b) ABD + A'CD' + A'B + A'C'D' + AB'D'
(c) k'lm' + klmn + klm'n' + lmn'
(d) A'B'C'D' + AC'D' + B'CD' + A'BCD + BC'D
(e) x'z + w'xy' + w(x'y + xy')

3-4. Obtain the simplified expressions in sum of products for the following Boolean functions:
(a) F(A, B, C, D, E) = Σ(0, 1, 4, 5, 16, 17, 25, 29)
(b) BDE + B'C'D + CDE + A'B'CE + A'B'C + B'C'D'E'
(c) A'B'CE' + A'B'C'D' + B'D'E' + B'CD' + CDE' + BDE'

3-5. Given the following truth table:
(a) express F1 and F2 in product of maxterms.
(b) obtain the simplified functions in sum of products,
(c) obtain the simplified functions in product of sums.

3-6. Obtain the simplified expressions in product of sums:
(a) F(x, y, z) = Π(0, 1, 4, 5)
(b) F(A, B, C, D) = Π(0, 1, 2, 3, 4, 10, 11)
(c) F(w, x, y, z) = Π(1, 3, 5, 7, 13, 15)

3-7. Obtain the simplified expressions in (1) sum of products and (2) product of sums:
(a) x'z' + y'z' + yz' + xyz
(b) (A + B' + D)(A' + B + D)(C + D)(C' + D')
(c) (A' + B + D')(A + B + C)(A' + B + D)(B + C + D)
(d) (A' + B + D)(A + D')(A + B + D)(A + B + C + D)
(e) w'yz' + vw'z' + vw'x + v'wz + v'w'y'z'

3-8. Draw the gate implementation of the simplified Boolean functions obtained in Prob. 3-7.

3-9. Simplify the Boolean function F in sum of products using the don't-care conditions d:
(a) F = y' + x'z', d = yz + xy
(b) F = B'C'D' + BCD' + ABCD, d = B'CD' + BC'D'

3-10. Simplify the Boolean function F, using the don't-care conditions d, in (1) sum of products and (2) product of sums:
(a) F = A'B'D' + A'CD + ABC, d = A'BCD + ACD + ABD'
(b) F = w'(x'y + x'y' + xyz) + x'z'(y + w), d = w'x(y'z + yz') + wyz
(c) F = ACE + A'CD'E' + A'CDE, d = DE' + A'D'E + AD'E'
(d) F = B'DE' + A'BE + BC'E' + A'BCD', d = BDE' + CDE'

3-11. Given Boolean functions F1 and F2:
(a) Show that the Boolean function G obtained from ANDing F1 and F2 (i.e., G = F1 · F2) contains those minterms common to both F1 and F2.
(b) Show that the Boolean function H obtained from ORing F1 and F2 (i.e., H = F1 + F2) contains all the minterms of both F1 and F2.
(c) Explain how the maps of F1 and F2 can be used to find G and H.

3-12. The following Boolean expression

BE + B'DE'

is a simplified version of the expression

A'BE + BCDE + BC'D'E + A'B'DE' + B'CDE'

Are there any don't-care conditions? If so, what are they?

3-13. Give three possible ways to express the function

F = A'B'D' + AB'CD' + A'BD + ABCD

with eight or less literals.

3-14. With the use of maps, find the simplest form in sum of products of the function F = fg, where f and g are given by:

f = wxy' + y'z + w'yz' + x'yz'
g = (w + x + y + z')(x' + y' + z)(w' + y + z)

Hint: See Prob. 3-11.

3-15. Simplify the following Boolean functions by means of the tabulation method:
(a) F(A, B, C, D, E, F, G) = Σ(20, 28, 52, 60)
(b) F(A, B, C, D, E, F, G) = Σ(20, 28, 38, 39, 52, 60, 102, 103, 127)
(c) F(A, B, C, D, E, F) = Σ(6, 9, 13, 18, 19, 25, 27, 29, 41, 45, 57, 61)

3-16. Repeat Prob. 3-6 using the tabulation method.

3-17. Repeat Prob. 3-10(c) and (d) using the tabulation method.

REFERENCES

1. Veitch, E. W., "A Chart Method for Simplifying Truth Functions," Proc. of the ACM (May 1952), 127-33.
2. Karnaugh, M., "A Map Method for Synthesis of Combinational Logic Circuits," Trans. AIEE, Comm. and Electronics, Vol. 72, Part I (November 1953), 593-99.
3. Quine, W. V., "The Problem of Simplifying Truth Functions," Am. Math. Monthly, Vol. 59, No. 8 (October 1952), 521-31.
4. McCluskey, E. J., Jr., "Minimization of Boolean Functions," Bell System Tech. J., Vol. 35, No. 6 (November 1956), 1417-44.
5. Humphrey, W. S., Jr., Switching Circuits with Computer Applications. New York: McGraw-Hill Book Co., 1958. Ch. 4.
6. Hill, F. J., and G. R. Peterson, Introduction to Switching Theory and Logical Design. New York: John Wiley & Sons, Inc., 1968. Chs. 6 and 7.
7. McCluskey, E. J., Jr., Introduction to the Theory of Switching Circuits. New York: McGraw-Hill Book Co., 1965. Ch. 4.

4 COMBINATIONAL LOGIC

4-1 INTRODUCTION TO COMBINATIONAL LOGIC CIRCUITS

Logic circuits for digital systems may be combinational or sequential. A combinational circuit consists of logic gates whose outputs at any time are determined directly from the present combination of inputs without regard for previous inputs. A combinational circuit performs a specific information processing operation fully specified logically by a set of Boolean functions. Sequential circuits employ memory elements (binary cells) in addition to logic gates.
Their outputs are a function of the inputs and the state of the memory elements. The state of the memory elements, in turn, is a function of previous inputs. As a consequence, the outputs of a sequential circuit depend not only on present, but also on past, inputs, and the circuit behavior must be specified by a time sequence of inputs and internal states. Sequential circuits are discussed in Ch. 6.

In Ch. 1 we learned to recognize binary numbers and binary codes that represent discrete quantities of information. These binary variables are represented by electric voltages or by some other signal. The signals can be manipulated in digital logic gates to perform required functions. In Ch. 2 we introduced Boolean algebra as a way to express logic functions algebraically. In Ch. 3 we learned how to simplify Boolean functions to achieve economical gate implementations. The purpose of this chapter is to utilize the knowledge acquired in previous chapters and formulate various systematic design and analysis procedures of combinational circuits. The solution of some typical examples will provide a useful catalog of elementary functions important for the understanding of digital computers and systems.

A combinational circuit consists of input variables, logic gates, and output variables. The logic gates accept signals from the inputs and generate signals to the outputs. This process transforms binary information from the given input data to the required output data. Obviously, both input and output data are represented by binary signals; i.e., they exist in two possible values, one representing logic-1 and the other logic-0. A block diagram of a combinational circuit is shown in Fig. 4-1. The n input binary variables come from an external source; the m output variables go to an external destination. In many applications, the source and/or destination are storage registers (Sec.
1-7) located either in the vicinity of the combinational circuit or in a remote external device. By definition, an external register does not influence the behavior of the combinational circuit, because if it does, the total system becomes a sequential circuit.

For n input variables, there are 2^n possible combinations of binary input values. For each possible input combination, there is one and only one possible output combination. A combinational circuit can be described by m Boolean functions, one for each output variable. Each output function is expressed in terms of the n input variables.

Each input variable to a combinational circuit may have one or two wires. When only one wire is available, it may represent the variable either in the normal form (unprimed) or in the complement form (primed). Since a variable in a Boolean expression may appear primed and/or unprimed, it is necessary to provide an inverter for each literal not available in the input wire. On the other hand, an input variable may appear in two wires, supplying both the normal and complement forms to the input of the circuit. If so, it is unnecessary to include inverters for the inputs.

Figure 4-1 Block diagram of a combinational circuit

The type of binary cells used in most digital systems are flip-flop circuits (Ch. 6) that have outputs for both the normal and complement values of the stored binary variable. In our subsequent work, we shall assume that each input variable appears in two wires, supplying both the normal and complement values simultaneously. We must also realize that an inverter circuit can always supply the complement of the variable if only one wire is available.
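The description above can be made concrete with a small sketch of our own (the names are not from the text): a combinational circuit is just m Boolean functions of n inputs, so its truth table is obtained by evaluating those functions over all 2^n input combinations.

```python
# Enumerate the 2^n input combinations and evaluate the m output functions,
# exactly as the block diagram of Fig. 4-1 suggests.
from itertools import product

def truth_table(output_functions, n):
    """Each entry pairs an n-tuple of input bits with the m output bits."""
    return [(bits, tuple(f(*bits) for f in output_functions))
            for bits in product((0, 1), repeat=n)]

# A two-input, two-output example: F1 = xy, F2 = x + y.
for bits, outs in truth_table([lambda x, y: x & y, lambda x, y: x | y], 2):
    print(bits, outs)
```

The same helper tabulates any of the circuits designed later in the chapter by supplying their output functions.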
4-2 DESIGN PROCEDURE

The design of combinational circuits starts from the verbal outline of the problem and ends in a logic circuit diagram, or a set of Boolean functions from which the logic diagram can be easily obtained. The procedure involves the following steps:

1. The problem is stated.
2. The number of available input variables and required output variables is determined.
3. Each input and output variable is assigned a letter symbol.
4. The truth table that defines the required relations between inputs and outputs is derived.
5. The simplified Boolean function for each output is obtained.
6. The logic diagram is drawn.

A truth table for a combinational circuit consists of input columns and output columns. The 1's and 0's in the input columns are obtained from the 2^n binary combinations available for n input variables. The binary values for the outputs are determined from examination of the stated problem. An output can be equal to either 0 or 1 for every valid input combination. However, the specifications may indicate that some input combinations will not occur. These combinations become don't-care conditions.

The output functions specified in the truth table give the exact definition of the combinational circuit. It is important that the verbal specifications are interpreted correctly into a truth table. Sometimes the designer must use his intuition and experience to arrive at the correct interpretation. Word specifications are very seldom complete and exact. Any wrong interpretation which results in an incorrect truth table produces a combinational circuit that will not fulfil the stated requirements.

The output Boolean functions from the truth table are simplified by any available method, such as algebraic manipulation, the map method, or the tabulation procedure. Usually there will be a variety of simplified expressions from which to choose.
However, in any particular application, certain restrictions, limitations, and criteria will serve as a guide in the process of choosing a particular algebraic expression. A practical design method would have to consider such constraints as (a) minimum number of gates, (b) minimum number of inputs to a gate, (c) minimum propagation time of the signal through the circuit, (d) minimum number of interconnections, and (e) limitations of the driving capabilities of each gate. Since all these criteria cannot be satisfied simultaneously, and since the importance of each constraint is dictated by the particular application, it is difficult to make a general statement as to what constitutes an acceptable simplification. In most cases, the simplification begins by satisfying an elementary objective, such as producing a simplified Boolean function in a standard form, and from that proceeds to meet any other performance criteria.

In practice, designers tend to go from the Boolean functions to a wiring list that shows the interconnections among various standard logic gates. In that case, the design need not go any further than the required simplified output Boolean functions. However, a logic diagram is helpful for visualizing the gate implementation of the expressions.

4-3 ADDERS

Digital computers perform a variety of information processing tasks. Among the basic functions encountered are the various arithmetic operations. The most basic arithmetic operation, no doubt, is the addition of two binary digits. This simple addition consists of four possible elementary operations, namely: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10. The first three operations produce a sum whose length is one digit, but when both augend and addend bits are equal to 1, the binary sum consists of two digits. The higher significant bit of this result is called a carry.
When the augend and addend numbers contain more significant digits, the carry obtained from the addition of two bits is added to the next higher order pair of significant bits. A combinational circuit that performs the addition of two bits is called a half-adder. One that performs the addition of three bits (two significant bits and a previous carry) is a full-adder. The name of the former stems from the fact that two half-adders can be employed to implement a full-adder. The two adder circuits are the first combinational circuits we shall design.

Half-Adder

From the verbal explanation of a half-adder, we find that this circuit needs two binary inputs and two binary outputs. The input variables designate the augend and addend bits; the output variables produce the sum and carry. It is necessary to specify two output variables because the result may consist of two binary digits. We arbitrarily assign symbols x and y to the two inputs and S (for sum) and C (for carry) to the outputs. Now that we have established the number and name of the input and output variables, we are ready to formulate a truth table to identify exactly the function of the half-adder. This truth table is shown below:

x y | C S
0 0 | 0 0
0 1 | 0 1
1 0 | 0 1
1 1 | 1 0

The carry output is 0 unless both inputs are 1. The S output represents the least significant bit of the sum. The simplified Boolean functions for the two outputs can be obtained directly from the truth table. The simplified sum of products expressions are:

S = xy' + x'y
C = xy

The logic diagram for this implementation is shown in Fig. 4-2(a), as are four other implementations for a half-adder. They all achieve the same result as far as the input-output behavior is concerned. They illustrate the flexibility available to the designer when implementing even a simple combinational logic function such as this. Figure 4-2(a), as mentioned above, is the implementation of the half-adder in sum of products.
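The two equations can be checked mechanically against their arithmetic meaning; the sketch below is our own illustration, not a circuit from the text.

```python
# Half-adder: S = xy' + x'y (the exclusive-or) and C = xy, checked against
# the arithmetic fact that C and S are the two bits of x + y.
def half_adder(x, y):
    s = (x & (1 - y)) | ((1 - x) & y)   # S = xy' + x'y
    c = x & y                            # C = xy
    return s, c

for x in (0, 1):
    for y in (0, 1):
        s, c = half_adder(x, y)
        assert 2 * c + s == x + y        # the two-bit result equals the sum
```

Writing the complement of a bit as (1 - bit) keeps the expressions in the same literal form as the Boolean equations.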
Figure 4-2(b) shows the implementation in product of sums:

S = (x + y)(x' + y')
C = xy

To obtain the implementation of Fig. 4-2(c), we note that S is the exclusive-or of x and y. The complement of S is the equivalence of x and y (Sec. 1-6):

S' = xy + x'y'

but C = xy, and therefore we have

S = (C + x'y')'

In Fig. 4-2(d) we use the product of sums implementation with C derived as follows:

C = xy = (x' + y')'

In Fig. 4-2(e) we obtain the product term (x' + y') from the complement of C as follows:

S = (x + y)(x' + y') = (x + y)C'

Figure 4-2 Various implementations of a half-adder

The half-adder is limited; it adds only two single bits. Although it generates a carry for the next higher pair of significant bits, it cannot accept a carry generated from the previous pair of lower significant bits. A full-adder solves this problem.

Full-Adder

A full-adder is a combinational circuit that forms the arithmetic sum of three input bits. It consists of three inputs and two outputs. Two of the input variables, denoted by x and y, represent the two significant bits to be added. The third input, z, represents the carry from the previous lower significant position. Two outputs are necessary because the arithmetic sum of three binary digits ranges in value from 0 to 3, and binary 2 or 3 needs two digits. The two outputs are designated by the symbols S for sum and C for carry. The binary variable S gives the value of the least significant bit of the sum. The binary variable C gives the output carry. The truth table of the full-adder is as follows:

x y z | C S
0 0 0 | 0 0
0 0 1 | 0 1
0 1 0 | 0 1
0 1 1 | 1 0
1 0 0 | 0 1
1 0 1 | 1 0
1 1 0 | 1 0
1 1 1 | 1 1

The eight rows under the input variables designate all possible combinations of 1's and 0's that these variables may have. The 1's and 0's for the output variables are determined from the arithmetic sum of the input bits. When all input bits are 0's, the output is 0. The S output is equal to 1 when only one input is equal to 1 or when all three inputs are equal to 1.
The C output has a carry of 1 if two or three inputs are equal to 1.

The input and output bits of the combinational circuit have different interpretations at various stages of the problem. Physically, the binary signals of the input wires are considered binary digits added arithmetically to form a two-digit sum at the output wires. On the other hand, the same binary values are considered variables of Boolean functions when expressed in the truth table or when the circuit is implemented with logic gates. It is important to realize that two different interpretations are given to the values of the bits encountered in this circuit.

The input-output logical relation of the full-adder circuit may be expressed in two Boolean functions, one for each output variable. Each output Boolean function requires a unique map for its simplification. Each map must have eight squares, since each output is a function of three input variables. The maps of Fig. 4-3 are used for simplifying the two output functions. The 1's in the squares for the maps of S and C are determined directly from the truth table. The squares with 1's for the S output do not combine in adjacent squares to give a simplified expression in sum of products. The C output can be simplified to a six-literal expression.

Figure 4-3 Maps for full-adder

The logic diagram for the full-adder implemented in sum of products is shown in Fig. 4-4. This implementation uses the following Boolean expressions:

S = x'y'z + x'yz' + xy'z' + xyz
C = xy + xz + yz

Figure 4-4 Implementation of full-adder in sum of products

Other configurations for a full-adder may be developed. The product of sums implementation requires the same number of gates as in Fig. 4-4, with the number of AND and OR gates interchanged. A full-adder can be implemented with two half-adders and one OR gate, as shown in Fig. 4-5.
The S output from the second half-adder is the exclusive-or of z and the output of the first half-adder, giving

S = z'(xy' + x'y) + z(xy' + x'y)'
  = z'(xy' + x'y) + z(xy + x'y')
  = xy'z' + x'yz' + xyz + x'y'z

and the carry output is

C = z(xy' + x'y) + xy = xy'z + x'yz + xy

Figure 4-5 Implementation of a full-adder with two half-adders and an OR gate

4-4 SUBTRACTORS

The subtraction of two binary numbers may be accomplished by taking the complement of the subtrahend and adding it to the minuend (Sec. 1-5). By this method, the subtraction operation becomes an addition operation requiring full-adders for its machine implementation. It is possible to implement subtraction with logic circuits in a direct manner as done with paper and pencil. By this method, each subtrahend bit of the number is subtracted from its corresponding significant minuend bit to form a difference bit. If the minuend bit is smaller than the subtrahend bit, a 1 is borrowed from the next higher significant position. The fact that a 1 has been borrowed must be conveyed to the next higher pair of bits by means of a binary signal coming out (output) of a given stage and going into (input) the next higher stage. Just as there are half- and full-adders, there are half- and full-subtractors.

Half-Subtractor

A half-subtractor is a combinational circuit that subtracts two bits and produces their difference. It also has an output to specify if a 1 has been borrowed. Designate the minuend bit by x and the subtrahend bit by y. To perform x - y, we have to check the relative magnitudes of x and y. If x ≥ y, we have three possibilities: 0 - 0 = 0, 1 - 0 = 1, and 1 - 1 = 0. The result is called the difference bit. If x < y, we have 0 - 1, and it is necessary to borrow a 1 from the next higher stage. The 1 borrowed from the next higher stage adds 2 to the minuend bit, just as in the decimal system a borrow adds 10 to a minuend digit. With the minuend equal to 2, the difference becomes 2 - 1 = 1.
The half-subtractor needs two outputs. One output generates the difference and will be designated by the symbol D. The second output, designated B for borrow, generates the binary signal that informs the next stage if a 1 has been borrowed. The truth table for the input-output relations of a half-subtractor can now be derived as follows:

x y | B D
0 0 | 0 0
0 1 | 1 1
1 0 | 0 1
1 1 | 0 0

The output borrow B is a 0 as long as x ≥ y. It is a 1 for x = 0 and y = 1. The D output is the result of the arithmetic operation 2B + x - y. The Boolean functions for the two outputs of the half-subtractor are derived directly from the truth table:

D = x'y + xy'
B = x'y

It is interesting to note that the logic for D is exactly the same as the output S in the half-adder.

Full-Subtractor

A full-subtractor is a combinational circuit that performs subtraction between two bits, taking into account that a 1 may have been borrowed by a lower significant stage. This circuit has three inputs and two outputs. The three inputs, x, y, and z, denote the minuend, subtrahend, and previous borrow, respectively. The two outputs, D and B, represent the difference and output borrow, respectively. The truth table for the circuit is as follows:

x y z | B D
0 0 0 | 0 0
0 0 1 | 1 1
0 1 0 | 1 1
0 1 1 | 1 0
1 0 0 | 0 1
1 0 1 | 0 0
1 1 0 | 0 0
1 1 1 | 1 1

The eight rows under the input variables designate all possible combinations of 1's and 0's that the binary variables may take. The 1's and 0's for the output variables are determined from the subtraction of x - y - z. The combinations having input borrow z = 0 reduce to the same four conditions of the half-subtractor. For x = 0, y = 0, and z = 1, we have to borrow a 1 from the next stage, which makes B = 1 and adds 2 to x. Since 2 - 0 - 1 = 1, D = 1. For x = 0 and yz = 11, we need to borrow again, making B = 1 and x = 2. Since 2 - 1 - 1 = 0, D = 0. For x = 1 and yz = 01, we have x - y - z = 0, which makes B = 0 and D = 0. Finally, for x = 1, y = 1, z = 1, we have to borrow 1, making B = 1 and x = 3, and 3 - 1 - 1 = 1, making D = 1.
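The table can also be generated arithmetically rather than row by row; the following sketch (our own illustration) raises B whenever x - y - z would go negative, and takes D as the low bit after the borrowed 1 adds 2 to the minuend bit.

```python
# Full-subtractor by arithmetic: a borrow is needed when x - y - z < 0,
# and the borrowed 1 adds 2 to the minuend bit.
def full_subtractor(x, y, z):
    diff = x - y - z
    b = 1 if diff < 0 else 0
    d = (diff + 2 * b) % 2      # same value as the exclusive-or x ^ y ^ z
    return d, b

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            d, b = full_subtractor(x, y, z)
            assert d == x ^ y ^ z                      # the D column above
            assert b == ((1 - x) & (y | z)) | (y & z)  # B = x'y + x'z + yz
```

The assertions confirm the simplified expressions derived next from the maps.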
The simplified Boolean functions for the two outputs of the full-subtractor are derived in the maps of Fig. 4-6. The simplified sum of products output functions are:

D = x'y'z + x'yz' + xy'z' + xyz
B = x'y + x'z + yz

Figure 4-6 Maps for full-subtractor

Again we note that the logic function for output D in the full-subtractor is exactly the same as output S in the full-adder. Moreover, the output B resembles the function for C in the full-adder, except that the input variable x is complemented. Because of these similarities, it is possible to convert a full-adder into a full-subtractor by merely complementing input x prior to its application to the gates that form the carry output.

4-5 CODE CONVERSION

The availability of a large variety of codes for the same discrete elements of information results in the use of different codes by different digital systems. It is sometimes necessary to use the output of one system as input to another. A conversion circuit must be inserted between the two systems if each uses different codes for the same information. Thus, a code converter is a circuit that makes the two systems compatible even though each uses a different binary code.

To convert from binary code A to binary code B, the input lines must supply the bit combination of elements as specified by code A and the output lines must generate the corresponding bit combination of code B. A combinational circuit performs this transformation by means of logic gates. The design procedure of code converters will be illustrated by means of a specific example of conversion from the BCD to the excess-3 code. The bit combinations for the BCD and excess-3 codes are listed in Table 1-2, Sec. 1-6. Since each code uses four bits to represent a decimal digit, there must be four input variables and four output variables.
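Since an excess-3 code group is simply the binary value of the decimal digit plus 3, the converter's truth table can be generated rather than transcribed; a sketch of our own, not a circuit from the text:

```python
# Excess-3 = BCD digit + 3; the four output bits w, x, y, z are the bits
# of that sum.  The six input combinations above 9 never occur, so they
# are don't-care conditions.
def bcd_to_excess3(a, b, c, d):
    digit = 8 * a + 4 * b + 2 * c + d
    if digit > 9:
        raise ValueError("not a BCD digit: a don't-care combination")
    e = digit + 3
    return (e >> 3) & 1, (e >> 2) & 1, (e >> 1) & 1, e & 1

print(bcd_to_excess3(1, 0, 0, 1))   # BCD 9 -> (1, 1, 0, 0)
```

Each row produced this way matches a row of the truth table derived next.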
Let us designate the four input binary variables by the symbols A, B, C, D, and the four output variables by w, x, y, z. The truth table relating the input-output variables is shown in Table 4-1.

Table 4-1 Truth Table for Code Conversion Example

Input BCD    Output Excess-3 Code
A B C D      w x y z
0 0 0 0      0 0 1 1
0 0 0 1      0 1 0 0
0 0 1 0      0 1 0 1
0 0 1 1      0 1 1 0
0 1 0 0      0 1 1 1
0 1 0 1      1 0 0 0
0 1 1 0      1 0 0 1
0 1 1 1      1 0 1 0
1 0 0 0      1 0 1 1
1 0 0 1      1 1 0 0

The bit combinations for the inputs and their corresponding outputs are obtained directly from Table 1-2. We note that four binary variables may have 16 bit combinations, only 10 of which are listed in the truth table. The six bit combinations not listed for the input variables are don't-care combinations. Since they will never occur, we are at liberty to assign to the output variables either a 1 or a 0, whichever gives a simpler circuit.

The maps in Fig. 4-7 are drawn to obtain a simplified Boolean function for each output. Each of the four maps of Fig. 4-7 represents one of the four outputs of this circuit as a function of the four input variables. The 1's marked inside the squares are obtained from the minterms that make the output equal to 1. The 1's are obtained from the truth table by going over the output columns one at a time. For example, the column under output z has five 1's; therefore, the map for z must have five 1's, each being in a square corresponding to the minterm that makes z equal to 1. The six don't-care combinations are marked by X's. One possible way of simplifying the functions in sum of products is listed under the map of each variable.

A two-level logic diagram may be obtained directly from the Boolean expressions derived by the maps. There are various other possibilities for a logic diagram that implements this circuit. The expressions obtained in Fig.
4-7 may be manipulated algebraically for the purpose of using common gates for two or more outputs. This manipulation, shown below, illustrates the flexibility obtained with multiple output systems when implemented with three or more levels of gates.

z = D'
y = CD + C'D' = CD + (C + D)'
x = B'C + B'D + BC'D' = B'(C + D) + B(C + D)'
w = A + BC + BD = A + B(C + D)

Figure 4-7 Maps for BCD to excess-3 code converter

The logic diagram that implements the above expressions is shown in Fig. 4-8. In it we see that the OR gate whose output is C + D has been used to implement partially each of three outputs. Not counting input inverters, the implementation in sum of products requires seven AND gates and three OR gates. The implementation of Fig. 4-8 requires four AND gates, four OR gates, and one inverter. If only the normal inputs are available, the first implementation will require inverters for variables B, C, and D, while the second implementation requires inverters for variables B and D.

4-6 COMPARATORS

A comparator is a combinational circuit that compares two numbers A and B and determines their relative magnitude. The outcome of the comparison is displayed in three outputs that indicate whether A > B, A = B, or A < B. Table 4-2 gives the truth table for a comparator of two two-bit numbers A = A1A0 and B = B1B0. The "equal" output y is 1 for the four input combinations with A1A0 = B1B0, and the "greater than" output x is 1 for the six input combinations with A1A0 > B1B0. The remaining six input combinations, for the "less than" output z, are those with A1A0 < B1B0. The three maps for the outputs are simplified with the aid of the truth table.

Table 4-2 Truth Table for Comparator

For numbers with more bits, it is better to derive the outputs from a digit-by-digit procedure. Starting from the most significant position, pairs of corresponding digits are compared; if they are equal, the next lower significant pair is inspected. At the first unequal pair, if the digit of A is 1 and the digit of B is 0, then A > B; otherwise A < B. In the case of binary digits, this sequential comparison can be logically expressed by the following Boolean functions:

(A > B) = A2B2' + A1B1'(A2B2 + A2'B2') + A0B0'(A2B2 + A2'B2')(A1B1 + A1'B1')

(A > B) and (A < B) are binary output variables which are equal to logic-1 when A > B or A < B, respectively. In words, the function (A > B) is equal to logic-1
if ‘Aq = 1 and By = 0, or if Ay = 1 and By = 0 (provided that Az = B2) or if Ag = 1 and By = 0 (provided that both Az = Bz and Ay = By). The gate implementation of the three outputs just derived is simpler than it seems because the “unequal” outputs can use portions of the outputs generated by the “equality” output. The logic diagram for a comparator of two three-bit numbers is shown in Fig. 4-10. The procedure for obtaining comparator circuits for binary numbers with more than three bits is obvious. Comparators of decimal numbers will use the same algorithm except that two four-bit numbers must be compared for each decimal digit. 47 ANALYSIS PROCEDURE ‘The design of a combinational circuit starts from the verbal specifications of fa required function and culminates with a set of output Boolean functions or a logic diagram. The analysis of a combinational circuit is a somewhat reverse process: it starts with a given logic diagram and culminates with a set of Boolean functions, a truth table, or a verbal explanation of the circuit operation. If the logic diagram to be analyzed is accompanied with a function name or with an explanation of what it is assumed to accompli then the analysis problem reduces to a verification of the stated function. The first step in the analysis is to make sure that the given circuit is combinational and not sequential. The diagram of a combinational circuit has logic gates with no feedback paths or memory elements. A feedback path is a connection from the output of one gate to the input of a second gate that forms part of the input to the first gate. Feedback paths or memory elements in a digital circuit define a sequential circuit and must be analyzed according to procedures outline in Ch. 6. 104 COMBINATIONAL LOGIC Chap. 4 Lr) 4 34— B HH) ES Sen re pre) 5 HE (Sf) ——-« =B) Figure 4-10 Logic diagram of a 3-bit comparator Once the logic diagram is verified as a combination! 
circuit, one can proceed to obtain the output Boolean functions and/or the truth table. If the circuit is accompanied with a verbal explanation of its function, then the Boolean functions or the truth table is sufficient for verification. If the function of the circuit is under investigation, then it is necessary to interpret the operation of the circuit from the derived truth table. The ‘Sec. 4.7 ANALYSISPROCEDURE 105 success of such investigation is enhanced if one has previous experience and familiarity with a wide variety of digital circuits. The ability to correlate a truth table with an information processing task is an art one acquires with experience. To obtain the output Boolean functions from a logic diagram, proceed as follows: 1. Label with arbitrary symbols all gate outputs that are a function of the input variables. Obtain the Boolean functions for each gate. 2. Label with other arbitrary symbols those gates which are a function of input variables and/or previously labeled gates. Find the Boolean functions for these gates. 3. Repeat the process outlined in step 2 until the outputs of the circuit are obtained. 4, By repeated substitution of previously defined functions, obtain the output Boolean functions in terms of input variables only. The analysis of the combinational circuit in Fig. 4-11 will illustrate the proposed procedure. We note that the circuit has three binary inputs, 4, B, and C, and two binary outputs, F; and Fz. The outputs of various gates are labeled with intermediate symbols. The output of gates that are a function of input variables only are Fz, T,, and T;. The Boolean functions for these three outputs are: F, = AB + AC + BC T,=A+B4C T, = ABC Next we consider outputs of gates which are a function of already defined symbols. Ts = Fal; Fi =T; +T, The output Boolean function Fz expressed above is already given as a function of the inputs only. 
To obtain F1 as a function of A, B, and C, we form a series of substitutions as follows:

F1 = T3 + T2 = F2'T1 + ABC
   = (AB + AC + BC)'(A + B + C) + ABC
   = (A' + B')(A' + C')(B' + C')(A + B + C) + ABC
   = (A' + B'C')(AB' + AC' + BC' + B'C) + ABC
   = A'BC' + A'B'C + AB'C' + ABC

If we want to pursue the investigation and determine the information transformation task achieved by this circuit, we can derive the truth table directly from the Boolean functions and try to recognize a familiar operation. For this example, we note that the circuit is a full-adder, with F1 being the sum output and F2 the carry output. A, B, and C are the three inputs added arithmetically.

[Figure 4-11: Logic diagram for analysis example]

The derivation of the truth table for the circuit is a straightforward process once the output Boolean functions are known. To obtain the truth table directly from the logic diagram without going through the derivation of the Boolean functions, proceed as follows:

1. Determine the number of input variables to the circuit. For n inputs, form the 2^n possible input combinations of 1's and 0's by listing the binary numbers from 0 to 2^n - 1.
2. Label the outputs of selected gates with arbitrary symbols.
3. Obtain the truth table for the outputs of those gates that are a function of the input variables only.
4. Proceed to obtain the truth table for the outputs of those gates that are a function of previously defined values until the columns for all outputs are determined.

This process can be illustrated using the circuit of Fig. 4-11. In Table 4-3 we form the eight possible combinations for the three input variables:

Table 4-3 Truth Table for Logic Diagram of Figure 4-11

A B C | F2 | F2' | T1 | T2 | T3 | F1
0 0 0 |  0 |  1  |  0 |  0 |  0 |  0
0 0 1 |  0 |  1  |  1 |  0 |  1 |  1
0 1 0 |  0 |  1  |  1 |  0 |  1 |  1
0 1 1 |  1 |  0  |  1 |  0 |  0 |  0
1 0 0 |  0 |  1  |  1 |  0 |  1 |  1
1 0 1 |  1 |  0  |  1 |  0 |  0 |  0
1 1 0 |  1 |  0  |  1 |  0 |  0 |  0
1 1 1 |  1 |  0  |  1 |  1 |  0 |  1

The truth table for F2 is determined directly from the values of A, B, and C, with F2 equal to 1 for any combination that has two or three inputs equal to 1.
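The derivation above can be checked by exhaustive enumeration. The following Python sketch (an illustration added here, not part of the original text) evaluates the labeled gate outputs for all eight input combinations and confirms that F1 and F2 behave as the sum and carry of a full-adder:

```python
# Exhaustive check of the analysis of Fig. 4-11: for all eight input
# combinations the derived functions give the sum (F1) and carry (F2)
# of the arithmetic sum A + B + C.
for A in (0, 1):
    for B in (0, 1):
        for C in (0, 1):
            F2 = (A & B) | (A & C) | (B & C)   # F2 = AB + AC + BC
            T1 = A | B | C                     # T1 = A + B + C
            T2 = A & B & C                     # T2 = ABC
            T3 = (1 - F2) & T1                 # T3 = F2'T1
            F1 = T2 | T3                       # F1 = T2 + T3
            assert F1 == (A + B + C) % 2       # sum bit of a full-adder
            assert F2 == (A + B + C) // 2      # carry bit of a full-adder
```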
The truth table for F2' is the complement of that for F2. The truth tables for T1 and T2 are the OR and AND functions of the input variables, respectively. The values for T3 are derived from T1 and F2': T3 is equal to 1 when both T1 and F2' are equal to 1, and to 0 otherwise. Finally, F1 is equal to 1 for those combinations in which either T2 or T3 or both are equal to 1. Inspection of the truth table combinations for A, B, C, F1, and F2 of Table 4-3 shows that it is identical to the truth table of the full-adder listed in Sec. 4-3 for x, y, z, S, and C, respectively.

Consider now a combinational circuit that has don't-care input combinations. When such a circuit is designed, the don't-care combinations are marked by X's in the map and assigned an output of either a 1 or a 0, whichever is more convenient for the simplification of the output Boolean function. When a circuit with don't-care combinations is being analyzed, the situation is entirely different. Even though we assume that the don't-care input combinations will never occur, the fact of the matter is that if any one of these combinations is applied to the inputs (intentionally or in error), a binary output will be present. The value of the output will depend on the choices for the X's taken during the design. Part of the analysis of such a circuit may involve the determination of the output values for the don't-care input combinations. As an example, consider the BCD to excess-3 code converter designed in Sec. 4-5. The outputs obtained when the six unused combinations of the BCD code are applied to the inputs are:

Unused BCD Inputs    Outputs
A B C D              w x y z
1 0 1 0              1 1 0 1
1 0 1 1              1 1 1 0
1 1 0 0              1 1 1 1
1 1 0 1              1 0 0 0
1 1 1 0              1 0 0 1
1 1 1 1              1 0 1 0

These outputs may be derived by means of the truth table analysis method outlined in this section. In this particular case, the outputs may also be obtained directly from the maps of Fig. 4-7.
From inspection of the maps, we determine whether the X's in the corresponding minterm squares for each output have been included with the 1's or the 0's. For example, the square for minterm m10 (1010) has been included with the 1's for outputs w, x, and z, but not for y. Therefore, the outputs for m10 are wxyz = 1101, as listed in the above table. We also note that the first three outputs in the table have no meaning in the excess-3 code, while the last three outputs correspond to decimals 5, 6, and 7, respectively. This coincidence is entirely a function of the choices for the X's taken during the design.

4-8 DECODERS AND ENCODERS

Discrete elements of information are represented in digital systems by binary numbers or binary codes. Consider, for example, a binary code or a binary number of n bits capable of representing m ≤ 2^n discrete elements of information. A decoder is a combinational circuit that converts a binary code of n variables into m output lines, one for each discrete element of information. An encoder is a combinational circuit that accepts m input lines, one for each element of information, and generates a binary code on n output lines. A decoder is a construction of AND gates with n inputs and 2^n (or fewer) outputs. An encoder has OR gates with 2^n (or fewer) inputs and n outputs. A binary code of n bits with don't-care combinations represents, by definition, fewer than 2^n elements of information. Its decoder or encoder will use fewer outputs or inputs, respectively.

As an example, consider the binary to octal decoder circuit of Fig. 4-12. The three inputs (x, y, and z) represent a binary number of three bits. The eight outputs (D0 to D7) represent the octal digits 0 to 7. The decoder consists of a group of AND gates that decode the input binary number. It supplies as many outputs as there are possible input binary number combinations. In this particular example, the elements of information are the eight octal digits.
The code for this discrete information consists of the binary numbers represented by three bits. The operation of the decoder may be further clarified from its input-output relations shown in Table 4-4. Observe that the output variables are mutually exclusive because only one output can be equal to 1 at any one time. The output line whose value is equal to 1 represents the octal digit equivalent of the binary number on the input lines.

[Figure 4-12: Binary to octal decoder, with D0 = x'y'z', D1 = x'y'z, D2 = x'yz', D3 = x'yz, D4 = xy'z', D5 = xy'z, D6 = xyz', D7 = xyz]

Table 4-4 Truth Table of Binary to Octal Decoder

Inputs   Outputs
x y z    D0 D1 D2 D3 D4 D5 D6 D7
0 0 0    1  0  0  0  0  0  0  0
0 0 1    0  1  0  0  0  0  0  0
0 1 0    0  0  1  0  0  0  0  0
0 1 1    0  0  0  1  0  0  0  0
1 0 0    0  0  0  0  1  0  0  0
1 0 1    0  0  0  0  0  1  0  0
1 1 0    0  0  0  0  0  0  1  0
1 1 1    0  0  0  0  0  0  0  1

An example of an encoder is shown in Fig. 4-13. The octal to binary encoder consists of eight inputs, one for each of the eight digits, and three outputs that generate the corresponding binary number. It is constructed with OR gates:

x = D4 + D5 + D6 + D7
y = D2 + D3 + D6 + D7
z = D1 + D3 + D5 + D7

Its truth table is shown in Table 4-5. It is assumed that only one input line can be equal to 1 at any time; otherwise the circuit has no meaning.

Table 4-5 Truth Table of Octal to Binary Encoder

Inputs                    Outputs
D0 D1 D2 D3 D4 D5 D6 D7   x y z
1  0  0  0  0  0  0  0    0 0 0
0  1  0  0  0  0  0  0    0 0 1
0  0  1  0  0  0  0  0    0 1 0
0  0  0  1  0  0  0  0    0 1 1
0  0  0  0  1  0  0  0    1 0 0
0  0  0  0  0  1  0  0    1 0 1
0  0  0  0  0  0  1  0    1 1 0
0  0  0  0  0  0  0  1    1 1 1

Note that the circuit has eight inputs, which can give 2^8 possible input combinations, but only eight of these combinations have any meaning. The other 2^8 - 8 input combinations are don't-care conditions.

A BCD to decimal decoder is shown in Fig. 4-14. The elements of information in this case are the 10 decimal digits represented by the BCD code. The function of the decoder is to supply one output for each decimal digit. Each output is equal to 1 only when the input variables constitute a bit combination corresponding to the decimal digit as represented in BCD.
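The octal decoder and encoder just described can be simulated as truth functions. This Python sketch is added here for illustration (the helper names decode and encode are ours, not the text's); it checks that the two circuits are inverses for the eight valid one-hot combinations:

```python
# The binary-to-octal decoder of Fig. 4-12 and the octal-to-binary
# encoder of Fig. 4-13, modeled as truth functions on 0/1 values.
def decode(x, y, z):
    """AND-gate decoder: output Dk is the k-th minterm of x, y, z."""
    n = 4 * x + 2 * y + z
    return [1 if k == n else 0 for k in range(8)]

def encode(D):
    """OR-gate encoder: x = D4+D5+D6+D7, y = D2+D3+D6+D7, z = D1+D3+D5+D7."""
    x = D[4] | D[5] | D[6] | D[7]
    y = D[2] | D[3] | D[6] | D[7]
    z = D[1] | D[3] | D[5] | D[7]
    return (x, y, z)

# For the eight valid (one-hot) combinations the two circuits are inverses.
for n in range(8):
    x, y, z = (n >> 2) & 1, (n >> 1) & 1, n & 1
    assert encode(decode(x, y, z)) == (x, y, z)
```

The other 2^8 - 8 encoder input combinations are don't-cares, which is why the round-trip check is restricted to one-hot inputs.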
The simplified output functions, making use of the don't-care conditions, are:

D0 = w'x'y'z'    D5 = xy'z
D1 = w'x'y'z     D6 = xyz'
D2 = x'yz'       D7 = xyz
D3 = x'yz        D8 = wz'
D4 = xy'z'       D9 = wz

[Figure 4-14: BCD to decimal decoder]

Table 4-6 shows the input-output relations of the decoder. Only the first 10 input combinations are valid code assignments; the last six are not used and are, by definition, don't-care conditions. It is obvious that the don't-care conditions were used to simplify the output functions because, otherwise, each gate would have required four inputs. For the benefit of a complete analysis, Table 4-6 lists the outputs for the six unused combinations of the BCD code, but these combinations obviously have no meaning in this circuit.

Table 4-6 Truth Table of BCD to Decimal Decoder of Figure 4-14

Inputs      Outputs
w x y z     D0 D1 D2 D3 D4 D5 D6 D7 D8 D9
0 0 0 0     1  0  0  0  0  0  0  0  0  0
0 0 0 1     0  1  0  0  0  0  0  0  0  0
(and so on for the remaining combinations)

Decoders and encoders have many applications in digital systems. Decoders are useful for displaying discrete elements of information stored in registers. For example, a decimal digit represented in BCD and stored in a four-cell register may be displayed with the help of a BCD to decimal decoder, with the outputs of the four binary cells feeding the inputs of the decoder and the outputs of the decoder driving 10 indicator lights. The indicator lights may be in the form of display digits, such that one decimal digit lights up when the corresponding decoder output is logic-1. Decoder circuits are also useful in applications where the contents of registers need to be determined for the purpose of decision making. Another application of decoder circuits is in the generation of timing and sequencing signals for control purposes. These applications are introduced in their appropriate places in the following chapters.

Encoder circuits are useful for binary code formation when the discrete elements of information are each available on a single line. An example of an encoder circuit is demonstrated in Fig. 1-2.
The box labeled "control" contains an encoder circuit that accepts an input from each key of the keyboard and generates the corresponding eight-bit code of the letter whose key is struck.

4-9 MULTIPLEXERS AND DEMULTIPLEXERS

Multiplexing means transmitting a large number of information units over a smaller number of channels or lines. Demultiplexing is the reverse operation; it denotes receiving information from a small number of channels and distributing it over a larger number of destinations. A digital multiplexer is a combinational circuit that selects data from 2^n input lines and directs it to a single output line. The selection of input-output transfer paths is controlled by a set of input selection lines.

An example of a multiplexer is shown in Fig. 4-15. The eight input lines I0 to I7 are applied to eight AND gates whose outputs go to a single OR gate. Only one input line has a path to the output at any particular time. The selection lines S0, S1, and S2 determine which input is selected to have a direct path to the output. The eight AND gates resemble a decoder circuit, and indeed they decode the three input selection lines. The output Boolean function of the eight-input multiplexer shows clearly how the selection is accomplished:

Y = I0S2'S1'S0' + I1S2'S1'S0 + I2S2'S1S0' + I3S2'S1S0 + I4S2S1'S0' + I5S2S1'S0 + I6S2S1S0' + I7S2S1S0

[Figure 4-15: Eight-input digital multiplexer]

Figure 4-16 is an example of a multiplexer that selects one of two data inputs A and B, each input consisting of three bits, A2, A1, A0 and B2, B1, B0. A one-bit selection line determines which input, A or B, is to be applied to the outputs. The Boolean functions for the outputs are:

Y0 = A0S' + B0S
Y1 = A1S' + B1S
Y2 = A2S' + B2S

[Figure 4-16: Two-input, 3-bit digital multiplexer]

In general, an m-input, k-bit multiplexer requires n selection lines (with m =
2^n) for decoding the input data. There are k outputs, each with an OR gate, and mk inputs, each with an AND gate. The decoding scheme is repeated k times.

An example of a demultiplexer is shown in Fig. 4-17. A single input line is steered to any of four identical outputs under control of two selection lines. It consists of four three-input AND gates, each receiving the data input along with one of the four possible combinations of the selection variables:

Y0 = I(S1'S0')
Y1 = I(S1'S0)
Y2 = I(S1S0')
Y3 = I(S1S0)

The single input variable has a path to all four outputs, but the information is directed to the one output specified by the two selection lines. A demultiplexer can function as a decoder circuit if the single input is connected permanently to a signal that corresponds to logic-1. Multiplexer and demultiplexer devices, used in conjunction, are ideal in systems where it is desired to multiplex many data lines, transmit over one line, and convert back to the original data form at the receiving end for processing.

[Figure 4-17: Four-output digital demultiplexer]

An interesting application of a multiplexer circuit is its use as a universal logic element; i.e., a circuit that implements any Boolean function of n variables. A multiplexer circuit with k selection lines and 2^k input lines is a universal logic element for Boolean functions of k + 1 variables. For example, the circuit of Fig. 4-15 with three selection lines S2, S1, S0 and eight input lines I0 through I7 can be used to implement any Boolean function of four variables Y(I, S2, S1, S0), where S2, S1, and S0 are three of the input variables and I is the fourth input variable. By expanding the function into its sum of minterms, it is possible to determine values for inputs I0 through I7. These values are either 0 or 1 or I or I'. Inputs 0 and 1 are fixed logic signals. Inputs I and I' are the normal and complement values of the fourth variable.
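This use of a multiplexer as a universal logic element can be sketched in Python (an added illustration; the helper name mux8 is ours). The multiplexer is simulated as a table lookup, and the sum-of-minterms example worked in the text, Y(I, S2, S1, S0) = Σ(0, 4, 8, 13), is verified against the input values the procedure produces:

```python
# An eight-input multiplexer used as a universal logic element for
# four variables: Y = sum over j of Ij * (minterm j of S2, S1, S0).
def mux8(I, s2, s1, s0):
    return I[4 * s2 + 2 * s1 + s0]

# For Y(I, S2, S1, S0) = sum of minterms (0, 4, 8, 13), the procedure
# yields I0 = 1, I4 = I', I5 = I, and all remaining inputs 0.
minterms = {0, 4, 8, 13}
for m in range(16):
    i, s2, s1, s0 = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    inputs = [1, 0, 0, 0, 1 - i, i, 0, 0]   # I0=1, I4=I', I5=I, others 0
    assert mux8(inputs, s2, s1, s0) == (1 if m in minterms else 0)
```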
As an example, consider the following Boolean function of four variables expressed as a sum of minterms:

Y(I, S2, S1, S0) = Σ(0, 4, 8, 13)

To implement this function with the multiplexer circuit of Fig. 4-15 we need to determine values for I0 through I7. The function is first expressed in terms of the four variables:

Y = I'S2'S1'S0' + I'S2S1'S0' + IS2'S1'S0' + IS2S1'S0

Two minterms are combined if and only if they cause the elimination of the variable I. The first and third minterms in this example are combined, and Y can be expressed as follows:

Y = S2'S1'S0' + I'S2S1'S0' + IS2S1'S0

Comparing this expression with the Boolean function for the multiplexer given above, we determine the following input values:

I0 = 1    I4 = I'    I5 = I    I1 = I2 = I3 = I6 = I7 = 0

This example demonstrates the procedure for determining the values of I0 to I7 from the minterm expression of the Boolean function.

PROBLEMS

4-1. A combinational circuit has four inputs and one output. The output is equal to 1 when: (1) all the inputs are equal to 1, or (2) none of the inputs are equal to 1, or (3) an odd number of inputs are equal to 1.
(a) Obtain the truth table.
(b) Find the simplified output function in sum of products.
(c) Find the simplified output function in product of sums.
(d) Draw the two logic diagrams.

4-2. A three-digit binary number is represented by the Boolean variables w, x, y, and z. w represents the sign of the number, so that w = 1 if the number is negative and w = 0 if the number is non-negative (positive or zero). x, y, and z represent the magnitude of the number, with z being the least significant digit. Design a combinational circuit that fulfills the following requirements: (1) if the input number is non-negative, the output number is equal to the input number minus two; (2) if the input number is negative, the output number is equal to the input number plus two.
4-3. Design a combinational circuit that accepts a three-bit number and generates an output binary number equal to the square of the input number.

4-4. It is necessary to multiply two binary numbers, each two bits long, in order to form their product in binary. Let the two numbers be represented by a1, a0 and b1, b0, where subscript 0 denotes the least significant bit.
(a) Determine the number of output lines required.
(b) Find the simplified Boolean expressions for each output.

4-5. Repeat Prob. 4-4 to form the sum (instead of the product) of the two binary numbers.

4-6. Obtain the simplified Boolean functions of the full-adder and full-subtractor in product of sums and draw the logic diagrams.

4-7. The half-adder is a circuit that adds two bits. The full-adder is a circuit that adds three bits. Now design a circuit that adds four bits. Determine the number of outputs necessary to form the sum and carries and find their simplified Boolean functions.

4-8. Design a combinational circuit with four input lines that represent a decimal digit in BCD and four output lines that generate the 9's complement of the input digit.

4-9. Design a combinational circuit whose input is a four-bit number and whose output is the 2's complement of the input number.

4-10. Design a combinational circuit that multiplies by 5 an input decimal digit represented in BCD. The output is also in BCD. Show that the outputs can be obtained from the input lines without using any logic gates.

4-11. Design a combinational circuit that detects an error in the representation of a decimal digit in BCD. In other words, obtain a logic diagram whose output is logic-1 when the inputs contain an unused combination of the code.

4-12. Implement a full-subtractor with two half-subtractors and an OR gate.

4-13. Show how a full-adder can be converted to a full-subtractor with the addition of one inverter circuit.

4-14. Design a combinational circuit that converts a decimal digit from the 8, 4, -2, -1 code to BCD.
4-15. Design a combinational circuit that converts a decimal digit from the 2, 4, 2, 1 code to the 8, 4, -2, -1 code.

4-16. Obtain the logic diagram that converts a four-digit binary number to a decimal number in BCD. Note that two decimal digits are needed since the binary numbers range from 0 to 15.

4-17. Obtain the logic diagram of a circuit that compares two four-bit numbers. Use the algorithm given in Sec. 4-6.

4-18. Show the external connections between two three-bit comparators (Fig. 4-10) that will form a comparator circuit for two numbers of five bits each.

4-19. Analyze the two-output combinational circuit shown in Fig. P4-19. Obtain the Boolean functions for the two outputs and explain the circuit operation.

4-20. Derive the truth table of the circuit shown in Fig. P4-19.

4-21. Analyze the circuit shown in Fig. P4-21 and explain the circuit operation.

4-22. The circuit of Fig. P4-21 establishes a given relation between the inputs and outputs depending on the value of the selection lines S1 and S0. Extend the circuit to seven outputs; that is, add outputs Y4, Y5, and Y6 and the needed gates to accomplish an equivalent relation.

4-23. Obtain the logic diagram of an excess-3 to decimal decoder.

4-24. A BCD to seven-segment decoder is a combinational circuit that accepts a decimal digit in BCD and generates the appropriate outputs for the selection of segments in a display indicator used for displaying the decimal digit. The seven outputs of the decoder (a, b, c, d, e, f, g) select the corresponding segments in the display, as shown in Fig. P4-24(a). The numeric designation chosen to represent the decimal digit is shown in Fig. P4-24(b).

[Fig. P4-24: (a) Segment designation; (b) Numerical designation for display]

(a) Design a seven-segment decoder using all don't-care conditions.
(b) Show the resulting displays generated when the unused bit combinations of the BCD code are applied to the inputs.
(c) Redesign the decoder so that unused combinations give meaningless displays.

4-25. Obtain the logic diagram of a decimal to BCD encoder.

4-26. Design an encoder whose inputs are the letters A to J and whose outputs represent the corresponding character in the six-bit internal code given in Table 1-5.

4-27. Six input lines must be decoded to obtain 64 output lines. AND gates are available with up to six inputs maximum. Each output of an AND gate can be connected to no more than 10 other AND gate inputs. Each external input can be connected to an inverter and no more than 10 AND gate inputs. Each inverter output can be connected to no more than 10 AND gate inputs. Determine a gate configuration for the decoder.

4-28. Show the logic diagram of a three-input, four-bit digital multiplexer.

4-29. Show the logic diagram of an eight-output digital demultiplexer. Show how this circuit in conjunction with the multiplexer of Fig. 4-15 can be used to multiplex eight data lines, transmit over one line, and convert back to the original lines at the receiving end.

4-30. Implement the following Boolean function with an eight-input digital multiplexer:

Y(I, S2, S1, S0) = Σ(2, 4, 7, 10, 12, 14)

4-31. Implement a full-adder circuit with two four-input digital multiplexers.

5 GATE IMPLEMENTATION

5-1 INTEGRATED CIRCUITS

An integrated circuit (IC) is a small silicon crystal, called a "chip," containing electronic components such as transistors, diodes, resistors, and capacitors. The various components are interconnected inside the chip to form an electronic circuit. The chip is mounted in a metal or plastic package and connections are welded to external pins. Integrated digital circuits with several logic gates are available in small packages like those shown in Fig. 5-1.
Integrated circuits differ from conventional circuits in that individual components cannot be separated or disconnected and the circuit inside the package is accessed only through the external pins. The benefits derived from ICs are: (1) substantial reduction in size, (2) substantial reduction in cost, (3) high reliability against failures, (4) increase in operating speed, and (5) reduction of externally wired connections.

As the technology of ICs has improved, the number of devices and components which can be put on a single silicon chip has increased considerably. The differentiation between those chips that have a few internal gates and those having tens or hundreds of gates is made by a customary reference to a package as being either a small-, medium-, or large-scale integration device. Several logic gates in a single package make it a small-scale integration (SSI) device. To qualify as medium-scale (MSI), a device must perform a complete logic function and have a complexity of approximately 10 gates or more. A large-scale integration (LSI) device performs a logic function with more than a hundred gates.

[Figure 5-1: Integrated circuit packages, including the flat package]

Examples of MSI functions are those introduced in Ch. 4: adders, subtractors, code converters, comparators, decoders, encoders, multiplexers, and demultiplexers. These functions are used extensively in the design of digital systems and are classified as standard items. These and similar digital functions are available in one IC package like the one shown in Fig. 5-1. Digital integrated circuits in an MSI package provide the logic designer with standard functional building blocks that can be very efficiently connected to satisfy a large variety of logic system requirements.

An example of an LSI function is the general purpose accumulator register introduced in Ch. 9.
LSI devices provide greater circuit complexity and may, at times, encompass an entire digital system in one chip. A digital computer may be constructed with only a few LSI devices to provide an extremely efficient method of packaging.

MSI and LSI devices provide a considerable decrease in package count over the same design using individual gates from SSI devices. This reduction in package count is significantly advantageous because the reliability of a system is a function of the number of packages used. In addition, fewer packages also mean fewer connections and less wiring. Higher speed of operation is achieved since interpackage delays are eliminated. Other advantages of MSI and LSI are reduction in power consumption, improvement of noise immunity, and a considerable savings in cost.

Logic diagrams of digital systems considered throughout the book are shown in detail down to the individual gates and their interconnections. Such logic diagrams are useful for demonstrating the logical construction of a particular function. However, it must be realized that, in practice, the function may be obtained from an MSI or LSI device, and the user has access only to external inputs and outputs, not to the inputs or outputs of intermediate gates. For example, a designer who wants to incorporate a decoder in his system is more likely to choose such a function from an available MSI integrated circuit instead of designing an individual gate structure.

5-2 DIGITAL LOGIC GATES

In Sec. 2-6 we defined eight binary operators and gave them the following characteristic names: AND, OR, inhibition, implication, exclusive-or, equivalence, NOR, and NAND. Each operator defines a logical function of two variables. The AND and OR operators, together with the unary NOT operator, can express any Boolean function. For this reason, logic gates that perform these functions have been employed to implement Boolean functions.
The possibility of constructing logic gates for the other six binary operators is of practical interest. Factors to be weighed when considering the construction of other types of logic gates are: (1) the feasibility and economy of producing the gate with physical components, (2) the possibility of extending the gate to more than two inputs, (3) the basic properties of the binary operator, such as commutativity and associativity, and (4) the ability of the gate to implement Boolean functions alone or in conjunction with other gates.

The binary operators "inhibition" and "implication" are not commutative and are thus impractical for use as standard logic gates. The exclusive-or and equivalence operations have many excellent characteristics as candidates for logic gates but are expensive to construct with physical components. They are available as standard logic gates in IC packages but are usually constructed internally with other standard gates (see Fig. 5-28). The NAND and NOR functions are extensively used as standard logic gates and are in fact more popular than the AND and OR gates. This is because NAND and NOR gates are easily constructed with transistor circuits and because Boolean functions can be easily implemented with them.

The block diagram symbols for the exclusive-or and equivalence gates are shown in Fig. 5-2. The symbol for the exclusive-or gate is similar to that of the OR gate except for the curved line on the input side. Since equivalence is the complement of exclusive-or, it is customary to use identical symbols for both except for the small circle in the output of the equivalence gate. Henceforth, a small circle in the output (or input) of a gate will designate complementation. The NAND function is the complement of the AND function. For this reason we use identical symbols, with the exception of the small circle in the output of the NAND gate, as shown in Fig. 5-3(a).
Similarly, the NOR and OR functions, being the complements of each other, use the same symbol except for the circle in the output of the NOR gate, as shown in Fig. 5-3(b). It should be clear now why the symbol for the inverter circuit has been drawn with a small circle: it differentiates it from the non-inverting amplifier shown in Fig. 5-4(a). A non-inverting amplifier is used for signal amplitude restoration or for signal amplification when a large amount of power is needed to drive heavy loads.

[Figure 5-2: Symbols for logic gates: (a) exclusive-or, AB' + A'B; (b) equivalence, AB + A'B']
[Figure 5-3: Symbols for logic gates: (a) NAND, (AB)'; (b) NOR, (A + B)']
[Figure 5-4: Symbols for (a) non-inverting amplifier, (b) inverter]

The exclusive-or and equivalence gates are rarely extended to more than two inputs. The NOR and NAND gates can be extended to many inputs provided the definition of the operation is slightly modified. The difficulty is that the NOR and NAND operators are not associative; i.e., (x ↓ y) ↓ z ≠ x ↓ (y ↓ z), as shown below and in Fig. 5-5:

(x ↓ y) ↓ z = [(x + y)' + z]' = (x + y)z' = xz' + yz'
x ↓ (y ↓ z) = [x + (y + z)']' = x'(y + z) = x'y + x'z

To overcome this difficulty, we define the multiple-input NOR (or NAND) gate as a complemented OR (or AND) gate.

[Figure 5-5: Demonstrating the non-associativity of the NOR operator: (x ↓ y) ↓ z ≠ x ↓ (y ↓ z)]

Thus by definition we have:

x ↓ y ↓ z = (x + y + z)'
x ↑ y ↑ z = (xyz)'

The symbols for the three-input gates are shown in Fig. 5-6. NOR and NAND gates are sometimes called OR-invert and AND-invert to conform with the generalized definition. In writing cascaded NOR and NAND operations using the symbolic operators (↓) and (↑), as in Fig. 5-5, one must use the correct parentheses to signify the proper sequence of the operations. The operation signifies a multiple-input gate when no parentheses are included. NAND and NOR gates are discussed in more detail in Secs. 5-5 and 5-6.
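The non-associativity of NOR and the complemented-OR definition of the multiple-input gate can be checked directly. The following Python sketch (added for illustration; the helper names nor and nor3 are ours) enumerates all input combinations:

```python
# The two-input NOR, and the three-input NOR defined as a complemented OR.
def nor(a, b):
    return 1 - (a | b)

def nor3(x, y, z):
    return 1 - (x | y | z)

# NOR is not associative: the two groupings disagree for some inputs.
# For example, x = 1, y = 0, z = 0 gives (x NOR y) NOR z = 1
# but x NOR (y NOR z) = 0.
disagreements = [(x, y, z)
                 for x in (0, 1) for y in (0, 1) for z in (0, 1)
                 if nor(nor(x, y), z) != nor(x, nor(y, z))]
assert (1, 0, 0) in disagreements
```

This is why a cascade of two-input NOR gates is not equivalent to the three-input gate, and why the multiple-input gate must be defined directly as (x + y + z)'.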
Functions suitable for implementation with exclusive-or and equivalence gates are presented in Sec. 5-8.

[Figure 5-6: Three-input gates: (a) NOR, (x + y + z)'; (b) NAND, (xyz)']

5-3 POSITIVE AND NEGATIVE LOGIC

The relation between a binary signal and a binary variable was discussed in Sec. 1-8. A binary signal exists in one of two values except during transition. One signal value represents logic-1 and the other, logic-0. Since two signal values are assigned to two logic values, there exist two different assignments of signals to logic. Because of the principle of duality of Boolean algebra, an interchange of the signal value assignment results in a dual function implementation. The consequences of an interchange of signal assignment are investigated in this section.

Consider the two values of a binary signal as shown in Fig. 5-7. One value must be higher in amplitude than the other, since the two values must differ in order to be distinguished. Designate the higher amplitude by H and the lower by L. There are then two choices for the signal value assignment. Choosing the high amplitude H to represent logic-1, as shown in Fig. 5-7(a), defines a positive logic system. Choosing the low amplitude L to represent logic-1, as in Fig. 5-7(b), defines a negative logic system. The terms positive and negative are somewhat misleading, since both signal values may be positive or both may be negative. It is not the polarity of the signal that determines the type of logic but the assignment of logic values according to the relative magnitudes of the signals.

[Figure 5-7: Signal amplitude assignment and type of logic: (a) positive logic, H = 1 and L = 0; (b) negative logic, L = 1 and H = 0]

Consider, for example, a two-input AND gate whose truth table is given in Table 5-1(a). Assume that we are operating with a positive logic system, so that logic-1 is equivalent to signal H and logic-0 to signal L. By a direct
substitution, we obtain the input-output signal amplitude relations of the circuit as shown in Table 5-1(b). Similarly, the truth table of a two-input OR gate in Table 5-2(a) translates into the positive logic assignment of Table 5-2(b). It is important to realize that part (b) of the two tables defines the input-output behavior of the physical circuit and that the logic assignment of 1 and 0 is chosen arbitrarily by the user.

[Table 5-1: Positive Logic AND, Negative Logic OR: (a) the AND truth table in logic values; (b) the same behavior in terms of H and L signals; (c) the negative logic interpretation of (b)]
[Table 5-2: Positive Logic OR, Negative Logic AND: (a) the OR truth table in logic values; (b) the same behavior in terms of H and L signals; (c) the negative logic interpretation of (b)]

Therefore, if we take the same piece of hardware whose physical behavior is specified in Table 5-1(b) and employ negative logic, the L signal becomes logic-1 and the H signal logic-0. A direct substitution of logic values for signal values
For this reason, standard functions in MSI devices are sometimes described in tables in terms of H and L signals instead of 1 and 0. The choice of positive or negative logic in many instances has been dictated by the type of transistors employed. Circuits using NPN-type transistors use positive signals and are usually assigned positive logic values, while circuits using PNP-type transistors use negative signals and are usually assigned negative logic values. This is the reason for the terminology of positive and negative logic.

5-4 SIMPLIFICATION CRITERIA

The purpose of Boolean function simplification is to obtain an algebraic expression that, when implemented, results in a low-cost circuit. However, the criteria that determine a low-cost circuit or system must be defined if we are to evaluate the success of the achieved simplification. It has previously been mentioned that criteria for simplification depend on the constraints imposed by the particular application. The subject of switching theory is concerned, among other things, with finding algorithms for simplifying Boolean functions. The best known algorithm for simplifying Boolean functions is the tabulation (or map) method presented in Ch. 3. This method gives an answer in one of the two standard forms and achieves a minimization of the number of terms and the number of literals of the function. Other algorithms may be found in the literature of switching theory, but they usually satisfy a narrow set of criteria and are not general enough for inclusion in a textbook.* The simplest approach available to a logic designer is to use the map method in conjunction with algebraic manipulation and his skill, experience, and ingenuity. The labor involved may be reduced if a computer program for simplifying Boolean functions is available.

*The best reference for articles on switching theory is the IEEE Transactions on Computers.
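Such a program need not be elaborate to be useful. A minimal aid, sketched here in illustrative Python (the expressions are hypothetical examples, not from the text), is an exhaustive check that a hand-simplified expression agrees with the original on all 2^n input combinations:

```python
from itertools import product

def equivalent(f, g, n):
    # Compare two n-variable Boolean functions on all 2**n input combinations.
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n))

# An unsimplified expression and a hand-simplified candidate:
# A·B + A·B'·C + A·C  simplifies to  A·(B + C)
original = lambda a, b, c: (a and b) or (a and not b and c) or (a and c)
simplified = lambda a, b, c: a and (b or c)

print(equivalent(original, simplified, 3))   # prints True: the simplification is valid
```

The check grows exponentially with the number of variables, but for the hand-sized functions of this chapter it is instantaneous and catches algebraic slips.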
Although algorithms that satisfy all possible criteria for simplification are not available, it is important that the designer be familiar with the different requirements, restrictions, and limitations of a practical situation. Criteria for Boolean function simplification may be divided into three broad categories: (1) reduction in the number of components and wires, (2) reduction in propagation delay, and (3) loading constraints. Each of these categories is discussed below.

Number of Components and Wires

It seems reasonable that we would try to minimize the following items if we wanted to reduce the cost of implementing digital systems:

1. The number of logic gates.
2. The number of inputs to a gate.
3. The number of integrated circuit packages.
4. The number of printed-circuit boards (IC packages are sometimes mounted on printed-circuit boards).
5. The number of connecting wires.

The map (or tabulation) method minimizes the number of logic gates and inputs to a gate provided we are satisfied with a standard form implementation. Given two circuits that perform the same function, the one that requires fewer gates is more likely to be preferable because it will cost less. This is not necessarily true when integrated circuits are used. Since several logic gates (or an entire function) are included in a single IC package, it becomes economical to use as many of the gates from an already used package as possible, even if by doing so we increase the total number of gates. Moreover, some of the interconnections among gates in an IC are internal to the chip, and it is more economical to use as many internal interconnections as possible in order to minimize the number of wires between external pins. With integrated circuits, it is not the count of logic gates that determines the cost but the number and type of ICs used to implement the given function.
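For discrete-component designs, items 1 and 2 of the list reduce to a simple count over the expression. The sketch below (illustrative Python; the term representation is an assumption of this example) counts gates and gate inputs for a two-level sum-of-products network:

```python
def sop_cost(terms):
    # Gate and gate-input counts for a two-level sum-of-products network.
    # `terms` is a list of product terms, each a list of literal names.
    # One AND gate per product term with two or more literals,
    # plus one OR gate if there is more than one term.
    and_gates = [t for t in terms if len(t) >= 2]
    gates = len(and_gates) + (1 if len(terms) > 1 else 0)
    # Gate inputs: literals into the AND gates, plus one OR input per term.
    inputs = sum(len(t) for t in and_gates) + (len(terms) if len(terms) > 1 else 0)
    return gates, inputs

# F = AB + CD + E  (the function of Fig. 5-11): two 2-input ANDs
# and one 3-input OR, i.e. 3 gates with 7 gate inputs in all.
print(sop_cost([["A", "B"], ["C", "D"], ["E"]]))   # prints (3, 7)
```

As the paragraph above cautions, such counts rank candidate circuits only for discrete gates; with ICs the package count, not the gate count, governs cost.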
In order to minimize the number of ICs and the number of external interconnections, we must specify the physical layout and construction of the various components. Computer programs that consider the pertinent physical variables are sometimes used for optimizing the cost of digital systems.

Propagation Delay and Logic Levels

The signals through a logic gate take a certain amount of time to propagate from the inputs of the circuit to its output. This interval of time is defined as the propagation delay of the circuit. The signals that travel from the inputs of a combinational circuit to its outputs pass through a series of gates. The sum of the propagation delays through the gates is the total propagation delay of the circuit. A reduced propagation delay means a faster operation. When speed of operation is important, logic gates must have small propagation delays and combinational circuits must have a minimum number of series gates between inputs and outputs.

The input signals in most combinational circuits are applied simultaneously to more than one gate. All those gates that receive their inputs exclusively from external inputs constitute the first logic level of the circuit. Gates that receive at least one input from an output of a first logic level gate constitute the second level, and similarly for third and higher levels. The total propagation delay through the combinational circuit is equal to the propagation delay through a gate times the number of logic levels in the circuit. Thus, a reduction in the number of levels results in a reduction of signal delay and a faster circuit. The reduction of the propagation delay in circuits may be more important than the reduction in the number of gates if speed of operation is a major factor. A Boolean function expressed in sum of products or product of sums can be implemented with two levels of gates provided both normal and complement inputs are available.
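The level of a gate, as defined above, is one more than the deepest level among its inputs, with external inputs at level zero. A small sketch (illustrative Python; the netlist and the 10 ns gate delay are assumptions of this example, not values from the text) computes levels and the resulting total delay:

```python
# Netlist: gate name -> list of its inputs.  External variables are
# lower-case; gate outputs are upper-case.  A hypothetical circuit:
netlist = {
    "G1": ["a", "b"],         # level 1: all inputs external
    "G2": ["c", "d"],         # level 1
    "G3": ["G1", "G2", "e"],  # level 2: fed by first-level gates
    "G4": ["G3", "f"],        # level 3
}

def level(gate):
    # External inputs sit at level 0; a gate is one level past its deepest input.
    if gate not in netlist:
        return 0
    return 1 + max(level(inp) for inp in netlist[gate])

gate_delay_ns = 10  # assume an identical delay for every gate
levels = level("G4")
print(levels, "levels,", levels * gate_delay_ns, "ns total delay")
# prints: 3 levels, 30 ns total delay
```

With equal gate delays, total delay is simply the gate delay times the output's level, which is why reducing levels speeds up the circuit.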
A Boolean function expressed in one of the standard forms is said to provide a two-level implementation. Obviously, if only the normal input variables are available, the inverters that generate the complements constitute a third level in the implementation. Because any function can be written as a sum of products, any function can be implemented with a two-level implementation. The use of two levels provides the least propagation delay and thus the fastest circuit. If speed is unimportant, a multilevel implementation of fewer gates is normally more desirable. General algorithms for obtaining a simplified multilevel implementation are not available. The designer must resort to algebraic manipulation or to any available computer program that searches for a minimum gate implementation under certain given conditions. Examples of multilevel implementation can be found in Ch. 4. These are: the full-adder of Fig. 4-11 (five levels), the code converter of Fig. 4-8 (four levels), and the comparator of Fig. 4-10 (four levels).

Fan-In, Fan-Out, and Loading

Fan-in specifies the number of inputs to a gate. For example, a four-input AND gate has a fan-in of four. For logic gates constructed with discrete components, an increase in fan-in usually means adding more inputs by inserting more diodes or resistors. The fan-in limitation in this case is determined by the ability of the circuit to support additional inputs and still function within the allowable tolerance. The fan-in of IC gates is determined mostly from the limitation of external pins in the package. For example, the dual-in-line package shown in Fig. 5-1 has 14 external pins. One pin is needed for supply voltage and one for ground connection, leaving 12 for inputs and outputs. Suppose it is desirable to include one gate in a package. The largest fan-in that can be achieved is 11 since one pin must be available for the output.
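The pin arithmetic above generalizes: with p usable pins (total pins minus power and ground) shared by g identical gates, each gate can have at most floor(p/g) - 1 inputs, since each gate also needs an output pin. A quick check of this reasoning (illustrative Python):

```python
def max_fan_in(total_pins, gates_per_package):
    # Two pins go to supply voltage and ground; each gate needs one output pin.
    usable = total_pins - 2
    return usable // gates_per_package - 1

# The 14-pin dual-in-line package of Fig. 5-1:
print(max_fan_in(14, 1))  # prints 11: one gate, eleven inputs
print(max_fan_in(14, 2))  # prints 5: two gates, five inputs each
print(max_fan_in(14, 4))  # prints 2: four two-input gates per package
```

The last case matches a familiar commercial arrangement: four two-input gates in one 14-pin package.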
If it is desired to include two gates in one IC package with an equal number of inputs per gate, then the largest fan-in for each is five (two gates, each with five inputs and one output, require a total of 12 pins). When Boolean functions are implemented with IC gates, the fan-in of available gates must be considered for minimizing the number of IC packages.

Fan-out specifies the number of "standard loads" an output of a gate can drive without impairing its normal operation. A standard load may be an input to a gate, to an inverter, or to any other circuit which must be specified in defining fan-out. Sometimes the term loading is used instead of fan-out. This term is derived from the fact that the output of a gate can supply a limited amount of power, above which it ceases to operate properly. Each circuit connected to the output consumes a certain amount of power, so that each additional circuit adds to the "load" of the gate. "Loading rules" are usually listed for a family of standard digital circuits. These rules specify the maximum amount of loading allowed for each output of each circuit. Exceeding the specified maximum load may cause a malfunction because the circuit cannot supply the power demanded from it. The loading (or fan-out) capabilities of a gate must be considered when simplifying Boolean functions. Care must be taken not to develop expressions that result in an overloaded gate. Non-inverting amplifiers (whose symbol is shown in Fig. 5-4) are sometimes employed to provide additional driving capabilities for heavy loads.

5-5 NAND LOGIC

Combinational circuits are more frequently constructed with NAND or NOR gates than with AND, OR, and NOT gates. NAND and NOR circuits are superior to AND and OR gates from the hardware point of view, as they supply outputs that maintain the signal value without loss of amplitude. OR and AND gates sometimes need amplitude restoration after the signal travels through a few levels of gates.
Because of the prominence of NAND and NOR gates in the design of combinational circuits, rules and procedures have been developed for the conversion from Boolean functions given in terms of AND, OR, and NOT into equivalent NAND or NOR logic diagrams. The procedures for NAND logic are presented in this section and for NOR logic in the next section. Since the NOR operation is the dual of the NAND, the rules and procedures for NOR logic are the dual of those for NAND logic.

Universal Gate

The NAND gate is said to be a universal gate because any digital system can be implemented with it. Not only combinational circuits but sequential circuits as well can be constructed with this gate. This is because the flip-flop circuit (the memory element most frequently used in sequential circuits) can be constructed from two NAND gates connected back to back, as shown in Sec. 6-2.

To show that any Boolean function can be implemented with NAND gates, we need only show that the logical operations AND, OR, and NOT can be implemented with NAND gates. The implementation of the AND, OR, and NOT operations with NAND gates is shown in Fig. 5-8. The NOT operation is obtained from a one-input NAND gate, actually another symbol for an inverter circuit. The AND operation requires two NAND gates. The first produces the inverted AND and the second acts as an inverter to obtain the normal output. The OR operation is achieved through a NAND gate with additional inverters in each input.

Figure 5-8 Implementation of NOT, AND, or OR by NAND gates

A convenient way to implement a combinational circuit with NAND gates is to obtain the simplified Boolean functions in terms of AND, OR, and NOT and convert the functions to NAND logic. The conversion of the algebraic expression from AND, OR, and NOT operations to NAND operations is usually quite complicated because it involves a large number of applications of De Morgan's theorem. This difficulty is avoided by the use of simple circuit manipulations and simple rules as outlined below.

Boolean Function Implementation—Block Diagram Method

The implementation of Boolean functions with NAND gates may be obtained by means of a simple block diagram manipulation technique. This method requires that two other logic diagrams be drawn prior to obtaining the NAND logic diagram. Nevertheless, the procedure is very simple and straightforward:

1. From the given algebraic expression, draw the logic diagram with AND, OR, and NOT gates. Assume that both the normal and complement inputs are available.
2. Draw a second logic diagram with the equivalent NAND logic as given in Fig. 5-8 substituted for each AND, OR, and NOT gate.
3. Remove any two cascaded inverters from the diagram since double inversion does not perform a logic function. Remove inverters connected to single external inputs and complement the corresponding input variable.

The new logic diagram obtained is the required NAND gate implementation. This procedure is illustrated in Fig. 5-9 for the function:

F = A(B + CD) + BC'

The AND/OR implementation of this function is drawn in the logic diagram of Fig. 5-9(a). For each AND gate we substitute a NAND gate followed by an inverter; for each OR gate we substitute input inverters followed by a NAND gate. This substitution follows directly from the logic equivalences of Fig. 5-8 and is drawn in the diagram of Fig. 5-9(b). This diagram has seven inverters and five two-input NAND gates listed with numbers inside the gate symbol. Pairs of inverters connected in cascade (from each AND box to each OR box) are removed since they form double inversion. The inverter connected to input B is removed and the input variable is designated by B'. The result is the NAND logic diagram shown in Fig.
5-9(c), with the number inside each symbol identifying the gate from Fig. 5-9(b). This example demonstrates that the number of NAND gates required to implement the Boolean function is equal to the number of AND/OR gates, provided both the normal and complement inputs are available. If only the normal inputs are available, inverters must be used to generate any required complemented inputs.

Figure 5-9 Implementation of F = A(B + CD) + BC' with NAND gates: (a) AND/OR implementation (b) substituting equivalent NAND functions from Fig. 5-8 (c) NAND implementation

A second example of NAND implementation is shown in Fig. 5-10. The Boolean function to be implemented is:

F = (A + B')(CD + E)

The AND/OR implementation is shown in Fig. 5-10(a) and its NAND logic substitution in Fig. 5-10(b). One pair of cascaded inverters may be removed. The three external inputs E, A, and B', which go directly to inverters, are complemented and the corresponding inverters removed. The final NAND gate implementation is in Fig. 5-10(c).

Figure 5-10 Implementation of F = (A + B')(CD + E) with NAND gates

The number of NAND gates for the second example is equal to the number of AND/OR gates plus an additional inverter in the output (NAND gate number 5). In general, the number of NAND gates required to implement a function equals the number of AND/OR gates, except for an occasional inverter. This is true provided both normal and complement inputs are available because the conversion forces certain input variables to be complemented. The block diagram method is somewhat tiresome to use because it requires the drawing of two logic diagrams to obtain the answer in a third.
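The constructions of Fig. 5-8, and the claim that the conversion preserves the function, are easy to verify exhaustively. A sketch in illustrative Python (not part of the text) builds NOT, AND, and OR out of a single NAND primitive and then checks the function of Fig. 5-9 on all sixteen input combinations:

```python
def nand(*inputs):
    # A NAND gate outputs 0 only when every input is 1.
    return 0 if all(inputs) else 1

# The Fig. 5-8 constructions, built from NAND gates only:
def not_(a):     return nand(a)                  # one-input NAND
def and_(a, b):  return nand(nand(a, b))         # NAND followed by inverter
def or_(a, b):   return nand(nand(a), nand(b))   # input inverters, then NAND

# F = A(B + CD) + BC' of Fig. 5-9, using only the NAND-based gates above:
def f(a, b, c, d):
    return or_(and_(a, or_(b, and_(c, d))), and_(b, not_(c)))

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            for d in (0, 1):
                direct = int((a and (b or (c and d))) or (b and not c))
                assert f(a, b, c, d) == direct
print("all 16 input combinations agree")
```

Since AND, OR, and NOT can all be produced this way, any Boolean function can be realized from NAND gates alone, which is the content of the universality claim.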
With some experience, it is possible to reduce the amount of labor by anticipating the pairs of cascaded inverters and the inverters in the inputs. Starting from the procedure just outlined, it is not too difficult to derive general rules for implementing Boolean functions with NAND gates directly from an algebraic expression.* We shall now formulate the rules for conversion when the Boolean functions are expressed in sum of products or product of sums.

Two-Level Implementation—Sum of Products

The derivation of the NAND logic diagram when the Boolean function is expressed in sum of products is simple and direct. The rule for this conversion can be deduced from inspection of Fig. 5-11, where a typical sum of products expression is implemented:

F = AB + CD + E

A sum of products expression is always implemented with a group of AND gates in the first logic level and a single OR gate in the second level, as shown in Fig. 5-11(a). The substitution of the NAND equivalent logic as drawn in Fig. 5-11(b) clearly shows that each first-level AND produces an output inverter and each input of the OR gate produces an input inverter. The pairs of cascaded inverters can be removed as shown in the final diagram of Fig. 5-11(c). A term with one literal is an exception, since the input variable is applied directly to the input of the OR gate. This is clearly shown in the diagram where the variable E, being a term of one literal, is complemented during the conversion. The rule for obtaining the NAND logic diagram directly from a Boolean function expressed in sum of products should be clear from this example:

1. Draw a NAND gate for each AND term of the function that has at least two literals. The inputs to each NAND gate are the literals of the term. This constitutes a group of first-level gates.

*A comprehensive set of rules for both NAND and NOR logic can be found in reference (1) at the end of this chapter.
Figure 5-11 Sum of products implementation with NAND gates: (a) AND/OR (b) substitution of NAND (c) NAND

2. Draw a single NAND gate in the second level with inputs coming from outputs of first-level gates.
3. Any literal that appears as a term by itself is complemented and applied as input to the second-level NAND gate.

Three-Level Implementation—Product of Sums

A Boolean function expressed in product of sums, when implemented with NAND gates, requires three levels of gates, the third and last level being an inverter. The rules of conversion from the algebraic expression to the logic diagram can be deduced from the example shown in Fig. 5-12, where the following function is implemented:

F = (A + B)(C + D)E

A product of sums expression is always implemented with a group of OR gates in the first level and a single AND gate in the second level, as shown in Fig. 5-12(a). The substitution of NAND equivalent logic, as in Fig. 5-12(b), clearly shows that each OR gate produces an input inverter and that the AND gate produces a single output inverter. There are no pairs of inverters that can be removed, but inverters connected to external inputs can be removed provided the input variables are complemented. The literal that forms a single term goes directly to the input of the second level and is not complemented during the conversion. The final NAND logic diagram is shown in Fig. 5-12(c).

Figure 5-12 Product of sums implementation with NAND gates: (a) AND/OR (b) substitution of NAND (c) NAND

From it we can deduce the rule for obtaining the NAND logic diagram directly from a Boolean function expressed in product of sums:

1. Draw a NAND gate for each OR term of the function that has at least two literals. The inputs to each NAND gate are the complements of the literals in the term. This constitutes a group of first-level gates.
2.
Draw a single NAND gate in the second level with inputs coming from outputs of first-level gates.
3. Any literal that appears as a term by itself is applied as input to the second-level NAND gate.
4. Draw a third-level NAND gate with a single input connected to the output of the second-level NAND gate.

In stating the rules for the sum of products and product of sums implementations, it was assumed that both the normal and complement inputs were available. If only the normal inputs are available, the inverters that generate the complements constitute one additional logic level. The following example demonstrates the derivation of the NAND logic diagrams for a Boolean function initially expressed in canonical form.

EXAMPLE 5-1. Implement the following Boolean function with NAND gates.

F(A, B, C, D) = Σ(0, 1, 3, 7, 8, 9, 10, 14, 15)

First, the expression must be simplified. This is done by using the map shown in Fig. 5-13. In (a) we combine the 1's to obtain a simplified expression in sum of products:

F = B'C' + A'CD + BCD + ACD'

Following the rules for implementing a sum of products expression, we obtain the NAND logic diagram of Fig. 5-14(a). A different NAND gate structure is obtained when the function is simplified in a product of sums form. This is achieved by combining the 0's of the function as shown in the map of Fig. 5-13(b) and then complementing to obtain:

F = (B' + C)(A + C' + D)(A' + B + C' + D')

Following the rules for implementing a product of sums expression, we obtain the NAND logic diagram of Fig. 5-14(b).

Figure 5-13 Map for Example 5-1

Figure 5-14 NAND implementation of the Boolean function of Example 5-1: (a) two-level from sum of products (b) three-level from product of sums

Both implementations require five NAND gates, but one has two logic levels and the other, three.
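The two simplified forms of Example 5-1 can be checked against the minterm list by enumeration. A sketch (illustrative Python, not part of the text), with A taken as the most significant bit of the minterm number:

```python
from itertools import product

MINTERMS = {0, 1, 3, 7, 8, 9, 10, 14, 15}

def canonical(a, b, c, d):
    # Minterm number with A as the most significant bit.
    return (a * 8 + b * 4 + c * 2 + d) in MINTERMS

def sop(a, b, c, d):   # F = B'C' + A'CD + BCD + ACD'
    return bool((not b and not c) or (not a and c and d)
                or (b and c and d) or (a and c and not d))

def pos(a, b, c, d):   # F = (B' + C)(A + C' + D)(A' + B + C' + D')
    return bool((not b or c) and (a or not c or d)
                and (not a or b or not c or not d))

for bits in product((0, 1), repeat=4):
    assert canonical(*bits) == sop(*bits) == pos(*bits)
print("sum-of-products and product-of-sums forms both match the minterms")
```

Both simplified expressions agree with the canonical minterm list on all sixteen rows, confirming that the two NAND diagrams of Fig. 5-14 realize the same function.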
If for any reason only the complemented output of this function is required, the output inverter in Fig. 5-14(b) can be removed and the circuit becomes more economical than the one of Fig. 5-14(a).

Analysis Procedure

Up to now we have considered the problem of deriving a NAND logic diagram from a given Boolean function. The reverse process is the analysis problem, which starts with a given NAND logic diagram and culminates with a Boolean expression or a truth table. The analysis of NAND logic diagrams follows the same procedures presented in Sec. 4-7 for the analysis of combinational circuits. The only difference is that NAND logic requires a repeated application of De Morgan's theorem. We shall now demonstrate the derivation of the Boolean function from a logic diagram. We shall then proceed to show the derivation of the truth table directly from the NAND logic diagram. Finally, a method will be presented for converting a NAND logic diagram to an AND/OR/NOT logic diagram by means of block diagram manipulation.

Derivation of the Boolean Function by Algebraic Manipulation

The procedure for deriving the Boolean function from a logic diagram is outlined in Sec. 4-7. This procedure is demonstrated for the NAND logic diagram shown in Fig. 5-15, which is the same as Fig. 5-9(c). First, all gate outputs are labeled with arbitrary symbols. Second, the Boolean functions for the outputs of gates that receive only external inputs are derived:

T1 = (CD)' = C' + D'
T2 = (BC')' = B' + C

The second form follows directly from De Morgan's theorem and may, at times, be more convenient to use. Third, Boolean functions of gates which have inputs from previously derived functions are determined in consecutive order until the output is expressed in terms of input variables:

T3 = (B'T1)' = (B'C' + B'D')' = (B + C)(B + D) = B + CD
T4 = (AT3)' = [A(B + CD)]'
F = (T2T4)' = {(BC')'[A(B + CD)]'}' = BC' + A(B + CD)

Figure 5-15 Analysis example
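The algebraic steps above can be cross-checked numerically by evaluating the NAND network gate by gate. A sketch (illustrative Python; the T1 to T4 labels follow Fig. 5-15):

```python
from itertools import product

def nand(*xs):
    return 0 if all(xs) else 1

def network(a, b, c, d):
    # Gate-by-gate evaluation of the NAND diagram of Fig. 5-15.
    t1 = nand(c, d)          # T1 = (CD)'
    t2 = nand(b, nand(c))    # T2 = (BC')'
    t3 = nand(nand(b), t1)   # T3 = (B'T1)' = B + CD
    t4 = nand(a, t3)         # T4 = (AT3)'
    return nand(t2, t4)      # F  = (T2 T4)' = BC' + A(B + CD)

for a, b, c, d in product((0, 1), repeat=4):
    expected = int((b and not c) or (a and (b or (c and d))))
    assert network(a, b, c, d) == expected
print("network agrees with F = BC' + A(B + CD) on all 16 rows")
```

The gate-by-gate evaluation reproduces the algebraic result, which is exactly the computation carried out by hand in Table 5-3 below.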
Derivation of the Truth Table

The procedure for obtaining the truth table directly from a logic diagram is also outlined in Sec. 4-7. This procedure is demonstrated for the NAND logic diagram of Fig. 5-15. First, the four input variables, together with their 16 combinations of 1's and 0's, are listed as in Table 5-3. Second, the outputs of all gates are labeled with arbitrary symbols as in Fig. 5-15. Third, we obtain the truth table for the outputs of those gates that are a function of the input variables only. These are T1 and T2. T1 = (CD)', so we mark 0's in those rows where both C and D are equal to 1 and fill the rest of the rows of T1 with 1's. Also T2 = (BC')', so we mark 0's in those rows where B = 1 and C = 0, and fill the rest of the rows of T2 with 1's.

Table 5-3 Truth Table for the Circuit of Figure 5-15

A B C D | T1 T2 T3 T4 | F
0 0 0 0 |  1  1  0  1 | 0
0 0 0 1 |  1  1  0  1 | 0
0 0 1 0 |  1  1  0  1 | 0
0 0 1 1 |  0  1  1  1 | 0
0 1 0 0 |  1  0  1  1 | 1
0 1 0 1 |  1  0  1  1 | 1
0 1 1 0 |  1  1  1  1 | 0
0 1 1 1 |  0  1  1  1 | 0
1 0 0 0 |  1  1  0  1 | 0
1 0 0 1 |  1  1  0  1 | 0
1 0 1 0 |  1  1  0  1 | 0
1 0 1 1 |  0  1  1  0 | 1
1 1 0 0 |  1  0  1  0 | 1
1 1 0 1 |  1  0  1  0 | 1
1 1 1 0 |  1  1  1  0 | 1
1 1 1 1 |  0  1  1  0 | 1

We then proceed to obtain the truth table for the outputs of those gates that are a function of previously defined outputs until the column for the output F is determined. It is now possible to obtain an algebraic expression for the output from the derived truth table. The map drawn in Fig. 5-16 is obtained directly from Table 5-3 and has 1's in the squares of those minterms for which F is equal to 1. The simplified expression obtained from the map is:

F = AB + ACD + BC' = A(B + CD) + BC'

which is the same as the one of Fig. 5-9, thus verifying the correct answer.

Figure 5-16 Derivation of F from Table 5-3

Block Diagram Transformation

It is sometimes convenient to convert a NAND logic diagram to its equivalent AND/OR/NOT logic diagram to facilitate the analysis procedure. By doing so, the Boolean function can be derived more easily without employing De Morgan's theorem. The conversion of logic diagrams is accomplished through a process reverse from that used for implementation. A convenient equivalent symbol for a NAND gate for the conversion is shown in Fig.
5-17(b). Instead of representing a NAND gate with an AND symbol followed by a circle, we can represent it by an OR gate preceded by circles in all inputs. The invert-OR symbol for a NAND gate follows from De Morgan's theorem and from the convention that small circles denote complementation.

Figure 5-17 Two symbols for NAND gate: (a) AND-invert (b) invert-OR

The conversion of a NAND logic diagram to an AND/OR/NOT diagram is achieved through a change in symbols from AND-invert to invert-OR in alternate levels of gates. The first level to be changed to an invert-OR symbol should be the last level. These changes produce pairs of circles along the same line which can be removed since they represent double complementation. Moreover, a one-input AND or OR gate can be removed since it does not perform a logic function. A one-input AND or OR with a circle in the input or output is changed to an inverter circuit.

This procedure is demonstrated in Fig. 5-18. The NAND logic diagram of Fig. 5-18(a) is to be converted to an AND/OR diagram. The symbol of the gate in the last level is changed to an invert-OR. Looking for alternate levels, we find one more gate requiring a change of symbol as shown in Fig. 5-18(b). Any two circles along the same line are removed. Circles that go to external inputs are also removed, provided the corresponding input variable is complemented. The required AND/OR logic diagram is drawn in Fig. 5-18(c).

Figure 5-18 Conversion of NAND logic diagram to AND/OR/NOT

5-6 NOR LOGIC

The NOR function is the dual of the NAND function. For this reason, all procedures and rules for NOR logic form a dual of the corresponding procedures and rules developed for NAND logic.
This section enumerates various methods for NOR logic implementation and analysis by following the same list of topics used for NAND logic. However, less detailed explanation is included so as to avoid excessive repetition of the material in Sec. 5-5.

Universal Gate

The NOR gate is universal because any Boolean function can be implemented with it, including a flip-flop circuit as shown in Sec. 6-2. The conversion of AND, OR, and NOT to NOR is shown in Fig. 5-19. The NOT operation is obtained from a one-input NOR gate, yet another symbol for an inverter circuit. The OR operation requires two NOR gates. The first produces the inverted OR and the second acts as an inverter to obtain the normal output. The AND operation is achieved through a NOR gate with additional inverters in each input.

Figure 5-19 Implementation of NOT, OR, and AND by NOR gates

Boolean Function Implementation—Block Diagram Method

The block diagram procedure for implementing Boolean functions with NOR gates is similar to the procedure outlined in the previous section for NAND gates.

1. Draw the AND/OR/NOT logic diagram from the given algebraic expression. Assume that both the normal and complement inputs are available.
2. Draw a second logic diagram with equivalent NOR logic as given in Fig. 5-19 substituted for each AND, OR, and NOT gate.
3. Remove pairs of cascaded inverters from the diagram. Remove inverters connected to single external inputs and complement the corresponding input variable.

The procedure is illustrated in Fig. 5-20 for the function:

F = A(B + CD) + BC'

The AND/OR implementation of the function is drawn in the logic diagram of Fig. 5-20(a). For each OR gate we substitute a NOR gate followed by an inverter. For each AND gate we substitute input inverters followed by a NOR gate. The pair of cascaded inverters from the OR box to the AND box is removed.
The four inverters connected to external inputs are removed and the input variables complemented. The result is the NOR logic diagram shown in Fig. 5-20(c).

Figure 5-20 Implementation of F = A(B + CD) + BC' with NOR gates: (a) AND/OR implementation (b) substituting equivalent NOR functions from Fig. 5-19 (c) NOR implementation

The number of NOR gates in this example equals the number of AND/OR gates plus an additional inverter in the output (NOR gate number 6). In general, the number of NOR gates required to implement a Boolean function equals the number of AND/OR gates, except for an occasional inverter. This is true provided both normal and complement inputs are available because the conversion forces certain input variables to be complemented.

Two-Level Implementation—Product of Sums

The rule for obtaining the NOR logic diagram directly from a Boolean function expressed in product of sums is the same as that given for NAND logic for sum of products. This follows directly from the principle of duality:

1. Draw a NOR gate for each OR term of the function that has at least two literals. The inputs to each NOR gate are the literals of the term. This constitutes a group of first-level gates.
2. Draw a single NOR gate in the second level with inputs coming from outputs of first-level gates.
3. Any literal that appears as a term by itself is complemented and applied as input to the second-level gate.

Three-Level Implementation—Sum of Products

Again following the principle of duality, we find that a sum of products implementation with NOR gates follows a procedure similar to the product of sums implementation for NAND gates.

1. Draw a NOR gate for each AND term of the function that has at least two literals. The inputs to each NOR gate are the complements of the literals in the term. This constitutes a group of first-level gates.
2.
Draw a single NOR gate in the second level with inputs coming from outputs of first-level gates.
3. Any literal that appears as a term by itself is applied as input to the second-level NOR gate.
4. Draw a third-level NOR gate with a single input connected to the output of the second-level NOR gate.

The following example demonstrates the derivation of the NOR logic diagrams from the same Boolean function used in Ex. 5-1.

EXAMPLE 5-2. Implement the function of Ex. 5-1 with NOR gates. The simplified Boolean expressions for this function are derived in the maps of Fig. 5-13. The simplified product of sums expression is:

F = (B' + C)(A + C' + D)(A' + B + C' + D')

Following the rules for a two-level implementation, we obtain the logic diagram of Fig. 5-21(a). The simplified sum of products expression is:

F = B'C' + A'CD + BCD + ACD'

Following the rules for a three-level implementation, we obtain the logic diagram of Fig. 5-21(b). It is interesting to compare the logic diagrams of Fig. 5-21 with those obtained for NAND logic in Fig. 5-14. Provided both NAND and NOR gates are available, the four logic diagrams constitute four different implementations of the same Boolean function.

Analysis Procedure

The analysis of NOR logic diagrams follows the same procedures presented in Sec. 4-7 for the analysis of combinational circuits. To derive the Boolean function from a logic diagram, we mark the outputs of various gates with arbitrary symbols. By repetitive substitutions we obtain the output variable as a function of the input variables. To obtain the truth table from a logic diagram without first deriving the Boolean function, we form a table listing the n input variables with 2^n rows of 1's and 0's. The truth table of various NOR gate outputs is derived in succession until the output truth table is obtained. The output function of a typical NOR gate is of the form T = (A + B + C)', so the truth table for T is marked with a 0 for those combinations where A = 1 or B = 1 or C = 1.
The rest of the rows are filled with 1's.

Block Diagram Transformation

To convert a NOR logic diagram to its equivalent AND/OR/NOT logic diagram, we utilize the two symbols for NOR gates as shown in Fig. 5-22.

[Figure 5-21 NOR implementation of the Boolean function of Example 5-2]

[Figure 5-22 Two symbols for a NOR gate: (a) OR-invert, (A + B + C)'; (b) invert-AND, A'B'C']

The OR-invert is the normal symbol for a NOR gate, and the invert-AND is a convenient alternative that utilizes De Morgan's theorem and the convention that small circles at the inputs denote complementation.

The conversion of a NOR logic diagram to an AND/OR/NOT diagram is achieved through a change in symbols from OR-invert to invert-AND, starting from the last level and in alternate levels. Pairs of small circles along the same line are removed. A one-input AND or OR gate is removed, but if it has a small circle at the input or output, it is converted to an inverter.

This procedure is demonstrated in Fig. 5-23, where the NOR logic diagram in (a) is converted to an AND/OR/NOT diagram. The symbol of the gate in the last level (5) is changed to an invert-AND. Looking for alternate levels, we find one gate in level 3 and two in level 1. These three gates undergo a symbol change as shown in (b). Any two circles along the same line are removed. Circles that go to external inputs are also removed, provided the corresponding input variable is complemented. The gate in level 5 becomes a one-input AND gate and is removed. The required AND/OR logic diagram is drawn in Fig. 5-23(c).

[Figure 5-23 Conversion of a NOR logic diagram (a) to an AND/OR/NOT logic diagram (c)]

5-7 OTHER TWO-LEVEL IMPLEMENTATIONS

A general form of a two-level gate implementation is shown in Fig. 5-24. Each box labeled c1 represents a first-level gate. The box labeled c2 represents the second-level gate.
For example, a Boolean function expressed in sum of products can be implemented in a two-level form with each c1 box representing an AND gate and the c2 box representing an OR gate. For convenience we use the notation c1/c2 to designate a two-level implementation and refer to it as an AND/OR two-level form.

In previous sections we have considered four types of gates: AND, OR, NAND, and NOR. If we assign one type of gate for c1 and one type for c2, we obtain 16 possible combinations of two-level forms.

[Figure 5-24 General two-level gate implementation]

Eight of these combinations are said to be degenerate forms; they degenerate to a single operation (see Prob. 5-27). The other eight nondegenerate forms produce an implementation in sum of products or product of sums, provided both normal and complement inputs are available. The eight nondegenerate forms are:

1. AND/OR        5. NOR/OR
2. OR/AND        6. NAND/AND
3. NAND/NAND     7. OR/NAND
4. NOR/NOR       8. AND/NOR

The four forms listed in the left column implement a function expressed in sum of products, while the four forms on the right implement a function expressed in product of sums. Note that any two forms listed in the same row are the dual of each other.

The AND/OR and OR/AND forms are basic two-level forms and have been discussed extensively in previous chapters. The NAND/NAND form was introduced and discussed in Sec. 5-5 and the NOR/NOR form in Sec. 5-6. The remaining two-level forms are investigated in this section.

One must realize that it is sometimes convenient, from various hardware considerations, to construct a two-level form as a nonseparable unit. For example, an AND/NOR two-level form may be available in an IC package with only the inputs to first-level gates and the output of the second-level gate connected to external pins. Another hardware facility that produces equivalent two-level forms is the wired-OR or wired-AND property of certain logic gates.
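The pairing of a first-level gate type c1 with a second-level gate type c2 is easy to experiment with in code. The sketch below (the helper names and framing are mine, not the book's) confirms two of the nondegenerate equivalences: NAND/NAND reproduces AND/OR, and NOR/NOR reproduces OR/AND.

```python
from itertools import product

def AND(xs):  return int(all(xs))
def OR(xs):   return int(any(xs))
def NAND(xs): return 1 - int(all(xs))
def NOR(xs):  return 1 - int(any(xs))

def two_level(c1, c2, groups, bits):
    # One c1 gate per group of input indices, all feeding a single c2 gate.
    return c2([c1([bits[i] for i in g]) for g in groups])

groups = [(0, 1), (2, 3)]        # e.g. F = x0x1 + x2x3 in sum of products
for bits in product((0, 1), repeat=4):
    assert two_level(NAND, NAND, groups, bits) == two_level(AND, OR, groups, bits)
    assert two_level(NOR, NOR, groups, bits) == two_level(OR, AND, groups, bits)
```

Swapping other gate types into c1 and c2 reproduces the remaining forms discussed below.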
Wired is the word used to describe the ability to produce a second-level operation (usually OR or AND, but not both) by connecting (shorting) together the outputs of all first-level gates. For example, a NOR/OR form may be available from NOR gates that possess the property of producing a wired-OR function when two or more outputs are shorted.

Figure 5-25 may be conveniently used to determine the logic function of the various two-level forms. Each logic diagram has two first-level gates, with inputs designated by A and B for one gate and C and D for the other gate. A fifth input, designated by E, goes directly to the second-level gate. The logic function performed by each two-level form can be determined algebraically, and from it one can deduce the rule for implementing any Boolean function with the given two-level form.

[Figure 5-25 Two-level implementations of the four nondegenerate forms: (a) NOR/OR, (b) NAND/AND, (c) OR/NAND, (d) AND/NOR]

NOR/OR Form

The logic diagram for this two-level form is shown in Fig. 5-25(a). The output function for the circuit is:

F1 = (A + B)' + (C + D)' + E = A'B' + C'D' + E

which shows that a Boolean function expressed in sum of products can be implemented with a NOR/OR form if the input literals are complemented, except for a single literal forming an OR term.

NAND/AND Form

The logic diagram for this two-level form is shown in Fig. 5-25(b). The output function for the circuit is:

F2 = (AB)'(CD)'E = (A' + B')(C' + D')E

which shows that a Boolean function expressed in product of sums can be implemented with a NAND/AND form if the input literals are complemented (except for a single literal forming an AND term).

OR/NAND Form

The logic diagram for this form, shown in Fig.
5-25(c), produces the following output function:

F3 = [(A + B)(C + D)E]' = A'B' + C'D' + E'

This result shows that a Boolean function expressed in sum of products can be implemented with an OR/NAND form provided all input literals are complemented.

Another way of looking at the OR/NAND form is to consider it as an OR/AND/INVERT function, with the inversion done by the small circle of the second-level NAND gate. An OR/AND implementation requires a product of sums form. The inversion complements the function. Therefore, if we derive the complement of the function in product of sums, we can implement it with the OR/AND function; when it passes through the always-present output inversion, it generates the normal output function. This procedure produces the same implementation as the one outlined above.

AND/NOR Form

The logic diagram for this form produces the following output function:

F4 = (AB + CD + E)' = (A' + B')(C' + D')E'

The result shows that a Boolean function expressed in product of sums can be implemented with an AND/NOR form provided all input literals are complemented. Again, an AND/NOR form may be considered as an AND/OR/INVERT form, with the inversion done by the small circle of the second-level NOR gate. Therefore, if the complement of the function is expressed in sum of products, it can be implemented with the AND/OR function. When the complement passes through the output inversion, it generates the normal function. The implementation achieved by this method is the same as that outlined above.

The implementation of a Boolean function with any one of the eight nondegenerate two-level forms is summarized in Table 5-4. For completeness, we include the first four forms, which have been discussed in previous chapters.

EXAMPLE 5-3. Implement the Boolean function represented in the map of Fig. 5-26 with the four two-level forms NOR/OR, NAND/AND, OR/NAND, and AND/NOR.
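Before turning to the example, the four output functions just derived can be spot-checked exhaustively. The sketch below (mine, using the input labels A through E of Fig. 5-25) verifies each identity over all 32 input combinations:

```python
from itertools import product

def NOT(x): return 1 - x

def nor_or(A, B, C, D, E):   return NOT(A | B) | NOT(C | D) | E
def nand_and(A, B, C, D, E): return NOT(A & B) & NOT(C & D) & E
def or_nand(A, B, C, D, E):  return NOT((A | B) & (C | D) & E)
def and_nor(A, B, C, D, E):  return NOT((A & B) | (C & D) | E)

for A, B, C, D, E in product((0, 1), repeat=5):
    # F1 = (A + B)' + (C + D)' + E = A'B' + C'D' + E
    assert nor_or(A, B, C, D, E) == (NOT(A) & NOT(B)) | (NOT(C) & NOT(D)) | E
    # F2 = (AB)'(CD)'E = (A' + B')(C' + D')E
    assert nand_and(A, B, C, D, E) == (NOT(A) | NOT(B)) & (NOT(C) | NOT(D)) & E
    # F3 = [(A + B)(C + D)E]' = A'B' + C'D' + E'
    assert or_nand(A, B, C, D, E) == (NOT(A) & NOT(B)) | (NOT(C) & NOT(D)) | NOT(E)
    # F4 = (AB + CD + E)' = (A' + B')(C' + D')E'
    assert and_nor(A, B, C, D, E) == (NOT(A) | NOT(B)) & (NOT(C) | NOT(D)) & NOT(E)
```

Each assertion is an instance of De Morgan's theorem applied across the two levels.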
The simplified Boolean function in sum of products is obtained by combining the 1's in the map:

F = AB + A'C + AC'D    (5-1)

The complement of the function in sum of products is obtained by combining the 0's in the map:

F' = A'C' + AB'C + AB'D'    (5-2)

Table 5-4 Method of Implementation of a Boolean Function with Nondegenerate Two-Level Forms

Two-level form | Standard form of function to be used | Input literals to be complemented
AND/OR | sum of products | none
OR/AND | product of sums | none
NAND/NAND | sum of products | single literal forming a term
NOR/NOR | product of sums | single literal forming a term
NOR/OR | sum of products | all, except for a single literal forming a term
NAND/AND | product of sums | all, except for a single literal forming a term
OR/NAND | sum of products | all
AND/NOR | product of sums | all

[Figure 5-26 Map for Example 5-3]

The product of sums expression is obtained from the complement of F' as expressed in Eq. 5-2:

F = (A + C)(A' + B + C')(A' + B + D)    (5-3)

The complement of the function in product of sums is obtained from the complement of F as expressed in Eq. 5-1:

F' = (A' + B')(A + C')(A' + C + D')    (5-4)

The NOR/OR implementation is shown in Fig. 5-27(a). The expression employed is the sum of products from Eq. 5-1, with all input literals complemented. (There is no term with a single literal in this expression.)

The NAND/AND implementation is shown in Fig. 5-27(b). The expression employed is the product of sums from Eq. 5-3, with all input literals complemented (again, there is no single-literal term in this expression).

The OR/NAND implementation is shown in Fig. 5-27(c). The expression used is the sum of products from Eq. 5-1, with all input literals complemented. The same result is obtained if we use the complement function F' expressed in product of sums from Eq. 5-4 to generate an OR/AND function. The small circle at the output complements F' to produce the normal function F.

The AND/NOR implementation is shown in Fig.
5-27(d). The expression employed is the product of sums from Eq. 5-3, with all input literals complemented. Again, the same result is obtained when F' is used from Eq. 5-2 to generate an AND/OR function. The small circle at the output complements F' to produce the normal function F.

[Figure 5-27 Two-level implementations for Example 5-3: (a) NOR/OR, (b) NAND/AND, (c) OR/NAND, (d) AND/NOR]

It was shown in Secs. 5-5 and 5-6 that a Boolean function can be implemented with three levels of NAND or NOR gates. Similarly, the other two-level forms can be extended to three levels, with the third level being either an inverter, a one-input NOR (when NOR gates are used), or a one-input NAND (when NAND gates are used). The third level acts as an inverter to complement what has been generated by the first two levels. The various methods for three-level implementation of a Boolean function can be listed in a table similar to Table 5-4 with the following changes: (a) each two-level form is changed to a three-level form (for example, AND/OR is changed to AND/OR/NOT); (b) the standard form listed in Table 5-4 is interchanged: sum of products is changed to product of sums, and vice versa; (c) the literals to be complemented are those listed as not to be complemented in Table 5-4 (see Prob. 5-30). These rules are a direct consequence of the fact that a complemented function produces a dual function with all literals complemented.

5-8 EXCLUSIVE-OR AND EQUIVALENCE FUNCTIONS

Exclusive-or and equivalence, denoted by ⊕ and ⊙ respectively, are binary operations that perform the following Boolean functions:

x ⊕ y = xy' + x'y
x ⊙ y = xy + x'y'

The two operations are the complement of each other. Each is commutative and associative.
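Both properties are easy to confirm by enumeration. A quick sketch (mine, not part of the original text):

```python
from itertools import product

def xor(x, y): return (x & (1 - y)) | ((1 - x) & y)   # x (+) y = xy' + x'y
def eqv(x, y): return (x & y) | ((1 - x) & (1 - y))   # x (.) y = xy + x'y'

for x, y in product((0, 1), repeat=2):
    assert eqv(x, y) == 1 - xor(x, y)   # the two operations are complements
    assert xor(x, y) == xor(y, x)       # commutative
    assert eqv(x, y) == eqv(y, x)

for a, b, c in product((0, 1), repeat=3):
    assert xor(xor(a, b), c) == xor(a, xor(b, c))   # associative
    assert eqv(eqv(a, b), c) == eqv(a, eqv(b, c))
```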
Because of these two properties, a function of three or more variables can be expressed without parentheses as follows:

(A ⊕ B) ⊕ C = A ⊕ (B ⊕ C) = A ⊕ B ⊕ C

This would imply the possibility of using exclusive-or (or equivalence) gates with three or more inputs. However, multiple-input exclusive-or gates are very uneconomical from a hardware standpoint. In fact, even a two-input function is usually constructed with other types of gates. For example, Fig. 5-28(a) shows the implementation of a two-input exclusive-or function with AND, OR, and NOT gates, and Fig. 5-28(b) shows it with NAND gates. It is sometimes convenient to employ the gate symbols introduced in Fig. 5-2 instead of drawing the internal construction in detail.

[Figure 5-28 Exclusive-OR implementations: (a) with AND, OR, and NOT gates; (b) with NAND gates]

Only a limited number of Boolean functions can be expressed exclusively in terms of exclusive-or or equivalence operations. Nevertheless, these functions emerge quite often during the design of digital systems. The two functions are particularly useful in arithmetic operations and in error detection and correction. In fact, we have already encountered these functions in two combinational circuits in Ch. 4: the S output of the half-adder in Fig. 4-2 is an exclusive-or function, and the part of the comparator circuit of Fig. 4-10 used to generate the equality of each pair of binary digits comprises three equivalence functions.

An n-variable exclusive-or expression is equal to the Boolean function with 2^n/2 minterms whose equivalent binary numbers have an odd number of 1's. This is demonstrated in the map of Fig. 5-29(a) for the four-variable case. There are 16 minterms for four variables. Half the minterms have a numerical value with an odd number of 1's; the other half have a numerical value with an even number of 1's. The numerical value of a minterm is determined from the row and column numbers of the square that represents the minterm.
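This odd-number-of-1's characterization can be checked directly by enumerating the sixteen minterms; a brief sketch of my own:

```python
# Minterms of A (+) B (+) C (+) D: exactly those whose binary value
# contains an odd number of 1's.
minterms = [m for m in range(16) if bin(m).count("1") % 2 == 1]
assert minterms == [1, 2, 4, 7, 8, 11, 13, 14]

for m in range(16):
    A, B, C, D = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
    assert (A ^ B ^ C ^ D) == (1 if m in minterms else 0)
```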
The map of Fig. 5-29(a) has 1's in the squares whose minterm numbers have an odd number of 1's. The function can be expressed in terms of the exclusive-or operations on the four variables. This is justified by the following algebraic manipulation:

A ⊕ B ⊕ C ⊕ D = (AB' + A'B) ⊕ (CD' + C'D)
= (AB' + A'B)(CD + C'D') + (AB + A'B')(CD' + C'D)
= Σ(1, 2, 4, 7, 8, 11, 13, 14)

An n-variable equivalence expression is equal to the Boolean function with 2^n/2 minterms whose equivalent binary numbers have an even number of 0's. This fact is demonstrated in the map of Fig. 5-29(b) for the four-variable case. The squares with 1's represent the eight minterms with an even number of 0's, and the function can be expressed in terms of the equivalence operations on the four variables.

[Figure 5-29 Maps for a four-variable (a) exclusive-OR function, (b) equivalence function]

[Figure 5-30 Maps for three-variable functions: (a) F = A ⊕ B ⊕ C = A ⊙ B ⊙ C; (b) F = (A ⊕ B ⊕ C)' = (A ⊙ B ⊙ C)']

When the number of variables in a function is odd, the minterms with an even number of 0's are the same as the minterms with an odd number of 1's. This fact is demonstrated in the three-variable map of Fig. 5-30(a). Therefore, an exclusive-or expression is equal to an equivalence expression when both have the same odd number of variables. However, they form the complement of each other when the number of variables is even, as is demonstrated in the two maps of Fig. 5-29(a) and (b).

When the minterms of a function with an odd number of variables have an even number of 1's (or, equivalently, an odd number of 0's), the function can be expressed as the complement of either an exclusive-or or an equivalence expression. For example, the three-variable function shown in the map of Fig. 5-30(b) can be expressed as follows:

(A ⊕ B ⊕ C)' = A ⊕ B ⊙ C  or  (A ⊕ B ⊕ C)' = A ⊙ B ⊕ C

The S output of a full-adder and the D output of a full-subtractor (Sec. 4-3) can be implemented with exclusive-or functions, because each function consists of four minterms with numerical values having an odd number of 1's. The exclusive-or function is extensively used in the implementation of digital arithmetic operations, because the latter are usually implemented through procedures that require a repetitive addition or subtraction operation.

Exclusive-or and equivalence functions are very useful in systems requiring error detection and correction codes. As discussed in Sec. 1-6, a parity bit is a scheme for detecting errors during the transmission of binary information. A parity bit is an extra bit included with a binary message to make the number of 1's either odd or even. The message, including the parity bit, is transmitted and then checked at the receiving end for errors. An error is detected if the checked parity does not correspond to the one transmitted. The circuit that generates the parity bit in the transmitter is called a parity generator; the circuit that checks the parity in the receiver is called a parity checker.

As an example, consider a three-bit message to be transmitted with an odd parity bit. Table 5-5 shows the truth table for the parity generator.

Table 5-5 Odd-Parity Generation

3-bit message x y z | Parity bit generated P
0 0 0 | 1
0 0 1 | 0
0 1 0 | 0
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 1
1 1 1 | 0

[Figure 5-31 Logic diagrams for parity generation and checking: (a) 3-bit odd-parity generator; (b) 4-bit odd-parity checker]

The three bits x, y, and z constitute the message and are the inputs to the circuit. The parity bit P is the output. For odd parity, the bit P is generated so as to make the total number of 1's (including P) odd. From the truth table we see that P = 1 when the number of 1's in x, y, and z is even. This corresponds to the map of Fig.
5-30(b), so that the function for P can be expressed as follows:

P = x ⊕ y ⊙ z

The logic diagram for the parity generator is shown in Fig. 5-31(a). It consists of one two-input exclusive-or gate and one two-input equivalence gate. The two gates can be interchanged and still produce the same function, since P is also equal to:

P = x ⊙ y ⊕ z

The three-bit message and the parity bit are transmitted to their destination, where they are applied to a parity checker circuit. An error occurs during transmission if the parity of the four bits received is even, since the binary information transmitted was originally odd. The output C of the parity checker should be a 1 when an error occurs; i.e., when the number of 1's in the four inputs is even. Table 5-6 is the truth table for the odd-parity checker circuit. From it we see that the function for C consists of the eight minterms with numerical values having an even number of 0's.

Table 5-6 Odd-Parity Check

4 bits received x y z P | Parity error check C
0 0 0 0 | 1
0 0 0 1 | 0
0 0 1 0 | 0
0 0 1 1 | 1
0 1 0 0 | 0
0 1 0 1 | 1
0 1 1 0 | 1
0 1 1 1 | 0
1 0 0 0 | 0
1 0 0 1 | 1
1 0 1 0 | 1
1 0 1 1 | 0
1 1 0 0 | 1
1 1 0 1 | 0
1 1 1 0 | 0
1 1 1 1 | 1

This corresponds to the map of Fig. 5-29(b), so that the function can be expressed with equivalence operators as follows:

C = x ⊙ y ⊙ z ⊙ P

The logic diagram for the parity checker is shown in Fig. 5-31(b); it consists of three two-input equivalence gates. It is worth noting that the parity generator can be implemented with the circuit of Fig. 5-31(b) if the input P is permanently held at logic-0 and the output is marked P, the advantage being that the same circuit can be used for both parity generation and checking.

It is obvious from the above example that parity generation and checking circuits always have an output function that includes half of the minterms, whose numerical values have either an even or an odd number of 1's.
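The generator and checker described above can be expressed functionally. The sketch below (the function names are mine) also confirms the remark that a checker with its P input held at logic-0 behaves as a generator:

```python
from itertools import product

def odd_parity_bit(x, y, z):
    # P = 1 exactly when x, y, z hold an even number of 1's,
    # so that the transmitted total (including P) is odd.
    return 1 - (x ^ y ^ z)

def parity_error(x, y, z, p):
    # C = 1 when the four received bits hold an even number of 1's.
    return 1 - (x ^ y ^ z ^ p)

for x, y, z in product((0, 1), repeat=3):
    p = odd_parity_bit(x, y, z)
    assert (x + y + z + p) % 2 == 1           # what is sent has odd parity
    assert parity_error(x, y, z, p) == 0      # clean transmission: no error
    assert parity_error(x, y, z, 1 - p) == 1  # a single flipped bit is caught
    # Holding the checker's P input at logic-0 turns it into a generator:
    assert parity_error(x, y, z, 0) == odd_parity_bit(x, y, z)
```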
As a consequence, they can be implemented with equivalence and/or exclusive-or gates.

PROBLEMS

5-1. How many of the MSI functions introduced in Ch. 4 can be accommodated in one 14-pin IC package?

5-2. Discuss some of the advantages and difficulties that one may encounter when LSI devices are used in digital systems.

5-3. By substituting the AND/OR/NOT equivalent definition of the binary operations as defined in Sec. 2-6, show that:
(a) The inhibition and implication operators are neither commutative nor associative.
(b) The exclusive-or and equivalence operators are commutative and associative.
(c) The NAND operator is not associative.
(d) The NOR and NAND operators are not distributive.

5-4. A majority gate is a digital circuit whose output is logic-1 if the majority of the inputs are logic-1, and logic-0 otherwise. By means of truth tables, find the Boolean function implemented by the following majority (MAJ) gate circuit. [Figure: a network of three MAJ gates with inputs x, y, and z]

5-5. Repeat Prob. 5-4 for the following circuit. What is the function of the circuit? [Figure: a majority-gate circuit]

5-6. The gate shown below performs the inhibition operation F = x'y. Obtain the AND, OR, and NOT functions using inhibition gates only. Remember that 0 and 1 are valid values for inputs.

5-7. (a) Show that the NOR and NAND operations are the dual of each other.
(b) Show that one physical circuit can function as a positive logic NAND gate or as a negative logic NOR gate.
(c) Repeat part (b) for positive logic NOR and negative logic NAND.

5-8. The following example demonstrates the duality of positive and negative logic.
(a) Draw the logic diagram for the function F = AB + C.
(b) Obtain the truth table.
(c) You are told that the gates operate with positive logic, logic-1 and logic-0 being +5 volts and -5 volts, respectively. Obtain a truth table showing voltage values instead of 1's and 0's.
(d) Use the same physical gates as in (c) but change to a negative logic system; that is, let a signal of -5 volts represent logic-1 and +5 volts represent logic-0. Obtain the truth table in terms of 1's and 0's.
(e) Show that the truth table obtained in (d) gives the Boolean function F = (A + B)C, which is the dual of the one in (a).
(f) Now repeat steps (a) to (e) after changing step (c) to read: "You are told that the gates operate with negative logic, with -5 volts and +5 volts representing 1 and 0, respectively." Show that the function obtained in step (e) is AB + C.

5-9. Count the number of logic levels in the circuits of Figs. P4-19 and P4-21. Obtain the equivalent two-level circuit for each.

5-10. What is the fan-in and fan-out of the AND gates and inverters specified in Prob. 4-27?

5-11. Obtain a two-level and a multilevel implementation with AND and OR gates of the function F(A, B, C, D) = Σ(2, 3, 5, 7, 10, 12, 13, 14), with the restriction that inputs A and A' can provide only one load; i.e., each can be connected to one gate input only. Assume that both the normal and complement input variables are available.

5-12. Using the block diagram method, convert the following logic diagrams from AND/OR implementation to NAND implementation.
(a) BCD to excess-3 converter, Fig. 4-8.
(b) Three-bit comparator, Fig. 4-10.
(c) Full-adder, Fig. 4-11.
(d) Binary to octal decoder, Fig. 4-12.
(e) Octal to binary encoder, Fig. 4-13.
(f) Eight-input multiplexer, Fig. 4-15.
(g) Four-output demultiplexer, Fig. 4-17.

5-13. Repeat Prob. 5-12 for NOR implementation.

5-14. Obtain the NAND logic diagram of a full-adder from the Boolean functions:

C = xy + xz + yz
S = C'(x + y + z) + xyz

5-15. Simplify each of the following functions and implement them with NAND gates. Give two alternatives.
(a) F1 = AC' + ACE + ACE' + A'CD' + A'D'E'
(b) F2 = (B' + D')(A + C + D)(A + B + C + D)(A' + B + D)

5-16. Repeat Prob. 5-15 for NOR implementations.

5-17. Implement the following functions with NAND gates.
Assume that both the normal and complement inputs are available.
(a) BD + BCD + AB'CD' + A'B'CD' with no more than six gates, each having three inputs.
(b) (AB + A'B')(CD' + C'D) with two-input gates.

5-18. Implement the following functions using the don't-care conditions. Assume that both the normal and complement inputs are available.
(a) F = A'B'C' + ABD + A'B'CD', d = ABC + AB'D', with no more than two NOR gates.
(b) F = (A + D)(A' + B)(A' + C) with no more than three NAND gates.
(c) F = B'D + B'C + ABCD, d = ABD + AB'CD', with NAND gates.

5-19. Implement the following function with either NAND or NOR gates. Use only four gates. Only the normal inputs are available.

F = w'xz + w'yz + x'yz' + wxy'z, d = wz

5-20. Implement the following functions with NOR gates. Assume that both the normal and complement inputs are available.
(a) AB' + CD' + A'CD' + DC(AB + AB') + DB(AC + A'C)
(b) AB'CD' + A'BCD' + ABCD + A'BC'D

5-21. Design the following combinational circuit using a minimum number of NOR gates. The circuit has four inputs, which represent a decimal digit in BCD, and a single output, which is equal to 1 whenever the value of the input digit is less than 3 or more than 7.

5-22. Determine the Boolean function for the output F of the circuit in Fig. P5-22. Obtain an equivalent circuit with fewer NOR gates.

5-23. Determine the output Boolean functions of the circuits of Fig. P5-23.

5-24. Obtain the truth table for the circuits of Fig. P5-23.

5-25. Obtain the equivalent AND/OR/NOT logic diagram of Fig. P5-23(a).

5-26. Obtain the equivalent AND/OR/NOT logic diagram of Fig. P5-23(b).

5-27. List the eight degenerate two-level forms and show that they reduce to a single operation. Explain how the degenerate two-level forms can be used to extend the fan-in of gates.

5-28. Implement the functions of Prob. 5-15 with the following two-level forms: NOR/OR, NAND/AND, OR/NAND, and AND/NOR.

5-29.
Implement the functions of Prob. 5-15 with the three-level forms: NOR/OR/NOR, NAND/AND/NAND, OR/NAND/NAND, and AND/NOR/NOR.

5-30. List in a table (similar to Table 5-4) the method of implementation of Boolean functions with nondegenerate three-level forms. Implement the functions of Ex. 5-3 with all possible three-level forms.

5-31. Obtain the logic diagram of a two-input equivalence function using (a) AND, OR, NOT gates; (b) NOR gates; (c) NAND gates.

5-32. Show that the circuit of Fig. 5-28(b) is an exclusive-or.

5-33. Show that A ⊙ B ⊙ C ⊙ D = Σ(0, 3, 5, 6, 9, 10, 12, 15).

5-34. Design a combinational circuit that converts a four-bit reflected code number (Table 1-4) to a four-bit binary number. Implement the circuit with exclusive-or gates.

5-35. Design a combinational circuit to check for even parity of four bits. A logic-1 output is required when the four bits do not constitute an even parity.

5-36. Implement the four Boolean functions listed using three half-adder circuits (Fig. 4-2).
(a) D = A ⊕ B ⊕ C
(b) E = A'BC + AB'C
(c) F = ABC' + (A' + B')C
(d) G = ABC

5-37. Implement the Boolean function F = AB'CD' + A'BCD' + AB'C'D + A'BC'D with exclusive-or and AND gates.

REFERENCES

1. Maley, G. A., and J. Earle, The Logic Design of Transistor Digital Computers. Englewood Cliffs, N.J.: Prentice-Hall, 1963, Ch. 6.
2. Mano, M. M., "Converting to NOR and NAND Logic," Electro-Technology, Vol. 75, No. 4 (April 1965), 34-37.
3. Maley, G. A., Manual of Logic Circuits. Englewood Cliffs, N.J.: Prentice-Hall, 1970.
4. Garrett, L. S., "Integrated-Circuit Digital Logic Families," IEEE Spectrum (October, November, December 1970).

6 SEQUENTIAL LOGIC

6-1 INTRODUCTION

The digital circuits considered thus far have been combinational; that is, the outputs at any instant of time are entirely dependent upon the inputs present at that time.
Although every digital system is likely to have combinational circuits, most systems encountered in practice also include memory elements, which require that the system be described in terms of sequential logic.

A block diagram of a sequential circuit is shown in Fig. 6-1. It consists of combinational logic gates which accept binary signals from external inputs and from the outputs of memory elements, and generate signals to external outputs and to the inputs of memory elements. A memory element is a device capable of storing one bit of information. The binary information stored in memory elements can be changed by the outputs of the combinational circuit. The outputs of memory elements, in turn, go to the inputs of gates in the combinational circuit. The combinational circuit, by itself, performs a specific information-processing operation, part of which is used to determine the binary value to be stored in the memory elements. The outputs of memory elements are applied to the combinational circuit and determine, in part, the circuit's outputs. This process clearly demonstrates that the external outputs of a sequential circuit are a function not only of external inputs but also of the present state of the memory elements. (The state of a memory element or binary cell was defined in Sec. 1-7.) The next state of the memory elements is a function of external inputs and the present state. Thus, a sequential circuit is specified by a time sequence of inputs, outputs, and internal states.

[Figure 6-1 Block diagram of a sequential circuit: a combinational circuit coupled to memory elements, with outputs taken from the combinational logic gates and from the memory elements]

There are two main types of sequential circuits. Their classification depends on the timing of their signals. A synchronous sequential circuit is a system whose behavior can be defined from the knowledge of its signals at discrete instants of time.
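The interplay just described, in which the combinational logic computes both the external outputs and the next state, can be mimicked with a small simulation. The sketch below (the step function and the toggle example are mine, not the book's):

```python
def step(state, x, next_state, output):
    """One clock period of the Fig. 6-1 arrangement: the combinational
    logic sees the external input and the present state, and produces
    both the external output and the value to be stored next."""
    return next_state(state, x), output(state, x)

# A 1-bit example circuit: the state toggles whenever the input is 1,
# and the external output is simply the present state.
next_state = lambda s, x: s ^ x
output     = lambda s, x: s

state, outs = 0, []
for x in [1, 1, 0, 1]:
    state, out = step(state, x, next_state, output)
    outs.append(out)
assert outs == [0, 1, 0, 0]   # each output depends on the state before the pulse
assert state == 1             # final state after the input sequence
```

The same `step` skeleton accommodates any choice of next-state and output functions, which is exactly the sense in which a sequential circuit is specified by a time sequence of inputs, outputs, and internal states.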
The behavior of an asynchronous sequential circuit depends upon the order in which its input signals change, and can be affected at any instant of time. The memory elements commonly used in asynchronous sequential circuits are time-delay devices. The memory capability of a time-delay device is due to the fact that it takes a finite time for the signal to propagate through the device. In practice, the internal propagation delay of logic gates is of sufficient duration to produce the needed delay, so that physical time-delay units may be unnecessary. In gate-type asynchronous systems, the memory elements of Fig. 6-1 consist of logic gates whose propagation delays constitute the required memory. Thus, an asynchronous sequential circuit may be regarded as a combinational circuit with feedback. Because of the feedback among logic gates, an asynchronous sequential circuit may, at times, become unstable. The instability problem imposes many difficulties on the designer; hence, such circuits are not as commonly used as synchronous systems.

A synchronous sequential logic system, by definition, must employ signals that affect the memory elements only at discrete instants of time. One way of achieving this goal is to use pulses of limited duration throughout the system, so that one pulse amplitude represents logic-1 and another pulse amplitude (or the absence of a pulse) represents logic-0. The difficulty with a system of pulses is that any two pulses arriving from separate independent sources at the inputs of the same gate will exhibit unpredictable delays, separate the pulses slightly, and result in unreliable operation.

Practical synchronous sequential logic systems use fixed amplitudes, such as voltage levels, for the binary signals. Synchronization is achieved by a timing device called a master clock generator, which generates a periodic train of clock pulses.
The clock pulses are distributed throughout the system in such a way that memory elements are affected only with the arrival of the synchronization pulse. In practice, the clock pulses are applied to AND gates together with the signals that specify the required change in the memory elements. The AND gate outputs can transmit signals only at instants which coincide with the arrival of clock pulses. Synchronous sequential circuits that use clock pulses at the inputs of memory elements are called clocked sequential circuits. Clocked sequential circuits are the type encountered most frequently. They do not manifest instability problems, and their timing is easily broken down into independent discrete steps, each of which is considered separately. The sequential circuits discussed in this book are exclusively of the clocked type.

The memory elements used in clocked sequential circuits are called flip-flops. These circuits are binary cells capable of storing one bit of information. A flip-flop circuit has two outputs, one for the normal value and one for the complement value of the bit stored in it. Binary information can enter a flip-flop in a variety of ways, a fact which gives rise to different types of flip-flops. In the next section we shall examine the various types of flip-flops and define their logical properties.

6-2 FLIP-FLOPS

A flip-flop circuit can maintain a binary state indefinitely (as long as power is delivered to the circuit) until directed by an input signal to switch states. The major differences among the various types of flip-flops are in the number of inputs they possess and in the manner in which the inputs affect the binary state. The most common types of flip-flops are discussed below.

Basic Flip-Flop Circuit

It was mentioned in Secs. 5-5 and 5-6 that a flip-flop circuit can be constructed from two NAND gates or two NOR gates. These constructions are shown in the logic diagrams of Figs. 6-2 and 6-3.
Each circuit forms a basic flip-flop upon which other more complicated types can be built. The cross-coupled connection from the output of one gate to the input of the other gate constitutes a feedback path. For that reason, the circuits are classified as asynchronous sequential circuits. Each flip-flop has two outputs, Q and Q', and two inputs, set and reset. This type of flip-flop is sometimes called a direct-coupled RS flip-flop, the R and the S being the first letters of the two input names.

To analyze the operation of the circuit of Fig. 6-2, we must remember that the output of a NOR gate is 0 if any input is 1, and that the output is 1 only when all inputs are 0. As a starting point, assume that the set input is 1 and the reset input is 0. Since gate number 2 has an input of 1, its output Q' must be 0, which puts both inputs of gate number 1 at 0, so that output Q is 1. When the set input is returned to 0, the outputs remain the same. This is because output Q remains a 1, leaving one input of gate number 2 at 1. That causes output Q' to stay at 0, which leaves both inputs of gate number 1 at 0, so that output Q is a 1. In the same manner it is possible to show that a 1 in the reset input changes output Q to 0 and Q' to 1. When the reset input returns to 0, the outputs do not change.

[Figure 6-2 Basic flip-flop circuit with NOR gates: (a) logic diagram; (b) truth table]

    S  R |  Q  Q'
    1  0 |  1  0
    0  0 |  1  0   (after S = 1, R = 0)
    0  1 |  0  1
    0  0 |  0  1   (after S = 0, R = 1)
    1  1 |  0  0

When a 1 is applied to both the set and reset inputs, both Q and Q' outputs go to 0. Strictly speaking, a flip-flop whose two outputs assume the same binary value while both the set and reset inputs have equal values is called a latch. In a true flip-flop circuit this condition is normally avoided. A flip-flop has two useful states. When Q = 1 and Q' = 0, it is in the set-state (or 1-state). When Q = 0 and Q' = 1, it is in the clear-state (or 0-state).
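The gate-by-gate analysis above can be checked with a small simulation of the cross-coupled NOR circuit. This is a sketch, not part of the text: the function names and the fixed four-pass settling loop are our own choices, and the gates are modeled as ideal (no propagation delay).

```python
def nor(a, b):
    """2-input NOR gate: output is 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def nor_latch(s, r, q, qp):
    """Settle the cross-coupled NOR flip-flop of Fig. 6-2.

    q is the output of gate 1 (inputs: r and qp); qp is the output of
    gate 2 (inputs: s and q). A few passes are enough for the feedback
    loop to stabilize for any legal input combination."""
    for _ in range(4):
        q, qp = nor(r, qp), nor(s, q)
    return q, qp

# Walk through the analysis in the text:
q, qp = nor_latch(1, 0, 0, 1)   # set input = 1: Q goes to 1
assert (q, qp) == (1, 0)
q, qp = nor_latch(0, 0, q, qp)  # set returns to 0: outputs remain the same
assert (q, qp) == (1, 0)
q, qp = nor_latch(0, 1, q, qp)  # reset input = 1: Q goes to 0
assert (q, qp) == (0, 1)
q, qp = nor_latch(1, 1, q, qp)  # 1 on both inputs: both outputs go to 0
assert (q, qp) == (0, 0)
```

The last call reproduces the undefined condition discussed next: with S = R = 1 both outputs are 0, so they are no longer complements of each other.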
The outputs Q and Q' are complements of each other and are referred to as the normal and complement output, respectively. The binary state of the flip-flop is taken to be the value of the normal output.

Under normal operation, both inputs remain at 0 unless the state of the flip-flop has to be changed. The application of a momentary 1 to the set input causes the flip-flop to go to the set-state. The set input must go back to 0 before a 1 is applied to the reset input. A momentary 1 applied to the reset input causes the flip-flop to go to the clear-state. When both inputs are initially 0, a 1 applied to the set input while the flip-flop is in the set-state, or a 1 applied to the reset input while the flip-flop is in the clear-state, leaves the outputs unchanged. When a 1 is applied to both the set and reset inputs, both outputs go to 0. This state is undefined and is usually avoided. If both inputs now go to 0, the state of the flip-flop is indeterminate and depends on which input remains a 1 longer before the transition to 0.

[Figure 6-3 Basic flip-flop circuit with NAND gates: (a) logic diagram; (b) truth table]

    S  R |  Q  Q'
    1  0 |  0  1
    1  1 |  0  1   (after S = 1, R = 0)
    0  1 |  1  0
    1  1 |  1  0   (after S = 0, R = 1)
    0  0 |  1  1

The NAND basic flip-flop circuit of Fig. 6-3 operates with both inputs normally at 1 unless the state of the flip-flop has to be changed. The application of a momentary 0 to the set input causes output Q to go to 1 and Q' to go to 0, thus putting the flip-flop into the set-state. After the set input returns to 1, a momentary 0 to the reset input causes a transition to the clear-state. When both inputs go to 0, both outputs go to 1, a condition avoided in normal flip-flop operation.

Clocked RS Flip-Flop

The basic flip-flop as it stands is an asynchronous sequential circuit. By adding gates to the inputs of the basic circuit, the flip-flop can be made to respond to input levels during the occurrence of a clock pulse.
The clocked RS flip-flop shown in Fig. 6-4(a) consists of a basic NOR flip-flop and two AND gates. The outputs of the two AND gates remain at 0 as long as the clock pulse (abbreviated CP) is 0, regardless of the S and R input values. When the clock pulse goes to 1, information from the S and R inputs is allowed to reach the basic flip-flop. The set-state is reached with S = 1, R = 0, and CP = 1. To change into the clear-state, the inputs must be S = 0, R = 1, and CP = 1. With both S = 1 and R = 1, the occurrence of a clock pulse causes both outputs to momentarily go to 0. When the pulse is removed, the state of the flip-flop is indeterminate; i.e., either state may result, depending on whether the set or the reset input of the basic flip-flop remains a 1 longer before the transition to 0 at the end of the pulse.

Two symbols for the RS flip-flop are shown in Fig. 6-4(b). The AND gates with the clock-pulse input may be drawn external to the symbol, or a symbol with a CP label may be used to mean that flip-flop outputs are not affected unless a clock pulse occurs in the input marked CP.

[Figure 6-4 Clocked RS flip-flop: (a) logic diagram; (b) symbols; (c) characteristic table; (d) characteristic equation Q(t + 1) = S + R'Q, with SR = 0]

The characteristic table for the flip-flop is shown in Fig. 6-4(c). This table summarizes the operation of the flip-flop in tabular form. Q is the binary state of the flip-flop at a given time (referred to as the present state), the S and R columns give the possible values of the inputs, and Q(t + 1) is the state of the flip-flop after the occurrence of a clock pulse (referred to as the next state).

The characteristic equation of the flip-flop is derived in the map of Fig. 6-4(d). This equation specifies the value of the next state as a function of the present state and the inputs. The characteristic equation is an algebraic expression for the binary information of the characteristic table.
The two indeterminate states are marked by X's in the map since they may result in either a 1 or a 0. However, the relation SR = 0 must be included as part of the characteristic equation to specify that both S and R cannot equal 1 simultaneously.

D Flip-Flop

The D flip-flop shown in Fig. 6-5 is a modification of the clocked RS flip-flop. NAND gates 1 and 2 form a basic flip-flop and gates 3 and 4 modify it into a clocked RS flip-flop. The D input goes directly to the S input, and its complement, through gate 5, is applied to the R input. As long as the clock pulse input is at 0, gates 3 and 4 have a 1 in their outputs, regardless of the value of the other inputs. This conforms to the requirement that the two inputs of a basic NAND flip-flop (Fig. 6-3) remain initially at the 1 level. The D input is sampled during the occurrence of a clock pulse. If it is a 1, the output of gate 3 goes to 0, switching the flip-flop to the set-state (unless it was already set). If it is a 0, the output of gate 4 goes to 0, switching the flip-flop to the clear-state.

[Figure 6-5 Clocked D flip-flop: (a) logic diagram with NAND gates; (b) symbol; (c) characteristic table; (d) characteristic equation Q(t + 1) = D]

    Q  D | Q(t + 1)
    0  0 |    0
    0  1 |    1
    1  0 |    0
    1  1 |    1

The D flip-flop receives its designation from its ability to transfer "data" into a flip-flop. It is basically an RS flip-flop with an inverter in the R input. The added inverter reduces the number of inputs from two to one. The symbol for a clocked D flip-flop is shown in Fig. 6-5(b). The characteristic table is listed in part (c) and the characteristic equation is derived in part (d). The characteristic equation shows that the next state of the flip-flop is the same as the D input and is independent of the value of the present state.
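The characteristic equations derived so far can be written as one-line functions and checked against their characteristic tables. This is a sketch in Python; the function names `rs_next` and `d_next` are ours, not the book's notation.

```python
def rs_next(q, s, r):
    """Characteristic equation of the clocked RS flip-flop:
    Q(t+1) = S + R'Q, subject to the restriction SR = 0."""
    assert not (s and r), "SR = 0: S and R must not both be 1"
    return s | ((1 - r) & q)

def d_next(q, d):
    """Characteristic equation of the D flip-flop: Q(t+1) = D."""
    return d

# Check rs_next against the characteristic table of Fig. 6-4(c)
assert rs_next(0, 0, 0) == 0 and rs_next(1, 0, 0) == 1   # S = R = 0: hold
assert rs_next(0, 1, 0) == 1 and rs_next(1, 1, 0) == 1   # S = 1: set
assert rs_next(0, 0, 1) == 0 and rs_next(1, 0, 1) == 0   # R = 1: clear

# The D flip-flop's next state equals D, independent of the present state
assert all(d_next(q, d) == d for q in (0, 1) for d in (0, 1))
```

The `assert` inside `rs_next` plays the role of the SR = 0 restriction: the equation simply has no defined value for S = R = 1.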
JK Flip-Flop

A JK flip-flop is a refinement of the RS flip-flop in that the indeterminate state of the RS type is defined in the JK type. Inputs J and K behave like inputs S and R to set and clear the flip-flop (note that in a JK flip-flop, the first letter J is for set and the second letter K is for clear). When inputs are applied to both J and K simultaneously, the flip-flop switches to its complement state; that is, if Q = 1, it switches to Q = 0, and vice versa.

[Figure 6-6 Clocked JK flip-flop: (a) logic diagram; (b) symbol; (c) characteristic table; (d) characteristic equation Q(t + 1) = JQ' + K'Q]

A clocked JK flip-flop is shown in Fig. 6-6(a). Output Q is ANDed with the K and CP inputs so that the flip-flop is cleared during a clock pulse only if Q was previously 1. Similarly, output Q' is ANDed with the J and CP inputs so that the flip-flop is set with a clock pulse only if Q' was previously 1. When both J and K are 1, a clock pulse is transmitted through one AND gate: the one connected to the output equal to 1. Thus, if Q equals 1, the output of the upper AND gate becomes a 1 and the flip-flop is cleared. If Q' equals 1, the output of the lower AND gate becomes a 1 and the flip-flop is set. In either case, the state of the flip-flop is complemented. Note that if the CP signal remains a 1 after the outputs have been complemented, the flip-flop will go through a new transition. This timing problem is eliminated with a master-slave JK flip-flop. The symbol, characteristic table, and characteristic equation of the JK flip-flop are shown in Fig. 6-6, parts (b), (c), and (d), respectively.

T Flip-Flop

The T flip-flop is a single-input version of the JK flip-flop. As shown in Fig. 6-7(a), the T flip-flop is obtained from a JK type if both inputs are

[Figure 6-7 Clocked T flip-flop: (a) logic diagram; (b) symbol; (c) characteristic table; (d) characteristic equation Q(t + 1) = TQ' + T'Q]
tied together. The designation "T" comes from the ability of the flip-flop to "toggle," or change state. Regardless of the present state of the flip-flop, it assumes the complement state when the clock pulse occurs while input T is logic-1. The symbol, characteristic table, and characteristic equation of the T flip-flop are shown in Fig. 6-7, parts (b), (c), and (d), respectively.

Other Considerations

The four types of flip-flops introduced in this section may be available in an unclocked version. Flip-flops without a clock input are useful for asynchronous operations. An unclocked flip-flop can be converted to a clocked flip-flop by ANDing the clock pulse and the input information prior to their application to each input of the flip-flop. The flip-flops introduced in this section are the most common types available commercially. The analysis and design procedures developed in this and the next chapter are applicable for any clocked flip-flop once its characteristic table is defined.

6-3 TRIGGERING OF FLIP-FLOPS

The state of a flip-flop is switched by a momentary change in the input signal. This momentary change is called a trigger, and the transition which it causes is said to trigger the flip-flop. Asynchronous flip-flops, such as the basic circuits of Figs. 6-2 and 6-3, require an input trigger defined by a change of signal level. This level must be returned to its initial value (0 in the NOR and 1 in the NAND flip-flop) before applying a second trigger. Clocked flip-flops are triggered by pulses. A pulse starts from an initial value of 0, goes momentarily to 1, and after a short time, returns to its initial value. The time interval from the application of the pulse until the output transition occurs is a critical factor that needs further investigation.

As seen from the block diagram of Fig. 6-1, a sequential circuit has a feedback path between the combinational circuit and the memory elements.
This path can produce instability if the outputs of memory elements (flip-flops) are changing while the outputs of the combinational circuit that go to flip-flop inputs are being sampled by the clock pulse. This timing problem can be prevented if the outputs of flip-flops do not start changing until the pulse input has returned to 0. To insure such an operation, a flip-flop must have a signal propagation delay from input to output in excess of the pulse duration. This delay is usually very difficult to control if the designer depends entirely on the propagation delay of logic gates. One way of insuring the proper delay is to include within the flip-flop circuit a physical delay unit having a delay equal to or greater than the pulse duration. More commonly, the flip-flop is triggered during the trailing edge of the pulse, as discussed below.

A clock pulse may be either positive or negative, depending on whether positive or negative logic is employed. A clock source remains at logic-0 between pulses and goes to logic-1 during the occurrence of a pulse. A pulse goes through two signal transitions: from 0 to 1 and the return from 1 back to 0. As shown in Fig. 6-8, the initial transition is defined as the leading edge and the return transition as the trailing edge. The two edges may be either rising or falling depending on whether we have positive or negative pulses.

The clocked flip-flops introduced in Sec. 6-2 are triggered during the leading edge of the pulse; that is, the state transition starts as soon as the pulse reaches the logic-1 level. The new state of the flip-flop may appear at the output terminals while the input pulse is still a 1. When this happens, the output of one flip-flop cannot be used as a condition for triggering another flip-flop when both respond to the same clock pulse.
When flip-flops are triggered during the trailing edge of the pulse, the new state of the flip-flop appears at the output terminals after the pulse signal has gone back to 0. This is in effect a way of achieving the needed delay mentioned previously.

The time dependence of sequential circuits is best illustrated in a timing diagram. A timing diagram shows how the signals at various terminals vary over time. Each terminal in a timing diagram is assigned two coordinates: a horizontal coordinate to represent time and a vertical coordinate to represent signal amplitude. An example of a timing diagram that demonstrates the difficulty encountered when flip-flops are triggered on the leading edge of a clock pulse is shown in Fig. 6-9(a).

[Figure 6-8 Definition of leading and trailing edge of a pulse, for positive and negative pulses]

The diagram shows the signals of a clock pulse (CP) terminal, a J input terminal, and a Q output terminal of a clocked JK flip-flop. It is assumed that the signal at J comes from the output of some other flip-flop triggered by the same clock pulses. The signal at J starts its transition from 0 to 1 during the leading edge of clock pulse t1. Output Q may or may not switch to 1 during clock pulse t1. This is because input J is in a state of transition during the pulse interval and may sometimes be detected as a 0 signal and sometimes as a 1 signal. The best way to avoid this ambiguity is to insure that the flip-flop is not triggered while input J is in transition. It should be triggered at time t2, when input J is maintained at logic-1, which requires additional control signals not indicated in the timing diagram. Under this condition, the signal at J must remain at logic-1 during pulse interval t2 and can return to logic-0 during clock pulse t3 or later.
[Figure 6-9 Timing diagram for a JK flip-flop: (a) leading-edge triggering; (b) trailing-edge triggering]

Now consider the same timing sequence with flip-flops being triggered on the trailing edge of the pulse, as shown in Fig. 6-9(b). The signal at J becomes a 1 after the termination of pulse interval t1. Output Q remains a 0 after t1 because J is a 0 during the pulse interval. During pulse interval t2, input J of the flip-flop is a 1, so output Q starts switching to 1 at the trailing edge of t2. The same t2 pulse is used to trigger the other flip-flop that returns the signal at J to 0. This is possible because the signal at J remains a 1 until the end of the clock pulse. Thus, with trailing-edge triggering, it is possible to switch the output of a flip-flop and its input information with the same clock pulse. The trailing-edge trigger allows the application of the same clock pulse to all flip-flops. The state of all flip-flops can be changed simultaneously, since the new state appears at the output terminals only after the clock pulse has returned to 0.

One way to make a flip-flop respond to the trailing edge of a pulse is to use AC coupling; i.e., insert a capacitor in the input. The capacitor forms a circuit capable of generating a spike as a response to a momentary change of input signal amplitude. A positive pulse emerges from such a circuit with a positive spike from the leading edge and a negative spike from the trailing edge. Trailing-edge operation is achieved by designing the flip-flop to neglect a positive spike and trigger on the occurrence of a negative spike. Another method for achieving state transition during the trailing edge of a clock pulse is to employ two flip-flop circuits: one to hold the output state until the trailing edge, and the other to sample the input information on the leading edge. Such a combination is called a master-slave flip-flop.
[Figure 6-10 Clocked master-slave JK flip-flop]

An example of a master-slave JK flip-flop constructed with NAND gates is shown in Fig. 6-10. It consists of two flip-flops; gates 1 through 4 form the master flip-flop and gates 5 through 8 form the slave flip-flop. The information present at the J and K inputs is transmitted to the master flip-flop on the leading edge of a clock pulse and held there until the trailing edge of the clock pulse occurs, after which it is allowed to pass through to the slave flip-flop. The clock input is normally 0, which keeps the outputs of gates 1 and 2 at the logic-1 level. This prevents the J and K inputs from affecting the master flip-flop. The slave flip-flop is a clocked RS type, with the master flip-flop supplying the inputs and the clock input being inverted by gate number 9. When the clock is 0, the output of gate 9 is 1, so that output Q is equal to Y and Q' is equal to Y'. When the leading edge of a clock pulse occurs, the master flip-flop is affected and may switch states. The slave flip-flop is isolated as long as the clock is at the logic-1 level, because the output of gate 9, being at 0, provides a 1 to both inputs of the NAND basic flip-flop of gates 7 and 8. When the clock input returns to 0, the master flip-flop is isolated from the J and K inputs, and the slave flip-flop goes to the same state as the master flip-flop.

The master-slave combination can be constructed for any type of flip-flop. A leading-edge flip-flop is converted to a master-slave by the addition of a clocked RS flip-flop with an inverted clock input to form the slave flip-flop. We shall assume throughout the rest of the book that all flip-flop types change state after the trailing edge of the clock pulse. Among the different types of flip-flops, the JK master-slave is the one used most in the design of digital systems.
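The two-phase behavior described above can be sketched at the behavioral level: the master samples the inputs on the leading edge, and the slave copies the master on the trailing edge. The class and method names below are our own; this models the timing discipline, not the nine-gate circuit of Fig. 6-10.

```python
def jk_next(q, j, k):
    """JK characteristic equation: Q(t+1) = JQ' + K'Q."""
    return (j & (1 - q)) | ((1 - k) & q)

class MasterSlaveJK:
    def __init__(self):
        self.y = 0   # master state (Y in the text)
        self.q = 0   # slave state, the externally visible output Q

    def leading_edge(self, j, k):
        """Master samples J and K; the output Q is not yet affected."""
        self.y = jk_next(self.q, j, k)

    def trailing_edge(self):
        """Slave copies the master after the pulse returns to 0."""
        self.q = self.y

ff = MasterSlaveJK()
ff.leading_edge(1, 1)      # toggle command applied during the pulse
assert ff.q == 0           # output unchanged while the clock is still 1
ff.trailing_edge()
assert ff.q == 1           # new state appears only after the trailing edge
```

Because the output changes only at the trailing edge, the input of one flip-flop can safely come from the output of another driven by the same clock pulse, which is exactly the property argued for in the timing-diagram discussion.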
Some JK flip-flops available in IC packages provide additional inputs for setting and clearing the flip-flop asynchronously. These inputs are usually called "direct set" and "direct clear." They affect the flip-flop on a positive (or negative) swing of a signal amplitude without the need of clock pulses. These inputs are useful for bringing the flip-flop to an initial state prior to its clocked operation.

6-4 ANALYSIS OF CLOCKED SEQUENTIAL CIRCUITS

The behavior of a sequential circuit is determined from the inputs, the outputs, and the state of its flip-flops. Both the outputs and the next state are a function of the inputs and the present state. The analysis of sequential circuits consists of obtaining a table or a diagram for the time sequence of inputs, outputs, and internal states. It is also possible to write Boolean expressions that describe the behavior of sequential circuits. However, these expressions must include the necessary time sequence either directly or indirectly.

A logic diagram is recognized as the circuit of a sequential circuit if it includes flip-flops. The flip-flops may be of any type, and the logic diagram may or may not include combinational gates. In this section we shall first introduce a specific example of a clocked sequential circuit and then present various methods for describing the behavior of sequential circuits. The specific example will be used throughout the discussion to illustrate the various methods.

An Example of a Sequential Circuit

An example of a clocked sequential circuit is shown in Fig. 6-11. It has one input variable x, one output variable y, and two clocked RS flip-flops labeled A and B. The cross connections from outputs of flip-flops to inputs of gates are not shown by line drawings so as to facilitate the tracing of the circuit. Instead, the connections are recognized from the letter symbol marked in each input.
For example, the input marked x' in gate number 1 designates an input from the complement of x. The second input marked A designates a connection to the normal output of flip-flop A. We shall assume trailing-edge triggering in both flip-flops and in the source that produces the external input x. Therefore, the signals for a given present state are available during the time from the termination of a clock pulse to the termination of the next clock pulse, at which time the circuit goes to the next state.

[Figure 6-11 Example of a clocked sequential circuit]

State Table

The time sequence of inputs, outputs, and flip-flop states may be enumerated in a state table.* The state table for the circuit of Fig. 6-11 is shown in Table 6-1. It consists of three sections labeled present state, next state, and output. The present state designates the state of the flip-flops before the occurrence of a clock pulse. The next state shows the state of the flip-flops after the application of a clock pulse, and the output section lists the value of the output variable during the present state. Both the next state and output sections have two columns, one for x = 0 and the other for x = 1.

*Switching circuit theory books call this table a transition table. They reserve the name state table for a table with internal states represented by arbitrary symbols.

Table 6-1 State Table for Circuit of Fig. 6-11

    Present |    Next state     |   Output
    state   |  x = 0    x = 1   | x = 0  x = 1
     A B    |   A B      A B    |   y      y
     0 0    |   0 0      0 1    |   0      0
     0 1    |   1 1      0 1    |   0      0
     1 0    |   1 0      0 0    |   0      1
     1 1    |   1 0      1 1    |   0      0

The derivation of the state table starts from an assumed initial state. The initial state of most practical sequential circuits is defined to be the state with 0's in all flip-flops. Some sequential circuits have a different initial state and some have none at all. In either case, the analysis can always start from any arbitrary state. In this example, we start deriving the state table from the initial state 00.
When the present state is 00, A = 0 and B = 0. From the logic diagram we see that with both flip-flops cleared and x = 0, none of the AND gates produce a logic-1 signal. Therefore, the next state remains unchanged. With AB = 00 and x = 1, gate number 2 produces a logic-1 signal at the S input of flip-flop B and gate number 3 produces a logic-1 signal at the R input of flip-flop A. When a clock pulse triggers the flip-flops, A is cleared and B is set, making the next state 01. This information is listed in the first row of the state table. In a similar manner, we can derive the next state starting from the other three possible present states.

In general, the next state is a function of the inputs, the present state, and the type of flip-flop used. With RS flip-flops, for example, we must remember that a 1 in input S sets the flip-flop and a 1 in input R clears the flip-flop, irrespective of its previous state. A 0 in both the S and R inputs leaves the flip-flop unchanged, while a 1 in both the S and R inputs shows a bad design and an indeterminate state table.

The entries for the output section are easier to derive. In this example, output y is equal to 1 only when x = 1, A = 1, and B = 0. Therefore, the output columns are marked with 0's except when the present state is 10 and input x = 1, for which y is marked with a 1.

The state table of any sequential circuit is obtained by the same procedure used in the example. In general, a sequential circuit with m flip-flops and n input variables will have 2^m rows, one for each state. The next state and output sections each will have 2^n columns, one for each input combination.

The external outputs of a sequential circuit may come from logic gates or from memory elements. The output section in the state table is necessary only if there are outputs from logic gates.
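The row-by-row derivation just described is mechanical, so it can be reproduced with a short program: apply the flip-flop input functions of Fig. 6-11 and the RS characteristic equation for every present state and input value. This is a sketch with our own function names; the input functions themselves (SA = Bx', RA = B'x, SB = A'x, RB = Ax') are read off the logic diagram as in the text.

```python
def rs_next(q, s, r):
    """RS characteristic equation: Q(t+1) = S + R'Q."""
    return s | ((1 - r) & q)

def next_state(a, b, x):
    """Next state (A, B) of the circuit of Fig. 6-11."""
    sa, ra = b & (1 - x), (1 - b) & x   # SA = Bx',  RA = B'x
    sb, rb = (1 - a) & x, a & (1 - x)   # SB = A'x,  RB = Ax'
    return rs_next(a, sa, ra), rs_next(b, sb, rb)

def output(a, b, x):
    """y = 1 only when A = 1, B = 0, and x = 1."""
    return a & (1 - b) & x

# Reproduce Table 6-1, one row per present state
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), next_state(a, b, 0), next_state(a, b, 1),
          output(a, b, 0), output(a, b, 1))

# Spot-check the first row: from 00, x = 0 holds and x = 1 goes to 01
assert next_state(0, 0, 0) == (0, 0)
assert next_state(0, 0, 1) == (0, 1)
```

Running the loop prints exactly the four rows of Table 6-1, including the single 1 in the output section at present state 10 with x = 1.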
Any external output taken directly from a flip-flop is already listed in the present state column of the state table. Therefore, the output section of the state table can be excluded if there are no external outputs from logic gates. A sequential circuit whose state table includes an output section is sometimes called a Mealy machine. A circuit whose state table does not list the outputs explicitly is called a Moore machine. Mealy and Moore are the names of the original investigators of the respective sequential circuit models.

State Diagram

The information available in a state table may be represented graphically in a state diagram. In this diagram, a state is represented by a circle, and the transition between states is indicated by directed lines connecting the circles. The state diagram of the sequential circuit of Fig. 6-11 is shown in Fig. 6-12. The binary number inside each circle identifies the state the circle represents. The directed lines are labeled with two binary numbers separated by a /. The input value that causes the state transition is labeled first; the number after the symbol / gives the value of the output during the present state. For example, the directed line from state 00 to 01 is labeled 1/0, meaning that the sequential circuit is in present state 00 while x = 1 and y = 0 and that on the termination of the next clock pulse, the circuit goes to next state 01. A directed line connecting a circle with itself indicates that no change of state occurs.

[Figure 6-12 State diagram for circuit of Fig. 6-11]

The state diagram provides the same information as the state table and is obtained directly from Table 6-1. There is no difference between a state table and a state diagram except in the manner of representation. The state table is easier to derive from a given logic diagram, and the state diagram follows directly from a state table.
The state diagram gives a pictorial view of state transitions and is in a form suitable for human interpretation of the circuit operation. The state diagram is often used as the initial design specification of a sequential circuit.

State Equations

A state equation (also known as an application equation) is an algebraic expression that specifies the conditions for a flip-flop state transition. The left side of the equation denotes the next state of a flip-flop and the right side is a Boolean function that specifies the present state conditions that make the next state equal to 1. A state equation is similar in form to a flip-flop characteristic equation, except that it specifies the next state conditions in terms of external input variables and other flip-flop values. The state equation is derived directly from a state table. For example, the state equation for flip-flop A is derived from inspection of Table 6-1. From the next state columns we note that flip-flop A goes to the 1-state four times: when x = 0 and AB = 01 or 10 or 11, or when x = 1 and AB = 11. This can be expressed algebraically in a state equation as follows:

    A(t + 1) = (A'B + AB' + AB)x' + ABx

The right-hand side of the state equation is a Boolean function for a present state. When this function is equal to 1, the occurrence of a clock pulse will cause flip-flop A to have a next state of 1. When the function is equal to 0, the clock pulse will cause A to have a next state of 0. The left side of the equation identifies the flip-flop by its letter symbol, followed by the time function designation (t + 1) to emphasize that this value is to be reached by the flip-flop one pulse sequence later. The state equation is a Boolean function with time included. It is applicable only in clocked sequential circuits, since A(t + 1) is defined to change value with the occurrence of a clock pulse at discrete instants of time.

The state equation for flip-flop A is simplified by means of a map as shown in Fig. 6-13(a).
With some algebraic manipulation, the function can be expressed in the following form:

    A(t + 1) = Bx' + (B'x)'A

[Figure 6-13 State equations for flip-flops A and B: (a) A(t + 1) = Bx' + (B'x)'A; (b) B(t + 1) = A'x + (Ax')'B]

If we let Bx' = S and B'x = R, we obtain the relation:

    A(t + 1) = S + R'A

which is the characteristic equation of an RS flip-flop (Fig. 6-4(d)). This relation between the state equation and the flip-flop characteristic equation can be justified from inspection of the logic diagram of Fig. 6-11. In it we see that the S input of flip-flop A is equal to the Boolean function Bx' and the R input is equal to B'x. Substituting these functions into the flip-flop characteristic equation results in the state equation for this sequential circuit.

The state equation for a flip-flop in a sequential circuit may be derived from a state table or from a logic diagram. The derivation from the state table consists of obtaining the Boolean function specifying the conditions that make the next state of the flip-flop 1. The derivation from a logic diagram consists of obtaining the functions of the flip-flop inputs and substituting them into the flip-flop characteristic equation. The derivation of the state equation for flip-flop B from the state table is shown in the map of Fig. 6-13(b). The 1's marked in the map are the present state and input combinations that cause the flip-flop to go to a next state of 1. These conditions are obtained directly from Table 6-1. The simplified form obtained in the map is manipulated algebraically, and the state equation obtained is:

    B(t + 1) = A'x + (Ax')'B

The state equation can be derived directly from the logic diagram.
From Fig. 6-11 we see that the signal for input S of flip-flop B is generated by the function A'x and the signal for input R by the function Ax'. Substituting S = A'x and R = Ax' into the RS flip-flop characteristic equation, given by

    B(t + 1) = S + R'B

we obtain the state equation derived above.

The state equations of all flip-flops, together with the output functions, fully specify a sequential circuit. They represent, algebraically, the same information a state table represents in tabular form and a state diagram in graphical form.

Flip-Flop Input Functions

The logic diagram of a sequential circuit consists of memory elements and gates. The type of flip-flops and their characteristic table specify the logical properties of the memory elements. The interconnections among the gates form a combinational circuit and may be specified algebraically with Boolean functions. Thus, knowledge of the type of flip-flops and a list of the Boolean functions of the combinational circuit provide all the information needed to draw the logic diagram of a sequential circuit. The part of the combinational circuit that generates external outputs is described algebraically by the circuit output functions. The part of the circuit that generates the inputs to flip-flops is described algebraically by a set of Boolean functions called flip-flop input functions, or sometimes input equations.

We shall adopt the convention of using two letters to designate a flip-flop input variable: the first to designate the name of the input and the second the name of the flip-flop. As an example, consider the following flip-flop input functions:

    JA = BC'x + B'Cx'
    KA = B + y

JA and KA designate two Boolean variables. The first letter in each denotes the J and K input, respectively, of a JK flip-flop. The second letter, A, is the symbol name of the flip-flop. The right side of each equation is a Boolean function for the corresponding flip-flop input variable.
The implementation of the two input functions is shown in the logic diagram of Fig. 6-14. The JK flip-flop has an output symbol A and two inputs labeled J and K. The combinational circuit drawn in the diagram is the implementation of the algebraic expressions given by the input functions. The outputs of the combinational circuit are denoted by JA and KA in the input functions and go to the J and K inputs, respectively, of flip-flop A.

Figure 6-14 Implementation of the flip-flop input functions JA = BC'x + B'Cx' and KA = B + y

From this example we see that a flip-flop input function is an algebraic expression for a combinational circuit. The two-letter designation is a variable name for an output of the combinational circuit. This output is always connected to the input (designated by the first letter) of a flip-flop (designated by the second letter).

The sequential circuit of Fig. 6-11 has one input x, one output y, and two RS flip-flops denoted by A and B. The logic diagram can be expressed algebraically with four flip-flop input functions and one circuit output function as follows:

SA = Bx'
RA = B'x
SB = A'x
RB = Ax'
y = ABx

This set of Boolean functions fully specifies the logic diagram. Variables SA and RA specify an RS flip-flop labeled A; variables SB and RB specify a second RS flip-flop labeled B. Variable y denotes the output. The Boolean expressions for the variables specify the combinational circuit part of the sequential circuit.

The flip-flop input functions constitute a convenient algebraic form for specifying a logic diagram of a sequential circuit. They imply the type of flip-flop from the first letter of the input variable, and they fully specify the combinational circuit that drives the flip-flop. Time is not included explicitly in these equations but is implied from the clock-pulse operation.
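The two-letter convention can be exercised directly. A sketch in Python (our own names): the example input functions JA and KA drive a JK flip-flop whose next state follows the JK characteristic equation Q(t + 1) = JQ' + K'Q.

```python
# Sketch: the example flip-flop input functions JA = BC'x + B'Cx'
# and KA = B + y feeding the JK characteristic equation.
def JA(B, C, x):           # J input of flip-flop A
    return (B & (1 - C) & x) | ((1 - B) & C & (1 - x))

def KA(B, y):              # K input of flip-flop A
    return B | y

def jk_next(J, K, Q):
    """JK flip-flop characteristic equation: Q(t+1) = JQ' + K'Q."""
    return (J & (1 - Q)) | ((1 - K) & Q)

# One clock pulse with A=0, B=1, C=0, x=1, y=0: JA = 1 and KA = 1,
# so flip-flop A is complemented.
A = jk_next(JA(1, 0, 1), KA(1, 0), 0)
print(A)   # 1
```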
It is sometimes convenient to specify a sequential circuit algebraically with circuit output functions and flip-flop input functions instead of drawing the logic diagram.

6-5 COUNTERS

A sequential circuit that goes through a prescribed sequence of states upon the application of input pulses is called a counter. The input pulses, called count pulses, may be clock pulses or may originate from an external source, and may occur at prescribed intervals of time or at random. The sequence of states in a counter may follow a binary count or any other sequence of states. Counters are found in almost all equipment containing digital logic. They are used for counting the number of occurrences of an event and are useful for generating timing sequences to control operations in a digital system.

Of the various sequences a counter may follow, the straight binary sequence is the simplest and most straightforward. A counter that follows the binary number sequence is called a binary counter. An n-bit binary counter consists of n flip-flops and can count in binary from 0 to 2^n - 1. As an example, the state diagram of a four-bit binary counter is drawn in Fig. 6-15. As seen from the state sequence listed inside each circle, the flip-flop outputs repeat the binary count sequence with a return to 0000 after the count of 1111. The directed lines between circles are not marked with numbers because a counter is considered to be void of inputs and outputs. This is because input pulses are implied in clocked sequential circuits and are not considered as input information. The outputs of a counter are the outputs of its flip-flops, so that the output information is available from the present state. The present state of a counter, as specified by the binary number inside a circle, remains unchanged as long as an input pulse is absent. The directed line points to the next state reached after the occurrence of a count pulse.
Counters may have any sequence and may be operated synchronously or asynchronously. They are used extensively in the design of digital systems and are classified as standard items. A wide variety of counter circuits are available in MSI packages. In this section we shall introduce a few simple counters and investigate their operation. A procedure for designing synchronous counters of any sequence is found in Sec. 7-5.

Figure 6-15 State diagram of a 4-bit binary counter

Binary Ripple Counter

A binary ripple counter is the most elementary digital circuit that performs the binary count function. It consists of a series connection of T flip-flops without any logic gates. Each flip-flop is triggered by the output of its preceding flip-flop. To understand the operation of the counter, it is necessary to go back to the state diagram of Fig. 6-15 and investigate the sequence of numbers inside the circles. Going through all 16 numbers in sequence, we note that the least significant bit is complemented with every state transition. The second significant bit is complemented during a transition from 1 to 0 of the least significant bit. Similarly, any significant bit in a binary count sequence is complemented during the transition from 1 to 0 of its preceding significant bit.

The binary ripple counter performs this binary count sequence with complementing T flip-flops as follows: the least significant flip-flop is complemented with every count pulse, and each succeeding flip-flop is complemented when its previous flip-flop goes from 1 to 0. Two logic diagrams of a four-bit binary ripple counter are shown in Fig. 6-16. Each consists of four T flip-flops with the output of one flip-flop going into the input of the next one on its left. The first flip-flop is complemented with each count pulse. The other flip-flops are complemented when the normal output of the preceding flip-flop changes from 1 to 0.
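The ripple mechanism above can be simulated in a few lines of Python (a sketch with our own names, not the book's): the first stage toggles on every count pulse, and a 1-to-0 transition of any stage toggles the stage above it.

```python
# Sketch of a binary ripple counter built from complementing T flip-flops.
def ripple_count(n_bits, pulses):
    q = [0] * n_bits                   # q[0] is the least significant bit
    for _ in range(pulses):
        i = 0
        while i < n_bits:
            q[i] ^= 1                  # this stage toggles
            if q[i] == 1:              # went 0 -> 1: ripple stops here
                break
            i += 1                     # went 1 -> 0: next stage toggles too
    return q

print(ripple_count(4, 6))   # [0, 1, 1, 0], i.e., binary 0110 = decimal 6
```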
Each flip-flop serves as a source for triggering the next flip-flop, and the signal propagates through the counter in a "ripple" fashion; i.e., the flip-flops essentially change one at a time in rapid succession. This is in contrast to a synchronous counter, where all flip-flops change state simultaneously with the incoming count pulse.

Figure 6-16 Asynchronous 4-bit binary counters: (a) with flip-flops triggered on the positive-going edge; (b) with clocked flip-flops triggered on the trailing edge of the pulse

The binary ripple counter of Fig. 6-16(a) uses unclocked flip-flops triggered on the positive-going edge of a changing signal amplitude. In a positive logic system, this means that the flip-flop starts changing states when its input goes from 0 to 1. Therefore, we must connect the complement output of a flip-flop to the T input of the next flip-flop, since the complement output goes from 0 to 1 (the required signal) when the normal output goes from 1 to 0 (the required numerical condition).

The circuit of Fig. 6-16(b) uses clocked flip-flops with trailing-edge triggering. In a positive logic system, this means that the change of state occurs during a signal transition from 1 to 0. Therefore, the normal output of one flip-flop is connected to the CP input of the next. The T inputs are connected permanently to a logic-1 signal so that the state transition is controlled entirely by the CP input. The timing diagram for each flip-flop of Fig. 6-16(b) is shown in Fig. 6-17. Note that except for flip-flop Q0, the state transition is not caused by the pulse input but by the signal transition from 1 to 0 of the previous flip-flop.

Synchronous Binary Counter

In a synchronous counter, all flip-flops are triggered simultaneously by the count pulse. As shown in Fig.
6-18, all CP terminals are connected to the count pulse, but a flip-flop is complemented only if its T input is equal to 1. The condition for the state transition is now determined from the present state values of the other flip-flops. These conditions are derived from inspection of the state diagram of Fig. 6-15.

Figure 6-17 Timing diagram for counter of Figure 6-16(b)

Figure 6-18 Synchronous 4-bit binary counter

Going through the sequence of binary numbers, we note that a bit is complemented if all previous less significant bits in the present state are equal to 1. Therefore, a flip-flop in a synchronous counter is allowed to be complemented by a count pulse if all previous flip-flops on its right have normal outputs equal to 1. This is shown in the logic diagram and also expressed algebraically by the following flip-flop input functions:

TQ0 = 1
TQ1 = Q0
TQ2 = Q0Q1
TQ3 = Q0Q1Q2

An n-bit synchronous binary counter consists of n T flip-flops. The flip-flop input functions can be written in a concise form as follows:

TQi = Π Qj (product over j = 0 to i - 1), for i = 1, 2, 3, ..., n - 1

where Π is a product sign designating the AND operation. The T input of flip-flop Qi receives the output of an AND gate whose inputs come from flip-flops Qj for j = 0 to j = i - 1. In addition, the CP terminal in each flip-flop receives the input count pulses.

The advantage of a synchronous counter over a ripple counter is its higher speed of operation. When the input count pulse arrives, all flip-flops are triggered and may change states simultaneously. The delay from the time the input is applied until all flip-flops reach the next state is equal to the signal transition time of a single flip-flop. The signals propagate through the AND gates during the time between pulses, so that the signals at the T inputs of the flip-flops have settled to a steady-state value waiting for the next count pulse.
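The T input functions above can be evaluated for all stages at once, since all flip-flops are clocked together. A minimal Python sketch (names ours):

```python
# Sketch of a synchronous binary counter: TQ0 = 1 and
# TQi = Q0*Q1*...*Q(i-1); every flip-flop is clocked by the same pulse.
def sync_count_step(q):
    """One count pulse applied to the list of flip-flop outputs (LSB first)."""
    t = []
    prod = 1
    for qi in q:
        t.append(prod)      # TQi is the AND of all lower-order outputs
        prod &= qi
    return [qi ^ ti for qi, ti in zip(q, t)]

q = [0, 0, 0, 0]
for _ in range(10):
    q = sync_count_step(q)
print(q)   # [0, 1, 0, 1], i.e., binary 1010 = decimal 10
```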
In a ripple counter, the signal propagates, or ripples, through all the flip-flops, and when all the flip-flops change state, as for example from 1111 to 0000, the total delay is equal to four flip-flop transitions. Ripple counters have logic simplicity and therefore cost less. They are preferable if speed of operation is not critical.

Decimal BCD Ripple Counter

A decimal counter follows a sequence of 10 states and returns to 0 after the count of 9. Such a counter must have at least four flip-flops to represent each decimal digit, since a decimal digit is represented by a binary code with at least four bits. The sequence of states in a decimal counter is dictated by the binary code used to represent a decimal digit. If BCD is used (Table 1-2), the sequence of states is as shown in the state diagram of Fig. 6-19. This is similar to a binary counter except that the state after 1001 (the code for decimal digit 9) is 0000 (the code for decimal digit 0). The design of synchronous decimal counters using any code is presented in Sec. 7-5.

The design of a decimal ripple counter, or of any ripple counter not following the binary sequence, is not a straightforward procedure. The formal tools of logic design can serve only as a guide. A satisfactory end product requires the ingenuity and imagination of the designer.

The logic diagram of a BCD ripple counter is shown in Fig. 6-20. The four outputs are designated by the letter symbol Q with a numeric subscript equal to the binary weight of the corresponding bit in the BCD code. The flip-flops are assumed to trigger on the trailing edge; i.e., when the CP signal goes from 1 to 0. Note that the output of Q1 is applied to the CP inputs of both Q2 and Q8, and the output of Q2 is applied to the CP input of Q4. The J and K inputs are connected either to a permanent logic-1 signal or to outputs of flip-flops, as shown in the diagram.
A ripple counter is an asynchronous sequential circuit and cannot be described by the Boolean equations developed for describing clocked sequential circuits. Signals that affect the flip-flop transitions depend on the order in which they change from 1 to 0. The operation of the counter can be explained by a list of conditions for flip-flop transitions. These conditions are derived from the logic diagram and from knowledge of how a JK flip-flop operates. Remember that when the CP input goes from 1 to 0, the flip-flop is set if J = 1, is cleared if K = 1, is complemented if both J = K = 1, and is left unchanged if both J = K = 0. The following are the conditions for each flip-flop state transition:

Figure 6-19 State diagram of decimal BCD counter

Figure 6-20 Logic diagram of a decade BCD ripple counter

Flip-flop Q1 is complemented on the trailing edge of every count pulse.

Flip-flop Q2 is complemented if Q8 = 0 and Q1 switches from 1 to 0. It is cleared if Q8 = 1 and Q1 switches from 1 to 0.

Flip-flop Q4 is complemented when Q2 switches from 1 to 0.

Flip-flop Q8 is complemented if Q2 = 1 and Q4 = 1 and Q1 switches from 1 to 0. It is cleared if Q2 = 0 or Q4 = 0 when Q1 switches from 1 to 0.

To verify that these conditions result in the sequence required by a BCD counter, it is necessary to verify that the flip-flop transitions indeed follow the sequence of states specified by the state diagram of Fig. 6-19. Another way to verify the operation of the counter is to derive the timing diagram for each flip-flop from the conditions listed above. This diagram is shown in Fig. 6-21, from which we see that the flip-flops follow the required sequence of states.

The decimal counter of Fig. 6-20 is a one-decade counter; i.e., it counts from 0 to 9 only. To count higher than 9, we need additional decade counters, one for each decimal digit.
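The transition conditions above can be simulated to confirm the 0-through-9 sequence. A Python sketch (names ours; following the diagram, the JK inputs are taken to be sampled just before the triggering edge, so the old values of Q2 and Q4 determine Q8):

```python
# Sketch of the BCD ripple counter conditions: J2 = Q8', K2 = 1;
# J8 = Q2*Q4, K8 = 1; Q4 toggles on the 1-to-0 transition of Q2.
def bcd_pulse(q1, q2, q4, q8):
    """One count pulse applied to the decade; returns new (Q1, Q2, Q4, Q8)."""
    trig = (q1 == 1)                 # Q1 is about to switch from 1 to 0
    q1 ^= 1                          # Q1 toggles on every count pulse
    if trig:                         # Q1's 1-to-0 edge clocks Q2 and Q8
        new_q2 = (1 - q2) if q8 == 0 else 0       # complemented or cleared
        new_q8 = (1 - q8) if (q2 and q4) else 0   # complemented or cleared
        if q2 == 1 and new_q2 == 0:  # Q2 switched 1 -> 0: clocks Q4
            q4 ^= 1
        q2, q8 = new_q2, new_q8
    return q1, q2, q4, q8

state, seq = (0, 0, 0, 0), []
for _ in range(10):
    state = bcd_pulse(*state)
    seq.append(state[3] * 8 + state[2] * 4 + state[1] * 2 + state[0])
print(seq)   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
```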
For example, a three-decade decimal counter is shown in Fig. 6-22. Each box in the diagram represents a BCD decade counter similar to the one in Fig. 6-20, so that the total count ranges from 000 to 999. The count-pulse inputs to the second and third decades come from Q8 of the previous decade. When output Q8 of one decade goes from 1 to 0, it triggers flip-flop Q1 of the next decade, while its own decade goes from 1001 to 0000. Thus the carry from Q8 in one decade is used to trigger the count in the next decade.

Figure 6-21 Timing diagram for decimal counter of Figure 6-20

Figure 6-22 Block diagram of a 3-decade decimal BCD counter

6-6 SHIFT REGISTER

Binary information is stored in groups of flip-flops called registers. A group of n flip-flops, called an n-bit register, is capable of storing n bits of information. In this section we shall investigate the operation of an often used register called a shift register. This register has many applications in the design of digital systems and is available in IC form as an MSI function.

The logic diagram of a four-bit shift register is shown in Fig. 6-23. It is composed of four clocked flip-flops with D inputs. The register is used for temporary storage of a four-bit datum. Data can be transferred in and out of the register in three different modes dictated by control signals P, SR, and SL. The three modes are parallel transfer, serial shift-right transfer, and serial shift-left transfer, respectively. The three transfer modes operate as follows:

PARALLEL TRANSFER: with P = 1, SR = 0, SL = 0, the data available on inputs x0 through x3 are transferred into A0 through A3, respectively. This is called parallel transfer because all flip-flops receive new
data during one clock pulse. Data stored in the register are transferred out in parallel by sampling the outputs of the four flip-flops.

SHIFT-RIGHT: with SR = 1, P = 0, SL = 0, the data in the register are shifted to the right upon the occurrence of each clock pulse. Each flip-flop receives new data from its neighbor on the left, and flip-flop A3 receives its data from the external input IR. Data are transferred out by sampling the output of flip-flop A0. This is called serial transfer because one bit is transferred in and out of the register during one clock pulse.

SHIFT-LEFT: with SL = 1, P = 0, SR = 0, the data in the register are shifted to the left. Each flip-flop receives its new data from the neighbor on its right, and A0 receives its data from the external input IL. Data are transferred out by sampling the output of A3. This again is a serial transfer, but its direction is toward the left.

Figure 6-23 Logic diagram of a shift register

The logic diagram of Fig. 6-23 may be specified algebraically by four flip-flop input functions as follows:

DA0 = IL·SL + x0·P + A1·SR
DA1 = A0·SL + x1·P + A2·SR
DA2 = A1·SL + x2·P + A3·SR
DA3 = A2·SL + x3·P + IR·SR

The D input of each flip-flop receives data from three different sources. The first, second, and third terms in each function give the conditions for shift-left, parallel transfer, and shift-right, respectively. Only one of the three control signals should be equal to 1 at any time; if two or three control signals are equal to logic-1 simultaneously, the D input of a flip-flop will respond to the OR of the input data bits.

There are four flip-flops and nine inputs (plus the CP input) in the logic diagram of Fig. 6-23, so that a state table or diagram will consist of 16 states and 2^9 input combinations. Only some of the input combinations may occur during normal operation.
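The four D input functions can be evaluated for all flip-flops at once, since the register is clocked in parallel. A Python sketch (names ours; exactly one of P, SR, SL should be 1):

```python
# Sketch of the shift register's D input functions, one clock pulse.
# a = [A0, A1, A2, A3]; x = [x0, x1, x2, x3] are the parallel inputs.
def shift_register_pulse(a, x, IR, IL, P, SR, SL):
    return [
        (IL & SL)   | (x[0] & P) | (a[1] & SR),   # DA0
        (a[0] & SL) | (x[1] & P) | (a[2] & SR),   # DA1
        (a[1] & SL) | (x[2] & P) | (a[3] & SR),   # DA2
        (a[2] & SL) | (x[3] & P) | (IR & SR),     # DA3
    ]

a = shift_register_pulse([0, 0, 0, 0], [1, 0, 1, 1], 0, 0, P=1, SR=0, SL=0)
print(a)   # [1, 0, 1, 1]  parallel load
a = shift_register_pulse(a, [0, 0, 0, 0], IR=0, IL=0, P=0, SR=1, SL=0)
print(a)   # [0, 1, 1, 0]  shifted right; A3 filled from IR
```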
Nevertheless, a table or diagram showing all allowable input combinations would be excessively complicated. The state table for this particular sequential circuit may be broken into three parts, one for each transfer mode, since the three modes are mutually exclusive. For example, the state table for the shift-left mode is listed in Table 6-2. In this mode, it is assumed that SL = 1 and that the input to the circuit is IL. An output section is not included in the table because the output is equal to the present state of A3.

Table 6-2 State Table of a Shift-Left Register (SL = 1, SR = 0, P = 0)

Present state      Next state, IL = 1    Next state, IL = 0
A3 A2 A1 A0        A3 A2 A1 A0           A3 A2 A1 A0
 0  0  0  0         0  0  0  1            0  0  0  0
 0  0  0  1         0  0  1  1            0  0  1  0
 0  0  1  0         0  1  0  1            0  1  0  0
 0  0  1  1         0  1  1  1            0  1  1  0
 0  1  0  0         1  0  0  1            1  0  0  0
 0  1  0  1         1  0  1  1            1  0  1  0
 0  1  1  0         1  1  0  1            1  1  0  0
 0  1  1  1         1  1  1  1            1  1  1  0
 1  0  0  0         0  0  0  1            0  0  0  0
 1  0  0  1         0  0  1  1            0  0  1  0
 1  0  1  0         0  1  0  1            0  1  0  0
 1  0  1  1         0  1  1  1            0  1  1  0
 1  1  0  0         1  0  0  1            1  0  0  0
 1  1  0  1         1  0  1  1            1  0  1  0
 1  1  1  0         1  1  0  1            1  1  0  0
 1  1  1  1         1  1  1  1            1  1  1  0

A binary number stored in a register is multiplied by 2 when it is shifted once to the left and divided by 2 when it is shifted once to the right. However, this is true only if the bit shifted out is not lost. To illustrate, consider a five-bit shift register with the number 01100 stored in it. This binary number is equivalent to decimal 12. When shifted once to the left, the content of the register becomes 11000, the equivalent of decimal 24.
Shifting the original number once to the right changes the content of the register to 00110, the equivalent of decimal 6.

A shift register can temporarily store binary data and transfer these data in and out of the register. Strictly speaking, a shift register transfers data serially, one bit at a time. The transfer of data in parallel is an added feature included in most MSI packages so as to provide a more flexible register.

PROBLEMS

6-1. Show the logic diagram of a clocked RS flip-flop with four NAND gates.

6-2. Show the logic diagram of a clocked D flip-flop with AND/NOR gates.

6-3. Show that the clocked D flip-flop of Fig. 6-5(a) can be reduced by one gate.

6-4. Obtain the logic diagram of a master-slave JK flip-flop with AND and NOR gates. Include a provision for setting and clearing the flip-flop asynchronously (without a clock).

6-5. Consider a JK' flip-flop; i.e., a JK flip-flop with an inverter between external input K' and internal input K.
(a) Obtain the flip-flop characteristic table.
(b) Obtain the characteristic equation.
(c) Show that tying the two external inputs together forms a D flip-flop.

6-6. A set-dominate flip-flop has a set and a reset input. It differs from a conventional RS flip-flop in that an attempt to simultaneously set and reset results in setting the flip-flop.
(a) Obtain the characteristic table and characteristic equation for the set-dominate flip-flop.
(b) Obtain a logic diagram for an asynchronous set-dominate flip-flop.

6-7. The logic diagram of Fig. P6-7 has an input x and clock pulses (CP) as shown. Assume positive logic and that the flip-flop changes state at the trailing edge of the positive pulse. Draw the timing diagram of x, x', A, A', flip-flop inputs S and R, and output y. Start by assuming that the flip-flop is initially in the set state and later justify your assumption. What is the function of this circuit?

6-8. For the circuit of Fig.
P6-7:
(a) Obtain the flip-flop input functions and the output Boolean function.
(b) Obtain the state table and state diagram.
(c) Derive the state equation from the state table and again directly from the logic diagram, and show that both results are the same.

6-9. The full-adder of Fig. P6-9 receives two external inputs x and y; the third input z comes from the output of a D flip-flop. The carry output is transferred to the flip-flop every clock pulse. The external S output gives the sum of x, y, and z. Assume that x and y change after the trailing edge of the clock pulse. Obtain the state table and state diagram of the sequential circuit.

6-10. Derive the state table and state diagram of the sequential circuit shown in Fig. P6-10. What is the function of the circuit?

6-11. A sequential circuit has four flip-flops A, B, C, D and an input x. It is described by the following state equations:

A(t + 1) = (CD' + C'D)x + (CD + C'D')x'
B(t + 1) = A
C(t + 1) = B
D(t + 1) = C

Draw the logic diagram using D flip-flops and obtain the state diagram.

6-12. A sequential circuit has two flip-flops (A and B), two inputs (x and y), and an output (z). The flip-flop input functions and the circuit output function are as follows:

JA = xB + y'B'
KA = xy'B'
JB = xA'
KB = xy' + A
z = xyA + x'y'B

Obtain the logic diagram, state table, state diagram, and state equations.

6-13. Draw the timing diagram for the asynchronous binary counter of Fig. 6-16(a).

6-14. Draw the timing diagram for the decimal counter of Fig. 6-20 when negative logic signals are employed.

6-15. The asynchronous counter shown in Fig. P6-15 uses clocked flip-flops that trigger at the trailing edge of the pulse. Determine the sequence of states starting from ABC = 000. What happens if the counter is initially in state 110 or 111?

6-16. A count-down counter is a counter with a reverse count; i.e., the binary count is reduced by one for every count pulse.
(a) Draw the state diagram of a four-bit count-down binary counter.
(b) Show that a flip-flop should be complemented when its previous flip-flop goes from 0 to 1. Draw the logic diagram of a four-bit count-down ripple binary counter.
(c) Draw the logic diagram of a synchronous four-bit count-down binary counter.
(d) Draw the logic diagram of a counter that counts either up or down. Use an external input x so that x = 1 for count up and x = 0 for count down.

6-17. The state equations of a 2-out-of-5 counter are as follows:

A(t + 1) = E
B(t + 1) = A
C(t + 1) = A'B + AC
D(t + 1) = A'C + AB
E(t + 1) = D

Determine the count sequence starting from the state ABCDE = 01100. Explain the reason for the name.

6-18. Given a six-bit shift-right register, the input to the left-most flip-flop is 101101. Assume that the register is initially cleared. Show the contents of the register after each clock pulse. Also draw a timing diagram for the output of each flip-flop.

6-19. A ring-counter is a shift-right register with the normal output of the right-most flip-flop connected to the input of the left-most flip-flop. Obtain the count sequence of a ring-counter starting with the count of 1000.

6-20. A switch-tail ring-counter is a shift-right register with the complement output of the right-most flip-flop connected to the input of the left-most flip-flop. Obtain the count sequence of the counter starting with the count of 0000.

6-21. A feedback shift-register is a shift-register whose serial input is a function of the present state of the register. Obtain the state diagram of a feedback shift-right register whose serial input is IR = A2A0 + A2'A0' (see Fig. 6-23).

7 DESIGN OF CLOCKED SEQUENTIAL CIRCUITS

7-1 INTRODUCTION

The design of a clocked sequential circuit starts from a set of specifications and culminates in a logic diagram or a list of Boolean functions from which the logic diagram can be obtained.
In contrast to a combinational circuit, which is fully specified by a truth table, a sequential circuit requires a state table for its specification. The first step in the design of sequential circuits is to obtain a state table or an equivalent representation, such as a state diagram or state equations.

A synchronous sequential circuit is made up of flip-flops and combinational gates. The design of the circuit consists of choosing the flip-flops and then finding a combinational gate structure which, together with the flip-flops, produces a circuit that fulfills the stated specifications. The number of flip-flops is determined from the number of states needed in the circuit. The combinational circuit is derived from the state table by methods presented in this chapter. In fact, once the type and number of flip-flops are determined, the design process involves a transformation from the sequential circuit problem into a combinational circuit problem. In this way the techniques of combinational circuit design can be applied.

Any design process must consider the problem of minimizing the cost of the final circuit. The two most obvious cost reductions are reductions in the number of flip-flops and the number of gates. Because these two items seem the most obvious, they have been extensively studied and investigated. In fact, a large portion of the subject of switching theory is concerned with finding algorithms for minimizing the number of flip-flops and gates in sequential circuits. However, the reader is referred to Sec. 5-4 for a discussion of other criteria that must be considered in a practical situation.

The reduction of the number of flip-flops in a sequential circuit is referred to as the state reduction problem. State reduction algorithms are concerned with procedures for reducing the number of states in a state table while keeping the external input-output requirements unchanged.
Since n flip-flops produce 2^n states, a reduction in the number of states may (or may not) result in a reduction in the number of flip-flops. An unpredictable effect of reducing the number of flip-flops is that sometimes the equivalent circuit (with fewer flip-flops) may require more combinational gates.

The cost of the combinational circuit part of a sequential circuit can be reduced by using the known simplification methods for combinational circuits. However, there is another factor that comes into play in minimizing the combinational gates, known as the state assignment problem. State assignment procedures are concerned with methods of assigning binary values to states in such a way as to reduce the cost of the combinational circuit that drives the flip-flops. This is particularly helpful when a sequential circuit is viewed from its external input-output terminals. Such a circuit may follow a sequence of internal states, but the binary values of the individual states may be of no consequence as long as the circuit produces the required sequence of outputs for any given sequence of inputs. This does not apply to circuits whose external outputs are taken directly from flip-flops with binary sequences fully specified, as in a binary counter.

Most digital systems considered in this book involve the use of registers with a prescribed number of flip-flops and binary value assignment. State reduction and state assignment procedures offer very little help in reducing the components in such circuits. However, state reduction and assignment play an important part in the design of certain sequential circuits and provide guidance to the problem of equipment minimization. For this reason we shall explain these two problems by an illustrative example in the next section.
The enumeration of all possible algorithms and procedures for state reduction and state assignment is beyond the scope of this book.*

7-2 THE STATE REDUCTION AND ASSIGNMENT PROBLEMS

We shall illustrate the need for state reduction and state assignment with an example. We start with a sequential circuit whose specification is given in the state diagram of Fig. 7-1. In this example, only the input-output sequences are important; the internal states are used merely to provide the required sequences. For this reason, the states marked inside the circles are denoted by letter symbols instead of their binary values. This is in contrast to a binary counter, where the binary value sequences of the states themselves are taken as the outputs.

*The interested reader will find these two topics covered in detail in any book on switching circuit theory (see references 1-7).

Figure 7-1 State diagram

There are an infinite number of input sequences that may be applied to the circuit; each will result in a unique output sequence. As an example, consider the input sequence 01010110100 starting from the initial state a. Each input of 0 or 1 will produce an output of 0 or 1 and cause the circuit to go to the next state. From the state diagram we obtain the output and state sequence for the given input sequence as follows: with the circuit in initial state a, an input of 0 produces an output of 0 and the circuit remains in state a. With present state a and input of 1, the output is 0 and the next state is b. With present state b and input of 0, the output is 0 and the next state is c. Continuing this process, we find the complete sequence to be as follows:

state:   a a b c d e f f g f g a
input:     0 1 0 1 0 1 1 0 1 0 0
output:    0 0 0 0 0 1 1 0 1 0 0

In each column we have the present state, input value, and output value. The next state is written at the top of the next column.
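The trace above is mechanical and easy to reproduce. A Python sketch (our own transcription of the Fig. 7-1 diagram as a lookup table):

```python
# Each state maps to ((next state, output) for x = 0, (next, output) for x = 1),
# transcribed from the state diagram of Fig. 7-1.
diagram = {
    'a': (('a', 0), ('b', 0)),
    'b': (('c', 0), ('d', 0)),
    'c': (('a', 0), ('d', 0)),
    'd': (('e', 0), ('f', 1)),
    'e': (('a', 0), ('f', 1)),
    'f': (('g', 0), ('f', 1)),
    'g': (('a', 0), ('f', 1)),
}

state, outputs = 'a', []
for x in '01010110100':
    state, out = diagram[state][int(x)]
    outputs.append(str(out))
print(''.join(outputs))   # 00000110100
```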
It is important to realize that in this circuit the states themselves are of secondary importance, because we are interested only in the output sequences caused by input sequences.

Now let us assume that we have found a sequential circuit whose state diagram has fewer than seven states and we wish to compare it with the circuit whose state diagram is given by Fig. 7-1. If identical input sequences are applied to the two circuits and identical outputs occur for all input sequences, then the two circuits are said to be equivalent (as far as the input-output is concerned) and one may be replaced by the other. The problem of state reduction is to find ways of reducing the number of states in a sequential circuit without altering the input-output relations.

We shall now proceed to reduce the number of states for this example. First, we need the state table; it is more convenient to apply procedures for state reduction to state tables than to state diagrams. The state table of the circuit is listed in Table 7-1 and is obtained directly from the state diagram of Fig. 7-1.

An algorithm for the state reduction of a completely specified state table is given here without proof: "Two states are said to be equivalent if, for each member of the set of inputs, they give exactly the same output and send the circuit either to the same state or to an equivalent state. When two states are equivalent, one of them can be removed without altering the input-output relations."

We shall apply this algorithm to Table 7-1. Going through the state table, we look for two present states that go to the same next state and have the same output for both input combinations. States g and e are two such states: they both go to states a and f and have outputs of 0 and 1 for x = 0 and x = 1, respectively. Therefore, states g and e are equivalent; one of them can be removed. The procedure of removing a state and replacing it by its equivalent is demonstrated in Table 7-2.
The row with present state g is crossed out and state g is replaced by state e everywhere it occurs in the next state columns. Present state f now has next states e and f and outputs 0 and 1 for x = 0 and x = 1, respectively. The same next states and outputs appear in the row with present state d. Therefore, states f and d are equivalent; state f can be removed and replaced by d. The final reduced table is shown in Table 7-3.

Table 7-1  State Table

                   Next state        Output
Present state     x=0     x=1      x=0    x=1
      a            a       b        0      0
      b            c       d        0      0
      c            a       d        0      0
      d            e       f        0      1
      e            a       f        0      1
      f            g       f        0      1
      g            a       f        0      1

Table 7-2  Reducing the State Table

                   Next state        Output
Present state     x=0     x=1      x=0    x=1
      a            a       b        0      0
      b            c       d        0      0
      c            a       d        0      0
      d            e       f        0      1
      e            a       f        0      1
      f            g->e    f        0      1
      g            a       f        0      1    (row crossed out; g replaced by e)

Table 7-3  Reduced State Table

                   Next state        Output
Present state     x=0     x=1      x=0    x=1
      a            a       b        0      0
      b            c       d        0      0
      c            a       d        0      0
      d            e       d        0      1
      e            a       d        0      1

The state diagram for the reduced table consists of only five states and is shown in Fig. 7-2. This state diagram satisfies the original input-output specifications and will produce the required output sequence for any given input sequence. The following list, derived from the state diagram of Fig. 7-2, is for the input sequence used previously. We note that the same output sequence results, although the state sequence is different:

state:   a  a  b  c  d  e  d  d  e  d  e  a
input:      0  1  0  1  0  1  1  0  1  0  0
output:     0  0  0  0  0  1  1  0  1  0  0

In fact, this sequence is exactly the same as that obtained for Fig. 7-1 if we replace e by g and d by f.

It is worth noting that reduction in the number of states of a sequential circuit is possible only if one is interested solely in external input-output relations. When external outputs are taken directly from flip-flops, the outputs must be independent of the number of states before state reduction algorithms are applied.

Figure 7-2  Reduced state diagram

The sequential circuit of this example was reduced from seven to five states.
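The merging step just performed can be sketched in code. The following is an illustrative sketch (not from the text), with the Table 7-1 entries written as Python dictionaries; it merges only states whose rows become identical after substitution, which is all this example requires.

```python
# Table 7-1: state -> ([next states for x=0, x=1], [outputs for x=0, x=1])
table = {
    'a': (['a', 'b'], [0, 0]),
    'b': (['c', 'd'], [0, 0]),
    'c': (['a', 'd'], [0, 0]),
    'd': (['e', 'f'], [0, 1]),
    'e': (['a', 'f'], [0, 1]),
    'f': (['g', 'f'], [0, 1]),
    'g': (['a', 'f'], [0, 1]),
}

def reduce_states(table):
    """Repeatedly merge pairs of states with identical rows (same next
    states and same outputs), replacing the removed state everywhere."""
    changed = True
    while changed:
        changed = False
        states = sorted(table)
        for i, s in enumerate(states):
            for t in states[i + 1:]:
                if table[s] == table[t]:      # s and t are equivalent
                    del table[t]              # remove t, keep s
                    for nxt, _ in table.values():
                        for k, n in enumerate(nxt):
                            if n == t:
                                nxt[k] = s    # replace t by s everywhere
                    changed = True
                    break
            if changed:
                break
    return table

reduced = reduce_states(table)
print(sorted(reduced))    # five states remain: ['a', 'b', 'c', 'd', 'e']
```

The first pass removes g (merged into e); the second removes f (merged into d), reproducing Table 7-3.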
In either case, the representation of the states with physical components requires three flip-flops, because m flip-flops can represent up to 2^m distinct states. With three flip-flops we can formulate up to eight binary states, denoted by the binary numbers 000 to 111, with each bit designating the state of one flip-flop. If the state table of Table 7-1 is used, we must assign binary values to seven states; the remaining state is unused. If the state table of Table 7-3 is used, only five states need binary assignment, and we are left with three unused states. Unused states are treated as don't-care conditions during the design of the circuit. Since don't-care conditions usually help in obtaining a simpler Boolean function, it is more likely that the circuit with five states will require fewer combinational gates than the one with seven states. In any case, the reduction from seven to five states does not reduce the number of flip-flops. In general, reducing the number of states in a state table is likely to result in a circuit with less equipment. However, the fact that a state table has been reduced to fewer states does not guarantee a saving in the number of flip-flops or the number of gates.

It is now necessary to assign binary values to each state to replace the letter symbols used thus far. Remember that, in the present example, the binary value of the states is immaterial as long as their sequence maintains the proper input-output relations. For this reason, any binary number assignment is satisfactory as long as each state is assigned a unique number. Three examples of possible binary assignments are shown in Table 7-4 for the five states of the reduced table. Assignment 1 is a straight binary assignment for the sequence of states from a to e. The other two assignments are chosen arbitrarily. In fact, there are 140 distinct assignments for this circuit (11).
Table 7-4  Three Possible Binary State Assignments

State    Assignment 1    Assignment 2    Assignment 3
  a          001             000             000
  b          010             010             100
  c          011             011             010
  d          100             101             101
  e          101             111             011

Table 7-5 is the reduced state table with binary assignment 1 substituted for the letter symbols of the five states.* It is obvious that a different binary assignment will result in a state table with different binary values for the states, while the input-output relations remain the same. The binary form of the state table is used to derive the combinational circuit part of the sequential circuit. The complexity of the combinational circuit obtained depends on the binary state assignment chosen.

In Sec. 7-3 we introduce a procedure for obtaining the combinational circuit from the state table. The effect of different binary assignments is demonstrated in Sec. 7-4 (Examples 7-1 and 7-2), where the design of the circuit is completed. Various procedures have been suggested that lead to a particular binary assignment from the many available. The most common criterion is that the chosen assignment should result in a simple combinational circuit for the flip-flop inputs. However, to date, there are no state assignment procedures that guarantee a minimal-cost combinational circuit. State assignment is one of the challenging problems of switching theory. The interested reader will find a rich and growing literature on this topic. Techniques for dealing with the state assignment problem are beyond the scope of this book.

7-3  FLIP-FLOP EXCITATION TABLES

The characteristic tables for the various flip-flops were presented in Sec. 6-2. A characteristic table defines the logical property of the flip-flop and completely characterizes its operation. Integrated circuit flip-flops are sometimes defined by a characteristic table tabulated somewhat differently. This second form of the characteristic tables for RS, JK, D, and T flip-flops is shown in Table 7-6.
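The characteristic tables of Table 7-6 can be summarized as small next-state functions. The following Python sketch (an illustration, not part of the text) encodes the four tables, with None marking the indeterminate RS case S = R = 1.

```python
# Each function returns Q(t+1) given the inputs and the present state q.
def rs_next(s, r, q):
    if s and r:
        return None        # indeterminate: S = R = 1 is not allowed
    if s:
        return 1           # set
    if r:
        return 0           # clear
    return q               # no change

def jk_next(j, k, q):
    if j and k:
        return 1 - q       # complement: Q(t+1) = Q'(t)
    if j:
        return 1
    if k:
        return 0
    return q

def d_next(d, q):
    return d               # next state follows D, independent of Q(t)

def t_next(t, q):
    return 1 - q if t else q   # toggle when T = 1

print(jk_next(1, 1, 0))    # 1: J = K = 1 complements the flip-flop
```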
They represent the same information as the characteristic tables of Figs. 6-4(c) through 6-7(c). Table 7-6 defines the state of each flip-flop as a function of its inputs and previous state. Q(t) refers to the present state and Q(t + 1) to the next state after the occurrence of a clock pulse.

*A state table with binary assignment is sometimes called a transition table.

Table 7-5  Reduced State Table with Binary Assignment 1

                   Next state        Output
Present state     x=0     x=1      x=0    x=1
     001          001     010       0      0
     010          011     100       0      0
     011          001     100       0      0
     100          101     100       0      1
     101          001     100       0      1

Table 7-6  Flip-Flop Characteristic Tables

(a) RS                      (b) JK
 S  R | Q(t+1)              J  K | Q(t+1)
 0  0 |  Q(t)               0  0 |  Q(t)
 0  1 |   0                 0  1 |   0
 1  0 |   1                 1  0 |   1
 1  1 |   ?                 1  1 |  Q'(t)

(c) D                       (d) T
 D | Q(t+1)                 T | Q(t+1)
 0 |   0                    0 |  Q(t)
 1 |   1                    1 |  Q'(t)

The characteristic table for the RS flip-flop shows that the next state is equal to the present state when both inputs S and R are 0. When the R input is equal to 1, the next clock pulse clears the flip-flop. When the S input is equal to 1, the next clock pulse sets the flip-flop. The question mark for the next state when S and R are both equal to 1 designates an indeterminate next state. The table for the JK flip-flop is the same as that for the RS when J and K are replaced by S and R, respectively, except for the indeterminate case. When both J and K are equal to 1, the next state is equal to the complement of the present state; i.e., Q(t + 1) = Q'(t). The next state of the D flip-flop is completely dependent on the input D and independent of the present state. The next state of the T flip-flop is the same as the present state if T = 0 and complemented if T = 1.

The characteristic table is useful for analysis and for defining the operation of the flip-flop. It specifies the next state when the inputs and present state are known. During the design process we usually know the
transition from present state to next state and wish to find the flip-flop input conditions that will cause the required transition. For this reason we need a table that lists the required inputs for a given change of state. Such a list is called an excitation table.

Table 7-7 lists the excitation tables for the four flip-flops. Each table consists of two columns, Q(t) and Q(t + 1), and a column for each input to show how the required transition is achieved. There are four possible transitions from present state to next state. The required input conditions for each of the four transitions are derived from the information available in the characteristic table. The symbol X in the tables represents a don't-care condition; that is, it does not matter whether the input is 1 or 0.

Table 7-7  Flip-Flop Excitation Tables

(a) RS                           (b) JK
Q(t)  Q(t+1) | S  R             Q(t)  Q(t+1) | J  K
 0      0    | 0  X              0      0    | 0  X
 0      1    | 1  0              0      1    | 1  X
 1      0    | 0  1              1      0    | X  1
 1      1    | X  0              1      1    | X  0

(c) D                            (d) T
Q(t)  Q(t+1) | D                Q(t)  Q(t+1) | T
 0      0    | 0                 0      0    | 0
 0      1    | 1                 0      1    | 1
 1      0    | 0                 1      0    | 1
 1      1    | 1                 1      1    | 0

RS Flip-Flop

The excitation table for the RS flip-flop is shown in Table 7-7(a). The first row shows the flip-flop in the 0-state at time t. It is desired to leave it in the 0-state after the occurrence of the pulse. From the characteristic table, we find that if S and R are both 0, the flip-flop will not change state. Therefore, both the S and R inputs should be 0. However, it really doesn't matter if R is made a 1 when the pulse occurs, since this also leaves the flip-flop in the 0-state. Thus R can be 1 or 0 and the flip-flop will remain in the 0-state at t + 1. Therefore, the entry under R is marked by the don't-care condition X. If the flip-flop is in the 0-state and it is desired to have it go to the 1-state, then from the characteristic table we find that the only way to make Q(t + 1) equal to 1 is to make S = 1 and R = 0. If the flip-flop is
to have a transition from the 1-state to the 0-state, we must have S = 0 and R = 1. The last condition that may occur is for the flip-flop to be in the 1-state and remain in the 1-state. Certainly R must be 0; we do not want to clear the flip-flop. However, S may be either a 0 or a 1. If it is 0, the flip-flop does not change and remains in the 1-state; if it is 1, it sets the flip-flop to the 1-state as desired. Therefore S is listed as a don't-care condition.

JK Flip-Flop

The excitation table for the JK flip-flop is shown in Table 7-7(b). When both present and next state are 0, the J input must remain at 0 and the K input can be either a 0 or a 1. Similarly, when both present and next state are 1, the K input must remain at 0 while the J input can be a 0 or a 1. If the flip-flop is to have a transition from the 0-state to the 1-state, J must be equal to 1, since the J input sets the flip-flop. However, input K may be either a 0 or a 1. If K = 0, the J = 1 condition sets the flip-flop as required; if K = 1 and J = 1, the flip-flop is complemented and goes from the 0-state to the 1-state as required. Therefore the K input is marked with a don't-care condition for the 0-to-1 transition. For a transition from the 1-state to the 0-state, we must have K = 1, since the K input clears the flip-flop. However, the J input may be either a 0 or a 1, since J = 0 has no effect and J = 1 together with K = 1 complements the flip-flop, with a resulting transition from the 1-state to the 0-state.

The excitation table for the JK flip-flop illustrates the advantage of using this type when designing sequential circuits. The fact that it has so many don't-care conditions indicates that the combinational circuits for the input functions are likely to be simpler, because don't-care terms usually simplify a function.

D Flip-Flop

The excitation table for the D flip-flop is shown in Table 7-7(c).
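The four excitation tables of Table 7-7 amount to small lookup tables from a (present state, next state) pair to the required input values. A Python sketch, for illustration only, with 'X' standing for a don't-care:

```python
# Table 7-7 as dictionaries: (Q(t), Q(t+1)) -> required inputs.
RS = {(0, 0): ('0', 'X'), (0, 1): ('1', '0'),
      (1, 0): ('0', '1'), (1, 1): ('X', '0')}   # (S, R)
JK = {(0, 0): ('0', 'X'), (0, 1): ('1', 'X'),
      (1, 0): ('X', '1'), (1, 1): ('X', '0')}   # (J, K)
D  = {(0, 0): '0', (0, 1): '1', (1, 0): '0', (1, 1): '1'}
T  = {(0, 0): '0', (0, 1): '1', (1, 0): '1', (1, 1): '0'}

# A 0 -> 1 transition of a JK flip-flop needs J = 1, K = don't-care:
print(JK[(0, 1)])    # ('1', 'X')
```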
From the characteristic table, Table 7-6(c), we note that the next state is always equal to the D input and independent of the present state. Therefore, D must be 0 if Q(t + 1) has to be 0, and 1 if Q(t + 1) has to be 1, regardless of the value of Q(t).

T Flip-Flop

The excitation table for the T flip-flop is shown in Table 7-7(d). From the characteristic table, Table 7-6(d), we find that when input T = 1, the state of the flip-flop is complemented; when T = 0, the state of the flip-flop remains unchanged. Therefore, when the state of the flip-flop must remain the same, the requirement is that T = 0. When the state of the flip-flop has to be complemented, T must equal 1.

Other Flip-Flops

The design procedure to be described in this chapter can be used with any flip-flop, provided its characteristic table, from which a new excitation table can be developed, is known. The excitation table is then used to determine the flip-flop input functions, as explained in the next section.

7-4  DESIGN PROCEDURE

In this section we present a procedure for the design of sequential circuits. Although intended to serve as a guide to the beginner, this procedure can be shortened with experience. The procedure is first summarized by a list of consecutive recommended steps:

1. The word description of the circuit behavior is stated. This may be accompanied by a state diagram, a timing diagram, or other pertinent information.
2. From the given information about the circuit, obtain the state table.
3. The number of states may be reduced by state reduction methods if the sequential circuit can be characterized by input-output relations independent of the number of states.
4. Assign binary values to each state if the state table obtained in step 2 or 3 contains letter symbols.
5. Determine the number of flip-flops needed and assign a letter symbol to each.
6. Choose the type of flip-flop to be used.
7.
From the state table, derive the circuit excitation and output tables.
8. Using the map or any other simplification method, derive the circuit output functions and the flip-flop input functions.
9. Draw the logic diagram.

The word specification of the circuit behavior usually assumes that the reader is familiar with digital logic terminology. It is necessary that the designer use his intuition and experience to arrive at the correct interpretation of the circuit specifications, because word descriptions may be incomplete and inexact. However, once such a specification has been set down and the state table obtained, it is possible to make use of the formal procedure to design the circuit.

The state reduction and state assignment problems were discussed in Sec. 7-2. The effect of state assignment on the complexity of the combinational circuit will be demonstrated subsequently in Exs. 7-1 and 7-2. It has already been mentioned that the number of flip-flops is determined from the number of states. A circuit may have unused binary states if the total number of states is less than 2^m. The unused states are taken as don't-care conditions during the design of the combinational circuit part of the circuit.

The type of flip-flop to be used may be included in the design specifications or may depend on what is available to the designer. Many digital systems are constructed entirely with JK flip-flops because they are the most versatile available. When many types of flip-flops are available, it is advisable to use the RS or D flip-flop for applications requiring transfer of data (such as shift-registers), the T type for applications involving complementation (such as binary counters), and the JK type for general applications.

The external output information is specified in the output section of the state table. From it we can derive the circuit output functions.
The excitation table for the circuit is similar to that of the individual flip-flops, except that the input conditions are dictated by the information available in the present state and next state columns of the state table. The method of obtaining the excitation table and the simplified flip-flop input functions is best illustrated by an example.

We wish to design the clocked sequential circuit whose state diagram is given in Fig. 7-3. The type of flip-flop to be used is JK. The state diagram consists of four states with binary values already assigned. Since the directed lines are marked with a single binary digit without a slash (/), we conclude that there is one input variable and no output variables. (The states of the flip-flops may be considered the outputs of the circuit.) The two flip-flops needed to represent the four states are designated A and B. The input variable is designated x.

Figure 7-3  State diagram

The state table for this circuit, derived from the state diagram, is shown in Table 7-8. Note that there is no output section for this circuit. We shall now show the procedure for obtaining the excitation table and the combinational gate structure.

The derivation of the excitation table is facilitated if we arrange the state table in a different form. This form is shown in Table 7-9, where the present state and input variables are arranged in the form of a truth table. The next state value for each present state and input condition is copied from Table 7-8.
The excitation table of a circuit is a list of flip-flop input

Table 7-8  State Table

                    Next state
Present state     x=0       x=1
   A  B           A  B      A  B
   0  0           0  0      0  1
   0  1           1  0      0  1
   1  0           1  0      1  1
   1  1           1  1      0  0

Table 7-9  Excitation Table

  Inputs of combinational circuit  |  Outputs of combinational circuit
Present state   Input | Next state |   Flip-flop inputs
   A  B           x   |   A  B     |  JA  KA    JB  KB
   0  0           0   |   0  0     |   0   X     0   X
   0  0           1   |   0  1     |   0   X     1   X
   0  1           0   |   1  0     |   1   X     X   1
   0  1           1   |   0  1     |   0   X     X   0
   1  0           0   |   1  0     |   X   0     0   X
   1  0           1   |   1  1     |   X   0     1   X
   1  1           0   |   1  1     |   X   0     X   0
   1  1           1   |   0  0     |   X   1     X   1

conditions that will cause the required state transitions and is a function of the type of flip-flop used. Since this example specifies JK flip-flops, we need columns for the J and K inputs of flip-flop A (denoted by JA and KA) and of flip-flop B (denoted by JB and KB). The excitation table for the JK flip-flop was derived in Table 7-7(b). This table is now used to derive the excitation table of the circuit. For example, in the first row of Table 7-9 we have a transition for flip-flop A from 0 in the present state to 0 in the next state. In Table 7-7(b) we find that a transition of states from 0 to 0 requires that input J = 0 and input K = X. So 0 and X are copied in the first row under JA and KA, respectively. Since the first row also shows a transition for flip-flop B from 0 in the present state to 0 in the next state, 0 and X are copied in the first row under JB and KB. The second row of Table 7-9 shows a transition for flip-flop B from 0 in the present state to 1 in the next state. From Table 7-7(b) we find that a transition from 0 to 1 requires that input J = 1 and input K = X. So 1 and X are copied in the second row under JB and KB, respectively. This process is continued for each row of the table and for each flip-flop, with the input conditions specified in Table 7-7(b) copied into the proper row of the particular flip-flop being considered.

Let us now pause and consider the information available in an excitation table such as Table 7-9.
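The row-by-row derivation just described can be mechanized. The following sketch (illustrative only) combines the state table of Table 7-8 with the JK column of Table 7-7 to produce the flip-flop input entries of Table 7-9:

```python
# JK excitation entries, Table 7-7(b): (Q(t), Q(t+1)) -> (J, K)
JK = {(0, 0): ('0', 'X'), (0, 1): ('1', 'X'),
      (1, 0): ('X', '1'), (1, 1): ('X', '0')}

# Table 7-8: (A, B, x) -> next state (A, B)
next_state = {
    (0, 0, 0): (0, 0), (0, 0, 1): (0, 1),
    (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
    (1, 0, 0): (1, 0), (1, 0, 1): (1, 1),
    (1, 1, 0): (1, 1), (1, 1, 1): (0, 0),
}

excitation = {}
for (a, b, x), (na, nb) in next_state.items():
    ja, ka = JK[(a, na)]      # required inputs for flip-flop A's transition
    jb, kb = JK[(b, nb)]      # required inputs for flip-flop B's transition
    excitation[(a, b, x)] = (ja, ka, jb, kb)

print(excitation[(0, 1, 0)])   # ('1', 'X', 'X', '1'): row 010 of Table 7-9
```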
We know that a sequential circuit consists of a number of flip-flops and a combinational circuit. Figure 7-4 shows the two JK flip-flops needed for the circuit and a box to represent the combinational circuit. From the block diagram, it is clear that the outputs of the combinational circuit go to flip-flop inputs and to external outputs (if specified). The inputs to the combinational circuit are the external inputs and the present state values of the flip-flops. Moreover, the Boolean functions that specify a combinational circuit are derived from a truth table that shows the input-output relations of the circuit. The truth table that describes the combinational circuit is available in the excitation table. The combinational circuit inputs are specified under the present state and input columns, and the combinational circuit outputs are specified under the flip-flop input columns. Thus, an excitation table transforms a state diagram into the truth table needed for the design of the combinational circuit part of the sequential circuit.

The simplified Boolean functions for the combinational circuit can now be derived. The inputs are the variables A, B, and x; the outputs are the variables JA, KA, JB, and KB. The information from the truth table is transferred into the maps of Fig. 7-5, where the four simplified flip-flop input functions are derived:

JA = Bx'        JB = x
KA = Bx         KB = A ⊙ x

Figure 7-4  Block diagram of sequential circuit

Figure 7-5  Maps for combinational circuit

The logic diagram is drawn in Fig. 7-6 and consists of two flip-flops, two AND gates, one equivalence gate, and one inverter.

With some experience, it is possible to reduce the amount of work involved in the design of the combinational circuit. For example, it is
possible to obtain the information for the maps of Fig. 7-5 directly from Table 7-8, without having to derive Table 7-9. This is done by systematically going through each present state and input combination in Table 7-8 and comparing it with the binary values of the corresponding next state. The required input conditions, as specified by the flip-flop excitation tables of Table 7-7, are then determined. Instead of inserting the 0, 1, or X thus obtained into the excitation table, it can be written directly into the appropriate square of the appropriate map.

Figure 7-6  Logic diagram of sequential circuit

The excitation table of a sequential circuit with m flip-flops, k inputs per flip-flop, and n external inputs consists of m + n columns for the present state and input variables and up to 2^(m+n) rows listed in some convenient binary count. The next state section has m columns, one for each flip-flop. The flip-flop input values are listed in mk columns, one for each input of each flip-flop. If the circuit contains j outputs, the table must include j more columns. The truth table of the combinational circuit is obtained from the excitation table by considering the m + n present state and input columns as inputs and the mk + j flip-flop input values and external outputs as outputs.

It is common practice to go from the Boolean functions expressed in algebraic form to a wiring list that gives the interconnections among the external terminals of flip-flops and gates in IC packages. In that case, the design need not go any further than the required simplified circuit output functions and flip-flop input functions. A logic diagram, however, may be helpful for visualizing the implementation of the circuit with flip-flops and gates.

7-5  DESIGN EXAMPLES

The design of clocked sequential circuits is illustrated in this section by means of seven examples.
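Before turning to the examples, the result of Sec. 7-4 can be checked by simulation. Assuming the input functions derived there (JA = Bx', KA = Bx, JB = x, KB = (A ⊕ x)'), the following sketch (not from the text) confirms that they reproduce the state table of Table 7-8:

```python
# JK characteristic behavior (Table 7-6(b))
def jk_next(j, k, q):
    return (1 - q) if (j and k) else 1 if j else 0 if k else q

def step(a, b, x):
    ja, ka = b & (1 - x), b & x          # JA = Bx', KA = Bx
    jb, kb = x, 1 - (a ^ x)              # JB = x,   KB = (A xor x)'
    return jk_next(ja, ka, a), jk_next(jb, kb, b)

# Table 7-8, for comparison
table = {(0,0,0): (0,0), (0,0,1): (0,1), (0,1,0): (1,0), (0,1,1): (0,1),
         (1,0,0): (1,0), (1,0,1): (1,1), (1,1,0): (1,1), (1,1,1): (0,0)}
assert all(step(a, b, x) == nxt for (a, b, x), nxt in table.items())
print("input functions reproduce Table 7-8")
```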
Examples 7-1 and 7-2 demonstrate the fact that different binary assignments produce different combinational circuits. They also show how unused binary states are treated as don't-care conditions. Example 7-3 shows how to design with RS flip-flops. Example 7-4 shows how to design with T and D flip-flops and how to treat unspecified next states. The design of a circuit initially specified by a timing diagram is demonstrated in Ex. 7-5. Circuits specified by state equations are designed in Exs. 7-6 and 7-7.

Design With Excitation Tables

EXAMPLE 7-1. Complete the design of the sequential circuit presented in Sec. 7-2. Use the reduced state table (Table 7-3) with binary assignment 1 of Table 7-4. Use JK flip-flops.

The state table with the required binary assignment is found in Table 7-5. From this table, we obtain the entries for the present state, input, and next state columns of Table 7-10. From the present and next state columns and from the excitation requirements of a JK flip-flop, we deduce the conditions necessary to excite the flip-flop inputs. The entries for the output column are obtained directly from Table 7-5. Table 7-10 is the excitation table for the circuit. From its information, we can derive the required combinational circuit.

There are three unused binary states in the state table: 000, 110, and 111. When an input of 0 or 1 is included with each unused state, six unused combinations result.
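The six don't-care combinations can be enumerated mechanically. A small sketch (illustrative only), packing each unused state (A, B, C) together with x = 0 and x = 1 into the four-bit minterm number ABCx:

```python
# Unused states of Example 7-1 and the resulting don't-care minterms.
unused_states = [(0, 0, 0), (1, 1, 0), (1, 1, 1)]
minterms = sorted((a << 3) | (b << 2) | (c << 1) | x
                  for a, b, c in unused_states for x in (0, 1))
print(minterms)    # [0, 1, 12, 13, 14, 15]
```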
These combinations have the equivalent binary numbers 0, 1, 12, 13, 14, and 15 and are not listed in the table under the columns labeled "present state" and "input." These columns represent the input variables of the

Table 7-10  Excitation Table for Example 7-1

Present state  Input | Next state |    Flip-flop inputs     | Output
   A  B  C       x   |  A  B  C   | JA KA | JB KB | JC KC |   y
   0  0  1       0   |  0  0  1   |  0  X |  0  X |  X  0 |   0
   0  0  1       1   |  0  1  0   |  0  X |  1  X |  X  1 |   0
   0  1  0       0   |  0  1  1   |  0  X |  X  0 |  1  X |   0
   0  1  0       1   |  1  0  0   |  1  X |  X  1 |  0  X |   0
   0  1  1       0   |  0  0  1   |  0  X |  X  1 |  X  0 |   0
   0  1  1       1   |  1  0  0   |  1  X |  X  1 |  X  1 |   0
   1  0  0       0   |  1  0  1   |  X  0 |  0  X |  1  X |   0
   1  0  0       1   |  1  0  0   |  X  0 |  0  X |  0  X |   1
   1  0  1       0   |  0  0  1   |  X  1 |  0  X |  X  0 |   0
   1  0  1       1   |  1  0  0   |  X  0 |  0  X |  X  1 |   1

combinational circuit, and therefore the above numbers are treated as don't-care minterms. The design of the combinational circuit consists of finding the six flip-flop input functions and the circuit output function. These functions are taken from the excitation table and simplified in the maps of Fig. 7-7. The four input variables in each map are the three flip-flop variables A, B, and C and the input variable x. Each map has X's in the six squares that belong to the don't-care minterms. The other don't-care terms in the maps are placed there because of the characteristic property of JK flip-flops. The abundance of don't-care terms for the inputs of JK flip-flops, together with the six don't-care terms from the unused combinations, contributes a large number of X's to the maps and is the main reason for the relatively simple expressions obtained:

JA = Bx        KA = Cx'
JB = A'x       KB = C + x
JC = x'        KC = x
y  = Ax

Figure 7-7  Maps for simplifying combinational circuit of Example 7-1

The logic diagram for the circuit is drawn in Fig. 7-8. It consists of three flip-flops, four AND gates, and one OR gate.

Figure 7-8  Logic diagram for Example 7-1

EXAMPLE 7-2. Repeat Ex.
7-1 with binary assignment 2 of Table 7-4.

Table 7-11 is the excitation table for this example. The binary values for the present state and next state are determined by substituting assignment 2 of Table 7-4 into Table 7-3. Again there are six unused input minterms; for this assignment they are 2, 3, 8, 9, 12, and 13. The simplification of the six flip-flop input functions and the circuit output function should include these don't-care terms. The map simplifications are not shown, but the reader can easily verify the following results (see Prob. 7-16):

JA = Bx          KA = Bx'
JB = C'x + Cx'   KB = C + x
JC = B           KC = Bx'
y  = Ax

Table 7-11  Excitation Table for Example 7-2

Present state  Input | Next state |    Flip-flop inputs     | Output
   A  B  C       x   |  A  B  C   | JA KA | JB KB | JC KC |   y
   0  0  0       0   |  0  0  0   |  0  X |  0  X |  0  X |   0
   0  0  0       1   |  0  1  0   |  0  X |  1  X |  0  X |   0
   0  1  0       0   |  0  1  1   |  0  X |  X  0 |  1  X |   0
   0  1  0       1   |  1  0  1   |  1  X |  X  1 |  1  X |   0
   0  1  1       0   |  0  0  0   |  0  X |  X  1 |  X  1 |   0
   0  1  1       1   |  1  0  1   |  1  X |  X  1 |  X  0 |   0
   1  0  1       0   |  1  1  1   |  X  0 |  1  X |  X  0 |   0
   1  0  1       1   |  1  0  1   |  X  0 |  0  X |  X  0 |   1
   1  1  1       0   |  0  0  0   |  X  1 |  X  1 |  X  1 |   0
   1  1  1       1   |  1  0  1   |  X  0 |  X  1 |  X  0 |   1

The logic diagram for the circuit is drawn in Fig. 7-9. It consists of three flip-flops, six AND gates, and two OR gates.

Figure 7-9  Logic diagram of Example 7-2

Examples 7-1 and 7-2 demonstrate how a different combinational circuit is obtained for the same specification when a different binary assignment is made. The circuits of Figs. 7-8 and 7-9 are identical as far as the specifications given by Table 7-3 are concerned. The choice of binary values for the letter symbols dictates the complexity of the combinational circuit obtained.

EXAMPLE 7-3. Design a sequential circuit whose state diagram is the solution of Prob. 6-9 and is given by Fig. 7-10. Use RS flip-flops.

Figure 7-10
State diagram for Example 7-3

The state diagram consists of two states, with binary values already assigned. The directed lines are marked with two inputs and one output. Three of the input combinations leave the circuit in the same state; only inputs 00 and 11 cause a transition of states.

The excitation table for the circuit is shown in Table 7-12. The present state, input, next state, and output columns are derived directly from the state diagram of Fig. 7-10. The flip-flop input conditions are obtained from the state transitions and from the excitation requirements of an RS flip-flop as listed in Table 7-7(a). The flip-flop is denoted by A, the inputs by x and y, and the output by z.

Table 7-12  Excitation Table for Example 7-3

PS    Inputs | NS | Output | FF inputs
 A    x   y  | A  |   z    | SA  RA
 0    0   0  | 0  |   0    |  0   X
 0    0   1  | 0  |   1    |  0   X
 0    1   0  | 0  |   1    |  0   X
 0    1   1  | 1  |   0    |  1   0
 1    0   0  | 0  |   1    |  0   1
 1    0   1  | 1  |   0    |  X   0
 1    1   0  | 1  |   0    |  X   0
 1    1   1  | 1  |   1    |  X   0

The simplified Boolean functions for the combinational circuit are derived from the maps of Fig. 7-11. The entries for the maps are obtained from Table 7-12. The simplified flip-flop input functions and the output function are:

SA = xy        RA = x'y'        z = x ⊕ y ⊕ A

Figure 7-11  Maps for Example 7-3

The logic diagram of the circuit is shown in Fig. 7-12.

EXAMPLE 7-4. Design a sequential circuit with two flip-flops and one input. When the input is equal to 1, the flip-flop outputs repeat the sequence 00, 01, 10. When the input is equal to 0, they repeat the sequence 11, 10, 01. Design the circuit with (a) T flip-flops and (b) D flip-flops.

The state diagram, derived from the word description, is shown in Fig. 7-13(a). Each sequence is repeated as long as the input remains the same. The statement of the problem does not specify the next state when the input changes while the circuit is in state 00 or 11. The state table of Fig. 7-13(b) is derived from the state diagram. The two dashes in the table designate the fact that the next states are not specified for these conditions.
The question now arises: how do we treat these unspecified next states? In a practical situation, the designer would go back to the source of the statement of the problem and ask for further clarification. If this is impossible, the designer must decide what to do. One possible alternative is to leave the circuit in state 00 or 11 when the input is 0 or 1, respectively. A second alternative is to force a transition to either state 01 or 10 so that the proper sequence may continue. A third alternative is to assume that it does not matter what the next state is going to be. We shall arbitrarily assume the third alternative.

Figure 7-12  Logic diagram for Example 7-3

The excitation table is shown in Table 7-13. The two unspecified next states are marked by X's and treated as don't-care conditions. The flip-flop input values are listed for the T type and the D type. The input of the T flip-flop must be 1 if there is a change from 0 to 1 or from 1 to 0 during the state transition; it is equal to 0 if there is no change of state from the present state to the next state. The input for the D flip-flop is exactly the same as the entry in the next state column. The flip-flop excitation tables of Table 7-7 may be consulted to verify these conditions.

The maps of Fig. 7-14(a) are used if the circuit is to have T flip-flops. The simplified input functions are:

TA = A ⊕ B        TB = A ⊕ x

Figure 7-13  State diagram and state table for Example 7-4

Table 7-13  Excitation Table for Example 7-4

Present         | Next  |   Flip-flop inputs
state     Input | state | T type    | D type
A  B        x   | A  B  | TA   TB   | DA   DB
0  0        0   | X  X  |  X    X   |  X    X
0  0        1   | 0  1  |  0    1   |  0    1
0  1        0   | 1  1  |  1    0   |  1    1
0  1        1   | 1  0  |  1    1   |  1    0
1  0        0   | 0  1  |  1    1   |  0    1
1  0        1   | 0  0  |  1    0   |  0    0
1  1        0   | 1  0  |  0    1   |  1    0
1  1        1   | X  X  |  X    X   |  X    X
Figure 7-14  Maps for Example 7-4: (a) when T flip-flops are used; (b) when D flip-flops are used

The maps of Fig. 7-14(b) are used if the circuit is to have D flip-flops. The input functions obtained are:

DA = B    DB = A'x' + A'B' + B'x'

It is interesting to check the assignment given to the unspecified next states. From the maps of Fig. 7-14(a) we note that, for the circuit with T flip-flops, the X's are included with the 0's. This means that no change of state occurs; the circuit stays in state 00 or 11 when the input is 0 or 1, respectively. For the circuit with D flip-flops, we have chosen 01 to be the next state from present state 00 with x = 0 and 10 to be the next state from present state 11 with x = 1.

EXAMPLE 7-5. Design a circuit with one flip-flop and two inputs to conform with the timing diagram of Fig. 7-15. Flip-flop Q is set when A = 1 and B = 0; it is cleared when A = 1 and B = 1 and is left in the same state otherwise.

The reader should immediately realize that this is a simple sequential circuit with functions AB' and AB, respectively, needed to set and clear the flip-flop. The type appropriate for these functions is either an RS or a JK flip-flop. The flip-flop input functions for an RS type are:

SQ = AB'    RQ = AB

Now, let us assume that the solution was not immediately apparent and proceed to determine the flip-flop input functions by means of an excitation table.

Figure 7-15  Timing diagram for Example 7-5

The next state of Q in Table 7-14 has to be a 1 when the inputs AB = 10 and a 0 when AB = 11, irrespective of the present state of Q. For the four remaining combinations, the next state of Q must remain the same as the present state. From the RS flip-flop input values, we obtain the maps of Fig. 7-16. The simplified input functions from the maps are as predicted.
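The predicted functions SQ = AB' and RQ = AB of Example 7-5 can be spot-checked with a small simulation (our own sketch, not from the text), stepping the flip-flop through a short input trace:

```python
# Check of Example 7-5: SQ = AB', RQ = AB on an RS flip-flop Q.
def q_next(Q, A, B):
    S = A & (1 - B)          # SQ = AB'
    R = A & B                # RQ = AB
    return 1 if S else (0 if R else Q)   # set, clear, or hold

# A = 1, B = 0 sets; A = 1, B = 1 clears; A = 0 holds either state.
trace = [(1, 0), (0, 0), (1, 1), (0, 1), (1, 0)]
Q = 0
history = []
for A, B in trace:
    Q = q_next(Q, A, B)
    history.append(Q)
print(history)   # -> [1, 1, 0, 0, 1]
```

The trace behaves exactly as the timing diagram of Fig. 7-15 requires: Q follows the set and clear commands and otherwise keeps its state.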
Design With State Equations

EXAMPLE 7-6. Design a sequential circuit that behaves according to the following state equations:

A(t + 1) = CD' + C'D
B(t + 1) = A
C(t + 1) = B
D(t + 1) = C

From the four given state equations, we conclude that there are four flip-flops in the circuit denoted by A, B, C, and D and that there are no external inputs or outputs since none are specified. The specified circuit is a shift-register with the input into A being dependent on the present state of C and D. Such a circuit is called a feedback shift-register. In a feedback shift-register, each flip-flop shifts its content to the next flip-flop when a clock pulse occurs and, at the same time, the states of certain flip-flops determine the next state of the first flip-flop.

The most convenient flip-flops to use are the D type. As shown in Sec. 6-4, the state equations represent the same information as a state table. It is possible to obtain the state table and flip-flop excitation requirements for this circuit but, when D flip-flops are used, it will be a complete waste of time. This is because the entries for the flip-flop inputs are exactly the same as the next state and the conditions for the next state are already listed in the specifications. Therefore, the

Table 7-14  Excitation Table for Example 7-5

A  B  Q    Q(t + 1)    SQ   RQ
0  0  0       0         0    X
0  0  1       1         X    0
0  1  0       0         0    X
0  1  1       1         X    0
1  0  0       1         1    0
1  0  1       1         X    0
1  1  0       0         0    X
1  1  1       0         0    1

Figure 7-16  Maps for Example 7-5

flip-flop input functions for this circuit are taken directly from the specifications, with the next state symbol replaced by the flip-flop input variable as follows:

DA = CD' + C'D
DB = A
DC = B
DD = C

EXAMPLE 7-7. Design a sequential circuit with JK flip-flops to satisfy the following state equations:

A(t + 1) = A'B'CD + A'B'C + ACD + AC'D'
B(t + 1) = A'C + CD' + A'BC'
C(t + 1) = B
D(t + 1) = D'

The combinational circuit can be found directly from the state equations without having to draw the state table or excitation table. This is
accomplished by means of a matching process between the state equation and the characteristic equation of the JK flip-flop. The characteristic equation of the JK flip-flop in terms of an output variable Q was derived in Fig. 6-6(d) and is repeated here:

Q(t + 1) = JQ' + K'Q

Input variables J and K' are enclosed in parentheses so as not to confuse the AND terms in the characteristic equation with the two-letter convention which has been used to represent a flip-flop input variable. The matching process consists of arranging and manipulating the state equation until it is in the form of the characteristic equation. Once this is done, the functions for J and K can be extracted and simplified. The input functions for flip-flop A are derived by this method by arranging the state equation and matching it with the characteristic equation as follows:

A(t + 1) = (B'CD + B'C)A' + (CD + C'D')A
         = (J)A' + (K')A

From the equality of the two equations, we deduce the input functions for flip-flop A to be:

J = B'CD + B'C = B'C
K = (CD + C'D')' = CD' + C'D

The state equation for flip-flop B can be arranged as follows:

B(t + 1) = (A'C + CD') + (A'C')B

However, this form is not suitable for matching with the characteristic equation because the variable B' is missing. If the first quantity in parentheses is ANDed with (B' + B), the equation remains the same but with the variable B' included. Thus,

B(t + 1) = (A'C + CD')(B' + B) + (A'C')B
         = (A'C + CD')B' + (A'C + CD' + A'C')B
         = (J)B' + (K')B

From the equality of the two equations, we deduce the input functions for flip-flop B:

J = A'C + CD'
K = (A'C + CD' + A'C')' = AC' + AD
The state equation for flip-flop C can be manipulated as follows:

C(t + 1) = B = B(C' + C) = BC' + BC = (J)C' + (K')C

The input functions for flip-flop C are:

J = B    K = B'

Finally, the state equation for flip-flop D may be manipulated for the purpose of matching as follows:

D(t + 1) = D' = 1·D' + 0·D = (J)D' + (K')D

which gives the input functions:

J = K = 1

The derived input functions can be accumulated and listed together. The two-letter convention to designate the flip-flop input variables, not used in the above derivation, is used below:

JA = B'C           KA = CD' + C'D
JB = A'C + CD'     KB = AC' + AD
JC = B             KC = B'
JD = 1             KD = 1

The design procedure introduced in Exs. 7-6 and 7-7 is an alternative method for determining the flip-flop input functions of a sequential circuit when D or JK flip-flops are used. To use this procedure when a state diagram or a state table is initially specified, it is necessary that the state equations be derived by the procedure outlined in Sec. 6-4. The state equation method for determining flip-flop input functions must be modified somewhat if the circuit has unused binary states which are considered as don't-care conditions (see Prob. 7-10). The application of this procedure to circuits with RS and T flip-flops is possible but involves a considerable amount of algebraic manipulation (see Prob. 7-11).

7-6 DESIGN OF COUNTERS

State transitions in clocked sequential circuits occur during a clock pulse; the circuit is assumed to remain in its present state if no pulse occurs. For this reason, the clock pulse does not appear explicitly as an input variable in a state table. From this point of view, a counter may be regarded as a sequential circuit with no input variables, since the only input is the count pulse. The next state of a counter depends entirely on its present state, and the state transition occurs every time a pulse occurs.
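Before leaving the state-equation method: the input functions just listed for Example 7-7 can be verified exhaustively in a few lines of Python (our addition, not in the original text) by substituting them into the characteristic equation Q(t + 1) = JQ' + K'Q for all sixteen states:

```python
# Exhaustive check that the JK input functions of Example 7-7 reproduce
# its state equations through Q(t+1) = JQ' + K'Q.
def jk(Q, J, K):
    return (J & (Q ^ 1)) | ((K ^ 1) & Q)

for n in range(16):
    A, B, C, D = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    a, b, c, d = A ^ 1, B ^ 1, C ^ 1, D ^ 1   # complemented variables

    JA, KA = b & C, (C & d) | (c & D)                 # B'C,  CD' + C'D
    JB, KB = (a & C) | (C & d), (A & c) | (A & D)     # A'C + CD',  AC' + AD
    JC, KC = B, b                                     # B,  B'
    JD, KD = 1, 1

    # State equations as given in Example 7-7:
    A_next = (a & b & C & D) | (a & b & C) | (A & C & D) | (A & c & d)
    B_next = (a & C) | (C & d) | (a & B & c)
    C_next = B
    D_next = d

    assert jk(A, JA, KA) == A_next
    assert jk(B, JB, KB) == B_next
    assert jk(C, JC, KC) == C_next
    assert jk(D, JD, KD) == D_next
```

Every assertion holds, so the matching process extracted J and K correctly for all four flip-flops.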
Because of this property, a counter can be completely specified by a list of the count sequence; that is, the sequence of binary states that it undergoes. Consider for example a BCD counter whose count sequence is listed in the first column of Table 7-15. The next number in the sequence represents the next state reached by the circuit upon the application of a count pulse. The count is assumed to repeat itself in such a way that state 0000 is the next state after 1001. The count sequence gives all the information needed to construct the state table. However, it is unnecessary to list the next states in a separate column because they can be read from the next number in the sequence.

The design of the combinational circuit part of a counter follows the same procedure as that outlined in Sec. 7-4, except that the excitation table can be obtained directly from the count sequence. We shall illustrate this procedure by designing a BCD counter with T flip-flops. Table 7-15 is the excitation table for the BCD counter. The entries for the flip-flop input conditions are determined from the characteristics of a T flip-flop and by inspecting the state transition from a given count to the next below it. For example, the row with count 0101 is compared with 0110, the next count below it. The bits for flip-flops Q8 and Q4 do not undergo a change, so TQ8 and TQ4 in row 0101 are marked with a 0. Flip-flop Q2 changes from 0 to 1 and Q1 changes from 1 to 0, and, since both have to be complemented to reach the next count, it is necessary that their T inputs be a 1. Therefore, TQ2 and TQ1 in row 0101 are marked with a 1. The last row with count 1001 is compared with count 0000 of the first row to obtain the entries for the flip-flop inputs in the last row.

Table 7-15  Excitation Table for BCD Counter

Count sequence        Flip-flop inputs
Q8  Q4  Q2  Q1     TQ8  TQ4  TQ2  TQ1
0   0   0   0       0    0    0    1
0   0   0   1       0    0    1    1
0   0   1   0       0    0    0    1
0   0   1   1       0    1    1    1
0   1   0   0       0    0    0    1
0   1   0   1       0    0    1    1
0   1   1   0       0    0    0    1
0   1   1   1       1    1    1    1
1   0   0   0       0    0    0    1
1   0   0   1       1    0    0    1
The flip-flop input functions from the excitation table are simplified in the maps of Fig. 7-17. The six X's in each map represent the don't-care conditions from the six unused states. The simplified flip-flop input functions are:

TQ1 = 1
TQ2 = Q8'Q1
TQ4 = Q2Q1
TQ8 = Q8Q1 + Q4Q2Q1

The circuit requires four T flip-flops, four AND gates, and one OR gate.

Figure 7-17  Maps for BCD counter

Counter-Decoder Circuits

Counters, together with decoders, are used to generate timing and sequencing signals that control the operations of digital systems. Figure 7-18 shows a simple configuration for generating timing signals. It consists of a counter with n flip-flops that goes through a sequence of up to 2^n states. The outputs of the flip-flops go into a decoder (Sec. 4-8) that has up to 2^n outputs. As the count sequence progresses, the decoder outputs become logic-1 in consecutive order. These outputs can be used as command signals to perform a sequence of operations in a digital system.

Figure 7-18  Block diagram of counter-decoder

A counter-decoder circuit can be designed to give any desired number of repeated timing signals. Consider for example the sequence of six timing signals shown in Fig. 7-19. To generate the six outputs t1 to t6, we need a counter that goes through a repeated sequence of six states and a decoder whose output tk becomes logic-1 when the counter is in the corresponding state k, for k = 1, 2, ..., 6. We shall now proceed to design two counter-decoder circuits capable of generating the six timing signals shown in Fig. 7-19.

Figure 7-19  Sequence of 6 timing signals

Three flip-flops produce up to eight binary states. To generate six timing
signals, we need a counter that goes through a sequence of six states. It does not matter which six states are chosen or in what sequence they are ordered; the outputs of the circuit are taken from the decoder, which can be designed to decode any combination of six states. We are confronted here with the state assignment problem; that is, we have to choose a sequence of six states in such a manner as to minimize the number of gates in the combinational circuit part of the counter and in the decoder. Two assignments will be chosen out of the many possible; the first is a straight binary assignment and the second is an assignment derived to produce a counter without the need of gates. We shall employ JK flip-flops since they are the most versatile for this application.

Table 7-16 is the excitation table for a count-by-6 counter with the count sequence chosen to be a straight binary count from 000 to 101. The two unused states, 110 and 111, are taken as don't-care terms. The simplified flip-flop input functions are easily derived to be:

JA = BC    KA = C
JB = A'C   KB = C
JC = 1     KC = 1

The Boolean functions for the decoder are simplified somewhat when the unused states are taken as don't-care terms:

t1 = A'B'C'    t4 = BC
t2 = A'B'C     t5 = AC'
t3 = BC'       t6 = AC

The logic diagram for the circuit is drawn in Fig. 7-20. It requires three flip-flops and eight AND gates.

A second possible assignment of six states is shown in Table 7-17. In this assignment, the sequence for flip-flops B and C is a repetition of the binary count 00, 01, 10, while flip-flop A is chosen to alternate between 0

Table 7-16  Excitation Table for First Count-by-6 Counter

Count sequence        Flip-flop inputs
A  B  C    JA  KA    JB  KB    JC  KC
0  0  0    0   X     0   X     1   X
0  0  1    0   X     1   X     X   1
0  1  0    0   X     X   0     1   X
0  1  1    1   X     X   1     X   1
1  0  0    X   0     0   X     1   X
1  0  1    X   1     0   X     X   1
Figure 7-20  Count-by-6 circuit with straight binary count

Table 7-17  Excitation Table for Second Count-by-6 Counter

Count sequence        Flip-flop inputs
A  B  C    JA  KA    JB  KB    JC  KC
0  0  0    0   X     0   X     1   X
0  0  1    0   X     1   X     X   1
0  1  0    1   X     X   1     0   X
1  0  0    X   0     0   X     1   X
1  0  1    X   0     1   X     X   1
1  1  0    X   1     X   1     0   X

and 1 every three counts. The unused states for this count are 011 and 111. The simplified input functions for this circuit are:

JA = B     KA = B
JB = C     KB = 1
JC = B'    KC = 1

t1 = A'B'C'    t4 = AB'C'
t2 = A'C       t5 = AC
t3 = A'B       t6 = AB

The logic diagram is drawn in Fig. 7-21. The counter does not require any gates; the decoder requires six AND gates. The circuits of Figs. 7-20 and 7-21 perform identical operations and produce identical timing sequences, as specified in Fig. 7-19, yet the one in Fig. 7-21 requires two fewer gates.

Figure 7-21  Second form of a count-by-6 counter

PROBLEMS

7-1. Reduce the number of states in the following state table and tabulate the reduced state table.

    Present      Next state       Output
     state      x=0    x=1      x=0   x=1
       a         f      b        0     0
       b         d      c        0     0
       c         f      e        0     0
       d         g      a        1     0
       e         d      c        0     0
       f         f      b        1     1
       g         g      h        0     1
       h         g      a        1     0

7-2. Starting from state a of the state table in Prob. 7-1, find the output sequence generated with an input sequence 01110010011.

7-3. Repeat Prob. 7-2 using the reduced table of Prob. 7-1. Show that the same output sequence is obtained.

7-4. Substitute binary assignment 2 of Table 7-4 to the states in Table 7-3 and obtain the binary state table. Repeat with binary assignment 3.

7-5. Obtain the excitation table of the JK' flip-flop described in Prob. 6-5.

7-6. Obtain the excitation table of the set-dominate flip-flop described in Prob. 6-6.

7-7. Obtain the characteristic table and excitation table of an RST flip-flop. This is a three-input flip-flop with both RS and T capabilities. Only one input can be excited at one time.
7-8. A sequential circuit has one input and one output. The state diagram is as shown. Design the sequential circuit with (a) T flip-flops, (b) RS flip-flops, and (c) JK flip-flops.

7-9. Design the circuit of a four-bit register that converts the binary number stored in the register to its 2's complement value when input x = 1. The flip-flops of the register are of the RST type. This flip-flop has three inputs: two inputs have RS capabilities and one has a T capability. The RS inputs are used to transfer the four-bit number when an input y = 1. Use the T input for the conversion.

7-10. (a) Derive the state equations for the sequential circuit specified by Table 7-5, Sec. 7-2. List the don't-care terms.
(b) Derive the flip-flop input functions from the state equations (and don't-care terms) using the method outlined in Ex. 7-7. Check your answers with the solution given in Ex. 7-1.

7-11. The state equation method for determining flip-flop input functions as outlined in Exs. 7-6 and 7-7 can be extended to circuits with T or RS flip-flops. Chapter 5 of the book by Phister (8) shows that, for a state equation of the form Q(t + 1) = g1Q' + g2Q, the following relations hold:

Type   Characteristic equation    Input functions from g1 and g2    Condition
T      Q(t + 1) = TQ' + T'Q       T = g1Q' + g2'Q                   none
RS     Q(t + 1) = S + R'Q         S = g1, R = g2'                   g1g2' = 0

Repeat Exs. 7-3 and 7-4 using the above table.

7-12. Using only T flip-flops, design a shift-register that shifts both right and left.

7-13. Derive the state diagram for the circuit of Ex. 7-6.

7-14. Repeat Ex. 7-1 with binary assignment 3 of Table 7-4.

7-15. Verify the circuit obtained in Ex. 7-7 by using the excitation table method.

7-16. Verify the flip-flop input functions obtained for Ex. 7-2.

7-17. Repeat Ex. 7-4 using RS flip-flops.

7-18. Repeat Ex. 7-7 using D flip-flops.

7-19. Design the sequential circuit described by the following state equations. Use JK flip-flops.

A(t + 1) = xAB + yA'C + xy
B(t + 1) = xA'C + y'AB'
C(t + 1) = x'B + yAB'

7-20.
Design a synchronous BCD counter with JK flip-flops.

7-21. Design a decimal counter using the 2,4,2,1 code and T flip-flops.

7-22. Design a counter-decoder circuit that recycles every 12 input pulses. The decoder has four outputs to detect the occurrence of the 3rd, 6th, 9th, and 12th input pulse, respectively. Use T flip-flops. Also draw the timing diagram for the four decoder outputs.

7-23. Design the following counter-decoders using JK flip-flops: (a) count-by-3; (b) count-by-5; (c) count-by-7; (d) count-by-8.

7-24. Design a counter with the following binary sequence: 0, 5, 7 and repeat. Use RS flip-flops.

7-25. Design a counter with the following binary sequence: 0, 1, 3, 7, 6, 4 and repeat. Use T flip-flops.

7-26. Design a counter with the following binary sequence: 0, 4, 2, 1, 6 and repeat. Use JK flip-flops.

REFERENCES

1. Marcus, M. P., Switching Circuits for Engineers, 2nd ed. Englewood Cliffs, N.J.: Prentice-Hall, 1965.

2. McCluskey, E. J., Introduction to the Theory of Switching Circuits. New York: McGraw-Hill Book Co., 1965.

3. Miller, R. E., Switching Theory, two volumes. New York: John Wiley and Sons, 1965.

4. Krieger, M., Basic Switching Circuit Theory. New York: The Macmillan Co., 1967.

5. Hill, F. J., and G. R. Peterson, Introduction to Switching Theory and Logical Design. New York: John Wiley and Sons, 1968.

6. Givone, D. D., Introduction to Switching Circuit Theory. New York: McGraw-Hill Book Co., 1970.

7. Kohavi, Z., Switching and Finite Automata Theory. New York: McGraw-Hill Book Co., 1970.

8. Phister, M., The Logical Design of Digital Computers. New York: John Wiley and Sons, 1958.

9. Paull, M. C., and S. H. Unger, "Minimizing the Number of States in Incompletely Specified Sequential Switching Functions," IRE Trans. on Electronic Computers, Vol. EC-8, No. 3 (September 1959), 356-66.

10. Hartmanis, J., "On the State Assignment Problem for Sequential Machines I," IRE Trans. on Electronic Computers, Vol.
EC-10, No. 2 (June 1961), 157-65.

11. McCluskey, E. J., and S. H. Unger, "A Note on the Number of Internal Assignments for Sequential Circuits," IRE Trans. on Electronic Computers, Vol. EC-8, No. 4 (December 1959), 439-40.

8  OPERATIONAL AND STORAGE REGISTERS

8-1 REGISTER TRANSFER

A digital system is a sequential logic system which can be constructed with memory elements and combinational gates. As long as the number of flip-flops and inputs is small, the circuit can be designed by means of state tables and excitation tables. A large system, such as a digital computer, would be difficult, if not impossible, to define by a single state table. To overcome this difficulty, large systems are invariably designed using a modular approach. The system is partitioned into a number of standard modules, each of which performs some functional task. A computer processor unit, for example, is designed using modules such as registers, decoders, arithmetic circuits, and control circuits. The modules are in turn constructed with submodules until one arrives at a circuit with a small number of elements which can be designed by means of excitation tables or equivalent procedures. With such an approach, each module is separately designed and tested. The various modules are then interconnected with common data-transfer and control-signal paths to form a processor unit.

The standard module used for storing binary information is the register. A register was defined in Sec. 1-7 as a group of binary cells. A group of flip-flops constitutes a register since a flip-flop is a binary cell capable of storing one bit of information. In addition to the flip-flops, a register may have combinational gates that perform certain data processing tasks. In its broadest definition, a register consists of a group of flip-flops and the gates that affect their transition. The flip-flops hold the binary information while the combinational gates process it.
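The broad definition just given — a register is a group of flip-flops plus the gates that affect their transition — can be sketched in Python (a toy model of ours; the class and signal names are invented for illustration):

```python
# Toy model of an operational register: a group of D flip-flops whose next
# state is produced by a combinational function, clocked as a unit.
class Register:
    def __init__(self, n, next_logic=None):
        self.bits = [0] * n                  # flip-flop outputs, index 0 = rightmost
        # the "gates": map present state (and external inputs) to next state
        self.next_logic = next_logic or (lambda bits, **kw: list(bits))

    def clock(self, **inputs):
        self.bits = self.next_logic(self.bits, **inputs)

def load_logic(bits, data=None, load=0):
    # parallel load when the control signal is 1, otherwise hold
    return list(data) if load else list(bits)

r = Register(4, load_logic)
r.clock(data=[1, 0, 1, 1], load=1)
print(r.bits)                        # -> [1, 0, 1, 1]
r.clock(data=[0, 0, 0, 0], load=0)   # load = 0: the register holds its value
print(r.bits)                        # -> [1, 0, 1, 1]
```

Swapping in a different `next_logic` function turns the same flip-flop group into a shift-register, a counter, or any other operational register.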
The shift-register is an example of a

238   OPERATIONAL AND STORAGE REGISTERS   Chap. 8

register whose data processing task is the transfer of information. A counter may be considered a register whose function is to count the number of incoming pulses.

Data transfer among modules and units is accomplished by means of inter-register transfer operations. These operations consist of a direct transfer of binary information from one register to another. The destination register that receives the information assumes the previous value of the source register. The value of the source register normally does not change because of the transfer.

Information transfer from one register to another can be performed in either series or parallel. In series transfer, both the source and destination registers are shift-registers. The information is transferred one bit at a time by shifting the bits out of the source register into the destination register. In order not to lose the information stored in the source register, it is necessary that the information shifted out of the source register be recirculated and shifted back at the same time. Parallel transfer consists of a simultaneous transfer of all the bits from the source register to the destination register. Parallel transfer is done during one clock pulse, while serial transfer needs a number of clock pulses equal to the number of bits transferred.

Figure 8-1 shows a parallel transfer of binary information from an n-bit source register to four destination registers. Each register is represented by a box and contains D-type flip-flops. Each flip-flop in a destination register uses an AND gate with two inputs. One input contains the corresponding information bit and the other is a control signal. The four control signals determine which register receives the information.
For example, when control line A is logic-1 (while control lines B, C, and D are logic-0) and a clock pulse occurs, register A receives the information from the source register. The information in the other three registers and the source register remains the same.

Figure 8-1  Transfer of information in parallel from one source to four destinations

In a system with many registers, the transfer from each register to another requires that lines be connected from the output of each flip-flop in one register to the input of each flip-flop in all the other registers. Consider for example the requirement for transfer among three registers as shown in Fig. 8-2. There are six data paths between registers. If each register consists of n flip-flops, there is a need for 6n lines. As the number of registers increases, the number of lines increases considerably. However, if we restrict the transmission of data among registers to one at a time, the number of paths among all registers can be reduced to just one per flip-flop for a total of n lines. This is shown in Fig. 8-3, where the output and input of each flip-flop is connected to a common line through an electronic circuit that acts like a switch. All the switches are normally open until a transfer is required. For a transfer from F1 to F4, for example, switches S1 and S4 are closed to form the required path. This scheme can be extended to registers with n flip-flops and requires n common lines; each flip-flop of the register must be connected to one common line.

Figure 8-2  Transfer among three registers

A group of wires through which binary information is transferred one at a time among registers is called a bus. For a parallel transfer, the number
of wires in the bus is equal to the number of flip-flops in the register. The idea of a bus transfer is analogous to a central transportation system used to bring commuters from one point to another. Instead of each commuter using his own private car to go from one location to another, a bus system is used with each commuter waiting in line until transportation is available.

Figure 8-3  Transfer through one common line

Figure 8-4 shows one possible implementation of inter-register transfer through a bus. Three registers, each with n flip-flops, are connected to the bus lines through circuits labeled BD (Bus Driver). The output of each flip-flop goes to one input of a BD circuit and the other input of the BD is connected to a control signal. The two inputs in each BD produce a logic AND operation. The outputs of the BD have a wired-OR function (Sec. 5-7); that is, their connection to a common wire produces a logic OR operation. For a transfer from register R1 to register R3, for example, control signals S1 and S3 become logic-1. This connects the flip-flops from register R1 to the bus lines and at the same time enables the AND gates in R3 to accept the information from the bus lines. A clock pulse to all flip-flops results in the transfer of the binary information from R1 to R3, while the information in R1 and R2 remains the same.

Transfer through a bus is limited to one transmission at a time. If two transfers are required at the same time, two buses must be used. A large digital system will normally employ a number of buses with each of the registers connected to one or more buses to form the various paths needed for the transfer of information.

A bus system may be formed with multiplexer and demultiplexer circuits (Sec. 4-9). A multiplexer circuit selects data from many lines and directs it to a single output line. A demultiplexer circuit receives information from one line and distributes it over a large number of destinations.
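A toy simulation of the bus transfer of Fig. 8-4 (our sketch; the register names follow the figure, everything else is invented): each source register drives the bus through AND gates whose outputs are wired-OR'd together, and the selected destination latches the bus on the clock pulse.

```python
# Bus transfer: AND each source bit with its select line, wire-OR onto the
# bus, then gate the bus into the destination register.
def bus_transfer(registers, src, dst):
    n = len(registers[src])
    bus = [0] * n
    for name, reg in registers.items():
        select = 1 if name == src else 0      # only one bus driver enabled
        for i in range(n):
            bus[i] |= reg[i] & select         # AND gate feeding a wired-OR
    registers[dst] = list(bus)                # destination gates pass the bus

regs = {"R1": [1, 0, 1, 1], "R2": [0, 1, 1, 0], "R3": [0, 0, 0, 0]}
bus_transfer(regs, "R1", "R3")
print(regs["R3"])   # -> [1, 0, 1, 1]; R1 and R2 are unchanged
```

Because only one select line is 1 at a time, the wired-OR carries exactly the source register's bits — the single-transmission limitation of a bus in miniature.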
Thus, multiplexers can function as BD circuits and demultiplexers as the input gating circuits. The selection of the registers is done by the selection lines of the two circuits. The multiplexer selection lines determine the source register and the demultiplexer selection lines determine the destination register.

Figure 8-4  Transfer through a bus

8-2 REGISTER OPERATIONS

An elementary operation is an operation performed during one clock pulse on the information stored in a register. The result of the operation may replace the previous binary information of the register or may be transferred to another register. An inter-register transfer is an elementary operation. Other examples are shift, count, add, and clear. Since an elementary operation is completed during one clock pulse, it is sometimes called a micro-operation. It is the most basic operation that exists in a digital system.

A register capable of performing elementary operations is an operational register. As shown in Fig. 8-5, an operational register consists of a group of flip-flops and a combinational circuit that performs the required operations. All processor registers, including shift-registers and those involved with parallel transfers, are classified as operational registers. For this reason, when dealing with register operations we shall refer to an operational register as a register; that is, the word register will include the group of flip-flops and its associated combinational circuit.

Figure 8-5  Block diagram of an operational register

The following chapters will show that all data processing performed in digital systems is implemented through a sequence of elementary operations performed on data stored in registers.
The purpose of this section is to introduce a symbolic notation to describe register transfers and elementary operations in a concise and precise manner. From a list of register operations, it is then possible to obtain the Boolean functions for the system.

The register transfer symbols are defined in Table 8-1. Registers are denoted by capital letters and the individual flip-flops of a register by the letter designation with a subscript. Thus, A denotes a register; A3 denotes flip-flop 3 of register A. The subscript number of each flip-flop must be part of the definition of a register. In this book we shall adopt the convention of numbering the flip-flops of a register in ascending order starting from the right-most position. However, any other convenient numbering of the flip-flops is possible. Figure 8-6 shows a block diagram of an eight-bit register denoted by A. The individual flip-flops (together with their associated combinational gates) are numbered from right to left with subscripts 1 through 8.

Table 8-1  Register transfer symbolic notation

Symbol                 Description
capital letter         denotes a register
subscript              denotes a flip-flop in a register
parentheses ( )        denotes contents of a register
double-line arrow ⇒    denotes transfer of information
brackets [ ]           denotes a portion of a register (see Sec. 10-2)
brackets < >           denotes a memory register specified by an address (see Sec. 8-3)
plus +                 denotes arithmetic addition
minus −                denotes subtraction
disjunction ∨          denotes logical OR operation (see Sec. 9-6)
conjunction ∧          denotes logical AND operation (see Sec. 9-6)
exclusive-or ⊕         denotes exclusive-or operation (see Sec. 9-6)
bar ¯                  denotes complementation
equality =             denotes equality
colon :                denotes termination of a Boolean control function
comma ,                separates two elementary operations which are performed simultaneously during one clock pulse
Figure 8-6  Block diagram of an 8-bit register denoted by the letter symbol A

Parentheses denote the contents of a register or flip-flop. Thus (A) signifies the contents of register A and (Ai) means the contents of the ith flip-flop of register A. The double-line arrow denotes the transfer of information between registers. Thus,

(A) ⇒ B

represents a symbolic notation for the parallel transfer of the contents of register A into register B. We shall assume that the contents of register A are not changed during the transfer unless otherwise noted.

The serial transfer of information from register A to register B is done with shift-registers as shown in the block diagram of Fig. 8-7. The right-most flip-flop of register A is labeled A1 and the left-most flip-flops of registers A and B are labeled An and Bn, respectively. When the shift-right command signal SR is logic-1 and a clock pulse occurs, the contents of registers A and B are shifted once to the right and the value of A1 is transferred to flip-flops Bn and An. This causes the transfer of one bit of information from register A to B and at the same time one bit is recirculated back to register A. This transfer can be expressed by means of symbolic notation as follows:

SR:  (A1) ⇒ Bn,  (A1) ⇒ An,  (Ai+1) ⇒ Ai,  (Bi+1) ⇒ Bi;   i = 1, 2, 3, ..., n − 1

The control function SR is terminated by a colon and designates a Boolean condition; i.e., the register operations listed after the colon are performed only if SR = 1. The elementary operations are separated by a comma and are performed simultaneously during one clock pulse. The subscript i denotes the individual flip-flops of the register. Note that an elementary operation, by definition, is completed during one clock pulse; therefore, the above symbolic statement designates a transfer of only one bit.

Figure 8-7  Serial transfer by shift-registers
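The shift-right statement above can be animated with a small Python sketch (ours, not the book's); list index 0 plays the role of A1 and index n − 1 plays An:

```python
# Serial transfer with recirculation, as in Fig. 8-7: each pulse shifts A
# and B right; A1 goes to both Bn and An, so A keeps its own contents.
def serial_transfer(A, B):
    n = len(A)
    for _ in range(n):            # SR held at 1 for n clock pulses
        a1 = A[0]                 # value of A1 before the pulse
        A = A[1:] + [a1]          # (Ai+1) => Ai,  (A1) => An
        B = B[1:] + [a1]          # (Bi+1) => Bi,  (A1) => Bn
    return A, B

A, B = [1, 0, 1, 1], [0, 0, 0, 0]
A2, B2 = serial_transfer(A, B)
print(A2, B2)   # -> [1, 0, 1, 1] [1, 0, 1, 1]
```

After n pulses, B holds a copy of A and A is back to its original value — exactly the recirculating behavior the symbolic statement describes one pulse at a time.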
For a complete transfer of all n bits, the command signal Sr must remain a 1 for a period of n clock pulses. Other examples of elementary operations are listed in Table 8-2 for reference.

A sequence of elementary operations is usually conditioned by control functions. The operations are assumed to follow the listed sequence when the control functions are not included. For example,

f1:  (A-bar) => A,  0 => B
F2t2:  (A) + (B) => A

means that when Boolean variable f1 is logic-1, register A is complemented and register B is cleared. When Boolean function F2t2 = 1, the contents of registers A and B are added and the sum is transferred to register A. On the other hand, the following sequence:

(B) => C,  (B) => T
(C) + 1 => C,  (T) - 1 => T

does not include control functions and therefore denotes a sequence of four elementary operations performed during two clock pulses. During the first clock pulse, the two elementary operations listed in the first line (separated by a comma) are executed: the contents of register B are transferred to register C and to register T. During the next clock pulse, the two elementary operations listed in the second line are executed: the content of register C is increased by unity and that of register T is decreased by unity. In Ch. 9 we demonstrate the procedure for obtaining the Boolean functions from the list of register elementary operations. In Chs. 10 and 11, we use register operation sequences to specify the internal operations of digital computers.

8-3 THE MEMORY UNIT

The registers in a digital computer may be classified as either operational or storage. As mentioned previously, an operational register is capable of storing binary information in its flip-flops and, in addition, has combinational gates capable of data-processing tasks. A storage register is used solely for temporary storage of binary information. This information cannot be altered when transferred in and out of the register.
A memory unit is a collection of storage registers, together with the associated circuits needed to transfer information in and out of the registers. The storage registers in a memory unit are called memory registers. The bulk of the registers in a digital computer are memory registers, to which information is transferred for storage and from which information is available when needed for processing. A comparatively few operational registers are found in the processor unit. When data processing takes place, the information from selected registers in the memory unit is first transferred to the operational registers in the processor unit. Intermediate and final results obtained in the operational registers are transferred back to selected memory registers. Similarly, binary information received from input devices is first stored in memory registers; information transferred to output devices is taken from registers in the memory unit.

Table 8-2. Examples of symbolic notation and elementary operations

Symbolic designation    Description
A                       all bits of register A
Ai                      the ith flip-flop of register A
A(1-8)                  contents of flip-flops 1 through 8 of register A
(A) => B                contents of register A are transferred to register B
0 => C                  clear register C (transfer 0's to all flip-flops)
(Ai+1) => Ai            contents of flip-flop i+1 of register A are transferred to flip-flop i of register A
I[T]                    subregister T which is part of register I (see Sec. 10-2)
<D>                     memory register specified by the address in the D register (see Sec. 8-3)
(A) + (B) => A          contents of register A are added to contents of register B and the sum transferred to register A
(A-bar) => A            complement all flip-flops in register A
(A) = (B)               contents of register A are equal to contents of register B
(A) <=> (C)             swap contents of registers A and C during one clock pulse
P: (C) + 1 => C         if Boolean variable P is logic-1, then increment (increase by one) register C
The component that forms the binary cells of registers in a memory unit must have certain basic properties, the most important of which are: (a) it must have a reliable two-state property for binary representation, (b) it must be small in size, (c) the cost per bit of storage should be as low as possible, and (d) the time of access to a memory register should be reasonably fast. Examples of memory unit components are: magnetic cores, semiconductor ICs, and magnetic surfaces in tapes, drums, or disks.

A memory unit stores binary information in groups called words, each word being stored in a memory register. A word in memory is an entity of n bits that moves in and out of storage as a unit. A memory word may represent an operand, an instruction, a group of alphanumeric characters, or any binary-coded information. The communication between a memory unit and its environment is achieved through two control signals and two external registers. The control signals specify the direction of transfer required; that is, whether a word is to be stored in a memory register or whether a word previously stored is to be transferred out of a memory register. One external register specifies the particular memory register chosen out of the thousands available; the other specifies the particular bit configuration of the word in question. The control signals and the registers are shown in the block diagram of Fig. 8-8. The memory address register specifies the memory word selected. Each word in memory is assigned a number identification starting from 0 up to the maximum number of words available. To communicate with a specific memory word, its location number, or address, is transferred to the address register. The internal circuits of the memory unit accept this address from the register and open the paths needed to select the word called. An address register with n bits can specify up to 2^n memory words.
Computer memory units can range from 1024 words, requiring an address register of 10 bits, to 1,048,576 = 2^20 words, requiring a 20-bit address register.

The two control signals applied to the memory unit are called read and write. A write signal specifies a transfer-in function; a read signal specifies a transfer-out function. Each is referenced from the memory unit. Upon accepting one of the control signals, the internal control circuits inside the memory unit provide the desired function. Certain types of storage units, because of their component characteristics, destroy the information stored in a cell when the bit in that cell is read out. Such a unit is said to be a

[Figure 8-8: Block diagram of a memory unit showing communication with the environment; read and write control signals, a memory of n bits per word, a memory address register, and a memory buffer register for input and output information]

destructive read-out memory, as opposed to a nondestructive memory, where the information remains in the cell after it is read out. In either case, the old information is always destroyed when new information is written. The sequence of internal control in a destructive read-out memory must provide control signals that will cause the restoration of the word back into its binary cells if the application calls for a nondestructive function.

The information transferred to and from the registers in memory and the external environment is communicated through one common register called the memory buffer register (other names are: information register and storage register). When the memory unit receives a write control signal, the internal control interprets the contents of the buffer register to be the bit configuration of the word to be stored in a memory register. With a read control signal, the internal control sends the word from a memory register into the buffer register.
In each case, the contents of the address register specify the particular memory register referenced for writing or reading.

Let us summarize the information transfer characteristics of a memory unit by an example. Consider a memory unit of 1024 words with eight bits per word. To specify 1024 words we need an address of 10 bits, since 2^10 = 1024. Therefore, the address register must contain 10 flip-flops. The buffer register must have eight flip-flops to store the contents of words transferred in and out of memory. Let us use the following symbolic notation for the various registers:

D       address register
B       buffer register
(D)     contents of D register
<D>     memory register addressed by D
(<D>)   the contents of <D>

Figure 8-9 shows the initial contents of three registers: the D register, the B register, and the memory register addressed by D:

(D) = 0000101010 = decimal 42   (contents of D register)
<D> = memory register number 42
(<D>) = 01101110                (contents of memory register number 42)
(B) = 10010010                  (contents of buffer register)

The following elementary operations are needed to read the contents of memory register number 42:

0000101010 => D
read: (<D>) => B

The binary address is first transferred to the D register. A read control signal applied to the memory unit causes a transfer from the specified

[Figure 8-9: Initial values of registers; the memory unit holds addresses 0 through 1023, with word 42 containing 01101110; the address register (D) holds 0000101010 and the buffer register (B) holds 10010010]

memory register to the B register. This transfer is depicted in Fig. 8-10(a). A write control signal applied to the memory unit causes a transfer of information as shown in Fig. 8-10(b). The following elementary operations describe the transfer of the eight-bit word 10010010 into memory register 42:

0000101010 => D,  10010010 => B
write: (B) => <D>

The address is transferred to the D register and the contents of the word are transferred into the B register.
A write control signal transfers the contents of B into the memory register specified by the D register.

[Figure 8-10: Information transfer during read and write. (a) Read: (<D>) => B; word 42 (01101110) is copied into the buffer register. (b) Write: (B) => <D>; the buffer register contents (10010010) are copied into word 42]

In the above example we have assumed a memory unit with the nondestructive read-out property. Such memories can be constructed with semiconductor ICs. They retain the information in the memory register when the register is sampled during the reading process, so that no loss of information occurs. Another component commonly used in memory units is the magnetic core. A magnetic core characteristically has destructive read-out; i.e., it loses the stored binary information during the reading process. Examples of semiconductor and magnetic core memories are presented in Sec. 8-4.

Because of its destructive read-out property, a magnetic core memory must provide additional control functions to restore the word back into the memory register. A read control signal applied to a magnetic core memory unit causes information transfer as depicted in Fig. 8-11. This can be expressed symbolically as follows:

read:  t1: (<D>) => B,  0 => <D>    destructive read-out
       t2: (B) => <D>               restore contents to memory register

During the first half-cycle, designated by t1, the contents of the memory register are transferred to the B register. Since this is a destructive read-out memory, the contents of the memory register are destroyed. The elementary operation 0 => <D> designates the fact that the memory register is automatically cleared; i.e., it receives all 0's during the process of reading. Without this additional designation, we would have to assume that the contents of <D> have not changed. To restore the previously stored word into the memory register, we need a second half-cycle t2. Remember that the B register holds the word just read from memory and the D register holds the address of the memory register, so that the transfer during t2 automatically restores the lost information.
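As a behavioral sketch of our own (an illustration, not the book's design), the read and write cycles of such a memory can be modeled in Python; the two half-cycles of the destructive read-out are written out explicitly so the restore during t2 is visible:

```python
# Hypothetical model of a 1024-word x 8-bit memory with destructive
# read-out (as in a magnetic core memory).  D is the address register,
# B the buffer register; <D> is the memory word selected by D.

class CoreMemory:
    def __init__(self, words=1024):
        self.mem = [0] * words    # the memory registers
        self.D = 0                # address register (10 bits here)
        self.B = 0                # buffer register (8 bits here)

    def read(self):
        # t1: (<D>) => B, 0 => <D>   (destructive read-out)
        self.B = self.mem[self.D]
        self.mem[self.D] = 0
        # t2: (B) => <D>             (restore; read looks nondestructive)
        self.mem[self.D] = self.B
        return self.B

    def write(self):
        # t1: 0 => <D>               (clear the selected word)
        self.mem[self.D] = 0
        # t2: (B) => <D>             (transfer the new word)
        self.mem[self.D] = self.B

m = CoreMemory()
m.mem[42] = 0b01101110

m.D = 0b0000101010               # 0000101010 => D  (address 42)
print(format(m.read(), "08b"))   # read: (<D>) => B  ->  01101110

m.D, m.B = 42, 0b10010010        # 0000101010 => D, 10010010 => B
m.write()                        # write: (B) => <D>
print(format(m.mem[42], "08b"))  # 10010010
```

To the outside, `read` behaves nondestructively because the restore half-cycle is hidden inside it, which is exactly the role of the memory's internal control.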
A write control signal applied to a magnetic core memory causes a transfer of information as shown in Fig. 8-12. In order to transfer new

[Figure 8-11: Information transfer in a magnetic core memory during a read command; the destructive read leaves word 42 at 00000000 with 01101110 in the buffer register, and the restore half-cycle returns the contents to the memory register]

[Figure 8-12: Information transfer in a magnetic core memory during a write command; the memory register is first cleared to 00000000, then the word 10010010 is written from the buffer register]

information into magnetic cores, the old information must be erased first by clearing all cells to zero (see Sec. 8-4). Therefore, the memory register is cleared during the first half-cycle, and only during the second half-cycle are the contents of the B register transferred to the designated memory register. In symbolic notation we have

write:  t1: 0 => <D>       clear memory register
        t2: (B) => <D>     transfer word to memory register

The sum of the two half-cycles t1 + t2, in either the read or the write condition, is called the memory cycle time.

The mode of access of a memory system is determined by the type of components used. In a random-access memory, the registers may be thought of as being separated in space, with each register occupying one particular spatial location, as in a magnetic core memory. In a sequential-access memory, the information stored in some medium is not immediately accessible but is available only at certain intervals of time. A magnetic tape unit is of this type. Each memory location passes the read and write heads in turn, but information is read out only when the requested word has been reached. The access time of a memory is the time required to select a word and either read or write it. In a random-access memory, the access time is always the same regardless of the word's particular location in space.
In a sequential memory, the access time depends on the position of the word at the time of request. If the word is just emerging out of storage at the time it is requested, the access time is just the time necessary to read or write it. But if the word happens to be in the last position, the access time also includes the time required for all the other words to move past the terminals. Thus, the access time in a sequential memory is variable. Table 8-3 lists typical access times for various memory types.

Table 8-3. Access time and access mode of memory types

Storage type        Typical access time    Access mode
magnetic cores      1 usec                 random
plated wire         300 nsec               random
semiconductor IC    400 nsec               random
magnetic drum       10-40 msec             sequential
magnetic tape       0.05-500 sec           sequential
magnetic disks      10-80 msec             sequential

Memory units whose components lose stored information with time or when the power is turned off are said to be volatile. A semiconductor memory unit is of this category, since its binary cells need external power to maintain the needed signals. By contrast, a nonvolatile memory unit, such as magnetic core or magnetic disk, retains its stored information after removal of power. This is because the stored information in magnetic components is manifested by the direction of magnetization, which is retained when power is turned off. A nonvolatile property is desirable in digital computers because many useful programs are left permanently in the memory unit. When power is turned off and then on again, the previously stored programs and other information are not lost but continue to reside in memory.

8-4 EXAMPLES OF RANDOM-ACCESS MEMORIES

The internal construction of two different types of random-access memories is presented diagrammatically in this section. The first is constructed with flip-flops and gates and the second with magnetic cores.
To be able to include the entire memory unit in one diagram, it is necessary that a limited storage capacity be used. For this reason, the memory units presented here have a small capacity of 12 bits, arranged in four words of 3 bits each. Commercial random-access memories may have a capacity of thousands of words, and each word may range somewhere between 8 and 64 bits. The logical construction of large-capacity memory units would be a direct extension of the configuration shown here.

The logic diagram of a memory unit that uses flip-flops and gates is shown in Fig. 8-13. The entire unit may be constructed physically with ICs deposited in one semiconductor chip. The binary cell that stores one bit of information consists of an RS flip-flop, three AND gates, and an inverter. The input gates allow information to be transferred into the flip-flop when the write signal is logic-1. An output gate samples the output of the flip-flop when the read signal is logic-1.

[Figure 8-13: Semiconductor IC memory unit; four words of three bits each, where each binary cell (BC) contains the RS flip-flop and its gates, the address register feeds a decoder that selects one word, and input and output information pass through the buffer register]

Each small box labeled BC in the diagram includes within it the circuit of a binary cell. The four lines going into each BC box designate the three inputs and one output as specified in the detailed diagram of the binary cell. To distinguish between four words we need a two-bit address. Therefore, the address register must have two flip-flops. To store a word of three bits, we need a buffer register with three flip-flops. The input information to be stored in memory is first transferred into the buffer register. The output information read from memory is available in the buffer register.
The address of the word held in the address register goes through a decoder circuit, and each of the four outputs of the decoder is applied to the inputs of two gates. One of these gates receives the read signal and the other, the write signal. When the read signal is logic-1, the gates that go to the read inputs of the binary cells become logic-1, and the contents of the three cells of the word specified by the address register are transferred to the buffer register. When the write signal is logic-1, the gates that go to the write inputs of the binary cells become logic-1, and the contents of the buffer register are transferred to the three cells of the word specified by the address register. These two operations perform the required read and write transfers as specified in Sec. 8-3.

A magnetic core memory uses magnetic cores to store binary information. A magnetic core is a doughnut-shaped toroid made of magnetic material. In contrast to a semiconductor flip-flop, which needs only one physical quantity such as voltage for its operation, a magnetic core employs three physical quantities: current, magnetic flux, and voltage. The signal that excites the core is a current pulse in a wire passing through the core. The binary information stored is represented by the direction of magnetic flux within the core. The output binary information is extracted from a wire linking the core in the form of a voltage pulse.

The physical property that makes a magnetic core suitable for binary storage is its hysteresis loop, as shown in Fig. 8-14(c). This is a plot of current vs. magnetic flux and has the shape of a square loop. With zero current, a flux which is either positive (counterclockwise direction) or negative (clockwise direction) remains in the magnetized core. One direction, say counterclockwise magnetization, is used to represent a 1 and the other to represent a 0. A pulse of current applied to the winding through the core can shift the direction of magnetization.
As shown in Fig. 8-14(a), current in the downward direction produces flux in the clockwise direction, causing the core to go to the 0 state. Figure 8-14(b) shows the current and flux directions for storing a 1. The path that the flux takes when the current pulse is applied is indicated by arrows in the hysteresis loop.

[Figure 8-14: Storing a bit into a magnetic core; (a) current and flux directions for storing a 0, (b) for storing a 1, (c) the square hysteresis loop of flux vs. current with its negative and positive remanent states]

[Figure 8-15: Reading a bit from a magnetic core; the read current pulse vs. time, and the sense-wire output, which is a large voltage pulse when reading a 1 and only a small disturbance when reading a 0]

Reading out the binary information stored in the core is complicated by the fact that flux cannot be detected when it is not changing. However, if flux is changing with respect to time, it induces a voltage in a wire that links the core. Thus, read-out can be accomplished by applying a current in the negative direction, as shown in Fig. 8-15. If the core is in the 1 state, the current reverses the direction of magnetization and the resulting change of flux produces a voltage pulse in the sense wire. If the core is already in the 0 state, the negative current leaves the core magnetized in the same direction, causing a very slight disturbance of magnetic flux which results in a very small output voltage in the sense wire. Note that this is a destructive read-out, since the read current always returns the core to the 0 state. The previously stored value is lost.

Figure 8-16 shows the organization of a magnetic core memory containing four words with three bits each. Comparing it with the semiconductor memory unit of Fig. 8-13, we note that the buffer register, address register, and decoder are exactly the same. The binary cell now is a magnetic core and the wires linking it. The excitation of the core is accomplished by means of a current pulse generated in a driver (abbreviated DR). The output information goes through a sense amplifier (abbreviated
SA), whose outputs set corresponding flip-flops in the buffer register.

[Figure 8-16: Magnetic core memory unit; four words of three bits each, with word-drivers (DR) selected by the decoder, bit-drivers driven by the buffer register, and sense amplifiers (SA) feeding the buffer register]

Three wires link each core. The word-wire is excited by a word-driver and goes through the three cores of a word. A bit-wire is excited by a bit-driver and goes through the four cores in the same bit position. The sense-wire links the same cores as the bit-wire and is applied to a sense amplifier that shapes the voltage pulse when a 1 is read and rejects the small disturbance when a 0 is read.

During a read operation, a word-driver current pulse is applied to the cores of the word selected by the decoder. The read current is in the negative direction (Fig. 8-15) and causes all cores of the selected word to go to the 0 state, irrespective of their previous state. Cores which previously contained a 1 switch their flux and induce a voltage into their sense-wire. The flux of cores which already contained a 0 is not changed. The voltage pulse on the sense-wire of a core with a previous 1 is amplified in the SA and sets the corresponding flip-flop in the buffer register.

During a write operation, the buffer register holds the information to be stored in the word specified by the address register. We assume that all cores in the selected word are initially cleared; i.e., all are in the 0 state, so that cores requiring a 1 need to undergo a change of state. A current pulse is generated simultaneously in the word-driver selected by the decoder and in each bit-driver whose corresponding buffer register flip-flop contains a 1. Both currents are in the positive direction, but their magnitude is only half that needed to switch the flux to the 1 state.
This half current by itself is too small to change the direction of magnetization, but the sum of two half currents is enough to switch the direction of magnetization to the 1 state. A core switches to the 1 state only if there is a coincidence of two half currents from a word-driver and a bit-driver. The direction of magnetization of a core does not change if it receives only half current from one of the drivers. The result is that the magnetization of cores is switched to the 1 state only where the word and bit wires intersect; that is, only in the selected word and only in the bit positions in which the buffer register holds a 1.

The read and write operations described above are incomplete. This is because the information stored in the selected word is destroyed by the reading process, and the write operation works properly only if the cores are initially cleared. As mentioned in Sec. 8-3, a read operation must be followed by another cycle that restores the values previously stored in the cores. A write operation is preceded by a cycle that clears the cores of the selected word. The restore operation during a read cycle is equivalent to a write operation which, in effect, rewrites the previously read information from the buffer register back into the word selected. The clear operation during a write cycle is equivalent to a read operation which destroys the stored information but prevents the read information from reaching the buffer register by inhibiting the SA. Restore and clear cycles are normally initiated by the memory internal control, so that the memory unit appears to the outside as having a nondestructive read-out property.

Many types of memory units other than the two presented in this section have been used in digital computers. Information about other types of memories, including an extensive bibliography, can be found in references 4-6.
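The coincident-current selection rule can be sketched as a small simulation (our own illustration, not from the text): a core flips to 1 only when it receives both half currents, i.e., only when it lies at the intersection of an energized word-wire and an energized bit-wire:

```python
# Hypothetical sketch of coincident-current core selection for a
# 4-word x 3-bit core plane.  A core switches to 1 only if the sum of
# the currents through it reaches a full switching current (two halves).

HALF = 0.5   # half the switching current, in arbitrary units

def write_word(cores, word, bits):
    """Drive the selected word-wire and the bit-wires for 1-bits."""
    for w in range(len(cores)):
        for b in range(len(cores[w])):
            current = 0.0
            if w == word:
                current += HALF          # word-driver half current
            if bits[b] == 1:
                current += HALF          # bit-driver half current
            if current >= 2 * HALF:      # coincidence: core switches to 1
                cores[w][b] = 1
            # a single half current leaves the core unchanged

cores = [[0, 0, 0] for _ in range(4)]    # all words initially cleared
write_word(cores, word=2, bits=[1, 0, 1])

print(cores[2])   # [1, 0, 1]  -- only the selected word changed
print(cores[0])   # [0, 0, 0]  -- half-selected cores keep their state
```

The half-selected cores (those on only one energized wire) are left undisturbed, which is why the word must be cleared before writing: a 0 cannot be written into a core that already holds a 1 by this mechanism alone.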
8-5 DATA REPRESENTATION IN REGISTERS

The bit configuration found in registers represents either data or control information. Data are operands and other discrete elements of information operated on to achieve required results. Control information is a bit or group of bits that specifies the operations to be done. A unit of control information stored in digital computer registers is called an instruction; it is a binary code that serves to specify the operations to be performed on the stored data. Instruction codes and their representation in registers are presented in Sec. 10-1. Some commonly used types of data and their representation in registers are presented in this section.

Binary Numbers

A register with n flip-flops can store a binary number of n bits; each flip-flop represents one binary digit. This represents the magnitude of the number but does not give information about its sign or the position of the binary point. The sign is needed for arithmetic operations, as it shows whether the number is positive or negative. The position of the binary point is needed to represent integers, fractions, or mixed integer-fraction numbers.

The sign of a number is a discrete quantity of information having two values: plus and minus. These two values can be represented by a code of one bit. The convention is to represent a plus with a 0 and a minus with a 1. To represent a signed binary number in a register we need n + 1 flip-flops: n flip-flops for the magnitude and one for storing the sign of the number.

The representation of the binary point is complicated by the fact that it is characterized by a position between two flip-flops in the register. There are two possible ways of specifying the position of the binary point in a register: by giving it a fixed-point position or by employing a floating-point representation. The fixed-point method assumes that the binary point is always fixed in one position.
The two positions most widely used are: (a) a binary point in the extreme left of the register, to make the stored number a fraction, or (b) a binary point in the extreme right of the register, to make the stored number an integer. In either case, the binary point is not physically visible but is assumed from the fact that the number stored in the register is treated as a fraction or as an integer. The floating-point representation uses a second register to store a number that designates the position of the binary point in the first register. Floating-point representation is explained in more detail below.

When a fixed-point binary number is positive, the sign is represented by a 0 and the magnitude by a positive binary number. When the number is negative, the sign is represented by a 1 and the magnitude may be represented in any one of three different ways. These are:

1. sign-magnitude
2. sign-1's complement
3. sign-2's complement

In the sign-magnitude representation, the magnitude is represented by a positive binary number. In the other two representations, the number is either in 1's or 2's complement. If the number is positive, the three representations are the same. As an example, the binary number 9 is written below in the three representations. It is assumed that a seven-bit register is available to store the sign and the magnitude of the number.

                          +9          -9
sign-magnitude            0 001001    1 001001
sign-1's complement       0 001001    1 110110
sign-2's complement       0 001001    1 110111

A positive number in any representation has a 0 in the left-most bit for a plus, followed by a positive binary number. A negative number always has a 1 in the left-most bit for a minus, but the magnitude bits are represented differently.
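The difference can be sketched concretely (our own illustration, not from the text) for the seven-bit format of the example, with 1 sign bit and 6 magnitude bits:

```python
# Our own sketch: the three signed representations for a register of
# 1 sign bit + BITS magnitude bits, reproducing the +9/-9 example.

BITS = 6   # magnitude bits; the register width is BITS + 1

def sign_magnitude(n):
    return ("1" if n < 0 else "0") + format(abs(n), f"0{BITS}b")

def sign_ones_complement(n):
    # for negative n, complement every bit of the positive word
    value = n if n >= 0 else (2**(BITS + 1) - 1) + n
    return format(value, f"0{BITS + 1}b")

def sign_twos_complement(n):
    value = n if n >= 0 else 2**(BITS + 1) + n
    return format(value, f"0{BITS + 1}b")

print(sign_magnitude(-9))        # 1001001
print(sign_ones_complement(-9))  # 1110110
print(sign_twos_complement(-9))  # 1110111
```

For +9, all three functions return the same word, 0001001, in agreement with the table above.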
In the sign-magnitude representation, these bits are the positive number; in the 1's-complement representation, these bits are the complement of the binary number; and in the 2's-complement representation, the number is in its 2's-complement form.

The addition and subtraction of two numbers in sign-magnitude representation is identical to paper-and-pencil arithmetic, but the machine implementation of this calculation is somewhat involved and inefficient. On the other hand, the rule for addition and subtraction of two numbers in complement representation is much simpler to implement. The algorithm and implementation of sign-magnitude addition and subtraction can be found in Ch. 12. Binary arithmetic with sign-complement representation is treated in Sec. 9-6.

Decimal Numbers

The representation of decimal numbers in registers is a function of the binary code used to represent a decimal digit. A four-bit decimal code, for example, requires four flip-flops for each decimal digit. The representation of +4385 in BCD requires at least 17 flip-flops: 1 flip-flop for the sign and 4 for each digit. This number is represented in a register with 25 flip-flops as follows:

+   0    0    4    3    8    5
0  0000 0000 0100 0011 1000 0101

By representing numbers in decimal, we are wasting a considerable amount of storage space, since the number of flip-flops needed to store a decimal number in a binary code is greater than the number of flip-flops needed for its equivalent binary representation. Also, the circuits required to perform decimal arithmetic are much more complex. However, there are some advantages in the use of decimal representation, mostly because computer input and output data are generated by people who always use the decimal system. A computer that uses binary representation for arithmetic operations requires data conversion from decimal to binary prior to performing calculations. Binary results must be converted back to decimal for output.
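This BCD packing can be sketched in Python (our own illustration, not from the text); each decimal digit occupies four bits, preceded by a one-bit sign, with the magnitude zero-padded to fill the register:

```python
# Hypothetical BCD register packing: 1 sign bit + 4 bits per decimal
# digit, zero-padded on the left to a fixed number of digit positions.

def to_bcd_register(n, digits=6):
    """Encode a signed decimal integer as sign + BCD digit fields."""
    sign = "1" if n < 0 else "0"
    text = str(abs(n)).zfill(digits)          # e.g. 4385 -> "004385"
    return sign + "".join(format(int(d), "04b") for d in text)

word = to_bcd_register(4385)
print(word)        # 0000000000100001110000101
print(len(word))   # 25 flip-flops
```

With six digit positions the register takes 1 + 6 x 4 = 25 flip-flops, reproducing the bit string shown above for +004385.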
This procedure is time consuming; it is worth using provided the amount of arithmetic operations is large, as is the case with scientific applications. Some applications, such as business data processing, require only small amounts of arithmetic calculation. For this reason, some computers perform arithmetic calculations directly on decimal data (in binary code) and thus eliminate the need for conversion to binary and back to decimal. Large-scale computer systems usually have hardware for performing arithmetic calculations both in binary and in decimal representation. The user can specify by programmed instructions whether he wants the computer to perform calculations on binary or decimal data. A decimal adder is introduced in Sec. 12-2.

There are three ways to represent negative fixed-point decimal numbers. They are similar to the three representations of a negative binary number except for the radix change:

1. sign-magnitude
2. sign-9's complement
3. sign-10's complement

For all three representations, a positive decimal number is represented by a 0 (for plus) followed by the magnitude of the number. It is in regard to negative numbers that the representations differ. In sign-magnitude representation, the sign of a negative number is represented by a 1 and the magnitude of the number is positive. In the other two representations, the magnitude is represented by the 9's or 10's complement.

The sign of a decimal number is sometimes taken as a four-bit quantity to conform with the four-bit representation of digits. For example, one very popular computer uses the code 1100 for plus and 1101 for minus.

Floating-Point Representation

Floating-point representation of numbers needs two registers.
The first represents a signed fixed-point number and the second, the position of the radix point. For example, the representation of the decimal number +6132.789 is as follows:

sign |- initial decimal point     sign |
 0 6132789                         0 04
 first register                    second register
 (coefficient)                     (exponent)

The first register has a 0 in the most significant flip-flop position to denote a plus. The magnitude of the number is stored in a binary code in 28 flip-flops, with each decimal digit occupying 4 flip-flops. The number in the first register is considered a fraction, so the decimal point in the first register is fixed at the left of the most significant digit. The second register contains the decimal number +4 (in binary code) to indicate that the actual position of the decimal point is four decimal positions to the right. This representation is equivalent to the number expressed by a fraction times 10 to an exponent; that is, +6132.789 is represented as +.6132789 X 10^4. Because of this analogy, the contents of the first register are called the coefficient (and sometimes mantissa or fractional part) and the contents of the second register, the exponent (or characteristic). The position of the actual decimal point may be outside the range of digits of the coefficient register. For example, assuming sign-magnitude representation, the following contents

 0 2601000          1 04
 coefficient        exponent

represent the number +.2601000 X 10^-4 = +.00002601000, which produces four more 0's on the left. On the other hand, the following contents

 1 2601000          0 12
 coefficient        exponent

represent the number -.2601000 X 10^12 = -260100000000, which produces five more 0's on the right. In the above examples, we have assumed that the coefficient is a fixed-point fraction. Some computers assume it to be an integer, so the initial decimal point in the coefficient register is to the right of the least significant digit.
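The split into a fraction coefficient and an exponent can be sketched as follows. This is a hypothetical helper for magnitudes of 1 or more; register widths and the BCD encoding are not modeled:

```python
def to_float_registers(number):
    """Split a decimal number (magnitude >= 1) into a fraction coefficient
    and an exponent, so that the value equals coefficient x 10**exponent.
    Illustrative sketch only."""
    sign = '+' if number >= 0 else '-'
    text = f"{abs(number):f}".rstrip('0').rstrip('.')   # e.g. "6132.789"
    int_part, _, frac_part = text.partition('.')
    coefficient_digits = int_part + frac_part
    exponent = len(int_part)    # the point moves this many places to the right
    return sign + '.' + coefficient_digits, exponent

# +6132.789 is held as coefficient +.6132789 and exponent +4
print(to_float_registers(6132.789))  # ('+.6132789', 4)
```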
Another arrangement used for the exponent is to remove its sign bit altogether and consider the exponent as being "biased." For example, numbers between 10^-50 and 10^49 can be represented with an exponent of two digits (without sign bit) and a bias of 50. The exponent register always contains the number E + 50, where E is the actual exponent. The subtraction of 50 from the contents of the register gives the desired exponent. This way, positive exponents are represented in the register in the range of numbers from 50 to 99. The subtraction of 50 gives the positive values from 00 to 49. Negative exponents are represented in the register in the range of 00 to 49. The subtraction of 50 gives the negative values in the range of -50 to -1. A floating-point binary number is similarly represented with two registers, one to store the coefficient and the other, the exponent. For example, the number +1001.110 can be represented as follows:

sign |- initial binary point     sign |
 0 100111000                      0 0100
 coefficient                      exponent

The coefficient register has 10 flip-flops: 1 for sign and 9 for magnitude. Assuming that the coefficient is a fixed-point fraction, the actual binary point is four positions to the right, so the exponent has the binary value +4. The number is represented in binary as .100111000 X 10^100 (remember that 10^100 in binary is equivalent to decimal 2^4). Floating-point is always interpreted to represent a number in the following form:

c X r^e

where c represents the contents of the coefficient register and e the contents of the exponent register. The radix (base) r and the radix-point position in the coefficient are always assumed. Consider, for example, a computer that assumes integer representation for the coefficient and base 8 for the exponent. The octal number +17.32 = +1732 X 8^-2
will look like this:

sign |- initial octal point     sign |
 0 1732                          1 02
 coefficient                     exponent

When the octal representation is converted to binary, the binary value of the registers becomes

 0 001111011010          1 000010
 coefficient             exponent

A floating-point number is said to be normalized if the most significant position of the coefficient contains a nonzero digit. In this way, the coefficient has no leading zeros and contains the maximum possible number of significant digits. Consider, for example, a coefficient register that can accommodate five decimal digits and a sign. The number +.00357 X 10^3 = 3.57 is not normalized because it has two leading zeros and the unnormalized coefficient is accurate to only three significant digits. The number can be normalized by shifting the coefficient two positions to the left and decreasing the exponent by two to obtain +.35700 X 10^1 = 3.5700, which is accurate to five significant digits. Arithmetic operations with floating-point number representation are more complicated than arithmetic operations with fixed-point numbers; their execution takes longer and requires more complex hardware. However, floating-point representation is more convenient because of the scaling problems involved with fixed-point operations. Many computers have a built-in capability to perform floating-point arithmetic operations. Those that do not have this hardware are usually programmed to operate in this mode. Adding or subtracting two numbers in floating-point representation requires first an alignment of the radix point, since the exponents must be made equal before adding or subtracting the coefficients. This alignment is done by shifting one coefficient while its exponent is adjusted until it is equal to the other exponent. Floating-point multiplication or division requires no alignment of the radix point. The product can be formed by multiplying the two coefficients and adding the two exponents.
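Two of the conventions just described, the excess-50 exponent and coefficient normalization, can be sketched together. These are illustrative helpers whose names are not from the text:

```python
BIAS = 50   # excess-50 code for a two-digit decimal exponent

def encode_exponent(e):
    # actual exponents from -50 to +49 map to register values 00 to 99
    assert -50 <= e <= 49
    return e + BIAS

def normalize(coefficient_digits, exponent):
    # shift leading zeros out of a fraction coefficient, decreasing
    # the exponent once for every position shifted to the left
    width = len(coefficient_digits)
    shifted = coefficient_digits.lstrip('0')
    shift = width - len(shifted)
    return shifted.ljust(width, '0'), exponent - shift

print(encode_exponent(+4), encode_exponent(-4))   # 54 46
print(normalize('00357', 3))                      # ('35700', 1), i.e. +.35700 x 10**1
```

The `normalize` call reproduces the book's example: +.00357 X 10^3 becomes +.35700 X 10^1.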
Division is accomplished by dividing the coefficients and subtracting the exponent of the divisor from the exponent of the dividend.

Double Length Words

The registers in a digital computer are of finite length. This means that the maximum number that can be stored in a register is finite. If the precision needed for calculations exceeds the maximum capacity of one register, the user can double the bit capacity by employing two registers to store an operand. One register would contain the most significant digits and the other, the least significant digits. For example, consider a computer whose memory and processor registers contain 16 bits. One bit is needed to store the sign of a fixed-point number, so the range of integers that can be accommodated in a register is between ±2^15 = ±32,768. This number may be too small for a particular application. By using two registers it is possible to increase the range of integers to ±2^31. The same reasoning applies to floating-point numbers when greater length coefficients are needed for greater resolution. Using a computer with 16-bit words as before, a floating-point number can be stored in two words. The exponent may occupy the first seven bits of the first word, giving an exponent value of ±63. The coefficient would then occupy 9 bits of the first word and 16 bits of the second word for a total of 25 bits. The number of bits in the coefficient can be extended to 41 by using a third word. Double length words and sometimes floating-point operands are stored in two or more consecutive memory registers. They need two or more cycles to be read or written in memory. Some computers have available double length registers and/or special floating-point registers in their processor unit. These registers can usually execute the required arithmetic operations directly with double precision or floating-point numbers. Computers with a single length register can be programmed to operate with double length words.
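The double-length idea for a 16-bit machine can be sketched as follows. Two's-complement packing is an illustrative assumption here; the text itself discusses several sign conventions:

```python
WORD = 16   # bits per register

def to_double_word(value):
    # split a 32-bit two's-complement pattern into two 16-bit words:
    # most significant half first, least significant half second
    assert -2**31 <= value < 2**31
    bits = value & 0xFFFFFFFF
    return bits >> WORD, bits & 0xFFFF

def from_double_word(high, low):
    bits = (high << WORD) | low
    return bits - 2**32 if bits >= 2**31 else bits

print(to_double_word(100000))                       # too big for one 16-bit word
print(from_double_word(*to_double_word(-40000)))    # -40000
```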
In any case, operands that exceed the number of bits of a memory word must be stored in two or more consecutive memory registers. Some computers use variable length words and allow the user to specify the number of bits that he wants to use for the operands. This implies a memory structure composed of variable length cells instead of fixed length words. This can be done by making the smallest data component addressable and using a memory address that specifies the first and last cell of the operand.

Character Strings

The types of data considered thus far represent numbers that a computer uses as operands for arithmetic operations. However, a computer is not only a machine that stores numbers and does high-speed arithmetic. Very often, a computer manipulates symbols rather than numbers. Most programs written by computer users are in the form of characters; i.e., a set of symbols comprising letters, digits, and various special characters. A computer is capable of accepting characters (in a binary code), storing them in memory, performing operations on them, and transferring characters to an output device. A computer can function as a character-string manipulating machine. By a character string is meant a finite sequence of characters written one after another. Characters are represented in computer registers by a binary code. In Table 1-5 we listed three different character codes in common use. Each member of the code represents one character and consists of either six, seven, or eight bits, depending on the code. The number of characters that can be stored in one register depends on the length of the register and the number of bits used in the code. For example, a computer with a word length of 36 bits that uses a character code of 6 bits can store six characters per word. Character strings are stored in memory in consecutive locations.
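The six-characters-per-word packing can be sketched like this. The helper is hypothetical, and a short final word is simply left right-aligned rather than padded with a fill code:

```python
def pack_characters(codes, word_length=36, char_bits=6):
    """Pack character codes into fixed-length words, as in a 36-bit
    machine with a 6-bit character code (six characters per word)."""
    per_word = word_length // char_bits
    words = []
    for i in range(0, len(codes), per_word):
        word = 0
        for code in codes[i:i + per_word]:
            word = (word << char_bits) | code
        words.append(word)
    return words

# six 6-bit codes fill exactly one 36-bit word; octal shows the 6-bit groups
print(oct(pack_characters([1, 2, 3, 4, 5, 6])[0]))  # 0o10203040506
```

Printing in octal makes each 6-bit character visible as a pair of octal digits.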
The first character in the string can be specified from the address of the first word. The last character of the string may be found from the address of the last word, or by specifying a character count, or by a special mark designating the end of the character string. The manipulation of characters is done in the registers of the processor unit, with each character representing a unit of information.

Logical Words

Certain applications call for manipulating the bits of a register with logical operators. Logical operations such as complement, AND, OR, exclusive-or, etc., can be performed with data stored in registers. However, logical operations must consider each individual bit of the register separately, since they operate on two-valued variables. In other words, the logical operations must consider each bit in the register as a Boolean variable having the value of 1 or 0. For example, to complement the contents of a register we need to complement each bit of the word stored in it. As a second example, the OR operation between the contents of two registers A and B is shown below:

(A)          1100    contents of A
(B)          1010    contents of B
(A) v (B)    1110    result of OR operation

The result is obtained by performing the OR operation with every pair of corresponding bits. The datum available in a register during logical operations is called a logical word. A logical word is interpreted to be a bit string as opposed to character strings or numbers. Each bit in a logical word functions in exactly the same way as every other bit; in other words, the unit of information of a logical word is a bit.

PROBLEMS

8-1. Show the configuration of a bus transfer system that contains four registers of three bits each. Use multiplexer and demultiplexer circuits.

8-2. A digital system has 16 registers, each with 32 bits. It is necessary to provide transfer paths for data from each register to each other register. (a) How many lines are needed for direct transfers?
(b) How many lines are needed for bus transfer? (c) How many lines are needed if the 16 registers form a memory unit? Compare the three configurations with respect to the time it takes to transfer data between two given registers.

8-3. A three-bit register A has one input x and one output y. The register operations can be described symbolically as follows:

P: x → A3, (A1) → y, (A(i+1)) → Ai, i = 1, 2

Draw the state table for the sequential circuit and its corresponding state diagram. What is the function of the circuit?

8-4. A digital system has three flip-flops; i.e., three one-bit registers labeled A, B, C. The state of the flip-flops changes according to the following elementary operations:

x: 0 → A, 1 → B, 1 → C
y: (B) → A, (C) → B, (A) → C
z: (A)' → A, (B)' → B, (A) ∧ (C) → C

Give the sequence of states of the flip-flops for the following sequences of control signals: (a) x, y, z, y, z, z, y, x. (b) x, y, z, y.

8-5. List the elementary operations needed to transfer bits 1-8 of register A to bits 9-16 of register B and bits 1-8 of register B to bits 9-16 of register A, all during one clock pulse.

8-6. Express the transfers depicted in Fig. 1-3, Sec. 1-7 with elementary operations. Include address and buffer registers for the memory unit.

8-7. Express the function of the shift register of Fig. 6-23, Sec. 6-6, with symbolic notation.

8-8. It is required to construct a memory with 256 words, 16 bits per word, organized as in Fig. 8-16. Cores are available in a "matrix" of 16 rows and 16 columns. (a) How many matrices are needed? (b) How many flip-flops are in the address and buffer registers? (c) How many cores receive current during a read cycle? (d) How many cores receive at least half current during a write cycle?

8-9. A memory unit has a capacity of 40,000 words of 20 bits each. How many decimal digits can be stored in a memory word? How many flip-flops are needed for the address register if the words are numbered in decimal?
8-10. List the elementary operations for the following transfers using (1) a nondestructive memory and (2) a destructive read-out memory. (a) To store binary 125 in location 63. (b) To read memory register number 118. (c) To transfer the contents of memory register 75 to memory register number 90.

8-11. The magnetic core memory of Fig. 8-16 is called a linear selection memory and has a two-dimensional (2D) organization. There is another magnetic core memory organization called coincident-current selection, which has a three-dimensional (3D) organization. A third memory organization, a combination of the two types, is called a 2½D selection. Consult outside sources and describe the other two memory organizations.

8-12. What modifications or additions are required in a memory unit to add the following capabilities: (a) To be able to transfer incoming and outgoing words serially. (b) To be able to select consecutive locations in memory. (c) To be able to address memory locations with decimal numbers in BCD. (d) To be able to select one particular bit of the selected word.

8-13. Represent binary +27 and -27 in three representations using a register of 10 bits.

8-14. Represent the numbers +315 and -315 in binary using the following representations: (a) sign-magnitude, (b) 1's complement, (c) 2's complement, and (d) sign-magnitude floating-point normalized.

8-15. Represent the numbers +315 and -315 in decimal using BCD in the following representations: (a) sign-magnitude, (b) 9's complement, (c) 10's complement, (d) sign-magnitude floating-point normalized.

8-16. Given two binary numbers X and Y in floating-point sign-magnitude representation. The numbers are normalized and placed in registers, and an addition is to be performed; i.e., X + Y. (a) Which of the two coefficients should be shifted, in what direction, and by how much? (b) How can we determine the value of the exponent of the sum? (c) After the addition of the two coefficients, the sum may be unnormalized.
Specify what should be done to the coefficient and the exponent in order to normalize the sum.

8-17. What is the largest and smallest positive quantity which can be represented when the coefficient is normalized: (a) by a 36-bit floating-point binary number having 8 bits plus sign for the exponent and fraction representation for the coefficient? (b) by a 48-bit floating-point binary number having 11 bits plus sign for the exponent and integer representation for the coefficient?

9 BINARY ADDITION AND THE ACCUMULATOR REGISTER

9-1 INTRODUCTION

General purpose digital computers as well as many special purpose systems quite often incorporate a multipurpose register into the arithmetic section or the central processing unit. Some digital computers have more than one such register. These registers are called by various names, but the name accumulator is the one most widely used. The name is derived from the arithmetic addition process encountered in digital computers. The process of arithmetic addition of two or more numbers is carried out by initially storing these numbers in the memory unit and clearing the accumulator register. The numbers are then read from memory one at a time and added to the register in consecutive order. The first number is added to zero and the sum transferred to the register.
The second number is added to the contents of the register and the newly formed sum replaces its previous value. This process is continued until all numbers are added and the sum formed. Thus, the register "accumulates" the sum in a step-by-step manner by performing sequential additions between a new number read from memory and the previously accumulated sum. The accumulator is the most used register in the arithmetic unit. In addition to the function described above, this register performs various other operations either specified by programmed instructions or required to implement other machine instructions. The accumulator stores the operands used in arithmetic and logical operations and retains the results of computations formed. When two numbers are multiplied, the multiplier and multiplicand are read from memory into two registers and multiplication is performed by forming a series of partial sums in the accumulator. Similarly, subtraction, division, and logical operations are performed by the accumulator logic circuits. Other operations such as complementing, counting, and shifting are also encountered. The purpose of this chapter is to present a simple multipurpose accumulator register, together with its associated combinational circuits. The register proper will be designated by the letter A and will consist of flip-flops A1, A2, ..., An numbered in ascending order starting from the rightmost element. We shall assume "the accumulator" to mean the A register plus its associated logic circuits, and the A register will be called "the accumulator register." We concentrate the discussion first on the subject of addition of two binary numbers, since this is the most involved part of the logic design of an accumulator. In Sec. 9-8 a multipurpose register having other elementary operations besides addition is specified. One of the primary functions of an accumulator is the addition of two numbers.
The following sections present a few alternative ways to accomplish this task. One of these alternatives will be chosen for inclusion in the final design of the register. In order to realize how the machine performs binary addition, we must visualize the existence of two registers, one storing the augend and the other the addend. The outputs of these registers are applied to logic circuits that produce the sum; the sum is then transferred into a register for storage. The following numerical example will clarify the process:

Previous carry    00110    C_i
Augend            00011    A_i
Addend            10011    B_i
Sum               10110    S_i
Carry             00011    C_(i+1)

The augend is in register A (the accumulator register) and the addend in register B. The bits are added starting from the least significant position to form a sum bit and a carry bit. The previous carry bit C_i for the least significant position must be 0, and the carry bit C_(i+1) out of the most significant position for this example is a 0. The value of C_(i+1) in a given significant position is copied into C_i one higher significant position to the left. The sum bits are thus generated starting from the right-most position and are available as soon as the corresponding previous carry bit C_i is known. The sum can be transferred to a third register; however, when the process employs an accumulator, the sum is transferred to the accumulator register, destroying the previously stored augend.

9-2 PARALLEL ADDITION

Binary addition is performed in parallel when all bits of the augend and addend registers are available as outputs. The combinational circuit among the registers that forms the sum depends on the type of flip-flops employed in the accumulator register. Three different combinational circuits are derived in this section. The first employs D flip-flops and the other two, T flip-flops. Circuits using RS or JK flip-flops are equivalent to those employing D or T flip-flops, respectively.
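The numerical example above can be reproduced mechanically. The sketch below is a hypothetical helper, not the hardware; it adds two bit strings and records the carry into and out of every position:

```python
def ripple_add(a_bits, b_bits):
    """Add two equal-length bit strings (most significant bit first),
    returning the sum bits, the carry into each position (C_i), and
    the carry out of each position (C_i+1)."""
    n = len(a_bits)
    carry_in, carry_out, sum_bits = ['0'] * n, ['0'] * n, ['0'] * n
    c = 0
    for i in range(n - 1, -1, -1):        # least significant position first
        a, b = int(a_bits[i]), int(b_bits[i])
        carry_in[i] = str(c)
        sum_bits[i] = str(a ^ b ^ c)
        c = (a & b) | (a & c) | (b & c)   # majority of the three input bits
        carry_out[i] = str(c)
    return ''.join(sum_bits), ''.join(carry_in), ''.join(carry_out)

print(ripple_add('00011', '10011'))   # ('10110', '00110', '00011')
```

The three strings printed match the Sum, Previous carry, and Carry rows of the example.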
Addition with Full-Adders

The simplest and most straightforward way to add two binary numbers in parallel is to use one full-adder (abbreviated FA) circuit for each pair of significant bits.* This is shown schematically in Fig. 9-1 for a four-bit adder. The implementation of adders with more than four bits is accomplished by extending the registers with more flip-flops and corresponding FA circuits.

Figure 9-1 Four-bit adder using full-adder circuits

The contents of registers A and B are added in the four FA circuits; their sum is formed in S1 to S4. Note that the input carry to the first stage must be connected to a signal corresponding to logic-0 and that the carry-out from the last stage can be used as an input to a fifth stage or as an overflow bit in a four-bit adder. The sum outputs S1 to S4 are transferred to the A register through the D inputs of the flip-flops when an ADD command signal is enabled during one clock pulse. The sum transferred will obviously destroy the previously stored augend, so that the A register acts as an accumulator register. Some minor variations are possible. The first stage could use a half-adder circuit instead of a full-adder, which will result in the savings of a few gates. RS or JK flip-flops could be used, but the complement values of S1-S4 must be generated for the clear inputs. Only the normal outputs of the flip-flops are shown going into the FA circuits. If the complement outputs are also used, the inverter circuits which produce the complements inside the FA circuits can be eliminated. The diagram chosen for Fig. 9-1 assumes the availability of a single IC chip, which includes all four FA circuits with the carry leads internally connected. Such an IC is commonly named a "four-bit adder." It has nine input terminals, five output terminals, at least one terminal for the supply voltage, and one for ground.

*The logic diagram of a full-adder circuit was derived in Sec. 4-3.
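The behavior of the four-bit adder of Fig. 9-1 can be modeled functionally. This is a sketch only; the real device is combinational hardware, and the function names here are illustrative:

```python
def full_adder(a, b, c):
    # one FA stage: sum bit and carry bit of three input bits
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def four_bit_adder(A, B):
    """Chain four FA stages as in Fig. 9-1; A and B are lists of bits,
    least significant first.  The input carry to stage 1 is wired to logic-0."""
    S, c = [], 0
    for a, b in zip(A, B):
        s, c = full_adder(a, b, c)
        S.append(s)
    return S, c     # sum bits S1..S4 and the carry-out (overflow) bit

# 0101 + 0011 = 1000 (5 + 3 = 8), bits listed least significant first
print(four_bit_adder([1, 0, 1, 0], [1, 1, 0, 0]))  # ([0, 0, 0, 1], 0)
```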
The binary adder in Fig. 9-1 is constructed with four identical stages, each consisting of a pair of flip-flops and their corresponding full-adder circuit. It is interesting to note that each stage is similar to all others, including similar interconnections among neighboring stages. It is therefore convenient to partition such circuits into n similar circuits, each having the same general configuration. The ability to partition a large system into a number of smaller and possibly similar subsystems has many advantages. It is convenient for the design process because the design of only one typical stage is sufficient for obtaining all the information needed to design the entire register logic. It also brings about standardization of common circuit configurations which can be eventually incorporated into an IC chip. In fact, binary adders as well as accumulator registers are comparatively easier to design and manufacture because they can be partitioned into smaller subregister units. Practical adders and accumulators range somewhere between 12 and 64 bits long. The task of designing a sequential circuit with 64 flip-flops is formidable, but the partition into 64 identical stages reduces the design process down to the point where a sequential circuit with only one flip-flop is considered. We shall utilize this partitioning property in the remainder of this chapter and concentrate our efforts on the design of only one typical stage. The entire register and its associated logic will then consist of a number of such typical stages, where that number is equal to the number of bits of the register.

Addition with Complementing Flip-Flops

Flip-flops having the property of complementation are very useful for reducing the number of gates required to form the sum. Such flip-flops are the T and JK types. This advantage can be deduced from inspection of the Boolean functions that describe the FA circuit.
For a typical stage i, the sum and carry are expressed by the following Boolean functions*:

S_i = A_i'B_i'C_i + A_i'B_iC_i' + A_iB_i'C_i' + A_iB_iC_i    (9-1)
    = A_i ⊕ B_i ⊕ C_i    (9-2)
C_(i+1) = A_iB_i + A_iC_i + B_iC_i    (9-3)

The sum bit is obtained from the exclusive-or operation of the three input variables. Also note that the augend is in the A register and that the sum is also transferred into the A register. Now, an inherent property of a complementing flip-flop is that its next state produces the exclusive-or of its input variable and the previous state value. Consider the following state table of a single flip-flop A and a variable X applied to its T input:

Present state    T input    Next state
A(t)             X          A(t + 1) = A(t) ⊕ X
0                0          0
0                1          1
1                0          1
1                1          0

It is clear from this table that the next state of A is the exclusive-or of the present state of A and the input X. From this observation, it is possible to deduce that if B_i ⊕ C_i is applied to the T input of an A_i flip-flop, the next state of A_i will hold the value obtained from the exclusive-or of B_i, C_i, and the previous state of A_i. This is exactly the value of the sum bit as given in Eq. 9-2. It is possible to arrive at the same conclusion by a straightforward design process. A typical adder stage consists of one flip-flop A_i that changes its state during the addition process. The single stage will have an input addend bit B_i and a previous carry C_i. (Note that B_i is a flip-flop that does not change state during the addition process and therefore can be considered as an input terminal.) A state table for this simple sequential circuit is tabulated in Fig. 9-2(a).

*Equations 9-1 and 9-3 were derived in Sec. 4-3. Equation 9-2 follows directly from Equation 9-1.

A column is included in the table for the flip-flop
A column is included in the table for the flip-flop ‘Equations 9-1 and 9-3 were derived in Sec, 4-3, Equation 9-2 follows directly from Equation 9-1, See.9-2 PARALLEL ADDITION 273 Present Inputs) Next Fipslop state state | input Ai [By Ci] Ag=S| TAL Bi © fo of o [o o jo ijt 1 : - ofroli fi af o ji 1} 0 0 7 : 1 Jo of 1 jo 1 fo ito |r G 1 |i ofo |a TA; =BiCi+ BC: = Bi BCA rir ala low > Figure 9-2 Excitation table and map for a parallel adder stage T input and its necessary excitation. The combinational circuit for this two-state sequential circuit can be derived immediately from the map shown in Fig. 9-2(b). The required flip-flop input function is TA, = B, C] + B,C; = By © G which is the same condition derived previously. The carry output C; + 1 hhas not been shown in the table. It is a function of the present state and the inputs and can easily be shown to be the same as Eq. 9-3. The logic diagram of a typical stage is drawn in Fig. 9-3. The part of the circuit that produces the sum has only two AND gates and one OR gate, compared to the four AND gates and one OR gate required for the implementation of Eq. 9-1 in sum of products form. A JK flip-flop can be used instead of a T, with the two inputs J and K connected to receive the same signal. The circuit of Fig. 93 with a JK flip-flop is the one chosen for the final version of the accumulator to be completed in Sec. 9-8. For an 7 register, n identical circuits are required with the output carry Cj, of one stage connected to the input carry C; of the next stage on its left, except for the least significant position, which requires an input C; of 0, or a cireuit that corresponds to a half-adder. TwoStep Adder A further reduction in the number of gates per stage is realized if the time of execution is extended to a period equal to two clock pulses by requiring two consecutive command signals to complete the addition. Consider two signals ADI and AD2 as shown in Fig. 9-4. 
Compared with the single command signal used for the circuit in Fig. 9-3, the two-step addition scheme to be developed here requires two clock pulses for completion. On the other hand, the two-step adder requires fewer combinational gates for its implementation.

Figure 9-3 Typical stage of adder with a T flip-flop

Figure 9-4 Two consecutive ADD signals for a two-step adder

The combinational logic necessary for a two-step adder will be derived intuitively. The next state of a T flip-flop is the exclusive-or of its present state and its input. The sum bit is obtained from the exclusive-or of B_i, C_i, and the present state of A_i. Now, if B_i is applied during the time t + 1, when the first command signal AD1 is enabled, one obtains for the next state A_i(t + 1) the value of A_i ⊕ B_i. Then during the time t + 2, when command signal AD2 is enabled, C_i is applied to the input of the flip-flop, producing a final next state A_i(t + 2) equal to A_i ⊕ B_i ⊕ C_i, which is equal to the sum bit. The carry bit must be generated in all the stages of the adder prior to the application of the t + 2 pulse. However, after signal AD1 has been applied, the state of the flip-flop has the value of A_i ⊕ B_i. Therefore, the carry to the next stage C_(i+1) must be formed from the (t + 1) value of the flip-flop output. To determine the logic required for generating the carry, we need the table of Fig. 9-5(a). In this table, the value of A_i ⊕ B_i is first determined. The column under C_(i+1) is determined from the bits of A_i(t), B_i, and C_i. However, the combinational logic required for output C_(i+1) is obtained from the columns under B_i, A_i(t + 1), and C_i, because only these values are available after command signal AD1 and during command signal AD2. Remembering that the value of A_i after t + 1 is the exclusive-or of the addend and augend bits, then from the map of Fig.
9-5(b) we obtain the Boolean function for the output carry to be

C_(i+1) = B_iA_i' + A_iC_i

where A_i is the state of the flip-flop after command signal AD1 has been removed and during the application of command signal AD2. Figure 9-6 shows the logic diagram of a two-step adder. It can be seen that this circuit requires fewer gates per stage as compared with Fig. 9-3.

Figure 9-5 Table and map for the carry of a two-step adder

Figure 9-6 One stage of a two-step adder

The price one pays for this saving is an increase in the execution time. The process of addition in the two-step adder of Fig. 9-6 is as follows: First the command signal AD1 is applied. After t + 1, the carry through all the stages is allowed to propagate. Then command signal AD2 is applied, forming the sum in the A register after clock pulse t + 2.

9-3 CARRY PROPAGATION

The addition of two numbers in parallel implies that all the bits of the augend and addend are available for computation at the same instant of time. This is the difference between parallel and serial operations; in the latter, the bits of the augend and addend are available for computation only one at a time. Before leaving the subject of parallel adders and going into serial addition, the timing problem involved with the carry propagation will be explained.
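Before taking up the timing question, the two-step scheme just described can be sketched behaviorally. Bit lists are least significant first, and all names are illustrative:

```python
def two_step_add(A, B):
    """Simulate the two-step adder: AD1 complements A_i wherever B_i = 1;
    the carries are then formed from B_i, the new A_i, and C_i using
    C_(i+1) = B_i A_i' + A_i C_i; AD2 complements A_i wherever C_i = 1."""
    n = len(A)
    # step 1 (AD1): A_i <- A_i xor B_i
    A = [a ^ b for a, b in zip(A, B)]
    # carry propagation between the two command signals
    C = [0] * (n + 1)
    for i in range(n):
        C[i + 1] = (B[i] & (1 - A[i])) | (A[i] & C[i])
    # step 2 (AD2): A_i <- A_i xor C_i
    return [a ^ c for a, c in zip(A, C[:n])]

# 0101 + 0011 = 1000, bits listed least significant first
print(two_step_add([1, 0, 1, 0], [1, 1, 0, 0]))  # [0, 0, 0, 1]
```

The same answer is produced as by the single-step full-adder chain, confirming that the reduced carry expression is sufficient once A_i already holds A_i ⊕ B_i.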
If, on the other hand, the 1 input changes to a 0, the output will go through a transition from 1 to 0. The maximum time of this transition period is the maximum propagation time of the OR gate. When logic gates are interconnected and the values of inputs cannot be predicted, the propagation time between the inputs and the outputs cannot be predicted exactly. However, we can predict the maximum propagation time by evaluating the longest possible transition through the various gates. The propagation time is variable and depends on the circuit components and the input combination. In clocked sequential systems, one must always wait for the maximum possible propagation time before the output signal is triggered into a flip-flop.

Let us consider the operation of a parallel adder as shown in Fig. 9-1 or 9-3. Initially, one number is in the A register and the other number is transferred into the B register. As soon as the outputs of the flip-flops in register B settle down to their appropriate values, all the outputs A_i and B_i are available as inputs to the combinational circuit. The outputs of the combinational circuit at any time have the value either of logic-0 or logic-1, or are in transition between these two states. These outputs settle down to a constant value only after the signal propagates through the appropriate gates. In other words, the outputs of the combinational circuit represent the sum bits only after the signals have been propagated. Consider the sum bit from a single stage, say stage 15. A15 and B15 are available in their final form, but the sum bit will not settle to its final form until the correct carry C15 is available in its final form. Similarly, C15 has to wait for C14, and so on down to C2. Thus only after the carry is propagated can we apply the ADD command pulse, so that at the occurrence of the next clock pulse the correct sum bits are transferred into the A register. Let us look at a specific example.
Assume that two 36-bit numbers have to be added. Therefore, registers A and B must have 36 flip-flops each. Looking at one typical stage from Fig. 9-3, we see that the carry to the next stage, C_{i+1}, must propagate through two levels. The first level consists of three AND gates; the second, of one OR gate. Let us assume that the maximum propagation time through each gate is 20 nsec (1 nanosecond = 10^-9 second). From the time that the previous carry C_i settles down until the next carry C_{i+1} settles down, the signals must propagate through one level of AND gates and one OR gate, with a maximum propagation time of 40 nsec. The carry must propagate through all 36 stages, giving a maximum delay of 1440 nsec.* Thus, the ADD command signal must wait at least 1.44 usec from the time the signals of register B settle down before it can be applied. Otherwise, the outputs of the combinational circuits may not represent the correct sum bits. Assume that the master clock has a frequency of 2 MHz, so that a clock pulse occurs every 500 nsec. If clock pulse t0 (at time 0) was used to transfer the addend to register B, and assuming that the flip-flops have a maximum propagation time of 50 nsec, the total delay encountered is 1490 nsec. This is equivalent to a delay of three clock pulses, and the ADD signal should not occur prior to clock pulse t3.

The carry propagation time is a limiting factor on the speed with which two numbers can be added in a parallel digital system. Since all other arithmetic operations are implemented through successive additions, the time consumed during the addition process is very critical. One of the major problems encountered in the design of an accumulator is the reduction of the time for addition. One solution is to design digital circuits with reduced propagation time. For instance, the reduction in the maximum propagation time through a gate from 20 to 10 nsec reduces the carry propagation time from 1440 to 720 nsec.
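The delay arithmetic of this example can be laid out explicitly. The sketch below is my own (Python, with the assumed timing figures from the text as named constants):

```python
import math

GATE_DELAY_NS = 20        # assumed maximum propagation time per gate
FF_SETTLE_NS = 50         # assumed flip-flop settling time
STAGES = 36
CLOCK_PERIOD_NS = 500     # 2-MHz master clock

# Each stage contributes one AND level plus one OR level to the carry chain.
carry_delay = STAGES * 2 * GATE_DELAY_NS
total_delay = carry_delay + FF_SETTLE_NS
pulses_to_wait = math.ceil(total_delay / CLOCK_PERIOD_NS)

print(carry_delay)        # 1440 ns of carry propagation
print(total_delay)        # 1490 ns including flip-flop settling
print(pulses_to_wait)     # 3: ADD should not occur before clock pulse t3
```

Halving GATE_DELAY_NS to 10 reproduces the 720-nsec figure quoted above.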
But physical circuits have a limit to their capability. Logic designers, however, have come up with ingenious schemes to reduce the time for addition, not necessarily by means of faster circuits but by the incorporation of additional logic circuits.**

*The propagation in the last stage is through the gates that form the sum; we neglect the propagation time through the inverter of Fig. 9-3.

One way of reducing the carry propagation time is by a method known as carry look-ahead. In this method, as in all other methods, increase in speed is traded for increase in equipment and complexity. We have discussed previously the advantages of partitioning a digital system into smaller and possibly similar subunits. So far in our discussion, the accumulator register and its associated logic were partitioned into n typical stages, each having one flip-flop and associated combinational logic. Now consider a partition of an n-bit register into j groups of k bits each, so that j·k = n. As a specific example, let n = 36, j = 12, k = 3. Each group will consist of three flip-flops and their associated combinational logic, for a total of 12 groups. The carries for the least significant group are given by the following Boolean functions:

C1 = 0    (9-4)
C2 = A1B1    (9-5)
C3 = A2B2 + (A2 + B2)C2    (9-6)
C4 = A3B3 + (A3 + B3)C3    (9-7)

The input carry to the second group is C4. Now, instead of waiting for C4 to propagate through the first group, which will require five levels of gates, it can be generated from a combinational circuit with inputs A1, A2, A3 and B1, B2, B3. The Boolean function for this circuit is easily obtained by substituting C2 from Eq. 9-5 into Eq. 9-6, and then substituting C3 from Eq. 9-6 into Eq. 9-7. The final result, after rearranging, is a Boolean expression which is a function of the output variables of the flip-flops in group 1.
C4 = A3B3 + A2A3B2 + A1A2A3B1 + A1A3B1B2 + A2B2B3 + A1A2B1B3 + A1B1B2B3

Since the Boolean function is expressed in the sum of products form, it can be implemented with one level having seven AND gates followed by one OR gate. The maximum propagation time for this circuit is 40 nsec (using the figure from the previous example). The carry input to the third group can similarly be obtained from a two-level realization of a Boolean expression as a function of C4, A4, A5, A6, and B4, B5, B6. Thus each group produces not only its own carries but also the carry for the next group ahead of time. By this scheme, the maximum total carry propagation is 440 nsec for the carry to reach the input of the last (12th) group, plus 120 nsec for the carry to propagate out of the last group, for a total of 560 nsec. This is compared with the delay of 1440 nsec obtained for single-stage partitioning. Obviously, if the register is partitioned into groups with more flip-flops, the carry propagation may be reduced even further, but again one must pay the price of more equipment. As mentioned previously, this is not the only scheme available. Even this scheme can be improved by partitioning into groups within groups.

**The detailed description of all possible schemes is beyond the scope of this book. The interested reader is referred to the literature (references 1-5) for details.

9-4 SERIAL ADDITION

A digital system is said to operate in a serial fashion when information is transferred and manipulated one bit at a time. The contents of one register are transferred to another by shifting the bits out of one register and into the other. Similarly, any operation to be done on a single register or between two registers is manipulated one bit at a time and the result transferred serially into a shift register.
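Returning briefly to Sec. 9-3: the seven-term sum-of-products expression for C4 can be verified exhaustively against the ripple recurrence of Eqs. 9-5 to 9-7. The check below is my own (Python; bits are ordered A1, A2, A3 in the tuples):

```python
from itertools import product

def ripple_c4(a, b):
    """C4 by rippling Eqs. 9-5 to 9-7, starting from C1 = 0."""
    c = 0
    for ai, bi in zip(a, b):
        c = (ai & bi) | ((ai | bi) & c)
    return c

def lookahead_c4(a, b):
    """Two-level sum-of-products expression for C4."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return ((a3 & b3) | (a2 & a3 & b2) | (a1 & a2 & a3 & b1)
            | (a1 & a3 & b1 & b2) | (a2 & b2 & b3)
            | (a1 & a2 & b1 & b3) | (a1 & b1 & b2 & b3))

assert all(ripple_c4(a, b) == lookahead_c4(a, b)
           for a in product((0, 1), repeat=3)
           for b in product((0, 1), repeat=3))
print("C4 look-ahead matches the ripple carry for all 64 input combinations")
```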
As a general rule, digital systems that operate in a serial mode require far less equipment than systems operating in a parallel mode, but their speed of operation is slower.

A block diagram of a serial adder is shown in Fig. 9-7. It consists of a shift-right accumulator register and a logic circuit capable of adding two bits and a previous carry. The augend is stored in the accumulator register and the addend is transferred from a serial memory unit which, for this application, can be thought of as just another shift register. The sum formed is transferred into the accumulator and replaces the previously stored augend.

Figure 9-7 Block diagram of a serial adder

At the start of the addition process, the two least significant bits of the augend and addend are presented as inputs to the adder circuit. As the accumulator register is shifted to the right, the sum bit formed in the adder circuit enters the left-most flip-flop of the accumulator, and the next bit of the addend appears from the serial memory unit. The accumulator register proper is a shift-right register and its implementation is straightforward. The block marked "adder circuit" may be thought of as a FA (full-adder) circuit that produces the sum but must also possess, in addition, one storage element for temporary storage of the carry bit. Since the adder circuit must include a memory element, it must be sequential. This sequential circuit may be derived intuitively when we realize that the carry-out of the FA can be stored in a flip-flop whose output can be used as the previous carry for the next significant bits. Thus the circuit of Fig. 9-8 is obtained, with the "adder circuit" of Fig. 9-7 implemented as a sequential circuit consisting of a single D flip-flop together with a combinational FA circuit.

Figure 9-8 Four-bit serial adder
The operation of the serial adder of Fig. 9-8 will now be described. The accumulator initially holds the augend, and the addend is transferred from a serial memory unit or from another shift register. At the start of the addition process, the least significant bits are presented to the input of the FA circuit, the carry flip-flop C is cleared, and the ADD command signal enabled. The signal propagates through the combinational circuit and produces a sum bit and a carry bit. The sum bit is returned to the most significant position of the accumulator register and the carry bit is applied to flip-flop C. When a clock pulse arrives, the sum bit is transferred into the left-most flip-flop of the accumulator, the carry into flip-flop C, the augend is shifted one position to the right, and the next bit of the addend arrives from memory. The output of flip-flop C now holds the previous carry. When the next clock pulse occurs, the new sum bit to enter the left-most position of the accumulator is the one corresponding to the second significant position. This process continues until the fourth clock pulse transfers the last bit of the sum into the accumulator. At this time, the ADD command signal is disabled. Thus the addition is accomplished by passing each pair of bits, together with the previous carry, through a single FA circuit and transferring the sum to the accumulator.

The question now arises: is it possible to reduce the number of gates in the FA of Fig. 9-8 by using a different type of flip-flop? The section of the circuit that produces the sum cannot be simplified, because the flip-flop that holds the addend bit is different from the one that receives the sum bit. However, the combinational circuit that produces the carry is applied to a flip-flop that holds the previous carry and may be simplified if some other type of flip-flop is used. The D flip-flop was originally chosen because it resulted in the most straightforward design.
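The bit-by-bit behavior just described can be modeled directly. The sketch below is a behavioral simulation of my own (Python; bit lists are LSB first, and one loop iteration stands for one clock pulse of Fig. 9-8):

```python
def serial_add(augend, addend):
    """Serial adder of Fig. 9-8: one full adder plus a carry flip-flop.
    Bit lists are LSB first; each loop iteration is one clock pulse."""
    c = 0                            # carry flip-flop cleared at the start
    sum_bits = []
    for a, b in zip(augend, addend):
        s = a ^ b ^ c                # full-adder sum of the two bits and carry
        c = (a & b) | ((a | b) & c)  # full-adder carry, stored for the next pulse
        sum_bits.append(s)           # sum bit re-enters the left of the accumulator
    return sum_bits

# 5 + 3 = 8: 0101 + 0011 -> 1000 (lists shown LSB first)
print(serial_add([1, 0, 1, 0], [1, 1, 0, 0]))  # [0, 0, 0, 1]
```

The single carry variable plays the role of flip-flop C; for n-bit operands the loop runs for n clock pulses, exactly as in the four-bit adder described above.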
It is possible to show that if a JK or RS flip-flop is used, a reduction in the number of combinational gates will ensue. The input functions of a JK flip-flop can be derived directly from a state diagram. As mentioned previously, the portion of the serial adder that forms the sum bit is a sequential circuit with at least one flip-flop. Designating the carry flip-flop by the variable C, and noting that the inputs to the circuit are the augend bit A1 and the addend bit B1, the state table of Fig. 9-9(a) is obtained. The next state of C is the carry for the next significant bits and is derived from the present state of C (input carry) and the two inputs. A column for the output S could be included in this table but was deleted, because the sum output will obviously be the same as Eq. 9-1. The excitation table for the two inputs of the flip-flop is then obtained from inspection of the present state and the next state. The input functions are simplified in the maps of Fig. 9-9(b), from which we obtain

JC = A1B1
KC = A1'B1'

Figure 9-9 Excitation table and maps for the JK carry flip-flop in a serial adder

Thus, only two AND gates are required to store the next carry in a JK (or RS) flip-flop, as compared to four gates for the D flip-flop. This result could be obtained intuitively from inspection of the state table of Fig. 9-9(a) by noting that the next state is 1 when both inputs are 1, that the next state is 0 when both inputs are 0, and that the carry flip-flop does not change state otherwise.

9-5 PARALLEL VERSUS SERIAL OPERATIONS

Before we continue with the design of a multipurpose parallel accumulator register, let us pause to consider the difference between parallel and serial operations.
In a parallel adder, all inputs and outputs of the register are available for manipulation at the same time. The register in the serial adder communicates with external circuits only through its left-most and right-most flip-flops, and the addition must be processed with each pair of significant bits one at a time. This property is inherent in the two processes, not only in adder circuits but in other operations as well.

The registers in serial digital systems are shift registers. The external circuit that performs the operation receives the bits sequentially at its inputs. The time interval between adjacent bits is called the "bit-time," and the time required to present the whole number is called the "word-time." These timing sequences are generated either by the serial memory unit or by the control section of the system. In a parallel operation, the ADD command signal is enabled during one pulse interval to transfer the sum bits into the accumulator upon the occurrence of a single clock pulse. In the serial adder, the ADD command signal must be enabled during one whole word-time, and a pulse applied every bit-time to transfer the generated sum bits one at a time. Moreover, the external circuits that perform operations and change contents of registers are always combinational in parallel systems. In serial computers, these external circuits would be sequential (requiring extra flip-flops) if their outputs depended not only on the present values of the inputs but also on values of previous inputs. This has been shown in the serial adder, where the output sum bit is a function of the previous carry, which in turn is a function of the previous inputs.

A comparison between the parallel adder shown in Fig. 9-1 and the serial adder of Fig. 9-8 will give some indication of the relative cost and speed of the two modes of operation. Consider again a 36-bit accumulator and a master-clock pulse generator with a frequency of 2 MHz.
Assume that the maximum propagation time through any gate is 20 nsec and that the settling time of a flip-flop is 50 nsec. For the parallel adder, we had to wait three clock pulses, or 1.5 usec, to complete the addition. The serial adder requires the application of 36 clock pulses, for a total time of 18 usec to complete the addition, a speed advantage of the parallel over the serial adder of 12 to 1. On the other hand, the ratio in the number of FA circuits is 36 to 1, so the added equipment is greater than the speed advantage. Obviously, we can increase the speed of the parallel adder considerably if faster circuits are used and the carry propagation reduced. Circuits that add two 36-bit numbers in 100 nanoseconds can thus be attained. The speed of the serial adder can also be increased by increasing the frequency of the master-clock generator. The limit on this frequency is a function of the maximum propagation time of the flip-flops plus the combinational gates. In this example, the next sum bit will be available after 50 nsec for the flip-flops to settle down plus 40 nsec for the sum and carry to propagate through two levels of gates, for a total of 90 nsec. This gives a maximum frequency for the master clock of 11 MHz, a bit-time of 90 nsec, and a word-time for 36 bits of 3.24 usec. From this example one can clearly see that although, as a general rule, serial operations tend to be slower than parallel operations, there may be some operations that are done faster serially if one uses high-speed circuits, compared to slower circuits used in the parallel counterpart.

9-6 ELEMENTARY OPERATIONS

The most important function of an accumulator is to add two numbers. Other arithmetic operations can be processed using this basic operation in conjunction with other elementary operations.
Subtraction can be done by addition and complementation, multiplication is reduced to repeated additions and shifting, and division can be processed by repeated subtractions and shifting. The 2's complement of a number stored in a register can be obtained by complementing the register and then incrementing it by 1. Logical operations on the individual bits of a register are useful for extracting and packing parts of words. A digital computer incorporates these elementary operations together with other operations in its instruction list. Instructions such as multiply and divide are usually processed internally in computer registers by repeated applications of elementary operations. However, the availability of the elementary operations to the user allows him to specify his own sequence of repeated operations and thus extend the processing capabilities of the machine. To save equipment, some small computers do not include multiplication or division instructions as part of their hardware. In this case, it is up to the user to specify in his program the sequence of elementary operations needed for their execution.

A set of elementary operations for an accumulator register is listed in Table 9-1. The command signals p1 to p9 are generated by control logic circuits and should be viewed as inputs to the accumulator for the purpose of initiating the corresponding elementary operation. The nine control signals are assumed to be mutually exclusive; i.e., one and only one signal is enabled at any interval between two clock pulses.

Table 9-1 List of elementary operations for an accumulator

Control signal    Operation              Symbolic designation
p1                arithmetic addition    (A) + (B) ⇒ A
p2                clear                  0 ⇒ A
p3                complement             (Ā) ⇒ A
p4                logical AND            (A) ∧ (B) ⇒ A
p5                logical OR             (A) ∨ (B) ⇒ A
p6                exclusive-or           (A) ⊕ (B) ⇒ A
p7                shift-right            (A_{i+1}) ⇒ A_i,  i = 1, 2, ..., n-1
p8                shift-left             (A_{i-1}) ⇒ A_i,  i = 2, 3, ..., n
p9                increment              (A) + 1 ⇒ A
z                 check if zero          if (A) = 0, then z = 1

There are two types of operations listed in the table. The unary operations (clear, complement, shift, and increment) require only one operand, the one stored in the A register. The binary operations (arithmetic addition and the three logical operations) require two operands, one stored in A and the other in B. All the operations change the state of the A register except the last function listed in the table (an output function). Most of the elementary operations are self-explanatory. Arithmetic addition of two numbers has been discussed extensively in previous sections. The clear operation clears all flip-flops to 0. The complement operation changes the states of all individual flip-flops. Shift registers have been discussed previously. The increment operation increases the contents of the accumulator register by one. The logical operations have their usual meaning, but they operate on individual bits of the register. For example, if the contents of a four-bit accumulator is 0011 and the contents of the B register is 0101, the AND operation changes the contents of the accumulator to 0001, the OR operation to 0111, and the exclusive-or to 0110. Note that the most elementary operation of a simple transfer from register B into the accumulator is not included in the table. This simple transfer can be implemented by clearing the accumulator and then ORing to it the contents of B.

A symbolic notation is given to each operation in the table. A double arrow designates a transfer into a register. The letter A is used for the accumulator register and the letter B for the register that stores the operand received from an external unit.
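The effect of each operation in Table 9-1 can be sketched as pure functions on bit lists. The code below is my own illustration (Python; bit lists are LSB first, with A[0] holding A_1, and the operation names are my labels for p1 through p9 and z):

```python
def elementary_op(op, A, B=None):
    """Operations of Table 9-1 on bit lists, LSB first (A[0] holds A_1)."""
    n = len(A)
    if op == 'add':                                   # p1: ripple-carry addition
        c, out = 0, []
        for a, b in zip(A, B):
            out.append(a ^ b ^ c)
            c = (a & b) | ((a | b) & c)
        return out
    if op == 'clear':      return [0] * n             # p2
    if op == 'complement': return [1 - a for a in A]  # p3
    if op == 'and':        return [a & b for a, b in zip(A, B)]  # p4
    if op == 'or':         return [a | b for a, b in zip(A, B)]  # p5
    if op == 'xor':        return [a ^ b for a, b in zip(A, B)]  # p6
    if op == 'shr':        return A[1:] + [0]         # p7, serial input taken as 0
    if op == 'shl':        return [0] + A[:-1]        # p8
    if op == 'incr':       return elementary_op('add', A, [1] + [0] * (n - 1))  # p9
    if op == 'zero':       return int(not any(A))     # z

# The example from the text: (A) = 0011 and (B) = 0101, written MSB first
A, B = [1, 1, 0, 0], [1, 0, 1, 0]
print(elementary_op('and', A, B))   # 0001
print(elementary_op('or', A, B))    # 0111
print(elementary_op('xor', A, B))   # 0110
```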
A letter without a subscript refers to all n flip-flops of the register, while a subscripted letter refers to a single flip-flop. When enclosed in parentheses, the letter refers to the contents of the register or subregister. In order to distinguish between arithmetic plus and logical OR, the logical OR operation is given the ∨ symbol. The ∧ symbolizes the logical AND operation. The overbar symbolizes a bit-by-bit complementation operation. The symbolic notation of Table 9-1 conforms with the notation introduced in Sec. 8-2 and defined in Table 8-1.

A block diagram of the digital system that implements the elementary operations listed in the table is shown in Fig. 9-10. The accumulator includes a parallel A register together with its associated combinational circuit. The register proper consists of n flip-flops A1, A2, ..., An, numbered in ascending order starting from the right-most position. The inputs to the accumulator are: (1) a continuous clock pulse train for synchronization, (2) the nine command signals p1 to p9, with only one p_i being enabled at any clock pulse period, and (3) n inputs B1, B2, ..., Bn from register B. The outputs are the n outputs A1, A2, ..., An of register A and a single output z, whose value is logic-1 when the contents of the accumulator register is 0.

The parallel accumulator will be partitioned into n similar stages, with each stage consisting of one flip-flop A_i and its associated combinational circuit. In subsequent discussions only one typical stage will be considered, with the understanding that an n-bit accumulator consists of n such typical stages with similar interconnections among neighbors. The extreme right-most and left-most stages have no neighbors on one side and require special attention.

9-7 ACCUMULATOR DESIGN
Figure 9-10 Block diagram of accumulator

The implementation of each elementary operation listed in Table 9-1 is carried out in this section. The individual combinational circuits so obtained are incorporated into a unified system in the next section. The specific combinational circuits obviously depend on the type of flip-flop chosen for the A register. We shall employ JK flip-flops in the design that follows and leave the design with other types of flip-flops for problems at the end of the chapter.

For each elementary operation, there corresponds one and only one command signal p_i that activates it. This signal is generated by an external control unit and is considered as an input entering each stage of the accumulator. The presence of this input (i.e., when it is logic-1) dictates the required change of state of the A register upon the occurrence of the next clock pulse. Its absence (when it is logic-0) dictates no change of state. Therefore, the state diagrams to be used subsequently do not list the p_i input, with the understanding that the next state is conditional upon its presence. Moreover, a binary operation takes the present values of A_i and B_i and stores the result of the operation in A_i. Only the A_i flip-flop changes its state in response to a command signal; the corresponding bit B_i remains unaltered. Therefore, B_i is considered as an input to a single stage and is listed as such in the state diagrams.

p1: Arithmetic Addition

This operation has been discussed extensively in previous sections. We shall select the binary adder circuit of Fig. 9-3 with JK flip-flops instead of T. Since the JK flip-flop behaves as a complementing flip-flop when both inputs are excited simultaneously, it follows that both J and K must receive the same input received by the T flip-flop of Fig. 9-3. The input ADD command signal can now be replaced by the symbol p1.
The input functions to the flip-flop and the output carry to the next stage are:

JA_i = (B_iC_i' + B_i'C_i)p1    (9-8a)
KA_i = (B_iC_i' + B_i'C_i)p1    (9-8b)
C_{i+1} = A_iB_i + A_iC_i + B_iC_i    (9-9)

p2: Clear

The clear command signal clears all flip-flops of the A register; in other words, it changes the contents of the register to 0. To cause this transition in a JK flip-flop, we need only apply command signal p2 to the K input during one clock pulse period. The logic diagram for this operation, shown in Fig. 9-11, is obvious and requires no design effort. The input functions are:

JA_i = 0    (9-10a)
KA_i = p2    (9-10b)

Figure 9-11 Clear

p3: Complement

The complement command signal changes the state of each flip-flop of the A register. To cause this transition in a JK flip-flop, we need to apply command signal p3 to both inputs. The logic diagram is shown in Fig. 9-12. The input functions are:

JA_i = p3    (9-11a)
KA_i = p3    (9-11b)

Figure 9-12 Complement

p4: Logical AND

This operation forms the bit-by-bit logical AND between A_i and B_i and transfers the result to A_i. The state table and input excitation for this operation are given in Fig. 9-13(a). The input functions are derived in the maps of Fig. 9-13(b):

JA_i = 0    (9-12a)
KA_i = B_i'p4    (9-12b)

and the logic diagram is shown in Fig. 9-13(c). The procedure for obtaining the state diagram, input excitation, the simplified input Boolean functions, and the logic diagram is straightforward and follows the procedures outlined in Ch. 7.

Figure 9-13 Logical AND

p5: Logical OR

This operation forms the bit-by-bit logical OR between A_i and B_i and transfers the result to A_i. Figure 9-14 shows the state table, input excitation, the map minimization, and the logic diagram for the OR elementary operation.
The input Boolean functions are:

JA_i = B_ip5    (9-13a)
KA_i = 0    (9-13b)

Figure 9-14 Logical OR

p6: Exclusive-or

This operation forms the bit-by-bit logical exclusive-or between A_i and B_i and transfers the result to A_i. The pertinent information for this operation is shown in Fig. 9-15. The Boolean input functions are:

JA_i = B_ip6    (9-14a)
KA_i = B_ip6    (9-14b)

Figure 9-15 Exclusive-or

p7: Shift-right

This operation shifts the contents of the A register one place to the right. Clearly, this operation transfers the content of A_{i+1} into A_i for i = 1, 2, ..., n-1, with the transfer into A_n unspecified.* The excitation table for this operation is shown in Fig. 9-16(a). Note that A_{i+1} is an input to stage i and that the next state is equal to this input. The map minimization is obtained in Fig. 9-16(b) and the logic diagram in Fig. 9-16(c), where the shift-right is executed only when command signal p7 is enabled. The input functions are:

JA_i = A_{i+1}p7    (9-15a)
KA_i = A_{i+1}'p7    (9-15b)

*The data transferred to A_n when shifting right or to A_1 when shifting left determines the type of shift used. This is discussed in Sec. 9-9.

Figure 9-16 Shift-right

p8: Shift-left

This operation shifts the contents of the A register one place to the left. The design procedure here is the same as for the shift-right operation, with A_{i-1} replacing A_{i+1}, and will not be repeated.
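The input functions derived so far for the logical operations can be spot-checked against the JK next-state relation Q(t + 1) = JQ' + K'Q for all four combinations of A_i and B_i. The check below is my own (Python; the command signals p4, p5, p6 are taken as enabled, so they drop out of the input functions):

```python
def jk_next(q, j, k):
    """JK flip-flop next state: Q(t+1) = JQ' + K'Q."""
    return (j & (1 - q)) | ((1 - k) & q)

# Input functions of Eqs. 9-12 to 9-14, with the command signal enabled:
ops = {
    'AND': lambda a, b: (0, 1 - b),   # JA = 0,  KA = B'  (Eq. 9-12)
    'OR':  lambda a, b: (b, 0),       # JA = B,  KA = 0   (Eq. 9-13)
    'XOR': lambda a, b: (b, b),       # JA = B,  KA = B   (Eq. 9-14)
}
expected = {'AND': lambda a, b: a & b,
            'OR':  lambda a, b: a | b,
            'XOR': lambda a, b: a ^ b}

for name, inputs in ops.items():
    for a in (0, 1):
        for b in (0, 1):
            j, k = inputs(a, b)
            assert jk_next(a, j, k) == expected[name](a, b)
print("JK input functions reproduce AND, OR, and exclusive-or")
```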
The logic diagram is shown in Fig. 9-16(c), with command signal p8 initiating the shift-left operation. The input functions are similar to Eq. 9-15:

JA_i = A_{i-1}p8    (9-16a)
KA_i = A_{i-1}'p8    (9-16b)

p9: Increment

This operation increments the contents of the A register by one; in other words, the register behaves like a synchronous binary counter, with p9 being the count command signal. In Sec. 6-5 it was shown that the flip-flop input functions for a synchronous binary counter are:

JA_i = KA_i = p9 A1 A2 ··· A_{i-1}    (9-17)

where the product designates the AND operation of the outputs of all less significant flip-flops. The Boolean function specified by Eq. 9-17 is implemented by one AND gate per stage. The number of inputs in each gate is a function of the flip-flop position in the register. If we number the positions of the individual flip-flops from 1 to n starting from the right-most element, the number of inputs to an AND gate is equal to its position number. Thus, for stage number 2 the gate requires two inputs, one from A1 and one from p9. But the last stage requires an AND gate with n inputs, one from each of the previous flip-flops and one from p9. It is interesting to note that the maximum propagation delay with this configuration is equal to the propagation time of only one AND gate. This arrangement is similar to the carry look-ahead scheme discussed in Sec. 9-3 and achieves the least waiting time for the carry to propagate through all the stages.

It is possible to decrease the number of inputs to the AND gates if one can afford a longer propagation delay. The concept of partitioning into similar subunits with carry look-ahead is applicable to counting circuits as well, since a counter is a special adder that adds a constant equal to one. The implementation of Eq. 9-17 implies no partitioning; i.e., all n flip-flops form a single group. By partitioning into groups with a smaller number of flip-flops, the number of inputs to the AND gates is reduced. The accumulator circuit to be designed in this chapter is purposely partitioned into n identical groups with one flip-flop per group. Therefore, a modification of Eq. 9-17 is needed to make the AND gate in each stage similar to that of any other stage. This can be achieved by defining an input signal E_i into a stage and an output signal E_{i+1} out of each stage, as shown in Fig. 9-17. In this configuration, a total of n-1 AND gates are used, but each gate has only two inputs. The disadvantage is that the maximum propagation delay increases considerably; it is equal to the time required for the signal to propagate through n-1 gates. On the other hand, it standardizes the counting circuit needed for each stage into an AND gate with only two inputs. The flip-flop input functions for a typical stage are easily obtained from inspection of Fig. 9-17:

JA_i = E_i    (9-18a)
KA_i = E_i    (9-18b)

with the additional requirement that each stage generate an output E_{i+1} to be used as an input for the next stage on its left:

E_{i+1} = E_iA_i    (9-19)

with E_1 = p9.

Figure 9-17 Counter circuit with carry propagation

z: Check if Zero

This signal is an output from the accumulator and its function is different from the operations considered so far. It checks the contents of the accumulator and produces an output z whose value is logic-1 when all the flip-flops of the register contain zeros. Obviously, this can be implemented with one AND gate having n inputs coming from the complement output of each flip-flop. The propagation delay of this circuit is the time required for the signal to propagate through this single AND gate. Again, for the sake of standardization and partitioning into n identical stages, we shall adopt the alternative implementation shown in Fig. 9-18.
Here we are splitting the n-input AND gate into n AND gates of two inputs each and increasing the propagation delay to the time required for the signal to propagate through n gates. Each stage will then generate an output function z_{i+1}, given by Eq. 9-20, to be used as an input for the next stage on its left:

z_{i+1} = z_i A_i'   (9-20)
z_1 = 1

Figure 9-18 Chain of AND gates for checking zero content of register

9-8 COMPLETE ACCUMULATOR

The circuits for each individual operation derived in the last section must be integrated to form a complete accumulator unit. The reader is reminded that this digital unit is a multipurpose register capable of performing various elementary operations, and although we have named it accumulator, any other name would serve as well. Now, the command signals p_1 to p_9 are mutually exclusive, and therefore their corresponding logic circuits can be combined. To avoid interaction among the many outputs, an OR gate is inserted in each flip-flop input. In other words, the logical sum of Eqs. 9-8, 9-10 to 9-16, and 9-18 is formed for each J and K input:

JA_i = B_iC_i'p_1 + B_i'C_ip_1 + p_3 + B_ip_5 + B_ip_6 + A_{i+1}p_7 + A_{i-1}p_8 + E_i   (9-21a)
KA_i = B_iC_i'p_1 + B_i'C_ip_1 + p_2 + p_3 + B_i'p_4 + B_ip_6 + A_{i+1}'p_7 + A_{i-1}'p_8 + E_i   (9-21b)

It is also necessary that in each stage we generate the output functions specified by Eqs. 9-9, 9-19, and 9-20:

C_{i+1} = A_iB_i + A_iC_i + B_iC_i   (9-22)
E_{i+1} = E_iA_i   (9-23)
z_{i+1} = z_iA_i'   (9-24)

These output functions are generated in stage i and applied as inputs to the next stage i+1. The first stage, placed in the least significant position, has no neighbor on its right and must use as inputs the logical values C_1 = 0, E_1 = p_9, and z_1 = 1. Boolean functions 9-21 to 9-24 give all the information necessary to obtain the logic diagram of each stage of the accumulator register. The logic diagram of one typical stage of the accumulator register is shown in Fig. 9-19.
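Equations 9-21 to 9-24 can be exercised with a behavioral sketch that applies one clock pulse to all n stages at once. This is an illustrative software model, not a gate-level netlist; the function name, the dict of command signals, and the choice of 0 for the serial inputs of the two end stages are assumptions of the sketch, and the p_1 terms are written compactly as B_i XOR C_i.

```python
def accumulator_pulse(A, B, p):
    """Apply one clock pulse to the n-stage accumulator of Eqs. 9-21 to 9-24.

    A and B are bit lists with index 0 = stage 1 (least significant);
    p maps command names 'p1'..'p9' to 0/1, at most one of them set.
    Returns the new contents of A and the zero-check output z, where z
    reflects the register contents before the pulse.
    """
    n = len(A)
    C, E, z = 0, p.get('p9', 0), 1   # first-stage inputs: C1 = 0, E1 = p9, z1 = 1
    new_A = []
    for i in range(n):
        Ai, Bi = A[i], B[i]
        Al = A[i - 1] if i > 0 else 0      # A_{i-1}; serial input assumed 0
        Ar = A[i + 1] if i < n - 1 else 0  # A_{i+1}; serial input assumed 0
        J = (((Bi ^ C) & p.get('p1', 0)) | p.get('p3', 0)
             | (Bi & p.get('p5', 0)) | (Bi & p.get('p6', 0))
             | (Ar & p.get('p7', 0)) | (Al & p.get('p8', 0)) | E)
        K = (((Bi ^ C) & p.get('p1', 0)) | p.get('p2', 0) | p.get('p3', 0)
             | ((1 - Bi) & p.get('p4', 0)) | (Bi & p.get('p6', 0))
             | ((1 - Ar) & p.get('p7', 0)) | ((1 - Al) & p.get('p8', 0)) | E)
        new_A.append((J & (1 - Ai)) | ((1 - K) & Ai))  # JK flip-flop: A* = JA' + K'A
        C = (Ai & Bi) | (Ai & C) | (Bi & C)            # Eq. 9-22: carry to next stage
        E = E & Ai                                     # Eq. 9-23: increment chain
        z = z & (1 - Ai)                               # Eq. 9-24: zero-check chain
    return new_A, z
```

For example, pulsing p1 with A = 0101 and B = 0011 leaves 1000 in A, pulsing p2 clears A, and pulsing p9 increments it, matching the individual designs of Sec. 9-7.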
Again, it should be emphasized that an n-bit register is comprised of n such stages, each interconnected with its neighbors. The single stage has one flip-flop A_i, an OR gate in each J and K input, and a variety of combinational circuits. There are six inputs to each stage: B_i is the input from the corresponding bit of register B, C_i is the carry from the previous stage on the right, A_{i-1} is the output of the flip-flop from the stage on the right, A_{i+1} is the output of the flip-flop from the stage on the left, E_i is the carry input from the increment operation, and z_i is used to form the output z. There are eight control inputs, p_1 to p_8, one for each elementary operation listed in Table 9-1 (the increment command p_9 enters the first stage as input E_1). There are four outputs: A_i is the output of the flip-flop, C_{i+1} is the carry for the next stage on the left, E_{i+1} is the increment carry for the next stage on the left, and z_{i+1} is applied to the next stage on the left to form the chain for the output z. There are three other inputs into each stage not directly related to the flow of information: clock pulses from the master-clock generator, at least one power supply input, and a ground terminal. This makes the total number of input and output leads in one stage equal to 21; the entire circuit could be incorporated into one IC chip with 21 terminals. The logic diagram is a direct implementation of the Boolean functions listed in Eqs. 9-21 to 9-24 and is self-explanatory. The first stage i = 1 and the last stage i = n need special consideration because they do not have neighbors on one side. The circuit shown in Fig. 9-19 can be used for these two stages and, although it may seem inefficient, it nevertheless facilitates the design by requiring the use of only standard modules. Some of the terminals in these two stages require inputs or outputs different from those specified for a typical stage. The input C_1 in the first stage should be connected to 0, input E_1 to p_9, and z_1 to 1.
The output z_{n+1} in the last stage is a logic-1 when the contents of the A register are zero. Output E_{n+1} is not of any use. Output C_{n+1} can be used as an overflow indicator. Inputs A_{i-1} of the first stage and A_{i+1} of the last stage need special consideration during shifting. Their connections determine the type of shift encountered.*

The three slowest functions of the accumulator are the arithmetic addition, the incrementing process, and the chain that forms the z output. This is because the signals of these functions have to propagate through combinational gates before their outputs settle to the required values. Thus, enough time must be allowed for the carries to propagate through the combinational circuits (2 x n gates for addition and n-1 gates for incrementing) before the command signal p_1 or p_9 is enabled. Similarly, output z will not have the correct value until the signal propagates through n AND gates. Only in low-speed digital systems can the designer afford to use the elementary carry propagation circuits employed in Fig. 9-19. They are used in this chapter for simplicity and tutorial reasons. Improvement of the carry propagation time is required in many practical systems. This can be done by partitioning the accumulator into groups of more than one flip-flop.

*This subject is discussed further in the next section.

Sec. 9-9 ACCUMULATOR OPERATIONS

For 2's complement representation:

Step  Elementary operation   Command signal   Remarks
1.    subtrahend → B                          subtrahend transferred into B
2.    0 → A                  p_2              clear A
3.    (A) + (B) → A          p_1              transfer subtrahend to A
4.    (A)' → A               p_3              complement
5.    (A) + 1 → A            p_9              form 2's complement of subtrahend
6.    minuend → B                             minuend transferred into B
7.    (A) + (B) → A          p_1              forms the difference in A

For 1's complement representation:

Step  Elementary operation   Command signal   Remarks
1.    subtrahend → B                          subtrahend transferred into B
2.    0 → A                  p_2              clear A
3.    (A) + (B) → A          p_1              transfer subtrahend to A
4.    (A)' → A               p_3              form 1's complement of subtrahend
5.    minuend → B                             minuend transferred into B
6.    (A) + (B) → A          p_1              add and set overflow flip-flop L if end-carry
7.    (A) + 1 → A            p_9              forms the difference in A

The difference formed in the accumulator may be transferred directly out, or to register B and then to the external system, or may be left in the A register if needed for the next operation. Note that the only variation between the two sequences is in the step at which the incrementing command p_9 is applied. An extra overflow flip-flop L is required for the 1's complement case, and only if this flip-flop is set by an end-carry is the accumulator incremented. A more efficient subtraction procedure results if the complement of the subtrahend is formed in register B instead of in the accumulator. In this method the minuend is the first operand to be transferred to the accumulator; the subtrahend is left in register B. The subtrahend in register B is then complemented and added to the accumulator. For example, the arithmetic subtraction of two numbers from a third, X minus Y minus Z, is implemented with fewer steps by this procedure than by the previous method.

Comparison

An often-used function in digital data processing is the comparison of two numbers to determine their relative magnitude. The two numbers may be of the same or opposite signs. It is possible to design a digital circuit with three outputs to perform this function and produce a logic-1 in one and only one of three output terminals, depending on whether the first number is greater than, less than, or equal to the second number.* If an accumulator is available, the function can be implemented by the application of a sequence of elementary operations.
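Since the comparison is carried out with the same subtraction sequence, it may help to see the 2's complement sequence as plain arithmetic. In the sketch below, each line mirrors one step of the table; the function name and the 8-bit word length are illustrative.

```python
def subtract_2s(minuend, subtrahend, n=8):
    """X - Y via the 2's complement sequence of elementary operations.

    All arithmetic is confined to an n-bit register by masking.
    """
    mask = (1 << n) - 1
    B = subtrahend & mask   # 1. subtrahend -> B
    A = 0                   # 2. 0 -> A              (p2: clear A)
    A = (A + B) & mask      # 3. (A) + (B) -> A      (p1: transfer subtrahend)
    A = ~A & mask           # 4. (A)' -> A           (p3: complement)
    A = (A + 1) & mask      # 5. (A) + 1 -> A        (p9: 2's complement formed)
    B = minuend & mask      # 6. minuend -> B
    A = (A + B) & mask      # 7. (A) + (B) -> A      (p1: difference in A)
    return A
```

subtract_2s(9, 5) returns 4, while subtract_2s(5, 9) returns 252, the 8-bit 2's complement representation of -4.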
The relative magnitude of two signed binary numbers X and Y may be found by subtracting Y from X and checking the sign bit of the difference in A_n. If it is a 1, the difference X - Y is negative and X < Y; if it is a 0, either X > Y or X = Y. A check of the z output distinguishes between these two possibilities, with z = 1 indicating an equality. However, this last procedure is valid only in the 2's complement representation. In the 1's complement case, a zero may manifest itself in two forms: either as a number with all 0's or as a number with all 1's. Thus accumulators with 1's complement representation require an additional circuit for the z output in order to detect the second type of arithmetic zero.

*See Sec. 4-6.

Shifting Operations

The elementary operations of shift-right and shift-left as defined in Table 9-1 did not specify the data transferred to the left-most and right-most flip-flops of the register. The information transferred to input A_{i+1} of the nth flip-flop or to input A_{i-1} of the 1st determines the type of shift implemented. A logical shift is one that inserts 0's into the extreme flip-flops. Therefore a logical shift-right command necessitates the application of a logic-0 to terminal A_{i+1} in the nth stage, while a logical shift-left requires the application of a logic-0 to terminal A_{i-1} of the 1st stage. A circular shift circulates the bits around the two ends. Therefore, for a circular left shift, one has to connect the output A_n of the nth stage to input A_{i-1} of the 1st stage. A circular right shift requires that output A_1 of the first flip-flop be connected to input A_{i+1} of the last. An arithmetic shift shifts only the magnitude of the number, without changing its sign. This type of shift is also called scaling, or shift with sign extension.
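The shift variants reduce to a few list manipulations; a sketch with illustrative names, where the arithmetic case follows the sign-2's complement rules (the sign bit is kept and extended on a right shift, and a 0 enters stage 1 on a left shift):

```python
def shift_once(bits, kind, direction):
    """One-position shift; bits[0] is A_1 and bits[-1] is the sign flip-flop.

    'logical' inserts a 0 at the vacated end, 'circular' wraps the
    displaced bit around, and 'arithmetic' performs a shift with sign
    extension on a right shift and inserts a 0 at stage 1 on a left shift.
    """
    if direction == 'right':   # toward the least significant stage
        fill = {'logical': 0, 'circular': bits[0], 'arithmetic': bits[-1]}[kind]
        return bits[1:] + [fill]
    fill = {'logical': 0, 'circular': bits[-1], 'arithmetic': 0}[kind]
    return [fill] + bits[:-1]
```

For a four-bit register holding 1100, which is -4 in sign-2's complement (written here least significant bit first as [0, 0, 1, 1]), an arithmetic right shift gives [0, 1, 1, 1], i.e., 1110 = -2: a division by 2 with the sign preserved.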
An arithmetic left shift multiplies a binary number by 2, and an arithmetic right shift divides it by 2. Remember that flip-flop n holds the sign of the number and that the magnitude of a negative number may be represented in one of three different ways. In sign-magnitude representation, the arithmetic shifts are logical shifts among the first n-1 flip-flops. In sign-2's complement representation, a right shift leaves the sign bit unchanged and a left shift transfers 0's into stage 1. In sign-1's complement representation, a right shift leaves the sign bit unchanged and a left shift transfers the sign bit into the least significant bit position.*

*The proofs of these algorithms are left for a problem at the end of the chapter.

Logical Operations

The logical operations are very useful for operating on individual bits or on part of a word stored in the accumulator. A common application of these operations is the manipulation of selected bits of the accumulator specified by the contents of the B register. Three such functions are the selective set, selective clear, and selective complement. A selective-set operation sets the bits of the A register only where there are corresponding 1's in the B register. It does not affect bit positions that have 0's in B. The following specific example may help to clarify this operation:

1100   B
1010   A before
1110   A after

The two left-most bits of B are 1's, and so the corresponding bits of A are set. One of these two bits was already set and the other has been changed from 0 to 1. The above example may serve as a truth table from which one can clearly deduce that the selective-set operation is just an OR elementary operation. The selective-complement operation complements bits in A only where there are corresponding 1's in B. For example:

1100   B
1010   A before
0110   A after

Again the two left-most bits of B are 1's, and so the corresponding bits of A are complemented.
This example again can serve as a truth table, from which one can observe that the selective-complement operation is just an exclusive-or elementary operation. The selective-clear operation clears the bits of A only where there are corresponding 1's in B. For example:

1100   B
1010   A before
0010   A after

Again the two left-most bits of B are 1's, and so the corresponding bits of A are cleared to 0. One can deduce that the logical operation performed on the individual bits is B_i'A_i. To implement the selective-clear operation, it is necessary to complement register B and then AND it to A:

1100   B
0011   B after complementing
1010   A before
0010   A after ANDing

If register B cannot be complemented, one can use De Morgan's theorem to express the operation as (B_i + A_i')', which requires three elementary operations: complement A, OR B to A, and complement A again. The logical operations are useful for packing and unpacking parts of words. For example, consider an accumulator register with 36 flip-flops receiving data from a punch card with 12 bits per column. The data from any three columns can be packed into one word in the accumulator. Other input systems transfer data in an alphanumeric code consisting of six bits per character, so that six characters can be packed in the register. For such data, one may want to pack or unpack the individual items for various data processing applications. The packing of data may be accomplished by first clearing the A register and then performing an OR operation with a single item stored in B. If the individual items are placed in the least significant position of B, the next operation on A requires shifting the item to the left before ORing another item from B. To pack three columns of a card with 12 bits in each column, one requires three OR operations and three logical shift-left operations, with the shifting done on 12 bits at a time. The unpacking of words requires the isolation of items from a packed word.
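The selective operations and masking reduce to ordinary bitwise operators; a sketch using the text's four-bit example values (the function names are illustrative):

```python
N = 4
MASK = (1 << N) - 1

def selective_set(a, b):         # OR elementary operation
    return a | b

def selective_complement(a, b):  # exclusive-or elementary operation
    return a ^ b

def selective_clear(a, b):       # B_i'A_i: AND A with the complement of B
    return a & (~b & MASK)

def isolate(word, mask):         # masking: 1's mark the item to keep
    return word & mask
```

With A = 1010 and B = 1100, these return 1110, 0110, and 0010 respectively, matching the three worked examples above.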
An individual item may be isolated from an entire word by an AND operation (sometimes called masking) with a word in B that has all 1's in the corresponding bits of the item in question and 0's everywhere else.

9-10 CONCLUDING REMARKS

In this chapter, a multipurpose accumulator register was specified and designed. The usefulness of such a register was demonstrated by several data processing examples. The most common operation encountered in an accumulator is the arithmetic addition of two numbers. For this reason, and because this is relatively the most involved function to design, a major portion of the chapter was devoted to this subject. To simplify the presentation, unsigned binary numbers were first considered. Only later, in Sec. 9-9, was it shown that the same circuit can be used to add or subtract two signed binary numbers. Both serial and parallel adders were introduced and their differences discussed. The serial adder was found to require less equipment than its parallel counterpart, but, except for a few small digital systems, most computers employ parallel adders because they are faster. The timing problems encountered in parallel adders as a result of the carry-propagation delay were explained, and a method for decreasing this delay was briefly mentioned. The many possible techniques available for high-speed parallel addition are too numerous for inclusion in this text. The serious reader is advised to supplement this material with the references at the end of the chapter. A set of elementary operations was specified in Sec. 9-6. This set is typical of the operations of many accumulators found in digital computers. In some small computers the entire arithmetic unit, or a large portion of it, encompasses one such register. In larger systems, the accumulator is only a part of the central processing unit, and sometimes more than one accumulator is available.
In multiple-accumulator systems it is customary to abbreviate the names of the registers by a letter and a number, such as A1, A2, A3, or R1, R2, R3, etc. Although only nine elementary operations were specified for the accumulator register in this chapter, one must realize that many other operations could be included. Other possible operations are: transfers from and operations with other registers, shifting by more than one bit, a combined add-and-shift operation (useful for multiplication), direct circuits for binary subtraction, and other logical operations. These operations are not chosen arbitrarily but emerge from the system requirements. The procedure for determining required register operations from the system specifications is clarified in Ch. 11, where a specific digital computer is specified. From these specifications, the register operations are determined. The accumulator register specified in Sec. 9-6 was designed in Secs. 9-7 and 9-8. Two reduction procedures were utilized to simplify the design: the partition of the register into smaller and similar subregisters, and the separation of the functions into mutually exclusive entities. The partitioning of the n-bit register into n similar stages, and the initial separation of the logic design for each elementary operation as outlined in Sec. 9-7, reduced the design process to a very simple procedure. This methodology is basic in digital logic design and is the most important single lesson to be learned from this chapter. The procedure is employed again in Ch. 11, where it is demonstrated once more that the partitioning and separation techniques simplify logic design and facilitate the derivation of the combinational circuits among the various registers.
It should be pointed out that digital computers may have processing units that operate on data different from the binary operands considered here. Such data may be floating-point operands, binary-coded decimal operands, or alphanumeric characters. An accumulator that operates on such data will be somewhat different and more involved than the one derived in this chapter. Floating-point data format was presented in Sec. 8-5, and decimal adders are discussed in Sec. 12-2. The reader interested in the arithmetic algorithms for these data representations is referred to Chu (3). In Sec. 9-9 a few examples were presented to demonstrate the data processing capabilities of an accumulator register. It is apparent that if these processes are to be of any practical use, the register cannot remain isolated from its environment. A block diagram was presented in Fig. 9-20 to help visualize the source and destination of data and the source of command signals for the elementary operations. Admittedly, this environment was very loosely defined and needs further clarification. In the next chapter, this point will be taken up again, and the position of the accumulator among other registers and computer units will be clarified.

PROBLEMS

9-1. Register A in Fig. 9-1 holds the number 0101 and register B holds 0111.
(a) Determine the values of each S and C output in the four FA circuits.
(b) What signals must be enabled for the sum to be transferred to the A register?
(c) After the transfer, what is the content of A and what are the new values of S_1 to S_4 and C_2 to C_5?

9-2. Derive the logic diagram of a one-step and a two-step binary adder with JK flip-flops.

9-3. Redraw Figs. 9-3 and 9-6 using only (a) NAND gates (b) NOR gates.

9-4. Obtain the logic diagram of a one-step parallel adder with RS flip-flops. Show that the number of combinational gates is about the same as in the FA of Fig. 9-1.

9-5. Design three different parallel binary subtractors to subtract the content of B from the content of A. Use each of the three methods discussed in Sec. 9-2.
(a) Full-subtractors and D flip-flops.
(b) One-step subtractor with T flip-flops.
(c) Two-step subtractor with T flip-flops.

9-6. Show the logic diagram of a one-stage parallel binary adder-subtractor circuit. Use the same combinational circuit to form the sum or the difference, and let command signals ADD or SUB determine whether the normal or complement output of A_i is used to form the carry or borrow for the next stage. Use D flip-flops.

9-7. A four-bit two-step adder, of which one stage is shown in Fig. 9-6, has the following binary numbers stored in the registers. Register A: 0101; register B: 0111. Determine the values of A_1 through A_4 and C_2 through C_5 (a) before the AD1 command signal is applied, (b) after the AD1 signal is applied, and (c) after the AD2 signal is applied. (Assume that C_1 = 0, and note that the carries have no meaning except for step b.)

9-8. The Boolean function of the carry look-ahead derived in Sec. 9-3 was for the input to the second group. Derive the carry C_7 for the third group as a function of C_4, A_4, A_5, A_6, and B_4, B_5, B_6. From this Boolean function, generalize the carry circuit needed in each group of three flip-flops.

9-9. What is the maximum carry propagation delay for a 16-bit accumulator that uses the circuit of Fig. 9-1? Assume the maximum propagation time of any gate to be 50 nsec.
9-10. In the serial adder of Fig. 9-8, the addend is 0111 and the accumulator holds the binary number 0101. Draw a timing diagram showing the clock pulses, the ADD command signal, the outputs A_1 through A_4, and the output of the carry flip-flop C during the process of addition.

9-11. Draw the logic diagram of a serial adder using D flip-flops for the shift register, a T flip-flop to store the carry, and NOR gates for the combinational circuit.

9-12. A 24-bit serial adder uses flip-flops and gates with maximum propagation delays of 200 and 100 nsec, respectively. The addend is transferred from a serial memory unit at a rate of 100,000 bits per second. What should the maximum clock-pulse frequency be, and how long would it take to complete the addition?

9-13. Design a serial counter; in other words, determine the circuit needed for an increment elementary operation with a serial register. Note that it is necessary to have a carry flip-flop that can be set initially to 1 to provide the 1 bit in the least significant position.

9-14. Design a 40-bit register (using T flip-flops) with an increment elementary operation. Partition the register into 10 groups of four flip-flops each and show the logic diagram of one typical group with a carry look-ahead for the next group. Calculate the maximum propagation time, assuming that each AND gate has a maximum delay of 20 nsec.

9-15. An IC chip contains a four-bit accumulator with 15 operations q_1 to q_15. Only four terminals are available for specifying the 15 operations. Show the logic diagram of the decoder inside the chip and tabulate the code used to specify each operation. What should be the code when no operation is specified?

9-16. Derive the logic diagram of an accumulator with the same elementary operations as in Fig. 9-19 using (a) T flip-flops (b) RS flip-flops.

9-17. Design one typical stage of an A register (using JK flip-flops) with the following elementary operations:
q_1: (A) - 1 → A   decrement
q_2: (A) - (B) → A   subtract
q_3: (A)(B)' → A   selective clear

9-18. (a) Perform the four different arithmetic computations in sign-2's complement representation of (+/-13) + (+/-7). (b) Repeat with sign-1's complement representation.

9-19. In the algorithm for addition with sign-2's complement representation stated in Sec. 9-9, the effect of overflow was neglected. (a) Using the above-mentioned algorithm and a seven-bit accumulator, perform the following additions: (+35) + (+40) and (-35) + (-40). Show that the answers are incorrect and explain why. (b) State an algorithm for detecting the occurrence of an overflow by inspecting the sign bits of the addend, augend, and sum. (c) Repeat parts (a) and (b) for sign-1's complement representation.

9-20. (a) Obtain the logic diagram of a combinational circuit that adds three bits and two input carries A and B. Show that the sum output is the exclusive-or of the input variables. (b) Draw a block diagram of a parallel adder for adding three binary numbers X + Y + Z stored in three registers. Use the three-bit adder obtained in part (a). (c) Show the sum and carries generated from the three-bit adders during the addition of 15 + 15 + 15.

9-21. The binary numbers that follow consist of a sign bit in the left-most position followed by either a positive number after a 0 or a negative number in 2's complement after a 1. Perform the subtraction indicated by taking the 2's complement of the subtrahend and adding it to the minuend. Verify your results.
(a) 010101 - 000011
(b) 001010 - 111001
(c) 111001 - 001010
(d) 101011 - 100110
(e) 001101 - 001101

9-22. Repeat Prob. 9-21, assuming that the negative numbers are in 1's complement representation.
9-23. List the sequence of elementary operations required to perform two subtractions, i.e., X - Y - Z. Assume that operands can be positive or negative and that all negative numbers are in their 1's complement form. The operand X enters the B register, followed first by Y and then Z. Assume further that the B register can be complemented with a control signal q. The circuit model available is the one shown in Fig. 9-10, together with an overflow flip-flop L that accepts the value of C_{n+1} during the p_1 add command.

9-24. State an algorithm for comparing the relative magnitudes of two signed numbers stored in two registers. Do not use subtraction. Assume that the binary numbers are represented in sign-magnitude.

9-25. Justify the algorithms stated in the text for arithmetic shift-right and shift-left of numbers stored in a register in (a) sign-magnitude (b) sign-2's complement (c) sign-1's complement.

9-26. List the sequence of elementary operations, together with their symbolic notation, for a selective-clear operation. Only the elementary operations of Table 9-1 are available.

9-27. It is necessary to design an accumulator that shifts by more than one bit. The number of shifts is specified by a binary number stored in a separate register. The shift command signals p_7 and p_8 are enabled only during one pulse period. Derive the logic diagram of the external register. (Hint: Use the register as a binary down-counter, enable the shift when the count is not zero, and disable it when the count reaches zero.)

9-28. List the sequence of elementary operations required to pack six alphanumeric characters into one 36-bit word in the accumulator.

REFERENCES

1. Flores, I. The Logic of Computer Arithmetic. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1963.

2. MacSorley, O. L. "High-Speed Arithmetic in Binary Computers," Proceedings of the IRE, Vol. 49 (January 1961), 67-91.

3. Chu, Y. Fundamentals of Digital Computer Design. New York: McGraw-Hill Book Co., 1962.
4. Lehman, M., and N. Burla. "Skip Techniques for High-Speed Carry Propagation in Binary Arithmetic Units," IRE Trans. on Electronic Computers, Vol. EC-10 (December 1961), 691-98.

5. Gilchrist, B., J. H. Pomerene, and S. Y. Wong. "Fast Carry Logic for Digital Computers," IRE Trans. on Electronic Computers, Vol. EC-4 (December 1955), 133-36.

6. Bartee, T. C., and D. J. Chapman. "Design of an Accumulator for a General Purpose Computer," IEEE Trans. on Electronic Computers, Vol. EC-14 (August 1965), 570-74.

7. Quatse, J. T., and R. A. Keir. "A Parallel Accumulator for a General Purpose Computer," IEEE Trans. on Electronic Computers, Vol. EC-16 (April 1967), 165-71.

8. Lucas, P. "An Accumulator Chip," IEEE Trans. on Computers, Vol. EC-18 (February 1969), 105-14.

9. Gordon, C. B., and J. Grason. "The Register Transfer Module Design Concept," Computer Design, Vol. 10, No. 5 (May 1971), 87-94.

10 COMPUTER ORGANIZATION

10-1 STORED PROGRAM CONCEPT

Digital systems may be classified as special or general purpose. A special purpose digital system performs a specific task with a fixed set of operational sequences. Once the system is built, its sequence of operations is not subject to alteration. Examples of special purpose digital systems can be found in numerous peripheral control units, one of which is, for example, a magnetic-tape controller. Such a system controls the movement of the magnetic-tape transport and the transfer of digital information between the tape and the computer. There exists a large class of special purpose systems that cannot be classified as digital computers. The name "digital computer" is customarily reserved for those digital systems that are general purpose. A general purpose digital computer can process a given set of operations and, in addition, can specify the sequence by which the operations are to be executed.
The user of such a system can control the process by means of a program, i.e., a set of instructions that specify the operations, operands, and the sequence by which processing has to occur. The sequence of operations may be altered simply by storing a new program with different instructions. A computer instruction is a binary code that specifies some register transfer operations. A program in machine code is a set of instructions that forms a logical sequence for processing a given problem. Programs may be written by the user in various programming languages, such as FORTRAN, COBOL, etc. However, a program to be executed by a computer must be in machine language; that is, in the specific binary code acceptable to the particular computer. There are specific machine-language programs, known as compilers, that make the necessary translation from the user programming language, such as FORTRAN, to the required machine code. An instruction code is a group of bits that tells the computer to perform a specific operation. It is usually divided into parts, each having its own particular interpretation. The most basic part of an instruction code is its operation part. The operation code of an instruction is a group of bits that define an operation such as add, subtract, multiply, shift, complement, etc. The set of machine operations formulated for a computer depends on the processing it is intended to carry out. The total number of operations thus obtained determines the set of machine operations. The number of bits required for the operation part of the instruction code is a function of the total number of operations used. It must consist of at least n bits for a given 2^n (or fewer) distinct operations. The designer assigns a bit combination (a code) to each operation.
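The "at least n bits for 2^n distinct operations" requirement is just a base-2 logarithm rounded up; a small illustrative sketch:

```python
from math import ceil, log2

def opcode_bits(num_operations):
    """Minimum operation-code width: the smallest n with 2**n >= num_operations."""
    return max(1, ceil(log2(num_operations)))
```

A machine with 32 operations needs a five-bit operation code, while a 33rd operation would force six bits.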
The control unit of the computer is designed to accept this bit configuration at the proper time in a sequence and supply the proper command signals to the required destinations in order to execute the specified operation. As a specific example, consider a computer using 32 distinct operations, one of them being an ADD operation. The operation code may consist of five bits, with the bit configuration 10010 assigned to the ADD operation. When the operation code 10010 is detected by the control unit, a command signal is applied to an adder circuit to add two numbers. The operation part of an instruction code specifies the operation to be performed. This operation must be executed on some data, usually stored in computer registers. An instruction code, therefore, must specify not only the operation but also the registers where the operands are to be found, as well as the register where the result is to be stored. These registers may be specified in an instruction code in two ways. A register is said to be specified explicitly if the instruction code contains special bits for its identification. For example, an instruction may contain not only an operation part but also a memory address. We say that the memory address specifies explicitly a memory register. On the other hand, a register is said to be specified implicitly if it is included as part of the definition of the operation; in other words, if the register is implied by the operation part of the code. Consider, for example, a digital computer with two accumulator registers R1 and R2 and 1024 words of memory. Assume that the operation part of an instruction consists of five bits and that an ADD operation has the code 10010. A precise definition of the ADD instruction can be formulated in a variety of ways, with each specific definition requiring a unique instruction code format and a different number of explicitly and implicitly specified registers or operands. We shall now proceed to consider four possible formats for an arithmetic addition instruction.
We shall now proceed to consider four possible formats for an arithmetic addition instruction. 312 COMPUTER ORGANIZATION Chap. 10 A. zero-address instruction is an instruction code that contains only an operation part and no address part, as shown in Fig. 10-1(a). There are no bits to specify registers and, therefore, we say that all registers are implicitly specified, Since a binary operation such as addition requires two operands, both of which must be available in registers, the computer must have at least two accumulators. A possible interpretation of the instruction code may be: When the operation code 10010 is detected by the control unit, the contents of register RI ate added to the contents of register R2 and the sum is stored in register R2. Thus, the registers in a zero-address instruction are implicitly included in the definition of the operation. A one-address instruction format is shown in Fig. 10-1(b). This format consists of a five-bit operation code and a ten-bit address, The number of bits chosen for the address part of an instruction is determined by the maxi- mum capacity of the memory unit used. Since each word stored must have a specific address attached to it, there must be at least 7 bits available in an address that must specify 2” distinct memory registers. In this particular example, the memory consists of 1024 words. Therefore, 10 bits are used for the address part of the instruction, If the operation part of the code is an arithmetic addition, the address part usually specifies a memory register where one operand is to be found. The register where the second operand is to be found, and the register where the sum is to be stored, must be implicitly specified. 
[Figure 10-1: Instruction code formats — (a) zero-address instruction (bits 1-5: operation); (b) one-address instruction (bits 1-5: operation, bits 6-15: address); (c) two-address instruction (bits 1-5: operation, bits 6-15: address-one, bits 16-25: address-two); (d) instruction with one operand specified (bits 1-5: operation, bits 6-15: operand).]

A possible interpretation of the instruction code whose format is shown in Fig. 10-1(b) may be: Add the contents of the memory register specified by the address part of the instruction to the contents of register R1 and store the sum in register R1. The instruction code 100100001001010 shown in Fig. 10-1(b) has a five-bit operation part 10010 signifying an ADD operation and an address part 0001001010, equal to decimal 74. The control unit of the computer must accept this bit configuration at the proper timing sequence and send control signals to execute the instruction. This involves a transfer of the address part into the memory-address register, reading of the contents of memory register 74, and arithmetically adding these contents to the accumulator register R1. A two-address instruction format is shown in Fig. 10-1(c). The instruction consists of an operation part and two addresses. The first 5 bits signify an arithmetic addition operation and the last 20 bits give the addresses of two memory registers. These two addresses may either specify the location of the two operands or the location of one operand and the sum. In either case, one register remains to be implicitly specified. A possible interpretation of the instruction code with the format shown in Fig. 10-1(c) may be: Add the contents of the memory register specified by address-one to the contents of the accumulator register R1 and store the sum in the memory register specified by address-two. A three-address instruction is also possible.
In this case, three registers can be explicitly specified to give the locations of the two operands and the sum. Another possible instruction format is shown in Fig. 10-1(d). The second part of this instruction code specifies, not the register in which to find the operand, but the operand value itself. A possible interpretation of this code format may be: Add the binary operand given in bits 6 to 15 to the contents of register R1 and store the result in R1. In this case, register R1 is implicitly specified to hold one operand and to be the register that stores the sum. The second operand is explicitly supplied in bits 6 to 15 of the instruction code. The arithmetic addition operation, as well as any other binary operation, must specify two operands and the register where the result is stored. On the other hand, a unary operation such as shift-right must specify one register and possibly the number of bits to be shifted. Similarly, a simple transfer between two registers is an operation that must specify the two registers upon which the operation is to be performed. Thus, the number of registers to be specified in an instruction depends on the type of operation. The number of registers specified explicitly depends on the format chosen by the computer code designer. Most designers have found it efficient to use a one-address instruction format. Such computers include an accumulator register in the processing unit that is invariably implied by the operations. Computers with one-address instructions use the address part of the instruction to specify the address of a memory register where one operand is to be found. The other operand, as well as the sum, is always stored in the accumulator. A simple transfer operation will specify explicitly one register in the address part of the instruction code, while the accumulator is used as the second required register.
Unary operations may specify in their address part either a memory register or the accumulator. A flexibility for choosing the number of addresses for each operation is utilized in computers that employ variable-length instruction codes. In these computers, the variable length of the instruction code is determined by the operation encountered. Some operations have no address part, some have one address, and some have two addresses. This instruction format results in a more efficient code assignment. Both instructions and operands are stored in the memory unit of a general purpose digital computer. Usually, an instruction code having the same number of bits as the operand word is chosen, although various other alternatives are possible. Instruction words are stored in consecutive locations in a portion of the memory and constitute the program. They are read from memory in consecutive order, interpreted by the control unit, and then executed one at a time. The operand words are stored in some other location of memory and constitute the data. They are read from memory from the given address part of the instructions. Every general purpose computer has its own unique instruction repertoire. The stored program concept (the ability to store and execute instructions) is the most important property of a general purpose computer.

10-2 ORGANIZATION OF A SIMPLE DIGITAL COMPUTER

This section introduces the basic organization of digital computers and explains the internal processes by which instructions are stored and executed. The basic concepts are illustrated with the use of a simple computer as a model. Although commercial computers generally have a much more complicated logical structure than the one considered here, the chosen structure is sufficiently representative to demonstrate the basic organizational properties common to most digital computers.
The organization and logical structure of the simple computer is described by first defining the physical layout of the various registers and functional units. A set of machine-code instructions is then arbitrarily defined. Finally, the logical structure is described by means of a set of register-transfer operations that specifies the execution of each instruction.

Physical Structure

A block diagram of the proposed simple computer is shown in Fig. 10-2. The system consists of a memory unit, a control section, and six registers. The memory unit stores instructions and operands. The control section generates command signals to the various registers to perform the necessary register-transfer operations. All information processing is done in the six registers and their associated combinational circuits. The registers are listed in Table 10-1, together with a brief description of their function. The letter designation is used to represent the register in symbolic relations.

Table 10-1  List of Registers in the Simple Computer

Letter designation   Name of register   Function
A                    Accumulator        general purpose processing register
B                    Memory-Buffer      holds contents of memory word
C                    Program-Control    holds address of next instruction
D                    Memory-Address     holds address of memory word
I                    Instruction        holds current instruction
P                    Input-Output       holds input-output information

The input-output P register is a very simple way to communicate with the external environment. It will be assumed that all input information, such as machine-code instructions and data, is to be transferred into the P register by an external unit, and that all output information is to be found in the same register. The memory-address D register holds the address of the memory register. It is loaded from the C register when an instruction is read from memory, and from the address part of the I register when an operand is read from memory.
The memory-buffer B register holds the contents of the memory word read from, or written in, memory. An instruction word placed in the B register is transferred to the I register. A data word placed in the B register is accessible for operation with the A register or for transfer to the P register. A word to be stored in a memory register must first be transferred into the B register, from where it is written in memory. The program-control C register holds the address of the next instruction to be executed. This register goes through a step-by-step counting sequence and causes the computer to read successive instructions previously stored in memory. It is assumed that instruction words are stored in consecutive memory locations and read and executed in sequence unless a branch instruction is encountered. A branch instruction is an operation that calls for a transfer to a nonconsecutive instruction. The address part of a branch instruction is transferred to the C register to become the address of the next instruction. To read an instruction, the contents of the C register are transferred to the D register and a memory read cycle is initiated. The instruction placed in the B register is then transferred into the I register.

[Figure 10-2: Block diagram of the simple computer — the memory unit with read and write controls; the program-control register (C), memory-address register (D), memory-buffer register (B), instruction register (I), accumulator register (A), and input-output register (P); and the operation decoder and control section that sends command signals to all computer registers.]

Since the C and D registers frequently contain the same number, it may seem as if one of them can be removed. Both registers are needed, however, because the C register must keep track of the address of the next instruction in case the current instruction calls for an operand stored in a memory register.
The address of the operand, loaded into the D register from the address part of the I register, destroys the address of the current instruction in the D register. The accumulator A register is a general purpose processing register that operates on data previously stored in memory. This register is used for the execution of most instructions. It is implicitly specified in binary, unary, and memory-transfer operations. The instruction I register holds the bits of the current instruction. The operation part of the code is decoded in the control section and used to generate command signals to the various registers. The address part of the instruction code is used to read an operand from memory or for some other function as given in the definition of the instruction.

Machine-Code Instructions

A list of instructions for the simple computer model is given in Table 10-2. This list is chosen to illustrate various types of instructions and does not represent a practical instruction repertoire. The reader may notice that the first nine instructions are identical to the list of elementary operations in Table 9-1 used for the accumulator register.

Table 10-2  List of Instructions for the Simple Computer

Operation code   Name                   Address part   Function
0001             Add                    M              (A) + (<M>) → A
0010             OR                     M              (A) ∨ (<M>) → A
0011             AND                    M              (A) ∧ (<M>) → A
0100             Exclusive-or           M              (A) ⊕ (<M>) → A
0101             Clear                  none           0 → A
0110             Complement             none           (A)' → A
0111             Increment              none           (A) + 1 → A
1000             Shift-right            none           (A_{i+1}) → A_i
1001             Shift-left             none           (A_i) → A_{i+1}
1010             Input                  M              (P) → <M>
1011             Output                 M              (<M>) → P
1100             Load                   M              (<M>) → A
1101             Store                  M              (A) → <M>
1110             Branch-unconditional   M              M → C
1111             Branch-on-zero         M              if (A) = 0 then M → C; if (A) ≠ 0 proceed to next instruction

The code format used for the instructions consists of an operation part and one address, designated by the letter M. The first four instructions represent binary operations; one is an arithmetic addition and three are logical operations.
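The behavior of the Table 10-2 instruction set can be sketched as a tiny simulator. This is our illustration, not the book's hardware: words are assumed to be 16 bits, memory is modeled as a Python dict, and registers as plain integers.

```python
# A minimal sketch (ours) of the Table 10-2 instruction set, assuming
# 16-bit words; <M> (the memory register at address M) is mem[M].
MASK = 0xFFFF

def execute(op, M, A, C, P, mem):
    """Execute one decoded instruction; returns the updated (A, C)."""
    if   op == 0b0001: A = (A + mem[M]) & MASK    # Add: (A) + (<M>) -> A
    elif op == 0b0010: A |= mem[M]                # OR
    elif op == 0b0011: A &= mem[M]                # AND
    elif op == 0b0100: A ^= mem[M]                # Exclusive-or
    elif op == 0b0101: A = 0                      # Clear
    elif op == 0b0110: A = ~A & MASK              # Complement
    elif op == 0b0111: A = (A + 1) & MASK         # Increment
    elif op == 0b1000: A >>= 1                    # Shift-right
    elif op == 0b1001: A = (A << 1) & MASK        # Shift-left
    elif op == 0b1010: mem[M] = P                 # Input: (P) -> <M>
    elif op == 0b1100: A = mem[M]                 # Load: (<M>) -> A
    elif op == 0b1101: mem[M] = A                 # Store: (A) -> <M>
    elif op == 0b1110: C = M                      # Branch-unconditional
    elif op == 0b1111: C = M if A == 0 else C     # Branch-on-zero
    return A, C                                   # (Output, 1011, omitted here)

A, C = execute(0b0001, 5, A=3, C=0, P=0, mem={5: 7})
print(A)                                          # 10
```

The sketch omits the Output instruction (code 1011) only because this simplified function does not return P; the register-transfer definitions in the table are unaffected.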
The address part M of the binary instructions specifies a memory register where one operand is stored. The accumulator holds the second operand and the sum. Instructions five through nine are unary operations on the accumulator register and do not use the address part. Instructions 10 through 13 are memory-transfer operations. The address part of the instruction specifies a memory register; the second register is implicitly specified. For the input and output operations, the implied register is P. For the load and store operations, the implied register is A. The last two instructions are branch-type instructions. The branch-unconditional instruction causes the computer to continue with the instruction found in the memory register specified by the address part M. The function of this instruction is to transfer the contents of the address part M to the C register, since the latter always holds the address of the next instruction. The branch-on-zero instruction checks the contents of the A register. If they are equal to 0, the next instruction is taken from the memory register whose address is given by the address part M. If the contents of the A register are not equal to 0, the computer continues with the next instruction in normal sequence. Instructions and data are transferred into the memory unit of the simple computer through the P register. It is assumed for simplicity that the P register is loaded with a binary word from an external system and that the repeated execution of the input instruction (code 1010) transfers the words into consecutive memory locations. To start the execution of the stored program, the operator sets the C register to the first address and activates a "start" switch. The first instruction is read from memory and executed. The rest of the instructions are read in sequence and executed in consecutive order unless a branch instruction is encountered.
Instructions are read from memory and executed in registers by a sequence of elementary operations such as inter-register transfer, shift, increment, clear, add, complement, memory read, and memory write. The control section generates the sequence of command signals for the required elementary operations.

Logical Structure

Once the start switch is activated, the computer sequence follows a basic pattern: an instruction whose address is in the C register is read from memory and transferred to the I register. The operation part of the instruction is decoded in the control section. If it is one with an address part, the memory may be accessed again to read a required operand. Thus words read from memory into the B register can be either instructions or operands. When an instruction is read from memory, the computer is said to be in an instruction fetch cycle. When the word read from memory is an operand, the computer is said to be in a data execute cycle. It is the function of the control section to keep track of the various cycles. An instruction is read from memory and transferred to the I register during a fetch cycle. The register-transfer elementary operations that describe this process are:

(C) → D                     transfer instruction address
(<D>) → B, (C) + 1 → C      memory read, increment C
(B) → <D>, (B) → I          restore memory word, transfer instruction to I register

A magnetic-core destructive-read memory is assumed and, therefore, a restoration of the word back to the memory register is required.* The C register is incremented by 1 to prepare it for the next instruction. The fetch cycle is common to all instructions. The elementary operations that follow the fetch cycle are determined in the control section from the decoded operation part. The binary and memory-transfer instructions have to access the memory again during the execute cycle. The unary and branch instructions can be executed right after the fetch cycle.
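The three fetch-cycle steps above can be sketched in Python (our illustration; register names follow Table 10-1, and the destructive core read is modeled by the explicit restore step):

```python
def fetch(regs, mem):
    """Sketch (ours) of the fetch cycle: read the instruction addressed by C
    into I, restore the destructively-read core word, and increment C."""
    regs['D'] = regs['C']            # (C) -> D : transfer instruction address
    regs['B'] = mem[regs['D']]       # (<D>) -> B : memory read
    regs['C'] = regs['C'] + 1        # (C) + 1 -> C : increment C
    mem[regs['D']] = regs['B']       # (B) -> <D> : restore memory word
    regs['I'] = regs['B']            # (B) -> I : transfer instruction to I
    return regs['I']

mem = {750: 0b110100001001010}       # some instruction word at location 750
regs = {'C': 750}
fetch(regs, mem)
print(regs['I'], regs['C'])          # the instruction word, then 751
```

After the call, C already points at the next instruction, which is exactly why the separate D register is needed when the execute cycle overwrites D with an operand address.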
In all cases, control returns back to the fetch cycle to read another instruction. The register-transfer elementary operations during the execute cycle for the four binary operations are:

(I[M]) → D, (I[Op]) → decoder     transfer address part
(<D>) → B                         read operand
(B) → <D>                         restore operand
(A) + (B) → A                     if operation is add
(A) ∨ (B) → A                     if operation is OR
(A) ∧ (B) → A                     if operation is AND
(A) ⊕ (B) → A                     if operation is exclusive-or
Go to fetch cycle.

The symbol (I[M]) designates the contents of the address part M of the I register. Only one of the binary operations listed is executed, depending on the value of (I[Op]); i.e., the operation part of the I register. When the operation code is a memory-register transfer type, the execute cycle is described by the following elementary operations:

Input:
(I[M]) → D, (I[Op]) → decoder     transfer address
0 → <D>, (P) → B                  clear memory register, transfer word from P
(B) → <D>                         store word in memory register
Go to fetch cycle

Output:
(I[M]) → D, (I[Op]) → decoder     transfer address
(<D>) → B                         read word
(B) → <D>, (B) → P                restore and transfer word
Go to fetch cycle

Load:
(I[M]) → D, (I[Op]) → decoder     transfer address
(<D>) → B                         read word
(B) → <D>, (B) → A                restore and transfer word
Go to fetch cycle

Store:
(I[M]) → D, (I[Op]) → decoder     transfer address
0 → <D>, (A) → B                  clear memory register, transfer word
(B) → <D>                         store word in memory
Go to fetch cycle

*See Sec. 8-3.

The input and store operations are similar, as are the output and load operations, the difference being only that the P register is used for the former and the A register for the latter. When the operation code represents a unary or a branch-type instruction, the memory is not accessed for an operand and the execute cycle is skipped. The instruction is completed with the execution of one elementary operation, which can occur at the same time that the instruction word is restored back into memory during the conclusion of the fetch cycle. This elementary operation is listed in Table 10-2 under the column heading "Function." The unary operations are executed in the A register. The branch-unconditional instruction causes a transfer of the address part into the C register. The branch-on-zero instruction causes the same transfer if (A) = 0; if (A) ≠ 0, the address of the next instruction in the C register is left unchanged. The computer sequence of operations is summarized in a flow chart in Fig. 10-3. Note that the completion of an instruction is always followed by a return to the fetch cycle to read the next instruction from memory.

An Example

We shall illustrate the use of the machine-code instructions of the simple computer by writing a program that will accept instructions or data words from the P register and store them in consecutive registers in memory starting from address 0. The program that will process this input information is to be stored in memory registers starting from address 750. The program in machine-code instructions is listed below. The mnemonic name of the operation is entered instead of its binary code.

Memory Location   Operation              Address   Function
750               Input                  000       (P) → <000>
751               Load                   750       (<750>) → A
752               Increment                        (A) + 1 → A
753               Store                  750       (A) → <750>
754               Branch-unconditional   750       Branch to location 750
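The effect of this five-instruction loop can be sketched as follows. This is our illustration, not the book's: each instruction is modeled as an (opcode, address) pair, and each pass of the loop stores one word from P and then increments the address part of the Input instruction at location 750.

```python
# Sketch (ours) of the self-modifying input loop at locations 750-754.
mem = {750: ('Input', 0)}          # the Input instruction, address part 000

def run_loop(words):
    """Feed successive P-register words through the loop; returns the memory."""
    for p in words:
        op, addr = mem[750]        # execute Input M : (P) -> <M>
        mem[addr] = p
        mem[750] = (op, addr + 1)  # Load, Increment, Store : bump the address
    return mem                     # Branch-unconditional repeats the loop

run_loop([111, 222, 333])
print(mem[0], mem[1], mem[2])      # 111 222 333
print(mem[750])                    # ('Input', 3)
```

The printed instruction at 750 shows the self-modification: after three words its address part has advanced from 000 to 003.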
[Figure 10-3: Flow diagram for the sequence of operations in the simple computer — from "Start," the fetch cycle is performed and the operation type is checked; binary and memory-transfer operations perform an execute cycle (see the list of elementary operations in the text), while unary and branch operations execute the decoded operation directly (see Table 10-2); in all cases control returns to the fetch cycle.]

The first word from the P register goes to the memory register in location 0. The instruction in location 750 is then loaded into the A register, incremented by 1, and stored back. Thus the new contents of location 750 become: "Input 001." The program then branches back to address 750. The execution of the modified instruction at location 750 causes a transfer of the second word from the P register to the memory register in location 001. The program loop is repeated again, causing consecutive input words to be stored in consecutive memory locations. Although this example illustrates a very simple machine-code program, there are at least three difficulties associated with its execution. The first difficulty is the lack of provision for terminating the process. Obviously, if the input data goes beyond location 749, it will destroy its own program. One solution to this problem is to provide a "Stop" instruction and stop the computer after a programmed test. The details of such a test are left for an exercise. The second difficulty is concerned with the problem of initialization: if the program is used a second time, the initial address part of the instruction in location 750 will not necessarily be 0. The third difficulty may be stated as a question: how is the first program that enters other programs into the computer brought to memory? In other words, how is the above listed program accepted from the P register if it is not initially stored in memory?
This problem is fundamental to general purpose computers and is concerned with the initial starting procedure referred to as a "cold start." It is solved by a procedure called "bootstrapping," usually a manual operation done by the operator on the computer console that loads a basic initial program into memory, which in turn is capable of calling a more extensive program from an input device.

10-3 LARGE GENERAL PURPOSE COMPUTERS

A block diagram of a large general purpose computer is shown in Fig. 10-4. The memory unit is usually very large and divided into modules; each module is able to communicate with each of the three processors. The central processor is responsible for processing data and performing arithmetic, logical decisions, and other required processing operations. The system control supervises the flow of information among units, with each unit usually having its own control section to take care of internal functions.

[Figure 10-4: Block diagram of a large digital computer — memory modules connected through the system control to the central processor, a peripheral processor serving the input/output devices, and a data communication processor serving the data communication networks.]

The memory, processor, and control units are similar in small and large systems, except that the large one has more capabilities and is much more complicated. Special equipment included in large computers, not normally found in small ones, are processors to control the information flow between the internal computer and external devices. A large computer can use a range of equipment to perform input-output functions. These include card reader and punch, paper-tape reader and punch, teletype keyboard and printer, optical character reader, visual display, graph plotter, and data communication networks. Some input and output devices also provide additional storage to support the internal memory capacity. Such devices include magnetic tapes, magnetic disks, and magnetic drums.
The purpose of the peripheral processor is to provide a pathway for the transfer and translation of data passing between input, output, external storage, and internal memory modules. The peripheral processor is sometimes called a channel controller, since it controls and regulates the flow of data to and from the internal and external parts of the computer. Computers with time-sharing capabilities must provide communication with many remote users via communication lines. A separate data communication processor is sometimes provided to handle input and output data of many communication networks operating simultaneously. Information exchange with these devices is at so slow a rate that it is possible to service tens or even hundreds with a single unit. Its principles of organization and its relation to the rest of the computer are very similar to those of the peripheral processor. However, it is necessarily more complex since it communicates with more devices at the same time. The data communication processor provides a communication link between each remote user and the central processor.

Peripheral Processor

The input-output (I/O) register in the simple computer described in Sec. 10-2 connects directly to the memory-buffer register for direct transfer of data in and out of memory. This means that the entire computer is idle while waiting for data from a slow input device or for data to be accepted by a slow output device. The difference in information flow rate between the central processor (millions of operations or transfers per second) and that of the input-output devices (a few to a few hundred-thousand transfers per second) makes it inefficient for a large computer to have direct communication between the memory modules and external equipment. The peripheral processor makes it possible for several external pieces of equipment to be operating essentially simultaneously, providing data to and taking data from internal storage.
The data format of input and output devices usually differs from the main computer format, so the peripheral processor must restructure data words from many different sources to be compatible with the computer. For example, it may be necessary to take four 8-bit characters from an input device and pack them into one 32-bit word before the transfer to memory. Each input or output device has a different information interchange rate. In the interest of efficiency, it is inadvisable to devote central processor time to waiting for the slower devices to transfer information. Instead, data is gathered in the peripheral processor while the central processor is running. After the input data is assembled, it can be transferred into memory using only one computer cycle. Similarly, an output word transferred from memory to the peripheral processor is transferred from the latter to the output device at the device transfer rate and bit capacity. Thus the peripheral processor acts as a transfer-rate buffer. As each I/O device becomes ready to transfer information, it signals the peripheral processor by use of control bits called "flags." With many I/O devices operating concurrently, data becomes available from many sources at once. The peripheral processor must assign priorities to different devices. This assignment may be on a simple first-come, first-served basis. However, devices with higher transfer rates are usually given higher priority. An I/O transfer instruction is initiated in the central processor and executed in the peripheral processor. For example, the instruction I/O M, where I/O is an input-output operation code and M is an address, is executed in the central processor by simply transferring the contents of memory register M to a peripheral processor register. The contents of memory register M specify a control word for the peripheral processor to tell it what to do.
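The character-packing step mentioned above, four 8-bit characters assembled into one 32-bit word, can be sketched as follows (our illustration; the byte ordering, first character in the high-order byte, is an assumption, not stated in the text):

```python
def pack_word(chars):
    """Pack four 8-bit characters into one 32-bit word, first character in
    the high-order byte. A sketch of the assembly step, not a channel design."""
    assert len(chars) == 4
    word = 0
    for ch in chars:
        assert 0 <= ch < 256
        word = (word << 8) | ch    # shift the partial word left one byte
    return word

print(hex(pack_word([0x41, 0x42, 0x43, 0x44])))   # 0x41424344
```

Once assembled, the single 32-bit word can be stored with one memory cycle, which is the efficiency the text describes.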
A typical control word may have binary-coded information in the following fields:

operation | device | address | count

The operation code specifies input, output, or control information such as start unit operating, rewind tape, etc. The device code is an identification number assigned to each I/O device. Each device reacts only to its own code number. The memory address part specifies the first memory register where the input information is to be stored or where the output information is available. The count number gives the number of words to be transferred. To continue with a specific example, let us assume that the operation code specifies an input and that the device code specifies a card reader. The peripheral processor first activates the unit and goes to fulfill other I/O tasks while waiting for a signal from the card reader to indicate that it is ready to transfer information. When the ready signal is received, the peripheral processor starts receiving data read from cards. If each column punched in the card is transferred separately, the peripheral processor will accept 12 bits at a time. The 12-bit character from each card column can be immediately converted to an internal 6-bit alphanumeric code by the peripheral processor. Characters are then assembled into a word length suitable for memory storage. When a word is ready to be stored, the peripheral processor "steals" a memory cycle from the central processor; i.e., it receives access to memory for one cycle. The address part of the control word is transferred to the memory-address register, the assembled data goes to the memory-buffer register, and the first word is stored. The address and count numbers in the control word are then updated (the address advanced to the next register, the count decreased by 1), preparing for storage of the next word. If the count reaches 0, the peripheral processor stops receiving data. If the count is not 0, data transfer continues until the card reader has no more cards.
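The four control-word fields can be packed and unpacked as bit fields. The field widths below (4-bit operation, 6-bit device, 12-bit address, 10-bit count in a 32-bit word) are entirely our assumption for illustration; the text does not specify them.

```python
# Hypothetical 32-bit control-word layout (field widths are our assumption):
# | operation (4) | device (6) | address (12) | count (10) |
def make_control_word(operation, device, address, count):
    assert operation < 16 and device < 64 and address < 4096 and count < 1024
    return (operation << 28) | (device << 22) | (address << 10) | count

def fields(word):
    """Unpack a control word back into (operation, device, address, count)."""
    return ((word >> 28) & 0xF, (word >> 22) & 0x3F,
            (word >> 10) & 0xFFF, word & 0x3FF)

w = make_control_word(operation=1, device=9, address=200, count=80)
print(fields(w))                    # (1, 9, 200, 80)
```

The peripheral processor would decode these fields exactly as `fields` does, then decrement the count (and advance the address) after each stolen memory cycle.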
A peripheral processor is similar to a computer control section in that it sends signals to all registers to govern their operation. Among its duties are synchronizing data transfer with the main computer, keeping track of the number of transfers, checking flags, and making sure of data transfer to prescribed memory locations. It is important to note that this method of data transfer solves the information transfer-rate-difference problem mentioned earlier. The central processor operates at high speed independently of the peripheral processor. The former's operation interferes with the latter's only when an occasional memory cycle is requested. The slow operations needed when communicating with input and output devices are taken care of by the interface control between the peripheral processor and the I/O devices.

Program Interrupt

The concept of program interrupt is used to handle a variety of problems which may arise out of the normal program sequence. Program interrupt refers to the transfer of control from the normal running program to another service program as a result of an internally or externally generated signal. The interrupt scheme is handled by a master program which makes the control transfers. To better appreciate the concept, let us consider an example. Suppose, in the course of a calculation, the computer finds it must divide by zero (due, of course, to a careless programmer); what should be done? It would be easy enough to indicate the error and halt, but this would waste time. Suppose there is a flip-flop which is set whenever a divide-by-zero operation occurs. The output of this flip-flop could send a signal to interrupt the normal running program and load into the memory-address register the beginning address of a program designed to service the problem. This service program would take appropriate diagnostic and corrective steps and decide what to do next.
The sensing of the flip-flop and the transfer to a service program is handled by the master program and is called a program interrupt. There are many contingencies that demand the immediate attention of the computer and cause a program interrupt by setting a particular flip-flop in an interrupt register. The setting of any interrupt flip-flop results in a transfer of control to the master program. This program then checks to see which flip-flop was set, steers control to the appropriate service program in memory and, upon completion of the task, clears the flip-flop. The master program also ensures that the contents of all processor registers remain unchanged during the interrupt so that normal processing can resume after the interrupt service has been completed. Program interrupts are initiated when internal processing errors occur, when an external unit demands attention, or when various alarm conditions occur. Examples of interrupts caused by internal error conditions are register overflow, an attempt to divide by zero, an invalid operation code, and an invalid address. These error conditions usually occur as a result of a premature termination of the instruction execution. The service program determines the corrective measures to be taken. Examples of external request interrupts are I/O device not ready, I/O device requesting transfer of data, and I/O device finished transfer of data. These interrupt conditions inform the system of some change in the external environment. They normally result in a momentary interruption of the normal program process, which is continued after servicing or recording the interrupt condition. Termination of I/O operations is handled by interrupts; this is the way the peripheral processor usually communicates with the central processor. Interrupts caused by special alarm conditions inform the system of some detrimental change in environment. They normally result from either a programming error or hardware failure.
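The master program's job, checking which interrupt flip-flops are set, steering control to the matching service programs, clearing the flip-flops, and preserving processor state, can be sketched as follows (our illustration; the bit assignments and names are hypothetical):

```python
# A sketch (ours) of master-program interrupt dispatch: each bit of the
# interrupt register corresponds to a flip-flop that selects a service routine.
def dispatch(interrupt_register, service_table, cpu_state):
    saved = dict(cpu_state)                      # preserve processor registers
    for bit, service in service_table.items():
        if interrupt_register & (1 << bit):
            service()                            # run the service program
            interrupt_register &= ~(1 << bit)    # clear the flip-flop
    cpu_state.update(saved)                      # restore state, resume program
    return interrupt_register

log = []
table = {0: lambda: log.append('divide-by-zero'),
         3: lambda: log.append('I/O done')}
remaining = dispatch(0b1001, table, {'A': 42})
print(log, remaining)                            # both routines ran, register cleared
```

A real master program would also impose a priority order on the bits, just as the peripheral processor prioritizes its device flags.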
Examples of alarm-condition interrupts are: the running program is in an endless loop, the running program tries to change the master program, or power failure. Alarm conditions due to programming error result in a rejection of the program that caused the error. Power failure might have as its service routine the storing of all the information from volatile registers into a magnetic-core memory in the few milliseconds before power ceases.

Data Communication

A data communication processor is an I/O processor that distributes and collects data from many remote terminals. It is a specialized I/O processor with the I/O devices replaced by data communication networks. A communication network may consist of any of a wide variety of devices such as teletypes, printers, display units, remote computing facilities, or remote I/O devices. With the use of a data communication processor, the computer can execute fragments of each user's program in an interspersed manner and thus give the appearance of serving many users at once. In this way the computer is able to operate efficiently in a time-sharing environment.

The most common means of connecting remote terminals to a central computer is via telephone lines. The advantages of using already installed lines with comprehensive coverage are obvious. Since these lines are narrow-band voice channels (analog) and computers communicate in terms of signal levels (digital), some form of conversion must be used. This converter is the so-called Modem (from MODulator-DEModulator). The Modem converts the computer pulses into audio tones which may be transmitted over phone lines and also demodulates the tones for machine use. Various modulation schemes, as well as different grades of phone lines and transmission speeds, are used.

In a typical time-sharing system, the central computer is connected to many remote users.
As in the case of a multitude of I/O units, the central computer must be able to single out terminals for individual communication. As before, each terminal is assigned a device code and is selected by matching the code broadcast by the computer.

The binary code most commonly used for data transmission is the ANSCII seven-bit code (Table 10-3), with an eighth bit used for parity. This code consists of 95 graphic characters that include upper- and lower-case alphabetic characters, the numerals zero to nine, punctuation marks and special symbols, and 33 control characters, 10 of which are used in communication control. The control characters perform the control operations necessary to route data properly and put it into the proper format. They are grouped into three functions: communication control, format effectors, and information separators. The 10 communication control characters listed in Table 10-4 will be useful in a subsequent description of terminal operation. Format effectors are functional characters that control the layout of printing or display devices. They include the familiar typewriter format controls such as backspace (BS), horizontal tabulation (HT), vertical tabulation (VT), carriage return (CR), and line feed (LF). Information separators are used to separate blocks of data in a logical, hierarchical order such as sentences, paragraphs, pages, multiple pages, etc. These standard characters may be used in any manner the data communication designer wishes; although they are standard from a symbolic standpoint, their functions are by no means standard.

The information sent to or received from a remote device is called a message. Text (graphic characters) is usually prefaced by control characters. Control information plus text constitutes a message. The length and content of the text depend upon the application. Control information is needed to direct the data communications flow and report its status.
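A message of this kind, control characters framing text, can be sketched as follows. The particular framing chosen here (SOH for the heading, STX and ETX around the text) is an illustrative assumption based on the standard communication control characters, and the function names are invented.

```python
# Illustrative sketch: a message = control information + text, framed with
# the standard communication control characters SOH (start of heading),
# STX (start of text), and ETX (end of text).

SOH, STX, ETX = "\x01", "\x02", "\x03"

def build_message(heading, text):
    """Preface the text with control characters and a heading."""
    return SOH + heading + STX + text + ETX

def parse_message(message):
    """Split a framed message back into its heading and text."""
    assert message[0] == SOH and message[-1] == ETX
    heading, text = message[1:-1].split(STX)
    return heading, text

framed = build_message("YY", "REPORT STATUS")
print(parse_message(framed))   # ('YY', 'REPORT STATUS')
```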
A data communication processor receives information either by individual bits in series or by a parallel transfer of all bits of a character, called a byte. A byte is usually accumulated in an input register and then transferred as one character. The data communication processor accumulates bytes from many remote devices and builds messages by programmed procedures. It uses information tables describing the data network characteristics supplied by the particular installation. It interprets and translates codes for control characters but does not interpret the text transmitted.

Table 10-3 American National Standard Code for Information Interchange (ANSCII) (formerly USASCII and ASCII)

                        b7 b6 b5:
b4 b3 b2 b1   Row   000  001  010  011  100  101  110  111
0  0  0  0     0    NUL  DLE  SP   0    @    P    `    p
0  0  0  1     1    SOH  DC1  !    1    A    Q    a    q
0  0  1  0     2    STX  DC2  "    2    B    R    b    r
0  0  1  1     3    ETX  DC3  #    3    C    S    c    s
0  1  0  0     4    EOT  DC4  $    4    D    T    d    t
0  1  0  1     5    ENQ  NAK  %    5    E    U    e    u
0  1  1  0     6    ACK  SYN  &    6    F    V    f    v
0  1  1  1     7    BEL  ETB  '    7    G    W    g    w
1  0  0  0     8    BS   CAN  (    8    H    X    h    x
1  0  0  1     9    HT   EM   )    9    I    Y    i    y
1  0  1  0    10    LF   SUB  *    :    J    Z    j    z
1  0  1  1    11    VT   ESC  +    ;    K    [    k    {
1  1  0  0    12    FF   FS   ,    <    L    \    l    |
1  1  0  1    13    CR   GS   -    =    M    ]    m    }
1  1  1  0    14    SO   RS   .    >    N    ^    n    ~
1  1  1  1    15    SI   US   /    ?    O    _    o    DEL

The process by which contact is established with a remote terminal is called Poll/Select. The data communication processor continuously polls all terminals by sending the code ENQ (0000101), which asks "Are you ready to send?". The remote terminal may indicate its readiness in a number of ways. If the polled unit is a simple teletypewriter, a send-mode flag may be set. In more complex terminals, buffer memories may be included within the unit to transmit slowly gathered data at high speeds. In this case, the
Table 10-4 Communication Control Characters

Code   Binary    Meaning
SOH    0000001   Start of heading
STX    0000010   Start of text
ETX    0000011   End of text
EOT    0000100   End of transmission
ENQ    0000101   Inquiry
ACK    0000110   Acknowledge
DLE    0010000   Data link escape
NAK    0010101   Negative acknowledge
SYN    0010110   Synchronous idle
ETB    0010111   End of transmission block

unit assumes a transmit-ready status only after its buffer memory has been filled with a complete message. It then waits for a poll from the data center to activate the message transmission. If the polling signal arrives and the terminal is not in a ready mode, it sends back a negative acknowledge (NAK) to indicate that it is not ready. The data center is then satisfied until the next poll. If the terminal is ready, it sends back an ACK, and communication proceeds.

A very simple format that may be used between the data communication processor and a remote terminal will now be given. The data communication processor establishes contact by sending the following characters, where Y is a station address and ENQ is a control character listed in Table 10-4:

Y Y ENQ

Assuming the remote device has information to be transmitted, it sends the following characters, where Y is its identification address and X is text:

SOH Y Y STX X X . . . . X ETX

If the data communication processor receives the data without errors, it sends back an acknowledge character and terminates:

Y Y ACK EOT

The speed of most remote terminals (especially teletype units) is extremely slow compared with computer speeds. This property allows multiplexing of many users to achieve greater efficiency in time-sharing systems. Multiplexing is a systematic repeated selection of multiple inputs which combines them into a single output. It is analogous to a multiple-position rotary switch rotating at high speed, sampling one input at a time.
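The rotary-switch analogy can be sketched as a round-robin sampler. The terminal names and character buffers below are illustrative assumptions.

```python
# Sketch of multiplexing: systematic repeated selection of multiple inputs,
# combined into a single output stream, like a rotary switch sampling one
# input at a time.

from itertools import cycle

def multiplex(input_lines, samples):
    """Sample each input line in turn, emitting one character per selection."""
    output, selector = [], cycle(sorted(input_lines))
    for _ in range(samples):
        line = next(selector)                   # the rotary switch advances
        if input_lines[line]:                   # sample one character, if any
            output.append((line, input_lines[line].pop(0)))
    return output

terminals = {"T1": list("AB"), "T2": list("XY")}
print(multiplex(terminals, 4))   # [('T1','A'), ('T2','X'), ('T1','B'), ('T2','Y')]
```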
This technique allows many users to operate at once and share a single communication line while being sampled at speeds comparable to the speed of the main computer.

Microprogramming

It was shown in Sec. 10-2 that a computer instruction is executed internally by a sequence of elementary operations. It is possible to derive combinational circuits to implement the elementary operations during appropriate control pulses. By wiring circuits permanently to do the elementary operations, however, the computer is limited to a fixed instruction repertoire prescribed by the original design. A microprogrammed controlled computer has an independent control over the processes dictated by the instructions. It incorporates a control memory that holds microprograms for the interpretation of machine instructions. The microprogram specifies a sequence of elementary operations called micro-operations. The sequence of elementary operations needed to execute a machine instruction is not wired by unalterable circuits but instead is specified by a set of microinstructions in the microprogram. The usual procedure is to choose the micro-operation sequence for each computer instruction by specifying a set of microinstructions. The microprogrammed concept increases the flexibility of a computer in that it allows the user to define his own instructions to fit his needs.

A microprogrammed controlled computer needs a small computing section within the main computer to program and execute the microprograms. This inner computing unit replaces the control section of a conventional stored program computer. It usually comprises a control memory, input and output registers, and a control decoder. The control memory stores the individual microinstructions, which are interpreted by the control decoder. The most common storage medium used for microprograms is the read-only memory.
This is a memory unit whose binary cells have fixed, permanently wired values. Common algorithms are permanently stored in the read-only memory as the required micro-operation sequences. The control memory may also be a read-write memory, in which case any sequence of micro-operations the user wishes to define may be programmed in microinstructions.

A block diagram of a basic microprogrammed system is shown in Fig. 10-5. An instruction fetched from main memory specifies the first address of the microprogram that implements this instruction. The address for subsequent microinstructions may come from a number of sources. A simple way to specify the next microinstruction is to include its address with each microinstruction. This method is suitable for well-defined sequences of operations, where the algorithm for a certain machine instruction can be executed by consecutive micro-operations. It may be necessary, however, to make certain microinstruction executions dependent on the results of previous micro-operations. For this reason, the next-address encoder receives information from either the present microinstruction, the main computer, or both.

[Figure 10-5 A generalized model of microprogram control: a microprogram address register selects a control word into the control register; the control decoder interprets it and sends signals to the main computer registers; a next-address encoder, driven by timing and sequencing circuits, determines the following address.]

A microinstruction is a control word which is decoded by the control decoder. The control decoder generates signals for the main computer to perform micro-operations. The control word specified by the address register appears in the control register. This control word has many possible formats, depending on the specific microprogramming scheme adopted. In specifying the format of a control word, the following questions must be considered:

1. How many control functions will be specified by a single memory word?
2. How will the control functions be executed?
For example, it is possible to scan m bits of an n-bit control word sequentially or process all n bits concurrently.

3. How will control words be decoded?
4. How will the execution sequence be established?
5. From where will the address of the next control word be derived?

Once these and other questions have been answered, design may proceed in exactly the same manner in which a stored program computer is designed (see Ch. 11). Most microprograms are executed by calling the first address of the microprogram, sequencing through the micro-operations stored in the control memory, and then transferring back to main program control. A microprogram is thus nothing more than a subroutine at the elementary operation level.

10-4 CLASSES OF MACHINE INSTRUCTIONS

Instructions are represented inside the computer in binary-coded form since this is the most efficient method of representation in computer registers. However, the user of the computer would like to formulate his problem in a higher-level language such as FORTRAN or COBOL. The conversion from the user-oriented language to its binary-code representation inside the computer can itself be performed by a computer program. This is referred to as the compiling process; the program which performs the conversion is called a compiler.

It would be instructive to demonstrate the process of compiling with a simple example. Consider a FORTRAN assignment statement X = Y + Z to be translated into the language of the simple computer described in Sec. 10-2. This statement, being one of many in the program, is punched on a card and entered into the computer. The FORTRAN statement is stored in memory with an alphanumeric binary code and occupies six consecutive character locations. The six characters are: X, =, Y, +, Z, and a symbol used for the end of the statement. The translation from the set of characters to machine code is processed by the compiler program as follows.
First the characters are scanned to determine whether the statement is indeed a valid assignment statement. Then the characters X, Y, and Z are recognized as variables and each is assigned a memory address. The + character is interpreted as an arithmetic operator and the = character as a replacement operator. The compiler program translates the statement into the following set of machine instructions:

Operation   Address   Binary code        Function
Load        17        1100 0000010001    Load value of Y into A
Add         18        0010 0000010010    Add value of Z to A
Store       19        1101 0000010011    Store result in X

It is assumed that the compiler chose addresses 17, 18, and 19 for the variables Y, Z, and X, respectively. A value entered for a variable through an input statement, or calculated from an assignment statement, is stored in the memory register whose address corresponds to the variable name. The compiler keeps an internal table that correlates variable names with memory addresses and refers to the corresponding address when the value of the variable is used or when the variable is given a new value. Thus, the value of the sum is stored in the memory register whose address corresponds to the name X.

The binary-coded instructions are the ones actually stored in memory registers. The first four bits are derived from the operation code of the simple computer of Sec. 10-2. The last 10 bits represent the address part and are obtained from the conversion of the decimal number to its binary equivalent.

The number of instructions available in a computer may be greater than a hundred in a large system and a few dozen in a small one. Some large computers perform a given function with one machine instruction, while a smaller one may require a large number of machine instructions to perform that same function. As an example, consider the four arithmetic operations of addition, subtraction, multiplication, and division.
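The binary-coded words in the compiled example above can be formed mechanically by packing the 4-bit operation code with the 10-bit address. A sketch, using the opcodes shown in the example (the function name is invented):

```python
# Sketch of forming the binary-coded instructions of the X = Y + Z example:
# a 4-bit operation code followed by a 10-bit address, 14 bits in all.

OPCODES = {"Load": 0b1100, "Add": 0b0010, "Store": 0b1101}

def encode(operation, address):
    """Pack the opcode and address into one 14-bit binary-coded word."""
    assert 0 <= address < 1024            # the address part holds 10 bits
    return format((OPCODES[operation] << 10) | address, "014b")

for op, addr in [("Load", 17), ("Add", 18), ("Store", 19)]:
    print(op, addr, encode(op, addr))
# Load 17 -> 11000000010001, Add 18 -> 00100000010010, Store 19 -> 11010000010011
```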
It is possible to provide machine instructions for all four operations. It is also possible to provide only the Add operation as a machine instruction, with the other arithmetic operations implemented by sequences of many Add instructions. Functions executed internally with a single computer instruction are said to be implemented by hardware. Functions implemented with a subroutine program (i.e., a sequence of machine instructions) are said to be implemented by software. Large computers provide an extensive set of hardware instructions; smaller ones depend more on software implementation. Certain series of computer models possess similar external characteristics but differ in operating speed and cost. The high-speed, expensive models have a large number of hardware-implemented instructions. The low-speed, less expensive models have only some basic hardware instructions, with many other functions implemented by software subroutines.

The physical and logical structure of a computer is normally described in a reference manual provided with the system. Such a manual explains the internal structure of the computer, including the processing registers available and their logical capabilities. It lists all hardware-implemented instructions, specifies their binary-code format, and provides a precise definition of each instruction. The instructions for a given computer can be conveniently grouped into three major classes: (1) those that perform operations and change data values, such as arithmetic and logical instructions; (2) those that move information between registers without transforming it; and (3) those concerned with testing of data and branching out of the normal program sequence.

Operational Instructions

Operational instructions manipulate data in registers and produce intermediate and final results necessary for the solution of the problem.
These instructions perform the needed calculations and are responsible for the bulk of the activity involved in processing data in the computer. The most familiar examples of operational instructions are the four arithmetic operations of addition, subtraction, multiplication, and division. These are the most basic functions from which scientific problems can be formulated using numerical analysis methods. Although implemented by hardware instructions in most computers, division, and sometimes multiplication, are implemented by software in some small computers. Arithmetic instructions require two operands as input and produce one operand as output. If the memory registers containing all three operands are specified explicitly, a three-address instruction is needed. This instruction may take the form:

Add M1, M2, M3

where the mnemonic name Add represents the operation code and M1, M2, M3 are the three addresses of the operands. The execution of this instruction requires three references to memory in addition to the one required during the fetch cycle. The elementary operations to implement the instruction are:

(⟨M1⟩) → B              read 1st operand
(⟨M2⟩) → B, (B) → A     read 2nd operand, transfer 1st to A
(A) + (B) → A           form the sum in the processor
(A) → B                 transfer sum for storage
(B) → ⟨M3⟩              store sum in 3rd register

For simplicity, a nondestructive memory unit has been assumed, and the transfers of the address parts from the instruction register to the memory-address register have been omitted. Most computers use a reduced number of explicit memory registers in the instruction format and assume that some of the operands use implicit registers, as discussed in Sec. 10-1 and Prob. 10-2.

Hardware algorithms for processing machine multiplication and division require three registers.
The B register is used for holding one operand; the A register, for executing repeated additions or subtractions; and a third register, for storing the second operand and/or the result. The third register is sometimes called the MQ-register because it holds the multiplier during multiplication and the quotient during division. These two arithmetic instructions require a large number of successive elementary operations of additions and shifts, or subtractions and shifts, for their execution. When the computer central control decodes the operation part of such a lengthy instruction, it is customary for it to transfer a special control signal to a local control section within the arithmetic unit. This local controller supervises the execution of the instruction among the arithmetic registers and then transfers a signal back to the central control upon completion of the instruction execution. (See Sec. 12-6.)

The four basic arithmetic operations are said to be binary because they use two operands. Unary and ternary arithmetic operations are also possible. The square root of a number is a unary operation because it needs only one operand. A function such as A + B * C is a ternary operation because it uses three operands, A, B, and C, to produce a result equal to the product of the second and third operands added to the value of the first. The square root function is well known to occur in many scientific problems, and the above-mentioned ternary function occurs repeatedly in matrix manipulation problems. Most computers perform these two functions by software means. Some high-speed computers find it convenient to include such operations as hardware instructions. The disadvantage of the added equipment is offset by the gain obtained from the added speed provided by the direct implementation.
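The elementary-operation sequence given earlier for the three-address Add instruction can be simulated directly. A sketch, with memory modeled as a Python dictionary and the register names of the text:

```python
# Simulation of the three-address instruction Add M1, M2, M3, following the
# elementary-operation sequence in the text (nondestructive memory assumed).

def add_three_address(memory, m1, m2, m3):
    B = memory[m1]            # (<M1>) -> B : read 1st operand
    A, B = B, memory[m2]      # (<M2>) -> B, (B) -> A : read 2nd, transfer 1st to A
    A = A + B                 # (A) + (B) -> A : form the sum in the processor
    B = A                     # (A) -> B : transfer sum for storage
    memory[m3] = B            # (B) -> <M3> : store sum in 3rd register
    return memory[m3]

memory = {17: 5, 18: 7, 19: 0}
print(add_three_address(memory, 17, 18, 19))   # 12
```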
The data type assumed to be in the processing registers during the execution of arithmetic instructions is normally specified implicitly in the definition of the instruction. An arithmetic instruction may specify fixed-point or floating-point data, binary or decimal data, single-precision or double-precision data. It is not uncommon to find a computer with four types of Add instruction, such as Add binary integer, Add floating-point, Add decimal, and Add double-precision.

Fixed-point binary numbers are assumed to be either integers or fractions; negative numbers are represented in registers by either sign-magnitude, 1's complement, or 2's complement. Floating-point arithmetic instructions require special hardware implementation to take care of the alignment of fractions and the scaling of exponents. Some computers prefer to use software subroutines for the floating-point operations to reduce the amount of circuitry in the processor. Decimal data are represented by a binary code, with each decimal digit occupying four flip-flops of a register. Computers oriented toward business data processing applications find it more convenient to have arithmetic instructions that operate on decimal data. The number of bits in any computer register is of finite length and, therefore, the results of arithmetic operations are of finite precision. Some computers provide double-precision arithmetic instructions, where the length of each operand is taken to be the length corresponding to two computer registers.

There are many other operational-type instructions not classified as arithmetic operations. The common property of such instructions is that they transform data and produce results which conform with the rules of the given operation. The logical instructions AND, OR, etc., are operational instructions that require two input operands to produce a value for the output operand.
Bit-manipulation instructions such as bit-set, bit-clear, and bit-complement transform data in registers according to the rules discussed in Sec. 9-9. Unary operations such as clear, complement, and increment are operational instructions that require no operand from memory and, therefore, do not use the address part of the instruction code. The shift instruction can utilize the part of the instruction code normally used to specify an address for storing an integer that specifies the number of places to be shifted. Other operational instructions are those concerned with format editing operations such as code translation, binary-to-decimal conversion, packing and unpacking of characters, and editing of data as, for example, during the preparation of output characters for a printer.

Inter-Register Transfer Instructions

Inter-register transfer instructions move data between registers without changing the information content. These instructions are essential for input and output movement of data and serve a necessary function before and after operational instructions. These instructions must specify a source and a destination register. The information at the source is invariably assumed to remain undisturbed. However, the previous information at the destination register must be destroyed after the transfer in order to accommodate the new information. An inter-register transfer instruction requires two addresses for its specification when the source and destination are both memory registers. However, the source or destination can be specified implicitly if they are chosen to be processor registers. A memory-to-memory inter-register transfer instruction is of the form:

Move M1, M2

where "Move" is a mnemonic name for the transfer operation code, M1 is the address of the source register, and M2 is the address of the destination register.
This instruction can be implemented with a destructive-read type of memory as follows:

Read:    (⟨M1⟩) → B    read source register
         (B) → ⟨M1⟩    restore word; a copy remains in the B register
Write:   0 → ⟨M2⟩      clear destination register; the word is still in the B register
         (B) → ⟨M2⟩    transfer word to destination register

Yet, to perform this function internally, the computer must also perform the register transfers which are part of the fetch cycle and those necessary for reading the operand from memory. Only then is the specified function executed.

Test and Branch Instructions

Computer instructions are normally stored in consecutive memory registers and executed sequentially. This is accomplished, as explained in Sec. 10-2, by incrementing the address stored in the program-control register during the fetch cycle so that the next instruction is taken from the next memory register when execution of the current instruction is completed. This type of instruction sequencing uses a counter to calculate implicitly the address of the next instruction. Although this method is employed in practically all computers, it is also possible to have a computer with an explicit next-instruction scheme. By this method, the program-control register is eliminated. Instead, the address of the next instruction is explicitly specified in the present instruction. As an example, consider the two-address instruction:

Add M1, M2

with M1 specifying an operand and M2 specifying the address of the next instruction, rather than an operand. The function of the instruction may be defined to be (⟨M1⟩) + (A) → A, which requires the following elementary operations for its execution:

M1 → D                      transfer address of operand
(⟨D⟩) → B                   read operand
(B) → ⟨D⟩, (B) + (A) → A    restore word, add to A

To fetch the next instruction from memory, the computer must proceed to use the second address M2 as follows:

M2 → D                      transfer address of next instruction
(⟨D⟩) → B                   read instruction
(B) → ⟨D⟩, (B) → I          restore word, transfer instruction to I register

The new instruction, together with the address of the one following it, is now stored in the instruction register.

When the implicit counting instruction sequence method is used, the computer must include branch instructions in its repertoire to allow for the possibility of a transfer out of the normal sequence. The simplest branch instruction is the unconditional, which transfers control to the instruction whose address is given in the address part of the code. The unconditional, as well as one conditional, branch instruction were illustrated in the simple computer of Sec. 10-2. Conditional branch instructions allow a choice between alternative courses of action, depending on whether certain test conditions are satisfied. The ability to branch to a different set of instructions as a result of a test condition on intermediate data is one of the most important characteristics of a stored program computer. This property allows the user to incorporate logical decisions in his data processing problem.

Test conditions that are usually incorporated in a computer instruction repertoire include comparisons between operands to determine the validity of certain relations such as equal, unequal, greater, smaller, etc. Other useful test conditions are associated with the relative magnitude of a number, to check whether it is positive, negative, or zero. Counting the number of times a program loop is executed is a useful test for branching out of the loop. Other typical test conditions encountered in digital computers are: input and output status bits, such as device ready or not ready; processor status error bits, such as overflow after an arithmetic addition; and the status of console switches set by the operator. These status bits are stored in flip-flops and tested by conditional branch instructions.
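Execution of a conditional branch amounts to a conditional transfer into the program-control register. A sketch (the register names and test condition here are illustrative):

```python
# Sketch of conditional branch execution: if the tested condition is
# satisfied, the address part of the instruction register replaces the
# contents of the program-control register; otherwise sequencing continues.

def branch_on_condition(condition_satisfied, address_part, program_control):
    """Return the next value of the program-control register."""
    if condition_satisfied:
        return address_part        # address part -> C : take the branch
    return program_control         # C unaltered: next instruction in sequence

A = -3
print(branch_on_condition(A < 0, 420, 101))   # 420: branch taken
print(branch_on_condition(A == 0, 420, 101))  # 101: continue in sequence
```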
The usual code format of a conditional branch instruction is of the form:

Branch-on-condition M

where "Branch-on-condition" is the mnemonic for the binary operation code which specifies the condition implicitly. When the condition being tested is satisfied, a branch to the instruction at address M is made. The program continues with the next consecutive instruction in sequence when the condition is not satisfied. Some typical conditional branch instructions are:

1. Branch to M if (A) > 0.
2. Branch to M1 if (⟨M2⟩) < (A).
3. Branch to M if (X) = 0, where X is a processor register.
4. Branch to M if the overflow indicator is on.

These conditional branch instructions are executed by transferring the address part M (M1 in example 2) from the instruction register to the program-control register if the test condition is satisfied. Otherwise, the address of the next instruction stored in the program-control register remains unaltered. It is possible to have different definitions for the conditional branch instruction. For example, the branch to the instruction at address M may be made if the test condition is not satisfied. It is also possible to have a two-address branch instruction such as:

Branch-on-zero M1, M2
If (A) = 0, branch to M1
If (A) ≠ 0, branch to M2

It is also possible to have zero-address branch instructions. These instructions are called skip-type instructions. Their function is as follows: if the test condition is satisfied, the next instruction is skipped; otherwise, the next instruction in sequence is executed. Consider the following three consecutive instructions:

Skip-on-zero accumulator
Branch-unconditional M1    (this instruction is executed if (A) ≠ 0)
Add M2                     (this instruction is executed if (A) = 0)

The first is a skip instruction which has no address part. If the condition of the skip instruction is satisfied, the next instruction is skipped and the contents of ⟨M2⟩ are added to the accumulator.
If the condition is not satisfied, program control is transferred to the instruction at address M1.

A very important branch instruction that any computer must provide is associated with entering and returning from subroutines. A subroutine is an independent set of consecutive instructions that performs a given function. During normal execution of a program, a subroutine may be called to perform its function with a given set of data. The subroutine may be called many times at various points of the program. Every time it is called, it executes its set of instructions and then transfers control back to the main program. The place of return to the main program is the instruction whose address follows the address of the instruction that called the subroutine. This address was in the program-control register just prior to the subroutine transfer. These concepts can be clarified with the following example. Consider a main program stored in memory starting from location 200, and a subroutine located in locations 500 to 565. "Enter-subroutine" is a mnemonic name for an operation code that calls a subroutine whose first instruction is found at the address given in the address part of the calling instruction.

Main Program
location    operation          address
200         Enter-subroutine   500
201         Add                351
...
253         Enter-subroutine   500
254         OR                 365
etc.

Subroutine
location    instruction
500         first instruction of subroutine
...
565         Exit from subroutine (last instruction)

The instruction in location 200 calls the subroutine located at address 500. The execution of this instruction causes a branch to location 500, and then control continues sequentially to execute the instructions of the subroutine. When the last instruction, in location 565, is decoded, the control unit interprets the operation code whose mnemonic name is "Exit from subroutine" as a branch instruction back to the main program at location 201, the location following the instruction that called the subroutine.
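The two calls in this example can be traced in simulation. A sketch in which the return address is saved in a reserved register before control transfers to the subroutine (function names are illustrative):

```python
# Sketch of subroutine entry and exit for the example: calls from locations
# 200 and 253 both branch to location 500; Exit returns to 201 and 254.

def enter_subroutine(program_control, subroutine_address):
    """Save the return address, then branch to the subroutine."""
    saved_return = program_control           # program-control register already
    return subroutine_address, saved_return  # holds the next-instruction address

def exit_subroutine(saved_return):
    """Branch back to the main program at the saved return address."""
    return saved_return

pc, saved = enter_subroutine(201, 500)   # call from location 200
print(pc, exit_subroutine(saved))        # 500 201
pc, saved = enter_subroutine(254, 500)   # call from location 253
print(pc, exit_subroutine(saved))        # 500 254
```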
When the main program reaches location 253, the process of entering and returning from the same subroutine is repeated, except that now the return address is location 254. The "Exit-from-subroutine" instruction must know to branch back to location 201 the first time and to location 254 the second time. The simplest method for providing the return address is to store the contents of the program-control register (which holds the address of the next instruction) in some temporary register before leaving the main program and entering the subroutine. This temporary register may be a reserved memory register, a reserved processor register, or the memory register just prior to the subroutine program (location 499 in this example). One possible way to execute the two subroutine branch instructions with elementary operations is:

Enter-subroutine M:    (C) → X, (I[M]) → C
Exit-from-subroutine (no address part):    (X) → C

where I is the instruction register, C is the program-control register, and X is a reserved processor register used implicitly for subroutine transfers.

10-5 CHARACTER MODE

A character is a binary code that represents a letter, a decimal digit, or a special symbol. A computer is said to operate in the character mode when characters are the units of information stored in memory registers or used during manipulation of information in the processor. This is in contrast with computers that operate in the word mode, where the unit of information stored in registers represents data types used during the solution of numerical problems. The processing of scientific problems requires data types such as integers or floating-point operands, which require large word lengths, normally in excess of 24 bits. The processing of business data or other symbol-manipulation applications requires the use of character codes whose unit of information consists of six or eight bits. The type of instructions useful in word mode operation is discussed in Sec. 10-4.
Character mode instructions are similar, except that the unit of information is a character instead of a word. The arithmetic operations are the ones most often used when operating in the word mode. The instructions most commonly used in character mode are editing operations and data movement of character strings. The following are some applications that require character mode representation and operation.

1. The input and output data employed by the human user is invariably in decimal form and in a special format such as integer, floating-point, etc. The user-oriented data is represented in computer registers with a character code. It has to be translated into the corresponding data type used internally in computer registers.

2. A user-oriented programming language such as FORTRAN consists of letters, decimal digits, and special symbols. The computer stores the input program in memory registers as a string of characters.

3. The translation from the user's program to machine-language code requires many character mode operations on the input character string.

4. Business and commercial applications deal with names of people, names of items, identification numbers, and similar information, represented in computer registers with a character code. The processing of such information requires a variety of character mode operations.

The differences between the needs of scientific computation and business data processing brought about the design and production of specialized computers oriented toward either word or character mode representation and operation. Later computer models, recognizing some common characteristics and needs of the two modes, have incorporated both types in one machine. Computers may be grouped in four categories according to their representation and use of basic units of information such as words and characters.

1. Word machine. This type of computer can operate only in the word mode.
Memory and processor register lengths are fixed; an instruction always specifies an entire word. This type of machine is usually called a "scientific computer." However, character mode operations are needed during input, output, and translation of user-oriented programs. These operations are usually handled by indirect and inefficient means using the existing word-oriented instructions.

2. Character machine. This computer type uses a length of memory and processor registers equal to the number of bits used to represent a character. Data is stored as a string of characters and is referenced from memory by specifying the address of the first character in the string and some indication as to the last character. The last-character indication may be the address of the last character in the string, a character count, or, for variable-length data, a mark in one bit of the character code that designates the end of data. This type of machine uses decimal arithmetic with a four-bit code to represent a decimal digit (the remaining bits of the character code are either removed or used for other purposes). Instruction codes have their operation part in one character; addresses are placed in succeeding character locations in memory. Data execution is usually accomplished through manipulation of one character at a time. This type of machine is usually called a "business-oriented computer."

3. Word machine with character mode. This type of computer is basically a word machine, except that character mode representation and operation is also provided. The memory and processor registers are of fixed word length but may, for character mode, be considered as storing a fixed number of distinguishable characters. For example, a computer with a word length of 48 bits and a character code of 6 bits can store eight characters in a single word.
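The 48-bit word holding eight 6-bit characters can be modeled with shifts and masks. This is a sketch: the book does not fix which end of the word holds character 0, so the left-to-right numbering used here is an assumption.

```python
# Packing and extracting 6-bit characters in a 48-bit word,
# numbering character positions 0..7 from the high-order end.

WORD_BITS, CHAR_BITS = 48, 6
CHARS_PER_WORD = WORD_BITS // CHAR_BITS      # 8

def pack(chars):
    """Pack eight 6-bit codes into one 48-bit word."""
    assert len(chars) == CHARS_PER_WORD
    word = 0
    for c in chars:
        assert 0 <= c < (1 << CHAR_BITS)
        word = (word << CHAR_BITS) | c
    return word

def extract(word, k):
    """Return character k (0 = leftmost) of a 48-bit word."""
    shift = (CHARS_PER_WORD - 1 - k) * CHAR_BITS
    return (word >> shift) & ((1 << CHAR_BITS) - 1)

w = pack([10, 20, 30, 40, 50, 60, 1, 2])
print([extract(w, k) for k in range(8)])   # recovers the original codes
```

The position index k corresponds to the character-designation field discussed next: three bits are exactly enough to select one of the eight characters.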
Character mode instructions have in their address part, in addition to the bits specifying the address of a word, a three-bit character-designation field that specifies a character within the word.

4. Byte machine. A byte is a unit of information eight bits long. It can represent a variety of different information items depending on the particular operating mode of the computer. The various units of information are chosen to represent multiples of bytes; their length is recognized by the type of instruction used. For example, data may be represented as follows:

decimal digit                    half byte or four bits
alphanumeric character           one byte or eight bits
binary integers                  two bytes for single precision
                                 four bytes for double precision
binary floating-point numbers    eight bytes

Instructions may be of variable length, with the operation code occupying one byte. The number of addresses belonging to the instruction is determined from the type of operation code and may range from no address to possibly two or three. Word-oriented instructions will normally use fixed-length binary numerical data. Character mode instructions may use variable-length decimal data or character strings.

10-6 ADDRESSING TECHNIQUES

The sequence of operations that implement a machine instruction has been divided in Sec. 10-2 into an instruction fetch cycle and a data execute cycle. The instruction is read from memory during the fetch cycle and placed in the instruction register. During the execute cycle, the operand (if needed) is read from memory and the instruction is then executed. If no operand is needed, the instruction is executed without a second reference to memory. To describe some additional features used to determine the address of operands, the sequence of operations needed during the execute cycle is more conveniently subdivided into two phases: the data-fetch and the instruction-execution.
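The fetch/execute split just introduced can be sketched as a loop. The two-instruction repertoire, the memory layout, and the mnemonics are made up; the point is only the phase structure: fetch, then a data-fetch phase that is skipped when no memory operand is needed, then instruction execution.

```python
# Instruction cycle sketch: fetch cycle, then an execute cycle split
# into a data-fetch phase and an instruction-execution phase.

def run(memory, program_start):
    A, C = 0, program_start
    while True:
        op, M = memory[C]          # fetch cycle: read the instruction
        C += 1
        if op == "HALT":
            return A
        operand = None
        if op == "ADD":            # data-fetch phase: second memory
            operand = memory[M]    # reference, only when needed
        if op == "ADD":            # instruction-execution phase
            A += operand
        elif op == "CLEAR_A":      # no data-fetch phase required
            A = 0

memory = {0: ("CLEAR_A", None), 1: ("ADD", 10), 2: ("ADD", 11),
          3: ("HALT", None), 10: 4, 11: 38}
print(run(memory, 0))   # 42
```

The CLEAR_A path shows an instruction executed without a second reference to memory, as described above.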
The instruction-execution phase consists of those operations that execute the instruction after the operands are available. The data-fetch phase is responsible for the selection of memory registers used for data when the instruction is executed. It is skipped for those instructions that do not use memory registers for data, but may otherwise consist of a fairly involved sequence of elementary operations. The data-fetch phase considered so far consists of one reference to memory for reading or storing an operand. This phase becomes somewhat more complicated when the computer uses certain addressing techniques such as indirect and relative addressing.

Indirect Address

Added to the operation part of an instruction may be a second part designated as the address part M. However, the binary field M may sometimes designate an operand or the address of an operand. When the second part of the instruction code is the actual operand, as in Fig. 10-1(d), the instruction is said to have an immediate address, or a literal designation. When the address part M specifies the address of an operand, the instruction is said to have a direct address. This is in contrast with a third possibility called indirect address, where the field M designates an address of a memory register in which the address of the operand is found. It is customary to use one bit in the instruction code, sometimes called a tag or a mode bit, to distinguish between a direct and an indirect address.

Consider for example the instruction code format shown in Fig. 10-6. It consists of a four-bit operation code, a five-bit address part M, and an addressing mode bit. The mode bit is designated by d and is equal to 0 for a direct address and to 1 for an indirect address.

Figure 10-6 Instruction code format with mode bit

Figure 10-7 Example to demonstrate indirect addressing

The operation code of the instruction specifies a load function. The mnemonic designation which has been used for such an instruction with the mode bit equal to 0 is:

Load M

When the mode bit is a 1, it is customary to write the instruction as

Load * M

where * stands for indirect. The function of this instruction can be specified in symbolic notation as

(⟨M⟩) → A        for direct address
(⟨(⟨M⟩)⟩) → A    for indirect address

(⟨M⟩) is a symbol for the contents of the memory register whose address is M. It follows that the symbol shown for the indirect address is the contents of the memory register whose address is (⟨M⟩). The indirect address instruction requires two references to memory to fetch an operand. The first reference is needed to read the address of the operand; the second is for the operand itself. This can be demonstrated with an example using the numerical values of Fig. 10-7 to implement the load instruction specified in Fig. 10-6. For the direct mode, d = 0 and the A register receives the contents of memory register 2, which is 0000001001. For the indirect mode as shown, d = 1 and the content of memory register 2 is the address of the operand. This address is 9, and therefore the A register receives the contents of memory register 9, which is 0000111101.

The indirect load instruction requires the following sequence of elementary operations after the fetch cycle:

                    Contents of registers when a change occurs
1. (I[M]) → D       (D) = 00010 = 2
2. (⟨2⟩) → B        (⟨2⟩) = 0, (B) = 0000001001
3. (B) → ⟨2⟩        (⟨2⟩) = 0000001001
4. (B[M]) → D       (D) = 01001 = 9
5. (⟨9⟩) → B        (⟨9⟩) = 0, (B) = 0000111101
6. (B) → ⟨9⟩        (⟨9⟩) = 0000111101
7. (B) → A          (A) = 0000111101

Steps 1 to 6 encompass the data-fetch phase; step 7 executes the operation. The above sequence performs the following elementary operations:

Step 1 transfers the address part of the instruction register to the memory-address register.
Step 2 reads the word into the memory-buffer register. This word contains the address of the operand.
Step 3 restores the word to the memory register.
Step 4 transfers the address part of the memory-buffer register (the address of the operand) to the memory-address register.
Step 5 reads the operand into the memory-buffer register.
Step 6 restores the word to the memory register.
Step 7 transfers the operand to the accumulator register.

Relative Address

A relative address does not indicate a storage location itself, but a location relative to a reference address. The reference address may be a special processor register or a special memory register. When the address field M of the instruction code is a relative address, the actual address of the operand, called the effective address, is calculated during the data-fetch phase as follows:

effective address = contents of a special register + M

The effective address may be considered to consist of the sum of a base address and a modifier address. It is convenient to distinguish between two types of relative addressing schemes. The first type contains a base-address register within the control unit and allows the field M to designate the modifier address. The second type uses the field M as the base address, and a special register, called an index register, acts as the modifier. We shall proceed to explain the second type in more detail and then return to the first.

An index register is a processor register or a special memory register whose contents are added to the address part M to obtain the effective address of the operand. A digital computer may have several index registers.
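Returning to the indirect load of Figs. 10-6 and 10-7, the two-reference fetch can be replayed directly. The memory contents match the example (register 2 holds 9, register 9 holds 0000111101, decimal 61); word widths and the register-level steps are abstracted away in this sketch.

```python
# Direct vs. indirect load, following the mode bit d of Fig. 10-6.

def load(memory, M, d):
    """d = 0: direct, (<M>) -> A.  d = 1: indirect, (<(<M>)>) -> A."""
    if d == 1:
        M = memory[M]    # first memory reference: fetch operand address
    return memory[M]     # second (or only) reference: fetch the operand

memory = {2: 0b0000001001, 9: 0b0000111101}
print(load(memory, M=2, d=0))   # direct:   A = 9
print(load(memory, M=2, d=1))   # indirect: A = 61
```

The indirect case makes two dictionary lookups, mirroring the two memory references of steps 2 and 5 above.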
It is customary to include an index-register field in the instruction code to specify the particular one selected. This is demonstrated in the instruction code format shown in Fig. 10-8. The index-register field, designated by X, may specify register X1, X2, or X3 with the binary code 01, 10, or 11, respectively. The code X = 00 specifies no index register. The effective address is obtained during the data-fetch phase with the elementary operation

(I[M]) + (Xn) → D        n = 1, 2, 3

When X = 00, no index register is used and the effective address is M:

(I[M]) → D

Fig. 10-9 shows a numerical example for implementing the load instruction whose format is specified in Fig. 10-8. The relative address is M = 2, and the index-register field specifies X2. The contents of X2, equal to 7, are added to M to obtain the effective address 9. The contents of memory register 9 are read from memory and transferred to the A register.

Computers with index registers contain a variety of instructions which manipulate these registers. The instructions are similar to the ones considered in Sec. 10-4 and can be divided into the usual three classes:

1. Inter-register transfer instructions load and store the contents of index registers. Some examples are:

Load-index M, X                     (⟨M⟩) → Xn
Load-index-immediate M, X           M → Xn
Transfer A to index register X      (A) → Xn

where X designates the index-register field and M is the address part of the instruction.

2. Operational-type instructions involve the usual unary and some binary operations. Some examples are:

Clear X           0 → Xn
Increment X       (Xn) + 1 → Xn
Test and branch instructions usually test for a specific value of Xn to cause a branch, Some examples are Branch on index zero M, X if (Xn) = 0, then M > Decrement and branch on zero M,X (Xn) - 1 > Xn if (Xn) = 0, then M > C The convenience of indexing can be demonstrated by a simple example. The addition of 100 numbers stored in consecutive memory registers starting from location 50 can be implemented with 100 instructions as follows: Clear A Add 50 Add 51 Add 52 Add 149 The use of index registers reduces the number of instructions considerably. The following program uses one index register to increment the effective address and another to count the number of additions. Location Instruction Function 200 load immediate 100, X1 100 = XI 201 load immediate 50, X2 50 > X2 202 clear A o=4 203 add 0, X2 () + (4) 24 204 increment x2 (2) +1 > x2 205 decrement and (1) - 1X1 branch on zero. 207, XI __if (X1) = 0, then 207 + C 206 branch 203 203 +c 207 next instruction The second type of relative addressing scheme uses a base-address register in the control unit and allows M to designate an address relative to it. As an example, consider an instruction format having seven bits for the address part. Suppose that the memory unit consists of 16,384 words, each of which require an address of 14 bits. The seven least significant bits of the address can be specified by M, while the most significant seven bits can be Sec. 10.6 ADDRESSING TECHNIQUES 349 Load XM 1100 [10] 00010) overaion fades Index Register specification Figure 10-8 Instruction code format with index register Memory Registers p[eroor ooooritio01|9 x3[ 00000 x2[oord xiffoort 0000001001|2 B[ooooiiiio1 A[ooooriii0y Figure 10-9 Example to demonstrate the use of index registers stored in the base register. 
During the data-fetch phase, the effective address is obtained by combining the two parts. The value of the base register can be changed by the program with an instruction like:

Load base register M        M → base register

and this value remains unchanged as long as instructions reference operands relative to the present value of the base register. Relative addressing with base registers is useful in small computers for increasing the range of memory when only a limited number of bits are available for the address part of the instruction. Some large computers use base registers for the ease of relocation of programs and data. When programs and data are moved from one sector of memory to another, as required in multiprogramming and time-sharing systems, the address fields of instructions do not have to change. Only the value of the program base register requires updating.

Summary of Addressing Characteristics

Various addressing techniques have been introduced throughout this chapter. The addressing characteristics that have been mentioned are now summarized in order to place the various possibilities in their proper perspective.

1. Implication: An address may be given explicitly by a binary code as part of the instruction or implied in the definition of the operation.

2. Number: The number of explicit addresses in an instruction may vary from zero to three or more.

3. Type: In general, the address field of an instruction refers to an address of a memory register. An instruction may also have other address fields for the purpose of designating a unique index or processor register, or for choosing an input or output unit.

4. Mode: An immediate address is in reality not an address as such because the address field is taken to be the actual operand.
An instruction with a direct address mode considers the address field as the address of an operand. The address field of an indirect mode instruction specifies an address of a memory register where the address of the operand is found.

5. Relative: An indexed instruction uses its address field as a base address to be modified by the contents of an index register. An address field relative to a base-address register specifies a certain number of lower significant bits of the effective address, while the base-address register holds the remaining higher significant bits.

6. Resolution: The address field of an instruction may specify a memory register consisting of a number of bits referred to as a word. Word sizes may range from 12 to 64 bits. The address field may specify a memory register that stores one alphanumeric character. A character length may be six or eight bits. The address field could, if necessary, have a resolution to each bit of memory.

7. Length: The length of the memory register specified by the address field or fields may be fixed, as is usually the case in word-oriented instructions. It may be made variable by specifying the first and last memory register, or the first memory register and a register count.

PROBLEMS

10-1. A digital computer has a memory unit with a capacity of 4096 words, each 36 bits long. The instruction set consists of 55 operations.
(a) What is the minimum number of bits required for the operation code?
(b) How many bits are needed for the address part of an instruction?
(c) How many instructions can be packed in one word using zero-address, one-address, or two-address instruction formats?
(d) What happens to the fetch cycle when two instructions are packed in one word?

10-2. Using the simple computer structure of Fig. 10-2 with two accumulator registers R1 and R2, list the register transfer elementary operations needed to execute the instructions given in Fig. 10-1 and defined in the text.
Assume a magnetic-core memory.

10-3. Given: the simple computer structure of Fig. 10-2. For each instruction defined below, (a) give a possible one-address format and a definition of the instruction in your own words, and (b) list the sequence of elementary operations required to execute the instruction. Assume a nondestructive read-out memory.
(1) Add input and store: (A) + (P) → ⟨M⟩
(2) Three-way test: If (A) > 0, branch to M; if (A) < 0, skip the next instruction; if (A) = 0, proceed to the next instruction in sequence.
(3) Increment memory register: (⟨M⟩) + 1 → ⟨M⟩
(4) Replace: (A) → ⟨M⟩ and (⟨M⟩) → A
(5) Skip; if not, increment: If (⟨M⟩) = 0, skip the next instruction; if (⟨M⟩) ≠ 0, then (⟨M⟩) + 1 → ⟨M⟩

10-4. Give the binary code of each instruction in locations 750-754 of the program listed in Sec. 10-2. Use the operation codes of Table 10-2 and a memory unit with a capacity of 2048 words of 15 bits each.

10-5. Assume that subtraction can be done in the simple computer by using the 2's complement representation (see Sec. 9-9). Modify the machine-code program of Sec. 10-2 to include a test for the number of input words entered from the P register and branch to a "Stop" instruction after this number reaches 750.

10-6. A memory unit has a capacity of 65,536 words of 25 bits each. It is used in conjunction with a general purpose computer.
(a) What is the length of the memory-address and memory-buffer registers?
(b) How many bits are available for the operation part of a one-address instruction word?
(c) What is the maximum number of operations that may be used?
(d) If one bit of a binary integer word is used to store the sign of a number, what are the largest and smallest integers that may be accommodated in one word?

10-7. Explain the difference between a central processor, a peripheral processor, and a data communications processor.
10-8. Specify a format of a control word for the peripheral processor that requests input of 256 blocks from a magnetic-tape unit starting from block number 150. A block of storage contains 128 characters of six bits each.

10-9. List the sequence of operations necessary to service a program interrupt.

10-10. Give a character format between a data communication processor and a data communication network for a transmission of text from the computer to a remote terminal.

10-11. What is the difference between: (a) a computer program and a microprogram? (b) a computer instruction and a microinstruction? (c) an operation and a micro-operation? (d) an elementary operation and a micro-operation?

10-12. Give the mnemonic and binary instructions generated by a compiler from the following FORTRAN program. (Assume integer variables.)

SUM = 0
SUM = SUM + A + B
DIF = DIF + C
SUM = SUM + DIF

10-13. Repeat Prob. 10-12 using the following FORTRAN program:

IF (ALPHA + BETA) 10,20,10
10 X = A .AND. B .OR. C .AND. D
20 X = .NOT. X

10-14. List the sequence of machine instructions needed to multiply two integers. Use a repeated addition method. For example, to multiply 5 × 4, the computer performs the repeated addition 5 + 5 + 5 + 5.

10-15. Write a program; that is, list the sequence of machine instructions to perform an arithmetic addition of two floating-point numbers. Assume that each floating-point number occupies two consecutive memory registers; the first stores the exponent and the second stores the coefficient. Assume a computer with the instructions listed in Table 10-2 in addition to the following two instructions:

Branch on positive accumulator M
Subtract M; i.e., (A) − (⟨M⟩) → A

10-16.
List the sequence of elementary operations needed to execute the single machine instruction described by the function:

(⟨M1⟩) ⊕ (⟨M2⟩) → ⟨M1⟩

10-17. List the sequence of elementary operations needed to execute the single machine instruction whose mnemonic notation is

Move M1, M2, M3

The instruction performs the following functions:

(⟨M1⟩) → ⟨M3⟩, (⟨M1 + 1⟩) → ⟨M3 + 1⟩, ..., (⟨M2⟩) → ⟨M3 + M2 − M1⟩

10-18. Let a "Branch to Subroutine" instruction with an address part M be executed as follows:

(C) → ⟨M⟩, M + 1 → C

Explain in your own words how the branching is done and where the beginning of the subroutine is. Suggest an instruction for returning to the main program.

10-19. Give the sequence of elementary operations when both an index register and an indirect address are specified in one machine instruction. The instruction is first modified by the index register (if specified) and then checked for an indirect address.

REFERENCES

1. Burks, A. W., H. H. Goldstine, and J. von Neumann. "Preliminary Discussion of the Logical Design of an Electronic Computing Instrument (1946)," John von Neumann Collected Works, Vol. V. New York: The Macmillan Co., 1963, 34-79.
2. Buchholz, W. (Editor). Planning a Computer System. New York: McGraw-Hill Book Co., 1962.
3. Chu, Y. Introduction to Computer Organization. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1970.
4. Flores, I. Computer Organization. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1969.
5. Martin, J. Teleprocessing Network Organization. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1970.
6. Vandling, G. C., and D. E. Waldecker. "The Microprogram Control Technique for Digital Logic Design," Computer Design, Vol. 8, No. 8 (August 1969), 44-51.
7. Husson, S. S. Microprogramming: Principles and Practice. Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1970.
8. Hellerman, H.
Digital Computer System Principles. New York: McGraw-Hill Book Co., 1967.

11 COMPUTER DESIGN

11-1 INTRODUCTION

This chapter presents the design of a small general purpose digital computer, starting from its functional specifications and culminating in a set of Boolean functions. Though this computer is simple, it is far from useless. Its scope is quite limited when compared with commercial electronic data processing systems, yet it encompasses enough functional capabilities to demonstrate the design process. It is suitable for construction in the laboratory with ICs, and the finished product can be a useful system capable of processing digital data. The computer consists of a central processor unit, a memory unit, a teletype input unit, and a teleprinter output unit. The logic design of the central processor unit is given in detail. The other units are assumed to be available as finished products with known external characteristics.

The design of a digital computer may be divided into three interrelated phases: system design, logic design, and circuit design. System design concerns the specifications and the general properties of the system. This task includes the establishment of design objectives, design philosophy, computer architecture, required operations, speed of operation, and economic feasibility. The specifications of the computer structure are translated into Boolean functions by the logic design. The circuit design specifies the components and circuits for the various logic circuits, memory circuits, electromechanical equipment, and power supplies. The computer hardware design is greatly influenced by the software system, which is normally developed concurrently and which constitutes an integral part of the total system. The design of a digital computer is a formidable task that requires thousands of man-hours of effort. One cannot expect to cover all aspects of the design in one chapter.
Here we are concerned with the system and logic design phases of a small digital computer whose specifications are formulated somewhat arbitrarily in order to establish a minimum configuration for a very small, yet practical machine. The procedure outlined in this chapter can be useful in the logic design of more complicated systems.

The design process is divided into six phases: (1) the decomposition of the digital computer into registers which specify the general configuration of the system, (2) the specification of machine instructions, (3) the formulation of a timing and control network, (4) the listing of the sequence of elementary operations needed to execute each machine instruction, (5) the listing of the elementary operations to be executed on each register under the influence of control signals, and (6) the derivation of the input Boolean functions for each flip-flop in the system and the Boolean functions necessary to implement the control network. The step-by-step execution of the operations specified in (4) and (5) is described by register transfer relations. The Boolean functions obtained in step (6) culminate the logic design. The implementation of these functions with logic circuits and flip-flops has been covered extensively in previous chapters.

11-2 SYSTEM CONFIGURATION

The configuration of the computer is shown in Fig. 11-1. Each block represents a register, except for the memory unit, the console, the master-clock generator, and the control logic. This configuration is assumed to be unalterable. In practice, however, the designer starts with a tentative system configuration and constantly modifies it during the design process. The name of each register is written inside the block, together with a letter in parentheses. This letter represents the register in symbolic register transfer relations and in Boolean functions. The number in the lower right corner of each block is the number of flip-flops in the register.
The configuration shown in Fig. 11-1 is very similar to the one used for the simple computer introduced in Sec. 10-2.

Figure 11-1 Block diagram of the digital computer (memory unit of 4096 = 2^12 words, 16 bits/word; program-control, memory-address, memory-buffer, instruction, accumulator, sequence, input, and output registers; control logic; master-clock generator; console with switches and indicator lamps)

We shall restrict the design of the computer to the parts that can be described by Boolean functions. Certain parts do not fall in this category and are assumed to be available as finished products with known external characteristics. These parts are the master-clock generator, the memory unit, the console and its associated circuits, and the I/O devices. The rest of the computer can be constructed with flip-flops and combinational circuits.

Master-Clock Generator

The master-clock generator is a common clock-pulse source, usually an oscillator, which generates a periodic train of pulses. These pulses are fanned out by means of amplifiers and distributed over the space the system occupies. Each pulse must reach every flip-flop at the same instant. Phasing delays are inserted intermittently so that the difference in transmission delays is uniform throughout. The frequency of the pulses is a function of the speed at which the system operates. We shall assume a frequency of 1 megahertz (one pulse every microsecond) for this computer. This frequency is lower than what is normally used in commercial systems but was
A smaller size may be used if the computer is to be constructed in the laboratory under economic restrictions. Twelve bits of an instruction word are needed to specify the address of an operand, which leaves four bits for the operation part of the instruction.

The memory unit communicates with the address register and the buffer register in the usual manner. The memory cycle control signal, designated by m, initiates one read-write cycle with a period of 4 microseconds. During the first 2 microseconds, the word located at the address given by the address register is accessed, read into the buffer register, and erased from memory (destructive reading). During the following 2 microseconds, the word in the buffer register is stored in the location specified by the address register. Thus, a word can be read during the first half of the cycle and restored during the second half if the contents of the address and buffer registers are undisturbed. On the other hand, a new word can be stored in the addressed location if the address register is undisturbed but the contents of the buffer register are changed prior to the beginning of the second half of the memory cycle. To simplify the logic design, we shall not distinguish between a read and a write cycle. The memory cycle control signal (m) will automatically initiate a read followed by a write as described above. A timing diagram for the memory unit is shown in Fig. 11-4.

The Console

The computer console consists of switches and indicator lamps. The lamps indicate the status of registers in the computer. The normal output of a flip-flop applied to a lamp (with amplification if needed) causes the light to turn on when the flip-flop is set and to turn off when it is cleared. The purpose of the switches is to start, stop, and manually communicate with the computer.
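The read-restore behavior of the memory unit described above can be sketched in a short simulation. This is only an illustrative model, not part of the text's design; the `CoreMemory` class and its method names are ours.

```python
# Sketch of the 4-microsecond read-restore memory cycle: a destructive
# read into the buffer register, followed by a write-back of the buffer.
class CoreMemory:
    def __init__(self, words=4096, bits=16):
        self.cells = [0] * words
        self.mask = (1 << bits) - 1

    def cycle(self, address, buffer_reg, new_word=None):
        """One cycle: destructive read, then restore (or store a new word)."""
        # First half: word is read into the buffer register and erased.
        buffer_reg[0] = self.cells[address]
        self.cells[address] = 0
        # If the buffer is changed before the second half, the new word
        # is what gets stored (this is how a "write" is accomplished).
        if new_word is not None:
            buffer_reg[0] = new_word & self.mask
        # Second half: buffer contents are written back to the same address.
        self.cells[address] = buffer_reg[0]
        return buffer_reg[0]

mem = CoreMemory()
B = [0]                             # one-element list stands in for register B
mem.cycle(5, B, new_word=0x1234)    # "write": buffer replaced before restore
assert mem.cells[5] == 0x1234
mem.cycle(5, B)                     # "read": word read and automatically restored
assert B[0] == 0x1234 and mem.cells[5] == 0x1234
```

The key point the sketch captures is that reading alone would erase the word; it survives only because the undisturbed buffer is written back in the second half of the cycle.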
The function of the switches and the circuits they drive is explained later; their purpose can be better understood after some aspects of the computer are better known. The design of the computer is carried out without regard to these switches. In Sec. 11-8 we shall enumerate the console switches, explain their function, and specify the circuits associated with their operation.

The Input-Output Device

The I/O device is not shown in Fig. 11-1. It is assumed to be an electromechanical unit with a typewriter keyboard, a printer, a paper tape reader, and a paper tape punch. The input device consists of either the typewriter or the paper tape reader, with a manual switch available for selecting the one used. The output device consists of the printer or the paper tape punch, with another switch available for selecting either device. The unit uses an eight-bit alphanumeric code and has available two control signals indicating the status of the I/O devices. These control signals activate flip-flops, which indicate whether the device is busy or ready to transfer information. Registers are physically mounted in the unit and are used to store the input and output eight-bit information and control signals. The specifications for the I/O registers are given in more detail below.

The Registers

The part of the digital computer to be designed subsequently is decomposed into register subunits which specify the configuration of the system. Any digital system that can be decomposed in this manner can be designed by the procedure outlined in this chapter. Very briefly, this procedure consists of first obtaining a list of all the elementary operations to be executed on the registers, together with the conditions under which these operations are to be executed. The list of conditions is then used to design the control circuits, while the input Boolean functions to registers are derived from the list of elementary operations.
Using the frame of reference of Ch. 10, the register configuration of Fig. 11-1 follows as a necessity. The following paragraphs explain why the registers are needed and what they do. A list of the registers and a brief description of their function can be found in Table 11-1. Registers that hold memory words are 16 bits long. Those that hold an address are 12 bits long. Other registers have different numbers of bits depending on their function.

Memory-Address and Memory-Buffer Registers

The memory-address register (D) is used to address specific memory locations. The D register is loaded from the C register when an instruction is to be read from memory and from the 12 least significant bits of the B register when data is to be read from memory. The memory-buffer register (B) holds the word read from or written into memory. The operation part of an instruction word placed in the B register is transferred to the I register and the address part is left in the register for further manipulation. A data word placed in the B register is accessible for operation with the A register. A word to be stored in memory must be loaded into the B register at the beginning of the second half of a memory cycle.

Table 11-1 List of Registers

Letter
Designation  Name                       Number of Bits  Function
A            Accumulator Register       16              multipurpose register
B            Memory-Buffer Register     16              holds contents of memory word
C            Program Control Register   12              holds address of next instruction
D            Memory-Address Register    12              holds address of memory word
E            Extended flip-flop          1              extra flip-flop for accumulator
F            Fetch flip-flop             1              determines fetch or execute cycle
G            Sequence Register           2              sequence counter
I            Instruction Register        4              holds current operation
S            Start-Stop flip-flop        1              determines if computer is running or stopped
N            Input Register              9              holds information from input device
U            Output Register             9              holds information for output device

Program Control Register (C)

The C register is the computer's program counter. This means that this register goes through a step-by-step counting sequence and causes the computer to read successive instructions previously stored in memory. When the program calls for a transfer to another location or for skipping the next instruction, the C register is modified accordingly, thus causing the program to continue from a different location in memory. To read an instruction, the contents of the C register are transferred to the D register and a memory cycle is initiated. The C register is always incremented by 1 right after the initiation of a memory cycle that reads an instruction. Therefore, the address of the next instruction after the one presently being executed is always available in the C register.

Accumulator Register (A)

The A register is a multipurpose register that operates on data previously stored in memory. This register is used to execute most instructions and for accepting data from the input device or transferring data to the output device. This register, together with the B register, makes up the so-called "arithmetic" unit of the computer. Although most data processing systems include more registers for this unit, we have chosen to include only two in order not to complicate the computer. With two registers in the arithmetic unit, only addition and/or subtraction can be implemented directly.
Other operations, such as multiplication and division, are implemented by a sequence of instructions that form a subroutine.

Instruction Register (I)

The I register holds the operation bits of the current instruction. This register has only four flip-flops, since the operation part of the instruction is four bits long. The operation part of an instruction is transferred to the I register from the B register, while the address part of the instruction is left in the B register. The operation part must be taken out of the B register because an operand read from memory into the B register would destroy the previously stored instruction. (The operation part of the instruction is needed to determine what is to be done to the operand just read.)

Sequence Register (G)

This register is a sequence counter that produces timing signals for the entire computer. It is a two-bit counter-decoder excited by clock pulses with a period of 1 μsec. The four outputs of the decoder supply four timing signals, each with a period of 4 μsec and a phase delay between two adjacent signals of 1 μsec. The relation between the timing signals and the memory unit is clarified in Sec. 11-4.

E, F, and S Flip-Flops

Each of these flip-flops is considered a one-bit register. The E flip-flop is an extension of the A register. It is used during shifting operations, receives the end-carry during addition, and otherwise is a useful flip-flop that can extend the data processing capabilities of the computer. The F flip-flop distinguishes between the fetch and execute cycles. When F is 0, the word read from memory is treated as an instruction. When F is 1, the word is treated as an operand. S is a start-stop flip-flop that can be cleared by program control and manipulated manually from the console. When S is 1, the computer runs according to a sequence determined by the program stored in memory.
When S is 0, the computer stops its operation.

Input and Output Registers

The input register (N) consists of nine bits. Bits 1 to 8 hold alphanumeric input information; bit 9 is a control flip-flop. The control bit is set when new information is available in the input device and cleared when the information is accepted by the computer. The control flip-flop is needed to synchronize the timing rate at which the input device operates with that of the computer. The normal process of information transfer is as follows: Initially, the control flip-flop N9 is cleared by a "start" pulse from the console. When a teletype key in the input device is struck, an eight-bit code character is loaded into the N register and N9 is set. As long as N9 is set, the information in the N register cannot be changed by striking another key. The computer checks the control bit; if it is a 1, the information from the N register is transferred into the accumulator register and N9 is cleared. Once N9 is cleared, new information can be loaded into the N register by striking a key again.

The output register (U) works similarly, but the direction of information flow is reversed. A "start" pulse from the console clears the control bit U9. The computer checks the control bit; if it is a 0, the information from the accumulator register is transferred into U1 to U8 and the control bit U9 is set. The output device accepts the coded information, prints the corresponding character, and clears the control bit U9. The computer does not load a new character into the output device when U9 is set, because this condition indicates that the output device is in the process of printing the character.

11-3 MACHINE INSTRUCTIONS

The number of instructions available in a computer and their efficiency in solving the problem at hand is a good indication of how well the system designer foresaw the intended application of the machine.
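Before taking up the instruction list, the input handshake just described can be sketched in a few lines. This is an illustrative model only; the class and method names are ours, while the roles of bits N1-N8 and the control flip-flop N9 follow the text.

```python
# Sketch of the input handshake: N9 is set by the device when a character
# is ready and cleared by the computer when the character is accepted.
class InputRegister:
    def __init__(self):
        self.char = 0   # N1..N8, the eight-bit character code
        self.n9 = 0     # control flip-flop

    def strike_key(self, code):
        if self.n9 == 0:            # device may load only while N9 is clear
            self.char = code & 0xFF
            self.n9 = 1
            return True
        return False                # previous character not yet accepted

    def computer_accept(self):
        if self.n9 == 1:            # computer checks the control bit
            self.n9 = 0             # accepting the character clears N9
            return self.char
        return None                 # nothing available yet

n = InputRegister()
assert n.strike_key(0x41)
assert not n.strike_key(0x42)       # blocked until the computer accepts
assert n.computer_accept() == 0x41
assert n.strike_key(0x42)           # N9 cleared, a new character may be loaded
```

The output register follows the same pattern with the roles reversed: the computer sets U9 when it deposits a character, and the device clears it when printing is done.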
Medium- to large-scale computing systems may have hundreds of instructions, while most small computers limit the list to 40 or 50. The instructions must be chosen carefully to supply sufficient capabilities to the system for solving a wide range of data processing problems. A minimum requirement of such a list should include a capability for storing and loading words from memory, a sufficient set of arithmetic and logical operations, some address modification capabilities, unconditional branching and branching under test conditions, register manipulation capabilities, and I/O instructions. The instruction list chosen for our computer is believed to be close to the absolute minimum required for a restricted but practical data processor.

The formulation of a set of instructions for the computer goes hand in hand with the formulation of the formats for data and instruction words. A memory word consists of 16 bits. A word may represent either a unit of data or an instruction. The formats of data words are shown in Fig. 11-2. Data for arithmetic operations are represented by a 15-bit binary number with the sign in the 16th bit position. Negative numbers are assumed to be in their 2's complement equivalent. Logical operations are performed on individual bits of the word, with bit 16 treated as any other bit. When the computer communicates with the I/O device, the information transferred is considered to be eight-bit alphanumeric characters. Two such characters can be accommodated in one computer word.

[Figure 11-2 Data formats: (a) arithmetic operand — sign in bit 16, magnitude in bits 15-1, negative numbers in 2's complement; (b) logical operand — bits 16-1 treated alike; (c) input/output data — two eight-bit characters per word.]

The formats of instruction words are shown in Fig. 11-3.
The operation part of the instruction contains four bits; the meaning of the remaining twelve bits depends on the operation code encountered. A memory-reference instruction uses the remaining 12 bits to specify an address. A register-reference instruction implies a unary operation on, or a test of, the A or E register. An operand from memory is not needed; therefore, the 12 least significant bits are used to specify the operation or test to be executed. A register-reference instruction is recognized by the code 0110 in the operation part. Similarly, an input-output instruction does not need a reference to memory and is recognized by the operation code 0111. The remaining 12 bits are used to specify the particular device and the type of operation or test performed.

Only four bits of the instruction are available for the operation part. It would seem, then, that the computer is restricted to a maximum of 16 distinct operations. However, since register-reference and input-output instructions use the remaining 12 bits as part of the operation code, the total number of instructions can exceed 16. In fact, the total number of instructions chosen for the computer is 23. Out of the 16 distinct operations that can be formulated with four bits, only 8 have been utilized by the computer. This is because the left-most bit of all instructions (bit 16) is always a 0. This leaves open the possibility of adding new instructions and extending the computer capabilities if desired.

The 23 computer instructions are listed in Tables 11-2, 11-3, and 11-4. The mnemonic designation is a three-letter word and represents an abbreviation intended for use by programmers and users. The hexadecimal code listed is the equivalent hexadecimal number of the binary code used for the instruction.
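The three-way decoding rule just described (a four-bit operation code, with codes 0110 and 0111 also consuming the low 12 bits) can be sketched directly. The function name and returned tuples below are our own illustration, not notation from the text.

```python
# Sketch of decoding a 16-bit instruction word: bits 16-13 are the
# operation; for codes 0110 and 0111 the low 12 bits extend the operation,
# otherwise they are the address M of a memory-reference instruction.
def decode(word):
    op = (word >> 12) & 0xF        # operation part, bits 16-13
    low12 = word & 0xFFF           # remaining bits, 12-1
    if op == 0b0110:
        return ("register-reference", low12)   # low bits select the operation
    if op == 0b0111:
        return ("input-output", low12)
    return ("memory-reference", op, low12)     # low bits are an address M

assert decode(0x6020) == ("register-reference", 0x020)   # ICA
assert decode(0x7400) == ("input-output", 0x400)         # INP
assert decode(0x1123) == ("memory-reference", 1, 0x123)  # ADD M, M = 123 (hex)
```

Because a register-reference or input-output instruction places a 1 in only one of its 12 low bits, each of those two operation codes yields up to 12 additional distinct instructions, which is how the total can exceed 16.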
A memory-reference instruction uses one hexadecimal digit (four bits) for the operation part; the remaining three hexadecimal digits (twelve bits) of the instruction represent an address designated by the letter M. The register-reference and input-output instructions use all four hexadecimal digits (16 bits) for the operation. The function of each instruction is specified by symbolic notation that denotes the machine operation it executes. A further clarification of each instruction is given below, together with an explanation of its utility.

[Figure 11-3 Instruction formats: (a) memory-reference instruction — operation in bits 16-13, address in bits 12-1; (b) register-reference instruction — code 0110 in bits 16-13, type of unary operation or test in bits 12-1; (c) input-output instruction — code 0111 in bits 16-13, type of input-output operation or test in bits 12-1.]

Table 11-2 Memory-Reference Instructions

Mnemonic  Hexadecimal Code  Description                                    Function
AND M     0                 AND to A                                       (M) ∧ (A) → A
ADD M     1                 add to A                                       (M) + (A) → A, carry → E
ISZ M     2                 increment and skip next instruction if zero    (M) + 1 → M; if (M) + 1 = 0 then (C) + 1 → C
STO M     3                 store A                                        (A) → M
BSB M     4                 branch to subroutine                           5000 + (C) → M; M + 1 → C
BUN M     5                 branch unconditionally                         M → C

Table 11-3 Register-Reference Instructions

Mnemonic  Hexadecimal Code  Description                                    Function
NOP       6000              no operation
CLA       6800              clear A                                        0 → A
CLE       6400              clear E                                        0 → E
CMA       6200              complement A                                   (A′) → A
CME       6100              complement E                                   (E′) → E
SHR       6080              shift right A                                  shift (A) and E one place to the right
SHL       6040              shift left A                                   shift (A) and E one place to the left
ICA       6020              increment A                                    (A) + 1 → A
SPA       6010              skip next instruction if A is positive         if A16 = 0 then (C) + 1 → C
SNA       6008              skip next instruction if A is negative         if A16 = 1 then (C) + 1 → C
SZA       6004              skip next instruction if A is zero             if (A) = 0 then (C) + 1 → C
SZE       6002              skip next instruction if E is zero             if (E) = 0 then (C) + 1 → C
HLT       6001              halt                                           0 → S

Table 11-4 Input-Output Instructions

Mnemonic  Hexadecimal Code  Description                                    Function
SIN       7800              skip next instruction if input control is a 1  if N9 = 1 then (C) + 1 → C
INP       7400              transfer from input device                     (N1-8) → A1-8; 0 → N9
SOT       7200              skip next instruction if output control is a 0 if U9 = 0 then (C) + 1 → C
OUT       7100              transfer to output device                      (A1-8) → U1-8; 1 → U9

AND to A

This is a logical operation that performs the AND operation on corresponding pairs of bits in A and the memory word and leaves the result in A. This instruction, together with the CMA instruction (see Table 11-3) that performs the NOT logical operation, supplies a sufficient set of logical operations to perform all the binary operations listed in Table 2-5.

ADD to A

This operation adds the contents of A to the contents of the memory word and transfers the sum into A. The sign bit is treated as any other bit according to the 2's complement addition algorithm stated in Sec. 9-9. The end-carry out of the sign bit is transferred to the E flip-flop. This instruction, together with register-reference instructions, is sufficient for programming any other arithmetic operation such as subtraction, multiplication, or division.* This instruction can be used for loading a word from memory into the accumulator by first clearing the A register with CLA (see Table 11-3). The required word is then loaded from memory by adding it to the cleared accumulator.

*The sequence of operations needed to perform subtraction is enumerated in Sec. 9-9.
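The ADD semantics above — 16-bit 2's complement addition with the end-carry out of the sign bit going to E — can be sketched numerically. The helper names below are ours; the bit widths and the carry-to-E behavior follow the text.

```python
# Sketch of ADD to A: 16-bit 2's complement addition, with the end-carry
# out of bit 16 transferred to the E flip-flop rather than kept in A.
def add_to_a(a, m):
    total = a + m
    e = (total >> 16) & 1          # end-carry out of the sign bit goes to E
    return total & 0xFFFF, e       # A keeps only the low 16 bits

def twos(n):
    """16-bit 2's complement encoding of a small signed integer."""
    return n & 0xFFFF

a, e = add_to_a(twos(5), twos(-3))
assert a == 2 and e == 1           # 5 + (-3): carry out is captured in E
a, e = add_to_a(twos(-5), twos(3))
assert a == twos(-2) and e == 0    # -5 + 3 = -2, no end-carry
```

Loading a word with CLA followed by ADD works because `add_to_a(0, m)` simply leaves `m` in the accumulator.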
ISZ: Increment and Skip if Zero

The increment-and-skip instruction is useful for address modification and for counting the number of times a program loop is executed. A negative number previously stored in memory at address M is read by the ISZ instruction. This number is incremented by 1 and stored back into memory. If, after it is incremented, the number reaches zero, the next instruction is skipped. Thus, at the end of a program loop, one inserts an ISZ instruction followed by a branch unconditionally (BUN) instruction to the beginning of the program loop. If the stored number does not reach zero, the program returns to execute the loop again. If it reaches zero, the next instruction (BUN) is skipped and the program continues to execute instructions after the program loop.

STO: Store A

This instruction stores the contents of the A register in the memory word at address M. This instruction and the combination of CLA and ADD are used for transferring words to and from memory and the A register.

BSB: Branch to Subroutine

This instruction is useful for transferring program control to another portion of the program (a subroutine) that starts at location M + 1. When executed, this instruction stores the address of the next instruction (of the main program) held in register C into bits 1 to 12 of the memory word at location M. It also stores the operation code 0101 (BUN) into bits 16-13 of the same word in location M. The contents of the address part M plus 1 are transferred into the C register to serve as the address of the next instruction to be executed (the beginning of the subroutine). The return to the main program from the subroutine is accomplished by means of a BUN M instruction placed at the end of the subroutine, which transfers control to location M and, through it, back to the main program.

BUN: Branch Unconditionally

This instruction transfers control unconditionally to the instruction at the location specified by the address M.
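The ISZ loop idiom described above can be sketched without a full machine model. The function below is our own paraphrase of the control flow: a negative count at M is incremented on each pass, and reaching zero skips the BUN that would return to the top of the loop.

```python
# Sketch of the ISZ program-loop idiom: the counter at address M is preset
# to minus the desired loop count; ISZ increments it each pass and skips
# the following BUN when it reaches zero.
def run_loop(iterations):
    memory_m = -iterations          # counter stored at address M
    passes = 0
    while True:
        passes += 1                 # ... body of the program loop ...
        memory_m += 1               # ISZ: increment the word at M
        if memory_m == 0:
            break                   # skip the BUN, fall out of the loop
        # otherwise the (not skipped) BUN branches back to the loop top
    return passes

assert run_loop(4) == 4             # loop body executed exactly four times
```

Presetting the counter to a negative value is what lets a single increment-and-test instruction do both the counting and the exit test.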
This instruction is listed with the memory-reference instructions because it needs an address part M. However, it does not need a reference to memory to read an operand, as is required by the other memory-reference instructions.

Register-Reference Instructions

The register-reference instructions are listed in Table 11-3. Of the 13 instructions, most are self-explanatory. Each register-reference instruction has an operation code of 0110 (hexadecimal 6) and contains a 1 in only one of the remaining 12 bits. The skip instructions are used for program control conditioned on the sign of the A register or the value of the E flip-flop. To skip the next instruction, the C register (which holds the address of the next instruction) is increased by 1, so that the next instruction read from memory is two locations down from the location of the present (skip) instruction. The halt instruction is usually placed at the end of a program, and its execution stops the computer by clearing the start-stop flip-flop S.

Input-Output Instructions

These instructions are listed in Table 11-4. They have an operation code of 0111 (hexadecimal 7), and each contains a 1 in only one of the remaining 12 bits. Two of these instructions (SIN and SOT) check the status of the control flip-flop to determine if the next consecutive instruction is executed. This next consecutive instruction will normally be a branch (BUN) back to the previous SIN or SOT instruction, so the computer remains in a two-instruction loop until the control flip-flop becomes a 1 in the input register or a 0 in the output register. The control flip-flop is changed by the external device when it is ready to send or receive new information.
So when the SIN instruction detects that the input register has a character available (N9 = 1), or when the SOT instruction detects that the output register is empty (U9 = 0), the next instruction in sequence (BUN) is skipped and an INP or an OUT instruction (placed after the BUN instruction) is executed.

Instructions and data are transferred into the memory unit either manually by means of console switches or from the input paper tape reader. A 16-bit instruction or data word can be prepared on paper tape in two columns of eight bits each. The first punched column represents half of the word and the second column supplies the remaining bits of the word. The two parts are packed into one computer word and stored in memory. Normally, instructions and data occupy different parts of memory, with a halt instruction terminating the instruction sequence. To start the computer, the operator loads the first address of the program from console switches into the C register and activates a "start" switch. The first instruction is read from memory and executed. The rest of the instructions are executed in consecutive order unless a branch or a skip instruction is encountered.

11-4 TIMING AND CONTROL

The timing for the entire computer is controlled by the master-clock generator, whose clock pulses are applied to all flip-flops in the computer. The clock pulse period is 1 μsec, but the memory cycle time is 4 μsec. While the memory is busy during its cycle, four clock pulses are available to activate registers and execute required elementary operations. The purpose of the G register is to supply four distinct signals during a memory cycle. This register is a two-bit counter-decoder circuit* that generates the timing signals for the control unit. These timing signals are designated by t0, t1, t2, and t3 and are shown in Fig. 11-4. Each timing signal is 1 μsec long and occurs once every 4 μsec.
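The G register's behavior — a two-bit counter whose decoded outputs supply t0 through t3 in rotation — can be sketched as a generator. This is an illustrative model only; the function name is ours.

```python
# Sketch of the G register: a two-bit counter whose 2-to-4 decoder outputs
# are the timing signals t0..t3, exactly one active per clock pulse.
def timing_signals(clock_pulses):
    g = 0  # two-bit sequence register
    for _ in range(clock_pulses):
        t = [int(g == i) for i in range(4)]  # decoder: one-hot t0..t3
        yield t
        g = (g + 1) % 4                      # counter advances each pulse

signals = list(timing_signals(8))
assert signals[0] == [1, 0, 0, 0]   # t0 active on the first pulse
assert signals[5] == [0, 1, 0, 0]   # t1 active again: each signal has a
                                    # period of four clock pulses (4 μsec)
```

With a 1-μsec clock, each one-hot output is high for 1 μsec and repeats every 4 μsec, matching the memory cycle time.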
We are assuming that triggering of flip-flops occurs during the trailing edge of the clock pulse and that the memory cycle control signal initiates a memory cycle also on the trailing edge of a pulse. The relation between the timing signals and the memory cycle is demonstrated in the timing diagram of Fig. 11-4. A memory cycle is initiated at the trailing edge of a memory cycle control signal designated by m. During the first 2 μsec of a memory cycle, reading takes place; i.e., the word whose address is specified by the D register is read into the B register and erased from memory. The word in the B register settles to its final value at least 0.2 μsec prior to the termination of signal t2. During the second half of the memory cycle, writing takes place; i.e., the contents of the B register are stored in the word whose address is specified by the D register. While the memory is busy reading and writing a word, timing signals t0-t3 are used to initiate elementary operations on computer registers.

*The design of counter-decoder circuits is covered in Sec. 7-6.

[Figure 11-4 Computer timing signals: clock pulses with a 1-μsec period; the timing signals t0, t1, t2, t3; and the memory cycle control signal m, followed by a 2-μsec read interval and a 2-μsec write interval.]

Certain timing relations must be specified for the memory unit in order to synchronize it with the timing signals used for registers. As mentioned before, the memory cycle control signal m initiates a read cycle which is always followed by a write cycle. To be able to transfer information into the address and buffer registers simultaneously with signals t0 and t2, we specify that the memory unit has to wait at least 100 nsec before using the outputs of these registers at the beginning of the read cycle, as well as at the beginning of the write cycle. This implies that the propagation delay of flip-flops can be at most 100 nsec.
It is further assumed that the word read from memory during the read cycle settles to its final value in the buffer register not later than 1.8 μsec after the memory cycle control signal is applied.

The digital computer operates in discrete steps, and elementary operations are performed during each step. An instruction is read from memory and executed in registers by a sequence of elementary operations. When the control receives an instruction, the operation code is transferred into the I register. Logic circuits are required to generate appropriate control signals for the required elementary operations. A block diagram of the control logic that generates the signals for register operations is shown in Fig. 11-5. The operation part of the instruction in the I register is decoded into eight outputs q0-q7, the subscript number being equal to the hexadecimal code of the operation. The outputs of the G register are decoded into the four timing signals t0-t3, which determine the timing sequence of the various control functions. The status of various flip-flops and registers is sometimes needed to determine the sequence of control. The block diagram of Fig. 11-5 is helpful in visualizing the control unit of the computer when the register operations are derived during the design process.

The start-stop flip-flop S is shown going into the G register as well as the timing decoder. All elementary operations are conditioned on the timing signals. The timing signals are generated only when S = 1. When S = 0, the control sequence stops and the computer halts. The S flip-flop can be set or cleared from switches in the computer console, or cleared by the HLT (halt) instruction.
We shall assume that when S is set by a "start" switch, signal t0 is the first to occur; and when S is cleared by a "stop" switch, the current instruction is executed before the computer halts.

[Figure 11-5 Block diagram of control logic: the I register feeds an operation decoder (outputs q0-q7) and the G register feeds a timing decoder (outputs t0-t3, gated by S); both decoders, together with the status of various registers, drive the control logic network, whose outputs are the control signals for elementary operations in registers.]

11-5 EXECUTION OF INSTRUCTIONS

Up to now, the system design of the computer has been considered. We have specified the register configuration, the set of computer instructions, a timing sequence, and the configuration of the control unit. In this section, we start the logic design phase of the computer. The first step is to specify the elementary operations, together with the control signals, needed to execute each machine instruction. The list of elementary operations needed for each register will be derived in Sec. 11-6. In Sec. 11-7 we shall translate the elementary operations into Boolean functions.

The register operation sequence will describe precisely the process of information transfer within the registers of the computer. Each line of the sequence will consist of a control function, followed by a colon, followed by one or more elementary operations in symbolic notation. The control function is always a Boolean function whose variables are the timing signals t0-t3, the decoded operations q0-q7, and various outputs of flip-flops. The elementary operations are specified in accordance with the symbolic notation defined in Table 8-1.

Once a "start" switch is activated, the computer sequence follows a basic pattern. An instruction whose address is in the C register is read from memory. Its operation part is transferred to the I register, and the C register is incremented by 1 to prepare it for the address of the next instruction.
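The basic read-an-instruction pattern just described can be sketched as a small simulation. The register names follow the text (C, D, B, I), but the function itself and the dictionary representation are our own illustration, not the text's hardware.

```python
# Sketch of the instruction-fetch pattern: the address in C goes to D,
# memory is cycled into B, C is incremented, and the four operation bits
# of the fetched word reach the I register.
def fetch(regs, memory):
    regs["D"] = regs["C"]                 # (C) -> D, memory cycle initiated
    regs["B"] = memory[regs["D"]]         # word read from memory into B
    regs["C"] = (regs["C"] + 1) & 0xFFF   # (C) + 1 -> C, 12-bit counter
    regs["I"] = (regs["B"] >> 12) & 0xF   # operation bits of B -> I

regs = {"C": 0x100, "D": 0, "B": 0, "I": 0}
memory = {0x100: 0x1123}                  # an ADD instruction at address 100 (hex)
fetch(regs, memory)
assert regs["I"] == 1                     # operation code for ADD
assert regs["C"] == 0x101                 # C already points at the next instruction
assert regs["B"] == 0x1123                # address part still available in B
```

Note that C is incremented during the fetch itself, which is why the address of the following instruction is always available in C by the time execution begins.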
When the instruction is a memory-reference type (excluding BUN), the memory is accessed again to read the operand. Thus, words read from memory into the B register can be either instructions or data. The F flip-flop is used to distinguish between the two. When F = 0, the word read from memory is interpreted by the control logic to be an instruction, and the computer is said to be in an instruction fetch cycle. When F = 1, the word read from memory is taken as an operand, and the computer is said to be in a data execute cycle.

An instruction is read from memory during the fetch cycle. The register transfer relations that specify this process are:

F′t0: (C) → D, m
F′t1: (C) + 1 → C
F′t2: (B16-13) → I

When F = 0, the timing signals t0, t1, t2 initiate a sequence of operations that transfer the contents of the C register into the D register, initiate a memory cycle, increment the C register to prepare it for the next instruction, and transfer the operation part to the I register. All elementary operations are executed when the control function (as specified on the left of the colon) becomes logic-1 and when a clock pulse occurs. Thus the elementary operations in register flip-flops, as well as the memory initiate cycle, are
