
AGARD-AG-183
AGARDograph No. 183

on

Principles of Avionics
Computer Systems
Edited by
J.N.Bloom

NORTH ATLANTIC TREATY ORGANIZATION

DISTRIBUTION AND AVAILABILITY
ON BACK COVER
AGARD-AG-183

NORTH ATLANTIC TREATY ORGANIZATION

ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT

(ORGANISATION DU TRAITE DE L'ATLANTIQUE NORD)

AGARDograph No. 183

PRINCIPLES OF AVIONICS COMPUTER SYSTEMS

Edited by

J.N.Bloom

Communications Research Centre


Communications Canada

This AGARDograph has been prepared at the request of the Avionics Panel of AGARD.
THE MISSION OF AGARD

The mission of AGARD is to bring together the leading personalities of the NATO nations in the fields of
science and technology relating to aerospace for the following purposes:

— Exchanging of scientific and technical information;

— Continuously stimulating advances in the aerospace sciences relevant to strengthening the common defence
posture;

— Improving the co-operation among member nations in aerospace research and development;

— Providing scientific and technical advice and assistance to the North Atlantic Military Committee in the
field of aerospace research and development;

— Rendering scientific and technical assistance, as requested, to other NATO bodies and to member nations
in connection with research and development problems in the aerospace field;

— Providing assistance to member nations for the purpose of increasing their scientific and technical potential;

— Recommending effective ways for the member nations to use their research and development capabilities
for the common benefit of the NATO community.

The highest authority within AGARD is the National Delegates Board consisting of officially appointed senior
representatives from each member nation. The mission of AGARD is carried out through the Panels which are
composed of experts appointed by the National Delegates, the Consultant and Exchange Program and the Aerospace
Applications Studies Program. The results of AGARD work are reported to the member nations and the NATO
Authorities through the AGARD series of publications of which this is one.

Participation in AGARD activities is by invitation only and is normally limited to citizens of the NATO nations.

Published December 1974

Copyright © AGARD 1974

681.32:629.73.05

Set and printed by Technical Editing and Reproduction Ltd
Harford House, 7-9 Charlotte St, London, W1P 1HD
LIST OF CONTRIBUTORS

Chapter 1 INTRODUCTION
J.N.Bloom
Communications Research Centre
Communications Canada

Chapter 2 BASIC DIGITAL COMPUTER CONCEPTS


Prof. A.R.Meo
Istituto di Elettrotecnica Generale,
Politecnico di Torino, Italy

Chapter 3 DATA ACQUISITION AND COMMUNICATION FUNCTION


Yngvar Lundh
Norwegian Defence Research Establishment

Chapter 4 OPTIMISATION
Yngvar Lundh

Chapter 5 SYSTEMS AND SYSTEMS DESIGN


Dr C.S.E.Phillips
Royal Radar Establishment, U.K.

Chapter 6 AVIONICS SYSTEM ARCHITECTURE


R. E.Wright
C.E. Digital Systems Development,
Ferranti Ltd., Bracknell, U.K.

Chapter 7 DEFINING THE PROBLEM AND SPECIFYING THE REQUIREMENT


Silvio Boesso and Rodolfo Gamberale,
SELENIA, Industrie Elettroniche Associate SpA,
Rome, Italy

Chapter 8 MONITORING AND CONTROL OF AEROSPACE VEHICLE PROPULSION


E.S.Eccles
Smiths Industries Ltd.,
Aviation Division, U.K.

Chapter 9 MAN-MACHINE INTERFACE


Dr E. Keonjian
Engineering Consultant, U.S.A.

Chapter 10 NOVEL DEVICES AND TECHNIQUES


Dr E.Keonjian and Dr A.L.Freedman

Chapter 11 SPECIFYING THE REQUIREMENTS


Dr A.L.Freedman
The Plessey Company Ltd., U.K.

Edited by J.N.Bloom

CONTENTS

Page

1. INTRODUCTION
1.1 Purpose of the book 1
1.2 Plan of the book 1

2. BASIC DIGITAL COMPUTER CONCEPTS


2.1 The Functional Units of a Computer 3
2.2 Flip-Flops and Registers 4
2.3 Numeric Information Coding in a Computer 5
2.4 Boolean Algebra 10
2.5 Building Blocks 14
2.6 The Arithmetic Unit 20
2.7 The Memory 21
2.8 The Control Unit 23
2.9 Input-Output Devices 26
2.10 Software 27

3. DATA ACQUISITION AND COMMUNICATION FUNCTION


3.1 Typical Devices to which an Avionics Computer is Connected 30
3.2 Data Types, Forms and Formats 30
3.3 Characteristics of Data 31
3.4 A/D and D/A conversion 32
3.5 Computer Interfacing 36
3.6 Data Transmission 38
3.7 The Programmer's View 41

4. OPTIMISATION
4.1 The Optimisation Problem 42
4.2 Important Parameters 43
4.3 Typical Trade-Off Situations 44
4.4 Methods of Determining Adequacy 46

5. SYSTEMS AND SYSTEMS DESIGN


5.1 Introduction 47
5.2 Systems 47
5.3 System Design Methodology 48
5.4 Programs as Systems 49
5.5 Functional System Approach 51
5.6 Purpose of Programming Network Diagrams 52
5.7 Data Rectangles 52
5.8 Process Circles 55
5.9 Example of a Simple Hierarchic Program Network 56
5.10 Hierarchy of Diagrams 56
5.11 Simulation and Testing 61
5.12 Real Time Computer Systems 61
5.13 Hierarchical Viewpoint 62

6. AVIONICS SYSTEM ARCHITECTURE


6.1 Introduction 64
6.2 The Practical Approach 66
6.3 Methods of Assessment of Computing Power and Information Rates 71
6.4 General Philosophies and Trade-Offs 73
6.5 Reliability Considerations 78
6.6 Examples of Avionic System Architecture 82

7. DEFINING THE PROBLEM AND SPECIFYING THE REQUIREMENT


7.1 Introduction 88
7.2 Survey of Typical Tasks of an Avionic System 88
7.3 From Operational Requirements to System Functions 89
7.4 From System Function to Computer Requirements 92
7.4.1 Presentation of the Requirements 92
7.4.2 An Example Set of Elementary Operations 93
7.4.3 Functional Analysis 99


7.4.4 Translation of the Model 103


7.4.5 Mission Statistics 105
7.4.6 Memory for Data 108
7.4.7 Input-Output 110
7.4.8 Execution Times and Instruction Set 111
7.4.9 Instruction Word-Length and Format 114
7.4.10 Memory for Program 117
7.4.11 Total Memory Requirements 117

8. MONITORING AND CONTROL OF AEROSPACE VEHICLE PROPULSION


8.1 Introduction 119
8.2 Statement of the Problem 119
8.3 The Requirements of Propulsion Control and Monitoring 121
8.4 Definition of Design Failure Characteristics 125
8.5 System Selection and Architecture 126
8.6 System Architecture 132
8.7 Monitoring in Digital Computer Systems 135
8.8 Data Acquisition, Communication and Processing 136
8.9 Man-Machine Interface 138
8.10 Practical Realization 139
8.11 Conclusion 142

9. MAN-MACHINE INTERFACE
9.1 Introduction 143
9.2 Human Capabilities and Limitations of the Crew 143
9.3 Allocation of Functions to Man and Machine 143
9.4 Establishing Requirements for Information Display and Manual and Automatic Controls 144
9.5 Design of the Man-Machine Interface 144
9.6 Equipment for Man-Machine Interface 145

10. NOVEL DEVICES AND TECHNIQUES


10.1 Introduction 150
10.2 Large Scale Integration (LSI) Technology 150
10.3 Semiconductor and Other Types of Memories 154
10.4 Large Scale Integration (LSI) Testing 158
10.5 Functional Testing 158
10.6 Parametric Testing, D.C. and A.C. 160
10.7 Opto-Electronic Devices 160

11. SPECIFYING THE REQUIREMENTS


11.1 Practical Definition of a System 163
11.2 Deriving the Specification of the System as a Whole 163
11.3 System Design 168
11.4 Devices and Techniques, An Overview 177
CHAPTER 1

INTRODUCTION

J.N.Bloom

1.1 PURPOSE OF THE BOOK

Modern computer systems comprise a set of structures that continue to grow in complexity, size and diversity.
To the uninitiated, the amount of information available that describes these structures appears overwhelming.

Often, career officers or civilian administrators with little or no computer systems background or experience
find themselves in a position where they are charged with a sole or joint responsibility for the acquisition of a
computer system. The purpose of this book is to provide these officers and administrators of the NATO countries
with a package of information that will give them an understanding of the procedures involved in defining a
requirement, specifying that requirement and, hopefully, of the convergent process that results in satisfying
that requirement for a computer-based system.

This book presents to officials of the NATO countries an introductory treatment of the principles underlying
the computer systems encountered in avionics, and provides an insight into the structural organization of those
systems. The book explains the methodology behind the specification, analysis, design and implementation of
computer based systems. The treatment of the material emphasizes avionic systems, but the principles are relevant
and applicable to all computer systems.

A systematic treatment in depth of all levels of computer organization is not possible in a book such as this.

While sufficient material has been included in the text to achieve the principal goal of the book, that it be
educational, extensive references will enable the interested reader to pursue certain topics of his or her special
interest.

1.2 PLAN OF THE BOOK

The organization of the material in the book is such that the fundamentals of computers and basic concepts
are introduced in the early chapters. Gradually, the reader is introduced to the language of computer technology
and the vocabulary of systems terminology. The basic chapter of this book is Chapter 2. The chapter serves as an
introduction to the subject for those readers coming to it for the first time, as well as a useful review for those
who have been exposed to it in the past.

The next chapter, Chapter 3, introduces the reader to the problems of communicating with a machine and to
the preparation and formatting of input data. The task faced by the programmer in assembling a list of instructions
that will cause the computer to carry out a desired function is carefully described.

Chapter 4, on optimization, discusses the important topic of defining the problem for whose solution the
system is being assembled or acquired. The reader is shown that reality dictates a set of choices amongst
alternatives; there are no ideal optimal solutions.

The very complex subject of systems and system design is treated in Chapter 5. Fundamental ideas are
introduced and used to develop a basis for the next level of concepts. The goal is to give the reader an insight into
current concepts in design philosophy and methodology in systems.

Chapter 6 on Avionics System Architecture provides the reader with a logical approach to the problem of
determining the size of system required. The trade-offs and compromises that must be considered in arriving at a
suitable system configuration are presented so that the reader can appreciate the problems posed by choosing from
sets of alternatives. Some examples of typical systems are given to illustrate the ideas embodied in the text.

Chapter 7 is a comprehensive chapter on a higher level than the preceding material. The fundamental ideas
are introduced anew here, and brought to the point where the analysis of a typical, complex problem is undertaken.
The chapter develops more rapidly than the preceding ones, and is recommended for those readers with some
background in computing machines.

Chapter 8 illustrates how the material of the preceding chapters may be used. The chapter analyzes the
problem of the monitoring and control of aerospace vehicle propulsion. The reader can trace the application of
the principles introduced and discussed in the earlier chapters.

A problem area introduced in preceding chapters is enlarged on in Chapter 9. The problem of the man-machine
interface is of paramount importance and is receiving much attention from workers all over the world; but an
exhaustive treatment of the man-machine interface is not possible in a book of this kind. Rather, some basic
notions are introduced and the reader is left to follow up his interest by further reading.

Chapter 10, too, is only of an introductory nature. Sufficient information is given to show the many aspects
of semi-conductor technology today, and to indicate the variety of devices that contend for the designer's attention
when implementing a system.

Chapter 11 is synoptic in nature, giving an overview of the book with some insight into the relationship of the
parts. The great experience and insight that the author has into the problems of specifying computer systems
requirements is evident; the reader will at once become familiar with some of the do's and don'ts of system
acquisition.
CHAPTER 2

BASIC DIGITAL COMPUTER CONCEPTS

A.R.Meo

2.1 THE FUNCTIONAL UNITS OF A COMPUTER

A digital computer is usually viewed as consisting of five functional units: arithmetic unit, memory, control
unit, input devices and output devices. The block diagram of such an organization is shown in Figure 2.1.

[Figure 2.1: block diagram of the five functional units (arithmetic unit, memory, control unit, input devices, output devices); solid lines carry information, broken lines carry command signals]

Fig. 2.1 The functional units of a computer

The arithmetic unit is the device where information is processed, that is, the arithmetical and logical operations
involved in a given program are performed.

The memory is the set of devices where information not currently in use is stored. Stored information
includes: the numerical data to be processed; the sequence of the operations to be performed on the numerical data,
or program; the intermediate results; the final results to be delivered to the output. What is essential for an
information processing system to be considered a computer, or, more accurately, a stored-program computer, is
that the memory should contain not only the problem data but also the program. Thus, for example, a desk calculator,
which is functionally equivalent to the arithmetic unit alone, is not considered a computer. If the numerical data
and the program can both be easily changed in the memory, the system is called a general-purpose computer, since
there is hardly a limit to the number of applications of any given type for which it can be used. In many airborne
computers changing the content of the program memory implies a "rewiring of the machine", which is a relatively
complex operation. This drawback can be accepted when the system is designed for solving a specific class of
problems, and in this case the system is referred to as a special-purpose computer.

The input and output devices perform the functions of receiving and delivering the incoming and outgoing
information, respectively.

The control unit issues command and control signals to the remaining functional units of the system. It
receives information pertaining to the program from the memory and assigns tasks, one at a time, to the other
units.

2.2 FLIP-FLOPS AND REGISTERS

Flip-flops
Engineering considerations based on the analysis of cost, reliability and dimension have led to the conclusion
that the best hardware atom is the bistable device, or flip-flop. The symbol for the flip-flop is shown in
Figure 2.2.

[Figure 2.2: a flip-flop block with output lines labelled "1 output" and "0 output" and input lines R, S and C]

Fig. 2.2 The flip-flop

At present, a flip-flop consists of a pair of active elements (e.g., transistors) working reciprocally. When one
of them passes current, the other is open; and vice versa. This implies that a flip-flop has two states, arbitrarily
labelled 0 and 1. The information pertaining to the state of the flip-flop is delivered to the output by means of
the lines 0 output and 1 output. When the system is in the 0 state, the 0 output line is excited and the 1 output
line is not excited. The contrary occurs when the system is in the 1 state.

Three lines enter the block of Figure 2.2, one of which is often missing. A signal on the line labelled R
(reset) sets the device to its 0 state, regardless of its present state. A signal on the line labelled S (set) sets the
system to its 1 state, regardless of its present state. A signal on the line labelled C (complementation) changes the
state of the system.

A flip-flop can be set or reset in a few nanoseconds, or a few tens of nanoseconds, depending on the type of
logic used. Its content can be read as fast as a signal can scan the output. In principle, a flip-flop is a memory
element; indeed, it is the smallest memory element, since it holds the information expressed by one binary digit,
or bit. However, since its cost is relatively high, it is seldom used as an elementary unit of the main memory of
a computer. Usually, flip-flops are introduced into all the functional units of a computer, especially the
arithmetic unit, as temporary, fast-access storage devices.

Registers
A register is an ordered set of flip-flops (or any other one-bit storage devices). Therefore, the content of a
register is an ordered set of binary digits, or bits, which can be interpreted as a binary number. This ordered set
of bits is often called a "word".
Figure 2.3 shows the symbol for the parallel register, namely, the one in which all the bits are set in
parallel. Notice the double lines indicating the several signal lines which transmit information into, or out of, the
register, and the single line indicating the command signal on receipt of which a new word is introduced into the register.

[Figure 2.3: a register block with a double line for input information, a double line for output information and a single command line]

Fig. 2.3 The symbol for the parallel register

The serial register is a register that can receive and transmit only a single bit of information at a time. For
example, the new bit is received at the left end and the output bit is delivered at the right end. The symbol for
the serial register is shown in Figure 2.4 where the single line labelled "shift" indicates the command signal on
receipt of which a new bit is received (and a new bit is delivered at the output).

[Figure 2.4: a register block with a single-bit input line, a single-bit output line and a SHIFT command line]

Fig. 2.4 The symbol for the serial register

A particular type of serial register is the shift-register. On receipt of a command signal, a new bit is received at
one end and the information content of any component flip-flop is transmitted one step toward the other end.
However, the information content of the whole register can be read in parallel.
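By way of illustration, the behaviour of such a shift-register can be sketched in a few lines of Python (an illustrative model of ours, not part of the text's notation):

    # A minimal model of an n-bit shift register: on each SHIFT command
    # a new bit enters at one end and the oldest bit leaves at the other,
    # while the whole content remains readable in parallel.
    class ShiftRegister:
        def __init__(self, n):
            self.bits = [0] * n              # flip-flop contents, left to right

        def shift(self, new_bit):
            out = self.bits[-1]              # bit delivered at the output
            self.bits = [new_bit] + self.bits[:-1]
            return out

        def read_parallel(self):
            return list(self.bits)           # parallel read of the whole word

    r = ShiftRegister(4)
    for b in (1, 0, 1, 1):
        r.shift(b)
    print(r.read_parallel())                 # [1, 1, 0, 1]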

2.3 NUMERIC INFORMATION CODING IN A COMPUTER

Because of the binary nature of the registers and the other building blocks, a computer is commonly built to
handle ordered sets of binary digits. Such ordered sets can represent numbers or, more generally, combined alpha-
betic and numeric information, according to certain assumed codes. In this section, we shall briefly present a
widespread way of representing binary numbers and performing arithmetical operations.

Binary Numbers
A common way of representing a number as a sequence of binary digits consists in viewing the number as an
ordered set of decimal digits and representing each decimal digit with a combination of four bits according to a
specified code. A possible code for representing a decimal digit in binary form is the one reported in
Table 2.1. Thus, the number 378 would be represented as

0011 0111 1000

This coding scheme, commonly referred to as "binary coded decimal" (BCD), has a serious drawback. Since the
number of combinations of four bits is 16 and there are only ten decimal digits, six combinations are never assigned.
Thus, for example, the sequences from 1010 to 1111 are not used for representing any digit. Therefore, the average
number of binary digits used in this representation technique is larger than strictly necessary, and this
results in a loss of efficiency.
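The BCD scheme just described is easy to state in code. The following Python sketch (our own, for illustration) encodes each decimal digit in four bits:

    # Encode a decimal integer in binary coded decimal: four bits per digit.
    def to_bcd(n):
        return ' '.join(format(int(d), '04b') for d in str(n))

    print(to_bcd(378))    # 0011 0111 1000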

TABLE 2.1

Representation of a Decimal Digit in Binary Code

DIGIT CODE

0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
- 1010 (not allowed)
- 1011 (not allowed)
- 1100 (not allowed)
- 1101 (not allowed)
- 1110 (not allowed)
- 1111 (not allowed)

This drawback is overcome by continuing the representation law indicated in Table 2.1 as shown in Table 2.2.

This code is called "pure binary code", and is specified by the following relation. A sequence of binary digits

a_n a_(n-1) ... a_1 a_0 . a_(-1) a_(-2) ...

describes the number

N = a_n·2^n + a_(n-1)·2^(n-1) + ... + a_1·2^1 + a_0·2^0 + a_(-1)·2^(-1) + a_(-2)·2^(-2) + ...

TABLE 2.2

Representation in Pure Binary Code

DIGIT CODE

10 1010
11 1011
12 1100
13 1101
14 1110
15 1111
16 10000
17 10001
...
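The positional relation given above can be checked directly. The following Python sketch (illustrative only) evaluates a pure-binary string, with an optional binary point, as the sum of its weighted digits:

    # Value of a pure-binary string: N = sum of a_i * 2**i over all digits.
    def binary_value(s):
        int_part, _, frac_part = s.partition('.')
        v = sum(int(b) * 2**i for i, b in enumerate(reversed(int_part)))
        v += sum(int(b) * 2**-(i + 1) for i, b in enumerate(frac_part))
        return v

    print(binary_value('10001'))    # 17
    print(binary_value('101.01'))   # 5.25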
Single-bit Arithmetic
Binary arithmetic is based on single-bit arithmetic. The addition of two single bits, an augend bit and an addend
bit, gives a sum bit and a carry bit according to the following rules:

AUGEND ADDEND CARRY SUM


BIT BIT BIT BIT
0 + 0 = 0 0
0 + 1 = 0 1
1 + 0 = 0 1
1 + 1 = 1 0

The multiplication of two single bits, a multiplicand bit and a multiplier bit, gives a single-bit product, according
to the following rules:

MULTIPLICAND MULTIPLIER PRODUCT


BIT BIT BIT
0 x 0 = 0
0 x 1 = 0
1 x 0 = 0
1 x 1 = 1

These rules are the obvious applications of the well-known concepts of sum and multiplication to the binary
arithmetic. Similar rules can be introduced for defining binary subtraction or division. The application of these
rules to performing binary arithmetic is straightforward. For example, the sum of 101.01 and 100.11 can be
performed as follows:

1   1 1 1      carry
    1 0 1 . 0 1
  + 1 0 0 . 1 1
  -------------
  1 0 1 0 . 0 0

Similarly, the product of 110 by 101 is executed in the following way:

      1 1 0  x
      1 0 1
     -------
      1 1 0
    0 0 0
  1 1 0
  ---------
  1 1 1 1 0
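Both worked examples can be verified mechanically, reusing the binary_value sketch given earlier (again illustrative only):

    # Check 101.01 + 100.11 = 1010.00 and 110 x 101 = 11110.
    print(binary_value('101.01') + binary_value('100.11'))   # 10.0 = 1010.00 in binary
    print(format(int('110', 2) * int('101', 2), 'b'))        # 11110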

Two-complement

The two-complement of a given binary number N = a_n a_(n-1) ... a_1 a_0, a useful concept for a widespread
way of representing negative numbers, is defined as follows.

First the number

N* = a'_n a'_(n-1) ... a'_1 a'_0

is considered, which is obtained from N by complementing every bit. Then the number 00 ... 01, having only
one bit equal to 1 in the least significant position of N*, is added to N*, thus obtaining a result R which is the
two-complement of N.

For example, the two-complement of N = 0 1 0 1 1 0 0 is obtained as follows:

First step:    N* = 1 0 1 0 0 1 1

                          1 1      carry
Second step:        1 0 1 0 0 1 1  +
                                1
               R =  1 0 1 0 1 0 0
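The two-step rule (complement every bit, then add one) is captured by the following Python sketch of ours:

    # Two-complement of an n-bit string: flip all bits, then add 1.
    def two_complement(bits):
        n = len(bits)
        flipped = int(bits, 2) ^ ((1 << n) - 1)        # first step: complement
        return format((flipped + 1) % (1 << n), '0{}b'.format(n))

    print(two_complement('0101100'))    # 1010100, as in the example above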
Representation of negative numbers
One of the most widespread techniques for representing the whole field of the relative numbers is the two-
complement notation, which is defined as follows:

Positive numbers
A positive number is represented by a magnitude section containing the representation of the number in the
pure binary code, and a sign bit, which is a bit 0, to be imagined at the left of the most significant bit of the
magnitude section. For example, the number +3 is represented by

SIGN BIT MAGNITUDE SECTION


0 1 1

Negative numbers
A negative number is represented by the two-complement of the given number in the magnitude section and
by 1 in the sign bit. For example, the number —1 is represented by

SIGN BIT MAGNITUDE SECTION


1 1 1

Indeed, 11 is the two-complement of 1.

As a more complete example, Table 2.3 presents the list of the codes used for representing the field of the
integers from —4 to +3.

Notice that the representation sn of a number (where s denotes the sign bit and n the magnitude section) can
be interpreted to mean that the represented number is

s x (-4) + n .

The merits of this representation technique will be apparent after the properties of the two-complement
arithmetic have been presented.

TABLE 2.3

Representation of the Integers from —4 to +3 in Two-complement Notation

NUMBER CODE

+3 011
+2 010
+1 001
 0 000
-1 111
-2 110
-3 101
-4 100

Addition in two-complement notation


The main advantage offered by two-complement notation is that the sum of two relative numbers can be
performed by using an adder for positive numbers and interpreting the sign bit as a magnitude bit. This is shown
in the following examples.

Addition of a positive and negative number

    0 1 1  (+3)  +
    1 1 0  (-2)
    -----------
(1) 0 0 1  (+1)

    0 0 1  (+1)  +
    1 1 1  (-1)
    -----------
(1) 0 0 0  (0)

Notice that the left-most carry is to be discarded.
Addition of two positive numbers

0 1 0  (+2)  +
0 0 1  (+1)
-----------
0 1 1  (+3)

0 1 0  (+2)  +
0 1 1  (+3)
-----------
1 0 1  overflow

Notice that in the second case the result is not correct, as is obvious, since the sum of +2 and +3 is outside
the field of the represented numbers. This overflow condition is shown by the fact that the sign bit of the result is
different from the sign bits of the two addends.

Addition of two negative numbers

    1 1 1  (-1)  +
    1 1 0  (-2)
    -----------
(1) 1 0 1  (-3)

    1 0 0  (-4)  +
    1 1 0  (-2)
    -----------
(1) 0 1 0  overflow

In the second case the result is not correct, because the sum of —4 plus —2 is outside the field of the represented
numbers. Also in this case the overflow condition is shown by the fact that the sign bit of the result is different
from the sign bits of the two addends.

Summing up, the addition of two relative numbers can be performed by using the sign bit as the most
significant of the magnitude bits, under the following two conditions:

(1) the left-most carry is to be neglected;


(2) the overflow condition may occur when the two addends have the same sign, and it is shown by the
fact that the sign bit of the result is different from the sign bits of the two addends.

Subtraction in two-complement notation


The simplest way to perform subtraction in two-complement notation is to add the two-complement of the
subtrahend to the minuend. On the other hand, the computation of the two-complement of a given number N
involves the complementation of all the bits of N and the sum of the result and 1. It follows that both addition
and subtraction require only an adder and a circuit for complementing the binary digits of a given number. Notice
also that many registers have two complementary outputs for any stored information bit; therefore, complementing
a bit corresponds merely to using the "complementary" output instead of the "true" output.
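Both rules (discard the left-most carry; detect overflow from the sign bits) and the complement-and-add method of subtraction can be condensed into a short sketch. The function names are ours; this is an illustration, not a description of any particular machine:

    # n-bit two-complement addition: the sign bit is treated as an ordinary
    # magnitude bit, the left-most carry is discarded, and overflow is flagged
    # when the addends share a sign that the result does not.
    def add_tc(a, b, n):
        raw = (a + b) % (1 << n)                     # discard left-most carry
        sign = 1 << (n - 1)
        overflow = (a & sign) == (b & sign) and (raw & sign) != (a & sign)
        return raw, overflow

    def sub_tc(a, b, n):
        # subtraction = addition of the two-complement of the subtrahend
        return add_tc(a, ((b ^ ((1 << n) - 1)) + 1) % (1 << n), n)

    print(add_tc(0b011, 0b110, 3))   # (1, False): (+3) + (-2) = +1, i.e. 001
    print(add_tc(0b010, 0b011, 3))   # (5, True): +2 + 3 gives 101, overflow
    print(sub_tc(0b001, 0b010, 3))   # (7, False): (+1) - (+2) = -1, i.e. 111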

Other operations. Hardware and software implementations


As in the usual decimal arithmetic, the product of two numbers is the sum of a certain set of rows, each
being obtained by multiplying the multiplicand by a digit of the multiplier and shifting the result a suitable number
of positions. In the binary case the multiplication of the multiplicand by a digit d is a very simple operation, since
the result coincides with the given multiplicand if d is 1, and it is always 0 if d is 0.

Division and other arithmetical operations can be performed by using computation techniques which are the
direct application to the binary case of well-known algorithms.

All these operations can be implemented in "hardware" or in "software". We say that an operation is
implemented in "hardware" when a circuit is available in the computer which can perform that operation directly.
When a command, or a suitable set of commands, is given to that circuit, the contents of one or more input
registers are read, the given operation is performed and the result or results are delivered to one or more output
registers. We say that an operation is implemented in "software" when the computer circuits can perform only
operations more elementary than the given one, which must then be carried out by executing a suitable sequence
of such elementary operations. This sequence constitutes a "sub-program", which is stored in a memory device
using a suitable code.

Fixed- and floating-point representations

In the representation techniques of the type presented above, which are usually referred to as "fixed-point
representations", the field of the represented numbers is relatively small. When the field of the represented
numbers is to be enlarged, it is necessary to use a representation technique of the "floating-point" type, in which
a part of the available code is used for indicating the position of the binary point. In other terms, if reference
is made to the expression of a number N as

N = ±A · 2^(±B) ,

a first section of the code is used for representing ±A (usually, in two-complement notation) and a second section
for representing ±B (generally, in the same way as for ±A).

Generally, small- and medium-scale computers used in aerospace applications perform floating-point arithmetical
operations in software, but there are also small- and medium-size avionic computers incorporating circuits for
hardware floating-point arithmetic.
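For a concrete feel of the ±A · 2^(±B) decomposition, the Python standard library will split a value into mantissa and exponent (a demonstration of the representation only, not of any avionic hardware):

    import math

    # Decompose N as A * 2**B with the mantissa A in [0.5, 1).
    A, B = math.frexp(-12.5)
    print(A, B)                # -0.78125 4, since -0.78125 * 2**4 = -12.5
    print(math.ldexp(A, B))    # -12.5, reassembled from the two sections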

2.4 BOOLEAN ALGEBRA

Boolean algebra is the mathematical tool which was introduced by George Boole for investigating logical
relations and is now widely applied to the description and design of digital computers. The basic concepts of
Boolean algebra are very briefly summarized in this section.

Boolean variables

A Boolean variable is a quantity which may, at different times, have one of two possible values. The two values
of a Boolean variable are denoted by "true" and "false", or by 1 and 0.

In the circuits of a computer, these two values are represented by two voltage levels, or by the presence or
absence of a pulse, or by two physical states. When the value 1 is represented by the high level of a physical
magnitude and the value 0 by the low level, the system logic is called "positive"; on the contrary, when the values
1 and 0 are represented by the low and the high level, respectively, the system logic is referred to as "negative".

Logical operators

Many operators which specify logical operations on Boolean variables can be defined. Some of the elementary
operators most widely used for describing complex systems are the following.

Logical NOT

The logical NOT, or complement, of a Boolean variable x is a new variable x' which is 1 when x is 0, and is
0 when x is 1.

Boolean operators are commonly defined by means of tables which specify the value of the dependent variable
as a function of any combination of values of the independent variables. These tables are usually referred to as
"truth tables".

The truth table for operator NOT is presented in Table 2.4. The complement of a variable x is often also
written with an overbar, as x̄.

TABLE 2.4

Truth Table for Operator NOT

x  NOT x

0  1
1  0

Logical OR

The logical OR, or sum operator, of two Boolean variables x and y is defined by the truth table presented
in Table 2.5.

TABLE 2.5

Truth Table for Operator OR

x y x OR y

0 0 0
0 1 1
1 0 1
1 1 1

This definition can be generalized to the case of many independent variables. The logical OR of n independent
variables x_1, x_2, ..., x_n is a new variable y which is 0 when all the independent variables are 0 and is 1 for any
other combination of values of the independent variables. Variable y can also be written as y = x_1 + x_2 + ... + x_n .

Logical AND
The logical AND, or product operator, of two Boolean variables x and y is defined by the truth table
presented in Table 2.6.

TABLE 2.6

Truth Table for Operator AND

x y x AND y

0 0 0
0 1 0
1 0 0
1 1 1

The definition can be generalized to the case of many independent variables. The logical AND of n independent
variables x_1, x_2, ..., x_n is a new variable y which is 1 when all the independent variables are 1, and is 0 for
any other combination of values of the independent variables. Variable y can also be written as y = x_1 · x_2 · ... · x_n .

Logical coincidence
The logical "coincidence", or COIN operator, of two Boolean variables x and y is a variable z which is 1 when
x and y take the same logical value and is 0, otherwise. Therefore, this operator is defined by the truth table in
Table 2.7.

TABLE 2.7

Truth Table for the Logical Coincidence

x y x COIN y

0 0 1
0 1 0
1 0 0
1 1 1

Logical EXOR
The logical EXOR, or exclusive-OR operator, of two variables x and y is defined by the truth table presented
in Table 2.8.
12

TABLE 2.8

Truth Table for Logical EXOR

x y x EXOR y

0 0 0
0 1 1
1 0 1
1 1 0

Notice that the truth table of logical EXOR differs from that of logical OR only in the value associated
with the fourth row. This comparison explains why this operator is called "exclusive-OR", whereas the OR operator
is also referred to as "inclusive-OR". Notice also that the logical EXOR is the complement of the COIN operator.

Operator NAND
The NAND operator is defined by the truth table presented in Table 2.9.

TABLE 2.9

Truth Table for NAND Operator

x y x NAND y

0 0 1
0 1 1
1 0 1
1 1 0

Notice that this operator, as the name NAND (NOT-AND) suggests, is the complement of the AND operator.
The definition given in Table 2.9 can be generalized to the case of many independent variables.

Operator NOR
The NOR operator, as suggested by the name (NOT-OR), is the complement of the OR operator. Therefore,
in the simple case of two independent variables, this operator is defined by the truth table presented in Table 2.10.

TABLE 2.10

Truth Table for NOR Operator

x y x NOR y

0 0 1
0 1 0
1 0 0
1 1 0
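All the elementary operators of Tables 2.4 to 2.10 can be written as one-line functions on 0/1 values; printing their outputs for the four input pairs reproduces the truth tables above. A Python sketch of ours, for checking only:

    NOT  = lambda x: 1 - x
    OR   = lambda x, y: x | y
    AND  = lambda x, y: x & y
    COIN = lambda x, y: 1 if x == y else 0
    EXOR = lambda x, y: x ^ y
    NAND = lambda x, y: NOT(AND(x, y))    # complement of AND
    NOR  = lambda x, y: NOT(OR(x, y))     # complement of OR

    for x in (0, 1):
        for y in (0, 1):
            print(x, y, OR(x, y), AND(x, y), COIN(x, y),
                  EXOR(x, y), NAND(x, y), NOR(x, y))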

Implementation of a complex function by means of elementary operators


Any Boolean function which has been described by means of a truth table can easily be implemented in terms
of OR, AND and NOT operators. By way of example, let us consider the simple case of the Boolean function
described by the truth table presented in Table 2.11. It is easy to verify that the output variable is the least
significant of the two digits generated by a circuit performing the binary addition of three binary digits (usually,
the digits of a certain weight of the two addends and the carry from the sum of the digits whose weight is smaller
by one unit). Inspection of Table 2.11 shows that the output variable can be expressed as follows:

s = x'·y'·z + x'·y·z' + x·y'·z' + x·y·z

where any three-variable product corresponds to one of those rows of Table 2.11 for which the output variable is 1.
The circuit shown in Figure 2.5 corresponds to the above written expression.

TABLE 2.11

Truth Table of a Boolean Function

x y z s

0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 0
1 0 0 1
1 0 1 0
1 1 0 0
1 1 1 1

[Figure 2.5: a two-level circuit of NOT, AND and OR units with inputs x, x', y, y', z, z' and output s]

Fig. 2.5 Implementation of the function s = x'·y'·z + x'·y·z' + x·y'·z' + x·y·z



The same output variable can be expressed as a function of independent variables in a number of different
ways. The problem of finding among the valid expressions the one corresponding to the circuit having the minimum
number of elementary units is one of the most important in logical design. Its treatment would be outside the
scope of this chapter; the reader interested in it can study References 1 — 3.

It is not difficult to prove that the same output variable s can be expressed in the following form:

s = [(/x)/(/y)/z] / [(/x)/y/(/z)] / [x/(/y)/(/z)] / [x/y/z]

where / denotes the NAND operator. The latter expression is formally the same as the former, with the exception
that every operator in the former has been substituted by a NAND operator. Therefore, the latter expression
contains only one type of operator and corresponds to a circuit having only one type of elementary unit, in contrast
with the circuit of Figure 2.5 using three different types of elementary units. This explains why NAND operators
(as well as NOR operators, which possess the same property) are so widely employed in binary circuits.
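That the sum-of-products form and the NAND-only form agree, for all eight input combinations, with the truth table of Table 2.11 can be verified exhaustively; here is a small Python check of ours:

    NOT  = lambda x: 1 - x
    NAND = lambda *a: NOT(min(a))     # n-input NAND: 0 only when all inputs are 1

    for x in (0, 1):
        for y in (0, 1):
            for z in (0, 1):
                sop = (NOT(x) & NOT(y) & z) | (NOT(x) & y & NOT(z)) \
                      | (x & NOT(y) & NOT(z)) | (x & y & z)
                nand_only = NAND(NAND(NOT(x), NOT(y), z),
                                 NAND(NOT(x), y, NOT(z)),
                                 NAND(x, NOT(y), NOT(z)),
                                 NAND(x, y, z))
                assert sop == nand_only == (x + y + z) % 2   # Table 2.11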

Combinational and sequential circuits

A binary circuit like the one shown in Figure 2.5, in which the output value depends only on the values taken
by the input variables at the considered instant, is called "combinational". On the contrary, a binary circuit in which
the output value depends on past values of the input variables is referred to as "sequential". The counters, which will
be described in the next paragraph, are examples of sequential circuits. In other terms, the difference between
combinational and sequential circuits lies in the fact that the latter have a memory keeping track of the past
history of the system.

Analysis and synthesis of sequential circuits are more complicated than the corresponding problems for
combinational circuits. The reader interested in them can read the works cited in References 1—3.

2.5 BUILDING BLOCKS

From elementary units like the ones we have so far described (flip-flops; AND, OR, NOT operators; NAND
operators; etc.) it is possible to build up more complex units, like the circuit shown in Figure 2.6, which performs
the arithmetic sum of three binary digits. In turn, sets of complex units like the one shown in Figure 2.6
can be arranged to generate even more complex systems, and so on. Therefore, there is a hierarchy of
complexity and it is very difficult to classify the set of systems and sub-systems. (Classification on the basis
of the size of the circuit is also of no value, since, as large-scale integration (L.S.I.) proceeds, more and more
complex systems are placed on one chip.)

The concept of the building block is a very relative one. However, what is here meant by building block is
a sub-system of medium complexity. Many building blocks could be introduced at this point; the following list
contains only those units which will be referred to in the following.

The adder

The medium-scale circuit shown in Figure 2.6 performs the arithmetic sum of three binary digits. The circuit
can be sub-divided into two sections. The upper section, which coincides with the circuit presented in Figure 2.5,
computes the less significant digit of the result, often called the "sum". The lower section implements the
function c = x·y + x·z + y·z and computes the more significant digit of the result, often referred to as the
"carry". The whole circuit is commonly called a "full-adder" (F.A.).

By a chain of full-adders, connected as shown in Figure 2.7, it is possible to implement a combinational
circuit performing the addition of two binary numbers

x_(n-1) x_(n-2) ... x_1 x_0
and
y_(n-1) y_(n-2) ... y_1 y_0 .

Such a circuit is called a "parallel adder", in that all the digits of the two addends are summed up in parallel.
However, the result at the output leads s_n, s_(n-1), ..., s_1, s_0 will not be correct until all the carries from any
full-adder to the following one have been propagated.

This implies that the operation time of the adder will be the sum of the delays with which all the carries are
determined. Since the computation of a carry requires two levels of elementary units, as shown in Figure 2.6, the
sum of two n-bit addends will involve a computation time of 2 x n x T, where T is the delay time of an
elementary unit. If T is equal to 10 nanoseconds and n is 16, 2 x n x T will be equal to 0.32 microseconds, which
is a typical value for the operation time of a parallel adder.
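The ripple of carries from stage to stage can be imitated in software; each loop iteration below corresponds to one full-adder of the chain, i.e., to one 2·T slice of the operation time (an illustrative sketch of ours, not a hardware description):

    # Parallel (ripple-carry) adder built from the full-adder equations
    # s = x XOR y XOR c and c' = x.y + x.c + y.c.
    def ripple_add(xs, ys):
        # xs, ys: lists of bits, least significant bit first
        carry, out = 0, []
        for x, y in zip(xs, ys):
            out.append(x ^ y ^ carry)
            carry = (x & y) | (x & carry) | (y & carry)
        return out + [carry]

    # 0110 (6) plus 0101 (5), least significant bit first:
    print(ripple_add([0, 1, 1, 0], [1, 0, 1, 0]))   # [1, 1, 0, 1, 0], i.e. 01011 = 11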

[Figure 2.6: inputs x, y and z feed the circuit of Figure 2.5, which delivers the sum s; AND and OR units compute the carry c]

Fig. 2.6 Example of implementation of a circuit performing the sum of three binary digits (full-adder)

Instead of using a chain of full-adders, it is possible to implement the addition by means of a single full-adder
and a one-bit storage unit. The scheme of such a solution is shown in Figure 2.8. Here, the two addends are
contained in two serial registers X and Y, while the result is stored into a third serial register S.

A sequence of suitable commands is delivered at the SHIFT lines of X, Y and S; whenever a command is
given at those lines a new bit is transferred toward the right end of the registers. The same sequence of commands
is given to the one-bit storage unit, which is essentially a delay unit, in the sense that at any SHIFT command the
input received at the preceding command is delivered to the output. The system of Figure 2.8 is termed a "serial
adder". Of course, serial adders are generally cheaper but also slower than parallel adders.

The preceding considerations, developed with reference to the parallel and serial adder, apply also to other parts
of a computer. Many functional sub-units of a computer, like registers, transmission lines, multipliers, etc., can be
implemented in parallel or serial form. Parallel solutions are faster but more expensive. The resulting speed-cost
trade-off should be carefully evaluated, since a wrong choice for any unit may compromise the efficiency of the
whole system.

The switch
A switch is a circuit that permits or prohibits the passage of a signal through a line or of a set of signals
through a set of lines. The well-known relay is an example of switch. It is inexpensive and bistable, but it does
not operate fast enough to serve in a high-speed computer.

For this reason, switches in a modern computer are generally electronic. Figure 2.9 shows a simple two-throw
switch. It is a combinational circuit implementing the function a·c + b·c'. Here a and b are the signals to be
transmitted, and c is the command signal. It is apparent that the presence of signal c (c = 1) throws the switch in
the up direction, and the absence of c throws it in the down direction.
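In functional terms the two-throw switch is what would now be called a two-to-one multiplexer; a short Python sketch (ours) makes the behaviour explicit:

    # Two-throw switch: output = a.c + b.c', with c as the command signal.
    def two_throw(a, b, c):
        return (a & c) | (b & (1 - c))

    print(two_throw(a=1, b=0, c=1))   # 1: c = 1 routes input a to the output
    print(two_throw(a=1, b=0, c=0))   # 0: c = 0 routes input b to the output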
Fig. 2.7 Implementation of a parallel adder



[Figure 2.8: serial registers X, Y and S with SHIFT lines, a full-adder and a one-bit storage unit holding the carry]

Fig. 2.8 Scheme of a serial adder

Similar arrays using AND and OR elementary units can implement multipole switches and multithrow switches.
A multipole switch can be imagined as a set of circuits like the one shown in Figure 2.9 operating in parallel;
therefore, it affects several information paths. A multithrow switch is a generalization of the circuit of Figure 2.9
in the sense that it can route a signal to several distinct lines.

Switches are among the most important components of a computer because they allow one computer sub-
system to control the behavior of another sub-system.

The decoder
In a computer, there is a frequent need for translation of a binary coded piece of information to a "one-out-
of-many" form. An example of this is the circuit shown in Figure 2.10. It has two input variables and four output
lines. Each of these output lines has been assigned a binary code (00 to line 0, 01 to line 1, 10 to line 2, 11 to
line 3). When a code word is presented at the two input lines, the corresponding output line is excited and all the
other ones take the value 0. The symbol for the decoder is shown in Figure 2.11.
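The one-out-of-many behaviour of the decoder of Figure 2.10 can be sketched as follows (illustrative Python of ours):

    # 2-to-4 decoder: exactly one output line is excited, namely the one
    # whose assigned binary code matches the input pair (x1, x0).
    def decode(x1, x0):
        code = 2 * x1 + x0
        return [1 if line == code else 0 for line in range(4)]

    print(decode(1, 0))   # [0, 0, 1, 0]: line 2 (code 10) is excited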

Fig. 2.9 Scheme of the two-throw switch, implementing a·c + b·c'

[Figure 2.10: four AND units driving the output lines 0 (00), 1 (01), 2 (10) and 3 (11)]

Fig. 2.10 A simple example of a decoder



Fig. 2.11 The symbol for the decoder

The encoder

The encoder provides the inverse function of the decoder. It has a number of input lines, only one of which
may be excited, and produces at the output the binary code corresponding to the input line excited. The symbol
for the encoder is shown in Figure 2.12.

Fig. 2.12 The symbol for the encoder

The counter

A counter is a device which is fed with a pulse train in time and delivers a signal combination in space, forming
a coded number which indicates how many pulses have been received at the input after the counter was last reset
to zero.

The symbol for the counter, which is a typical sequential circuit, is shown in Figure 2.13. In addition to the
count line C, receiving the pulse train, there is a reset line R, which may reset the counter to 0 regardless of its
present state. If the counter can count up to N, upon receiving the N-th pulse it is reset to zero.
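The count and reset behaviour is easy to model; a modulo-N counter in Python (a sketch of ours, with hypothetical method names):

    class Counter:
        def __init__(self, n):
            self.n, self.value = n, 0

        def count(self):                  # a pulse on line C
            self.value = (self.value + 1) % self.n   # N-th pulse returns to 0

        def reset(self):                  # a signal on line R
            self.value = 0

    c = Counter(4)
    for _ in range(6):
        c.count()
    print(c.value)    # 2: six pulses into a modulo-4 counter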

Fig. 2.13 The symbol for the counter (count line C, reset line R)



2.6 THE ARITHMETIC UNIT

A typical arithmetic unit contains some registers, some switches, a set of circuits performing arithmetic
operations, a small control unit and other elements of minor importance. A very simple scheme with two
registers and two switches is presented in Figure 2.14.

[Figure 2.14: registers A and B exchange information with the memory through two switches; the arithmetic circuit links the registers, and an arithmetic unit control communicates with the control unit]

Fig. 2.14 Scheme of the arithmetic unit

Registers hold operands, intermediate results and final results. The number and the tasks of registers vary
from computer to computer. Very simple computers may have only one register, called "accumulator". Medium-
scale computers may have from two to five or six registers. Typical of a pair of registers is the function of holding
the divisor and the quotient during division. Large-scale computers may have up to a few tens of registers,
variously organized. An important register which will be presented later is the index register, used for a special
technique of addressing.

Switches control the flow of information from one register to another, either directly or through the circuits
performing arithmetic operations.

What we mean by "circuits" is the system of the units performing arithmetic operations. In the very simple
case of Figure 2.14, the circuits may consist only of a serial or parallel adder. In more complex computers this
building block may comprise very sophisticated units such as, for example, a facility for performing floating-point
multiplication in hardware.

The arithmetic unit control is a special unit which supervises the activity of the arithmetic unit. When
informed by the control unit of the computer of the instruction to be executed, this small control unit times and
monitors the process.

Other elements of minor importance are also required. For example, a counter may indicate how many bits
have been already summed up in a serial adder. Other indicators summarize the state of the system or the
occurrence of some particular events. For instance, a one-bit indicator is used for storing the information that
overflow has occurred in the last executed addition.

2.7 THE MEMORY

The main types of memory devices

The most popular memory device is the ferrite core memory. It is built up of ferrite cores, one per bit.
Indeed, ferrite has a nearly rectangular hysteresis cycle; so two stable states are possible when the core is not
excited, and each of these states is assigned a binary value. Most modern cores have a diameter less than 20 mils.

Cores are usually wired together to form square or possibly rectangular matrices in which each matrix contains
as many cores as there are words in the memory bank. A set of parallel matrices forms a bank, in which each
matrix corresponds to a bit position in the memory word.

The capacity of a ferrite core memory ranges from a few thousand to several million bits, and the time
required for reading or writing a word varies from a few hundred nanoseconds to a few microseconds.

The principle of using the two magnetization states of a small element is applied in a number of other magnetic
storage devices. The most common of these magnetic storage media is magnetic tape; but also magnetic disks, drums,
cards or strips are widely used. The capacity of these memory devices may be very large, but the average time
required for reading or writing a word is much larger than that of a ferrite core memory and depends on the
position of the datum on the magnetic medium.

Storage organization

In general, a computer memory is organized as a set of cells or positions, each of which is specified by an
address. The address should not be confused with the content of a cell; the former is a unique label for the cell,
the latter may vary during the operation of the memory.

It is customary to distinguish between sequential access memories or serial access memories, and direct access
memories, or parallel access memories or random access memories. Examples of sequential access memories are
magnetic tapes or disks. In them the records must be written and read in sequence, and, therefore, the access
time, namely the time it takes to find a memory block and write into it or read from it, varies somewhat depending
on where the required data are situated with respect to the last called data. A core memory is an example of
random access memory. In it any part of the memory can be reached in approximately the same time, regardless
of where the record is situated relative to the record which was last read or written.

Sometimes, it is necessary to distinguish between primary memory and secondary memory. A primary memory
is fitted into the organization of the computer in such a way that it can exchange information directly with the
control unit and the arithmetic unit. Generally, each cell in primary storage can be specified with one instruction.
A secondary memory cannot exchange data with any unit other than primary memory. In general, an individual
cell in a secondary memory cannot be specified with one instruction; information is transferred in larger blocks
between primary and secondary memory.

The system with primary and secondary memory is a simple example of hierarchical storage organization. The
need for a hierarchy in storage organization derives from the different costs of the different memory devices. Thus,
for example, a core memory is rather fast, but it has a relatively high cost per bit. A disk memory is characterized
by a smaller cost per bit but a larger access time. In a typical solution a core memory is used for primary storage
and a disk memory for secondary storage. Those data which are often used during a certain computation are stored
in the primary memory, while those data blocks (numerical data or programs) which will be used later are held in
the secondary memory. They will be transferred into the primary memory before they are to be used in the
execution of a computation.

Another type of two-level organization is frequently used in avionic computers, whose main requirements are
very small size, low power dissipation and light weight. In these systems the following two types of storage are
used:
(1) A primary memory consisting of relatively few flip-flop registers. This small, very fast memory, which
is well suited for data and instructions that are going to be used very often, is sometimes called a
"scratch pad memory".

(2) A secondary memory consisting of relatively many one-bit storage devices, into which the computer
cannot write information.

The electronic technology used for this type of storage device, generally based on metal-oxide transistors,
allows a small integrated circuit to contain a relatively large amount of memory. This type of memory, generally
called "permanent memory" or "read-only memory" (ROM), will be used for the permanent storage of programs
or numerical constants of computation.

The elements of a random access memory


A core memory, as well as a large solid-state memory, can be represented with the block diagram of Figure 2.15.
The basic elements of such a system are: the cells, the memory address register, the memory data register, the
memory control unit.

[Figure 2.15: the memory cells connected to a memory address register and a memory data register; a memory control unit with recall/memorize, start and done lines supervises the cycle]

Fig. 2.15 The organization of a random access memory

As mentioned above, the cells are arranged in a two-dimensional array. Each cell holds an ordered set of bits
or word, which is not destroyed by a recall operation. However, a new word may be written into a cell at the
expense of destroying existing information. The time necessary for memorizing or recalling a word is independent
of the address of the cell and is called "cycle time".

The memory address register holds the address of the cell with which the memory is currently concerned. Its
content is transmitted to it from some other unit of the computer.

The memory data register holds a datum. During a recall operation, the information from the cell pointed to
by the memory address register is placed temporarily in the memory data register by the memory control unit. It
is available to the requesting sub-system when the done line is turned on. During a memorizing operation, the datum
to be stored is placed by the originating sub-system into the memory data register by a suitable command signal.
Then it is transmitted to the cell pointed to by the memory address register by means of a suitable command
delivered by the memory control unit.

The memory control unit controls the memory cycle. It is instructed by the requesting unit either to recall
or memorize. After the start signal is received, the memory control unit keeps the memory address register and
the memory data register locked out from interference by other sub-systems until the whole job is completed.
The memory control unit finds the cell and times the flow of information between the memory data register and
the chosen cell. When the job is completed, the memory control unit issues a completion signal indicated in the
figure as done.
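The interplay of the address register, the data register and the recall/memorize commands can be summarized in a few lines of Python (an illustrative model of ours, ignoring timing and lock-out):

    # Sketch of the random access memory of Figure 2.15.
    class Memory:
        def __init__(self, size):
            self.cells = [0] * size
            self.address_register = 0     # selects the cell
            self.data_register = 0        # buffers the word

        def recall(self):                 # cell -> data register (non-destructive)
            self.data_register = self.cells[self.address_register]
            return self.data_register

        def memorize(self):               # data register -> cell (old word destroyed)
            self.cells[self.address_register] = self.data_register

    m = Memory(1024)
    m.address_register, m.data_register = 100, 42
    m.memorize()
    print(m.recall())    # 42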

2.8 THE CONTROL UNIT

Function
The control unit supervises and coordinates all the operations of a computer including those of the arithmetic
unit, memory, input/output devices, as well as its own. Depending upon the organization of the computer, the
control unit may or may not be able to relinquish its autonomy to one of the other sub-systems. Even when it
does so, the sub-system in question returns authority to the control unit when the subservient sub-system has
completed its operation.

Complete directions are supplied to the control unit by the program, the sequence of instructions or commands.
These instructions or commands are stored in the memory together with the numerical data, and are recognized and
interpreted by the control unit as it encounters them. Since each instruction is comprehensible to the computer
but may not be directly readable by a human, this sequence is called the "machine language program".

Operation
The control unit operates in two cycles, fetch and execute.

In the fetch cycle a new instruction is brought from the memory to a location in the control unit where it is
examined and interpreted. With some exceptions, the control unit gets its next instruction from the memory
location right after the one where it got its last command.

In the execute cycle, the control unit interprets and performs the instruction it has fetched. Usually, the
execution of an instruction requires at least one operand which is held in the memory. The control unit sets up the
destination for receipt of the new operand, and, when this is passed over to the destination sub-system, it instructs
the sub-system what to do.

When the destination sub-system has completed its task, a new fetch cycle is begun.
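The fetch-execute alternation can be caricatured in a dozen lines. The operation codes below are hypothetical, chosen only to show the cycle; note that instructions and data share one memory, as a stored-program machine requires:

    # A one-address machine in outline: fetch, advance the counter, execute.
    def run(memory, accumulator=0):
        pc = 0                                # instruction counter
        while True:
            op, addr = memory[pc]             # fetch cycle
            pc += 1                           # next instruction, barring a branch
            if op == 'LOAD':                  # execute cycle
                accumulator = memory[addr]
            elif op == 'ADD':
                accumulator += memory[addr]
            elif op == 'JUMP':
                pc = addr                     # branch: overwrite the counter
            elif op == 'HALT':
                return accumulator

    program = [('LOAD', 4), ('ADD', 5), ('HALT', 0), None, 10, 32]
    print(run(program))    # 42: loads cell 4, adds cell 5, halts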

The structure of the control unit


The structure of the control unit is presented in Figure 2.16.

The instruction counter is a register which stores the address in the memory of the current instruction. As a
rule, the sequence of the instructions is stored into a set of adjacent cells, so that the content of the instruction
counter may be increased by one unit at the completion of any fetch cycle. Only a limited class of instructions —
the "branch" instructions — requires a content different from the increment by one unit to be introduced into the
instruction counter.

The one-bit storage device F indicates which cycle of operation — fetch or execute - is in progress. During the
fetch cycle Switch 1 transmits the content of the instruction counter to the memory address register, so that the
new instruction is read from the memory and brought to the memory data register.

During the fetch cycle, after the new instruction is received, the content of the memory data register is
transferred to the instruction register. Usually, an instruction consists of a number of distinct parts. A section of
the instruction contains the "operation code" indicating which type of instruction is to be executed. Some
distinct sections contain the addresses of the memory cells holding the operands or other data (for example, in
the branch instructions, the address of the instruction to be executed next). In the simplest case, each instruction
refers only to one operand, so that there is only one section containing an address. Finally, other sections may
contain supplementary information.

The decoder works during the execute cycle. It examines the section of the instruction register holding the
operation code, and excites at the output the line corresponding to the type of instruction to be executed.

The instruction encoder interprets the signals produced by the decoder, chooses the sub-systems which are
to be informed and sets up the flow of information to them.

Notice that during the execute cycle the content of the address section of the instruction register is trans-
ferred to the memory address register, and the operand is brought to the memory data register. Usually, this datum
will be transferred to the arithmetic unit through Switch 2.

The Repertoire of the Instructions


In a first, rough classification, whose intent is merely to give an idea of the instruction repertoire, we can
distinguish between the following classes of instructions.

[Figure 2.16: the memory data register and memory address register connect, through Switch 2 and Switch 1, to the arithmetic unit, the instruction register and the instruction counter; a decoder and an instruction encoder interpret the operation code]

Fig. 2.16 Scheme of the control unit

Transfers

The content of the cell whose address is specified by the address section of the instruction is to be transferred
to a certain register (usually, of the arithmetic unit).

Arithmetic (or logic) operations

The content of the cell whose address is specified by the address section is to be submitted to a given arithmetic
(or logic) operation together with the content of a certain register.

Shifts

The content of a specified register is to be shifted to the right or to the left. The number of shifts to be
executed is usually indicated in the address section with a certain code.

Jumps

The content of the address section is to be transferred to the instruction counter. Sometimes the indicated
jump must be executed only if a certain condition - for example, the equality to 0 of the content of a specified
register - is satisfied (conditional jump).

Output operations

The content of a certain register (for example, the main register of the arithmetic unit) is to be transferred to
the interface register of a specified output device, which is indicated in the address section of the instruction.

Input operations

These are similar to the preceding output operations, with the direction of transfer reversed.
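As an illustration of how a control unit might realize such a repertoire, the following minimal sketch in C models the fetch and execute cycles of a hypothetical one-address machine with 16-bit cells, a 6-bit operation code and a 10-bit address section (the format assumed in the next sub-section). All opcode values and names are invented for illustration only.

    /* Sketch of the fetch-execute cycle of a hypothetical
       one-address machine.  Opcode values are assumptions. */

    #define OP_LOAD 0   /* transfer: memory cell -> accumulator       */
    #define OP_ADD  1   /* arithmetic: add memory cell to accumulator */
    #define OP_SHR  2   /* shift right; count coded in address part   */
    #define OP_JMP  3   /* jump: address -> instruction counter       */
    #define OP_HALT 4   /* stop the machine                           */

    unsigned short mem[1024];   /* main memory (2^10 cells)           */
    unsigned short acc;         /* main register of arithmetic unit   */
    unsigned short ic;          /* instruction counter                */

    void run(void)
    {
        for (;;) {
            unsigned short ir = mem[ic++];  /* fetch cycle            */
            unsigned op   = ir >> 10;       /* operation code, 6 bits */
            unsigned addr = ir & 0x3FF;     /* address part, 10 bits  */
            switch (op) {                   /* execute cycle: decoder */
            case OP_LOAD: acc = mem[addr];   break;
            case OP_ADD:  acc += mem[addr];  break;
            case OP_SHR:  acc >>= addr;      break;
            case OP_JMP:  ic = addr;         break;
            case OP_HALT: return;
            }
        }
    }

Note how the branch (jump) instruction is the only one which replaces, rather than follows, the incremented instruction counter.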

Indexing and indirect addressing

Let us assume that, as is often the case, the length of a memory cell (and of an instruction) is equal to 16 bits.

If 6 bits are devoted to the operation code, which will distinguish 2^6 = 64 different instructions, only 10 bits will remain available for the address section. It follows that, even in the case of one-address computers, only 2^10 = 1024 different memory cells can be referred to in an instruction. Since the memory size may be much larger than 1024 cells, it is necessary to devise some method for referring to the whole memory. The two most common techniques for extending the addressing capability of instructions are the following.

Indirect addressing

In this mode of addressing, an instruction-contained storage address does not specify the location of an operand; instead, it specifies a location that contains the address of the operand. Therefore, the whole length of a memory cell is devoted to indicating an address. This means that, in the case of a memory cell length equal to 16, 2^16 distinct addresses can be distinguished. Notice that if both direct and indirect addressing modes are desired, one bit in the instruction must be devoted to indicating whether the address specified in the operand field is to be interpreted as the address of the operand or as the address of the cell containing the address of the operand.

Indexing

An index register is a hardware register, usually of the same length as the memory cell, whose content can be
added to or subtracted from the address written in the operand-field of an instruction for obtaining the true
address where the operand will be found. Of course, the instruction code must contain a bit indicating whether or
not indexing must take place.

Besides extending the addressing capability of a computer, indexing can greatly simplify programming by facilitating the handling of loops, arrays, and other repetitive processes. Some computers have a number of index registers and facilities for modifying and using each of them separately.

Microprogramming

Many modern machines are designed applying the concept of "microprogram control". In such machines each instruction, instead of being used to initiate control signals directly, starts the execution of a sequence of "microinstructions" at a more elementary level. The microinstructions are usually stored in a special read-only storage unit. Thus, the instruction repertoire of a microprogrammed computer can be altered to suit particular requirements by simply changing the stored microinstructions.

2.9 INPUT-OUTPUT DEVICES

Operation
An input operation begins when an input instruction is read by the control unit and a command is sent to
an input device to read a set of words or "record". Reading takes place by having an input medium (e.g., a
punched card) move through the input device. Information is read and converted to the code used by the computer
system. The coded information is transmitted to the internal storage and stored in locations assigned to hold
the input record. The data are then available for use by the processing instructions.

An output operation is essentially the reverse of the preceding one. The data to be written are arranged by
program instructions in storage locations assigned for this purpose. An instruction to perform output causes the
data from the output storage locations to be copied and transmitted to the output device.

An input or output device is directed by a device control unit. This relatively small control unit decodes the
command from the computer control unit and effects operation of the device or devices. In some cases other
operations, such as, for example, checking of transmitted data, are performed.

The connection between the central processor and the device control unit is, in most large-scale computers, via a "channel". This is essentially a control unit supervising a group of input-output device control units. The task of the channel is to control the input-output paths by which data are brought into and out of the primary storage.

The Principal Input Devices


The main input devices are listed below.

Teletype
Information is read from the keyboard or from the well-known paper tape. A typical reading speed is 10 characters (of 8 bits each) per second.

Paper tape reader


Reading speed ranges from 350 to 1000 characters/second.

Punched card reader


The medium where information is written is the well-known card having 80 columns, into each of which a character is written. Typical reading speed varies from 300 to 1200 cards/minute.

Analog-to-digital converters
If an analog quantity is to be processed by digital equipment, an analog-to-digital converter must be connected between sensor and computer. One kind of converter is based on comparing the sample of the input signal (at a given instant of time) with a reference voltage which varies with time. An electronic counter, connected to a clock generator, counts the number of clock pulses which elapse before the reference voltage reaches the level of the analog voltage. Accuracy and speed of analog-to-digital converters are rather variable. Typical values are perhaps an accuracy equivalent to 1 part in 2^10 and a speed of 50,000 samples per second.

The Principal Output Devices

Teletype
Output information is either punched on the paper tape or printed. A typical writing speed is 10 characters/
second.

Paper tape punch


Punching speed ranges from 20 to 150 characters/second.

Card punch
Punching speed ranges from 100 to 500 cards/minute.

Line printer

Writing speed varies from 200 to 1500 lines (of 120 characters each) per minute.

Video displays

They are used as output devices, and also as input devices by means of a suitable light pen.

Digital-to-analog converters

In process-control applications, like many aerospace applications, heavy reliance is placed on digital-to-analog converters, which make it possible to convert a sequence of digital data into a continuous signal. In general, the realization of digital-to-analog converters does not present great difficulties.

Interrupts and cycle-stealing

Consider the simple case of a fast paper tape reader. If a reading speed of 1000 characters/second is assumed, the time required for transmitting a character from the device to the memory of the computer is 1000 microseconds. Almost all of this time is spent in mechanical and electrical operations leading to the writing of the information read from the paper tape into the interface register of the device. By contrast, only a few tens of nanoseconds are sufficient for transmitting information from the interface register to some register of the arithmetic unit (or even, in some cases, to the memory data register), and a time of the order of some hundreds of nanoseconds is required for storing information into the memory from the memory data register.

A way to prevent the computer from remaining idle during the operation of the input device is described below in its successive stages.

(1) When in a program the order of reading a record from the paper tape is to be given, an instruction is
written which starts the operation of the reader. It is the control unit that interprets this instruction
and delivers to the device the command starting the motion of the tape and the other mechanical and
electrical operations of the device.

(2) As soon as the starting signal has been transmitted, the control unit, without waiting for a character to
be read from the tape, picks up another instruction, interprets it and starts its execution. Thus, while
the reading operations are performed, the program execution is continued in parallel.
(3) As soon as a character has been read from the tape and transferred to the interface register of the device, the latter sends a reading request to the control unit through a suitable interrupt line. Upon receiving an interrupt request, the control unit stops the background program and begins the execution of a routine which transfers the datum from the interface register to some register or memory cell and possibly performs other operations (a sketch of such a routine follows this list).
This interruption routine has been written in some memory area. The first cell of this area must be known to the control unit. This is achieved by writing in a fixed memory cell the address of the first instruction of the interruption routine.
(4) At the end of the interruption routine, a jump is executed back to the background program, exactly where the background job was interrupted.

In order to reduce the time spent in input/output operations, sometimes all the operations listed in stages (3) and (4) are performed in hardware without the intervention of an interruption routine (direct memory access: DMA). In this case the transfer is performed through special channels which steal time slices from the control unit whenever necessary. During each stolen time slice one transfer is performed.

The computer logic performing the direct memory access is basically independent of the logic involved in the
programmed transfer. The main point is that the DMA does not perform the transfer via a register of the
arithmetic unit. Rather the transfer is performed via the memory data register directly with the computer memory.
Since the program execution is not involved in the DMA transfer, the computer working registers are not disturbed.

This kind of transfer is also known by other names, such as data channel, data break, and cycle stealing
transfer.

2.10 SOFTWARE

Software is the collection of programs and sub-programs associated with a computer which facilitate the programming and operation of the computer. These service programs do not solve the user's problem directly, but are employed as components of his program. They are generally designed not by the user, but by the system programmer. The basic elements of software are listed below.

Assemblers
Most programming today is not done in machine language, which is cumbersome and lengthy. Assembly
systems represent the first step for overcoming the disadvantages of machine language. Programs are written in
an "assembly language" which is very similar in structure to the machine language but differs from the latter
essentially for two reasons:

(a) the programmer can specify the instruction code by a mnemonic symbol rather than a numerical code;
(b) the memory cells can be specified by symbolic names rather than their addresses.

An assembly language is an example of "source" language. It requires one or more stages of translation to
produce the machine language program. The basic tool for performing this translation is the "assembler" which
is a suitable program (in machine language) "assembling" programs written in an assembly language to produce
machine language programs.

Compilers
Assembly languages have the characteristic that each source language command is represented by exactly one machine language instruction. This is the reason why an assembler is a software tool which is relatively simple to produce.
But programming in an assembly language, although it is less cumbersome than programming in the machine
language, still remains a lengthy task. For that reason, other types of source language have been introduced, which
are characterized by commands (such as the computation of functions or complex operations in floating-point) each
involving many machine language instructions. These languages convey information with a syntax and word
structure similar to that used by the programmer in expressing himself when he describes his algebraic or business
problem. Examples of such languages include FORTRAN, ALGOL, PL/1 and COBOL.

A program written in a high-level language, such as FORTRAN, is translated into a machine language program by means of a suitable program called a "compiler". The compilation process usually involves examining and making use of the overall structure of the program.

Relocation, linkage and loading


At assembling or compilation time, it is sometimes not known where in memory the program will be placed
for execution. Thus, some of the addresses in the instructions cannot be definitively assigned. Therefore, in such
cases the assembler or the compiler produces only relative addresses, or, more specifically, addresses relative to zero,
i.e., addresses which will be used if the first word of the program is placed in the first cell of the memory.

When the program is to be loaded into memory, starting in cell x, it has to be "relocated". This simply means
that x is added to all addresses which have been tagged as being relative.

This holds in particular when the program is composed of a main section and a number of sub-programs. It
is convenient for each sub-program to be compiled separately, so that it can be used in other programs without
being re-compiled. But at compilation time it is known neither where the first instruction of the sub-program
will be placed nor which memory cells in the main section of the program or in other sub-programs will be devoted
to variables which are to be used in the present sub-program. Therefore, a set of sub-programs which have been
compiled separately are to be "linked" together.

Linkage is generally performed by the same program that transfers programs written in machine language from some medium (paper tape or cards, magnetic tape or disc) to the main memory prior to the execution of the program. This program is usually called the "loader". But in many medium- or large-scale computers the two operations of linking and loading are executed separately by two programs, called the "linkage editor" and the "loader", respectively.

REFERENCES

1. McCluskey, E.J. Introduction to the Theory of Switching Circuits. McGraw-Hill, 1965.

2. Marcus, M.P. Switching Circuits for Engineers. Prentice-Hall, 1965.

3. Miller, R.E. Switching Theory. John Wiley, 1965.

4. Flores, Ivan Computer Organization. Prentice-Hall, 1969.

5. Gear, C.W. Computer Organization and Programming. McGraw-Hill, 1969.

6. Foster, Caxton C. Computer Architecture. Van Nostrand, 1970.

7. Chu, Yaohan Introduction to Computer Organization. Prentice-Hall, 1970.

8. Beizer, Boris The Architecture and Engineering of Digital Computer Complexes. Plenum Press, 1971.

9. Stone, Harold S. Introduction to Computer Organization and Data Structures. McGraw-Hill, 1972.

10. Weitzman, Cay Aerospace Computer Technology Catches Up with Ground Gear. Electronics, pp. 112-119, September 1972.

CHAPTER 3

DATA ACQUISITION AND COMMUNICATION FUNCTION

Yngvar Lundh

3.1 TYPICAL DEVICES TO WHICH AN AVIONICS COMPUTER IS CONNECTED

An avionics computer is, by definition, part of a real time system, either in the air or on the ground. It
therefore has to communicate with the rest of the system. To communicate is to exchange data in one form or
another. In this chapter we shall discuss various aspects of such data exchange. First, let us briefly review
some typical devices which may be part of an avionics system, and how these would communicate with the
computer in some example cases.

Operator communication is perhaps the most obvious. Human operators will need communication for super-
vision and/or interaction with the system. (The man-machine interaction function is discussed in more detail in
another chapter.) For this there may be a series of switches, pushbuttons, lamps and displays or specialized indicators, pointers or dials. Handles, joysticks, "tracker-balls", as well as light pens or other graphical means may be used for input. For drawing the operator's attention to special situations, alarm conditions, etc., flashing lights, audible tones, bells or even synthesized or prerecorded voice messages are available. All these devices can be used as computer input-output devices, suitably arranged to fit a given set of needs. To relieve the mental load on the operator, it is often
practical to arrange for the computer to request information when needed, rather than wait for the operator to
remember, and to state the choices to be made or to suggest actions already computer-optimized, but needing
operator sanction.

Communication with process variables may typically be arranged as in the following examples. Consider, e.g., navigation and guidance accelerometers. The quantities measured by these devices must be encoded into a form suitable for entry into the computer. These quantities will for example be angular positions of gimbal rings, electric currents in servo compensating devices, etc. The throttle control of an aircraft might be a handle with an angle measuring device giving the throttle position as a number to the computer. Connections to these or more complex sub-systems may all be put into a general class of "sensors".

Exchange of information between aircraft and ground installations may be required for guidance and control
or other purposes. Computers may thus talk to each other over radio links, or a computer may have remote
control of actuating devices, or receive data from remote sensors via radio or other transmission media.

3.2 DATA TYPES, FORMS AND FORMATS

Physical quantities may be represented in an avionics system in two ways: analog or digital. Digital means "numeric", i.e., representing a quantity by a number (of units). Analog is what most of the real world is, namely continuously variable quantities. The name "analog" may be understood as one physical quantity representing another by behaving in an analogous manner. The speedometer needle in a car gives an analog representation of the velocity, while the mileage counter gives a digital representation of the travelled distance.

Data in digital form are discrete in nature. A number consists of a definite number of digits. Each digit may
attain a definite number of values: ten for a decimal digit, two for a binary digit. A binary digit is called a bit.
Most digital systems use the binary number system, where the series of positive integers are

    000   zero
    001   one
    010   two
    011   three
    100   four
    etc.

Decimal numbers may often be used indirectly by representing each decimal digit by a binary code, for
example (straight binary coded decimal or BCD representation):
    DECIMAL   BCD
    0         0000
    1         0001
    2         0010
    3         0011
    4         0100
    5         0101
    6         0110
    7         0111
    8         1000
    9         1001
Many other coding schemes are in use for various purposes.
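As a small illustration, the following C sketch packs a decimal number into straight BCD, four bits per decimal digit, as in the table above (the function name is ours):

    /* Pack a decimal number into straight binary coded decimal. */

    unsigned long to_bcd(unsigned int n)
    {
        unsigned long bcd = 0;
        int shift = 0;
        do {
            bcd |= (unsigned long)(n % 10) << shift; /* one decimal digit */
            n /= 10;
            shift += 4;                              /* next 4-bit group  */
        } while (n != 0);
        return bcd;
    }

    /* Example: to_bcd(409) yields 0100 0000 1001 in binary. */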

A collection of bits representing a number is often referred to as a "word", and the number of bits in each word
as the "word length". A word may be subdivided into groups each representing a decimal digit or a binary coded
alphanumeric character. A string usually of 7, 8 or 9 bits is often referred to as a byte, and may typically be used to
represent a coded alphanumeric character. See section on "Data Transmission".

On transmission paths, within parts of a system or in sequential logical nets, operations may be serial, i.e., bit by bit, or parallel, i.e., one word at a time, or some combination of these.

Since most physical quantities of the real world are experienced as analog, a digital computer needs to have data converted. Various conversion methods are available. Analog-to-digital and digital-to-analog conversion are discussed in some detail in the section on "AD and DA Conversion".

One operation which is necessary when converting an analog quantity to digital form is sampling. That is, taking measurements (samples) at regular intervals (the sampling interval). Each sample is converted to a number. An analog function of time which has thus been converted to a string of numbers is referred to as a time series.

3.3 CHARACTERISTICS OF DATA

For an analog quantity which plays a role in a system we want to deal with, various characteristics are important. The main parameters are:
— Bandwidth: How fast does the quantity vary with time? This can be specified as the (energy versus
frequency) spectrum. "Bandwidth is the frequency range where the signal has significant energy content".
— Range: What are the minimum and maximum values which the variable will attain?
— Accuracy: With what precision are we interested in knowing the absolute value of the quantity? This
can be stated in units (millimeters, degrees, etc.) or as a fraction of the range (per cent, parts per million,
etc.).
— Resolution: With what detail are we interested in observing small variations? This is similar to accuracy, but
refers to differences between two values of the quantity taken within a limited time span, or a limited area
of the range, rather than their absolute value.
— Linearity: To what extent is there proportionality between two quantities where one represents the other?
Linearity may be a highly desirable factor in many cases. In other cases, nonlinear representation may be
desirable. Examples are logarithmic scales, saturation characteristics, etc.


Certain basic relationships exist between the main parameters, which we shall review in the remainder of this
section.

According to the sampling theorem, the full information content of a continuous function of time with bandwidth f is present in a string of samples taken with frequency (sampling rate) fs ≥ 2f. In other words: the original function may be reproduced exactly from a string of samples if the sampling theorem is obeyed. Note, however, the following two practical considerations which are very important:

(a) When sampling a function at a sampling rate fs, the highest frequency actually occurring in the signal must be no greater than f = fs/2. It is not sufficient that the highest frequency of interest is below that limit. If there is noise of a higher frequency, it will be "folded" down into the interesting frequency band. Therefore, a signal must usually be lowpass filtered before sampling.

(b) To say that a spectrum does not contain energy beyond a certain limit is, of course, an approximation. In real life spectra will be shaped by filters having a finite "roll-off". In practice, one must therefore sample at a higher rate than the theoretical minimum. How much higher depends on what residual noise level can be tolerated, and is typically a compromise between filter complexity and further processing complexity of the time series.

Accuracy and resolution are degraded in the conversion process between analog and digital data. However, data are only ever required with a certain specific accuracy and resolution for any specific purpose. Noise in one form or another, including quantization noise, will be tolerated below that limit. In conversion to digital form, the range of a variable is divided into a number of discrete values, i.e., quantized. Linear quantization means that the step or quantum from one value to the next is constant over the whole range of the variable. If the total number of steps is N, the resolution is (100/N)%. For quantization into N steps to be accurate to (100/N)%, the error in the quantization must be less than one half step; "error" then meaning departure from the ideal or nominal quantization "staircase function", see Figure 3.1.

Fig.3.1 Basic digital representation of a variable (numerical value and three-bit weighted binary code pattern plotted against the input variable range A-B)

A binary number of n bits can have 2^n different values, and this represents values with a resolution of N = 2^n levels. For example, to represent a variable with a resolution of 0.1% in digital form means quantizing into at least 1000 levels. This can be encoded by a digital number of 10 bits, since 2^10 = 1024.

If we wanted to represent a function of time having a bandwidth of f = 4000 Hz to an accuracy of 0.1%, we would need to sample at a minimum of fs = 2f = 8000 Hz, and each sample must be represented by 10 bits. The resulting data stream would be 8000 10-bit words per second, or 80,000 bits per second. Straight binary coding, as we have assumed here, is in general an efficient method of coding, i.e., it requires few bits. In practical situations other codes may be chosen, which will increase the data rate in bits/second. For example, if data are to be directly readable by human operators, a lot is to be said for binary-decimal coding, which is still a reasonably efficient coding scheme, but where 0.1% accuracy would mean 3 decimal digits, coded into 12 bits. Many other coding schemes which are in use have an even higher degree of redundancy.
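The arithmetic of this worked example can be sketched in a few lines of C (the program merely reproduces the numbers above):

    /* Minimum sampling rate, word length and bit rate for a
       4000 Hz signal at 0.1% resolution. */

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double f  = 4000.0;                 /* signal bandwidth, Hz   */
        double fs = 2.0 * f;                /* minimum sampling rate  */
        int levels = 1000;                  /* 0.1% of the range      */
        int bits = (int)ceil(log2(levels)); /* 10, since 2^10 = 1024  */
        printf("fs = %.0f Hz, %d bits/sample, %.0f bits/second\n",
               fs, bits, fs * bits);        /* 8000, 10, 80000        */
        return 0;
    }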

Redundancy may be used systematically for various special purposes such as making the system more resistant
to failure, by automatic error detection and correction. These and other practical matters will increase the bit rates
from the theoretical minimum.

3.4 AD AND DA CONVERSION

In this section we shall discuss conversion between analog and digital representation in further detail. Figure 3.1 shows how the range of a variable can be subdivided into N (here = 8) equal steps, and how these can be numbered.

We see that the individual bits of this straight binary code carry a certain weight. The first and most significant bit
tells whether the value is in the upper or lower half, i.e., the weight of that bit is one half the total range. The next
weighs one fourth, etc.

A digital to analog (D/A) converter makes use of this. The input to a DA-converter is a digital number or
"word", the output is typically an analog voltage, i.e., a voltage proportional to the number. Figure 3.2 shows the
principal parts of a DA-converter. The bits control switches connecting or disconnecting "binary weighted" resistors
to a summing amplifier, such that the output voltage is proportional to

(1/2)b1 + (1/4)b2 + . . . + (1/2^n)bn

where bi is the i-th bit of the n-bit number. (Different circuit configurations may be used to obtain the weighting
than the example shown in Figure 3.2.) To understand this, note the following main facts: the high-gain summing
amplifier will produce an output voltage which makes the summing point zero volts. This will be the case when the
current in the feedback resistor Rf just balances the sum of the currents in all the summing resistors.

Fig.3.2 Digital to analog converter (switches controlled by the input bits connect binary weighted summing resistors R, 2R, ..., 2^(n-1)R between the reference voltage and the summing point; a feedback resistor around the summing amplifier produces the analog output voltage)
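The weighting can be mimicked numerically. The following C sketch sums the bit weights for an n-bit input code, with b1 taken as the most significant bit and the result scaled by an assumed reference voltage:

    /* Numerical model of the DA-converter weighting of Figure 3.2. */

    double da_convert(unsigned int code, int n, double vref)
    {
        double v = 0.0, weight = 0.5;          /* weight of b1 is 1/2   */
        for (int i = n - 1; i >= 0; i--) {     /* from b1 down to bn    */
            if (code & (1u << i))
                v += weight;
            weight *= 0.5;                     /* next bit: half weight */
        }
        return v * vref;                       /* analog output value   */
    }

For example, da_convert(5, 3, 1.0) yields 0.625, i.e., 5/8 of the range, as Figure 3.1 would suggest.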

An analog to digital (A/D) converter may make use of a DA-converter in a feedback loop as shown in Figure 3.3. The "control" has the task of finding a binary number ("digital output") which, when converted to analog, is less than one half step different from the input voltage. It can do so by using the information "too big", "too small" or "tolerable" which comes from the comparator. Various strategies or methods are used for this purpose, and are readily implemented by electronic circuits.

Let us now as an example consider an AD-converter working on the principle of "successive approximations". The conversion will then take place in n steps for an n-bit number:
— Step 1: The controller tries the number 100 . . . 0, i.e., one half the range. The comparator decides whether this number is too large or not.
— Steps 2, 3, . . ., n - 1: If the previous number was too large, reset the "one" entered in the previous step to 0. In any case, set the next bit to "one". Then decide whether this number is too large or not.
— Step n: If the previous number was too large, reset the bit entered last.

With this conversion principle, the AD-conversion time will be

n · [(DA-settling time) + (comparison time)]

Typically, the conversion time is a few microseconds per bit.
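A hedged sketch of such a controller in C, reusing the da_convert model above in place of the real DA-converter and comparator of Figure 3.3 (vin and vref are a normalized input and reference):

    /* Successive-approximation control: n comparisons for n bits. */

    unsigned int sar_convert(double vin, double vref, int n)
    {
        unsigned int code = 0;
        for (int i = n - 1; i >= 0; i--) {
            code |= 1u << i;                    /* try this bit at "one" */
            if (da_convert(code, n, vref) > vin)
                code &= ~(1u << i);             /* too large: reset it   */
        }
        return code;                            /* digital output        */
    }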


Fig.3.3 Analog to digital converter (a control unit drives a DA-converter whose output is compared with the analog input; the comparator's verdict steers the control until the digital output is within tolerance)

For the general case, this is the most efficient scheme. There are, however, special cases where other methods are more desirable or more efficient. Assume for example that it is known that the variable has only changed by a small amount since the last conversion. It may then be more efficient to replace the successive approximation type controller by a bidirectional counter, counting up or down as decided by the comparator.

This latter principle actually opens up for review the fundamental characteristics of bandwidth and accuracy discussed in the previous section; there are practical cases where incremental coding schemes have merit (i.e., coding the difference from one sample to the next). This becomes especially attractive at sampling rates significantly higher than twice the bandwidth. The theory of such coding schemes has been analyzed in great detail in connection with "delta modulation". A general limitation of incremental coding is that a single error — or the lack of a defined starting point — makes the absolute value uncertain. This is tolerable in many cases, e.g., when the frequency response does not have to include "DC". For normal conversion (i.e., non-incremental), the main performance characteristics are accuracy (or resolution) and conversion time. For the DA-converter in Figure 3.2, the conversion time is the time from when the digital number is made available at the bit inputs until the analog output value has settled to within one half step of its nominal value. This is referred to as "settling time". DA-conversion is much faster than AD-conversion in the typical case.

Let us return for a moment to the principle of digital representation of analog variables as depicted in Figure 3.1. Nothing was said about the polarity of the input variable. The normal procedure, in the case that the input can be both positive and negative, is to make the most significant bit a sign-bit. Note that the various representations of positive and negative numbers can be obtained by subtracting half the range (binary 100) from each value:

    POSITIVE          POSITIVE AND NEGATIVE
    111   MAX         011
    110               010
    101               001
    100               000
    011               111
    010               110
    001               101
    000   MIN         100

In the case of 1's complement notation, "plus and minus zero" have equal value. This contraction of the scale is obtained by making the weight of the most significant digit one quantizing step less than half the range.
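The subtraction of half the range can be expressed directly. The sketch below converts a 3-bit unsigned code into the signed value of the table; for straight binary, subtracting binary 100 amounts to complementing the most significant bit.

    /* Offset-binary interpretation of a 3-bit code, as in the table. */

    int offset_to_signed(unsigned int code)   /* code in 0 .. 7        */
    {
        return (int)code - 4;                 /* 111 -> +3, 000 -> -4  */
    }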

The reference enters into all conversion schemes in an essential role, namely that of the "yardstick", or the
definition of the measuring unit. If it is required to have "zero" correspond to zero voltage (i.e., the summing
point in Figure 3.2), the most significant summing resistor may be referred to a secondary reference voltage of
opposite polarity.

In the discussion so far, we have assumed that the variable to be converted is intermediately represented by an analog voltage. Although probably the most common, this is by no means always the case. One frequently occurring type of variable is angle. Shaft position is, of course, important in many automatic systems. In servo systems, for example, shaft positions are often represented by relative amplitudes between two or three AC voltages having specific phase relationships, as generated by "synchros" and "resolvers". Circuits are available for direct conversion between synchro and resolver signals and binary coded representation — thus exploiting the high accuracy available in these relatively rugged and compact electromechanical devices, and making shaft position readily available to the digital computer. This method is of special interest in mixed systems, i.e., systems where analog servo loops and digital circuits are used together.

The disc encoder is a device which generates position-codes directly. The principle is indicated in Figure 3.4. Light and dark areas, representing zeros and ones, are laid out in a metallized or opaque pattern for reading by electric pick-off brushes or by optical sensors. One brush or photocell for each circular band gives one bit of the code. Figure 3.4(a) indicates how a straight binary weighted code may be obtained.

An important practical consideration for this type of encoding may be seen as follows. Assume the disc rotating very slowly past a transition, say from "0" to "15". Exactly at the transition all bit sensors must change from 0 to 1. This would require perfect alignment of the sensors, unless some small area of uncertainty were allowed where some, but not all, bits have changed. However, in such an uncertain area any code might be expected; in other words a reading error of ±50% might occur.

Fig.3.4 Shaft position coding: (a) straight binary code; (b) Gray code

This highly impractical ambiguity is avoided by using a code such as the Gray code, Figure 3.4(b). The Gray code is one of a class of codes whose main virtue is that one step's variation in the variable to be encoded will cause a change in one bit only. The alignment requirement of the sensors is thereby only one half step, a meaningful value consistent with the actual coding accuracy.

    DECIMAL   BINARY   GRAY
    0         0000     0000
    1         0001     0001
    2         0010     0011
    3         0011     0010
    4         0100     0110
    5         0101     0111
    6         0110     0101
    7         0111     0100
    8         1000     1100
    9         1001     1101

The number of bits required to obtain a given coding accuracy has not been changed from that of the straight binary code. We have, however, had to pay something to obtain the so-called "unit distance" property: the bits are no longer weighted, i.e., least and most significant bits have lost their identity, and all bits are equally significant. The most important consequence of this is that normal binary arithmetic rules cannot be used. Computing devices therefore will have to convert to straight binary before further processing, or else apply more complicated arithmetic algorithms — whichever is the more attractive from a system's point of view.
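The conversion itself is simple in logic. A minimal C sketch, for any word length:

    /* Binary-to-Gray and Gray-to-binary conversion. */

    unsigned int bin_to_gray(unsigned int b)
    {
        return b ^ (b >> 1);        /* each bit XORed with its neighbour */
    }

    unsigned int gray_to_bin(unsigned int g)
    {
        unsigned int b = 0;
        for (; g != 0; g >>= 1)
            b ^= g;                 /* running parity of the higher bits */
        return b;
    }

    /* Example: bin_to_gray(6) == 5, i.e. binary 0110 -> Gray 0101,
       in agreement with the table above. */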

Disc encoders are available in a variety of sizes, shapes and accuracies, down to the most extreme minute- or even second-of-arc tolerances. In employing devices such as these, a word of caution is in order, however. Although the functional principle of these devices is quite straightforward, certain compromises may have to be made in their practical operation. One should therefore very carefully consider parameters concerning ruggedness, reliability, power consumption and dissipation, environment and sampling rate.

In the analog-digital encoding discussed so far, we have assumed a linear quantizing "staircase" such as shown in Figure 3.1. There exist, however, cases where it is desirable to have the resolution vary over the range. One may, for example, be more interested in the fine detail of the small "hiss" than in that of the big "bang". One then compresses the high-valued end of the range into fewer encoding steps, as indicated in Figure 3.5. Similarly, one expands the higher valued end of the range when decoding. This process, called "companding", is in most prominent use for enhanced transmission economy of voice signals (i.e., "more understandable voice per transmitted bit"). Companding methods may be of interest wherever data volume stresses the economic aspects of the coding method.

Fig.3.5 Companding (coding steps versus the variable to be coded, with the high-valued end of the range compressed into fewer steps)

3.5 COMPUTER INTERFACING

We have now discussed how quantities of the real physical world can be expressed in digital form such that they may be handled by digital circuits, such as a digital computer. Let us now consider how a "device" is connected to a computer. As mentioned earlier in this chapter, a device may be a data transfer unit for almost anything — from the angle positions of a gimbal-suspended gyroscope, to the output of a strain gage, to a guidance command code to be transmitted over a radio link. The computer sees them all as "devices", and is mainly interested in whether they produce or consume data, i.e., are input or output devices; in their data rate, their response time, and the codes and data format which they use.

Take as an example a type of device which is connected to many computers: the electric typewriter. We quickly see that this is not one, but two devices: an input keyboard and an output printer. What must the computer do to print a line of text? It must supply the characters, in codes understandable by the printer (see the following section), one by one: letters, punctuations, carriage returns, line feeds — all in the right order. The speed at which the codes arrive is, however, limited by the movement of mechanical parts, and the printer can only accept the next character when the former has been duly recorded. Synchronization information is therefore needed by the computer. This need is even more accentuated for the keyboard. Many factors determine the rate at which the keys are pressed — from contact bounce in switches to the operator's skill and "thinking time".

Let us first establish certain features common to all devices. To the computer, the outside world is a number of "peripheral devices", Figure 3.6. In some form, the communication to a device may be split up into data, device code or actuating signal, and status or "sync" signal. How general in nature each of these will be depends on the complexity of the entire system, as well as on the individual device.

Fig.3.6 Computer communication with outside world (data, device code and status lines between the computer and a peripheral device, with possible further communication beyond the device)

The device code from the computer activates the device, for example to print the character whose code is simultaneously presented on the data lines. A "ready" signal is raised by the device whenever it can accept a new data-set (e.g., character) and lowered when it is "busy". Note that this concept is quite general, and applies equally well to input and output devices. The parameters important to the computer are:
— Input or output.
— Data format and code (word-length, binary, BCD, ASCII (the American Standard Code for Information Interchange, the dominant code for alphanumeric text), Gray, etc.).
— Transfer time (how long data and device code must be available for transfer between computer and device).
— Transfer rate (how often transfers will be required: maximum, minimum and average).

Let us return for a moment to the printer. The computer puts a character code on the data lines and the "printer" code on the device lines. The printer will accept the character, and lower its "ready" signal. The computer thereby knows it has to wait for the "ready" before it can present the next character. This, however, will take several tens of milliseconds, even for a fast printer, in which time the computer might have done thousands of useful computer operations. It cannot a priori know exactly how many, though. To utilize computer time efficiently, while communicating with peripheral devices at the speed they require, is a fundamental problem to which there exist a number of solutions. We shall discuss the main principles in this section. Note that the speed requirement is normally set by the peripheral device, either for economic reasons ("to keep the printer moving") or for other reasons associated with the system, i.e., a sampling process must be carried out with a clock's precision, an operator command must be obeyed with a minimum response time, etc.

The simple solution of letting the computer wait is rarely acceptable. The main synchronization methods can be listed as follows, in order of rising complexity and data rate capacity.
(a) Let the computer wait.
(b) Compute for some "safe" maximum time, then wait.
(c) Keep computing, but sample the status of the device at intervals, and attend to it whenever it is ready (see the sketch after this list).
(d) Keep computing, but let the peripheral device "interrupt" for attention.
(e) Let the device communicate directly with the computer memory without direct program control.
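As a hedged sketch of method (c) in C, with a hypothetical status register, "ready" bit and device data register:

    /* Method (c): keep computing, but sample the device status. */

    #define READY_BIT 0x01

    extern volatile unsigned char *status_reg;  /* hypothetical registers */
    extern volatile unsigned char *data_reg;
    extern void do_some_computation(void);      /* background work        */
    extern unsigned char next_character(void);  /* next datum for output  */

    void main_loop(void)
    {
        for (;;) {
            do_some_computation();              /* keep computing         */
            if (*status_reg & READY_BIT)        /* device ready?          */
                *data_reg = next_character();   /* attend to it           */
        }
    }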

"American Standard Code for Information Exchange", dominating in use for coding alphanumerical test (letters, digits, punctuation,
etc.).

Methods (a)-(d), with variations, are known as IO-controlled ("in-out") communication with peripheral devices. Method (e) is known as "direct memory access" (DMA). IO-control requires the computer to issue a specific "IO-instruction", such as typically "transfer contents of the accumulator to output device number d". Somehow, there must be a synchronizing mechanism, and that may be arranged in various ways, e.g., methods (a)-(d). Methods (a) or (b) would halt the computer temporarily, until the device became ready. Method (c) would require a variation of the IO-instruction permitting a conditional jump in the computer program depending on the status of the device. Method (d) requires specific status change signals from peripheral devices to be able to "generate interrupts". An interrupt means that the program is suspended, the computer necessarily retaining information which permits it to resume where it left off after the interrupt has been attended to. Many variations of interrupts and interrupt handling capabilities are available, and this has been a field of much fruitful ingenuity over the years. Note, however, that all of these IO-methods, interrupt or not, still defend their place for some applications. No method is "perfect" for all applications. Our further discussion will be limited to a few points of special interest.

Note that none of the IO-control methods mentioned can ensure perfect efficiency for both computer and IO-device in the general case. One or both normally has to waste some of its time. Even a sophisticated interrupt system will be wasteful to some extent. Assume that a computer has only one IO-device. Data can then be transferred with maximum efficiency. Such a case is, however, rare. Probably there would be at least two devices, say one for input and one for output. We then have the wicked possibility of coinciding interrupts. This problem is not difficult to deal with. However, the computer must somehow determine which one of several possible interrupts it has received, and if it receives coinciding interrupts it must be determined which to attend to first. Although coincidence may occur infrequently, the system must always be prepared to handle that situation. In practical systems this uses up part of the computer's capacity, and makes it impossible to guarantee immediate response to all interrupts. In more extreme cases, the computer spends most of its time shifting between interrupt priority levels, thus impairing the efficiency with which it can deal with other tasks. An even more serious problem is posed by the logical implications of multiple interrupts from the programmer's point of view. This problem is dealt with in further depth in another chapter.

Direct memory access (DMA) is a good solution in cases where the data transfer rate is so high that too much computer capacity would be taken away for administration by IO-control. A computer basically consists of one or more central computing units and one or more memories. Instructions and data are stored in memory and fetched from memory via a memory access channel, where computer words of some given word length, say 16 bits, are transferred one at a time. A DMA-channel is another subscriber to the memory, which may "steal" memory cycles while the computer is kept waiting. The computer need not "worry", i.e., no logical provision need be made in the program, but its operation will be delayed by the "stolen" memory cycles.

The device communicating over a DMA-channel needs to provide a memory address together with its "memory cycle request", and it must provide or accept the data-word, as the case may be (input or output). The program will communicate with peripheral devices by finding or placing, respectively, the data in "IO-tables".

This method of communication is of first interest where large amounts of data, i.e., many computer words, are to be transferred. This will then take place as a block transfer, in which the DMA-device has the capability to be started at some table beginning location and to increment addresses for each transferred word up to an ending location of the block or table. The complete block transfer will typically be started (and perhaps be stopped or "signed off") by "conventional" IO-instructions, where the "data" typically specify beginning location and block length, rather than actual data. The DMA-control will then appear as an IO-device. "Ready" and "busy" signals will apply to the complete block transfer. In addition, other status information such as address count, errors, abnormal conditions, etc., will be needed by the administrative programs, and can be sampled while the transfer is in progress.

Real time is a parameter important in many control systems. A clock is then arranged as a peripheral device. E.g., time of day may be read in coded form. The clock may be arranged to generate interrupts at specified times, for example every millisecond.

3.6 DATA TRANSMISSION

Data can be transmitted from one place to another over lines or radio channels similarly to voice communications. Direct transmission of analog data has very limited application for a number of reasons. In the more general case, data are transmitted in digital form, which again is modulated upon some carrier. The most interesting fact is perhaps that it is technically within the state-of-the-art to transmit data at any conceivably demanded rate (say up to a few gigabits per second), and to ensure its correctness to any specified degree. The more extreme demands may, of course, rule themselves out for economic reasons if nothing else.

In a limited text such as this, we shall again limit our discussion to the most central and important facts. A data transmission channel has the task of accepting binary words of a certain word length and rate at one end, and presenting them at the other with only a specified amount of loss, error and delay. The necessary functions, some of which are optional, are depicted in Figure 3.7: a stream of words is buffered, may be converted to another code,
and transformed to a stream of binary digits occurring at a certain bit rate. These are modulated on to some carrier
system which is used on the actual transmission medium, which may typically be lines, radio links or laser beams.
For example, the output of the modulator may be a signal confined to the spectral and other requirements
standardized for a telephone channel. At the other end the signal is demodulated, etc., by a similar, inverse chain
of functional units.

Let us look at the functions in some more detail. Buffer circuits are necessary to match the power levels and timing requirements of the sending and receiving devices to those of the transmission channel for a parallel word. Various interfaces as well as many parameters have been standardized by several standardizing bodies, such as, for example, CCITT, CEPT and ISO, and are used extensively for many standard data transmission tasks.

"Recorder" and "decoder", is here a generalized name for the following functions:
— Adding special bit patterns for "frame sync", necessary to recover the original word pattern in a continuous
bit stream.
— Adapting the bit rate and possible synchronization deficiencies of the transmission channel to the actual rate
of incoming and outgoing words.
— Adding redundancy to the incoming information bits, for detection and correction of errors introduced by
noise, etc., in the transmission channel.
— Encryption.

These functions may take on a variety of forms. They may also be included in the sending and receiving
devices, rather than the transmission channel. Their purpose is to achieve perfect transmission of data over a non-
perfect transmission medium, or more accurately: to obtain a specified maximum error rate in spite of noise and
distortion. (Encryption is used for protection of data so that they cannot be understood by unauthorized parties).

Speed adaption is required if the transmission channel is built for a specified fixed bit rate higher than that required. Seen from the receiving end, there must be a safe way of identifying each bit as distinct from the previous and the next bit. Further, there must be a safe way of determining which bit is which, so that one can re-edit the bit stream into words. Further, one needs to identify each word, etc. Without going into detail, let us note that these requirements will take their share of the capacity of the transmission channel. To meet these requirements with satisfactory efficiency, performance and simplicity is and has been the field of much ingenious design. Let us note that a specified bit rate of transmission has a different meaning depending on the point in the transmission channel to which we refer.

A more subtle speed adaption may be required in some transmission systems which apply intermediate storage and/or time division multiplexing: if various parts of the transmission system employ non-synchronized clocks, "slippage" may occur. Circuits can be included which automatically supervise the information stream and eliminate loss of transmitted information by "stuffing". That is: some redundant words are included at regular intervals at the transmitting end. These are removed, or more are inserted, along stations of the transmission path, to ensure that slippage does not cause loss of "paying" information.

Some circuits treat data as "messages" of limited, fixed or variable length. In complex nets of many stations,
messages are "switched" i.e., routed to their destinations. Sometimes "store and forward" techniques are applied.

Error detection and correction may be achieved by special coding. The simplest technique is "parity checking". In each word, an extra bit is included. It is made one or zero, depending on the other bits of the word, such that the total number of ones in a word is odd ("odd parity"). All received words are checked to see if they have odd parity. If they do not, an error must have occurred in transmission. If the received parity is correct, one has some assurance that there were no transmission errors. A double error may, however, go undetected. There is therefore a finite probability of undetected errors. To reduce this probability, one might include more "check-bits". By thus choosing the degree of redundancy and the pattern in which it is employed, any specified probability of undetected errors can be met in a given noise environment. "Cyclic block coding" is a generalized method of high redundancy coding. It is easy to implement, can be applied to any degree of protection, and has well predictable performance if the noise characteristics are known.
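A minimal sketch of odd-parity generation and checking for a word of 7 data bits plus 1 parity bit (the bit positions chosen are our own):

    /* Odd parity: the transmitted word must contain an odd number
       of ones. */

    unsigned int ones_mod2(unsigned int w)    /* number of one-bits, mod 2 */
    {
        unsigned int p = 0;
        for (; w != 0; w >>= 1)
            p ^= w & 1u;
        return p;
    }

    unsigned int add_odd_parity(unsigned int data7)    /* 7 data bits in  */
    {
        return (data7 << 1) | (ones_mod2(data7) ^ 1u); /* force odd count */
    }

    int received_ok(unsigned int word8)       /* 1 if parity is still odd */
    {
        return ones_mod2(word8) == 1u;
    }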

It is also possible to employ redundant coding to achieve error correction, i.e., transmission errors can be automatically corrected at the receiving end. The simplest concept is perhaps to send each part of the message three times, and determine the correct part by "majority voting". More sophisticated and efficient methods are also available. Common to all is that they do not give 100% protection, but again: by choosing the degree of redundancy and the pattern in which it is employed, any specified protection may be achieved, in terms of probabilities.
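For the triple-transmission scheme, bit-wise majority voting over the three received copies reduces to a single Boolean expression; a sketch:

    /* Each output bit takes the value held by at least two of the
       three received copies a, b, c. */

    unsigned int majority3(unsigned int a, unsigned int b, unsigned int c)
    {
        return (a & b) | (a & c) | (b & c);
    }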

Fig.3.7 Data transmission channel (transmitting site: data buffer, coder, serializer and modulator producing the bit stream; transmission medium; receiving site: demodulator, serial-to-parallel converter, decoder and error detector, data buffer)

In general, error correction is much more costly, redundancy-wise, than error detection. An interesting and practical compromise, if two-way communication is available, is error detection combined with retransmission of erroneous messages upon demand.

It should not be forgotten that many applications will tolerate a certain amount of errors. In any case, the
actual need for error free transmission should be properly analyzed before going to an error detection or error
correction system. Well established theory is available for proper quantitative design of error protecting systems
in data transmission.

Similarly, the data may be encrypted, that is: special coding and decoding devices may be included to make the information unintelligible to a third party listening in on the transmission channel.

Standards are especially important in data transmission systems. Many standardizing bodies such as ISO,
CCITT, CCIR, CEPT and others have issued standards and recommendations for numerous variables from bit rates,
to alphanumeric codes, from time division multiplexing, to modem interfacing.

3.7 THE PROGRAMMER'S VIEW

Although it is simple in principle to connect a peripheral device to a computer, there are numerous details
which must be logically correct and efficiently managed for every information transfer. These details, which fall
into the main categories coding, timing, editing and management belong to the more tedious and critical logic
problems in computer system design. Some of the problems are common to all devices, but every device has its
own special requirements.

As a programmer, one may have written a routine to compute a function and store the results in a table in memory. These numbers can probably best be stored in the format used by the arithmetic unit of the computer. If a graphical display of the function and a printed table are wanted, the internal table must be converted into two new and different formats: one to suit the digital to analog conversion parts of the graphical display device and one to suit the alphanumeric code used by the printing device. Further, some scaling, editing and possibly some interpolation and some rearrangement of data will be required to make efficient use of the capabilities of the devices and to give the results a neat and easily understandable appearance. It will also be useful to produce some additional information such as axes, "tick marks", and headings. Since the two devices are separate and independent, they might be running concurrently, while the central computer at the same time performs these conversion, editing and other functions. To do this, however, some management is necessary.

To avoid having to go into every little detail every time, certain of these functions are usually programmed once and for all in such a way that they may be used for various applications. These various programs, termed "device drivers", "interrupt handlers", translating and conversion routines, etc., are often combined with other system programs into a common program called the "operating system".

Ideally, the system programs of a computer will permit the programmer to worry only about the details which are specific to the task at hand, to let him specify his desires easily, and to allow maximum utilization of the devices which are employed by the system. These problems are discussed in depth in another chapter.

Selected Reading for Further Detail

General logic, number representations:

1. Richards, R.K. Digital Design. Wiley, 1971.

AD- and DA-conversion:

2. Hoeschele, D.F., Jr Analog-to-Digital/Digital-to-Analog Conversion Techniques. Wiley, 1968.

Delta modulation, companding:

3. Betts, J.A. Signal Processing, Modulation and Noise. English Universities Press, London, 1970.

Data transmission, error protection:

4. Martin, J. Teleprocessing Network Organization. Prentice-Hall, 1970.



CHAPTER 4

OPTIMIZATION

Yngvar Lundh

4.1 THE OPTIMIZATION PROBLEM

An avionics computer system, like any other engineering product, needs optimization. In our case we may somewhat more specifically define the process as:

For a given task, to seek a device which on the one hand solves the problem with adequate performance and reliability, and on the other hand requires minimum space, weight, power and cost in some combination.

First let us realize that this optimization process may be different depending on who does it: the system designer or the computer designer. The entire system's point of view, again, may or may not permit complete freedom in dividing the different parts of the job between computers, special units, hardware, software and so on.

For our discussion let us assume that the solution is not constrained by a limited choice of standard sizes, shapes and forms, or by choices made by someone else, past history or bad fortune. Although such ideal situations rarely occur, let us still try to seek out the ideal trade-offs in order to come closer to them.

One guideline may be deduced already from this introductory consideration: it is a virtue in itself, in this world full of imperfections and compromise, that a complicated system consist of individual, small sub-units which have readily defined functions and connections to their environments, such that they may be replaced, improved and perhaps even removed with minimum effect on the rest of the system.

For a computer based system to have adequate performance will, as a minimum, mean that all the logic and
computational functions required can be done in the time demanded by the system requirements. The designer
easily gets into a vicious circle here, because he does not necessarily know all the logic and arithmetic functions
required until the bulk of the design work is already done. But long before then he must have assumed or
made a number of choices. These may later prove to be far from optimal. For example, if the choice were between
computers A and B, he may have to actually program many functions before he knows whether the computing capacity
is adequate or grossly over-estimated. An over-large computer sitting idle much of the time rarely
improves the system in any way, but may be a burden. Clearly, therefore, methods for making the right choice early
are in demand.

It will always be necessary to break a complicated system down into smaller parts for various purposes. Each
part can then be defined, specified and optimized individually during the design process. As more details
become clear during this process, conflicts may arise because of unforeseen details, and there will be choices to be
made which involve more than one part. Often these choices are such that simplification of one part means compli-
cation of another. During development one has a dynamic situation: some decisions are easy to change later, some
are difficult. Since the various parts are often to be made by different groups of people, companies or establishments,
contractual and various, perhaps irrational, viewpoints often tend to remove further optimization from the strictly
technical level. This can easily lead to a sub-optimal solution, and it may be regarded as part of good system
planning to be aware of this feature of the practical world and to seek the solutions which are the least prone to
non-technical optimization. Without trying to offer further general guidelines, let us offer some
pieces of experience which have proved useful.

Partitioning of the system is extremely important. It is more important that the interfaces between parts are
simple, functionally logical, well definable and well defined than that the total sum of all parts is an absolute
minimum. We are here referring to the partitioning done at an early planning stage of a system being developed.

Design jobs to be done by people depend highly not only on their competence but also on their motiva-
tion. It therefore definitely contributes to a good solution if the partitioning, both of system functions and of
development jobs to be done, can be made with this in mind. The best criterion for this is often the partitioning
which minimizes the need for communication between people (not the communication, but the need for it). Among
other things, this implies that the size of each task be such that it can be managed by one man who is master of all
the problems on this level, and who may safely refer problems on other levels to other people.

Let us not go further into management considerations, but only note that various decisions on the technical
level cannot be seen in isolation from how the detailed development of technical solutions is to be undertaken, if a
really optimal solution is to be expected in the end.

4.2 IMPORTANT PARAMETERS

Let us turn now to some specific important parameters which are useful to identify and consider in optimiza-
tion of a computer based system.

Logic speed is a "technological factor" which characterizes logic circuits. It is measured as the maximum ("clock")
rate at which a circuit can accept pulses, for example to be counted. One talks, for example, of "30 MHz logic". Another
useful measure is the logic stage delay, i.e., the time from when the input signals are presented to a single gate circuit
until the gate produces its final output. A typical value is several nanoseconds.

Computing speed is a very important factor, which unfortunately is difficult to measure exactly. It will have
a different meaning depending on the computer. One measure is the number of machine instructions which can be
executed per second, the instruction rate. However, execution time is normally different for different instructions.
An average may be used, but this approximation is application dependent.

Machine instructions are also, of course, different from one machine to another - some do more useful work
per instruction than others.

It is therefore almost impossible, and inconsistent with the continuous movement in the state of the art, to
define a universal measure of computing speed. After considering the factors instruction rate and instruction
power (of which word length is one, very coarse, indication), the next thing to do is to actually program some
typical, preferably frequently occurring functions, and find out what the computation time is. One function
frequently used for comparison is the computation time for calculating the square root of a number. Other functions
may, of course, be more relevant for specific applications.
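As a hedged sketch of such a benchmark (in Python rather than any particular machine or assembly code; the routine and iteration limit are assumptions for the example), one may time a square root computed by Newton's method over many repetitions:

import time

def newton_sqrt(x):
    """Square root by Newton's method: a typical benchmark function."""
    guess = x if x > 1.0 else 1.0
    for _ in range(30):                    # ample for convergence
        better = 0.5 * (guess + x / guess)
        if abs(better - guess) < 1e-12 * guess:
            break
        guess = better
    return guess

n = 100_000
start = time.perf_counter()
for i in range(1, n + 1):
    newton_sqrt(float(i))
elapsed = time.perf_counter() - start
print(f"{elapsed / n * 1e6:.2f} microseconds per square root")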

Memory capacity is another important factor, which is easier to characterize and measure.

The principal parameters are


— Size; N words of B bits each.
— Access time; the time from when the address is specified until the desired word content is produced, for read operations.
— Cycle time; the minimum time taken per successive operation.

Note that

(access time) < (cycle time)

These times may be different for read and write operations.

"Random access" means "access time is independent of the address and the order in which addresses are
called". Certain memory types are serial in some way, which mean that the access time is much shorter when the
words are accessed one by one in a given order. If an arbitrary word is sought, one has to wait for an arbitrary
part of a "multicylce" or "revolution". This is referred to as non-random access type of organization. The access
time for such memories is a function of several variables and of the way in which the memory is used, rather than
a single figure.
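A minimal sketch of this, assuming drum-like parameters invented for the example: the expected random access time is half a revolution, while sequential access costs only one word time.

# Expected access time of a rotating ("serial") memory. Parameters
# below are assumptions for the illustration.
rpm = 3600                     # assumed drum speed
words_per_track = 1024         # assumed track capacity

revolution = 60.0 / rpm                    # seconds per revolution
word_time = revolution / words_per_track   # sequential access, per word
random_access = revolution / 2.0           # mean wait for arbitrary word

print(f"sequential: {word_time * 1e6:8.2f} us per word")
print(f"random:     {random_access * 1e3:8.2f} ms on average")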

Since speed is desirable, but hard to come by, larger memory systems normally consist of a combination of
fast and slow memory. Some mass memories are often arranged to be able to transfer large blocks of many words
rather than single words.

Volatility is another quality of importance for many applications: the danger of losing memory content in
the case of power failure or abnormal conditions. A continuum of properties is available in different types of
memory, such as: "completely volatile", "power protected", "read mostly", "programmable read only", "read
only", etc.

Communication capacity is the efficiency with which the computer can exchange data with peripheral devices,
that is data transfer rate, flexibility of synchronization (direct memory access, interrupt, etc.) and adaptability to
various situations and equipment configurations. This concerns both hardware and software facilities.

Survivability and reliability under normal and adverse conditions are important parameters. Many avionics
systems are required to operate, perhaps at reduced performance, with defective parts. This, of course, then is a
requirement of fundamental importance to the system design, specification and evaluation.

Modularity, i.e., partitioning into replaceable parts, is normally a desirable quality.

Physical parameters of primary importance are volume, weight and power dissipation. In addition, numerous
parameters associated with environmental tolerance are important, such as operating temperature range, vibration
and shock, electrical noise, radiation, etc.

Programming complexity is a feature of the utmost importance for computer systems in general. For an
avionics system, the computer programs, even all the "application programs", must be considered to be part of the
system. When a system is complete and working, one might therefore consider it immaterial how much trouble
went into completing the programs. However, programs, as well as hardware, normally need maintenance. Changes
in data characteristics during use may reveal shortcomings or errors. Accommodating additional facilities into the
system, expansions and improvements, all require the programs to be re-examined and changed. The complete
program must therefore not be a mysterious, unreadable tape or jungle of symbols. It should be specified,
described, explained and broken down into functional parts at least as thoroughly as the hardware parts of the
system. Further, there should be facilities available for making changes. This means that new or changed programs
can be written and tested using the same methods (language, editing, testing) as those used originally.

These are the main parameters which characterize a computer based system. Unfortunately, most of them are
difficult to measure. This fact may often place too much weight on the few quantities which can be measured,
overlooking other important factors, and thereby missing better solutions. Were it not for this unfortunate situation,
optimization would be more straightforward, certain — and less of an art.

A comprehensive study of a number of important and representative computers has been made by C.G.Bell
and A.Newell in a voluminous book: "Computer Structures: Readings and Examples" (McGraw-Hill 1971). They
have introduced unified methods for describing computer organization and instruction coding, and they have
identified sets of "parameters" or "dimensions" which are all important for characterizing computers. All these
dimensions can then be said to form a large "computer space". They define the main dimensions of this space to be:
— Logic technology (tubes, transistors, IC's, . . .),
— Word size (number of bits),
— Addresses per instruction,
— Structure (organization and interconnection of main functional units),
— Memory access (random, cyclic, linear, . . . ),
— Memory concurrency (multiprogramming, interrupt handling, . . .),
— Processor concurrency (parallel, serial, multiple instruction streams, . . .).

The complexity and large amount of information collected in that book throw much needed light on many
sides of computer technology and on how to appreciate computers. It also, however, serves to illustrate the fact that
there is no really simple and universal way to describe and compare all aspects of computers.

4.3 TYPICAL TRADE-OFF SITUATIONS

In this section we shall identify some important trade-offs which are useful to know when seeking a way out
of the multitude of possible configurations which a computer based system may be given.

Speed and complexity are related in a clear-cut and often surprisingly general way. Let us look at two examples:

(1) To perform 200 000 additions per second we can use one adder which can do its job in 5 microseconds.
If such an adder were not available, or had an unattractive price, we might consider a slower one, say a
10 microsecond unit. Almost certainly we could replace the fast one by two slow ones plus some extra
circuitry to distribute the load between them.

(2) If a binary full adder needs 10 nanoseconds to add two bits and 5 ns to propagate the carry, a 16 bit
parallel adder would consist of 16 such circuits and might require 10 + 5 × 16 = 90 nanoseconds for
adding two 16 bit numbers, plus perhaps another 20 nanoseconds to set the result into a flip-flop register.
If, however, we did not require this high speed, we could do the job bit by bit, i.e., in series instead of
in parallel. Using the same full adder circuit, we would only require one instead of sixteen. Instead of
110 ns, the total add time might be (10 + 5 + 20) ns per bit, i.e., 35 × 16 = 560 ns for 16 bits.

These slightly simplified examples are typical of the general rule that speed may be traded for simplicity: a
faster unit can be simpler, or a more complex unit can be slower, to achieve a specified processing capacity.
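The arithmetic of example (2) is easily reproduced; a minimal sketch in Python using the figures quoted above:

# Speed/complexity trade-off of example (2), with the quoted figures.
t_add = 10       # ns for the full adder to add two bits
t_carry = 5      # ns to propagate the carry one stage
t_register = 20  # ns to set the result into a flip-flop register
bits = 16

# Parallel: 16 adder circuits, carries ripple through all stages.
parallel = t_add + t_carry * bits + t_register    # 110 ns in total
# Serial: one adder circuit reused 16 times, one bit per step.
serial = (t_add + t_carry + t_register) * bits    # 560 ns in total

print(f"parallel adder: {parallel} ns, using {bits} adder circuits")
print(f"serial adder:   {serial} ns, using 1 adder circuit")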

Instruction repertoire - speed is just a specific example of the same rule. Let us look at another example.
Computer A (big) has a multiply instruction in its repertoire; computer B (small) has not. B has a standard
subroutine, however, which employs B's simple instructions, such as add and shift, to do the same job. The interesting
thing now is "how long does multiplication take", not "which one has hardware multiply" or "what is the instruction
rate".

Instructions - speed - memory size are really three factors which determine computing capacity separately
and can be traded against each other. To compute a sine function one can for example
(a) have an instruction to produce it directly, by a complicated arithmetic unit; or
(b) do a subroutine using simpler instructions, more time (assuming the same logic speed in the arithmetic
unit) and more memory; or
(c) do a table look-up, using extremely simple arithmetic, if any, very little time and a large amount of
memory (sketched below).
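A minimal sketch of option (c), assuming a 257-entry quarter-period table and linear interpolation (the table size and names are illustrative choices, not prescriptions):

import math

# Trade memory for time: a table of precomputed values plus linear
# interpolation replaces any sine instruction or subroutine.
N = 256
SINE_TABLE = [math.sin((math.pi / 2) * i / N) for i in range(N + 1)]

def table_sine(x):
    """sin(x) for x in [0, pi/2] by table look-up and interpolation."""
    pos = x / (math.pi / 2) * N
    i = min(int(pos), N - 1)               # clamp at the table end
    frac = pos - i
    return SINE_TABLE[i] + frac * (SINE_TABLE[i + 1] - SINE_TABLE[i])

print(table_sine(0.5), math.sin(0.5))      # nearly identical values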

Accuracy - word length. Accuracy is, of course, unlimited in a digital computer. However, if one needs more than 16 bit
accuracy in a 16-bit machine, one must program multiple word length operations. Double precision then takes
much more than twice the time of single precision, and so on. That is probably still the most economical approach if the need
arises infrequently. If, on the other hand, higher precision is in frequent demand, a longer word length may be a
cheaper alternative than a higher instruction rate.
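For illustration, a hedged sketch of one such multiple word length operation: adding two 32-bit values held as pairs of 16-bit words, with the carry between the word halves propagated explicitly.

MASK = 0xFFFF  # one 16-bit machine word

def double_add(a_hi, a_lo, b_hi, b_lo):
    """32-bit addition on a 16-bit machine: two single-precision adds
    plus explicit propagation of the carry between the word halves."""
    lo = a_lo + b_lo
    carry = lo >> 16
    hi = (a_hi + b_hi + carry) & MASK
    return hi, lo & MASK

a, b = 0x1234ABCD, 0x00020FFF
hi, lo = double_add(a >> 16, a & MASK, b >> 16, b & MASK)
assert (hi << 16) | lo == (a + b) & 0xFFFFFFFF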

All these factors are interrelated and dependent on each other in ways which are difficult to describe in a general,
application independent way. The most important fact, however, is that they are interdependent. It is for example meaningless
(or at best a very coarse approximation) to say that slow processes need slow computers, meaning a low instruction
rate. It is the total processing capacity which must be matched to the task.

Another matter is that there usually are constraints which limit the free choice along the entire scale of each
parameter. In real time systems, this may be illustrated by the two factors throughput and response time. "Real
time data processing" in general means that any backlog of data accumulated in the processor never increases
beyond a given maximum value for a specified data throughput, which may be maintained indefinitely. Response
time is the time taken for a data value entering the data process to influence the output. The required response
time may demand a fast computer rather than a slow but complicated one. For example, our previous suggestion
with the two 10-microsecond adders would be unacceptable if the result were needed in 5 microseconds (i.e., the
delay of 10 us could not be tolerated). So although the two adders might cope with the throughput, the choice
then would be constrained by the response time requirement.
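Both constraints can be checked separately; a minimal sketch, reusing the adder figures assumed above (the function name and parameters are invented for the example):

def meets_requirements(unit_delay_us, units, arrivals_per_s, max_response_us):
    """Check a configuration against both real time constraints:
    throughput (data arrives no faster than it can be absorbed) and
    response time (a single result must appear within its deadline)."""
    service_rate = units * 1e6 / unit_delay_us     # additions per second
    throughput_ok = arrivals_per_s <= service_rate
    response_ok = unit_delay_us <= max_response_us
    return throughput_ok, response_ok

# Two 10-microsecond adders meet the 200 000/s throughput but not a
# 5-microsecond response time; one 5-microsecond adder meets both.
print(meets_requirements(10, 2, 200_000, 5))       # (True, False)
print(meets_requirements(5, 1, 200_000, 5))        # (True, True)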

Hardware-software is another trade-off, which really is a different name for the instructions - speed - memory size
trade-off already mentioned. This name, however, can also refer to the trade-off between built-in functions and
functions which must be reconsidered when programming a new situation. A Fourier transform may be done by a
special function unit (hardware) or a special subroutine (software). Similarly, a computer may have an interrupt
handling subroutine (software), or much of the logic such as priority, save-unsave, etc., may be implemented by
special devices (hardware). Such functions may be solved equally well by hardware or software. In a computer
offered as a component there are, however, great variations in which and how many such standard functions have
been included at all, or must be provided by the system designer and application programmer. This trade-off may not
be important for performance, but it is important to flexibility and design changes, as well as to the complexity of the
system design job.

Reliability-complexity is a less well defined relationship. With a given technology, i.e., type of circuit compo-
nents and tolerances, the simplest organization, i.e., the one with the smallest number of components, will be the most
reliable if the organization is non-redundant.

If the reliability of the computer thus obtained is too low (mean time between failures too short), it could
be improved by duplicating the computer, i.e., having another one doing the same job, together with some equipment or programs
to decide whether one computer has failed and then disconnect it. Even more reliability, both per dollar and per liter, etc., and
in total, may be achieved using other, more sophisticated redundancy methods. Theory and methods for the design of
fault tolerant computers are in constant development. Conceptually, however, they tend to be difficult. They are
therefore expensive, and normally not justified on a limited economic basis. The economic aspects of reliability
are normally covered by conservative use of tolerances and well proven technological methods.
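The gain from simple duplication can be estimated with elementary reliability arithmetic; a minimal sketch, assuming exponentially distributed failures, independent copies and ideal fault detection and switch-over (optimistic assumptions in practice):

import math

def mission_reliability(mtbf_hours, mission_hours, copies=1):
    """Probability of surviving the mission, assuming exponentially
    distributed failures, independent copies and perfect switch-over."""
    p_fail_one = 1.0 - math.exp(-mission_hours / mtbf_hours)
    return 1.0 - p_fail_one ** copies

# A 1000-hour-MTBF computer on a 100-hour mission:
print(f"simplex: {mission_reliability(1000, 100, 1):.4f}")  # ~0.9048
print(f"duplex:  {mission_reliability(1000, 100, 2):.4f}")  # ~0.9909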

There are, however, many cases, especially in avionics systems, where this is used to its fullest extent and still
is insufficient. Extreme statistical reliability is for example required during missions of long duration without
possibility of repair. Special techniques and quantitative methods for assuring the necessary reliability must then be
employed. In other cases hazards may occur against which it would be impossible to make the system resistant.
This is true of many military systems, and of equipment which is to operate in less well known environments.
"Graceful degradation" is then desirable. The system may be designed to operate in emergency modes,
perhaps at reduced performance, when parts have been disabled. The trade-off here is between survivability and
complexity.

Another type of trade-off in a highly dynamic field such as ours concerns the credibility of new and exotic devices.
Our aim is new systems to be developed, rather than old proven ones. However, we normally want new ideas
implemented early, to beat "the competition" in some form or another. It is therefore of interest to consider which
solution can be implemented first. In an existing system it is frequently easy to point out not only shortcomings,
but how things ought to have been. In the new system to be developed, however, we are typically "going to employ
some extremely attractive new techniques" which completely eliminate the old shortcomings. To foresee the
inevitable new shortcomings of the new system is desirable, but not automatic. Good system partitioning has
already been mentioned, and may more specifically permit easier correction of mistakes along the way. It is
also no coincidence that small computers are normally earlier than big ones to utilize new technological progress:
they take a shorter time to develop. Therefore, they are often competitive in computations per dollar or per liter,
although the small computer would tend to lose against the big one using the same technology. Concepts which
can be implemented rapidly therefore have a virtue by themselves. For tasks other than theoretical:

An old-fashioned and primitive, but working, device is infinitely more efficient and useful than a modern
sophisticated one which only exists in theory.

4.4 METHODS OF DETERMINING ADEQUACY

The only method which can be recommended, other than actually trying out a complete system, is simulation.

A computer based system consists of computer(s) and peripheral devices. The latter must be specified by
their performance data and the job they are to do. The computer is to be designed, chosen and/or programmed.
After this has been done, most of the work is finished. It is therefore of great interest to be able to make estimates long
before then. No general way to achieve this can be offered. Let us, however, describe one attractive way of
designing dedicated computer based systems, and leave it to the reader how he might make use of it in his own case.

We are to have a computer doing a specified job as part of an avionics system. We will then first assume a
computer X. We will then know its instruction repertoire, possible standard subroutines, IO functions, etc., and
how long each takes to execute.

First we write a simulator for the new computer as a program on another computer S (preferably in a high
level language, which may even be computer independent). We then write the application programs for the assumed
computer X. Further, we write a monitor program to be run on computer S, simulating whatever is required of the
environment, and monitoring the (simulated) performance of X. It runs the X-computer simulator program and
keeps account of how long each "operation" takes and how often it is performed. "Operation" may refer to
machine instructions, subroutine calls, times through specific program loops, etc.
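A much reduced sketch of this arrangement, in present-day Python: computer S runs a simulator of the assumed computer X while the monitor keeps the accounts. The instruction set and timings below are invented for the illustration.

from collections import Counter

# Assumed instruction timings (microseconds) for computer X.
INSTRUCTION_TIME = {"LOAD": 2.0, "ADD": 2.0, "SHIFT": 1.5, "STORE": 2.5}

class MonitoredSimulator:
    """Simulator of computer X with the monitor's accounts built in."""

    def __init__(self):
        self.counts = Counter()   # how often each operation is performed
        self.time_us = 0.0        # simulated running time

    def execute(self, op):
        self.counts[op] += 1
        self.time_us += INSTRUCTION_TIME[op]

sim = MonitoredSimulator()
for _ in range(100):              # "application program": one loop path
    for op in ("LOAD", "ADD", "SHIFT", "STORE"):
        sim.execute(op)

print(f"total simulated time: {sim.time_us:.1f} us")
for op, n in sim.counts.most_common():
    print(f"{op:6s} {n:5d} times, {n * INSTRUCTION_TIME[op]:8.1f} us")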

We may thus simulate our whole problem, or critical parts of it using computer S only, not even needing to
acquire computer X. This procedure allows us to do the following important investigations.

We may make sure that processing capacity is adequate. For a real time system that typically means that the
time available, e.g., within a sampling interval is sufficient for running through all necessary program loops by way
of all possible paths. Similarly we will find out if the processing capacity is grossly oversized. We can in other
words, determine how the computer meets the demand for processing capacity.

We may investigate alternative programming methods and algorithms, concerning their consumption of computer
capacity as well as their performance.

We may identify the most critical parts of the computer which limit its capacity for our application. This is
done by examining the monitor's accounts of which instructions or operations are most frequently used and
how the time is spent. This is especially useful both in seeking and evaluating alternative algorithms and in studying
the value of optional features. The method is equally useful whether considering a computer to be bought or an entirely
new one to be developed.

These procedures may further be applied not only in determining the adequacy of computer X, but in
comparing computers X, Y and Z while seeking the optimum solution. From the point of view of optimization, the
facilities offered by such a simulation are in many respects greater than those of actually having
computers X, Y and Z available, since computer S is simultaneously used for obtaining all these useful statistics.
It is also generally a fast method, because it eliminates many non-essential experimental difficulties.

The application programs which were thus used during simulation are immediately ready to be used in the
actual computer later on.

To simulate the entire system may be useful. Without going to that extreme, however, it may still relieve the
systems designer of many uncertainties if he can only identify and simulate the more critical points at an early stage.
This both enables him to make better estimates, and forces him at an early stage to actively pursue the critical
points, which in general is good design practice.

CHAPTER 5

SYSTEMS AND SYSTEM DESIGN


(Software Design in Computer Based Systems)

C.S.E. Phillips

5.1 INTRODUCTION

This chapter is not concerned with the writing of programs by individuals for their own purposes, nor with
computation per se. It is concerned with the production of software by professional teams for computer controlled
systems dedicated to some special purpose. Such systems are essentially software based, usually real time and, as is
now realized, much more complex than they may appear to the outsider. Recent experience has shown that these
systems have taken much longer to produce and have been less successful in operation than had been expected at the
time of their conception. In particular, the amount of software needed and the difficulty of its production has been
severely underestimated. This was mainly due to the assumption that the individualistic, problem solving approach
appropriate to programmers as computer users is adequate for system building; in other words, the lack of systems
thinking and a systems approach.

5.2 SYSTEMS

A system is best defined as a whole consisting of a set of interacting constituents, where the whole is different
from or transcends the sum of the constituents. This definition is so all-embracing that it is not surprising the
subject should carry a high flown subjective air and be regarded at times as of little practical use. In modern times
the systems viewpoint has permeated organization theory (the interacting parts being men), but if we think of the
organizational problems faced by the builders of the Pyramids the study of systems could be said to predate science.
Nevertheless, the real impact of systems thinking has come quite recently, in sociology, politics, economics, manage-
ment, etc., as well as in most branches of science and engineering. There are natural systems (physical and biological),
man-made systems (political, economic, engineering), designed systems (avionic, computer, social Utopian), slowly
evolving systems (political, economic), systems which are self regulatory, controllable, goal-seeking, adaptive, learning,
growing, decaying, etc., and those which are not. There is a small but growing interest in the study of systems
concepts in general, both for its intrinsic interest and for its practical importance as the basis for system design. In
computer based systems particularly, system problems are beginning to overshadow the problems of engineering
technique.

In the past most systems have been regarded as "closed", i.e., having no inputs from or outputs to an environ-
ment. Such systems, for example the planetary system, systems of mathematics and physics, nineteenth century
economic models, etc., can be recognized by their lack of purpose (any purpose other than their own existence)
and are conventionally the subjects of scientific or engineering work. The interactions between the constituents and
the internal behavior of a closed system can often be expressed mathematically in great detail as causal relationships.
Physical scientists and engineers are trained in and are mostly interested in closed systems. Thus scientists analyze
natural systems whose purpose is not their concern, whilst engineers tend to pay only lip service to purpose since
they are primarily concerned with making things. White and Tauber1 in their book Systems Analysis are mainly
concerned with closed systems and trace their extensive and highly successful development from the Renaissance
onwards.

An open system has an environment, and it was biologists who first pointed out that their systems affected and
were affected by the environment3. Open systems, which are characterized by "purpose" and "hierarchy", are
never an end in themselves and require teleological (goal directed) rather than causal explanations. For example,
in answer to the question "why does the apple fall to the ground?" we may see this episode as belonging to a closed
gravitational system and reply "because the weight exceeded the strength of the stalk". If the gravitational system is
regarded as a sub-system of an ecological system we answer "so as to produce more trees". We have only to think of
"man in his environment", the topical interest in world social and ecological matters, the falling interest in "science"
(i.e., regarded as closed system thinking), modern attitudes to organizational and planning techniques and finally the
very recent attitude towards computers (that they are rarely an end in themselves) to realize how widespread open
system thinking has become. In fact the term system is now synonymous with open system.

Systems whose interacting parts are themselves systems are by definition hierarchical - thus we speak of sub-
systems and sub-sub-systems, and the purpose of a sub-system is to perform some function for the system (which
itself may be part of a super-system). For a very crude example, in the following connected, "bottom-up" sequence
of system levels - solid state physics, microcircuits, computer architecture, software, navigation system, aircraft, air
defence, economics, foreign policy, national politics, world objectives - each can be regarded as an open system which
makes use of and depends on the (previous) sub-system and whose purpose is to serve and thus affect the behavior
of the (successive) super-system.

Turning to the class of man-made systems (more precisely, sub-systems) which can be designed or re-designed
over a short time period, that is those which primarily concern engineers and computer scientists, one might reason-
ably conclude from this that the design of a new system would take other levels into account. Here we reveal a major
system design problem. Difficult as it sometimes is for a designer to cooperate with designers of other sub-systems
at the same level, cooperation between designers at different levels is often poor or even negative. The reasons for
this are partly psychological (individualistic attitudes, etc.) and partly mutual incomprehension (different disciplines).
It is difficult for designers to give and take responsibility for the design of even the immediate system level above
and below without creating difficult problems of cooperation.

More important, perhaps, there is a further inherent difficulty with the design of an open system, which arises
from the fact that it is defined teleologically.

It seems to be a general characteristic of open systems that, though they clearly have a purpose of some kind,
on careful examination that purpose is revealed as uncertain or ambiguous, and that the difficulty of developing a
new system is more connected with this ambiguity and uncertainty of purpose than with its inherent complexity.
This may account for the fact that it seems to be easier to put two men on the moon than to develop an air traffic
control system or reorganize a local authority. The ambiguity is often unrecognized in that the "purpose" may be
multiple (i.e., a "balanced" set of objectives), understood differently by different people, or changing with time.
Furthermore the purpose of a system can only be described in terms of a system at a higher hierarchic level. As
Langefors7 has said, all systems are potentially sub-systems. But what is the purpose of this higher level system?
Here we touch on a paradox: a system to be developed can be fully understood only in terms of (theoretically)
all higher levels of system imaginable; yet work can start only if the purpose of the system is taken for granted
(e.g., two men on the moon). Unfortunately it is all too easy to conceptually close a system, that is, to assume its
purpose is self evident; the problem in practice is to open it up again. The solution of this problem of iteration
between hierarchic levels belongs to the realm of system design methodology.

5.3 SYSTEM DESIGN METHODOLOGY

Recent experience9 in the design of complex computer-based systems has shown that such systems have taken
much longer to produce and have been less successful in operation than had been expected at the time of their
conception. In particular the amount of software needed has been severely underestimated.

A more general understanding of Systems and Systems thinking would have avoided such errors and disappoint-
ments. On the other hand it must be admitted that at present the study of systems concepts offers wisdom and
understanding rather than panaceas for the design of complex systems. Nevertheless there is a growing interest in
the application of these concepts and theories to system design methodology. Here we must point out the important
difference both between systems and engineering and between individual and team work. In terms of man-years,
the most efficient way of developing for example a hardware or software system is for the engineer or programmer
to "close" it (i.e., to define its operational purpose himself), to develop it where possible from standard available
sub-systems, and to design and construct it himself. He would evolve his own instinctive ad hoc methodology,
using such principles as seemed appropriate, based on his experience and ability. Where an organization (system of
interacting men) is concerned, such an approach is not possible as its "intelligence" is of relatively low order. It is
generally agreed that the rate of development is not proportionately increased when larger teams are used.

We must recognize that computer-based real time systems are particularly complex because of the software
involved. It is not that the elemental sub-programs are any more complex than the other parts of the system, such
as the sensors and other peripheral units; nor is it connected with the computing hardware, which can often be
regarded as a given "off-the-shelf" sub-system of the software. The problem arises partly because of the lack of
standardized sub-programs and of any standard way of defining and interconnecting them. It is for this reason that
individual programming work is so much more productive than team work. Another problem is the communication
between programmers and higher level system designers, as already mentioned. It is unlikely that programmers
would be capable of, or even be permitted to, dominate and determine the design of, for example, an avionic
computer system. For the future, one must assume that avionics system designers will have a much better training
in computer science.

What methods should we use to develop computer-based real time systems? Boguslaw4, in an amusing analogy
between the design of social Utopias and computer-based systems, has proposed four main system design ideologies.
Firstly, formal methods in which the proposed system is planned and defined in exhaustive detail; secondly,
heuristic methods using "good" (but vague) principles; thirdly, the use of available (standard) sub-systems; and
fourthly, the "ad hoc" or unplanned approach. Clearly, the last method has been well tried, will always be popular
and, let us admit it, is more interesting for the individual designer.

One formal method under investigation at the moment attempts to borrow the syntax directed techniques of
compiler writers by defining systems in syntactic terms11,12. Unfortunately, formal methods tend to be over-rigid
and obscure in practice. As far as general principles are concerned, those currently in favor are the "top down"
approach (in which the overall structure is considered first and the details later) and the principle of iteration.

A particular methodology of software development combines these principles with the idea of organized
evaluation - the "three prototype method"10. Here a system would "evolve" by being developed three times
(rather than once): the first time to clarify its "purpose", the second to find out how to make it, and the third time
to find out how to make an engineered product. The philosophy behind this idea is that iteration between hierarchic levels is
impracticable during the development of a software system, so that definite time slots are arranged for this purpose
between the development of three essentially closed systems. A research modelling or prefeasibility stage might also
be required before work on the first prototype is begun.

The "available sub units" approach is much used in traditional engineering. In computing systems the method
is exemplified by the idea of computer packages in which standard programs of sets of programs are used. Such
packages are beginning to be used in A.D.P. systems where standard operations occur, but not yet for real-time
systems. Many project managers seem unaware that a new program is not made like hardware, but must be invented
before it can be produced. The software should therefore be their first concern - not something to be left till later.

It is difficult to draw any definite conclusions about the trend in system design methodology. The subject is
very complex, embracing on the one hand project management (another system in itself) and organization theory
and on the other, systems of documentation (another difficult and neglected subject). With a very small, high
calibre team, ad hoc methods may still be best, but it is more usual now to evolve more formal, if arbitrary, methods.
In general, the principle of iteration is thought to be particularly important, i.e., cycling through "top-down" to
"bottom-up", but it is difficult to achieve this in practice where the teams are large. Undoubtedly, better methods
of controlling iterative system development, using computer aided documentation of the time varying managerial,
design and programming data, will be introduced in the future, but human problems will be more resistant to
change. System designers largely concern themselves with their own problems and often tend to regard "sub-
system designers" on the one hand as short-sighted and incompetent and "super-system designers" on the other as
ignorant and vacillating. This problem is not necessarily alleviated when there is a one to one correspondence
between system levels and an organizational hierarchy.

We are mainly concerned in computer based systems with iterations between levels confined to closely allied
scientific and technical disciplines. As far as broader matters, such as defence systems as a whole, are concerned, the
problem is even more difficult. Present administrative and managerial methods have been developed for the procure-
ment of "equipment", i.e., well-defined, "closed" systems, so that inter-disciplinary iterations over many levels
involving "ultimate" users would require a revolutionary change in attitudes. This problem is better understood in
the social sciences than in computing or engineering.

5.4 PROGRAMS AS SYSTEMS

Let us imagine that we are given the task of writing a real time program, given suitable hardware and
programming facilities including a good, high level language. Taking a systems viewpoint, we know that we ought
to interact at least one level up and down. The level down involves taking part in the design or choice of the
programming language and the computer configuration. Let us ignore this aspect by assuming that previous
experience has confirmed our agreement with the choice. The level up concerns operational descriptions, which we
take part in; similarly, we idealistically expect that operational (e.g., avionic system) designers help to write the program.
As a result of this cooperation, iterations of analysis and synthesis can take place which will result in an expanded
and modified operational description and a first attempt at a written program.

We note immediately that the typical designer of an operational avionics system has little experience of real
time programming. The typical computer programmer probably knows little about avionics. How are they to
communicate with each other? This is an example of the interdisciplinary problem already mentioned. Moreover,
it is a fact that the higher the hierarchic system level, the less precise and "scientific" the subject matter becomes.
It is fair to say that there is as yet no organized and agreed body of knowledge about the specification and the
writing of such programs. It would be helpful therefore if we could interpose additional conceptual system levels
between the application designer and the program writer. There appear to be two possible levels where more
formal techniques can be introduced: the functional specification and its translation into networks of processes and
data areas.

The first of these levels concerns problems of system specification and "man-machine interaction". There is
a need here for a technique which clearly, rigorously and unambiguously describes the user requirements of the
50

system as far as possible in hierarchic terms. Working from these descriptions a program designer would be then
able to draw up a high level program design in which all the main data areas, programs and their interactions are
specified. The aim should be to provide a time ordered description of all the operations undertaken by both man
and machine for each of the functions undertaken by the system. The description would make clear all branches
in a chain of operations and also those places where parallel operations occur. The object would be to enforce a
structured description of the system which is more precise than plain English and which is, in principle at least,
hardware independent.

There are two main techniques which offer possibilities. One is to "program" each function in plain English,
with an accompanying flow chart. Each "program" would contain operations to be performed manually as well as by
the computer. In one scheme of this type10, modifications to conventional flow charts have been made to indicate
parallel operations and to distinguish conditional branches occurring within the computer from those occurring
manually. It would be possible to simulate such a "program" so as to investigate the proposed system from a
functional point of view.

A more "mathematical" technique is to describe the system syntactically10'11'12. This method is essentially
hierarchical in that each definition is expanded into sub-definitions and is inspired by the methods of defining and
constructing computer languages. It goes beyond a textual description of the operational-requirement in that it
"holds the hope of being a more rigorous way of developing real-time programs". However there is some doubt
whether such a rigid technique will be suitable for the rather diffuse, parallel running functions which must be
expressed at this sytem level or whether operational system designers will be able to master such a sophisticated
technique.

When those parts of the functional description which must be performed by the computer are separately defined,
the conversion of these into running computer programs has still to be accomplished. At this lower level it is
preferable to think in broader terms than either detailed programming (coding) or computing hardware. The program
network10 is one possible technique. It is based on the concept that the functions are performed by a set of
parallel-running, cooperating processes whose interactions consist of data transfer. A process is defined as a more or
less continuous conversion of data from one form to another. At this stage we are not concerned with the means
by which such a conversion is to be achieved so that a process is best thought of as a "virtual computer" or a
program to be written.

The particular system concept of dividing real time programs into processes and data areas and the diagrammatic
technique described below which arises from it is not universally used, but is gaining acceptance, primarily
because it is an easy method of specifying and comprehending computer programs.

Diagrammatic techniques which illustrate the continuous running of sub-programs in parallel are analogous to
the block diagrams of electronic engineering or analog computing. The importance of such diagrams, which should
not be confused with flow diagrams, has been recognized only slowly, partly because, as mentioned previously,
programs have been regarded as algorithms rather than systems, and partly because there is no agreed basic
philosophy of real time computing.

Having described how the functions of the system can be achieved by means of interacting conceptual
processes, we now need to consider in turn how these processes should be constructed. Let us assume we have
drawn up a network of processes which is connected with a manual and physical environment by means of data
and signal transfer. Our remaining problem is therefore twofold: to write a program which will run on a machine
for each process and to provide a means of activating processes. One of the major conceptual and practical
difficulties in computing arises from the fact that these process programs must share common computing hardware,
which allows only one process, or part of a process, to run at a time. Of course, multi-computer systems exist
which share the work crudely but it would be impractical and inefficient to use a computer for each process.
Similarly one could envisage multi-processor systems where particular processes are allocated to "processors", but
the present trend toward "reconfigurable" multi-processors for higher reliability implies no such allocative distinction.

We therefore arrive at a situation where the processes are to be implemented using a common set of hardware,
the computer; they share access to a data base containing current information about the environment, and in some
situations share code to perform their actions. Thus, detailed knowledge of the interfaces, both in terms of messages
passing between processes and also via shared data base areas together with information about shared code is vital
to specifying the program design. Such information can be conveyed by means of a matrix of interconnections or
more conveniently by the network diagram. Such a network of processes does not include those manual and non-
manual processes external to the computer, but in early development it is good practice to add these in order to
describe and simulate the total system.

An individual process is "constructed" by writing a program. A process algorithm is therefore described in
plain English, or at a more detailed level by its program text, or alternatively by a mixture of the two. A single
activation of a process is called a "task". We are now concerned with two questions, how to control these tasks
and how to write the programs. Owing to hardware limitations, tasks must run sequentially and must share
hardware resources with all other tasks. This problem is overcome by developing a separate programming system
called the "real time operating system", which conceptually creates from the (usually single) processor of the
computer many virtual processors to run processes.
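A minimal sketch of that idea, using Python generators to stand in for processes (the names and step counts are invented); each yield marks a point at which the single real processor is handed to the next virtual processor:

from collections import deque

def process(name, steps):
    """A 'process': a more or less continuous activity, broken into
    steps at which it surrenders the (single) real processor."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # point of a possible task switch

# The real time operating system reduced to a round-robin scheduler,
# giving each process the illusion of its own virtual processor.
ready = deque([process("track", 3), process("display", 2), process("nav", 3)])
while ready:
    task = ready.popleft()
    try:
        next(task)                 # run one step of this process
        ready.append(task)         # still runnable: back of the queue
    except StopIteration:
        pass                       # process terminated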

The real time software system can therefore be divided into two main sub-systems, one concerned with the
application tasks to be carried out and the other with the means by which tasks are scheduled and allocated to
computing resources such as stores, computing time and peripheral equipment. Leaving the application programs
aside for the moment, we have seen that the real time operating system, which is often rather confusingly described
as "system software" or even "software", transforms the computing machine into a "virtual machine". (In the
future one can expect computer designers to respond to this reflection on their equipment by designing hardware
more closely matched to programmers' requirements, once these are better understood.)

A real time operating system does not normally interact with ordinary "user" programmers, as in a conventional
computing installation. It is therefore simpler in this sense than the large multi-purpose conventional operating
systems. (The adjectives "multi-purpose" and "general purpose" when applied to the word "system" should cause
alarm and it is interesting to observe that the complexity of conventional operating systems is widely believed to
be out of proportion to the benefits they bring.) A real time operating system has more limited objectives, but
even here there is a need for a better conceptual understanding and an underlying philosophy. The idea of attaining
a more modular and extensible system (i.e., by dividing into sub-systems) is gaining ground, but the "ad hoc"
approach still dominates. The main difficulty is one of definition. For example a recent trend is to regard a real
time operating system as consisting of primitive routines called a "kernel" or "nucleus" handling operating system
tasks and application tasks indistinguishably. This latter view point presupposes a more clearly defined concept of a
task and a particular philosophy of real time programming. It should be emphasized that it is often difficult to
differentiate between the actual tasks to be performed and such matters as the interactions between tasks, avoiding,
detecting and controlling fault situations, arranging for easy alterations off and, perhaps, on line, avoiding "deadly
embraces" (where two tasks prevent each other from using resources) and, in a message based system, "back up"
where local overload spreads to all message channels.

Whether or not the time-shared processes include operating system programs which look like application
programs, that is, irrespective of the theory behind the division into the two sub-systems, each sub-system requires
the construction of processes, i.e., the writing of programs. It is possible at this stage to regard this work as an
independent activity, i.e., merely a case of developing an algorithm to a specification. However, these processes are
often complicated or ill defined, and the corresponding programs when written reveal inadequacies in the upper level
design. It is a valuable conception therefore to sub-divide each process into sub-processes and data areas in exactly
the same way and for exactly the same reason as the system was divided into processes. The difference here is that
sub-process and data area interconnections involve the data structures and sub-routine calls of the programming
language used. In the case of an Algol type of language such as Coral 66, Reference 13, sub-processes can be
described by procedures, and data area connections are defined and restricted by the syntax. A process can therefore
be envisaged as containing private data indistinguishable at that level of description from pure program, which at a
level below is revealed as the linkage between sub-processes. It is an obvious extension upwards for a language to
include within its syntax built-in ways of handling parallel process inter-communication as well. The exact require-
ments for such an invasion of operating system functions are not well understood as yet, so that such languages must
be regarded as experimental.

Having sub-divided processes hierarchically in this manner we arrive at elemental processes which are programmed
preferably as procedures in high level language (or sub-routines in assembly code). These lowest level procedures will
probably be relatively simple, communicating with data either directly or via the parameter mechanism provided by
the language. The extent to which procedures can be nested depends on the language used. Some procedures will
be common so that a diagram will show a hierarchical network of procedures rather than a simple tree. Connections
with data will be complex so that it may well prove useful to store network information on a central computing
facility as part of the documentation system - together with program text.

The efficiency of the running programs depends on the language design in relation to the computer architecture
and the quality of the compiler. A small number of particularly fundamental pieces of program may be written as
"macros" which expand into short pieces of machine code. Viewing real time programs as systems, these macros are
technically the lowest level elements. However in a modern (high level language) computer based system the "procedure"
should be regarded as the fundamental element which drives the system. The computer based system is then more
accurately described as a software based system, since translation of software into hardware actions is purely automatic.

5.5 FUNCTIONAL SYSTEM APPROACH

We have referred to the need for a technique which bridges the gap between the functional description of the
software sub-system and the complex detailing of programs which are ultimately the means by which these functions
are achieved. What is really required for the software is a number of parallel continually running data processing
operations, but we are forced by the inherent nature of digital computers to construct each data processing operation
52

in terms of an ordered sequence of steps - as the word "program" implies. We are also forced to simulate the
parallel running of these operations by time sharing computer hardware.

We are not therefore concerned in what follows with programs as such — as algorithms expressed in program-
ming language or flow diagrams — since these are the details of construction of a process. We are attacking the
main problem in software production - deciding what programs are to be written. The actual writing of a program
is usually a relatively straightforward job.

The principle behind the particular method to be outlined is that the software is regarded as a system whose
constituents are
(1) data; and
(2) processes
and where the interactions between constituents are
(1) reading and writing of data by processes; and
(2) interactions between processes.

The interactions between processes at an upper system level are


(1) interrupts,
(2) system calls (interactions with some form of real time operating system), and
(3) sequential activations.

The system is hierarchical in the sense that processes contain (or in programming terminology "call") sub-processes,
leading to
(4) hierarchical interactions between processes.

Although these entities and relationships can be set down in matrix form, a diagram is a more useful tool for
this purpose, and such diagrams are referred to as "program networks" or "Phillips diagrams".
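Although such a network is normally drawn as a diagram, the same information can be held as a data structure; a hedged sketch of the matrix-of-interconnections idea, with invented process and data area names:

# A program network held as data rather than drawn: processes, data
# areas, and the arrowed lines (reads and writes) joining them.
processes = {"scan input", "update track file", "drive display"}
data_areas = {"raw input buffer", "track file", "display list"}

reads = {("update track file", "raw input buffer"),
         ("update track file", "track file"),
         ("drive display", "track file")}
writes = {("scan input", "raw input buffer"),
          ("update track file", "track file"),
          ("drive display", "display list")}

# The rule that one data area never connects directly to another is a
# property of the representation: every line pairs a process with data.
assert all(p in processes and d in data_areas for p, d in reads | writes)

for p in sorted(processes):
    ins = sorted(d for q, d in reads if q == p)
    outs = sorted(d for q, d in writes if q == p)
    print(f"{p}: reads {ins}, writes {outs}")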

5.6 PURPOSE OF PROGRAMMING NETWORK DIAGRAMS

The programming network24-27 is a two-dimensional diagrammatic information processing language primarily
intended to simplify the description of large programs, particularly those to be written by a number of programmers
(although it can be used to describe any program). The most important time to draw up programming networks
is at the systems analysis stage, i.e., after the system requirements have been clearly established (by means of
functional specifications, for example). Program networks are useful for:
(1) Program design,
(2) Program interfacing and integration,
(3) Assisting management in understanding,
(4) Monitoring progress,
(5) Easing program modification and maintenance,
(6) Instructing new recruits,
(7) Documentation,
(8) Program size and time estimation.

Data and Processes


The programming network consists essentially of two entities, data (represented by rectangles) and processes
(represented by circles), together with lines which show intercommunication between these two entities. The
reading of data by processes and the writing of data into data areas by processes are represented by simple unbroken
lines. The distinction between data input to a process and data output from a process is indicated by arrowheads.
The names of data areas and of processes are written into the rectangles and circles. Simple examples are given in
Figure 5.1.

5.7 DATA RECTANGLES

Data rectangles can be used to represent either files, lists, buffers, arrays, tables, identifiers, etc., or the words,
characters, bits, etc., of peripheral sources (keyboards, input typewriters, registers, etc.) and sinks (displays, output
typewriters, registers, etc.). Each rectangle must contain an unambiguous and meaningful name (unabbreviated if
possible) which is consistent throughout the documentation. If there are equivalent names, or if a data area is broken
down elsewhere into elements, a complete index of names must also be provided. Cross-hatching of hardware data
rectangles is recommended to distinguish them from software sources and sinks. Figures 5.2(a) and 5.2(b) show
the types of basic symbols used. Only simple arrowed lines may be connected to data areas.

[Fig.5.1(a): A procedure named "process" reads data named "input", operates upon it and writes the result into a data area named "output". Note: in Coral 66 terms "input" is a value and "output" is a location.]

[Fig.5.1(b): A procedure named "update" reads, transforms and writes back data named "data".]

[Fig.5.1(c): A procedure named "mix" reads "data 1" and "data 2", operates on the data and writes the result into "data 3".]

[Fig.5.2(a) Basic symbols: process (program, procedure, sub-routine, macro, block, statement); data (source or sink); data flow; program initiator implying temporary transfer of control (link, procedure call); program initiator implying permanent transfer of control (jump, goto); program initiator implying the connection between two separately designed systems (system call); external (hardware) interrupt.]

[Fig.5.2(b) Variants of basic symbols: peripheral device (data source or sink); hardware interrupt generator; hardware data source/sink and interrupt generator; circular list or buffer with input and output pointers; chain-linked list; program which for some purposes may be classed as data; procedure and system calls with and without input and output parameters.]

Fig.5.3 Simple hierarchic program network

5.8 PROCESS CIRCLES

A process circle represents a function whose behavior is independent of its activation. In programming terms
there must be only one entry point. Process circles can be used to represent procedures, sub-routines, macros, coral
66 blocks — even simple statements. Processes are usually activated by other processes, but they can also be activated
directly by hardware (external interrupt). Processes should preferably be written as procedures (or macros) and are
therefore activated hierarchically by other procedures. The external interrupt connection between hardware and
process is quite different from the data transfer between hardware and process and data areas are conceptually
possible, but direct connections between one data area and other (i.e., without an intervening process) are never
allowed.

Types of process activation are illustrated in Figure 5.2(a). The distinction between these forms is primarily
a question of control, i.e., the place where control exists before activation and the place where control is passed
after the process terminates. As far as the process itself is concerned, there is only one entry point. The behavior
of a called process is therefore independent of the calling process except insofar as it is modified by actual data.
The four methods of activation of processes are:

(1) External activation (by hardware interrupt). Here control is temporarily transferred to the process at a
time determined by external hardware. Thus in practice control is transferred from an indeterminate
point in an indeterminate process and on completion of the process or processes, control returns to the
interrupted process at the point of interruption.

(2) Sequential activation. In this case one process permanently transfers control to another. The last obeyed
action of the calling process must be a GOTO label (or jump) where the label is the name of the called
process. To avoid unnecessary distribution of control, this form of activation should be avoided where
possible.

(3) Nested (or hierarchical) activation. This is the procedure (or sub-routine) call, where one process, at a
predetermined point, temporarily transfers control to another, control returning to the calling process at
the point of interruption. Nested activation can also be used to represent the effect of a Coral 66 (Algol)
block. This form of activation is preferable to sequential since it expresses the hierarchical relationship
between processes.9 Input parameters may be passed to a sub-process which may subsequently return
answers to the calling process (see Figure 5.2(b)).

(4) System activation. This form, which can often be omitted in diagrams, is used to represent the transfer
of control, at a predetermined point, from a process in one sub-system to a process in another sub-
system, the sub-systems being cooperating, parallel-acting or separately designed. This form of activation
would be used to describe the interaction between a program and a time-sharing operating system which
handles more than one program. However, from the application programmer's point of view, system
activation might resemble any other procedure call.
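
These activation forms can be illustrated in code. The sketch below is in Python with invented process names (the chapter itself assumes Coral 66 or assembly); it shows nested activation, where control returns to the caller at the point of interruption, and approximates sequential activation, where the call is the caller's last obeyed action:

    # Nested (hierarchical) activation: the caller pauses, and control
    # returns to it at the point of interruption.
    def process_z(value):
        return value + 1

    def process_y(value):
        result = process_z(value)   # temporary transfer of control to z
        return result               # control has returned; y resumes

    # Sequential activation: the last obeyed action of the calling
    # process is a transfer to the called process.  Python has no GOTO,
    # so the permanent hand-over is only approximated by making the call
    # the final action of the caller.
    def process_x(value):
        return process_y(value)     # x's last obeyed action

    print(process_x(1))             # prints 2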

5.9 EXAMPLE OF A SIMPLE HIERARCHIC PROGRAM NETWORK

Figure 5.3 shows a simple network which illustrates some of the above features. The diagram is not based on
an actual program. The sequential activation symbol indicates that the program starts by activating process x . The
absence of "STOP" indicates that process x never ends. Process x can call processes y and z and process y
can call process z. The process circles can be arranged vertically in hierarchical order so that x is first level, y is
second level and z is third level (i.e., all calls are downward). It is not intended that the diagram should reflect
any time sequences of events, i.e., the sequential activation symbol has not been used (apart from START). The
supplementary documentation will carry the following information:

(1) An overall explanation of the program in terms of x, y, z, fred, john and harry.

(2) The detailed data descriptions of fred, john and harry, including an index of their sub-names which appear
in the corresponding program texts, i.e.,
fred or parts of fred are referred to in x and y,
john or parts of john are referred to in x, y and z,
harry or parts of harry are referred to in y and z.

(3) A description of x, y and z in the following forms:


(a) In plain English,
(b) In outline language (e.g., pseudo-Coral),
(c) In Coral 66 (or alternative language).

Note that program z will not refer to x or y, program y will refer to z only, and program x will refer
to y and z.

Although the network of Figure 5.3 is purely imaginary and the supplementary information is not available, it is never-
theless possible to describe the program in some detail. Process x has an internal loop or loops which ensure continuity,
and this process updates fred using john data. It calls two sub-processes y and z which use fixed constants called harry.
Process y also uses fred. The purpose of y and z is to produce john data for x. The overall purpose of the network
is to provide fred data for output or onward transmission. This data is provided by x from john data, which is itself
produced with the assistance of y or z or both. There is also some feed-back since sub-process y uses fred data.

Note that the number of times x calls y and z and y calls z (or even whether they are called at all) is not
revealed by the diagram since this depends on the algorithm concerned and the actual data.
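
As an illustration only, the behaviour just deduced might be coded as follows. This Python sketch invents the internal computations (the diagram deliberately says nothing about them); only the calling structure and the data connections follow Figure 5.3:

    harry = {"gain": 3}        # fixed constants used by y and z
    john = {}                  # data produced by y and z for x
    fred = {"output": 0}       # data for output or onward transmission

    def z():                   # third level: uses harry, writes john
        john["raw"] = harry["gain"]

    def y():                   # second level: uses harry and fred (feed-back)
        z()                    # y can call z
        john["scaled"] = harry["gain"] * (fred["output"] + john["raw"])

    def x():                   # first level: absence of STOP - x never ends
        while True:
            y()                # how often y and z are called (or whether
            z()                # at all) depends on the algorithm and data
            fred["output"] = john["scaled"] + john["raw"]

    # x() is not invoked here, since by design it would never terminate.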

5.10 HIERARCHY OF DIAGRAMS

A fully detailed network diagram of a very large program will contain many rectangles and circles. To avoid
complexity and to aid comprehension, large networks should be presented in hierarchical fashion such that each
diagram is an expansion of part of a higher level, simplified diagram. Each diagram should contain fewer than about
twenty or thirty rectangles and circles. No software system, even the largest, should therefore require more than
five or six levels of documentation, although the number of lowest level diagrams would be quite large in this case.

Fig.5.4 Procedure call diagram (processes arranged on six levels, LEVEL 1 to LEVEL 6)

An hierarchical program network should be simplified by the following rule:

A sub-process activated by only one process can be regarded as a part of that process and so can be "eliminated"
by merging it with the activating process, adding its activating arrows and data connections to the joint process.

Figure 5.5(a) shows the effect of following this rule on the network of Figure 5.3. Note that process x (now
including y) writes data into the data area called john, but the three data areas cannot be omitted since they do
not belong solely to any one process.

Figure 5.5(b) shows the result of a second simplification. It is now permitted to eliminate process z since it
is now called by x only. Figure 5.5(c) shows a third simplification, permissible because all three data areas now
have simple connections, "sam" being the name given to the merged data areas.

Similarly, the network of Figure 5.4 may be progressively simplified as shown in Figure 5.6. Firstly, we can
eliminate b, e and g.

Note that process a must activate process f since it now includes process b .

The second simplification reduces the network to process a and process f and a third simplification
eliminates process f.

In general, any programming network can be progressively simplified in this manner except in "pathological"
cases where every process is activated by more than one other process. This only occurs when there are loops
which means, in the case of nested activation, recursion.
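
The rule lends itself to mechanical application. A minimal Python sketch (the graph representation here is invented) which repeatedly merges any process activated by exactly one other process into its activator:

    def simplify(activates):
        """activates: dict mapping each process to the set it activates."""
        changed = True
        while changed:
            changed = False
            for sub in list(activates):
                callers = [p for p, s in activates.items() if sub in s]
                if len(callers) == 1 and callers[0] != sub:
                    parent = callers[0]          # sub is activated by one
                    merged = activates.pop(sub)  # process only, so merge
                    activates[parent] |= merged  # it into its activator
                    activates[parent].discard(sub)
                    changed = True
                    break
        return activates

    # Figure 5.3: x activates y and z; y activates z.
    print(simplify({"x": {"y", "z"}, "y": {"z"}, "z": set()}))
    # y merges into x first, then z does; a single process x remains.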

Figure 5.7 shows a highest level network simplified to the point that it could be said to describe any real-time
computer-based system. There are only two processes, one being software which operates on inputs from automatic
and keyboard sources and provides output data both for automatic control and human recognition. The human is
regarded as a "process" too complex to replace by software.
Fig.5.5(a) First simplification

Fig.5.5(b) Second simplification

Fig.5.5(c) Third simplification (SAM = FRED & HARRY & JOHN)

Fig.5.6(a) First simplification

Fig.5.6(b) Second simplification

Fig.5.7 Real-time computer-based systems (environment; data sources and sinks; displays and keyboards)

Fig.5.8(a) Simplified real-time system

Fig.5.8(b) Simulation of system shown in Figure 5.8(a) (time simulator, environment simulator, interval
timer, data store and input data)

5.11 SIMULATION AND TESTING

Program design could be described as the expansion of a higher level program network into lower level detailed
program networks. The production stage could be described as the integration of lower level networks into a higher
level network. In the latter stage, each progressively larger group of modules (rectangles and circles) is surrounded
by test modules, culminating in the simulation of peripheral hardware. These test data and simulation programs
should also be described by program networks. (See Figure 5.8.)

5.12 REAL TIME COMPUTER SYSTEMS

The majority of computing is concerned with computation for business or scientific purposes. In these
applications, a program is written which takes in some data, transforms the data into some other form and outputs
it to a printer, plotter or graphical display. Nowadays, a computing service is set up to do this for a number of
independent users who share the computing facilities. A user's individual results do not depend on the time taken
to perform their computations, i.e. they are not "real time" programs in the normal meaning. (In practice users
are very concerned about "turn around" time so that the term "real time" is often ambiguous.)

Continuously running within such a computer is a complex (and often very large) program which organises
the separate computations (by "batching" or "multi-programming") and controls the various peripheral equipments
employed. Such a program, or rather, system of programs, is a general purpose "operating system". Although not
often thought of in such terms, an operating system is an example of a complex real-time software system dedicated
in this case to the special functions of a multi-user computing service, namely the preparation and running of a
wide range of unknown, individual, user programs. Since each "user" is intentionally shielded from knowledge of
its inner workings, the operating system can well be regarded as an extension to the computer hardware, and indeed
modern machines are being designed with this in mind, particularly for real-time applications.

The kind of real-time system with which we are concerned in this book differs markedly in certain respects.
In the first place there are no user programs as such and so a wide range of back-up software is not normally
required. Secondly, there are additional peripheral equipments to control which are foreign to a conventional
computing configuration. These control programs are usually regarded as part of the "application programs". A
real-time operating system therefore has fewer functions to perform than a general purpose operating system. In a
very simple real-time system the operating system itself could virtually vanish as a separate entity, all its functions
being performed by the application programs themselves. Depending on the range of peripherals and the range of
functions to be performed by the system, the dedicated real-time system can be much simpler or more complex
than a conventional general purpose operating system. However, the complexity of most modern real-time systems
necessitates sub-division of the software into smaller interacting sub-processes. It follows that, unlike conventional
programs, real-time programs are more than merely algorithms and should be regarded as software systems.

Simpler types of real time on line systems would be fully automatic, but more complex systems permit manual
intervention and supervision. Such systems also refer to and maintain a data base. The stored contents of a real
time computer therefore consist of messages (communicating with the peripheral hardware), variables (the data
base) and "pure" program. During the early stage of building such a real time system the elemental sub-programs
can be regarded as independent computations so that a simplified form of computing service with support facilities
for compiling, editing, loading and testing is required. In general, such facilities have been poor in the past since
the kind of computers used for dedicated systems tend to employ computer hardware not normally intended for a
general computing service. There are two ways to overcome this difficulty. One is to provide a program develop-
ment facility specially for the development of the system, the other is to develop the system on an existing
different facility. The latter method is adequate for the early stages, but requires the programs to be easily trans-
ferable. When the final stages of integration of the sub-programs with the particular sensors take place, the general
computing facility is inadequate and some means of testing and development on the object machine is essential. As
a result of past difficulties, much more attention is now being given to program development systems for real time
computing projects.

A real time program consists, or can be viewed as consisting, of a number of interacting continuously running
sub-programs specially designed and dedicated to respond to and control input/output peripheral hardware for some
overall purpose. The concept of continuous "parallel" running of programs is very important in real time systems.
Of course, from a lower (machine code) level viewpoint, most computers in fact permit very little parallel operation;
perhaps one central processing unit, with a separate input/output channel to the main store. At the lower, micro-unit,
system level all operations are sequential, unless there are separate stores. Some "multi-processor" computers
have separate stores and processing units, but these are still comparatively rare. The software problems arising from
the continuous running of programs "in parallel" are the main "techniques" difference between real-time systems
and ordinary user computations.

Fundamental to the parallel running of a set of cooperating processes are the means of interaction and the
techniques of activation of processes. Two main methods of activation are used, one (asynchronous) analogous to
a postal service or in/out tray where signals are polled, the other (synchronous) analogous to a telephone in that
processes are interrupted. Interrupts are used by external hardware to activate processes and these in turn interrupt
other processes. (It is interesting to note that an interrupt, viewed at a lower system level as digital hardware logic,
is achieved by polling techniques nevertheless.)
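
A Python sketch (process and signal names invented) of the two methods: an asynchronous "in-tray" whose signals are polled at the receiver's convenience, against a synchronous interrupt which is acted on immediately:

    from collections import deque

    class Process:
        def __init__(self, name):
            self.name = name
            self.in_tray = deque()     # asynchronous: postal service/in-tray

        def post(self, signal):        # the sender never waits
            self.in_tray.append(signal)

        def poll(self):                # the receiver empties the tray
            while self.in_tray:        # when it chooses to look
                print(self.name, "handles", self.in_tray.popleft())

        def interrupt(self, signal):   # synchronous: like a telephone,
            print(self.name, "pre-empted by", signal)   # handled at once

    p = Process("tracker")
    p.post("new radar plot")           # queued until the process polls
    p.interrupt("hardware timer")      # acted upon immediately
    p.poll()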

In dedicated real-time computer systems the real-time response of the system (including the collection of sub-
programs) is vitally dependent on the particular requirements of the system and on the peripheral hardware employed.
The system programs must take into account an environment-determined response time. We can therefore distinguish
two extremes in real time computing systems, those in which a slow response to manual intervention is merely
inconvenient and systems which would fail if a particular process (running of a sub-program) did not respond in time
to a peripheral hardware interrupt.

Important intermediate cases — also described as real time — are transaction-processing dedicated systems
whose function is to handle as much traffic as possible, such as air traffic control (where overall response time is
particularly important) or (less real time) air-line booking. There is no doubt that some of these distinctions are
more apparent than real and that the general trend in computing is towards more "real time" and to some extent
towards more dedicated systems. Possibly the true differences between computer systems are connected more with
objectives such as response time, throughput, integrity, security, reliability, flexibility, adaptability etc., than with
the extent of real time operation.

5.13 HIERARCHICAL VIEWPOINT

Complex systems are difficult to design and even to describe. As in other fields, a common descriptive tech-
nique is to start with the broad features and proceed in a series of steps of increasing detail towards elemental
components. From such a "top-down" hierarchic systems viewpoint, our subject encompasses a number of distinct
levels.

At the highest level we have the user's view. Here the system is described functionally, i.e. in terms of what
it can do. This description should hopefully match the original operational requirements. The next level describes
how this is achieved in terms of manual operators and peripheral equipments viewed as sources and sinks of data
and control signals. The applications software occupies a specially important role at this level as it is both the glue
which holds everything together and the central means by which the functions are carried out. For this reason
our systems are more accurately described as "software based" rather than "computer based". This software is
sub-divided (perhaps through several sub-levels) into a large number of simple algorithms. Each algorithm is described
in a computer language which is translated (by a special compiler program) into binary code. Here we arrive at a
description of the computer. The binary code is interpreted by digital hardware into microprograms (sequential
sets of standard gating pulses) which control registers and store locations consisting of sets of semi-conductor
packages etc.

Of course, it would be unlikely that a new avionics project would involve simultaneous design at all these
levels. Nevertheless, independent developments cannot be ignored. As is common in systems work, a designer at
any one of these levels would prefer the lower level technology to be static and standardised, since this simplifies
his problem, but on the other hand, technological advances in lower levels offer him new possibilities. Some
examples of these interactive problems are: the effect of rapid technological advance in the semi-conductor
industry on computer design, the matching of computer architecture and machine code to the growing need for
better operating systems and high level language programming, the design of high level languages which are
reasonably powerful, stable and standard, cover a range of users and yet make proper use of improved computer
hardware and, perhaps most pressing for the avionics system designer, the relationship between programming and
system functions.

Until quite recently, it was generally believed to be essential to program real time systems in machine code.
This created a very wide gulf between the user's view of the system as a set of operational functions and the assembly
code programmer's viewpoint. This gulf is being bridged by growing realisation that a real time program must be
sub-divided in some way and that "efficient" assembly language programs can be bought at too great a price in
complexity.

The idea of using computers to control equipment is comparatively recent. At first, real-time programs were
written in the individualistic style of conventional programming by programmers who often knew nothing of
engineering. Such programs were comprehensible to the programmer only and led to obvious difficulties in develop-
ment and maintenance. When teams of programmers were involved this could lead to chaos. Nowadays, the accent
has shifted from ingenuity and run-time efficiency towards comprehensibility and structure. The greater use of
diagrammatic documentation techniques and high level languages is part of this emphasis. Much remains to be done
here. A major impediment to the exploitation of the computer in engineering systems is the relative difficulty of
"explaining" the workings of a program compared with the workings of complex hardware. It is as if electronic
equipment could be studied, analysed and comprehended only in terms of detailed wiring diagrams.

As computer systems become less of a novelty, standardisation at lower levels, though restricting possibilities,
offers the avionics system engineer fewer problems, assuming he is prepared to make use of existing technologies.
For example, if he is content to use "off the shelf" hardware and a standard language for real time programming,
his basic elements are statements in computer language. On the other hand, if he wishes to use a new computer
architecture with novel instructions he may be forced to extend a standard language or even redesign it. Similarly
new digital circuits may lend themselves to new computer hardware concepts.

REFERENCES

1. White, Tauber, Systems Analysis, Saunders, 1969.

2. Political Economy of Efficiency, Public Administration Review (USA), December 1966; also C.A.S.
Reprint No.2, HMSO.

3. Emery, F.E. (Ed.), Systems Thinking, Penguin, 1969.

4. Boguslaw, R., The New Utopians — A Study of System Design and Social Change, Prentice Hall, 1965.

5. Klir, G.T., An Approach to General Systems Theory, Van Nostrand, 1969.

6. Mesarovic, M.D., et al., Theory of Hierarchical, Multilevel, Systems, Academic Press, 1970.

7. Langefors, B., Theoretical Analysis of Information Systems, Studentlitteratur, 1966.

8. De Greene, Kenyon B. (Ed.), Systems Psychology, McGraw-Hill Series in Management, 1970.

9. Naur, P. (Ed.), Randell, B. (Ed.), Software Engineering, pp.47, 186, 204, NATO Publication, 1969.

10. A Guide to the Development of Computer Based Systems, I.E.C.C.A. (P) 4/72, Royal Radar
Establishment, MOD(PE).

11. Smith, M.H.A., Syntactic Description as a Means of Writing Computer Programs, A.S.W.E. Tech.
Report TR-70-4, July 1970.

12. Hdy, M.H.M., Syntax Analysis as an Aid to System Design, RAF Radio Introduction Unit,
RIU/126/1/AIR, June 1972.

13. Woodward, et al., Official Definition of Coral 66, HMSO, 1970.

14. Phillips, C.S.E., Networks for Real Time Programming, Computer Journal, Vol.10, No.1, May 1967.
(An early description of program networks as used for an automatic radar program.)

15. Jackson, K., Prior, J.R., Debugging and Assessment of Control Programs for an Automatic Radar,
Computer Journal, Vol.12, No.4, November 1969. (This article refers to program networks for
program testing and the simulation of peripheral hardware.)

16. Jackson, K., An Experimental Operating System Written in a High Level Language, Software
Symposium on experiences with software in Computer Control Applications, Institute of
Measurement and Control, July 1969.

17. Jackson, K., Buchan, D.E. (Mrs), An Exercise in Program Design, Inter Establishment Committee
on Computer Applications, IECCA(P) 6/71, Ministry of Defence (PE), UK. (A detailed description
of iterative top-down program design using networks extensively.)

CHAPTER 6

AVIONICS SYSTEM ARCHITECTURE

R.E.Wright

6.1 INTRODUCTION

The system architect's task is to define and combine a set of hardware components to form a system whose
aggregate behavior will meet the operational requirement for the system. Most avionic systems start with an
operational requirement specified by a user or air-frame manufacturer. During the short history of aviation
there has been a growth of such operational needs which have presented problems requiring technical solutions.
There has also been an evolution of technologies to meet such needs. The two processes have to some extent
proceeded independently, becoming locked together whenever a major project requires implementation of hardware,
when it is the task of architectural design to meet operational requirements with components which meet the
system constraints and are within the "State of the Art".

This design process may well necessitate compromises in the operational requirement should a fully
compliant solution be impractical on technical or economic grounds. The avionic system derives much from the
general developments in system engineering, but is subject to particular operational requirements, physical
environments and physical constraints which together justify a somewhat specialized approach.

The operational requirements imply some target for the reliability of a system which is dictated by require-
ments for mission success and aircraft safety. The reliability of a system can be expressed as the probability
that it will perform a specified mission. The advent of digital computers in the 1950's offered a potential
solution for an increasing operational need for precision in calculation and data transfers associated with navigation
and weapon delivery. However, the early digital computers, based as they were on thermionic valve circuits,
could not survive or operate reliably in the relatively hostile thermal and mechanical environment of aircraft, nor
meet competitively the physical constraints of size, weight and power consumption. The digital computer was
thus initially confined to a ground environment, and ground-based systems for the tracking and control of aircraft
were developed for both commercial and military applications. This work included the development of data links
for the transmission of information in digital form over wire and radio links, both between ground sites and
ground-to-air.

A dramatic change was brought about by the development of the transistor, the magnetic core-store, and
subsequently the integrated circuit. These circuit techniques made it feasible to develop computer equipment for
airborne use. There are now available and in development a variety of digital computers suitable for airborne
use. This chapter will be concerned with the design of systems involving digital computers as "components"
and the design methodology at that component level.

The early aerospace applications of digital computers were substantially real-time, the computations being
performed using data simultaneously acquired by the computer system and the output from the system being
used to give directions to operators or control systems. Such systems existed very much in an analog world.
Input parameters (such as pressure, air-speed, aircraft heading) were continuous and usually presented to the
digital system in electrical analog form (i.e., dc voltages, synchro waveforms, etc.). There has since been a
widening application of computers to cover management functions and signal-processing, some aspects of which
are not required to be performed strictly in "real-time". At the same time transducers and other systems have
tended to use digital techniques and provide digital interfaces. However, the changes have been largely
uncoordinated and often present the system designer with unnecessarily complicated situations.

A major consideration in any manned aircraft is normally the safety of the crew and passengers. Analog
equipments, including radio equipment, have developed largely in an uncoordinated way, each new flying aid or
facility generating its own standards and equipment. Once established as a generally recognized facility individual
equipments have been developed by a process of evolution, but in general have been resistant to radical change.
Reasons for this include the undesirability of changing displays, facilities and controls with which air crew have
become familiar and the cost of replacing existing in-service hardware. New equipments have been accepted but
have not in general replaced the established facilities. This situation has produced an embarrassing accumulation
of hardware, but the very duplication and redundancy of facilities has the attraction to the operator of lessening
his dependence on any one system and the separation of systems minimizes the chance of propagating faults
from one system to another. Digital techniques offer the possibility of combining some of these systems, but
in doing so the system architect must endeavor to maintain the level of system integrity to which the user has
become accustomed.

The need for safety has led to the establishment of national agencies charged with the specification and
control of design standards for avionic systems, both military and civil. These bodies have produced a maze of
specifications and procedures of which the system designer must have cognizance.

However, aviation is an international business and aircraft have to interact with ground-based facilities. This
has led to the necessity for defining international standards for some system parameters (i.e., radio frequency
allocations) and equipments. International military standards are usually agreed on an inter-government basis.
Standards used in commercial aviation are also determined on an inter-government basis, but the operating air-
lines and the supplying avionics industries have formed associations and agencies (such as the Aerospace Industries
Association, the Airlines Electronic Engineering Council, ARINC and EUROCAE) with the aim of consolidating
opinion among the participants so that recommendations can be made to governmental bodies, airframe manufac-
turers and equipment suppliers. It can be extremely important to the viability of a commercial system develop-
ment that it fits, or can be made to fit, within the framework of an internationally agreed specification.

Most aerospace digital computers use the binary number system. Each digit of a binary number can be one
of two states, '0' or '1'. Thus binary digits (or "bits") can be represented by a variety of physical devices that
have two distinct states, such as a switch that is either "on" or "off", or an amplifier output which is either
"hard on" (low voltage) or "hard off" (high voltage). A binary number or code of several digits forms a word.
Words can be represented and transmitted electrically either as a time sequence of two levels of signal on a
single wire (serial operation) or as a set of simultaneous signals on a set of wires where each wire corresponds
to a particular digit (parallel operation) or a combination of the two (serial-parallel).

The two-state nature of the signal enables thresholds to be defined such that appreciable degradation from
an ideal signal can occur before a '0' or '1' state is incorrectly identified. Also, circuits can be compounded to
give words of any length, so that once quantities have been converted into digital form it is possible to transmit
and record them without loss of accuracy, and to perform calculations with them to any desired precision. The
trade-off is primarily between accuracy and hardware. Typical word lengths for data in aerospace systems lie
between 12 and 24 bits. Computer architecture can include a range of word lengths for both data and
instructions within the same computer.
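
A short Python sketch (helper names invented) of the two electrical representations of a word described above:

    def to_bits(word, length=12):
        """A 12-bit data word as a list of bits, most significant first."""
        return [(word >> i) & 1 for i in reversed(range(length))]

    def serial(word):
        # serial operation: one wire, the bits appear as a sequence in time
        for bit in to_bits(word):
            yield bit

    def parallel(word):
        # parallel operation: one wire per digit, all levels simultaneous
        return tuple(to_bits(word))

    print(list(serial(0o1234)))        # time sequence on a single wire
    print(parallel(0o1234))            # simultaneous levels on 12 wires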

The potential advantages of digital techniques at a system level include


— maintenance of accuracy of encoded data during manipulation,
— adequate computational precision and range of computing power (including use of special purpose
processors, for example for spectral analysis)
— range of techniques for filtering and mixing of information (often without the time lags inherent in
other approaches),
— automation of operating mode selection (reducing operator work-load),
— multiplexing/wire-sharing techniques (with consequent weight saving),
— data manipulation for communication with operators, crews and with other systems (making the use of
electronic displays feasible),
— possibility of storing library information, including flight manual and maps, in digital form in backing-
storage (with a consequent weight saving and ease of up-dating and access),
— possibility of automatic fault detection and mechanizing failure survival,
— relative ease of system development and optimization (including the use of standard hardware and
software modules).

These advantages alone can make digital techniques essential to meet certain operational requirements, but
this is re-inforced by the continuing rapid development of digital hardware technology. Developments in MOS
and Bi-polar semi-conductor technology, thick-film, thin-film and printed circuit interconnection techniques,
circuit encapsulation techniques, and electro-optical techniques have considerably widened the range of physical
environments and constraints for which digital systems are practical. The predominant development is the so-
called Large-Scale Integration technique (L.S.I.), whereby some hundreds, even thousands, of digital circuits
(e.g., gates) can be accommodated on a single semi-conductor chip. This will make available an increasing
range of L.S.I. computers, memories, input-output multiplexers, etc., at significantly reduced quantity production
prices, so that the system architect will be able to make more liberal use of processors and memories, and be
able to trade logical multiplexing of signals for wiring.

6.2 THE PRACTICAL APPROACH

As computer systems have continued to grow in complexity and sophistication it has become increasingly
recognized that to design, analyze and document a system a number of levels of system description are necessary.
These are not alternative descriptions; each level of description arises from the abstraction of the levels below it.

A hierarchy of levels can be identified, each level having associated with it a distinct "language" for representing its
components, modes of combination and laws of behavior, albeit that the language may be expressed in both algebraic
and graphical form. Bell and Newell1 identify four main levels: the circuit level, the logic level, the programming
level, and the Processor-Memory-Switch level (abbreviated to "PMS level"). It is with this PMS level that we are
mainly concerned here. The system is conceived as an inter-connected processing system. The medium flowing
through the connections is information, which can be measured in bits (or digits, characters, words, etc.). The
components of the system are modules with information handling characteristics, including capacities and flow
rates. The methodology of combining such components is system architecture. A definition of avionics systems
architecture is then: the combination of programmed processors, memories, switches, controls, communication
links, peripheral control/interface units, peripherals/transducers to perform a defined combination of operational
and control tasks, subject to partitioning and packaging dictated by the physical environment and requirements for
maintenance. However the system architect must also be concerned with other levels of system description, the
lower levels of programming and logic, and a rather ill-defined higher level concerned with the interaction of the
computer system with other major systems, including possibly other computer systems. This higher level of system
description (Major System Level) is necessary to determine the functional requirement of the computer system,
and will be considered further in another section.

The primary components of PMS systems are defined by the set of operations they perform. In general the
primary components consist of PMS structures of other components. Primary components interconnect with each
other at communications interfaces called "ports". Here we will content ourselves with allocating single-letter
names to primary components and defining the roles the components play in the system structure. A more detailed
notation is given in Reference 1.

I-unit: A hierarchically organized information structure, in which each level consists of a number
of sub-units, all identically organized. The basic unit of information is usually the bit.
Information rate, as measured at a port for instance, is the flow of I-units per unit time.
L-link: A component for transmitting I-units from the port of one component to the port of
another. A link permitting transmission in one direction only is normally called a simplex
link; a link permitting transmission in both directions is called "full" or "half" duplex,
depending on whether the transmission can take place simultaneously in both directions
or not. The I-unit can be transmitted as a message block, of width determined by the
number of basic units transmitted in parallel and of length the number of widths trans-
mitted serially in one operation. The physical realization of links as wiring is often
termed a "highway" or "bus".
M-memory: A memory is a device for storing information, and indeed the term "store" is used
synonymously. It consists of an array of locations in which I-units (i.e., words) can be
stored. The two main operations are writing, in which an I-unit presented at an input
port to the memory is transferred to a location, and reading, in which the I-unit in the
location is presented at an output port.
The information defining the address of the location used may be supplied by the
component accessing the store or by some different component. The information
rate is the information in the stored I-unit times the operation-rate.
S-switch: A potential means of linking sets of input and output components. It is actuated by an
address which determines the sub-set of links to be connected.
T-transducer: A pair of connected links that have different I-units, or underlying carriers. Although the
meaning of the information transmitted is preserved the amount of information may not
be. At a higher level of PMS structure a transducer may represent an analog-to-digital
interface.
K-control: A logical circuit that evokes operations in other components.
D-data operation: This component creates information. It takes information as an input, operates on it, and
presents the result at an output.
P-processor: A component which operates with memories to perform a sequence of operations, including
data-operations, on I-units from memory. Each operation sequence is determined by an
instruction (or "order"), and the component can be characterized by its instruction set.
A distinguishing feature of the processor is that it determines its own next instruction.
This is achieved by adopting instruction formats which enable sequenced instructions
(i.e., a program) to be held in memory, the address of the next instruction from the memory
to be used being determined by the processor itself.

C-computer: This is a combination of processors, memories, transducers, switches and controls that can
perform information processing under the control of a common program. Such a computer
with more than one processor is called a multi-processor computer, and a distinction should
be made between this and a system complex involving more than one processor obeying
separate programs; the latter is a "multi-computer system". It should also be noted that
that part of a computer dedicated to servicing a particular port is sometimes termed a
"channel".

N-network: A collection of two or more computers not interconnected via a primary memory (e.g., a
memory holding directly executable programs).
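
For design purposes these components might be captured as data structures, so that information rates can be attached to connections and totalled. The following Python sketch is entirely an invention of this edit (the AGARDograph defines only the notation itself):

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        kind: str                  # one of P, M, S, T, K, D
        name: str
        ports: list = field(default_factory=list)

    @dataclass
    class Link:                    # L: carries I-units between two ports
        source: Component
        dest: Component
        width_bits: int            # basic units transmitted in parallel
        rate_bps: float            # information rate measured at the port

    pc = Component("P", "central processor")
    mp = Component("M", "primary memory")
    # 650 ns per (24 + 1) bit word, as in the Mp example below
    bus = Link(pc, mp, width_bits=25, rate_bps=25 / 650e-9)
    print(f"{bus.rate_bps / 1e6:.1f} Mbit/s between Pc and Mp")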

An example of the use of this notation is given in Figures 6.2 and 6.3 which show the same typical computer
system in block diagram form and PMS notation respectively. The notation is not necessarily comprehensive, the
level of analysis of the structure of each component in terms of its own PMS structure being at the discretion of the
user. For example in the diagram the Input/Output Controller is represented as a switch, although it will probably
have controller (K) and memory (M) components. It is also possible to describe and classify the components in
more detail, by both the use of subscripts as shown in the figure, and by the addition of abbreviated text using a
formalized language defined in the reference, which allows all the normally important attributes, parameter values
and options for data processing components to be defined. For example the notation for Mp could be

Mp(core; (tc: 1 μs/w; 4096–32768 w; (24 + 1)b) / (tc: 650 ns/w; 16384–32768 w; (24 + 1)b))

indicating a core memory with two options: a 1 μs cycle-time (i.e., time to read a word from memory and replace
it) memory expandable from 4096 to 32,768 words, and a 650 ns cycle-time memory expandable from 16,384 to
32,768 words, both memories giving a 24 bit data word output with an additional bit (e.g., for parity).

The full PMS notation appears to be a useful tool for analyzing and classifying computers and computer systems.
However as yet it has no general acceptance and is not the form in which actual hardware is specified by manufac-
turers, and it has yet to be established as a practical notation for system design. However, the system designer must
use some form of notation, and the PMS notation has been introduced here to illustrate the components that must
be allowed for.

Aviation electronic hardware is normally packaged in the form of equipment modules (the so-called "black
boxes") which are interconnected via plugs and sockets and a wiring harness. The modularity has partly been
dictated by ease of maintenance, each box being a Line Replaceable Unit (L.R.U.) which, when it fails, can be
replaced by an identical unit as a means of first-line servicing. It also has attractions to the equipment manufacturer,
as each black box can perform a specified system function which can be tested, by providing test-signals at its plug
and socket interfaces, before being installed. This general approach has enabled the air forces and air lines to specify
function, mechanical dimensions, and plug and socket interfaces of equipment to manufacturers, while allowing
manufacturers reasonable freedom in choosing the technology and design of the equipment within the box. In
particular a range of modules to the ARINC 404 Specification have been developed covering a wide range of
equipment. It is now common practice to fit a particular equipment into a variety of aircraft types. For air forces
and air lines operating a number of aircraft types such use of common equipment has significantly reduced the
amount of training and logistic support required.

At first most black boxes used analog techniques and communicated using analog signals. The use of digital
techniques within the equipment has evolved slowly and, where digital transmission links have been used between
units, the transmission standards have often been specified on a per-system basis. This has already led to a prolifera-
tion of data link standards, although ARINC have suggested a common method of classification. Figure 6.1 shows
a typical system diagram involving such standards. There appears to be room for rationalization in order to allow
more flexibility in system configuration.

A general practical approach to the design of future digital hardware modules will be to define the system at
the PMS level, determining information flows, information rates, and processing loads, including an analysis of the
interfaces with the analog world via sensors and transducers. The components can then be partitioned into suitable
L.R.U.'s with defined electrical interfaces. Ideally the number of types of interface should be rationalized, so that
alternative configurations of the same modules are possible to meet other operational requirements.

The processing loads determine the type of processor required and the number of words of memory required
by the program. The computer program is made up from instructions concerned with execution of the tasks
("task", "application" or "object" programs) and instructions required to regulate the flow of work ("executive",
"supervisor" or "organizer" programs). In simple systems the organizer program may be merely concerned with
ensuring that a sequence of tasks are obeyed or not, according to the system state. However, in more sophisticated
real-time systems it is usual for the computer to be run in a multi-programmed mode, a number of programs being
active at the same time in the same computer. Usually in real-time situations this will involve the facilities of a
particular processor being time-shared between different programs under the control of the organizer program
(see Reference 8).

Fig.6.1 Typical avionics system interconnect diagram using classification system of ARINC specification 419
(DME, CDUs, DADS, FDSUs, ISSs and displays interconnected by coded data links)

Fig.6.2 Block diagram of a typical computer system, showing principal 'highways' or 'busses'. (The diagram
shows a computer — core memory (basic fit and extension), central processor, input/output controller and
computer control panel — connected via a processor-to-memory highway, an input/output highway and a
PCU-to-I/O-controller highway ('close in' or 'bleeding stumps' interface) to peripheral control units — paper
tape controller, slow signal multiplexer, magnetic tape controller and interface, display buffer, data link
control unit and modem — and thence via PCU-to-peripheral highways to peripherals: paper tape readers,
keyboards, miscellaneous analogue signals, tape transports, C.R.T. display units, consoles and a radio link.)

Mp := primary memory, holds data and directly executable programs
Ms := secondary memory, holds data and/or executable programs which are not directly executed
Stm := switch, time multiplexed
Sfx := switch, fixed until changed (i.e. latched)

Fig.6.3 Computer system, as Figure 6.2, expressed in PMS notation (see text)

Computers typically have a variety of operating modes, task priority allocations, procedures and
facilities for protecting data and program from corruption, methods of response optimization, etc., and may require
a powerful control system comprising both hardware and software facilities. For the purpose of this chapter we will
use the term "operating system" for this control system, and call the associated programs the "supervisor" programs,
reserving the term "executive" for the highest level of program control within a supervisor and any associated hardware.
However, it should be appreciated that these software terms are often used synonymously in the literature. The
specification of the response times of a real-time operating system can be complex, as they will depend on the
system state and its previous history. As with the hardware a modular approach to the definition of programs, both
task programs and supervisor programs, so that software can be reconfigured to meet other operational requirements,
is potentially advantageous.
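
The flavour of such a control system can be given by a small sketch. This Python fragment (structure and task names invented) time-shares one processor among task programs by always running the highest-priority ready task; a real executive would also pre-empt running tasks and protect store:

    import heapq

    class Supervisor:
        def __init__(self):
            self.ready = []                  # priority queue of ready tasks

        def schedule(self, priority, name, task):
            heapq.heappush(self.ready, (priority, name, task))

        def run(self):
            while self.ready:                # lowest number = most urgent
                _, name, task = heapq.heappop(self.ready)
                task()                       # runs to completion here; a
                                             # real supervisor would pre-empt

    sup = Supervisor()
    sup.schedule(2, "navigation update", lambda: print("nav"))
    sup.schedule(1, "autopilot loop", lambda: print("autopilot"))
    sup.run()                                # autopilot runs first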

In this design process other practicalities must be recognized. As avionic systems have become more sophisticated
and complex there has been a corresponding requirement for advances in test philosophy and test equipment. A
major development has been the increasing use of built-in test equipment (BITE) to monitor equipments during
flight. This facility may involve monitoring by flight crew in large aircraft, but in small aircraft must be essentially
automatic. Computer control of system testing offers potential advantages and this function may be performed on
a time-shared basis by some computer in the system or by a computer dedicated to the task.

A further consideration is the provision, characteristics and partitioning of power supplies. Although centralized
power supplies for logic circuitry are potentially more economic both to make and to operate, the considerations of
system flexibility and integrity usually lead to power supplies forming an integral mechanical part of the equipment
they supply.

Although the design of primary power supplies is often not under the influence of the computer system
designer, there is usually some strategy in any particular aircraft whereby power buses are allocated to various
services and there are usually special supplies for primary flight instruments and essential services. The computer
system architect should try to utilize these services as appropriate for BITE and standby conditions associated with
system failure and recovery. It is possible, in some applications, to arrange for failure of primary supplies to be
detected while the equipment voltage rails are still within tolerance and for the system to be shut down in an orderly
way under the control of the computer executive ready for re-start when the primary power is re-established.

6.3 METHODS OF ASSESSMENT OF COMPUTING POWER AND INFORMATION RATES

It is typical of many real-time systems that the computational load and information rates can vary with time,
and it is normally a design requirement for the system either to have a data-handling capacity sufficient to cope
with peak-load conditions or to have built-in procedures (e.g., priority structures or changes in operating mode) for
handling peak conditions in a safe way. However the automatic detection of an impending over-load can be difficult
to arrange and the detection facilities themselves may well contribute to the overload.

Another parameter which must be estimated during the preliminary design phase is the memory size required
for both program and data words.

Ideally some method of analysis is required to determine the system work-load, and some measure of compo-
nent performance is required so that the system performance can be matched to the operational load. Although
this is a fundamental task for the System Architect there is as yet no completely satisfactory approach to the
problem. The task involves system analysis, computer architecture and programming. The final stage of any such
analysis is when the problem is defined, coded for a particular hardware configuration, and then run (either on
the actual hardware in real-time or by simulation) using representative system inputs. In practice it is usually
necessary to make assessments of the necessary hardware and programming requirements at some earlier stage of
analysis. Usually there are some constraints to the analysis, for example, it may be that only certain types of
processors and memories of known characteristics can be considered.

It is usually necessary to analyze the computational requirements at various levels, namely the major system
level, the PMS level, and the programming level. A promising formal approach is developing from a discipline
originally aimed at preparing maintenance handbooks, but which has been extended as a method of disclosing the
design of a system, at various levels of detail, as it develops.

At each level of design at least four documents are required:


— A functional block diagram defining the component functional units of the system or equipment, by
showing the information flow between the units and in particular the main signal flow. It also allows the
boundaries of hardware units to be defined.
— A functional block text, laid out in blocks identical with those of the block diagram, with text
(including mathematical relationships) describing the functions of each functional entity within that block.
— A dependency chart, relating functional outputs to the events on which they are dependent.
— A signal specification listing all signals and their origin, destination and type.

Having identified the functions to be performed a possible computer system can be defined at the PMS level
and the functions divided into separately identified tasks or "jobs" which are then allocated to computers or
processors. It is next necessary to establish whether each processor can meet the load placed on it. One technique
for doing this is "job-mix modelling" (see Reference 3). Typically in an avionics system many jobs are repetitive.
For a single iteration of each job the following parameters are estimated.

— amount of data and program obtained from each level of memory,
— execution time,
— amount of data returned to memory,
— input-output requirements,
— minimum periodic execution rate.

Such an estimate involves the use of assumed processor characteristics and an estimation of the number of
program instructions involved. Program estimation is a technique in itself (see a previous chapter). However, initial
estimation is usually possible by some program analysis (i.e., macro flow-charting) allied with estimates based on
previous experience.
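
The estimates can then be combined into a load figure. A minimal Python sketch (all figures invented) in which each repetitive job's single-iteration execution time, multiplied by its minimum periodic execution rate, yields a fractional processor load; the loads must sum to less than unity:

    jobs = [
        # name,             execution time (s),  minimum rate (per s)
        ("attitude update",        0.0008,             50.0),
        ("nav computation",        0.0040,             10.0),
        ("display refresh",        0.0020,             20.0),
    ]

    utilisation = sum(t * rate for _, t, rate in jobs)
    print(f"processor utilisation: {utilisation:.0%}")   # 12% here
    assert utilisation < 1.0, "processor cannot meet the load"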

Programming for avionic applications has usually involved writing programs in some form of symbolic machine
language or assembler language very close to the instruction format of the machine, typically so that there is nearly
a one-to-one correspondence between each program instruction and machine instruction.

With larger programs now being required for certain airborne applications there are advantages in adopting a
more linguistic method of writing programs, to ease communication problems between programmers and to reduce
the problems of generating, checking and maintaining software. The use of such an approach requires the specifica-
tion of "high level languages" suitable for real-time systems and the development of "compilers" to translate from
the program written in terms of high level language statements to the instruction code format of the machine (see
Reference 19). If programming estimates are made assuming a high level language, then translating high level
operations to machine code is necessary in order to arrive at estimates of memory size.

For some simple systems, where jobs can be executed in a fixed sequence and have no significant interaction,
the sum of the execution periods of individual jobs indicates the iteration rate for the total computation cycle,
which should be at least as high as the minimum periodic execution rate for each of the jobs. If it is not it may be
possible to process critical jobs more than once in each main cycle. Where the computer forms part of a control-
loop the techniques of sampled-data control theory can be utilized to determine the calculations necessary and their
minimum rate.
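
A sketch of this check (figures again invented): the sum of the execution times fixes the cycle rate, which is compared with each job's minimum periodic execution rate:

    exec_times = {"attitude update": 0.0008,   # seconds per pass
                  "nav computation": 0.0040,
                  "display refresh": 0.0020}
    min_rates = {"attitude update": 200.0,     # required passes per second
                 "nav computation": 10.0,
                 "display refresh": 20.0}

    cycle_rate = 1.0 / sum(exec_times.values())    # about 147 cycles/s
    for job, need in min_rates.items():
        print(f"{job}: needs {need}/s, gets {cycle_rate:.0f}/s, "
              f"ok={cycle_rate >= need}")
    # "attitude update" fails the test; as noted above, such a critical
    # job could be processed twice in each main cycle, giving an
    # effective rate of about 2 x 132 = 263 passes per second.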

In many applications the jobs interact with each other, and with external systems. For example some jobs
may be required to be initiated as the result of a stimulus (interrupt) from some other system acting autonomously.
The processor will be controlled by some form of operating system, whose response times will depend on the state
of the system and which will allocate computer time between jobs. Typically the following additional parameters
may be involved in job-mix modelling.

— interactions with other jobs,


— scheduling of other jobs via the supervisor,
— initiation of activities leading to future system loading.

At some stage of the design of a new project one hopefully has some indication of the hardware configuration
(including the number of processors), an indication of all or a representative part of the computational load of
each processor, and the size of memories required for each processor.

The characteristics offered by suppliers to match these system requirements would typically be in the form of
an instruction set with execution times and data-rates and response times for input-output. How can a valid
assessment of different processors required to perform a complex, but perhaps as yet ill-defined task, be made?
Ideally from the assessor's point of view a single figure of merit would be desirable. One possibility is to determine
the average number of instructions per second for each processor. However, this does not allow for size of data
word being processed or the efficiency of the program word structure. A number of bases of figures of merit have
been proposed, that of Knight 2 including processing time, input-output time, memory size and word length.

A figure of merit for instruction processing can be calculated by weighting and summing the times for various
classes of instructions. With no weighting one would add the times for each instruction and divide by the number
of instructions to give an average instruction time. Simple initial assessments are sometimes made on one or two
parameters (e.g., add, subtract, and multiply times). For simple single-address machines (i.e., only one address
specified in the instruction format) store cycle-time can give a rough measure of comparative performance. By
analyzing the instruction mixes of actual programs for specific types of applications a number of sets of weightings
or mixes have been established, of which the Gibson mix is probably the best known. However, such factors
as memory addressing structure and data-length must also be taken into account. The latter has been allowed for
in some avionic assessments by indicating a distribution of required accuracies, so that shorter word length machines
have to allow for more double or multi-length working in their mixes.
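The weighted-mix calculation can be illustrated as follows; the instruction times and weights below are invented for the example and are not the published Gibson mix values:

    # A weighted instruction-mix figure of merit; all figures are invented.
    instr_times_us = {"load": 2.0, "store": 2.0, "add": 2.5,
                      "multiply": 9.0, "branch": 1.5}
    mix_weights = {"load": 0.30, "store": 0.15, "add": 0.25,
                   "multiply": 0.10, "branch": 0.20}   # fractions executed

    avg_time_us = sum(instr_times_us[op] * w for op, w in mix_weights.items())
    figure_of_merit = 1e6 / avg_time_us                # weighted instructions/s
    print(round(avg_time_us, 2), "us;", round(figure_of_merit), "instr/s")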

A more sophisticated approach than mixes requires the specification of a set of programs representative
of the application, the execution times for which serve as a basis of comparison, or "bench-marks", between
competing computing systems. A significant advantage of this technique is that the total problem can be represented,
including input-output and, if the program is to be written in some higher level language, compiler efficiencies. If
the bench-mark represents a known fraction of the total computational load, the loading of the computer system by
the bench-mark can be used on a pro-rata basis to establish whether the total system load can be handled.
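A sketch of the pro-rata estimate, under assumed figures:

    # Pro-rata loading estimate; both figures are assumed for illustration.
    benchmark_fraction = 0.25      # bench-mark is 25% of the total load
    measured_utilisation = 0.20    # processor time used by the bench-mark

    estimated_total = measured_utilisation / benchmark_fraction
    print("estimated total utilisation:", estimated_total)   # 0.8 -> acceptable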

A comprehensive bench-mark will involve the use of simulated input conditions, and a sophisticated simulation
may be necessary to determine worst-case conditions. Simulation can either be made in real-time on the actual
machine, or in "simulated" real-time using a program model of the processor running on some other machine.

In all assessments care must be taken to avoid unjustified bias towards one particular processor or processor
characteristic.

6.4 GENERAL PHILOSOPHIES AND TRADE-OFFS

In the previous chapter the allocation of functional tasks to processors was suggested as a trial design process.
In fact there are a number of ways of re-allocating functions and re-defining hardware boundaries which can be
employed by the system designer, but it must be appreciated that these do not alter the problem to be solved;
they only lead to alternative means of solving it.

We have so far implied that the processor is a conventional G.P. (general purpose) processor as defined in the
previous section. In practice a number of other types of processor are at the disposal of the system designer. These
include variations of the G.P. processor dedicated to specific computer system tasks (e.g., input/output processors
dedicated to the management of the transfer of information across the computer interfaces, or display processors
dedicated to the formation of data for C.R.T. display) or extended to give special facilities (e.g., array processors,
which are structured to process data in the form of arrays of one or two dimensions). Another type of computer
is the Special Purpose Processor, usually a set of combinational logic designed to perform a particular task. For
example, the F.F.T. (Fast Fourier Transform) processor is designed to perform spectral analysis of a signal using
the Cooley-Tukey algorithm (see Reference 7). Although in general the same tasks can be programmed on a G.P.
computer, the S.P. computer (using the same hardware technology) usually has substantially higher performance.

It may also be advantageous to consider analog techniques (e.g., electromechanical and electronic analog
computers) or analogous digital techniques. For example the D.D.A. (Digital Differential Analyzer) is effectively
a set of digital integrators which are programmed by interconnection to perform in a way analogous to D.C.
electronic integrators. They can be used to perform continuous calculations, such as resolution through heading or
calculation of position from acceleration. As they can be elegantly realized using time-shared hardware several
early airborne computers were of this form.
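The principle of a single D.D.A.-style integrator can be sketched as follows; the register modulus and input values are arbitrary assumptions:

    # One D.D.A.-style digital integrator: increments of the integrand y are
    # accumulated into a register, and an output increment dz is emitted on
    # each overflow. The register modulus and inputs are arbitrary here.
    def dda_integrator(y_increments, modulus=256):
        r = 0
        for y in y_increments:
            r += y
            dz, r = divmod(r, modulus)      # overflow becomes the output
            yield dz

    # A constant y = 64 with modulus 256 emits dz = 1 every fourth step,
    # i.e., the output increment rate is proportional to y:
    print(list(dda_integrator([64] * 8)))   # -> [0, 0, 0, 1, 0, 0, 0, 1]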

In the design of a major system it is usually necessary to determine the strategy for allocating functions to
processors. This is the classic choice of distributed as opposed to centralized computations, the extreme
configurations being:
- a complete centralized and integrated system capable of performing all the required computations in a single
computer (which could be a multi-processor computer). The tasks, which may be unrelated, are performed
in a multi-programmed mode system,
- a set of distributed processors dedicated to separate functions, loosely federated via a communications
network to give the required total system performance.

The centralized approach usually minimizes the computer system hardware content (because the same data
manipulations and storage can serve several functions) and simplifies communication paths. However, the total
system is vulnerable to a computer fault, the supervisory and control software can be complicated, and the testing
and integration of the separate functional subsystems can be difficult.

Distributed processors offer the possibility of reduced total software development (provided that the software
of each subsystem is transportable from one system environment to another by virtue of "functional" interfaces; for
example an inertial-guidance platform and its computer can be used as a module in different systems), reduced
system development time (because subsystem development can proceed largely independently and in parallel until
final integration), simpler system installation and trials (as each subsystem is largely autonomous), and greater system
fault tolerance (a single fault should affect only one subsystem).

In practice most major avionic systems will be a compromise between the two extremes, e.g., some centralized
activities and some dedicated processors.

Communication of information in digital form between peripheral subsystems and computers takes place over
groups of wires, often called highways. If one looks at simple computer systems, such as that shown in Figure 6.3,
a number of highways and associated interfaces or ports can be identified.

— Peripheral device to peripheral control unit (P.C.U.) interface. This is largely dictated by the peculiarities of
the peripheral and may not be under the System Designer's control. The interface may include analog
signals, which require conversion in the P.C.U. to digital form using the techniques of a previous chapter.
— P.C.U. to computer input/output controller highway.
— Controller to processor highway. This is usually closely dictated by the detailed design and timing of the
processor, and is therefore not controlled by the System Designer.
— Processor to memory highway.

It should be noted that some of these terms (such as P.C.U.) are not universally accepted and other synonyms
may be encountered. However, in principle all specific realizations of these components can be represented in terms
of a PMS structure.

In general data is required to be transferred between the peripheral and the processor memory, where it can
be manipulated by the processor. This process is analyzed in more detail in Chapter 3, but here we will just
recognize the three modes of data transmission that are possible:
— via the processor under program control, the processor being devoted to the task at that time,
— via the processor under the control of the input-output equipment, the processor "hesitating" in its normal
routine. Intervention by program is normally required at the end of a data transmission sequence (e.g., a
"Program Interrupt"). This type of input can be arranged to be either processor initiated or peripheral
initiated. This form of input has been termed "Data Interrupt",
— direct into the processor memory, by-passing the processor. This technique requires a memory highway
either from the I/O controller to the memory or from the peripheral P.C.U. to the memory. Again some
intervention by program is normally required at the end of the sequence. This technique is a form of direct
memory access (D.M.A.). The concept of memory modules allowing both processor and peripheral access is
called "Ported Storage".

Of the four highways classified above, two, the PCU to I/O Controller highway and the Processor to Memory
highway, can be influenced by the System Designer. There is a good argument for the standardization of these
interfaces, with resultant advantages of system flexibility. However, the interfaces must be designed to handle the
most demanding peripheral, and there are a number of ways of implementing the basic features needed by the high-
way system. The I/O Controller and PCU for example must:
— be able to access the processor memory without corrupting other processor activities,
— be made to interpret words from the computer as either data, control or addressing and pass these to the
peripheral; and to send data, addresses, and information about the status of a peripheral to the computer,
— provide any buffering memory necessary to prevent the peripheral holding up the computer,
— provide some method of allocating priority of service to peripherals, so that simultaneous requests for
service can be dealt with.

Individual manufacturers have been able to standardize on I/O interfaces and memory "ports" and the
associated engineering and programming codes of use, but little success has yet been achieved at defining international
standards covering a range of peripherals and memories. Some manufacturers have adopted common I/O and
memory interfaces, which offers interesting system configuration possibilities. However, there are disadvantages to
this, as the memory interface may be of higher performance (in information rate) than is justified for input/output.

Consider the design of a highway system (e.g., links, switches and ports) to interconnect a single I/O controller
fitted with a number of ports with a number of peripherals. Assuming duplex links and switches it would be possible
to interconnect I/O ports with peripherals using a cross-bar switch (as shown in Figure 6.4). Such an arrangement
would allow simultaneous I/O dialogues and alternative switching paths in the event of switch failure. However, in
many applications the potential parallel working of a cross-bar switch cannot be utilized and it is more economic
in hardware to share links by time multiplexing them.
Fig.6.4(a) Input/output switching using a cross-bar or cross-point switch (PMS notation)

[Figure: sources 1 to 3 (e.g., air data, compass, Doppler) broadcasting on dedicated highways to a computer
(e.g., navigation computer), which in turn broadcasts navigation output, e.g., lat, long, velocity]
Fig.6.4(b) Data distribution by broadcast

[Figure: a sequence controller with label store places labels on a time-shared highway connecting peripheral
sources and destinations (including computers), each with a data buffer]
Fig.6.4(c) Data distribution organised by an autonomous controller



A very simple method of distributing data, which has already found wide application in avionic systems, is for
a particular source of data to have a dedicated highway on which it broadcasts periodically, and in a fixed format,
its data output at a refresh rate high enough to provide, for all practical purposes, continuous data (see another
chapter). A computer requiring particular data "listens-in" to the appropriate highway. Such an arrangement is
shown in Figure 6.4(b).

A more sophisticated method is to arrange for some form of P.C.U. or autonomous controller to determine the
sequence in which data from a particular source is listened to by a particular destination using a time-shared bus.
Basically the controller has a local memory in which it stores the labels for the data types required, in the sequence
in which they are to be called. The next label is taken from memory and placed on the highway, where it is recognized
by the appropriate source peripheral and by any destination peripherals requiring it. In the next step the source
peripheral generates the required data and the interested destination peripherals accept it. The sequence is then
repeated, starting with the next label from the controller's local memory. In this arrangement a
processor can be joined to the highway (see Figure 6.4(c)). A development of this system is to allow peripherals to
indicate when they have data ready for output by raising a common "attention" line. The controller then scans
the peripherals in turn ("polls") until the demanding peripheral is detected and then serviced.
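One conceivable software model of such a label-driven controller is sketched below; the labels, sources and destination lists are invented for illustration:

    # A software model of the label-driven bus controller described above.
    sources = {                       # label -> source reply (the data word)
        "AIR_DATA": lambda: 0x0123,
        "COMPASS":  lambda: 0x0456,
    }
    destinations = {                  # label -> destinations requiring it
        "AIR_DATA": ["nav_computer"],
        "COMPASS":  ["nav_computer", "autopilot"],
    }
    label_store = ["AIR_DATA", "COMPASS", "AIR_DATA"]   # controller memory

    for label in label_store:         # next label placed on the highway
        data = sources[label]()       # source recognizes it and replies
        for dest in destinations[label]:
            print(label, "->", dest, hex(data))   # destinations accept it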

Broadcast schemes may be used where relatively low data rates and response times can be tolerated. Where
higher performance is required it is usual for the computer to control the input/output sequence and allocate
priorities, although peripherals may be allowed to bid for service. Data messages are now not usually repeated, so
that it is important to check that a message has been received correctly. In a system with a centralized computer
a number of strategies for highway organization are possible. In Figure 6.5(a) each P.C.U. is connected to the I/O
controller by its own dedicated highway for the transmission of control signals, addresses and data. This arrange-
ment is termed a "starred" or "radial" highway system and is a relatively simple approach as all addressing and
priority conflicts can be resolved by hardware in the I/O controller or software in the processor. Another common
approach is to "bus" a single highway so that it calls in turn at each P.C.U. (is "daisy-chained") or each P.C.U. is
connected to a common highway by spurs from that highway (see Figure 6.5(b)).

This "bussing" minimizes system wiring, but presents difficulties when peripherals call for service simultaneously.
A reasonable compromise is to star some control signals to resolve priority conflicts (Fig.6.5(c)).

In determining the signal format and rates for highways it is often found that most avionic peripherals can be
serviced by quite moderate capacity interfaces (typically less than 500 kilobits a second in commercial systems, and
1 megabit a second in military systems) but a minority require significantly higher information rates. One possible
solution is to give the P.C.U.'s controlling such devices direct access to memory via their own ports on the memory
interface. This technique is called "ported storage" and requires special logic in the interface to resolve priorities
between ports. Ported storage is also used as a means of communication between processors (that is in "multi-
processor" systems). For example in Figure 6.5(d) the memory highway of each processor interfaces to a common
block of memory.

A further technique is to treat all external peripherals as part of the memory, all peripheral highways being
multiplexed onto a memory highway (Fig.6.5(e)).

A further fundamental decision is the control timing philosophy. Two basic methods are available: strobe
and handshake. In the first the data together with a validating strobe or clock signal is transmitted from the source
and sufficient time allowed for it to be propagated down the highway and recognized at the termination. Strict
control of timing and signal overlap is required. For the handshake case, two signals (J and K) are required. The
acceptor requests data with the J signal; the source indicates with the K signal that it has placed data on the high-
way; the acceptor removes J when it has accepted the data; and finally the source removes K when it has cleared
the highway again.

This handshake is not subject to timing rules and changes can be made in the length of the highway without
system timing having to be modified. However, more signal transitions of the highway occur in transferring one
word.
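The four phases of the J/K handshake can be made explicit in a small sketch (modelled as sequential state changes, not real hardware):

    # The two-signal (J/K) handshake, modelled as sequential state changes.
    def handshake_transfer(word):
        highway = None
        j = True                        # acceptor raises J: requests data
        highway, k = word, True         # source places data, raises K
        accepted, j = highway, False    # acceptor takes data, drops J
        highway, k = None, False        # source clears highway, drops K
        return accepted

    print(handshake_transfer(0x5A))     # -> 90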

Very often a system justifying a sophisticated I/O system to deal with some of its peripherals will have a
number of slowly changing digital or analog signals which justify a simpler approach. A suitable approach is to
multiplex such signals together and present them in the form of a sequential scan which is input on a single I/O
channel.

Where communication with peripherals remote from the main body of the computer system is required, some
special form of link may be justified. Normally such a link will be serial in form and probably simplex, the data
and clock signals being combined as a single signal, the clock signal being implied in the data signal message
structure. For long distance transmission the digital signal is used to modulate a carrier suitable for transmission
over land-lines and radio-links. This is known as a data-link.

Fig.6.5(a) Starred input/output highway

[Figure: a common highway serving peripherals either "daisy-chained" (the highway calls in turn at each unit)
or "spurred" (each unit connected to the common highway by a spur)]
Fig.6.5(b) Bussed input/output highway

[Figure: bussed data/control highway to the peripherals, with a separate starred control highway]
Fig.6.5(c) Starred control highway/bussed data highway

[Figure: low data-rate peripherals served via the input/output highway; a high data-rate peripheral with direct
access to a memory port; a store with ports to both processors]
Fig.6.5(d) Uses of 'ported storage'

Mp- Lm- • Pc

Lm K Lm = high speed memory highway


I Lio • bussed input/output highway
s • -Lio K T
multiplexor
Lm K K T

Lio- K T

multiplexor
K T

Fig.6.5(e) Input-output treated as extension to memory



All highway and link systems involve the definition of the electrical standards and codes of practice (e.g., line
terminations) to be used to ensure adequate performance in the electrical environment likely to be encountered.

6.5 RELIABILITY CONSIDERATIONS

The wider application of digital computers in aerospace systems is critically dependent on the achievement of
high reliability and the development of techniques to ensure that, when faults do occur, the effect on the system
should not be catastrophic. Faults can occur externally to the computer system, as for example due to the failure
of primary power supplies or the input of invalid data, or within the system, either due to the incorrect functioning
of hardware components or errors in software programs.

Initially the main effort has been directed to improving hardware reliability by good component and equipment
design, supported by the development of quality assurance and reliability engineering techniques aimed at detecting
and remedying weaknesses in equipment design during the development, manufacturing and in-service phases of the
equipment's life. With the improvement in component technology it is becoming increasingly possible to use
functional redundancy of components and equipments so that, in the event of the failure of a particular element,
its role will be carried out by other elements in a manner such as to maintain system performance (see Reference 9).
This philosophy is already widely applied in other aerospace disciplines, such as airframe design and hydraulic
control system design. Such digital systems are said to be "fault-tolerant", the degree of fault tolerance required
being derived from the target for the reliability of the system.

Typically it will be necessary to define the type of fault that will be tolerated (permanent, intermittent,
transient), the total number of faults, the minimum time between faults, and what degradation of system performance
(both short-term and long-term) is acceptable.

Software faults can arise from circumstances which have not been correctly anticipated (e.g., peak loading) or
from actual errors in programming or from the input of invalid data from external sources. This latter fault gives
rise to the concept of data integrity - the assurance that data, particularly that transmitted from one data area of
memory to another or one system to another, is valid. The problem of comprehensively testing system software is
discussed in another chapter.

Fault-tolerance and data integrity can be provided by a combination of hardware and software techniques
(see References 4, 5 and 6). As may be expected there is a trade-off between what is possible and what is practical.

A modern real-time computer usually has a number of operating modes or levels associated with its operating
system. The executive level has privileged modes of operation (e.g., access to all memory, internal status registers,
etc.) which are denied to task programs, so as to restrict the interaction between such programs, and hence the
propagation of faults. The operating system is designed to respond in a coordinated manner to changes in system
requirements and status (including system malfunction) which may be indicated by status words, program interrupts
and data interrupts. An outline structure of a typical real-time operating system is shown in Figure 6.7, where the
authority of control increases from bottom to top.

Katzan (Reference 8) identifies seven main properties of an operating system, and these all warrant the attention of the
system architect:

Access — how the user operates with a system. An avionic system is usually sensor driven,
although it may involve man-machine interfaces.

Utilization — the manner in which the system is used. Avionic systems are normally pre-programmed
although operators may call up alternative programs from a backing store if alternative
or reversionary modes of working the system are required. Most avionic applications do
not involve a large data-base and associated data management techniques.

Performance — deals with quality of service. As the avionic system is real-time and sensor driven, the
system must provide adequate response time and through-put.

Scheduling — determines how processing time is allocated to jobs. Typically an operating system will
operate on a priority basis with a number of priority queues (e.g., corresponding to data
interrupts, object programs, and system test programs) with facilities to dynamically
reallocate priorities according to system status.

Storage Management — concerned with the allocation of storage to tasks. Typically storage can be allocated to
tasks in blocks, the limiting addresses of which are determined by the contents of base
and limit registers. Any attempt by a task program to communicate outside its allocated
area will be prevented by hardware and the executive notified by interrupt. By adding
additional bits to the memory word length individual words can be protected (for example
a bit can be added which, if set, prevents the particular word from being over-written)
or checked for corruption (e.g., by the use of an odd parity bit, which is set if the
number of ones in the corresponding binary word is odd). Any attempted violation of
the protection, or a failure of parity checks, is detected by hardware and again notified
to the supervisor by interrupt (a sketch of these checks is given after this list).
Sharing — the functional capability of the system to share programs, data and hardware devices. The
extent to which different task programs share subroutines, common data and input/output
facilities must be decided, and the appropriate facilities and software interfaces established.
Configuration Management — this is concerned with the real physical system and how it appears to the task programs.
It is required to define how the system is organized and how the organization can be
varied by the executive. For example, a computer may use the technique of virtual
storage, where each task program is written as though it has available a continuous range
of memory addresses, although the real memory of the machine is shared between all
the computer programs and the actual addresses allocated for a particular program need
not be consecutive. Thus a translation is required between the virtual memory addresses,
which are continuous, and the real addresses, which are not. This dynamic address
translation requires special hardware facilities in the machine. It may be a requirement
for it to be possible to remove dynamically a failing module from a system. One example
of this is the reallocation of memory in the event of a limited memory failure by use of
the base and limit registers. An extension of this idea treats all input/output as memory
and defines also, by loading additional registers, the type of access that is permitted.
Memory and input/output facilities can then be reallocated under executive control
(Reference 11), although an alternative approach is to allow the registers to be controlled
by a special very reliable configuration control module (Reference 12).
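A minimal sketch of the base/limit and odd-parity checks mentioned under Storage Management; the register values and the 16-bit word length are illustrative assumptions:

    # Base/limit access check and odd-parity generation, as described above.
    def check_access(address, base, limit):
        # in hardware a violation would raise an interrupt to the executive
        if not (base <= address < limit):
            raise MemoryError("access violation at %#x" % address)

    def odd_parity_bit(word, bits=16):
        ones = bin(word & ((1 << bits) - 1)).count("1")
        return 0 if ones % 2 == 1 else 1    # make total number of ones odd

    check_access(0x120, base=0x100, limit=0x200)   # permitted
    print(odd_parity_bit(0b1011))                  # 3 ones -> parity bit 0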

Although the above operating system properties are not sufficient to classify specific systems automatically,
they do form a useful basis for comparison.

The techniques of functional redundancy can be classified into two types:
— fault masking redundancy,
— standby redundancy.

Fault masking redundancy is achieved by implementing the function so that it is inherently error correcting.
One such approach is Triple Module Redundancy (T.M.R., see Figure 6.6), where a function is performed by each of three
identical modules working in parallel and a vote taken of the outputs, the majority signal being accepted as the
true output. The level of modularity used can be from, say, logic-gate level upwards. Another approach is the use
of redundant codes which allow errors, resulting for example from transmission, storage and arithmetic and logical
operations, to be automatically detected and corrected.
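The voting element of Figure 6.6 reduces to a bit-wise majority function, as the following sketch shows:

    # Two-out-of-three majority vote, computed bit by bit over three words.
    def vote(a, b, c):
        return (a & b) | (b & c) | (a & c)

    # a single faulty channel is masked:
    print(vote(0b1010, 0b1010, 0b0110) == 0b1010)   # -> True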

In standby redundancy the hardware system is divided into suitable modules and when a particular module is
believed to be faulty it is replaced by a spare standby module. This technique therefore involves:
— detection of a fault,
— location of a fault down to at least the replacement module level (with detection this constitutes diagnosis),
— prevention of the propagation of the fault, the corruption of essential data and the output of invalid data
or control signals likely to cause catastrophic effects,
— reconfiguration to give a working system and restart from a valid state (recovery).

The control of the above action may be by software or hardware or a combination of the two and the system
may be self-repairing or repaired under external control.

All of these functional redundancy techniques involve the use of additional hardware. In fault-masking logic
this is the redundant modules, the voting logic, or the coding and decoding logic. Standby redundancy involves
not only the replacement modules but also the extra equipment required to diagnose the faulty modules (e.g.,
BITE) and effect the necessary switching to replace them. Extra equipment involves an increased probability of
component failure and increased equipment cost. Thus the design of a reliable system involves trade-offs which are
extensions of the system considerations already discussed. Again a modular level of analysis is applicable, but now
involves the reliability of possible modules and the corresponding additional hardware involved in the functional
redundancy. The reliability of the individual modules and additional hardware can be computed from the failure
rates of the basic components if these are known. Often measured failure rates for the components used under
the appropriate conditions of use are not available and some acceptable set of representative figures is used as a
basis of comparison. However, it must be remembered that, under these circumstances, the absolute failure rates
predicted for the system may not be valid.
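As an illustration of such a computation, the sketch below compares a simplex module with a T.M.R. arrangement, using the standard exponential failure model and a representative, not measured, failure rate; a perfect voter is assumed:

    # Simplex versus T.M.R. reliability under the exponential failure model
    # R(t) = exp(-lambda*t); failure rate and mission time are assumptions.
    import math

    lam = 1e-4                      # module failure rate, per hour (assumed)
    t = 1000.0                      # mission time, hours

    r = math.exp(-lam * t)          # simplex module reliability
    r_tmr = 3 * r**2 - 2 * r**3     # at least 2 of 3 modules must survive
    print(round(r, 4), round(r_tmr, 4))   # -> 0.9048 0.9746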
[Figure: a single T.M.R. stage; three identical functional modules (channel 1 to channel 3, type A) feed
two-out-of-three voting logic, with individual channel fault indications; V = two out of three voter]
Fig.6.6 Triple module redundancy (TMR)



[Figure: machine boundary enclosing, in order of decreasing authority: the executive (alarm interrupts, override,
hardware interface, configuration control of memory base and limit registers and access registers); the supervisor
(fault indication and status, fault procedures, priority assessment, time-out and functional failures, data
interrupts, input/output control, program interrupts, program selection with multi-activation and priority
assessment, real-time clock, cyclic flagging, interlaced test programs); task or object programs (flags for
service, memory); and base load programs (zero priority, test programs)]
Fig.6.7 Typical general structure of an operating system



Among the advantages of the masking technique are the immediacy of the corrective action, its capability of
dealing with permanent and transient faults, and the relative ease of conversion of a non-redundant design to a
redundant one. The disadvantages include the need to keep all the redundant units powered, the difficulty of
synchronizing the clocks of parallel units, and the difficulty of pre-mission check-out of permanently wired units.

Systems based on standby redundancy have the advantage that all the spares can be utilized (i.e., the system
will continue to operate down to the simplex level), the number of spares at each stage can be optimized in terms of the
reliability of the modules, only the on-line units need be powered, and the replacement switch provides fault
isolation between modules. Usually with each additional failure the capability of the system is reduced, although it
continues to operate in a degraded way: the so-called process of "graceful degradation". As interconnecting high-
ways and switches may themselves fail, usually some redundant form of interconnecting highway is required,
increasing the complexity of the system design considerations discussed in a previous section.

Most published designs of fault-tolerant systems are based on standby redundancy, but use error detecting codes
as an aid to fault location, and masking logic in the basic control equipment necessary for system reconfiguration.
Significant improvement in mission reliability is only possible if the switching devices and monitors used for recon-
figuration are much more reliable than the functional modules being switched.

A further consideration for the system designer is the choice of memory technology. The conventional core
memory is a destructively-read random access memory (R.A.M.) system; that is, data in memory are destroyed during
the reading operation and must be restored from an external buffer store if they are to be retained. This implies that there
is a possibility of information in store being corrupted by electrical interference during operation, although core
memories are normally designed to minimize such occurrences and can, for example, be designed to shut down in a
safe sequence in the event of primary power supply failure. There are techniques for reading magnetic core and
thin-film memories non-destructively, or alternatively hard-wiring the information into the store, to give read-only
memory (R.O.M.) which can be used for essential data and program, including programs required for recovery.
Similar techniques are possible with memories based on semi-conductor technology.

It is also possible to use some form of "backing store" as the source of program and initial data in a recovery
process, for example to replenish a R.A.M. store which has experienced a transient fault. The backing store holds
the data in some reasonably permanent form (for example on punched plastic tape or as digitally encoded magnetic
tape). Typically such stores have relatively slow reading speeds, which must be allowed for in designing the recovery
sequence.

The above assumes that it is economic to use immediate access memory for both program and data. In some
applications (e.g., of a management type) the amount of storage required may justify the use of some form of bulk
storage such as drums, discs and magnetic tape, albeit that such devices are difficult to design for use in a hostile
environment. The system architect is then involved in planning a memory hierarchy and data management system.

6.6 EXAMPLES OF AVIONIC SYSTEM ARCHITECTURE

Other AGARD publications have given examples of the application of computers in aerospace systems
(References 12, 13 and 14) and Langley (Reference 19) gives a useful review of avionic data processing requirements.

A simple matrix for system classification is given in Figure 6.8, and a number of typical avionic systems have
been related to it. The matrix identifies systems as being potentially capable of producing catastrophic and non-
catastrophic failure modes. However, the failures of a system are usually complex and lead to a spectrum
of effects which can be analyzed by the Fault-Tree Analysis techniques described in another chapter.

It is difficult to think of any applications of dedicated or integrated single computer systems which can cause
catastrophic failure, as usually the user has required some form of manual override or reversionary mode to be
possible.

Interesting examples of system architecture for space applications have been described in the literature (see
References 14 and 17), but understandably little information has been published relating to modern military aircraft
and satellites.

However it is apparent that the implementation of the avionic systems of a number of new military aircraft
has been made possible by developing the system within a defined system architecture (e.g., References 20 and 22).
It is anticipated that this approach will considerably simplify the addition of new equipments or operating modes
during the life of the aircraft concerned.

A particularly interesting area of international activity is the development of Long Range Patrol Aircraft having
both a surveillance and attack role. Such aircraft typically carry a number of sensors, such as active and passive
radars, infra-red scanners, and optical devices (search lights and Low-Light T.V.) for detecting air and surface targets;
passive and active sonobuoys and Magnetic Anomaly Detectors (M.A.D.) for detecting sub-surface targets; plus an
array of missiles, guided and conventional bombs and torpedoes for air-to-surface attacks. Given also that such
patrol aircraft often operate in cooperation with other aircraft and ships and that this involves the exchange of
tactical data, often automatically via data-link, it is apparent that this is an application where most aspects of
computer architecture are involved.

Aircraft, not safety- or mission-success-essential:
    Dedicated (special purpose computer): Inertial Navigation System; Air data unit; Engine health monitor (Chapter 11).
    Integrated, central computer (single processor): Navigation/Attack; Stores management; Automated ground/air data exchange; Area navigation.
    Integrated, central computer (multi-processor): Sensor processing; Airborne early warning system.
    Federated (multi-computer): Long range patrol aircraft system, including sensor processing (Ref. 21).

Aircraft, safety- or mission-success-essential:
    Dedicated: Head-up display system.
    Integrated (single processor): Terrain following/avoidance; Auto landing/Auto-pilot.
    Integrated (multi-processor): Stability augmentation.
    Federated: Full authority engine control.

Spacecraft/Missile, not safety- or mission-success-essential:
    Dedicated: Inertial reference.
    Integrated (single processor): Satellite data handling (Ref. 15).
    Integrated (multi-processor): Space laboratory information system (Ref. 14).

Spacecraft/Missile, safety- or mission-success-essential:
    Dedicated: Missile auto-pilot; Missile guidance.
    Integrated (single processor): Spacecraft guidance, with manual reversion (Ref. 13).
    Integrated (multi-processor): Deep-space probe (Ref. 17).
    Federated: Launcher guidance and control (Ref. 13).

Fig.6.8 Classification of typical computer-based avionic systems

A speculative block diagram of a possible system for such a patrol aircraft is shown in Figure 6.9. An illustration
of the complexity of a typical aircraft installation is provided by Figures 6.10 and 6.11. The various sub-systems are
divided into Sensor, Tactical, Flight Control and Weapon areas. Several of the sensors mentioned can generate large
quantities of analog and digital data (e.g., radar video) at high rates. The design of the processing of such signals
involves trade-offs between processing methods (e.g., the use of analog, special-purpose and general-purpose
processors) and a choice of ways of spreading the processing loads (e.g., using a central high-power processor or a
number of lower-powered processors). The strategy is to attempt to confine the high-rate signals to the sensor-
processing areas and to spread the computing load by using buffer-storage and ported-storage techniques. The
objective of the sensor processing during the surveillance role is to present essential target and other data to the
tactical system so that information can be correlated and compiled into a tactical picture which a human operator
can use to make tactical decisions and initiate or control any subsequent attack phase. The signals transmitted to
and from the tactical sub-system are relatively low rate signals (e.g., target classification and position) which can be
handled by bussed serial highways and serial data-links. The assessment of the tactical situation involves threat
evaluation, engageability assessment and the allocation of weapons to targets, and such decisions can be computer
aided. The final attack phase involves the check-out, initialization, firing and guidance of weapons, perhaps via a
"stores management" sub-system, and possibly the control of the aircraft flight path either directly via the auto-
pilot or indirectly by displaying "director" signals to the air crew. In the diagram a number of signals between the
flying controls and the control surface actuators are shown as being digital. This assumes a "fly-by-wire" philosophy
which has yet to gain general acceptance.

Chapter 8 surveys the total systems and computer architecture considerations involved in a specific application:
the control of jet engines. This is an interesting application inasmuch as it is proving difficult to get digital
techniques accepted for full-authority control even though experimental digital systems have been successfully
demonstrated. It is a typical case of replacing an established and proved technique (namely hydro-mechanical and
electrical control) when aircraft safety is directly involved.

REFERENCES
1. Bell, C., et al. Computer Structures: Readings and Examples, McGraw-Hill Book Co., New York, 1971.

2. Knight, Kenneth E. Changes in Computer Performance, Datamation, Vol.12, No.9, pp. 40-54, September 1966.

3. Malach, E.G. Job-Mix Modelling and System Analysis of an Aerospace Multiprocessor, IEEE Trans. on Computers, Vol. C-21, No.5, pp. 446-454, May 1972.

4. Avizienis, Algirdas. Fault Tolerant Computing, an Overview, Computer, Vol.4, No.1, pp. 5-8, January/February 1971.

5. Carter, W.C., Bouricius, W.G. A Survey of Fault-Tolerant Architecture and its Evaluation, Computer, Vol.4, No.1, pp. 9-16, January/February 1971.

6. Elspas, Bernard, et al. Software Reliability, Computer, Vol.4, No.1, pp. 21-27, January/February 1971.

7. Cochran, W.T., et al. What is the Fast Fourier Transform? IEEE Trans. on Audio and Electroacoustics, Vol. AU-15, No.2, pp. 45-55, June 1967.

8. Katzan, Harry. Operating Systems Architecture, AFIPS Conference Proceedings, Vol.36, pp. 109-117, May 1970.

9. Von Alven, William H., et al. Reliability Engineering, Prentice-Hall Inc., New Jersey, 1964.

10. Williams, R.K. System 250 - Basic Concepts, Conference on Computers, Systems and Technology, IERE Conference Proceedings No.25, pp. 157-168, October 1972.

11. Crapnell, L.A. An Economic Architecture for Fault Tolerant Real Time Computer Systems, Conference on Computers, Systems and Technology, IERE Conference Proceedings No.25, pp. 119-130, October 1972.

12. Leondes, C.T., et al. Computers in the Guidance and Control of Aerospace Vehicles, AGARDograph No.158, February 1972.

13. Miller, J.E., et al. Space Navigation Guidance and Control, AGARDograph No.105, 1966.

14. Keonjian, E., et al. Automation in Manned Aerospace Systems, AGARD Conference Pre-Print No.114, October 1972.

15. Remmington, J.E., et al. An Outline of the Ferranti Data Handling System Proposed by the Sud-Aviation Group for Project L.A.S., Conference on Aerospace Computers in Rockets and Spacecraft, C.N.E.S., Paris, December 1968.

16. Ramamoorthy, C.V., et al. Special Issue on Fault Tolerant Computing, IEEE Trans. on Computers, Vol. C-20, No.11, November 1971.

17. Hopkins, Albert L. A Fault Tolerant Information Processing System for Space Vehicles, IEEE Trans. on Computers, Vol. C-20, No.11, pp. 1394-1403, November 1971.

18. Ralston, Anthony. Introduction to Programming and Computer Science, McGraw-Hill Book Co., New York, 1971.

19. Langley, Frank J. A Universal Function Unit for Avionic and Missile Systems, Proceedings of the National Aerospace Electronics Conference, pp. 178-185, published by the IEEE, New York, 1971.

20. Elson, Benjamin E. B-1 Avionics are Geared to Operational, Growth Needs, Aviation Week and Space Technology, pp. 52-54, April 23rd 1973.

21. Plattner, C.M. Advanced ASW Gear, Space Economy Mark Design of S-3A, Aviation Week and Space Technology, pp. 95-107, September 15th 1969.

22. Elson, Benjamin E. AWACS Uses Flexible Computer, Aviation Week and Space Technology, pp. 106-109, September 11th 1972.
[Figure: external environment (air, surface and sub-surface targets) sensed by sensor subsystems, including
floating acoustic sensors (sonobuoys: passive, e.g., DIFAR; active, e.g., CASS) and radio aids (e.g., beacons,
OMEGA, LORAN, DECCA); tactical subsystems comprising a tactical controller and a weapon controller, each with
operator's displays and controls, linked to co-operating forces; flight control subsystems and weapons subsystems.
Key distinguishes analogue signal paths, high data-rate digital signals, low data-rate digital signals, and
possible digital computer applications]
Fig.6.9 Block diagram of a typical avionics system for a military long range patrol aircraft

Fig.6.10 An example of modern avionics system installation in a patrol aircraft

Fig.6.11 View of modules of avionic equipment installed in bays of patrol aircraft



CHAPTER 7

DEFINING THE PROBLEM AND SPECIFYING THE REQUIREMENT

S.Boesso and R.Gamberale

7.1 INTRODUCTION

The definition of the computer characteristics is largely an iterative cut-and-try process, where sets of often
conflicting parameters have to be chosen in order to satisfy the requirements at the minimum possible overall cost.

The primary requirements to be satisfied are functional, i.e., concern the capability of the computer to
perform the tasks assigned to it within the available time.

Other essential requirements are physical, as the computer must be able to operate properly in a certain
environment with acceptable reliability and maintainability and its weight, volume and power consumption cannot
exceed certain limits.

The present Chapter will deal only with the functional requirements and will aim at the definition of a
methodology for deriving them from the knowledge of the tasks to be performed.

The considerations presented may be applied both to determine the suitability of a certain computer architecture
and to compare different computers against a given application.

In any case, it should be kept in mind that no computer can be judged as adequate or inadequate in itself,
but only with reference to a well-defined job that it would have to handle.

Therefore it is necessary that the job, or mission, be clearly described, quantitatively and in terms of what
is desired of the computer, so that the adequacy of a chosen architecture can be verified.

The treatment will start with a brief survey of typical tasks of an avionic system, from which a sample will
be picked out to be further analyzed as an example. System functions will then be introduced, to arrive at
defining what the computer is expected to do.

Finally, the computer tasks will be analyzed, also with the aid of examples, in order to show how the computer
requirements can be arrived at.

7.2 SURVEY OF TYPICAL TASKS OF AN AVIONIC SYSTEM

Avionics systems of today's aircraft are intended to perform, or to aid the crew in performing, a multitude of
tasks. Some of them (e.g., stores management) are peculiar to combat aircraft, others (like navigation) are
common to the military and civil sides of aviation.

Typical tasks have been identified and will be concisely recalled here. The definitions given in the following
text, although not pretending to be standard, are considered to be general enough to cover variations which may
be encountered in individual real cases.

Navigation and Guidance


This is defined as the determination of:
— present position of the aircraft with respect to the earth, by processing sensor data,
— course and distance from present position to selected destination points (steering information).

These parameters will normally have to be displayed to the crew, which in turn will have a means of introducing
position corrections in the computation, when the aircraft is flying over a known reference point. This latter action
is called "position fixing" and consists of the identification of reference points (on a map display and on the terrain),
determination of the displacement between computed and actual position and relative correction of the navigational
computation.

Fuel Management

This task consists of the calculation and display of:

— Fuel remaining,
— Maximum range,
— Endurance at present flight conditions,
— Optimum range or optimum endurance with related required flight conditions.

Engine Control

The avionic system has to monitor and precisely control engine performance under the actual flight conditions
encountered, i.e.,

— Receive signals, from airframe and engine, that indicate present operating parameters,
— Store these data and compare actual performance with stored data indicating desired performance under
given flight conditions,
— Drive electromechanical devices to modify fuel flow or engine geometry to attain desired performance.

Stores Management

The "stores" are the missiles, rockets, and bombs carried by attack aircraft either under their fuselage or wings
or in the weapons bay. For this task, the computer-based avionic system has to carry out calculation and display
for the following actions:

— Selection of weapon stations (missiles and/or bombs),

— Weapon fusing,
— Weapon release (in proper sequence),
— Safety interlocks for weapons,
— Provide information on number and distribution of remaining stores,
— Indicate failure of a bomb or a missile to release.

Weapon Management (Air-to-Air Combat)

The weapons are in this case the aircraft guns and the air-to-air missiles carried on board. The avionic system,
which is a key aid for the pilot to hit the target, has to:

— Calculate range rate and rate of turn of aircraft,


— Calculate lead angles to displace target marker,
— Indicate to the pilot that the target is at a suitable range,
— Generate release instructions for air-to-air missiles or for the guns.

Air-to-Ground Attack

This task precedes, in time, the stores management, and consists mainly of:

— Target Acquisition (like steering for navigation),

— Target Tracking (inclusive of aircraft steering),

— Ballistic calculation for each type of weapon chosen and generation of weapon release information.

7.3 FROM OPERATIONAL REQUIREMENTS TO SYSTEM FUNCTIONS

An examination, with an engineer's eye, of the tasks described in the preceding section shows that the avionic
system has to perform a few fundamental, clearly identifiable, functions which apply, more or less, to all tasks.
First of all, the system will have to acquire information from the outer world or from the crew: for example, flight
conditions, operator requests, engine parameters, radar range, etc. Such data are normally supplied by suitable
sensors (of pressure, speed, acceleration, temperature, etc.), converted into digital form, distributed and processed
as needed. The results of the processing are utilized to produce commands (for modifying the flight conditions,
for firing weapons, etc.) or are displayed to the crew to help them take decisions on the mission.

A number of data, either raw or processed, are stored, to be utilized at a later time, either in flight or on the
ground. Finally, the key parameters of the on-board systems (air-frame, engines, avionics, weapons) have to be
continuously monitored, in order to reveal incoming malfunctions and allow corrective actions to be taken, either
automatically or by the crew.

The following fundamental functions may thus be identified:

(a) Data Acquisition


— Collection, conversion, formatting of sensor-generated data in order to allow their subsequent processing,
— Collection of commands generated by the crew.

(b) Processing
— Application of mathematical and logical algorithms on collected data, in order to extract information
and commands required for the mission.
— Interpretation of new commands and execution of related actions.

(c) Data and Command Distribution


— Distribution of data and commands, resulting from processing, to on-board users (i.e., actuators of the
guidance system, of the weapons, etc.).

(d) Data Storage


— Storage of mission parameters, introduced either on the ground before flight or in flight via a
telecommunication link.
— Recording of information collected in flight.

(e) Data Display


— Display to the operator in different forms of data obtained as results of processing (tabular, alpha-
numeric, moving map, head up, synthetic-on-radar PPI, etc.).

(f) Communication
— Transmission and reception of information from ground or other aircraft or satellites.

(g) Housekeeping and Check-out


— Monitoring of subsystems status during operation to detect possible faults.
— Fault localization and redundancy switchover.
— Evaluation of the ability of sub-systems to perform their functions under all operational conditions,
by simulation of the latter by means of proper stimuli.

The diagram of Figure 7.1 shows the relationships among the functions just described.

The preceding definitions are merely a properly grouped list of "actions" that the system is required to perform
to fulfill its role. As such they do not mention what "black boxes" will be needed on-board to implement the
system functions properly. Some idea of the hardware is, however, already present.

To begin with, let us take data acquisition: part of this function, which includes Analog-to-Digital conversion
of sensor data (as described in detail in Chapter 6), is normally performed by dedicated hardware, which outputs
information in a form suitable for being assimilated by a digital computer.

Display and interface with operators is also performed mainly by dedicated hardware: normally a mix of
electronic and optical devices which translate digital information into letters, symbols, etc.

The main point is thus the definition of what system functions the computer has to perform,
interfacing with peripherals which supply it with properly coded digital data, or which translate its output into
forms usable by the crew.
[Figure: data acquisition feeding processing, with storage, communication, and data and command distribution;
distribution drives actuators, and display and manual intervention interfaces with the crew]
Fig.7.1 Functional diagram of a typical computer-based avionic system



From the computer's point of view, all these system functions may be reduced to two main categories:
— Exchange of data with the "external world", i.e., Input or Output,
— Processing, which takes place within the machine and accounts also for storage and internal data transfer.

Let us now consider an example, which will be gradually developed throughout this and the following sections to
show the application of the concepts presented.

Our example task is a simplified form of navigation, where the avionic system has to inform the crew of the
present position coordinates and compute course and distance to a destination, starting from the present position.
Let us assume that the aircraft has a Doppler Radar and a magnetic compass. The operational requirements may
thus be synthesized as follows:
— Compute and display to the crew the coordinates of the present position, by suitably processing the ground
velocity vector, as supplied by the Doppler Radar, and heading, as supplied by the magnetic compass.
— Compute and display, upon request by the crew, course and distance to a preselected destination point.

The preceding requirements have to be translated into a list of system functions, part to be committed to
dedicated hardware and part to be handled by the on-board computer(s). The avionic system will:
(1) Collect and digitize speed data from Doppler Radar.
(2) Collect and digitize heading data from magnetic compass.
(3) Collect operator requests, from operator's panel.
(4) Process collected data to obtain desired information: present latitude and longitude, course and distance
to destination.
(5) Display position, course and distance information to the on-board crew.

As already said, fortunately for the computer designer, not all the preceding functions pertain to the computer.
The latter will:
(1) Input digitized speed and heading at predetermined time intervals.
(2) Input destination coordinates from operator panel, upon request by the crew.
(3) Process speed and heading to obtain latitude and longitude of present position.
(4) Process destination coordinates to obtain (or update) course and distance.
(5) Output latitude and longitude, in digital form, to display.
(6) Output course and distance, in digital form, to display.
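A sketch of what functions (3) to (6) might look like is given below; the flat-earth approximation and all numerical values are assumptions made purely for illustration:

    # Dead-reckoning from speed and heading, then course and distance to a
    # destination; flat-earth approximation, invented sample values.
    import math

    EARTH_R = 6.371e6                       # mean earth radius, metres

    def dead_reckon(lat, lon, speed, heading, dt):
        """Advance present position (radians) by speed (m/s) along heading."""
        d = speed * dt
        lat += d * math.cos(heading) / EARTH_R
        lon += d * math.sin(heading) / (EARTH_R * math.cos(lat))
        return lat, lon

    def course_and_distance(lat, lon, lat_d, lon_d):
        """Flat-earth course (rad) and distance (m) to the destination."""
        dx = (lon_d - lon) * EARTH_R * math.cos(lat)
        dy = (lat_d - lat) * EARTH_R
        return math.atan2(dx, dy) % (2 * math.pi), math.hypot(dx, dy)

    lat, lon = math.radians(45.0), math.radians(7.5)     # present position
    lat, lon = dead_reckon(lat, lon, 200.0, math.radians(90.0), 1.0)
    crs, dist = course_and_distance(lat, lon, math.radians(46.0), math.radians(8.0))
    print("course %.1f deg, distance %.1f km" % (math.degrees(crs), dist / 1000))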

The preceding procedure is to be repeated for the other tasks, to arrive at an overall definition of the
computer system functions.

These functions will, in turn, have to be translated into computer requirements, as described in the following
section.

7.4 FROM SYSTEM FUNCTION TO COMPUTER REQUIREMENTS

7.4.1 Presentation of the Requirements


As stated in the Introduction, the present Chapter aims at describing a methodology for specifying the main
functional requirements for the on-board computer, i.e.,
(a) for the Central Processing Unit (CPU)
(1) Instruction set and execution times,
(2) Instruction and data word length and format,
(3) Addressing techniques,
(4) Sub-program linkages,
(5) Interrupt techniques,
(6) Local storage.

(b) for the Memory


(1) Word length,
(2) Capacity,
(3) Cycle time.

(c) Input/Output
(1) Number and type of channels,
(2) Transfer time (related to the types of channel).

Other considerations concerning self-check and repair capability and memory protection are outside the scope
of this Chapter.

It has to be pointed out that the method does not yield a direct and univocal solution of the problem "Given
a set of tasks find out the right computer", but it rather allows the computer requirements to be arrived at after
a somewhat recursive and indirect procedure.

Once the tasks have been defined and quantized, limit values are derived for a number of computer parameters.
Preliminary computer characteristics are then assumed and (provided no mutual incompatibilities exist), these are
checked against the tasks. Usually more than one architecture may be found which satisfies the functional require-
ments; the best in terms of cost-effectiveness will have to be chosen.

A flow-chart of this process is shown in Figure 7.2. The single steps of it will be explained with some detail
with the aid of examples.

7.4.2 An Example Set of Elementary Operations

The set of elementary operations presented here, which is an application of reverse Polish notation (so called
after the Polish mathematician Lukasiewicz), is intended to serve as an example. Other sets might be chosen; the
methodology would only be affected in detail, but its general lines would not change.

The set is not a formally complete language, as some operations comprise classes of instructions which, though
different, would require the same computing capacity. For example, all N-place shift instructions are covered by a
single operation (N$), irrespective of their type: left, right, logical, algebraic, rotate, etc.

The following application of the language in our example is assumed to be performed by hand; many steps,
however, may conceivably be mechanized by letting a computer perform them.
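As an indication of how such mechanization might look, the sketch below evaluates a small fragment of the operation set of Table 7.1 (only the arithmetic operators and the duplicate operator are handled; everything else in the table is omitted for brevity):

    # A minimal evaluator for a fragment of the reverse Polish operation set.
    def evaluate(sequence, memory):
        stack = []
        for item in sequence:
            if item == ",":
                stack.append(stack[-1])          # duplicate last operand
            elif item in "+-*:":
                b, a = stack.pop(), stack.pop()  # operate on last two operands
                stack.append({"+": a + b, "-": a - b,
                              "*": a * b, ":": a / b}[item])
            elif item in memory:
                stack.append(memory[item])       # read named operand
            else:
                stack.append(float(item))        # literal value
        return stack

    # A, B + C *  computes (A + B) * C :
    print(evaluate(["A", "B", "+", "C", "*"], {"A": 2, "B": 3, "C": 4}))  # [20]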

The elements of the language

Symbols

The smallest language element is called a symbol. In our case, the symbols are all the capital letters of the
English alphabet, the decimal figures from 0 to 9, the decimal point, plus the following special ones;

+ - * : , & ' ! ? % = < > " $ / (

Operands

One or more contiguous alphanumeric symbols, written from left to right, form an operand, that is the name
of a datum which is to be operated upon.

For example: ABC4. If, however, an operand consists only of numeric digits and a decimal point, its name
is to be interpreted as its value in decimal notation; negative numbers begin with the letter N, positive numbers
are not preceded by any sign.

There is no limit to the number of symbols in an operand, but it is convenient to limit it to a few to save
writing effort and ease mechanization.

A particular operand is the "flag", which can assume only two values, 0 and 1, and is commonly used as a
program switch. Flags will be indicated as FLAGJ for easy identification; J is the flag reference number.

Operators

An operator is indicated by one of the special symbols presented and explained in Table 7.1. Operators are
applied to one, two, or three operands, as follows:

[Figure 7.2 shows the methodology flow diagram: START leads to FUNCTIONAL ANALYSIS (mathematical and
logical model, data design); then to TRANSLATION OF THE MODEL (into sequences of elementary operations),
supported by the LANGUAGE (set of elementary operations); then to MISSION STATISTICS (recurrence of each
operation in the mission), which feeds MEMORY FOR DATA and INPUT/OUTPUT AND PROGRAM INTERRUPTS;
then to INSTRUCTION EXECUTION TIMES AND INSTRUCTION SET and WORD LENGTH AND FORMAT;
then to MEMORY FOR PROGRAM and TOTAL MEMORY. If the result is not ACCEPTABLE the process loops
back; if it is, the COMPUTER REQUIREMENTS are obtained.]

Fig.7.2 Methodology flow diagram



One-operand operators: , " ! $

Two-operand operators: + - * : & ' =

Three-operand operators: > < ?

An operator always applies to the operand(s) that precede it (one, two or three, depending on the operator
class).

Macro-operators
A macro-operator corresponds to a more or less complex function, which cannot be expressed by means of
one of the operator symbols already presented.

TABLE 7.1

Elementary Operations

Operation   Description                                                      Examples

,           Duplicate (repeat) last operand in the sequence                    (4)
"           Complement preceding operand, bit by bit
+           Add
-           Subtract                                                           (2)
*           Multiply
:           Divide
&           And
'           Or
            The last six operations are performed between the last             (3)
            two operands in the sequence (either retrieved from the
            memory or obtained as a result of an operation).
            The result is left in the sequence as an operand that
            replaces the two operated upon.
M,          Read operand M from Memory and put it in the sequence.             (4)
M"          Read operand M from Memory and complement it, bit
            by bit
M+          Add M, read from Memory
M-          Subtract M, read from Memory
M*          Multiply by M, read from Memory
M:          Divide by M, read from Memory
M&          Logical AND, bit by bit, with M, read from Memory                  (1)
M'          Logical OR, bit by bit, with M, read from Memory
            The last six operations apply to M and to the last
            operand in the sequence; the result replaces both
            operands in the sequence.
            For M-, if there is no preceding operand, the two's
            complement of M is put into the sequence.
X=          Store preceding operand into Memory, at location labelled          (1)
            X.
N$          Shift the last operand in the sequence N places, left or           (6)
            right, i.e., multiply or divide by 2^N.
M%          Call Subroutine M. This operation includes saving the              (5)
            Program Counter at the beginning of the subroutine and
            restoring it at the end. A particular case of a subroutine
            call is a macro-operator.
M!          Unconditionally jump to location M.
M>          Jump to location M if A > B                                        (7)
M<          Jump to location M if A < B

(Table 7.1 continued)



TABLE 7.1 (continued)

Operation   Description                                                      Examples

M?          Jump to location M if A = B
            where A and B are the operands preceding M in the
            sequence.
            If there is only one operand (viz. A), M>, M< and M?
            mean respectively: jump to M if A > 0, if A < 0,                   (8)
            if A = 0.
INPUT N,    Input datum from channel No. N (integer), put it in the
            sequence and treat it as an operand.
OUTPUT N=   Output to channel No. N (integer) preceding operand in
            the sequence.

A macro-operator is represented as a sequence of alphanumeric symbols, always followed by the special
symbol %.

A macro-operator can be implemented either by software as a subroutine (which would begin at memory
location labelled with the macro-operator's name) or by hardware as a special instruction.

In the former case, the symbol % implies therefore a "subroutine call", with related instructions which release
the control to the routine being called and return the control to the calling process at the end of the routine itself.
In the latter case, the symbol % is a reminder that the preceding alphanumeric symbols are not an operand, but an
operator, and implies the instruction fetch from memory.

Choice of either solution for each macro-operator will be made considering its recurrence frequency in the
mission, as will be explained later. Different choices will yield different macro-operator execution times.

A macro-operator must always be preceded by the operand(s) to which it applies.

Indices
These are operand (address) modifiers which are appended to particular operands (e.g., the n-th element of a data
table), separated by an open parenthesis. Operands followed by one or more indices are treated by operators like
simple operands.

Indices pertaining to the same operand are separated by a slash, e.g., A(J/K. Table 7.2 summarizes some
of the most frequent cases.

Elementary operations
An elementary operation is the application of an operator to one or more operands. It is indicated either by
an operand followed by an operator symbol (e.g., M+) or simply by an operator, which in this case refers to the
preceding operands; these latter are in turn results of other operations. Table 7.1 shows the elementary opera-
tions and gives reference to a number of examples.

Statements
A sequence of elementary operations and macro-operators which describes a sequence of actions encountered
in the system task constitutes a statement. A statement must always begin on a new line and with the first
symbol of an operand. It may extend over several lines, each line ending with an operator symbol, and it must end
with one of the following operators: = < > ? !

Some statements may be numbered or labelled, for the analyst's convenience. Their label or number will be
written at the extreme left of the first statement line, separated from it by a few blanks. For example:

FORM   A,B+C:Z=

which means that the statement labelled FORM is

(A+B) : C = Z .
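To make the notation concrete, the following Python fragment is a minimal, purely illustrative sketch of how
such a statement could be evaluated on a stack machine; the function run and its token handling (only the
'M,', 'M+', 'M:' and 'X=' forms) are assumptions of the sketch, not part of the methodology:

    def run(tokens, memory):
        stack = []                                  # the "sequence" of operands
        for t in tokens:
            if len(t) > 1 and t.endswith(','):      # M,  read operand from memory
                stack.append(memory[t[:-1]])
            elif len(t) > 1 and t.endswith('+'):    # M+  add memory operand
                stack.append(stack.pop() + memory[t[:-1]])
            elif len(t) > 1 and t.endswith(':'):    # M:  divide by memory operand
                stack.append(stack.pop() / memory[t[:-1]])
            elif t.endswith('='):                   # X=  store preceding operand
                memory[t[:-1]] = stack.pop()
        return memory

    mem = {'A': 6.0, 'B': 2.0, 'C': 4.0}
    run(['A,', 'B+', 'C:', 'Z='], mem)              # Z = (A+B) : C = 2.0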

TABLE 7.2

Indices

Mnemonic    Description           Diagram of Operation (Memory Address -> Memory Contents)

A           no index              A -> Operand

A(I         one-entry array A     A+I -> Operand

A(I/J       two-entry array A     A+I -> Y
                                  Y+J -> Operand

A(I/J/K     three-entry array A   A+I -> Y
                                  Y+J -> Z
                                  Z+K -> Operand

It can be noted that, in the Reverse Polish notation, operands and operators generally appear in a statement in the
same sequence in which the operands would be called in a computer and the operations performed upon them.

Short examples are referenced in Table 7.1 to help the reader in understanding. More complex examples
follow separately.

Examples

(1) X = A 'op' B where 'op' is one of the operators


+ - * : 'and' 'or'
is translated into:
A,B 'op' X =
which indicates the following sequence of operations:
A, Read operand A from Memory and put it in the sequence,
B 'op' Perform the operation 'op' between operand B, retrieved from Memory and operand A,
X= Store the result into the Memory location labelled X.

(2) X = (A+B) 'op' (C-D), where 'op' can be one of the operators

+ - * :

is translated into:
A,B+C,D- 'op' X=
which indicates the following sequence of operations:
which indicates the following sequence of operations:

A,      Read A from Memory
B+      Add B, read from Memory, to A
C,      Read C from Memory
D-      Subtract D, read from Memory, from C
'op'    Perform the operation 'op' between the two preceding operands, i.e., (A+B) and (C-D)
X=      Store result into Memory location X.

(3) X = (A and B) 'op' (C and D), where 'op' can be one of the operators 'and', 'or', and the line over
the expression indicates the operator 'not',

is translated into
A,B&C,D&'op'"X=
which indicates the sequence of operations:
A,      Read A from Memory
B&      Perform logical 'and' between B, read from Memory, and A
C,      Read C from Memory
D&      Perform logical 'and' between D, read from Memory, and C
'op'    Perform operation 'op' between the two preceding operands, i.e., (A and B) and (C and D)
"       Complement result of 'op', bit by bit
X=      Store result into Memory location X.

(4) X = (A+B)^2
is translated into
A,B+,*X=
which indicates the sequence of operations:
A,      Read A from Memory
B+      Add B, read from Memory, to A
,       Duplicate (repeat) (A+B)
*       Multiply the two preceding operands, i.e., (A+B) by (A+B)
X=      Store result into Memory location X.

(5) X = LOG(SIN((A+B)^-0.35)) + C
is translated into:
A,B+NO.35,EXP% SIN% LOG% C+X=
which indicates the following sequence of operations:
A,       Read A from Memory
B+       Add B, read from Memory, to A
NO.35,   Read -0.35 from Memory
EXP%     Call macro-operator EXP, which applies exponent -0.35 to operand (A+B)
SIN%     Call macro-operator SIN, which computes the sine of the preceding operand, i.e., of (A+B)^-0.35
LOG%     Call macro-operator LOG, which computes the decimal logarithm of the preceding operand, i.e.,
         SIN((A+B)^-0.35)
C+       Add C, read from Memory, to the preceding operand, i.e., to LOG(SIN((A+B)^-0.35))
X=       Store result into memory location X.

(6) Extraction of a field (FIELD) from a word (WRD) using a mask (MASK).

The word WRD contains FIELD, N bits from the right-hand end; MASK contains ones in the field positions
(0 0 1 ... 1 0 0) and zeros elsewhere. WRD is ANDed with MASK and the result is shifted N places to the
right, giving X = FIELD.

The operation is described by the following statement:

WRD,MASK&N$X=

where N$ indicates the shift operation.

It is not important to specify the direction of the shift for the purpose of the analysis.

The evaluation of the recurrence of the shift operation will give an indication of the required speed and,
indirectly, of the shift implementation (one bit, two bits, or multiple bits at a time).
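In a present-day language the same mask-and-shift extraction reads as follows; the bit values below are
invented purely for illustration:

    WRD  = 0b1011_0110_1100     # word containing the field
    MASK = 0b0000_1111_0000     # ones mark the field position
    N    = 4                    # field starts N bits from the right

    FIELD = (WRD & MASK) >> N   # WRD,MASK&  then  N$  then  X=
    print(bin(FIELD))           # -> 0b110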

(7) Logical IF (see FORTRAN)

IF (A.EQ.B) GO TO 3   translates into   A,B,3?
IF (A.NE.B) GO TO 3   translates into   A,B,CONT?3!
      (where CONT indicates the following statement)
IF (A.GT.B) GO TO 3   translates into   A,B,3>
IF (A.LT.B) GO TO 3   translates into   A,B,3<
IF (A.GE.B) GO TO 3   translates into   A,B,3>3?
IF (A.LE.B) GO TO 3   translates into   A,B,3<3?
IF (A.GT.B.OR.C.LT.D) GO TO 3   translates into   A,B,3>C,D,3<
IF (A.GT.B.AND.C.LT.D) GO TO 3   translates into
        A,B,X>CONT!
X       C,D,3<
CONT    ...
where X is the label of the statement corresponding to
the second condition and CONT is the label of the state-
ment to be executed if neither condition is satisfied.

(8) Arithmetic IF (see FORTRAN)

IF (arithmetic expression) m1,m2,m3
e.g.: IF (A-B) 1,2,3
is translated as:
A,B-1<2?3>

7.4.3 Functional Analysis


All the functions defined under the heading "Processing" in "From Operational Requirements to System
Functions" which are to be implemented by the computer (hardware + software), are analyzed in order to arrive
at an acceptable mathematical and logical model.

This analysis will produce:


- algebraic and logical procedures on data,
- data design: definition of variable and constant data to be operated upon, with specifications of their
resolution and range,
- program organization: sequencing and timing of programs, logical procedures for control.

A flow diagram describing the overall functional organization and algorithms is drawn (Fig.7.3).

Data are listed in tables like Table 7.3 and Table 7.4 including: mnemonic label, number of bits, and a brief
but clear description.

Let us recall the navigation example task introduced in the preceding section, and, as it implies two different
computing sequences, let us split it into two example tasks which will be called Task 1 (Dead Reckoning) and
Task 2 (computation of great-circle course and distance).

Example Task 1 - Dead Reckoning (from Doppler Radar).


The flow-chart is shown in Figure 7.3. Let us also assume that the task is repeated once every 0.05 seconds.

The exact formulas for the computation would be:

LAP = LAI + K ∫ VG . COS HG dT

LOP = LOI + K ∫ (VG . SIN HG / COS LAP) dT

HG = HC + VA + HDR
where
LAP, present latitude
LOP, present longitude
HG, ground track
VA, magnetic variation
HDR, drift angle (from Doppler Radar)
VG, ground speed (from Doppler Radar)
T, time
LAI, initial latitude
LOI, initial longitude
K, earth's curvature parameter
HC, magnetic heading (from magnetic compass)

A digital computer, however, cannot integrate continuous quantities. For this reason, the integrals of the
exact formulas are replaced by sums, to yield trapezoidal approximate integrations, as follows:

LAP = LAO + KT*(VGP*COSHGP + VGO*COSHGO)*DT

LOP = LOO + KT*(VGP*SINHGP : COSLAP + VGO*SINHGO : COSLAO)*DT

HGP = HCP + VAP + HDRP

where the suffixes P and O indicate present data and old data respectively, and KT = K/2. The meaning of each
variable is explained in Table 7.3; the number of bits is symbolically indicated, as actual values depend on the
particular application.
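As a hedged illustration only (not part of the original specification), the trapezoidal update can be written
as a short Python routine; the variable names follow Table 7.3, angles are assumed to be in radians, and the
function name dead_reckoning_step is invented for the sketch:

    import math

    def dead_reckoning_step(old, VGP, HCP, VAP, HDRP, KT, DT):
        """One trapezoidal update; old = (LAO, LOO, VGO, HGO)."""
        LAO, LOO, VGO, HGO = old
        HGP = HCP + VAP + HDRP                         # present ground track
        LAP = LAO + KT * (VGP * math.cos(HGP) + VGO * math.cos(HGO)) * DT
        LOP = LOO + KT * (VGP * math.sin(HGP) / math.cos(LAP)
                          + VGO * math.sin(HGO) / math.cos(LAO)) * DT
        return (LAP, LOP), (LAP, LOP, VGP, HGP)        # output, and the next "old" state

In the example the task is repeated every 0.05 seconds, so DT would be 0.05.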

Example Task 2 - Computation of Great Circle Course and Distance from present point P to point Di (one
out of a few possible destinations). Repeated once per second. The flow-chart is shown in Figure 7.4.

The applicable mathematical formulas are the following:

(a) Great Circle Course formula
HGCi = ARCTAN(COSLADi*SIN(LODi-LOP) : (COSLAP*SINLADi - SINLAP*COSLADi*
*COS(LODi-LOP)))

(b) Great Circle Distance formula

DGCi = ARCSIN(COSHGCi*(COSLAP*SINLADi - SINLAP*COSLADi*COS(LODi-LOP)) +
SINHGCi*COSLADi*SIN(LODi-LOP))
where:
HGCi, Great Circle Course to point Di
DGCi, Great Circle Distance to point Di
LADi, Latitude of Di
LODi, Longitude of Di
LAP and LOP have been computed in the preceding Task 1.
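A minimal Python sketch of these two formulas follows; it is an illustration under assumptions (angles in
radians, atan2 used to resolve the ARCTAN quadrant), not the flight program itself:

    import math

    def great_circle(LAP, LOP, LAD, LOD):
        """Course HGC and angular distance DGC from present point P to destination D."""
        dlon = LOD - LOP
        num = math.cos(LAD) * math.sin(dlon)
        den = (math.cos(LAP) * math.sin(LAD)
               - math.sin(LAP) * math.cos(LAD) * math.cos(dlon))
        HGC = math.atan2(num, den)                  # ARCTAN with quadrant resolution
        DGC = math.asin(math.cos(HGC) * den + math.sin(HGC) * num)
        return HGC, DGC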

[Figure 7.3 shows the Task 1 flow diagram: DEAD RECKONING entry; input present ground speed and drift
angle; input present magnetic heading; if a new value is available, input magnetic variation from operator
panel; compute ground track HG; compute and store SIN HG and COS HG; compute present latitude and replace
old value; compute present longitude and replace old value; set previous latitude equal to present latitude;
set previous VG, HG, SINHG, COSHG, COSLA to present values; if an output is requested, output present
latitude and longitude; RETURN.]

Fig.7.3 Task 1 flow diagram



[Figure 7.4 shows the Task 2 flow diagram: input destination number from operator panel; if the destination
coordinates are not in memory, request data from operator and input latitude (LADi) and longitude (LODi)
from operator panel; compute great circle course HGCi from destination coordinates (LADi, LODi) and from
present point coordinates (LAP, LOP); compute great circle distance DGCi from great circle course, destination
coordinates and present point coordinates; output and store HGCi and DGCi; repeat if there are other operator
requests.]

Fig.7.4 Task 2 flow diagram



The same data are presented in Table 7.4, using the rules for indices.

7.4.4 Translation of the Model


Once the overall mission has been described as in the preceding section, it has to be "translated" into a list of
statements making use of the elementary operations, in order to arrive at the mission statistics (i.e., type and
frequency of each elementary operation).

The translation process will be explained with the aid of the two example tasks already presented. The reader
will also have to make reference to Tables 7.1 and 7.2.

Each task is assumed to be activated by a program interrupt, which is treated like a macro-operator.

Example Task 1: Dead Reckoning

INTDEAD%                Call of DEAD task
INTNAV   INPUT1,VGP=
         INPUT2,HDRP=
         INPUT3,HCP=
         FLAG1,0,OPER?          Is old value of VAP still valid?
         INPUT4,VAP=
OPER     HCP,VAP+HDRP+HGP=
         HGP,SIN%SHGP=
         HGP,COS%CHGP=
         VGP,CHGP*VGO,CHGO*+KT*DT*LAO+LAP=
         LAP,COS%CLAP=
         VGP,SHGP*CLAP: VGO,SHGO*CLAO: +KT*DT*
         LOO+LOP=
         LAP,LAO=
         LOP,LOO=
         HGP,HGO=
         SHGP,SHGO=
         CHGP,CHGO=
         CLAP,CLAO=
         FLAG2,0,END?
         LAP,OUTPUT1=
         LOP,OUTPUT2=
END      return to the calling program (included in % operator).
where
INPUT1 is the channel providing VGP
INPUT2 is the channel providing HDRP
INPUT3 is the channel providing HCP
INPUT4 is the channel providing VAP
OUTPUT1 is the output channel for LAP
OUTPUT2 is the output channel for LOP.

TABLE 7.3

Task 1 Data Definition

Mnemonic   No. of Bits   Description

LAP        n1            Present latitude
LAO        n1            Old latitude
KT         n2            = K/2, where K is the earth's curvature parameter
VGP        n3            Present ground speed
HGP        n4            Present ground track
VGO        n3            Old ground speed
HGO        n4            Old ground track
DT         n5            Time increment
LOP        n1            Present longitude
LOO        n1            Old longitude
HCP        n4            Present magnetic heading
VAP        n4            Present magnetic variation
HDRP       n4            Present drift angle

TABLE 7.4

Task 2 Data Definition

Mnemonic   No. of Bits   Description

HGC(I      n6            Great Circle Course to point Di
DGC(I      n7            Great Circle Distance to point Di
LAD(I      n1            Latitude of point Di
LOD(I      n1            Longitude of point Di
LAP        n1            Present latitude (from Task 1)
LOP        n1            Present longitude (from Task 1)

It can also be noted that, in writing the statements, we have introduced for convenience a number of inter-
mediate variables, which will have to be stored into memory places. These variables will be listed, for each task, in
tables like those described in the preceding section, and taken into account when estimating the size and organization
of the data memory (as described later). For this example the intermediate variables are shown in Table 7.5.

TABLE 7.5

Task 1, Intermediate Variables

Mnemonic   No. of Bits   Description

FLAG1      1             Flag indicating the availability of a new VAP
SHGP       m1            sin HGP
CHGP       m1            cos HGP
CHGO       m1            cos HGO
CLAP       m1            cos LAP
CLAO       m1            cos LAO
FLAG2      1             Flag indicating that an output is requested.

Example Task 2: Great Circle Course and Distance

INTGREAT%               Call of GREAT task
GREAT    INPUT4,I=
         FLAG3,1,CALL?          Are coordinates in memory?
         REQ,OUTPUT3=           Output coordinate request
         INPUT5,LAD(I=
         INPUT6,LOD(I=
CALL     LAD(I,COS%CLAD(I=
         LAD(I,SIN%SLAD(I=
         LAP,SIN%SLAP=
         LOD(I,LOP-COS%CLDP(I=
         LOD(I,LOP-SIN%SLDP(I=
         CLAD(I,SLDP(I*CLAP,SLAD(I*SLAP,CLAD(I*CLDP(I*-:ARCTAN%HGC(I=
         HGC(I,COS%CLAP,SLAD(I*SLAP,CLAD(I*CLDP(I*-*
         HGC(I,SIN%CLAD(I*SLDP(I*+ARCSIN%DGC(I=
         DGC(I,OUTPUT4=
         HGC(I,OUTPUT5=
         INFLAG7,1,GREAT?       Are there other operator requests?
where
INPUT4 is the channel providing the destination number I
INPUT5 is the channel providing the latitude of destination I, unless already in memory
INPUT6 is the channel providing the longitude of destination I, unless already in memory
OUTPUT4 is the output channel for DGC(I
OUTPUT5 is the output channel for HGC(I

The intermediate variables are shown in Table 7.6.

TABLE 7.6

Task 2, Intermediate Variables

Mnemonic   No. of Bits   Description

FLAG3      1             Flag indicating that the coordinates of destination
                         I are in memory
REQ        m3            Request message
CLDP(I     m4            cos LDP(I
SLDP(I     m4            sin LDP(I    (I = 1, ..., N;
CLAD(I     m5            cos LAD(I     N words each)
SLAD(I     m5            sin LAD(I
SLAP       m6            sin LAP
INFLAG7    1             Flag indicating an operator request

7.4.5 Mission Statistics


Once the translation of the tasks into elementary operations has been accomplished, the recurrence frequency
of each elementary operation can be calculated for each task and for the whole mission. This process will be called
"Mission statistics" and is conveniently performed in two steps.

In the first step the elementary operations (Table 7.7) and the macro-operations (Table 7.8) for each
task are counted. This first count will be needed for defining program size and related memory requirements. In
the second step, the operation distribution of each task is multiplied by the task's recurrence frequency in the
mission (Table 7.9), and the same procedure is applied to the macro-operations (Table 7.10). This second step
defines the "mission spectrum", which will be used to arrive at defining the instruction set. The machine registers
also have to be counted, i.e., those temporary storage devices for intermediate results or operands which will be
needed several times during a statement.

TABLE 7.7

Operations Distribution

Tasks \ Operations      ,      +      -      *      :      &      '      "     M,     M+     M-     M*     M:     M&     M'     X=

1. Dead Reckoning              2                                               20      4             8      2                    18
2. Great Circle                1      2      1      1                          17             2      9                           10
3. Other Tasks        150    220     10    130     50     10     10     10    550    500     80    280    150     20     10    400
TOTAL                 150    223     12    131     51     10     10     10    587    504     82    297    152     20     10    428

Tasks \ Operations     N$     M%     M!     M<     M>     M?     M"   INPUTN,  OUTPUTN=    (I    (I/J

1. Dead Reckoning              4      1                    2             4        2
2. Great Circle               10      1                    1             3        2        27
3. Other Tasks        300     50    100    100    100     90     10     50       50       200    100
TOTAL                 300     64    102    100    100     93     10     57       54       227    100

NOTE: Totals do not include contribution of macro-operators.

Total operations = 3609

TABLE 7.9

Mission Spectrum

Tasks \ Operations      ,      +      -      *      :      &      '      "     M,     M+     M-     M*     M:     M&     M'     X=

1. Dead Reckoning             40                                              400     80           160     40                   360
2. Great Circle                1      2      1      1                          17             2      9                           10
3. Other Tasks       1000   1000    100   1000    500    100    100    100   6000   6000    900   3000   1500    200    100   4000
TOTAL/SEC            1000   1041    102   1001    501    100    100    100   6417   6080    902   3169   1540    200    100   4370

Tasks \ Operations     N$     M%     M!     M<     M>     M?     M"   INPUTN,  OUTPUTN=    (I    (I/J   REGISTERS

1. Dead Reckoning             80     20                   40            80       40                        2
2. Great Circle               10      1                    1             3        2        27              3
3. Other Tasks       3500    500   1000   1000   1000    900    100    500      500      2000   1000       2
TOTAL/SEC            3500    590   1021   1000   1000    941    100    583      542      2027   1000       3

NOTE: Totals do not include contribution of macro-operators.

TABLE 7.8

Macro-Operations Distribution
(not to be used in determining program size)

Tasks \ Operations    SIN    COS    TAN   ARCSIN   ARCCOS   ARCTAN   OTHERS (20)

1. Dead Reckoning       1      2
2. Great Circle         4      3              1                 1
3. Other Tasks          8      8      3       5        1        5        20
TOTAL                  13     13      3       6        1        6        20

TABLE 7.10

Macro-Operations Spectrum

Tasks \ Operations    SIN    COS    TAN   ARCSIN   ARCCOS   ARCTAN   OTHERS (20)

1. Dead Reckoning      20     40
2. Great Circle         4      3              1                 1
3. Other Tasks         50     30     10      20        4       16       300
TOTAL                  74     73     10      21        4       17       300

Every operation of the type ',' 'M,' 'INPUTN,' 'M"' encountered during one statement, read from left to
right, adds one register to the number of those required. All the other operations or macro-operations
encountered modify the number of registers according to the following rules:

(a) + - * : & ' decrement the count by one,

(b) M+ M- M* M: M& M' " N$ M! leave the count unaltered,
(c) X= OUTPUTN= M< M> M? reset the count to zero if encountered at the end of the statement;
otherwise they leave the count unaltered,
(d) M% affect the count depending on the number of their operands and of their results. For example,
trigonometric functions operate with one operand which they replace with one result, leaving the count
unaltered.

It has to be remarked that the number of registers that appear to be required depends on how the statements
have been written. One can find the statements which yield the maximum count and take this as the number
of registers needed. In any case, this number should not be unreasonably high; if it were, the contributing
statement would have to be split into several parts, each part entailing the storage of an intermediate result
into memory.

In this way, machine registers would be saved, at the expense of additional memory cycles.
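These rules are mechanical enough to be sketched in code. The following Python function is a hypothetical
illustration of the bookkeeping (the token classes are simplified, and macros are assumed register-neutral
as in rule (d)):

    LOADS   = {',', 'M,', 'INPUTN,', 'M"'}          # each puts a new operand in a register
    DYADIC  = {'+', '-', '*', ':', '&', "'"}        # rule (a): count - 1
    ENDERS  = {'X=', 'OUTPUTN=', 'M<', 'M>', 'M?'}  # rule (c): reset at statement end
    # M+, M-, M*, M:, ", N$, M!, M% leave the count unaltered (rules (b), (d))

    def registers_needed(ops):
        count = peak = 0
        for op in ops:
            if op in LOADS:
                count += 1
            elif op in DYADIC:
                count -= 1
            elif op in ENDERS:
                count = 0
            peak = max(peak, count)
        return peak

    # A,B+C,D+'op'X= loads A and C, so two registers are needed:
    print(registers_needed(['M,', 'M+', 'M,', 'M+', '+', 'X=']))   # -> 2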

Macro-operations are listed, as said before, in the separate Tables 7.8 and 7.10, with related quantities and
recurrence frequencies for each of them. Each macro is then to be analyzed in terms of elementary operations,
exactly as if it were a task of its own.

The results of this analysis are shown in another section (7.4.8).

The reason for this procedure will become clearer later, when the instruction set is introduced: having the
macros separated is very important to decide whether they have to be implemented by hardware or by software.

The procedure described before does not give a complete spectrum of the mission when a significant amount
of data is stored in memory in the form of fields of words. This situation produces an additional work load in terms
of masking and shifting, which can be estimated to be proportional to the totals of the fetch and of the storage
operations performed on fields. With reference to example (6), the following rules apply (the underlined part
representing the overload):

(a) fetch of a field (FIELD) from a word (WRD)
    FIELD, entails: WRD,MASK&N$

(b) fetch, with operation, of a field from a word
    FIELD 'op' entails: WRD,MASK&N$ 'op'

(c) storage of a field
    RESULT,FIELD= entails: RESULT,N$WRD,MASK&WRD=

where RESULT, as its name implies, is the result of a series of preceding operations.

To make this estimate easier, it is advisable to give field operands names which recall their nature (e.g.,
appending an F to their mnemonic).

Additional workload is also required for scaling shifts. The extent of this depends on the task, but in most
cases it can be estimated that about 20% of the arithmetic operations require scaling. Within such limits, fixed-point
arithmetic is suitable. Should the scaling load become excessive, floating-point arithmetic may be advisable.

7.4.6 Memory for Data


Data and instructions are to be stored in a certain number of memory words.

The problem is to choose the word length in order to obtain the best compromise between memory size (cost)
and workload to store and retrieve data and instructions.

The following considerations are to be made.

If each datum occupied a single word, only one memory cycle would be required to fetch it. Since data
usually have very different lengths, however, this solution would probably waste memory.

On the other hand, a short word length would help to increase memory utilization but would require multiple
accesses to fetch long operands.

A compromise between these two extremes must be found: the proper word length should match the length of
most data without excessive memory waste. The few long data would fit in double words. Short data could either
be put in single words, if very few, or grouped, two or more together, as fields of words. In the latter case, their
fetch would require extra work for masking and shifting.

The choice of the word length can be made easier by plotting a histogram of data lengths, as shown in Figure
7.5 for a hypothetical example which could apply to the complete set of tasks already described, assuming certain
values for the data lengths.

From the figure it can be seen that if a word length of, say, 12 bits were selected, 600 + 800 + 550 = 1950 data
would each occupy a single word, while 1600 + 3200 + 1100 = 5900 data would each occupy a double word. A
total of 1950 + 5900 x 2 = 13,750 memory words would be required.

Memory capacity in bits would be: 13,750 x 12 = 165,000 bits.

Memory occupancy would be 99,400/165,000 = 0.6.

If a 16-bit word length were chosen, the following results would be obtained:
600 + 800 + 550 + 1600 + 3200 + (1100 x 2) = 8950 16-bit words: 8950 x 16 = 143,200 bits.

Memory occupancy: 99,400/143,200 = 0.69.

If the three shorter groups of data were grouped in two-field words, they would be contained in 1000 words,
so the total would be 8000 words. Then 8000 x 16 = 128,000 bits.

Memory occupancy: 99,400/128,000 = 0.78.



In the latter case, the time for fetching would increase. The extent of the increase can be evaluated as said in
a previous section.

It is to be remarked that grouping should be limited to data belonging to the same table, i.e., data
used by the same programs, in order to simplify addressing.

The procedure for organizing memory for data can be summarized as follows:
(a) The data defined during the functional analysis are subdivided into tables according to their usage by sub-
programs (see, for example, Tables 7.3 and 7.4).
(b) Very-short-length data belonging to the same table are joined together to form composite data (two- or
three-field words).
(c) A histogram of data lengths is drawn (Fig.7.5), where composite data are considered with their resulting
lengths.
(d) The total of memory bits for data is evaluated by adding all the lines of the histogram, each weighted by
its corresponding number of bits.
(e) A memory word length is tentatively selected and is plotted on the histogram. The length should
preferably be chosen as a multiple of four or eight to simplify hardware.
(f) The total of memory words required for data is evaluated by adding all the lines of the histogram included
between a single and a double word-length, and so on.
(g) The number of bits of the resulting memory is evaluated by multiplying the total number of words by the
selected word length.
(h) The degree of memory occupancy is evaluated as the ratio of the information bits derived in (d) to the
memory bits derived in (g).
(i) The steps from (e) to (h) are repeated for a different word length in order to maximize the memory
occupancy. A few trials are required to obtain a good result.
The memory capacity obtained in step (f) is to be increased by a certain amount (say 20 to 30%) to compensate for
possible underestimates. The value thus obtained will have to be added to the words required for program storage,
as later described.
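Steps (d) to (h) lend themselves to a small script. The following Python sketch is an illustration only, with
the histogram of Figure 7.5 hard-coded as assumed (count, bits) pairs; it reproduces the occupancy figures of
the worked example before field packing:

    histogram = [(600, 2), (800, 5), (550, 10),
                 (1600, 13), (3200, 14), (1100, 21)]

    info_bits = sum(count * bits for count, bits in histogram)   # step (d): 99,400

    def occupancy(word_len):
        words = 0
        for count, bits in histogram:
            words += count * -(-bits // word_len)   # ceiling division: 1, 2, ... words each
        return info_bits / (words * word_len)       # steps (f), (g) and (h)

    for wl in (12, 16, 24):
        print(wl, round(occupancy(wl), 2))          # 12 -> 0.60, 16 -> 0.69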

[Figure 7.5 is a histogram of data lengths. The data groups and their bit totals are:
600 x 2 = 1,200
800 x 5 = 4,000
550 x 10 = 5,500
1600 x 13 = 20,800
3200 x 14 = 44,800
1100 x 21 = 23,100
TOTAL: 99,400 bits
The horizontal axis is WORD LENGTH (BITS), marked at 8, 16, 24 and 32.]

Fig.7.5 Data length statistics



In our example, if 8000 words have been estimated (in step (f)) and an allowance of 20% is applied, the total
will be 9600 words.

7.4.7 Input/Output
The term Input/Output indicates the data exchange between the computer and the rest of the system. From
the mission statistics the following parameters have been made available:
— Number of Input Channels,
— Number of Output Channels,
— Throughput for each Channel.

The organization of data acquisition and distribution is described in another chapter; the problem dealt with
here is the data exchange between the computer Input/Output (I/O) section considered as a whole and the computer
memory.

Let us recall how I/O exchanges are written in terms of elementary operations. For input from channel N
to memory location LOC we would write:
INPUT N, LOC=
while for output we would write:
LOC,OUTPUT N=

If either operation is performed under program control, two instructions will be required, corresponding to
four memory cycles (4 tM, where tM is the memory cycle time explained in the following section).

If, otherwise, Direct Memory Access (DMA) is used, one memory cycle per word will be required.

The latter solution is to be preferred for those channels with higher throughputs, which would otherwise
consume excessive time.

External program interrupts also represent a communication from the system to the computer, asking for a
modification of the program sequence in order to have some particular actions executed. Interrupts appear among
the elementary operations, where they are treated like macro-operators; use of proper names (e.g., INTK%)
may permit easy identification of interrupts. The number of interrupt channels and the related recurrence
frequencies are thus available from the mission statistics. Generally speaking, a program interrupt system can be
single level or multilevel.

In a single level interrupt system, as soon as an interrupt is accepted, further requests from peripheral devices
are automatically locked out until a reset signal is received.

The priority in executing the routines associated with different requests is determined by software under
control of an Executive program.

In a multilevel interrupt system, instead, the execution of an interrupt routine is stopped if a higher priority
request occurs. This implies an automatic storage of the CPU configuration related to the current, lower priority
routine and a reloading of the proper machine registers when the routine associated with the higher priority interrupt
has been completed. The single level system is simpler from a hardware standpoint, but increases the burden of the
programmer as well as the execution times.

Multilevel interrupt organization is more powerful but makes use of more hardware and therefore it requires
more power to operate.

This latter organization is preferable when multi-programming is employed.

The interrupt execution time, i.e., the time required to call the related interrupt routine, to save the Program
Counter and other machine registers, and to resume the preceding program at the end of the routine, restoring the
contents of the above registers, depends on the solution chosen. Many possibilities exist: a few will be presented
as examples with the estimated execution time, assuming that two machine registers (Program Counter and
Accumulator) are saved.

(a) The interrupt channel supplies the memory address to jump to, for starting the interrupt routine:
    registers' saving             2 tM
    fetch of resume instruction   1 tM
    registers' restoring          2 tM

    TOTAL TIME                    5 tM

(b) The interrupt channel defines a memory location whose contents are the address to jump to, for starting
the interrupt routine:
    registers' saving             2 tM
    fetch of jump address         1 tM
    fetch of resume instruction   1 tM
    registers' restoring          2 tM

    TOTAL TIME                    6 tM

(c) The interrupt channel defines a memory location containing the first instruction to be executed.
The instruction calls the subroutine:
    fetch of call instruction     1 tM
    registers' saving             2 tM
    fetch of resume instruction   1 tM
    registers' restoring          2 tM

    TOTAL TIME                    6 tM

Methods (b) and (c) are to be preferred to method (a) because they keep the jump address fully under
program control.

As far as the estimate of mission execution time is concerned, we can assume 6 tM for every program interrupt.

7.4.8 Execution Times and Instruction Set


From the description of the tasks and from the mission statistics, a tentative definition of the instruction set
may begin.

It is to be pointed out that this problem does not have a unique solution, as several instruction repertoires can
be conceived which comply with the mission requirements. The parameters involved are so many that, to our
knowledge, no quantitative and objective method has yet been devised.

A solution is thus found by a cut-and-try process which relies considerably on the skill and experience of the
designer.

The first quantity to be estimated is the mission execution time. To do this, a parametric duration is
assigned to each elementary operation, as a linear combination of the memory cycle tM and of the CPU cycle tCPU. The
operation M+, for example, is assumed to last 2 tM (one cycle for fetching the instruction and one for executing it); the
operation N$ is assumed to last tM + N tCPU. These assignments are based on preliminary ideas of the machine
timing that the designer already has in mind. A simplifying hypothesis, which is generally reasonable, is to assume
that tCPU is less than tM (for core memories and parallel CPUs this is certainly true); thus, during one memory cycle,
several CPU cycles can be performed.

An example of the process just presented is shown in Table 7.11, which applies to the example mission
statistics of a previous section. It is advisable to treat macro-operators separately; in our case, each macro has been
developed into elementary operations and a tentative execution time (in parametric form) assigned to it by adding
those of its elementary operations. The steps of this process are not shown; approximate results appear in
Table 7.12.

Now, the mission execution time per second, or mission "duty", can be estimated in parametric form, by
adding the frequencies FM of tM and FCPU of tCPU which appear in the two rightmost columns of
Tables 7.11 and 7.12.

The result will be an expression of the type:

DE = FM x tM + FCPU x tCPU < 1 .
The value of the expression is to be significantly less than 1 (e.g., 0.5) to take into account the approximation of
the method and to allow for contingencies and future expansions of the tasks.

The expression contains two independent variables, but neither one is allowed to vary freely according to the
designer's fancy. Hardware constraints exist which establish upper limits that it would be impossible, or simply too
costly, to exceed.

If, using acceptable values of tM and tCPU, the above expression cannot be satisfied, the times assigned to
some elementary operations or macros have to be shortened and the computation repeated for the new values until
a satisfactory DE is found.

For example, if a macro operation originally assumed to be implemented by software is implemented by
hardware (i.e., as a single instruction), each memory cycle tM required to fetch a constant or an instruction from
the memory, except the first one, would be replaced by one or more CPU cycles tCPU (as the constants would
be stored in machine registers and the intermediate instructions would be replaced by sequences of machine states).
The execution time would thus be considerably shorter, but at the expense of more complicated hardware.

Before we show an example computation, let us introduce another important parameter: the "response time"
tR, i.e., the time elapsing between the input of a set of variables and the output of the related processing results.
The value of tR is very important whenever the computer is part of a control loop. The procedure for estimating
tR for each task is like the one just described for the execution duty DE, but applied to the operation distribution
(not spectrum) of each task. If nTM is the number of memory cycles and nTCPU the number of CPU cycles
pertaining to a task, the following expression is to be satisfied:

tR = nTM x tM + nTCPU x tCPU < TR

where TR is the value of the required response time, taking approximation and contingencies into account. If the
expression is not satisfied, once again the operation execution times have to be adjusted.

Let us now introduce an example.

Let us assume tCPU = 0.250 µsec and tM = 1.0 µsec. From the totals of Tables 7.11 and 7.12,
we have:

FM = 62,090 + 66,368 = 128,458 sec^-1

FCPU = 141,890 + 173,880 = 315,770 sec^-1

whence:

DE = 128,458 x 10^-6 + 315,770 x 0.25 x 10^-6 = 0.207 sec/sec

which is acceptable.
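The same computation can be expressed in a few lines of Python; the numbers are those of the worked
example, and the variable names are illustrative only:

    t_M, t_CPU = 1.0e-6, 0.25e-6        # assumed cycle times, in seconds
    F_M   = 62_090 + 66_368             # memory-cycle frequencies, Tables 7.11 and 7.12
    F_CPU = 141_890 + 173_880           # CPU-cycle frequencies
    D_E = F_M * t_M + F_CPU * t_CPU     # mission execution duty
    print(round(D_E, 3))                # -> 0.207 sec/sec, comfortably below 1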

Once the execution and response times have been found to be satisfactory, the further step is the definition
of an instruction set. A few choices have already been made when assigning execution times to each elementary
operation: for example, that macros can be implemented by software, or that multiplication and division and perhaps
some macros are to be implemented by hardware. Let us now return once more to the elementary operations and
to the mission statistics, and describe a number of qualitative rules to determine the instruction set.

Some operations can be directly implemented by means of one instruction; for example M+ becomes: ADD
contents of a memory place (specified by the instruction address) to contents of a certain machine register (a similar
reasoning applies to M-, M,, M*, M:, etc.). If more machine registers are required, it will be desirable to specify
on which register an instruction is to operate: hence more ADD or SUB (subtract) or MPY (multiply), etc., will be
specified.

Other operations, more exactly M!, M>, M< and M?, translate directly into unconditional (for M!) and conditional
jumps.

Other operations, those indicated by a simple operator symbol, would correspond to "interregister" instructions.
For example: * specifies that the two operands to be multiplied together are contained in machine registers. Such
instructions may operate on two registers of a random-access scratch-pad, or on the two upper locations of a stack,
or on two machine accumulators.

Frequent encounters with indices like (I, (I/K, etc., reveal that one or more index registers will be required.
As an index register is of no use unless instructions are provided for loading, storing and testing its contents, such
instructions will have to be provided. Use of index registers will normally be associated with actions of this type:
"Test contents of index register; if equal to N go to statement X, otherwise go to statement Y (i.e., continue
a loop)", which would be written as
I,N,X?Y!

Such a pattern of elementary operations could be implemented by means of one instruction only, of the type
"Test contents of the index register; if equal to contents of memory location N, go to next instruction, otherwise skip it".

This same reasoning can be applied to other patterns of consecutive elementary operations which are recognized
to repeat themselves frequently throughout the mission description: it would be desirable to implement such patterns
by means of dedicated instructions, as this would reduce mission execution duty. For example, whenever a memory
location M is used as a counter of events, we will find an expression like this:
M, 1 + M=
which can be implemented by an instruction of the type:
"increment a specified memory place by 1".

Tests on status of flags are other typical cases.

Finally, no instruction set would be complete without provisions for testing and for resetting Carry,
Overflow and the other machine status indicators.

Once a tentative instruction set has been defined, provided that the number of required instructions does not
conflict with the often mutually conflicting requirements of the instruction format (see the following section), a final
adjustment has to be made. Once more, no standard method exists; trial programming of some tasks or of the whole
mission, aided if need be by simulation (see the following chapter), is perhaps the way most commonly followed
to verify that the proposed set fulfills the mission requirements.

TABLE 7.11

Execution Times of Operations

                            Single Time (1)        Frequencies
Operations     Spectrum      TM       TCPU          FM       FCPU

,                1000         1         1           1000      1000
+                1041         1         1           1041      1041
-                 102         1         1            102       102
*                1001         1        20(2)        1001     20020
:                 501         1        36(2)         501     18036
&                 100         1         1            100       100
'                 100         1         1            100       100
"                 100         1         1            100       100
M,               6417         2         -          12834         -
M+               6080         2         -          12160         -
M-                902         2         -           1804         -
M*               3169         2        20(2)        6338      6338
M:               1540         2        36(2)        3080     55440
M&                200         2         -            400         -
M'                100         2         -            200         -
X=               4370         2         -           8740         -
N$               3500         1         8(3)        3500     28000
M%                590         4(4)      -           2360         -
M!               1021         1         1           1021      1021
M>               1000         1         2           1000      2000
M<               1000         1         2           1000      2000
M?                941         1         2            941      1882
M"                100         1         1            100       100
INPUTN,           583         1         1            583       583
OUTPUTN=          542         2         -           1084         -
(I               2027         -         1              -      2027
(I/J             1000         1         2           1000      2000

TOTALS                                             62090    141890

NOTES to Table 7.11:

(1) Includes the fetch of the instruction.
(2) In case of 16-bit words.
(3) Average value in case of 16-bit words.
(4) Includes the saving and resuming of the program counter.

TABLE 7.12

Execution Times of Macros

                            Single Time          Partial Time
Macro        Frequency       TM      TCPU         FM       FCPU

SIN              74          32      120          2368      8880
COS              73          32      120          2336      8760
TAN              10          32      120           320      1200
ARCSIN           21          32      120           672      2520
ARCCOS            4          32      120           128       480
ARCTAN           17          32      120           544      2040
OTHERS (2)      300         200      500         60000    150000

TOTAL                                            66368    173880

NOTES:
(1) Trigonometric functions are evaluated as a 6-term series expansion,
with 16-bit x.
(2) The execution time is considered as an average value.

7.4.9 Instruction Word Length and Format


Instructions can be classified into two main classes:
(a) memory-reference instructions, concerning operations on operand(s) contained in Memory as well as in the
machine registers;
(b) non-memory-reference instructions, concerning operations on data in the machine registers.

The instruction format for class (a) will consist of two basic parts: one defining the operation and the other
the operand(s). The latter part could be subdivided into more sub-fields defining the procedure to find each operand

involved in the instruction and the location where to store the result. In the simplest case (see Figure 7.6) one
operand is always stored in a machine register called accumulator, while the other is in memory. In this case, no
special code to specify the accumulator is required: it is understood from the operation code. The operand part
(AF) of the instruction is then completely devoted to indicate the second operand in memory.

INSTRUCTION WORD

OPERATION PART OPERAND PART

OP I X AF

OP OPERATION CODE
I INDICATES INDIRECT ADDRESSING WHEN IT IS 1
X INDICATES INDEXED ADDRESSING WHEN IT IS 1
AF OPERAND ADDRESS FIELD

Fig.7.6 Example of single-address memory-reference instruction

In more complex formats, the operand part of the instruction defines two or three memory locations where
the operands are to be retrieved and the result is to be stored. In this case, the full addresses of the operands and
of the result cannot be explicitly contained in the instruction: a too long word for the instruction would be
required.

Usual solutions to solve this problem are "indirect addressing" and "indexing". In the former solution, the
instruction gives a reference to a location, either in the CPU or in a reserved part of the memory, where the full
address of the operand is stored; in the latter solution, the instruction contains a partial address which is to be
modified by the contents of an index register to obtain the full operand address.

These two types of addressing are used also for operational purposes, to access for instance a location in a
one-entry array or in a two-entry array, as will be said later.

The problem of limiting the instruction word length also arises in the simplest case of a single-address
instruction, mentioned before.

A solution often adopted consists of organizing the memory into pages. A page is a block of 2^n contiguous
memory locations, small enough, with respect to the overall memory capacity, to be directly addressed by the n
bits of the address field (AF) contained in the operand part of the instruction. An "addressing mode" code, also
contained in the operand part of the instruction, indicates the way to derive the location of the page inside the
full memory addressing range.

Many different addressing modes have been devised. The most frequent ones are summarized in Figure 7.7,
where the procedure to evaluate the operand effective address (EA1 through EA5, for the cases considered) is
shown.

The classification has been made on the basis of the page location, which can be either relative (to the program
counter (PC) or to a pointer (POINTER)) or fixed (e.g., page "zero"), and of the page boundaries, which can be
either variable or fixed.

Variable boundaries are obtained when AF is added to the reference address, given either by the program
counter (case 1) or by the pointer register (case 2). In case 1 the page is called "mobile", since it "moves" following
the program counter during program execution.

Fixed boundaries are obtained when the least significant bits of the address are directly derived from AF, while
the most significant bits (MSB) are derived either from PC (case 3) or from the POINTER (case 4), or are set to a fixed
number K (case 5).

To choose the page size the following considerations are to be made.



                                        PAGE LOCATION

                      RELATIVE TO PC           RELATIVE TO A POINTER       FIXED
                      (CURRENT PAGE)           (ANY PAGE)                  (e.g., PAGE "ZERO")

PAGE        Variable  (1) EA1 = PC + AF        (2) EA2 = POINTER + AF      not applicable
BOUNDARIES            (mobile page)

            Fixed     (3) EA3 = MSB.PC ~ AF    (4) EA4 = MSB.POINTER ~ AF  (5) EA5 = K ~ AF
                                                                           (K=0 for page "zero")

LEGEND:

EA        operand effective address
PC        program counter
AF        address field
MSB.      most significant bits of
~         junction (concatenation)
POINTER   pointing register
K         fixed number (K=0 for page "zero")

Fig.7.7 Addressing modes
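A hedged Python sketch of the five modes follows, with the junction rendered as bit-masking over an assumed
n-bit address field; the function and parameter names are inventions of the sketch:

    def effective_address(mode, AF, PC=0, POINTER=0, K=0, n=8):
        low = 2**n - 1                                      # mask for the n-bit field
        if mode == 1: return PC + AF                        # EA1, mobile page
        if mode == 2: return POINTER + AF                   # EA2, variable page via pointer
        if mode == 3: return (PC & ~low) | (AF & low)       # EA3, fixed page around PC
        if mode == 4: return (POINTER & ~low) | (AF & low)  # EA4, fixed page via pointer
        if mode == 5: return (K << n) | (AF & low)          # EA5, K=0 gives page "zero"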

Data are usually organized in tables. A table is a set of data which have either the same source (input channel)
or the same destination (output channel) or the same utilization, i.e., they are used by the same sub-program. To
speed up the operand fetch any data inside a table should be addressed easily, once the table has been defined.

For this reason, the page size should be large enough to contain the longest table. If this is not practical,
because most of the tables can be accommodated in a reasonable page size while a few are longer, these latter could
be subdivided into subtables of suitable sizes.

The memory page should be chosen taking also into account the requirements for program sequencing: loops,
for instance, should not be longer than a page to allow jump with direct addressing.

The requirements for a certain page size are often in conflict with those of the operation code. From the
programmer's point of view, having large pages means little use of indirect addressing, and therefore simpler
programs which are also shorter to execute (instructions with indirect addressing require one more memory cycle
than their counterparts with direct addressing).

Large pages require long address fields, thus leaving few bits for the operation code and reducing the number
of available operation codes for memory-reference instructions.

Thus the designer is faced once more with the problem of finding an acceptable compromise between page size
and operation codes, in order to optimize execution and response times.

Non-memory-reference instructions, on the other hand, should not present coding problems, as they would
typically be specified by a dedicated OP code, leaving the many remaining bits of the instruction word free for the
instruction code.

Input/Output instructions are often in a form similar to that of memory-reference instructions, but with AF
specifying the number of the input or output channel.

In any case, it is desirable that the information required to define instructions be packed into a word having
the same length as the data word, or in some cases a multiple of it. A mutual adjustment can be sometimes
necessary to achieve a suitable compromise.

Indirect addressing and indexing are used also to reference subscripted locations. Two examples are considered:
one-entry and two-entry arrays. More sophisticated cases can be encountered in actual programming.

One-entry array

Let N locations be organized into a table beginning at location TAB. Let also J be a number, ranging
from 0 to N-1, used to reference the said locations inside the table. The effective address of a generic
location is given by: EA = TAB + J, where TAB is derived from AF in one of the ways described before (according
to the type of page chosen; see Figure 7.7) and J is supplied by an index register.

In case of loops, J can be used to control the number of iterations and, at the same time, to reference the data
inside the table on which the program operates.

Two-entry array

Let us consider a table of N x M memory locations organized as an array with N rows of M locations
each. Let also J, ranging from 0 to N-1, be a number defining a generic row of the table, and K be
another number, ranging from 0 to M-1, defining a generic location in the J-th row.

The overall table can be considered as composed of N subtables, with M elements each, beginning at locations
TABJ (J = 0, ..., N-1). The addresses TABJ can be contained in a reference table, beginning at location
TAB and having N elements, addressed by J with reference to TAB.

The effective address of a generic location is given by: EA = TABJ + K, where TABJ is given by:
TABJ = (TAB + J); TAB is derived from AF as said before for the type of page used (Fig.7.7). The
parentheses indicate the contents of the memory location specified. J and K are given by two index registers.

Using the notation described in a previous section, an operand in a one-entry array is written as TAB(J; an
operand in a two-entry array is written as TAB(J/K (see example in Table 7.2).
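As an illustrative sketch only (memory modelled as a flat Python list, with the names TAB, J and K taken
from the text), the two access patterns are:

    def ea_one_entry(TAB, J):
        return TAB + J                  # EA = TAB + J  (J supplied by an index register)

    def ea_two_entry(memory, TAB, J, K):
        TABJ = memory[TAB + J]          # the reference table holds the row addresses
        return TABJ + K                 # EA = (TAB + J) + K in the text's notation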

7.4.10 Memory for Program

The number of memory locations required to store the program instructions can be evaluated according to the
following steps:
(a) From Table 7.7 the total number of elementary operations in the mission is evaluated by adding all
the numbers in the bottom row.
In our example this total is 3609 operations.
(b) The macros are expanded in terms of elementary operations.
(c) For each macro, the number of elementary operations involved is evaluated. In our example a number
of 20 elementary operations has been assumed to be required for each of the macros SIN, COS, TAN,
ARCSIN, ARCCOS, ARCTAN, and an average value of 100 operations for each of the OTHER 20 macros.
The operations required by the macros are:

6 x 20 + 20 x 100 = 2120 operations


(d) Considering, as a first approximation, a memory location for each operation (as if a single word instruction
corresponded to each elementary operation), the number of memory locations for program is obtained by
adding the number of operations obtained in (a) to those in (c). In our example:
3609 + 2120 = 5729 locations.

(e) An allowance is to be left for the following reasons:


— double-length instructions (including indirect addressing),
— constants,
— contingencies.
A total of 50% can be estimated and added, yielding:
5729 x 1.5 = 8600 locations.

7.4.11 Total Memory Requirements

The total requirements for the memory are derived by adding memory words for data and memory words for

programs. The number thus obtained will represent the memory words required for the mission. In our example:
Memory for data: 9600 words
Memory for programs: 8600 words
TOTAL MEMORY : 18200 words

The figure obtained must be rounded up, taking into account the following considerations.

Memories generally consist of a number of identical modules, i.e., blocks of words to which contiguous
addresses are assigned. Each module is a well-defined physical entity, having a number of dedicated electronic circuits
and sharing other circuits with the other modules. Hardware constraints define practical module sizes, which are
normally multiples of 1024 (1K) words. Typical sizes are 4K and 8K.

As the total memory will have to consist of a number of modules, the memory required by the mission shall
be approximated by the nearest multiple of module size which exceeds the theoretical figure. In our example,
assuming 4K modules, total memory size will be 20K words.

This last figure is the estimated memory requirement for the actual mission.

As already said, future expansions should be foreseen. Addition and utilization of further memory modules is
possible only if the full memory address has a wide enough range (the full memory address is normally not longer
than a memory word).

If the full memory address is b bits, the full addressing range is 2^b memory words. In our example, where
20K words are assumed to be presently required, an expansion possibility up to 32K, i.e., 60% above the present
memory, seems appropriate. Hence the full memory address should be 15 bits long, as 32K = 2^15.

Should this be considered not enough, a maximum memory of 65K would have to be chosen, corresponding
to a full address of 16 bits.
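The rounding and address-width checks are easily scripted; the following sketch uses this example's numbers
and assumed 4K-word modules:

    module = 4 * 1024                             # assumed module size, in words
    required = 9600 + 8600                        # data + program words = 18,200
    total = -(-required // module) * module       # round up to whole modules: 20,480 (20K)
    address_bits = (32 * 1024 - 1).bit_length()   # 32K-word expansion target -> 15 bits
    print(total, address_bits)                    # -> 20480 15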

CHAPTER 8

MONITORING AND CONTROL OF AEROSPACE VEHICLE PROPULSION

E.S.Eccles

8.1 GENERAL INTRODUCTION

This chapter discusses the application of digital computer systems to a specific problem and cross-refers to
preceding chapters. It illustrates the practical significance of individual sections and brings particular aspects into
sharper focus. The discussion is concerned with a single problem: the design of systems for monitoring and
control of the propulsion of aerospace vehicles.

The general context is related to commercial operation of vehicles using airbreathing engines. This limitation
permits concentration of attention on the systems problem and removes the need for any extensive discussion of
powerplant characteristics. Detailed treatments of powerplants and their operating or control characteristics are
available in the literature (e.g., Reference 1).

The restriction is not serious in terms of broad powerplant characteristics. The basic features of control
requirements are common to all plants using chemical energy sources and combustion for energy conversion. There
are strong conceptual similarities, for instance, between throttleable rocket motors and augmented (reheated) gas
turbine powerplants. Time constants and thrust levels differ but the basic problems of mixture control via indepen-
dent fuel and oxidant flow control and their pumping (in the general sense) remain the same. The plant control
details will be less relevant to systems using hypergolic fuels and nuclear or electric propulsion.

In the same way, the operational criteria will be similar for vehicles which, in themselves, are as different as
the space shuttle and STOL feeder liner systems. Commercial and military operational criteria also have many
analogous, if not entirely homologous, features. Similar design trade-offs are involved for both types of organiza-
tional structure and mission objectives.

It is hoped that, within these limitations, the discussion will identify the basic principles involved and enable
read-across to other types of operation and to other avionics systems disciplines.

8.2 STATEMENT OF THE PROBLEM

The problem was stated in the introduction as the "design of systems for monitoring and control of the
propulsion of aerospace vehicles".

The solution of the problem takes place in stages (of time), through various levels of "system" hierarchy and is
bounded by many interfaces between organizations. There are two major phases in the life of a system. One leads
up to entry into service and includes the R and D steps, design validation and certification. The second stage
embraces the service life and operation of the system. The objectives of these two stages differ and can produce
conflicting requirements on the overall system which need careful resolution.

The levels of the system hierarchy and their relationship with the general field of technological development
are illustrated in Figure 8.1. At the highest level, technology and the social-economic system interact to provide
an operating environment. The environment is defined primarily by legislation and controlled by regulatory bodies.
This "environment" is expressed in terms of permitted noise levels, minimum safety criteria, communication
spectra, traffic scheduling limitations and so on.

At the next level, operators recognize new market opportunities or competitive threats from which they
generate strategies leading to identification of market requirements and possibly the need for legislative modification
of the operating environment. The market requirements in turn lead to definition of the primary mission to be
accomplished, the facilities required to complete the mission and the support arrangements required to sustain the
total activity.

[Figure 8.1 is a block diagram of the overall system hierarchy: Technology and the Social-Economic System
interact to define the Operating Environment (legislation, statutory bodies); this yields the Market Requirement,
which divides into Support Definition, Mission Definition and Facilities Definition; these lead to the Vehicle
Specification, which divides into Airframe (Flight Dynamics), Propulsion (Powerplant) and Systems
(Communications, Navigation, Displays, Controls).]

Fig.8.1 Overall system hierarchy

There are clear military parallels to this sequence leading through strategies based on response to threat or on
technological innovation. The same government agencies will not be involved but the same function will be
performed and the military operators will provide the same three component definition of their requirements.

The operator produces a vehicle specification which fulfills his identified mission and also takes account of
support and facilities available. Legislation provides the interface between the highest "system" level and the
operator level; the vehicle specification is the interface between the operator and contractor levels.

The "vehicle" area divides into three primary areas: the airframe/flight dynamics areas, the propulsion area
and the "systems" area embracing communication, navigation and power services. The propulsion area itself divides
into the powerplant per se and its monitoring and control.

The propulsion control and monitoring system is usually sub-contracted by the powerplant prime contractor.
However, to be successful, the design of the system must take into account much more than the powerplant
characteristics. It must be considered in relation to other aircraft systems such as flight dynamics, power generation
and cabin pressurization. It must be influenced by operator support costs (in repair and logistics) and by customer
service (in its potential for creating departure delays). It will also be strongly affected by minimum safety levels
in the choice of its failure characteristics and system reliability.

The control and monitoring of the propulsion system involves direct interfaces at all levels in the system
hierarchy. These interfaces frequently pose more intractable constraints on the system design than any of the
technical problems involved in realizing a sound system.

8.3 THE REQUIREMENTS OF PROPULSION CONTROL AND MONITORING

The simplest type of propulsion control is concerned with a single powerplant. This restricted system will be
considered before discussing configurations involving several powerplants and integration of the propulsion system
both with other systems and with the vehicle. The overall requirements can be conveniently dealt with in two
groups:
- The basic control modes for normal plant operation.
- The system failure characteristics and failure responses.

Definition of Basic Control Modes


There are four basic groups of control modes:
— Start-up/shut-down modes,
— Steady state control modes,
— Transient (power modulation) control modes,
— Protective control modes.

The start-up/shut-down modes are essentially sequencing operations which (for start-up) engage the starter,
run up pumps, apply ignition at appropriate flow and pressure conditions and schedule further combustion flows up
to idling condition for the engine. At this point the engine will be self-sustaining and capable of full thrust
modulation. In shut-down, the control runs down the pump flows, closes down fuel supplies and purges the lines
to the combustor, leaving the system inhibited.

Various steady state control modes can be used. They can be based on open loop scheduling of fuel flow or
on closed loop control of a specific parameter such as pressure, temperature and, for rotating machinery, shaft
speed. The parameter chosen will be closely related to thrust level and may be either directly measurable or a
quantity derived from one or more directly measured parameters. Different steady state modes may be engaged
at different engine regimes.

As an example, the idling speed of a gas turbine is a function of the density of the air entering the compressor
intake. The idling fuel flow, however, is sensibly constant. It is therefore simpler to schedule an open loop control
of idling fuel flow than to derive a compensated speed demand for closed loop control of idling speed.

On the other hand, the ratio of absolute shaft speed to maximum shaft speed usually approximates quite
closely to the ratio of actual thrust level to maximum thrust level available at a given flight condition. "Percentage"
speed is therefore a useful measure of "percentage" thrust and a power lever position related to engine speed is a
good ergonomic arrangement.

An arrangement which has been used is to combine the two modes using logical switching as in Figure 8.2.
The logic is organized to select the control mode resulting in the higher engine speed and the speed control loop
demand is shaped to ensure that the fuel flow schedule will always be chosen at idle.

[Figure 8.2 plots engine speed against power lever setting: the closed-loop speed schedule rises towards 100%,
and at low settings the open-loop flow schedule governs, its intercept moving with decreasing air density.]

Fig.8.2 Steady-state control modes

The logic operates on the algebraic value of the control error in the two modes. Mode switching occurs
smoothly because it takes place when the two errors are equal in magnitude. The logic is not restricted to two
inputs alone but can be used for several modes which need to be confined to the same group.
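
A minimal sketch of this selection logic, in Python and with hypothetical mode names, is given below. Each mode
contributes a compensated control error and the algebraically largest is chosen, so the change-over is smooth at the
point where two errors are equal.

    def select_mode(errors):
        # errors: mapping of mode name -> compensated control error.
        # The mode with the algebraically largest error wins the poll.
        return max(errors, key=errors.get)

    errors = {"idle_flow_schedule": 0.12, "speed_loop": -0.05}
    print(select_mode(errors))    # the flow schedule governs at low settings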

The transient modes of control operate when rapid, large-scale thrust changes are called for. They prevent
combustion mixture limits being exceeded during transient flow changes and the resulting flame extinction. They
also prevent instability occurring either in the combustion or in the pumping systems. In this context, compressor
instability in a gas-turbine is the same as pump instability in a rocket motor. The transient limits of control can
be represented on a diagram such as Figure 8.3 in which compressor (pump) pressure ratio is plotted against
compressor mass flow. The compressor instability and combustor extinction boundaries are shown together with a
steady state operating line and constant (non-dimensional) speed lines.

Increase of thrust results in a pressure ratio higher than for steady running while reduction of thrust moves
the operating point towards weak extinction. The function of the transient control modes is to ensure that the
locus of the instantaneous operating point lies within the two boundaries during thrust changes.

The protective modes of control prevent damage to the engine which could be caused by exceeding structural
design limits for temperature, pressure or rotational speed. These design limits are absolute values. A unique
maximum speed boundary cannot be drawn on a "non-dimensional" diagram such as Figure 8.3 since non-dimensional
speed is a function of inlet temperature. The same applies to limiting temperatures in combustor or turbine blade
materials. These limits will normally be invoked during a full scale acceleration with a locus such as is shown in
Figure 8.4.

It is normal for five different modes to be selected during a large scale acceleration within the space of a few
seconds. Mode selection must therefore be automatic and there will always be several modes available for each
controlled output variable.

[Figure 8.3 plots compressor (pump) pressure ratio against mass flow: the steady-state operating line, with thrust
increasing along it, lies between the instability boundary above and the extinction boundary below.]

Fig.8.3 Transient control limits

Selection of Basic Type of Control


The time between mode changes can be comparable with the engine time constant itself. It follows, therefore,
that real-time digital computer control has a short, environment-determined response time and must use dedicated,
continuously running programs for each mode of control. The type of logical mode selection described also requires
that the mode switching be effected by "polling" rather than by interrupt-type procedures.

The basic program sequence for control is then:


Read demand
Read present parameter states
Compute compensated control error values for each mode for each control output
Poll error values for each group of modes associated with each control output
Select one error from each group for each control output
Scale and output the selected errors.

(The inter-parameter cross-feeds and manipulation involved in multi-variable controllers have been omitted for
simplicity.)
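
One iteration of this polled sequence might be sketched as follows; the demand, sensor, mode and actuator
interfaces are hypothetical stand-ins, not a definitive implementation.

    GAIN = 0.1    # illustrative output scaling

    def control_iteration(read_demand, read_sensors, mode_groups, actuators):
        demand = read_demand()                      # read demand
        state = read_sensors()                      # read present parameter states
        for name, modes in mode_groups.items():
            errors = [mode(demand, state) for mode in modes]   # per-mode errors
            selected = max(errors)                  # poll the group, select one
            actuators[name](GAIN * selected)        # scale and output

    # Example: one fuel-flow output with a speed loop and an idle schedule.
    mode_groups = {"fuel_flow": [
        lambda d, s: d - s["speed"],                # speed loop error
        lambda d, s: 0.2 - s["flow"],               # idle flow schedule error
    ]}
    actuators = {"fuel_flow": lambda v: print("drive fuel valve by", round(v, 3))}
    control_iteration(lambda: 0.8, lambda: {"speed": 0.7, "flow": 0.15},
                      mode_groups, actuators)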

So far, we have ignored any response-time criteria. Most of the time constants in the propulsion control system
are similar or can be handled by simple forms of multi-rate sampling. All the protective loops considered have been
concerned with protection against potential failure. There are some protective loops which require very rapid
response to external disturbances, or to actual failures in order to prevent catastrophic secondary damage.

[Figure 8.4 repeats the pressure ratio/mass flow diagram and traces an acceleration locus which passes in turn
through the flow schedule, the "transient" control, the over-temperature limiter and the over-speed limiter.]

Fig.8.4 Mode sequence during acceleration

There is now a choice in the structure of the system which requires a careful trade-off to be made. Is it more
satisfactory to complicate the digital processing by introducing a priority interrupt structure and recovery procedures
to respond to these conditions, or is it better to provide for them by independent sub-systems which over-ride the
primary control demands on the output actuators?

There is no blanket solution to this question. There can be a real requirement for fitting independent actuators,
if not complete sub-systems. The question was generated by considering abnormal operation or response to failure,
particularly where rapid action was required. We are therefore introducing into the system choices criteria other
than control of the plant under normal operating conditions.

Even the most basic control requirements, such as a loop sampling rate, can be affected by considerations
which have nothing to do with control theory but a great deal to do with the system as a whole. The design of
the system must encompass all the possible operating conditions of the plant and controller including all their
possible failure modes.

Various failure responses can be required of the system. The simplest is to force a hard-over failure to a fixed
limit. The result may be an uncontrolled thrust excursion in either direction but the magnitude of the excursion
must not prejudice the integrity of the vehicle as a whole.

This type of arrangement can sometimes be obtained by putting fixed limit stops on actuator movements. It
is a method which must be used with care, however. It is appropriate to limit fuel-reducing excursions by an
"idle-flow" stop. The method cannot be so easily used to restrict fuel-increasing excursions.

An engine can only be accelerated by supplying it with more energy than is necessary for steady-state running.
Acceleration therefore depends upon over-fuelling. Good accelerations usually require a maximum fuel flow well in
excess of that needed for steady state operation at the structural limits for the engine.

A simple stop cannot therefore be used to limit an "upwards" hard-over excursion. Any stop would have to
be modulated if a failure was known to have occurred. This defeats the whole object of a protected hard-over
failure arrangement which is to make failure detection unnecessary.

Where failure detection is provided, then it is as easy to arrange for soft failures to occur as to arrange for limit
stop modulation. A soft failure forces the system to a defined condition. In a flight control system this condition
is with the control surfaces in a neutral position. In a powerplant, the same basic "no disturbance" response is
achieved by failing with the engine condition unchanged. For many failures this requirement cannot be met and
a small thrust excursion will take place before the system reacts and checks further changes. The modulated
throttle stop can be looked upon as an extreme version of a soft failure.

However, the magnitude of the transient excursion is a fundamental system parameter with a major influence
on system design. Large excursions can rarely be tolerated.

The definition of a tolerable disturbance involves not only the propulsion area but the vehicle characteristics,
human factors and also operating conditions. It is a "systems" parameter. It has to take into account the worst
safety threat which can arise under all combinations of circumstance.

Once defined, this excursion can be related back through failure detection, confirmation and reaction time to
limiting actuator rate. Alternatively, where the actuator rate is fixed by other criteria, the sampling rate for the
system has to be adjusted so that the failure response-time and resulting actuator excursion are compatible with
the safety requirements.
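
The trade-off can be illustrated numerically. In the sketch below (all figures hypothetical), detection may lag a
failure by up to one sampling period; adding confirmation and reaction delays and multiplying by the actuator rate
gives the worst-case excursion, which must remain within the tolerable disturbance.

    # All figures are hypothetical, for illustration only.
    actuator_rate = 25.0       # percent of full travel per second
    confirmation_time = 0.04   # s, to confirm a suspected failure
    reaction_time = 0.02       # s, for the executive to act

    def worst_case_excursion(sampling_period):
        # Detection may lag a failure by up to one full sampling period.
        response_time = sampling_period + confirmation_time + reaction_time
        return actuator_rate * response_time

    tolerable = 3.0            # percent: the "systems" parameter
    for T in (0.02, 0.05, 0.10):
        print(T, worst_case_excursion(T), worst_case_excursion(T) <= tolerable)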

If it is found that the reconciliation of these requirements is difficult using a single actuator, or if it involves
serious penalties, it becomes necessary to return to the trade-off of interrupt structure against separate sub-systems.

The failure characteristics are now also involved. We see too, how actuator response and failure excursion
interact with sampling rate and that some of the basic "control" parameter choices can be determined by overall
system considerations rather than by a simple approach to control of the plant itself.

8.4 DEFINITION OF DESIRED FAILURE CHARACTERISTICS

The definition of a set of failure characteristics is a full-scale systems operation. It involves all levels of the
systems hierarchy.

Aircraft accidents are inevitable. Commercial operators fly into many countries and few accidents are purely
internal matters. The risk of an accident does not depend exclusively on the vehicle. It depends upon ground
facilities as well, upon features which are determined partly by economic factors and partly by technological
capability. The operating environment is therefore involved.

The certificating and operating regulations vary from one country to another. Failure characteristics for a
vehicle which is to be sold in several countries must take account of the most severe regulations it will have to meet.

Unrestrained commercial determination of safety levels is inadmissible. Minimum standards of safety are set
by certificating bodies in consultation with the operators and with the manufacturers. While commercial factors are
deeply involved in setting the level, they are not the only factors considered. The levels cannot be manipulated to
reduce operating costs once they are agreed.

From this point on, there are three levels of working, one appropriate to each of the certificating authority,
the vehicle contractor and the systems contractor(s). They have different responsibilities and different working
methods.

Conditions can occur during operation of an aircraft which involve a potential lowering of the level of safety.
These occurrences may have their origin in equipment failures, in human errors or in uncontrolled events outside
the vehicle. They may be encountered singly or in combination.

Each occurrence will give rise to an effect which may be classified as minor, major, hazardous or catastrophic.
The function of the certificating body is to define levels of probability for each of these effects, the combination
of failures, errors and outside events which must be considered and the procedures to be used in demonstrating that
the requirements have been met.

The aircraft manufacturer's task is to define all the situations and occurrences which could cause the effects.
Some occurrences, such as weather or turbulence will be uncontrolled. Some human factors will be only partly
controlled. The remaining factors must be manipulated so that the permitted frequency of each effect is not
exceeded. These factors will include loss or degradation of various functions in the vehicle and the manner of their
loss. The permitted failure rates and failure trajectories are then incorporated in the manufacturer's equipment
specification.

The manufacturer may use several analytical procedures in arriving at the equipment specification. One of the
most satisfactory methods of formalized working is the Fault-Tree Analysis (FTA) originally developed by the Bell
Telephone Company while devising systems which would prevent the accidental launching of a Minuteman missile.
The results of the analysis are presented on a flow chart, usually using Boolean operator symbols.

An illustrative partial FTA for a hypothetical VTOL aircraft is shown in Figure 8.5. It starts from an "effect"
specification given by the certificating authority and proceeds to analyse the possible routes to this effect. The "loss
of aircraft" effect can arise with the aircraft airborne through collision or sabotage (which are essentially human factors)
or through structural failure (e.g., through fatigue or by atmospheric effects such as clear-air turbulence). Similar
reasoning can be applied when the aircraft is on the ground.

Surface impact can arise in several ways. Figure 8.6 expands the analysis for surface impact in the VTOL mode
of flight due to causes associated with loss of control or temporary degradation of control. It indicates how the analysis
can be extended into the equipment manufacturer's area. The extension is illustrated for the particular case of sudden
thrust changes caused by failures in a control loop operating in a protective mode and limiting a critical temperature
in an engine.

The failure can be in the temperature datum. False datum selection caused by relay failure and datum offset
caused by parametric drift in transistor circuits are indicated. Incorrect datum may also be selected by the crew,
bringing in ergonomic factors at the crew interface. An alternative source of failure is a false, high indication of the
actual engine temperature. Illustrative hardware failures are shown.

The total probability of the "loss of aircraft" is arrived at by summing all the contributions through the tree,
thereby exposing the relationships between uncontrolled events and detailed equipment failures. The procedure will
often disclose implicit coupling of responses - for instance, the way in which pilot reaction can convert a sudden thrust
increase into an actual thrust decrease.
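
The bookkeeping can be illustrated with a toy fault-tree evaluator. Assuming independent events, an AND gate
multiplies its input probabilities and an OR gate is approximated by their sum (acceptable for rare events); all
event names and figures below are hypothetical.

    def AND(*probs):
        # Probability that all inputs occur together (independence assumed).
        p = 1.0
        for x in probs:
            p *= x
        return p

    def OR(*probs):
        # Rare-event approximation of 1 - prod(1 - p) for any input occurring.
        return sum(probs)

    relay_failure = 1e-5            # false datum selection
    datum_drift = 2e-5              # parametric drift in transistor circuits
    false_temp_indication = 5e-6    # false, high engine temperature reading

    false_datum = OR(relay_failure, datum_drift)
    sudden_thrust_change = OR(false_datum, false_temp_indication)
    print(sudden_thrust_change)     # contribution passed up the tree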

There are, in fact, two forms of "fault" which must be considered. One is a thorough-going failure such as those at
the foot of Figure 8.6. The other is a degradation of performance — an insidious failure which is difficult to detect.
The analysis must therefore not only cover outright failures in normal modes, defining their effects and frequencies,
but must also bound the tolerances on performance in normal operation.

This last feature should remove most of the "conceptual deficiency" failures shown in Figure 8.6 - as indeed is its
primary purpose. The confidence in the analysis and the probability of its including a significant error must be matched
to the permitted probability of the ultimate effect.

This stage of the system definition is particularly complex because of the extreme rigor which is demanded and the
interaction and interchange between the aircraft and equipment manufacturers. It ends with a general specification of
performance tolerances as well as the characteristics and frequency of abnormal deviations in performance.

The "reliability" specification for an equipment is fully defined at the aircraft contractor/equipment supplier
interface on the FTA. It appears in terms of failure modes and their maximum permitted probabilities. A rapid,
preliminary estimate at this stage will usually indicate the type of failure protection required for each failure mode.
Single failures with no protection against their consequences may be acceptable for modes involving very reliable
equipment or minor effects. Others may need fail-safe protection (either hard or soft) while others again may involve
a combination of less reliable equipment and catastrophic effects which demands failure surviving, fail-operational
redundancy. In extreme cases it may be necessary to provide survival for multiple failures.

The problem solving has now been shifted to a lower level of the hierarchy and is concerned with the structure
of a particular system function.

8.5 SYSTEM SELECTION AND ARCHITECTURE

Two steps have to be taken once the equipment supplier/aircraft contractor interface has been fully defined in the
FTA.

These steps, in order, are:


(a) To define those functions which will fall within the compass of a single system or sub-system.
(b) To define the architecture of the system or sub-system so that it meets the performance and failure
characteristics required of it in as near optimal a manner as possible.
[Figure 8.5 shows an illustrative fault tree. It starts from an effect whose maximum permitted probability "A" per
flight is set by the certificating authority, and is developed below that interface by the aircraft contractor. The
legend defines the Boolean "AND" function (only A occurring with B results in C) and the Boolean "OR" function
(either D or E results in F).]

Fig.8.5 Illustrative fault tree analysis



Fig.8.6 Continuation of illustrative fault tree analysis

Selection of Functional Groups for Systems and Sub-Systems


Digital computing and the general introduction of digital techniques into avionics is changing the ways in which
functions are grouped. They provide ways of divorcing hardware stmcture from the problem solution which is
embodied in a computer program. They provide ways of time-sharing, so that several programs can reside and run
in a single computing element. They provide ways by which data signals are readily multiplexed over a simple
circuit and, furthermore, make the isolation of the various signal sources simpler. They are therefore making it
easier and potentially more profitable to combine functions within a single sub-system.

The combined automatic functions can be approached in two ways, bodily, as an integrated whole, or
alternatively as a cohesive assembly of inter-communicating but relatively autonomous functional groups. These
approaches underlie the "integrated" and the "federated" systems approach to avionics.

The two approaches can produce radically different sets of functional grouping. Integrated systems generate
new product specializations - for instance multiplexed data transmission or displays and controls. The arrangement
where each sub-system included its own controls and displays is not appropriate to an integrated approach; the
controls and displays become part of a separate sub-system which exists in its own right.

Generally, systems management organizations will try to simplify the procurement interfaces. These interfaces
are partly determined by historical influences and industry stmcture which change neither so readily nor so rapidly
as the available technology.

A particular functional grouping may be precluded because, although the grouping is natural and conceptually
satisfying, it deflects an existing management interface and is a potential cause of deficient human communication.

The particular grouping of functions can also depend upon mission characteristics. The traditional division of
control in an aircraft powerplant is shown in Figure 8.7. Separate controls are provided for air inlet, the flange to
flange dry engine and the tail-pipe/augmentor section. Responsibility for the plant components themselves is also
split in the same way, the air intake being the responsibility of the airframe manufacturer while the engine and tail-
pipe sections may be undertaken by different engine manufacturers - possibly in different countries.

[Figure 8.7 shows the traditional powerplant control arrangement: separate inlet, engine and augmentor controls,
the inlet and engine controls duplicated (A and B), each with its own sensor/actuator sets serving the inlet, engine
and tail-pipe sections. Command inputs and augmentor select enter as independent variables, flight conditions act
on the inlet, and the sections are linked by aerodynamic and aero-thermodynamic coupling.]

Fig.8.7 Powerplant control system

Now the mission profile may be such that the augmentor is only called into operation at take-off and during
the early climb phase. Somewhat lower levels of reliability could be tolerated for its control system than for the
bare engine. Similar criteria can also apply to the inlet control if the periods of supersonic flight are short or a
requirement for a sub-sonic reversionary flight mode is not economically or tactically onerous.

The three functions can therefore require three different levels of redundancy in order to meet their reliability
criteria. It can be argued that the powerplant is an entity and that its total control forms a natural functional
group. An integrated powerplant control is shown in Figure 8.8. It shows a need for isolation where the data bus
system connects to common equipment.
It shows a need for consolidation when two possible drive signals are present.

The consolidation function could be avoided by only sending one signal to the augmentor actuators. This
could be arranged by having the control programmed in only one of the two control computers shown. The arrange-
ment would lead to logistic problems if the programs were hard-wired and two different computers were used.
It would be difficult to offset the cost penalty by savings in hardware unless the augmentor control were extremely
complex, and then the reliability of the engine control itself could be compromised.

A program flag arrangement can be used to inhibit the unwanted augmentor control in one of the computers.
It has to be implemented so that it is impossible to flag the program in both computers simultaneously and that
there is an immediate, external and visible indication that a program has been inhibited.
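
One way of meeting both conditions can be sketched as follows: a single external arbiter grants the flag, so it can
never be set in both computers, and granting it drives an immediate external indication. The interfaces are
hypothetical, not a definitive implementation.

    class FlagArbiter:
        # Single external point granting the "augmentor inhibited" flag.
        def __init__(self):
            self.holder = None

        def inhibit(self, computer_id, indicator):
            if self.holder is not None:
                return False          # already flagged elsewhere: refuse
            self.holder = computer_id
            indicator(computer_id)    # immediate, external, visible indication
            return True

    arbiter = FlagArbiter()
    show = lambda c: print("augmentor program inhibited in computer", c)
    print(arbiter.inhibit("A", show))   # True, with indication shown
    print(arbiter.inhibit("B", show))   # False: cannot flag both computers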

[Figure 8.8 shows an integrated powerplant control: two powerplant control lanes, A and B, receive the command
inputs and independent variables and communicate over data buses A and B, with isolation wherever the buses
connect to common equipment, including a common sensor set C. Sensor/actuator sets A and B serve the inlet and
the engine, while a consolidation element combines the two possible drive signals for the single tail-pipe
sensor/actuator set.]

Fig.8.8 Integrated powerplant control

Assuming that this is done, there remains the problem of allocation of responsibility if an augmentor malfunction
occurs. Did it originate in the hardware on the augmentor side of the data-bus, in a data-bus failure, in a computer
hardware failure or in the augmentor control program? Integration of functions must be accompanied by precise
fault isolation where the realization of a given operation cuts through several areas of responsibility.

All of these implications must be included in selecting the functional groups which define a system.

We have so far only considered functional grouping at the powerplant level. In normal operation, the power-
plant settings are manipulated collectively, controlling the overall propulsion complex for the vehicle. Differential
control of individual powerplants is generally only used during ground manoeuvres or to make small adjustments
for powerplant-to-powerplant performance scatter.

Automatic control of total thrust is already in use in all-weather landing systems and in some autopilot modes.
Much more could be done than modulate the thrust set-point.

Figure 8.9 shows a system arrangement with a high level propulsion control sub-system. Full use of the
capabilities of a digital computer could optimize the individual plant control inputs to obtain optimum aircraft
performance at any flight condition. It could trim engine time-response to large commands and avoid yawing
couples set up by different engines changing thrust level at different rates. It could monitor individual engine
performance and generate displays.

[Figure 8.9 shows an integrated propulsion control: a high-level propulsion control sub-system communicates with
the control and display sub-system and other sub-systems over data bus 1, and with the individual powerplant
controls over data bus 2. A consolidation and auto/manual change-over element accepts manual control inputs, and
independent displays are retained; each powerplant control drives its own powerplant.]

Fig.8.9 Integrated propulsion control

However, none of these functions are essential to keeping the vehicle in the air. They have economic significance
but no safety of flight value. It is important that functional groupings allow for this type of division. Under many
circumstances it will be of greater value to make a sub-optimal flight without the high-level sub-system than to make
no flight at all. The two groups of functions could be so inextricably bound up into a single operating configuration
that any failure, including one in the non-essential sub-system, would ground the aircraft.

The consolidation point where the propulsion and powerplant sub-systems interface with each other must
therefore also be a point at which the two sub-systems can be easily decoupled. The two systems must be segregated
to prevent fault propagation between them and a reversionary interface must be provided at the same point so that
manual or an alternative form of automatic control may be substituted for the propulsion control inputs. In present-
day systems, all of these features are provided in a single element which drives the power levers collectively through
individual clutches and a common, electrically isolated shaft.

Selection of the functional groups therefore depends upon a complex set of systems influences involving the
plant structure, the technology to be used, systems management considerations and operational factors. However,
once it has been achieved, the selected sub-set of FTA specifications defines the system failure response characteristics
and allow the architecture of the system to be defined.

8.6 SYSTEM ARCHITECTURE

The structure of the particular system selection will almost invariably require the use of redundant elements in
one form or another.

Even the simple hard-over limit stop can be considered as an extreme form of redundancy. The next level of
sophistication does not attempt to survive a failure but forces the system to respond in a pre-defined way such as the
soft failure response mentioned in the preceding section. This operation implies a "monitor" function, to detect
failure, and an "executive" function to force the correct response.

These two functions may be achieved satisfactorily by crew members — manually. More often, the system will
respond to failure in a violent way and the operating trajectory will be carried beyond the permitted limits. The
excursion of the trajectory which must be considered passes through all the operating points between two stable
states. These states are first with the vehicle operating normally prior to failure and secondly when the vehicle has
attained a new steady state following failure recovery. The trajectory analysis must include not only the segment
leading up to failure reaction and the resulting output excursion but also upon the trajectory followed during
recovery. It does not follow that a safe terminal state at the instant of failure will also be a state from which a
safe recovery can subsequently be effected.

The variability or slowness of human response is frequently the deciding factor in determining the use
of automatic reversion, although examples of inability to realize acceptable intermediary states are not unknown.
The same considerations appear in digital systems where the monitor is time-shared. The period between checks
must be short enough to enable output excursions to be contained.

Ideally, all the failure modes of the monitor and executive should be fail-safe and trigger the same effect as a
failure in the monitored system itself. This cannot always be ensured but the probability of residual failures is
generally low. The worst type of monitor failure is one which evokes no response and is undetected. There will be
a definable probability of such a latent failure occurring which depends upon the time at risk. Even highly
improbable latent failures will occur within a sufficiently long time-span.

This type of latent failure is protected by periodic checks on the monitor/executive functions so that the time-
span is reduced and the probability of latent failure is made acceptably small. Similar periodic checks are also
required on the normal performance parameters in order to verify that the assumptions on performance tolerances
used in the FTA are not violated. Significantly, and for the same basic reason, the aircrew performance is also
checked at regular intervals.
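
The effect of the check period can be quantified with the usual constant-failure-rate model: if latent failures arise
at rate λ, the probability of being in a failed state grows as 1 − e^(−λT) over a check interval T. The sketch
below (illustrative figures only) finds the longest interval keeping this below a target.

    import math

    lam = 1e-5       # latent failures per hour (illustrative)
    target = 1e-4    # acceptable probability of an undetected latent failure

    def p_latent(T_hours):
        # Probability of being in a latent failed state after T hours.
        return 1.0 - math.exp(-lam * T_hours)

    T = 1.0
    while p_latent(2 * T) <= target:
        T *= 2
    print("check interval:", T, "h, latent probability:", p_latent(T))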

Provision for easy checking and the determination of safe check periods are important features of system
design.

Failure surviving systems need the monitor and executive functions of a fail-soft system plus a capability of
reconfiguration. The new configuration may be a degraded version of the original system or may be of identical
performance.

Reversion to degraded operation can be achieved by a crew member switching to a simpler stand-by control.
The reconfiguration is manual and the redundancy dissimilar. It is sometimes possible to reconfigure by switching
out a part of the functional repertoire of control and retaining the rest of the system in full operation. The
procedure is almost invariably automatic and the executive function has to be enlarged to permit control of
reconfiguration. It is also important to consider the control of trajectory during the whole of either reversionary
sequence to ensure that the degradation occurs "gracefully".

Undegraded failure survival is also known as a fail-operational response. Its use implies that the total system
includes at least two identical channels or control lanes. The lanes may be connected to the plant in either of two
ways, sometimes called active redundancy and passive redundancy. In an active redundant configuration the identical
lanes are connected simultaneously to the plant; in passive redundancy they are connected singly, unused lane(s)
running as standbys.

The two forms of redundancy share many similarities but also possess significant differences.

Both configurations involve switching operations during reversion. In active redundancy a failed lane is
switched out; in passive redundancy a "good" lane must also be switched in. The one can usually be achieved
more reliably than the other. Switching-in for instance can involve two or three operations which must be performed,
not only correctly, but in correct sequence. Switching out might be effected by the correct performance of any
one of them, regardless of sequence.

Both configurations require monitors. Active redundancy can be monitored very reliably by comparing lane
outputs. However, if both lanes exhibit the same abnormal deviation at the same time, the failure will not be
detected. Such a common-mode failure is unlikely to arise from simultaneous and identical random failures in
each lane. Its source usually lies in some disturbance which affects each in the same way, interference for example,
or to some common design deficiency shared by both lanes. Software failures in digital systems can be a potent
source of common-mode failures if the same program paths are in use in each lane.

Similar common-mode phenomena exist in passive redundant systems - but with different effects. The
problem here is not the non-detection of a genuine failure but the false detection of a non-existent failure. If this
occurs through a software fault common to both lanes, and if the conditions provoking the failure are present
throughout reversion, then the standby lane will disengage in the same way and for the same reason as the first.

Whereas active redundant systems are usually monitored by cross-comparing nominally identical lanes, passive
monitoring must be done by reference to an absolute standard or to a process model. It is therefore necessary to
know the absolute limits within which a correctly operating system will lie. If the limits are set too wide, then
some failures will not be detected; if the limits are set too close, false failure detection will lead to nuisance dis-
connects. Under the right conditions the nuisance disconnect will recur in the reversionary lane(s) and the whole
system will shut down.

A similar threshold limit occurs in cross-comparison of active lanes. The lanes are never identical, particularly
in their dynamic response. The threshold has to be made narrow enough to prevent the permitted output excursion
being exceeded when a failure occurs. It may then be narrower than the worst case dynamic tracking error between
the compared lanes. Nuisance disconnects will now occur but without causing the whole system to shut down.
Their frequency is amenable to a degree of control through design standards and manufacturing controls. Nuisance
disconnect rate for passive systems can only be controlled through improved definition of the absolute behavior
of system and plant, taking account of operating point and external environment statistics.

The discussion so far has not been concerned with the number of failures which a system may be required to
withstand. Any of the procedures may be used for a first failure. The system will then be reconfigured. Its new
configuration determines its response to a second failure when it may be closed down or subjected to further
reconfiguration and so on.

Rather than start from the initial configuration, the system structure must be built up from the terminal
condition permitted by the FTA. This may define a redundant system in itself. It may further be found that it is
necessary to survive more than one failure. Fail-operational lanes must then be added until the specified probability
of encountering the terminal condition is met. At each configurational escalation it is necessary to reconsider the
terminal probabilities. The probability of terminal failure will have been changed and monitor requirements may be
modified. It is often found that the significant failure sites cluster in one area of the system and that, elsewhere,
the monitoring can be relaxed.

The system will possess a range of different failure effects as well as different failure sites producing the same
effect. Different methods of protection may be applicable to some, but not others. It may be sufficient to allow
degradation in some modes while others must retain full operational capability following a first failure.

There are many possible combinatorial configurations of the individual structures as indicated in Figure 8.10.
The selection of the set which best meets a given requirement is complex. It must take into account:
— Human performance
— Plant performance
— Vehicle characteristics
— Equipment installation and accessibility
— Monitor check frequencies (including digital sampling)
— Control performance tolerances
— Reconfiguration procedures
— Software structure
— Sensor/actuator performance
— Environmental effects
— Detection thresholds
— Manufacturing and design standards.

The range of choice is often restricted by some overriding requirement. It will not remove the need to consider
the impact of each of these factors on the final design of a system.

[Figure 8.10 charts reconfiguration paths through successive failures. A four-active-lane system reverts by
automatic reconfiguration through combinations such as three active lanes plus a model, one active lane plus two
standbys, two active lanes plus a model or one standby, down to two active lanes or one active lane plus a model.
Automatic or manual detection then leads to automatic, manual or no reversion, giving graceful degradation, a
single active lane with manual recovery, a dissimilar primitive limit system, or an uncontained stop; the end states
are a soft failure or a hard failure with the system disengaged.]

Fig.8.10 Reconfigurations through successive failures for redundant system architectures

8.7 MONITORING IN DIGITAL COMPUTER SYSTEMS

The need for a monitor and executive function in almost all systems structures was explained in the previous
section. It has also been shown that program repetition rate may be critical where monitoring functions are time-
shared with other computer tasks. There are some other general aspects of monitoring digital computer systems
which we will consider in this section.

A correctly functioning computer can be used to monitor a system. Such a monitor can be both comprehensive
and complex but its validity is restricted to the situation where the computer functions correctly. An incorrectly
functioning computer cannot be used to monitor any system component, least of all itself. The key issue in a digital
system is therefore detecting computer malfunction. Fault tolerance is not sufficient in itself. Latent failures must
be exposed for rectification.

There are two possible types of malfunction to consider, software failures and hardware failures. There are
also two fundamental monitoring methods, direct comparison and indirect checks. Each has its advantages and
drawbacks.

The indirect method allows a single computer to operate in isolation. It implies that every possible fault must
be considered, its effects analyzed and some means devised to detect these effects. Alternatively, and more realistically,
the possible effects can be identified regardless of cause. This approach divorces the problems of detecting the
presence of a fault and identifying its site.

In attempting to group effects, they can first be divided into those which produce an effect at the system
outputs and those which do not, i.e., are latent.

Two forms of latent fault occur. In the first form, the fault lies in an unused mode or program segment but
its presence will be manifested at the output when the particular mode or segment is called into use. Mission
analyses will determine the potential periods of latency in different modes and periodic checks have to be introduced
if the resulting failure probabilities are greater than the permitted levels. The effects are not usually troublesome
because they appear as a variability in the probability of loss of the computer on any one flight. Double, indepen-
dent failures are highly improbable. They can only be significant if they occur in areas having the same period of
latency and called into use simultaneously.

The second form of latent failure occurs in either the monitor or executive function in such a way as to
prevent correct response in the presence of a second fault. All of these failures must be exposed by periodic
checks. The ability to carry out such checks is an important feature of the detailed design of the monitoring and
executive functions.

The outputs of an on-line digital system are up-dated at regular intervals. They can therefore be described
completely by the time at which the up-date occurs and by the magnitude of the up-date. A large class of failures
will dislocate the program timing causing the time at which the output occurs to be shifted. The remaining failures
will occur at the right time but be of incorrect magnitude.

Failures causing timing errors (dislocation failures) are usually detected by an external timer. This timer will
check that the computer executes a known program in the correct time and that it responds correctly to the next
real-time interrupt which synchronizes the control sampling rate. The arrangement fails safe. A failure is declared
if either the timer or the computer malfunctions.
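
A minimal model of such a dislocation check is sketched below: the iteration must complete inside a time window,
and both early and late completion declare a failure, so a malfunction of either timer or computer fails safe. The
window figures are hypothetical.

    # Hypothetical watchdog window, in milliseconds.
    WINDOW_MIN, WINDOW_MAX = 18.0, 22.0   # expected iteration time 20 ms

    def watchdog_check(elapsed_ms):
        # Declare a failure unless the iteration completed inside the window.
        ok = WINDOW_MIN <= elapsed_ms <= WINDOW_MAX
        return "ok" if ok else "failure declared"

    for t in (20.1, 12.0, 35.0):
        print(t, watchdog_check(t))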

Errors in magnitude can arise in one of several general ways:


- Errors in control (wrong mode selected)
- Errors in arithmetic
- Errors in memory (corruption of data).

The first two sources of error are usually sought by using a self-check program which runs the computer through
a short sequence of instructions exercising both control and arithmetic to arrive at a predefined arithmetic output.
The output is validated by independent external hardware.

The self-check has two deficiencies. It is time-shared and cannot detect failures which arise outside the time
when it is running. It uses a very restricted section of program memory although it can be arranged to exercise the
data store.

The program memory contains important control constants which must be checked explicitly. It is therefore
necessary to add an external check-sum arrangement which validates key store areas continuously.
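
The arrangement can be illustrated as follows: a reference sum over the protected store area is computed once and
revalidated continuously (the external hardware is modelled here as a function); corruption of any control constant
changes the sum. The word size and store contents are hypothetical.

    WORD_MASK = 0xFFFF    # 16-bit store words (illustrative)

    def checksum(words):
        # Simple modulo-2^16 sum over a protected store area.
        s = 0
        for w in words:
            s = (s + w) & WORD_MASK
        return s

    store = [0x1A2B, 0x0042, 0x7F00, 0x0C0C]   # key control constants
    reference = checksum(store)
    store[2] ^= 0x0010                          # corruption of one constant
    print(checksum(store) == reference)         # False: failure detected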

The arrangement described conducts a failure test on each and every program iteration. Two further refinements
are necessary. Single, isolated output errors are unlikely to produce significant effects after filtering by the plant
transfer function. Error carry-over through multiple-order holds in the control transfer function also decays
rapidly in most systems. It is therefore necessary to distinguish between this type of fault and one which persists.
A simple "fault-duration" discriminant is not sufficient to detect intermittent faults. The executive response is
normally inhibited unless the number of failures declared in any given time period exceeds a threshold value.
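
The discriminant can be sketched as a count-within-window rule: isolated declarations expire harmlessly, but once
the number of failures declared within a given period reaches a threshold the executive responds. Parameters are
illustrative only.

    from collections import deque

    WINDOW = 1.0        # s, observation period (illustrative)
    THRESHOLD = 3       # declarations within WINDOW that trigger the executive

    class FaultDiscriminant:
        def __init__(self):
            self.times = deque()

        def declare(self, t):
            # Record a failure declaration at time t; drop expired ones.
            self.times.append(t)
            while self.times and t - self.times[0] > WINDOW:
                self.times.popleft()
            return len(self.times) >= THRESHOLD   # True -> executive acts

    d = FaultDiscriminant()
    print([d.declare(t) for t in (0.1, 0.5, 2.0, 2.1, 2.2)])
    # [False, False, False, False, True]: only the clustered faults persist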

These procedures do not provide a 100% check of the computer. They will not detect context-dependent
faults nor will they detect control or arithmetic faults which occur regularly at times outside the run period of the
self-check program. Some power supply faults could give rise to this type of failure if sampling rate and supply
frequency are harmonically or almost harmonically related.

The frequency of these residual failures has to be estimated and accepted or the use of this type of monitoring
relegated to a level of the reversionary hierarchy where it does not introduce an unacceptable risk.

Direct comparison methods can be used in active redundant systems. The outputs from two nominally identical
control lanes are compared and discrepancies outside a threshold used to declare the existence of a failure. The
method detects either timing or magnitude differences in the lane outputs and usually the threshold is wide enough
to prevent a failure declaration for a single isolated fault. Where the output is not a simple zero order hold, inter-
mittent fault discrimination can usually be arranged as well.

The primary failure detection is non-specific since it does not identify the failure site directly. This is the
function of the executive. It also suffers from the weakness, mentioned earlier, that software failures may be
undetectable unless suitable precautions are taken. This involves comparing results obtained using different program
structures and different store allocations - a feature which also reduces the incidence of common-mode context
dependent failures. Difference of program stmcture introduces potential problems of the type covered earlier when
discussing integration of augmentor and engine control.

An important feature of system monitoring is that it should not confuse equipment failures and plant failures.
If the plant fails it is usually important that the controller continues to function rationally. It is worth noting in
this context that active redundancy sees plant aberrations as a common mode disturbance leaving an output
comparison monitor unaffected. The same type of behavior is much more difficult to achieve when an indirect
monitor is used.

The primary function of the executive is to define the site of the failure, at least to its location within one
or other of the active control lanes. The problems of fault isolation within the confines of one lane are similar in
both monitoring configurations but potentially more positive in active redundancy because (for single failures) the
indications of a greater part of the overall system can be available and relied upon.

Identification of a failed lane can be positive where there are three or more active lanes and multiple faults
are excluded. Various algorithms may be used of which the most common are:

(a) Majority vote - disengage the lane with the greatest divergence from the system average. Revert to
two active lanes.

(b) Median select - retain the lane with the output closest to the system average. Revert to a single active
lane.
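
The two algorithms can be sketched directly; the lane identities and output values below are hypothetical.

    def majority_vote(lanes):
        # Disengage the lane with the greatest divergence from the average.
        avg = sum(lanes.values()) / len(lanes)
        worst = max(lanes, key=lambda k: abs(lanes[k] - avg))
        return {k: v for k, v in lanes.items() if k != worst}

    def median_select(lanes):
        # Retain only the lane with the output closest to the average.
        avg = sum(lanes.values()) / len(lanes)
        best = min(lanes, key=lambda k: abs(lanes[k] - avg))
        return {best: lanes[best]}

    lanes = {"A": 0.51, "B": 0.50, "C": 0.64}
    print(majority_vote(lanes))    # C diverges most: revert to two lanes
    print(median_select(lanes))    # A is nearest the average: single lane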

Even when only two active lanes are connected, it is possible to devise algorithms which determine the most
probable lane to have failed. A reversion to that lane can then be made provided that an incorrect reversionary
selection permits manual recovery within the permitted output trajectory under failure conditions.

Complex logical algorithms may be required to determine the selected lane. In some system modes a simple
"select highest" or "select lowest" algorithm can be used.

The executives used with both types of monitor must effect the necessary lane disengagement/engagement
procedures and generate any status or warning displays. In both types of system the executive hardware will be
independent of the computer and may be redundant.

8.8 DATA ACQUISITION, COMMUNICATION AND PROCESSING

Up to this point we have only been concerned with features of power systems involved in control or in failure
protection. The vehicle is a component of a much larger operating system which places other demands on the
propulsion system equipment than on-line control.

It is necessary to monitor plant operation for other reasons than protection against sudden catastrophic
failure. It will always be necessary to overhaul the powerplants from time to time and the procedure is expensive.

For many years, powerplants were removed for overhaul at regular intervals. The period between overhaul was
set to keep the probability of critical failure at a sufficiently low level. It was short when a new powerplant entered
service and gradually increased as background experience was accumulated. Early failures occurred and their causes
were removed by development. Some of the failures occurring between overhauls were not associated with any
predictable cause or with causes which were subject to wide variability. The relative frequency of these incidents
increased as the overhaul life increased.

Overhaul lives for mature engines are now extremely long and there is a need to monitor the engine behavior
in order to predict premature failures before they occur. The interest is not only greater safety: the maintenance
costs of a large fleet can be appreciably reduced if an aircraft can be scheduled through a maintenance base and an
engine change before a failure actually occurs. By definition, the type of failure involved is one which gives early
warning well outside the duration of one flight.

Two other failure time-scales are recognized in the structure of the total monitoring operation. Some failures
will occur relatively slowly but the time between detecting the threat and experiencing the event is less than a
typical flight time. Others occur so rapidly that an immediate corrective response is demanded. The response must
be automatic if the failure develops within the combined attention-span/reaction-time limits of the crew. The control must
respond correctly to this type of failure as mentioned in an earlier section.

The slower type of failure can be handled manually and there are good reasons for leaving the crew to make
the decision to shut-down a powerplant. Monitoring for this class of failure therefore requires a display of critical
data to enable the progression of the failure to be observed. The requirement is not that the condition should be
recognized via the display, although this is effectively the procedure when individual instruments are used.

Much more sophisticated systems are possible using digital analysis and processing of measured data to detect
incipient failure, and with flexible CRT displays to alert the crew and provide selective displays of powerplant
status. The displays can be reconfigurable, either on demand, or under program control. They are both discriminatory
in selecting the data and, through processing and memory, able to present more meaningful displays than
instruments reading engine quantities directly.

All of the monitoring/display operations must be conducted in real time but the sampling rates and solution
rates can be much slower than those for control. The computation can be either inter-leaved or run in a background
mode.

Many of the control system inputs may be shared and be common to the monitoring system. However, there
may be a requirement for duplication of the sensors and of the display. The need arises when the plant spends
long periods at a fixed power setting. If the controller fails in a fail-soft mode early in the segment it is often
desirable to keep the plant operating until a change of condition is required. The fixed setting can be retained even
longer if sufficient thrust change can be obtained by over-modulating remaining powerplants. Abandoning a super-
sonic cruise segment is obviously undesirable. It can be avoided with a fail soft system provided that sufficient
independent displays are present to allow the crew to monitor a control-less powerplant adequately.

The monitor functions for the very short term failure and for the failure appearing within one flight time must
be real-time on-board systems entirely. The long-term failures require both an on-board feature and a link with a
much larger fleetwide data processing operation. The on-board feature provides a quick-look between flights. It is
particularly useful with damage accumulation failure mechanisms where the rate of accumulation can vary widely
from flight to flight.

A purely on-board system of this type has one grave disadvantage. The data on which the presentation is
based is destroyed and only the result preserved. A further problem arises when an engine is changed at overhaul.

Damage accumulation failure modes usually relate to a given component in an engine. At overhaul, an engine
is stripped and rebuilt. The rebuilt engine can have a different mix of components each with a different accumulated
damage figure. It is therefore necessary to track component histories through a fleet rather than the history of a
given aircraft. As an added complication, different versions of the same basic component may have different
damage accumulation rates, as a result of design modification or material changes.

The on-board system is an aircraft item. It would therefore have to be subjected to frequent up-dating to take
account of all of these variations at each engine change. Any interference with store contents is a potential cause
of degraded integrity. The data store would have to be non-volatile and electrically readable but non-alterable by
the propulsion system alone. Additionally, whatever arrangement was used would have to avoid any significant
degradation of the reliability of the propulsion system.

For all of these reasons, the on-board elements must be kept simple. The major data processing functions are
conducted off-line and preferably at a fixed location. The essential features are therefore an on-board arrangement
for recording data which can be quickly recovered after a flight, a data transmission network between operating
stations and the main data-processing centre and a large D.P. installation.

The precise form of the on-board configuration depends upon the data processing on the ground and particularly
upon the communications network. The system configuration for a world-wide airline operation is not necessarily the
same as that for a more localized operator. The first might require data compression prior to transmission in order
to keep operating costs and queueing delays within reasonable limits.

There are therefore several identifiable levels of monitoring operation:


— Data logging for batch input to a major data processor on the ground,
— Quick-look status checks for use during vehicle turn-around,
— Flexible generation and presentation of selected powerplant status during plant operation,
— Detection of slow approaches to failure and generation of crew alert,
— Detection of imminent failure and generation of immediate automatic response,
— Provision of monitoring continuity after a control failure which leaves the plant operating normally.

All of these, except the last, form an integral part of an automatic propulsion control and monitoring system.
They will require the use of input scanners with program controlled frame format, foreground/background computa-
tional operation, generation of data for a display sub-system and the preliminary processing and formatting of data
for compatibility with D.P. recording and line communication standards.
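
One minimal way of organizing such a foreground/background split is sketched below in C. The frame length, the timer mechanism and the routine names (scan_inputs, update_control, log_for_ground) are all invented for the illustration; a real executive would be considerably richer.

```c
/* Sketch of a foreground/background loop: the foreground runs the scanner
 * frame and control laws on a timer tick, the background formats data for
 * D.P. recording in whatever time is left. All names are hypothetical. */
#include <stdbool.h>

#define FRAME_SLOTS 8                 /* program-controlled scanner frame */

static void scan_inputs(int slot) { (void)slot; /* read this slot's sensors */ }
static void update_control(void)  { /* control laws, imminent-failure checks */ }
static void log_for_ground(void)  { /* format data for the ground D.P. centre */ }

static volatile bool frame_tick = false;  /* set by a periodic timer interrupt */

void timer_isr(void) { frame_tick = true; }

int main(void)
{
    int slot = 0;
    for (;;) {
        if (frame_tick) {             /* foreground: hard real-time work */
            frame_tick = false;
            scan_inputs(slot);
            update_control();
            slot = (slot + 1) % FRAME_SLOTS;
        } else {
            log_for_ground();         /* background: best-effort work */
        }
    }
}
```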

8.9 MAN-MACHINE INTERFACE

The man-machine interface in a propulsion system must, of necessity, be as simple as possible. Innovations must
be introduced slowly and with caution. The deceptively primitive interface of the traditional power lever and
instrument embraces subtleties of which we are only partially or vaguely aware.

The power lever (and the more recently introduced thrust vector lever) is the primary control channel
between the crew and the propulsion machinery. The lever position cannot be calibrated in a universally meaningful
way. Furthermore, it is designed to be easily pushed - and "ready to hand" ergonomics conflict with any "easy
to read" location.

Conventionally, some measure of power setting is displayed on an instrument and the reading adjusted by
moving the lever until the required setting is obtained. The power setting required is often set up on a "bug" on
the same instrument so that actual reading of the instrument is not required. It is sufficient to align the bug and
the indicator.

Different measures of power may be used in different flight regimes and in different aircraft. The measure
may be a direct engine parameter such as compressor speed, a gas temperature or an engine pressure ratio. Under
other circumstances the measure may be indirect and displayed on another instrument (such as airspeed or rate of
descent) but the same principle is used.

The basic requirements are therefore for the generation of an appropriate setting, a measure of the actual
setting and a manual input channel through which the actual power level may be modulated.

At different points in the mission the powers may be required to be vectored in some way (e.g., thrust
reverse during a landing roll). At others, the augmentor must be engaged and controlled. Frequently, the propulsion
system will be controlled automatically. Interchange between automatic and manual operation must be provided
simply and smoothly.

All of these requirements are met through a single lever. Thrust reverse is engaged by pulling a toggle which
is only operable at "idle" settings. The lever then controls the level of reverse thrust. Augmentor control is
operated by advancing the lever through a gate. The lever is included in the automatic control loop which drives
the lever mechanically. Its physical position always corresponds to the prevailing engine condition. For multiple
powerplants the levers are grouped and can be operated collectively by the palm of the hand or staggered to give
asymmetric thrust.
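
The lever logic just described can be made concrete in a few lines. The sketch below is purely illustrative (the travel thresholds and the representation of the toggle are invented); it shows only the mode selection, not the control laws behind it.

```c
/* Illustrative single-lever mode logic: reverse is selectable only at idle
 * via a toggle, and the augmentor engages when the lever passes a gate.
 * Thresholds are invented for the example. */
typedef enum { FORWARD, REVERSE, AUGMENTED } thrust_mode;

#define IDLE_MAX 0.05   /* lever travel, 0.0 (idle) .. 1.0 (full power) */
#define AUG_GATE 0.90   /* gate beyond which the augmentor is engaged   */

thrust_mode lever_mode(double lever, int reverse_toggle, thrust_mode current)
{
    if (current == REVERSE && reverse_toggle)
        return REVERSE;                 /* lever now modulates reverse thrust */
    if (reverse_toggle && lever <= IDLE_MAX)
        return REVERSE;                 /* toggle only operable at idle */
    return (lever > AUG_GATE) ? AUGMENTED : FORWARD;
}
```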

This simple arrangement is unlikely to be changed. The generation of the setting and the indication may
change. The present collection of engine instruments is cumbersome, heavy and wasteful of panel space. It could be
replaced in a smaller space by a CRT display on which the data is selected according to need and is probably more
directly related to the power requirement.

The introduction of such a display introduces two changes. The first is one of display format. Conventional
displays are needle and scale, plus digital readout for some parameters. The display, although used primarily for
power control, is used in a secondary role as a monitor. Angular divergence between needles appears to be more
readily assimilated than bar or thermometer displays. This and other ways in which perceptual correlations are
involved require careful exploration in establishing any change of format.

The individuality of existing displays has advantages. There are many fewer types of indicator than there are
indicators on the panel. Each one is relatively cheap. Spares can be distributed without too great a logistic invest-
ment problem because an adequate spares kit usually costs much less than the full instrument complement. This is
changed drastically when all the displays appear on a single instrument. Logistic costs will increase unless the single
instrument price is as low as the original spares set.

Again, with the independent indicator arrangement, there is a fair degree of redundancy in the displayed data.
An aircraft can be despatched with certain instruments non-operational. It is unlikely that a CRT display would be
regarded as an allowable deficiency even though the displays would certainly be duplicated. Every display failure
therefore becomes a delay and spares must be rapidly available at all stops. The man-machine interface is subject to
the same system constraints as all the other aircraft equipment, the same trade-offs of performance versus economics.

Digital methods could be used with advantage to generate the power demands or power limits and various
proposals for push-button or "dial-up" arrangements have been made. None of these has yet been used.

8.10 PRACTICAL REALIZATION

Up to this point we have been concerned with the definition of requirements of one sort or another. We have
considered the type of control and monitoring functions which may be required for propulsion. We have considered
the definition of functional groupings which will be involved. We have considered the process by which safety of
flight affects performance, system architecture and some of the fundamental control parameters and we have looked
at some of the ways in which the system might be structured.

A preliminary set of requirements for a particular system will include a specification of:
— Functions to be provided (with defined performance characteristics),
— Plant data,
— Failure characteristics (effects and frequency of effects),
— Interface definitions (mechanical, electrical, environmental, data, procedural).

It is unusual for the plant and its control to be designed and made by the same organization, yet they operate
as a whole. There are two approaches which can be used in the control specification. In the first, the control per-
formance is specified independently of the plant. In the second, the performance of the controlled plant is defined,
together with the nominal plant characteristics. The control designer is left free to select his control strategies
within this framework. The most significant difference between the two is probably that the first involves a
much narrower dissemination of plant performance data.

The preliminary specification will be refined by exploratory trade-offs. A given function can usually be
realized using different combinations of measured parameters. The cost, size, weight, accuracy, response and
technological risks of the alternative approaches must be assessed and a particular solution chosen. This process
builds up a detailed definition of sensor interfaces and the parametric relationships involved in the control laws.

Similar trade-offs are required in the system structure to define the best arrangement to meet the safety
requirements. These must also take account of interactions with performance requirements through the actuators
and plant characteristics. The trade-offs will involve the definition of computing and data characteristics for the
alternative architecture, the study of failure monitor and executive hardware, the selection of candidate computers
and input/output structures, the assessment of run time, store size and system reliabilities. Selected versions will
then be compared on estimated size, weight, cost, timescale and risk. The weighting of each factor will depend on
the particular goal of the overall vehicle or powerplant design.

At this stage the system outline is approaching final definition and the residual trade-offs become more detailed
within it.

The preliminary trade-offs will have used simulation procedures. Certainly, the plant and controller will have
been simulated to verify control law assumptions. Indeed, a simulation is often the only way of arriving at plant
characteristics at this stage of the work.

A controller is required very early in the powerplant development program, if not for the first runs of a new
engine. Engine development starts by rig testing of individual components - compressors, burners, turbines, etc.

Although the initial simulation may be purely theoretical, it can be refined as component test
data becomes available. However, rig data is not always representative of the component performance in an engine
and the simulations used may be in error until full scale engine runs have taken place. This applies to both dynamic
and steady-state characteristics.

Simulation of the controller itself may be general, in the sense that no attempt is made initially to represent the
characteristics of any particular control computer. Later trade-offs may require the use of an emulator to identify
realistic run-time and store size values for particular machines.

Two types of simulation are used. The first type does not run in real-time. The plant characteristics are
simulated accurately but time constants are scaled arbitrarily. This permits investigation of sample rates and the
use of emulations with no restrictions on the computer in which the work is actually conducted.

System hardware may be included in a simulation, for instance to permit proper representation of nonlinearities
in mechanical components. The simulation must then run in real time and a simpler, less accurate simulation is used.
Very often the investigations are concerned with restricted ranges of engine operation and simple transfer function
simulations can be used.
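
As an indication of what the second, simpler kind of simulation can look like, the toy C program below represents the engine by a single first-order lag and runs it in arbitrarily scaled time. Every number in it is invented; a real plant model would of course be far more elaborate.

```c
/* Toy scaled-time simulation: a first-order lag stands in for the engine,
 * and the time constant is divided by an arbitrary scale factor so the run
 * need not proceed in real time. All values are invented. */
#include <stdio.h>

int main(void)
{
    const double tau    = 1.5;    /* assumed engine time constant, seconds */
    const double scale  = 10.0;   /* arbitrary time-scaling factor         */
    const double dt     = 0.01;   /* integration step, scaled seconds      */
    const double demand = 1.0;    /* step demand                           */
    double n = 0.0;               /* normalized spool speed                */

    for (int k = 0; k * dt < 10.0; k++) {
        /* dn/dt = (demand - n) / tau, integrated in scaled time */
        n += (demand - n) / (tau / scale) * dt;
        if (k % 100 == 0)
            printf("t = %6.2f  n = %6.3f\n", k * dt, n);
    }
    return 0;
}
```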

These simulation procedures are used extensively and throughout the system development. Where the specifi-
cation is written in terms of the combined controller/plant performance a reference simulation may be used in
quality assurance testing of the system performance prior to release for delivery.

Proper use of emulation techniques allows the development of the hardware to proceed independently of
program development. Many of the essential software characteristics can be defined and programs debugged in
some detail before a real-time evaluation is possible.

Long development times are characteristic of aircraft powerplants. The lead-time from first run to entry into
service is normally much longer than the time between first flight for the aircraft and entry into service. This
reflects the greater uncertainty in the performance of the powerplant at the time it makes its first run. It arises partly
because of the component characteristic uncertainties mentioned earlier and partly because the complexity of the
aero-thermodynamics is much greater than for an aircraft.

An engine is usually subjected to considerable development modification prior to entry into service. It is often
changed after entry into service to realize performance stretch as well as to cure in-service problems.

These changes must be accepted and the system designed to take account of them. Major changes in hardware
are very unlikely to occur, or can be buffered, but the program will be subjected to frequent changes in service.
These features have considerable influence on software and on the choice of memory structure.

Software Considerations
The primary demands on software are:

(a) It should be efficient. The requirement for development change implies a reprogrammable store. Cost,
size, reliability and environmental tolerance can all demand a read-only, minimum-size store for service use.
Program translation between the development and production phase is an added cost since a thorough
validation process is required on the translated version.

(b) It should be reliable. The reliability of the program must obviously be of comparable level to that of the
hardware itself in order to achieve proper levels of safety.

(c) It should be flexible in permitting different problem solutions to be freely set up.

(d) It should be portable so that fixed, proven solutions from one system may be carried over into another at
minimum cost and risk.

(e) It should be easy to modify and maintain programs.

(f) It should be easy for engineering specialists to use. A systems team involves many disciplines.
Programming skill must not be a barrier in access to the computer.

This set of features has conflicting requirements. The need for efficiency drives towards machine code
programming which will certainly not be easy for control specialists to use. The need for reliability conflicts with
frequent modification. Any software approach must therefore be a compromise.

The best compromise for a system using a nominally fixed program appears to be an assembler and defined
set of macro/sub-routine functions operating as shown in Figure 8.11. A general set of functions comprised of lists
of individual coded macros has several advantages. It is efficient and generates predictable machine code. It reduces
the amount of store actually involved in program changes. It is simpler to understand and document. The machine
code sections are relatively small, easier to debug thoroughly and less likely to have hidden restrictions - thus
helping reliability. In addition, the macro set is extendable, new macros can be added if needed, or when available,
and machine code sections can be readily inserted into a program if desired.
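
The flavour of such a fixed macro/sub-routine set can be suggested in modern terms; the sketch below uses C preprocessor macros rather than any particular assembler, and the macro names and the sample control law are invented. Each macro expands to a small, predictable, separately validated code sequence, and an application program is then little more than a list of macro invocations.

```c
/* A fixed "macro set" rendered as C-style macros for illustration only. */
#define LIMIT(x, lo, hi)  ((x) < (lo) ? (lo) : (x) > (hi) ? (hi) : (x))
#define LAG(y, u, k)      ((y) += ((u) - (y)) * (k))  /* first-order lag   */
#define SELECT_LO(a, b)   ((a) < (b) ? (a) : (b))     /* lowest-wins logic */

/* A sample control law written purely as macro invocations. */
double fuel_flow_law(double demand, double limit_a, double limit_b,
                     double *lag_state)
{
    double d = LIMIT(demand, 0.0, 1.0);      /* clamp the input demand */
    LAG(*lag_state, d, 0.1);                 /* shape it through a lag */
    return SELECT_LO(*lag_state, SELECT_LO(limit_a, limit_b));
}
```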

Fig.8.11 Assembler operation (the source program for the target computer, the macro code equivalents, the coding
structure, the binary record and listing structure, and control statements are input to a general-purpose computer,
which outputs object code and a listing for the target computer)

The same macro set is used when a manufacturer uses the same computer in different systems applications. Often,
some of the applications program segments may also be transferred. There is therefore a good measure of portability
between applications. In addition, programs become portable between computers at the price of defining an
assembler language input coding scheme and recoding the macro set or desired sub-set. High portability is a very
valuable feature of any software system because established problem solutions usually outlive particular generations
of hardware.

With proper choice of macro/sub-routine sets, the program writing can be made simple and readily learnt.
Programs in some assembler languages are claimed to be readily generated from a conventional control schematic
and almost as easily transposed in the opposite direction.

It is not suggested that this method is suitable for all applications. For the particular problem of dedicated
operation it has certain advantages. There is, however, a general feature which it is desirable to incorporate, where
possible, into any development system.

We assume that there will be a change of store technology between reprogrammable development hardware and
service equipment. The development phase is relied upon to expose potential problems in service. An established
sub-routine package, parts of the operating system and service routines or possibly diagnostic programs can be
stored in a read-only section from the outset. This arrangement gives an economical way of gaining hardware
experience. At the same time, it reduces the size of reprogrammable store required.

Four types of store will probably be required in a service system. These are:
— Data store (volatile)
— Program store (ROM)
— BITE store (non-volatile RAM)
— "Modifiable Constants" store (alterable ROM)
The BITE store holds data to be used in post-failure fault diagnosis.
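
One possible way of expressing this four-store split, with invented sizes and a C struct standing in for a memory map, is sketched below; in practice the sizes and technologies would be fixed by trade-off at the hardware design phase.

```c
/* Hypothetical four-store layout; sizes are invented for illustration. */
#include <stdint.h>

#define DATA_WORDS  1024u    /* volatile working store                  */
#define BITE_WORDS   256u    /* non-volatile post-failure fault records */
#define CONST_WORDS  128u    /* "modifiable constants" (alterable ROM)  */

struct store_map {
    uint16_t        data[DATA_WORDS];        /* RAM: variables, scratch  */
    const uint16_t *program;                 /* ROM: fixed program store */
    uint16_t        bite[BITE_WORDS];        /* NV RAM: diagnosis data   */
    uint16_t        constants[CONST_WORDS];  /* per-engine trim values   */
};
```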

The "modifiable constants" store serves two purposes. Some of the control characteristic constants may need
to be changed as engines are up-rated or even to match individual powerplants. The same store, possibly combined
with designated areas of the ROM store, can be used for minor in-service changes in the program itself.

The particular technology used to provide these store capabilities will change with time and will be the subject
of trade-off at the hardware design phase. The essential requirements will be common to most dedicated on-line
systems.

8.11 CONCLUSION

This chapter has reviewed some of the system procedures, requirements and design features of the control and
monitoring of the propulsion system in aerospace vehicles.

It has been biased, in the examples given, towards aircraft powerplants and towards commercial operations.
The principles and procedures will not differ significantly in other types of application. Even the control laws
could show a strong family resemblance.

The material presented has followed the sequence of the preceding chapters and attempted to illustrate in
practical terms, some of the points raised in those chapters. To this purpose it has looked beyond the present
state of the art and described possible future extensions. Much has been, and is continuing to be done in this
general field. Systems of this type are now flying.

However, future extensions will be helped by simplifications in two specific areas. These are standardization
of digital interfaces and standardization, or at least restriction, of programming languages. The task of standardiza-
tion will not be easy. However, the benefits to the user are clear and there is ample historical evidence of the
value of standardization to industry as a whole.

Acknowledgements

The author wishes to thank his colleagues for help and constructive criticism and also the Institution of
Electrical Engineers for permission to use Figures 8.5 and 8.6 from the paper given at an I.E.E. Colloquium
on "Living with Unreliability in Computer Based Control Systems", Colloquium Digest 1972/74, I.E.E., London.

References and Further Reading

1. Sobey, A.J., Suggs, A.M., Control of Aircraft and Missile Power Plants, Wiley, 1963.

2. Young, P.H., Propulsion Controls in the Concorde, J. R. Ae. S., September 1966.

3. Shutler, A.G., Eccles, E.S., Digital Computer Control of Gas Turbine Engines, ASME Paper 70-GT-40, May 1970.

4. Grose, V.L., Status of Failure (Hazard) Mode and Effect Analysis, Fault Tree Analysis and Prediction,
   Apportionment and Assessment, Annals of Reliability and Maintainability, Vol.10, pp.415-422, ASME,
   New York, 1971.

5. Salt, T.L., Evaluation of Mission Severity in Cumulative Damage, Annals of Reliability and Maintainability,
   Vol.10, pp.104-113, ASME, New York, 1971.

6. Kockanski, K.B., Condition Monitoring, ASME Paper 69-GT-66, March 1969.

7. Taylor, H.N., Monitoring Data from Jet Engines, 10th AGARD Avionics Panel Symposium, Paris, 1965.

8. Eccles, E.S., New Philosophies in Automatic Power Unit Control, BALPA Symposium, London, November 1968.

9. Dennison, C., A Non-Linear Digital Simulation Method Applied to Gas Turbine Dynamics, 4th IFAC Congress,
   Warsaw, 1969.

10. Johnson, W.A., Weir, D.H., Pilots' Response to Stability Augmentation System Failures and Implications for
    Design, AIAA Paper 68-819, AIAA Guidance, Control and Flight Dynamics Conference, Pasadena, California, 1968.

CHAPTER 9

MAN-MACHINE INTERFACE

E.Keonjian

9.1 INTRODUCTION

As the complexity of aerospace systems grows, the requirement for augmenting, expanding and simplifying
crew control capabilities becomes more demanding. Thus the man-machine interface, which essentially is a problem
of exchanging data between the system and the human, has become more crucial for the operation of modern
aerospace systems.

To cope with this problem a new class of information-processing systems (aerospace computers, multi-
processors, multiplexers), control systems and displays have been developed, and the trend toward greater integration
is realized. As a consequence, the degree of pilot/operator involvement with the machine has increased in scope and
complexity. This chapter reviews briefly some basic elements of the man-machine interface optimization process
and its relations to the total avionics system design.

9.2 HUMAN CAPABILITIES AND LIMITATIONS OF THE CREW

The emphasis on automatic controls in modern aerospace systems has considerably altered the role of the
human operator in such systems. His task lies more in monitoring and decision making areas than in control.
Hence a systematic methodology is needed to test the adequacy of the human operator to perform the tasks,
procedures and required decisions, optimized with respect to the functional requirements imposed upon the man.
This requires a quantitative understanding of human capabilities in complex decision and control tasks. Considerable
data are available on the information gathering and processing aspects of human behavior1 which eliminates the
necessity to dwell on this subject in this brief chapter. From these results mathematical models of human decision
processes and adaptive behavior have been proposed for specific control situations.

9.3 ALLOCATION O F FUNCTIONS T O M A N A N D MACHINE

The process of determining those functions to be assigned to the human operator in avionic systems is termed
crew analysis2. The allocation of functions to the system or to the human operator is a process which is most
important to ultimate system effectiveness. The allocation must be established early in systems design, prior to
hardware constraints, and according to established principles of allocation. Subsequent modifications are inevitable,
but the process is designed to accommodate changes.

In multi-crew avionics systems, allocation of functions among crew positions is of extreme importance, parti-
cularly due to avionics integration and the inherent flexibility of computer-serviced displays.

The following are the main points that should be considered:


(a) crew workload,
(b) crew skill,
(c) communications among crew positions,
(d) hand-off of functions from one position to another,
(e) possible crew contribution to reliability through primary and secondary crew function allocations.

In a single-piloted advanced aerospace vehicle, the proliferation of controls and displays will demand a consider-
able level of occupation by the pilot. In the case of a high performance fighter, additional complications include
low-altitude flight and supersonic speeds requiring a highly discriminating and rapid target acquisition capability in
the diverse environments of geography and weather, both day and night.
A system designed to serve the complex needs of an advanced tactical fighter may be expected to be vulnerable
to equipment failure and battle damage, jeopardizing flight safety and/or mission success. The analytical approach
used to develop a cockpit concept for the next generation of tactical fighters has been described in the literature3.

The man/machine interface for vehicles and other process control, utilizing computers, multiprocessors, multi-
plexers, dedicated sub-system processors, sensors and effectors has also been described in literature4 including the
specific case of the space shuttle orbiter5.

9.4 ESTABLISHING REQUIREMENTS FOR INFORMATION, DISPLAY AND MANUAL AND AUTOMATIC CONTROLS

An integral part of the process of optimizing the man/computer interface is the detailed determination of the
information required by the man and categorization of his response. This information is utilized for the specifica-
tion of control and display requirements for the computer interface. The lack of due consideration of the information
required (and its format level) for the operator to make the transition from a monitoring stage to more active involve-
ment as a system effector is a very common pitfall in man-machine interface. Chapter 8 treats this subject at
length.

9.5 DESIGN OF THE MAN-MACHINE INTERFACE

Data regarding the exact content and format of the information required for transition from one mode to
another must be available to design engineers early in the process to influence preliminary design and design trade-
offs. The design should also reflect the human factors considerations to ensure that the design is not compromised
in that respect.

It is also necessary to separate design for normal operation from design for degraded and contingency modes of
operation. Normal operation design divides into the design of procedures, message language and format and, following
that, the design of hardware by means of which messages will be interchanged (a survey of the hardware means
available and desired). Design for cases of malfunction has to cater both for continuation of operation when the
system is unable to carry out all of its tasks, so that the human has to take over some of them or parts of them,
and also, if possible, for the provision of means whereby humans can diagnose and repair malfunction while
continuing with limited operation. Here it is much more difficult to foresee and work out procedures for all
eventualities, and it must be assumed that an attempt to do so will not be fully successful; hence the importance
of providing means for interchange of elementary message building blocks and extensive information on malfunction
so as to enable the human to build
up what he may require from the basic elements. The preliminary design of the man-machine interface is impacted
through the trade-off study process. The design will undergo change as a function of
(a) any change in the allocation of function to the computer, system, or to the crew,
(b) any change in apportionment of function to a specific crew position, or
(c) any change required to permit the human to perform a function that has been demonstrated to be
deficient in meeting one or more functional requirements. The interactive nature of the method permits
modifications to occur, provided the outputs are phased properly in time2.

Of course, it may appear hard to ask someone who has just finished spending a great deal of time and effort
on designing something to wrench himself away from what he has just done and to consider alternative approaches.
It must, however, be pointed out that in producing a design the designer has learned a great deal about the problem
and the means of solving it, and is very much wiser than he was at the outset, so that he has reached a stage when
he can have an overall view of the whole problem and is in a position of seeing and evaluating alternatives. If there
appears to be a more attractive alternative, this has to be worked out in comparable detail and this may have to be
done more than once. Once it is decided which alternative to adopt, an internal optimization within this alternative
has to be carried out considering the various possible trade-offs from the point of view of the various criteria
applicable to that particular system. These will normally include reliability, availability, integrity, cost, cost effective-
ness, weight, and space. The requirement for flexibility and growth potential must be considered in man/computer
design. This means the anticipation of future requirements and the modification of existing functional requirements.
When choosing a display and control interface for the digital computer through trade-off
study, the growth capability of the software and display hardware must be
considered. The capability of the human to use additional information or to assume additional functions must be
evaluated as well. Finally, simulation and operational tests should be performed to check:

(1) whether the design achieves the objectives, i.e., whether the system will perform the tasks which it is
designed to do by the combination of tasks of the system and human;
(2) whether the system is capable of presenting all the information which may be required with a satisfactory
response time;

(3) whether the information is presented in a form which will enable the human to digest and use it for
decision making, and then communicate that decision to the system within available time limits.
If the design is viable, the next step is to see how well it meets other criteria, e.g., cost, reliability etc.

9.6 EQUIPMENT FOR MAN-MACHINE INTERFACE

The first and simplest devices for enabling the operator to communicate with the system were switches and
push buttons. Signals from the system to the operator were given (and still are) by lighting up lamps or sounding an
audible alarm. An extension of the push button is the keyboard. Keyboards used with real time systems do not
normally have a typewriter layout; the keys are usually arranged in alphabetic or numeric order. An example of an
alphanumeric keyboard is shown in Figure 9.1.

A further stage of development came with the introduction of functional keys, in which case a single key
transmits a complete message. This in turn led to the so called programmed function keyboard. With this device
the message which any one key transmits to the system is changed by the system itself as required; since at any one
stage in the operation of the system certain messages may be relevant while others will not be required. With this
type of device some means for indicating the particular function which is assigned to a given key at any one time is
required. An example of a programmed function keyboard used in an air traffic control system is shown in
Figure 9.2. A far more flexible programmed function keyboard can be achieved with CRTs. One type of CRT
based programmed function keyboard is known as a touch-wire. It consists of a CRT display with some 16 to 64
wire ends fixed on the implosion screen. The function appropriate to the particular wire end at a given point in
time is displayed above that particular wire end. In order to communicate one of the available functions to the
system, the operator simply touches the appropriate wire; hence the name touch-wire.

An alternative implementation is the digitatron. This has 8 light-beam sources along, say, the right hand edge
of the CRT with 8 photocells opposite them along the left hand edge of the CRT. Similarly there are 8 light-beam
sources along, say, the upper edge of the CRT with 8 photocells opposite them. The user's finger at any one of
the 64 beam interaction points will interrupt 2 of the 16 light beams and inform the system that the user has
chosen one of 64 possible functions displayed to him. An example of the possible sequences of sets of functions
displayed on a touch wire in the case of an air traffic control system is given in Figures 9.3 to 9.7.
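
The decoding required of the digitatron is trivial, which is part of its appeal. The sketch below assumes the beam states arrive as two 8-bit masks (bit set = beam interrupted) and a single finger; the names and data representation are invented.

```c
/* Digitatron decode: one interrupted horizontal beam and one interrupted
 * vertical beam together select one of 64 displayed functions. */
int digitatron_function(unsigned char row_beams, unsigned char col_beams)
{
    int row = -1, col = -1;
    for (int i = 0; i < 8; i++) {
        if (row_beams & (1u << i)) row = i;
        if (col_beams & (1u << i)) col = i;
    }
    if (row < 0 || col < 0)
        return -1;               /* no finger present */
    return row * 8 + col;        /* index of the chosen function */
}
```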

In this particular system only twelve wires were used superimposed on the bottom end of a graphic display as
illustrated in Figure 9.3. CRT displays are nowadays widely used not only for the display of messages made up of
characters, but also for the presentation of graphic information. One method used to reduce the load on the system
when dynamic graphic information is to be superimposed on static information (or on information which varies
comparatively infrequently) is the rear port projection tube. These are CRTs which have on the rear of the tube, next
to the neck, a built-in slide projector. With such a tube the static information is projected onto the face of the
tube optically. A typical use of such a tube in an ATC system would be to project optically the relevant map of
the airways and navigational aids while the symbols representing the position of the aircraft would be generated
electrically by the system.

Color is now also being introduced into CRT displays for real time systems. One way of doing this is to use
a television type triple-gun tube. There is also another type for use in real time systems known as the penetron.
This is a single-gun tube with a double layer of phosphor, the outer layer being green while the inner one is red.
The greater the speed of the electrons hitting the screen the further they penetrate towards the green layer. Thus by
varying the accelerating potential a range of colors between green and red can be obtained.

When graphic displays are used, means are also usually required to enable the operator to specify to the system
any desired point on the display. The three main methods used for doing this are the light-pen, the joy-stick and
the rolling-ball. The first of these is the most widely used in general, but it is not normally used in avionics and
associated real-time systems. It operates by picking up the light-spot generated on the particular point of its CRT.
The joy-stick works on a different principle in that it is used to move a special, easily recognizable symbol on the
CRT. When this special symbol is in the position to be designated to the system, the operator pushes a so-called
entry button thus informing the system that this is the identified spot. There is a version of the joy-stick which
does not require moving the stick and where the spot is moved across the CRT just by pushing the stick in the
direction in which it is desired to move the spot. The rolling-ball operates on a similar principle but in this case the
operator's instrument is a billiard ball, largely embedded in the operator's working surface. The special spot on the
CRT is moved as the operator rolls the exposed part of the ball with his fingers.

No practical method is as yet available which enables the system to recognize spoken messages. Audio commu-
nication to the human has however been employed, by means of messages assembled by the system out of pre-
recorded words. This method is currently in experimental use in an air traffic control system generating messages
from the system to the pilots of controlled aircraft.

Fig.9.1 RBDE-5 alpha-numeric keyboard (alphabetic and numeric keys arranged in order, with ERROR, CLEAR,
ENTER, BACK SPACE and ALPHA shift keys)

Fig.9.2 Select category display (programmed function keys: SIM = Simulation, TRACK = Track, MODIF = Modify,
HOLD = Hold, HNDOF = Handoff, DPLAY = Display, CRDEV = CRD, DF/CC = DFG/CCC, WIRED = DFG WIRED)

Fig.9.3 Function control by means of touchwires

Fig.9.4 Action page including handover and CFL options (HVT = handover transmit, HVR = handover receive,
CFL = cleared flight level, RESET = step back to start of previous page)



Fig.9.5 Position page (N, S, E, W = quadrants for label positions relative to symbol, where N = above, S = below,
E = to right of, W = to left of; LEA = leader; RESET = step back to start of previous page; the sequence
terminates on touching N, S, E or W)

Fig.9.6 Code page, a four-step page (0-7 = numerals for use with S.S.R. codes; A = used in lieu of 3rd and 4th
figures of the S.S.R. code for selection of a non-discrete group; RESET = step back to start of previous page)



Fig.9.7 Controller start page, a single-step page (A, B, D = S.S.R. modes A, B and D; SUP = call up supervisor
start page; RANGE = input maximum display range required; TRL = trail dots; RESET = step back to start of
previous page)



CHAPTER 10

NOVEL DEVICES AND TECHNIQUES

E. Keonjian

10.1 INTRODUCTION

During the last decade, considerable advances have been made over the whole range of avionic devices and
techniques. These novel devices and techniques have been finding their way into avionics systems, making them
more effective in terms of reliability and operational capability, coupled with simplicity and lower cost of mainte-
nance and ownership. This process has been accelerated particularly by the rapid progress in microelectronics, with
its far reaching consequences especially for future avionic computer systems.

In this chapter we will review some advanced devices and technologies still in development, which, when matured,
could further improve the effectiveness of avionic computer systems.

10.2 LSI TECHNOLOGY

The concept of Large Scale Integration (LSI) offers new and exciting possibilities for avionic computers.
Coupled with automated intercommunication techniques, this concept permits not only unique circuit combinations,
but also lower hardware cost, increased reliability, and improved overall system performance.

Below are some LSI definitions which have been established in this field since the beginning of 1970.

The term LSI commonly refers to a technology which permits the integration of a large (conventionally, over
100 equivalent gates) number of electronic devices, such as diodes and transistors into one single functional
package such as a shift register, multiplexer, decoder, counter, etc., built normally on and within a single semi-
conductor chip or wafer1. Figure 10.1 illustrates such a circuit, the Intel Model 1402 Four 256-bit MOS Dynamic Shift
Registers.

In addition to characterizing LSI microcircuits by their complexity, they can also be characterized by the
technology or device structure (Bipolar versus MOS), and by the interconnection technique (discretionary wiring
versus fixed wiring approaches).

In bipolar devices the conduction takes place by the flow of both holes and electrons as in the ordinary p - n
junction transistor. As opposed to this, in MOS devices the conduction is due to a single type of carrier, either holes
or electrons. These devices are also called the field effect devices, or FET, because the modulation of carrier flow is
due to an electric field. When the field is across an oxide layer at the semiconductor surface, the device is called
the metal-oxide-semiconductor, or MOS. Figure 10.2 illustrates the cross section of bipolar and MOS structures.
There has been considerable discussion on the relative merits of these two basic types of LSI devices. Rather than
add to this "controversy", the reader is directed to the available corresponding literature, especially to AGARD's
Lecture Series No.40 (Ref. 2). In general, bipolar devices offer relatively high speed and greater "drive" capability.
The MOS devices on the other hand are less expensive in manufacturing, can be made in smaller structures and hence
a more dense package can be easily derived. In addition, it is possible to use MOS transistors as resistance elements
and a number of functions can be physically implemented in MOS forms using fewer circuit elements.

Complementary MOS (CMOS) - MOS LSI circuits can be made to incorporate both P- and N-channel devices
on a chip. These circuits dissipate power only during a change of state. The instant of change is the only time that
significant current flows and this current can be kept extremely low, on the order of a few microamperes. Along
with low power consumption, CMOS also offers greater speed than conventional MOS circuitry. CMOS also has
good noise immunity characteristics and a strong insensitivity to supply voltage variations. Two extra masking steps
however are required in CMOS fabrication: one step to add the N-channel transistors and another to electrically
isolate them from the P-channel devices. Contamination precautions must also be more elaborate with CMOS devices
because of the high sensitivity of N-channel transistors to contamination.
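
This switching-only dissipation is commonly summarized by the standard first-order expression below, in which C_L is the load capacitance of a node, V_DD the supply voltage and f the switching frequency; a node that does not switch dissipates essentially nothing beyond leakage.

```latex
P_{dyn} \approx C_L \, V_{DD}^{2} \, f
```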

Fig. 10.1 Intel model 1402 four 256-bit MOS dynamic shift registers

Fig.10.2 Cross section of bipolar and MOS structures (bipolar: first and second layer metal, insulation, thermal
oxide, P base, N epitaxial layer, buried layer, isolation regions, N substrate; MOS: drain, gate and source, oxide,
metal interconnect, N substrate)

Below are the definitions of two basic wiring techniques.

Discretionary Wiring — A technique which permits the selective interconnection of only good cells on the chip,
bypassing the defective units. The layout for such interconnection paths is generally computer programmed and
generated. Each cell is a complete basic circuit (a "building block") with pads for preliminary probing. This means
that there is no efficient usage of potentially good silicon material. On the other hand since the interconnection
patterns required connect the large probe pads only, it is not necessary to have high resolution masks. To simplify
routing of the interconnections, two additional levels of metallization are required. The discretionary wiring
approach is capable of producing a wide variety of functions within a short time at low design cost but relatively
higher manufacturing costs.

Fixed Wiring - (Or the "Chip Approach") - A technique whereby identical chips are used across the wafer,
each chip with complete identical interconnection patterns regardless of fault location. Since there are no probing
pads provided for each cell, a higher circuit density (and hence a more efficient utilization of silicon wafer) can be
achieved. Test pads are provided only for input-output access to the circuits: therefore the circuits cannot be readily
tested until fabrication has been completed. The fixed wiring approach requires greater design time and cost and
much higher resolution masks, but is capable of producing quantities of components relatively inexpensively.

Between discretionary and fixed wiring approaches, there are various compromises which tend to optimize the
manufacturing cost of circuits for a particular application. Typical examples are:

Micromatrix - (Per Fairchild Semiconductor Corporation)


Using standard cellular arrays, complete, except for metallization of interconnections — each array consists of
a predetermined matrix of component pattern cells which may be interconnected to form the required custom
circuits. In addition, each cell may be individually customized by cell interconnections to become one of a variety
of building blocks, such as AND-OR gates, flip-flops, etc.

Polycel — (Per Motorola Semiconductor Company)


Very similar to micromatrix, and geared more toward computer aided design - this is also called "Master Slice"
or "DRA" (Discretionary Routed Arrays, per Texas Instruments, Inc.).

Below are a few new promising techniques in LSI technology:

Nitride Passivation - The use of silicon nitride instead of silicon oxide as the gate/channel insulating layer
resulting in low threshold voltages. The protective properties of nitride passivation may make the hermetic chip a
reality.

Silicon Gate - MOS technology using highly doped silicon instead of aluminum for the gate electrode. In
fabrication, the number of masking steps are the same as in conventional MOS but in etching the oxide over the
source and drain, the polycrystalline silicon acts as a mask, preventing the gate oxide from being etched. This results
in a precisely-formed, self-aligned gate. Silicon gate technology allows low thresholds compatible with bipolar
devices, higher component density, higher speed, and the fabrication of MOS and bipolar devices on one chip.

Field Shield - Self aligning passivated MOS process allowing bipolar speed compatible with N-channel devices.
It also results in very low threshold and extremely high field inversion voltages.

Ion Implantation - A method of doping semiconductors using a high energy (90-300 kV) accelerator to drive
the dopants into the bulk silicon - using a focussing mechanism the accelerator selects by mass the dopant to be
used, emanating from an ion source such as boron trichloride or boron trifluoride. Ion implantation is a faster
process than diffusion and can be done at room temperature. It lowers device capacitances, allowing higher operating
speeds. In addition, ion implantation lowers device thresholds while delivering a high ratio of field oxide threshold
to device threshold.

The above mentioned and other new techniques have already introduced many innovations in the processing
of bipolar devices. However, there has been improvement also in MOS technology, such that the overall size reduction
remains in favor of MOS, while speed/power figure of merit remains in favor of bipolar.

What about the application of LSI devices in advanced avionic computers which we will call "the 4th
generation" of digital equipment? In such equipment, where LSI is used primarily for memory and logic function,
LSI offers not only a reduction of size and power consumption of the equipment, but it also offers a choice
between a single large central processing unit and a number of smaller special purpose units scattered about the air
or space craft. We have barely touched upon the great versatility of LSI devices. Their use for computer memories,
displays and other avionic applications will be discussed briefly, further in this chapter. However the maximum
benefit in avionic system design with LSI, will depend on the trade-offs of hardware versus software, remembering
that system software is a generally more complex entity than processor logic, and is usually not debugged until

long after the hardware is complete. Considerations such as reliability and maintainability will also enter into the
picture to make the final decision in favor of LSI more complex. Nevertheless, the era of LSI is here. It has been
used as a building block for many advanced avionic systems and its pace of acceptance will be accelerated for years
to come.

10.3 SEMICONDUCTOR AND OTHER TYPES OF MEMORIES

Semiconductor technology has an inherent advantage over many other types of technologies for computer
memory, that it lends itself rather easily to batch fabrication on single large chips.

Semiconductor storage elements have already superseded magnetic films in fast scratchpads. Figure 10.3
shows a photograph of a high density, fixed wiring, 256-bit, random access memory using MOS technology. The
central portion of the chip contains the memory array while the buffer, decoder and drivers are located around the
outside, which permits the utilization of a single 16-pin package.

A bit-organized, 16-bit silicon chip compatible with TTL logic is already commercially available. The chief
dimensions: 225 x 225 mils., power dissipation: 250 mW, access time (with a 30 pF load): 20 nsec. The chip has
a self contained driving, sensing and storage circuitry, which permits optimization of overall circuit design and
provides the system designer with a considerable flexibility. Figure 10.4 illustrates the Intel 1024-bit bipolar Read
Only Memory.

Low-power dissipation (0.1 mW per cell, steady-state) MOS memory circuits for aerospace applications have
been described in the literature3.

Using high performance drivers and RC networks to simulate an 8K bit array, the memory is estimated to
operate at 12 ns access, 35 ns read cycle, and 60 ns write cycle, with a system dissipation of 43.5 watts.

Beam lead transistors are especially suitable for extremely small size memories. A 30 x 38 mils structure was
achieved containing 16 cells, which corresponds to 95 bits per square inch density. The memory system constructed
of these chips, is a 64-word, 16-bit system which operates with a 100 ns read-write cycle time.

In general, large arrays of semiconductor cells, using present day semiconductor technology, require line
currents comparable to those in core. Anticipated advances in the lowering of MOS threshold voltages plus the use
of two layer metal indicate the possibility of power levels less than in core. As to the volatility of semiconductor
memory, use of a back-up storage alleviates this problem to a degree.

Semiconductor technology has an inherent functional advantage over magnetics in Content Addressable
Memories (CAM), namely that the former can be searched in a bit-parallel as well as a word-parallel mode, i.e., all
bits in each word can be interrogated simultaneously. Magnetic CAMs have an inherently poor signal-to-noise ratio
due to the considerable variability in analog sense signal from element to element and, as a consequence, tend to
be limited to bit serial operations. In addition, large semiconductor retrieval-type CAMs (in the several thousand
word and larger category) are more economical than similar magnetic CAMs.

In read-only memories, one of the significant approaches today lies in the use of permanent MOS techniques.
Fixed arrays of typically 1024 bits are now available, with about two microseconds access time. The economical
advantage stems from the fact that only one mask operation must be specified by the user. The likelihood of future
improvements in MOS speed is good, and for this reason, its wide use in micro-programming can be anticipated.

What are the more advanced concepts in semiconductor memories? The following is a partial listing of some
of the new developments.

(1) Two-terminal Transistor Memory Cell using breakdown. This is a transient charge storage memory cell
utilizing a two-terminal transistor structure and junction breakdown. (Bell Telephone Laboratories.)

(2) A High Performance N-Channel MOS-LSI using Depletion-Type Load Elements. A 2048-bit read-only
memory with 300 ns access time and 50 microwatt per bit power dissipation has been achieved by
Hitachi Company of Japan.

(3) A Switched Collector Impedance Memory - This is a 288-bit LSI in which integrated bipolar memory
cells exhibit 4 ns access time at 50-200 microwatt-per-bit cell standby dissipation. (Hitachi, Japan.)

(4) A trim memory employing both NPN and high-gain Unijunction Transistors. A three photo-mask flip-
flop memory cell showing a cycle time of 500 ns has been achieved. Each cell contains four devices in
an active area of 35 sq. mil and uses 40 microwatt-per-bit holding power. (Bell Telephone Laboratories.)

Fig. 10.3 Intel 1024-bit dynamic MOS RAM



Fig. 10.4 Intel 1024-bit bipolar ROM



(5) Small-Size, Low Power Bipolar Memory Cell has been developed by IBM, which allows a very high storage
density in random access read/write monolithic memories at an extremely low power dissipation.

(6) A large, static MOS/bipolar ROM with combinatorial addressing for keyboard encoding has been developed
by Honeywell and Texas Instruments, Inc. It is a simple monolithic LSI device with 5520-bits, bipolar
TTL compatible outputs.

(7) An integrated, fixed-address MOSFET Memory Cell with normally-off-type Schottky-barrier FETs, has been
developed by IBM. A 1 micron-channel length and contact separation provide high package density
(2.5 mil2/cell) and high speed. Supply voltage can be below 1 V.

(8) In the area of monolithic main memory, IBM has reported development of a 128-bit bipolar memory chip
with under 50 ns access delay at 0.5 nW per bit and with wide fabrication tolerances. Also the design,
process and characterization of a bipolar main memory, with a basic 512-bit module containing four chips
with decode and sense circuits, has been reported.

(9) A fully decoded 2048-bit electrically programmable MOS-ROM has been developed by INTEL Corp. The
memory element is a silicon gate chip that provides access times of 500 ns (dynamic mode) or 800 ns
(static mode).

(10) Some other interesting developments have been reported recently. Among these are:
(a) A latent image memory (by IBM) which is a random access read/write memory with a suppressed
read-only image. The ROM image is non-volatile, reappearing with each powering-up, with virtually
no effect on the original RAM capabilities.
(b) Charge-transfer electronics - tandem matrix semiconductor memory selection, using one Schottky
and two PN diodes per selected matrix rail. (Bell Telephone Laboratories.)
(c) A self-contained Magnetic Bubble-domain memory chip - consisting of NDRO shift register loops,
generators, input and output decoders — all implemented with double domain devices - and magneto-
resistive detectors, has been reported by IBM.
(d) A memory system based on a surface-charge transport structure, in which adjacent rows propagate in
opposite directions, combined with compact refresh-turn-around circuits to produce a shift register
memory system of high density and speed, has been reported by General Electric Co.
(e) A new planar distributed device based on a domain principle, able to perform many processing
functions such as analog multiplication, signal correction, coordinate transformation, and analog-to-
digital conversion was reported by Tektronix, Inc.
(f) Magnetic film memories, making use of a truly planar single film element with an open-flux structure.
The elements usually possess a relatively low ratio of disturb-to-full switch threshold, and also
require a rather low element density to avoid interactions between adjacent bits.
(g) An important extension of the above concept is the MATED-FILM Solid Stack element, in which the
word line plane is orthogonal to the bit line plane.
(h) A plated wire matrix is formed by means of an orthogonal arrangement of plated wires or bit lines
and an overlay of copper strap word lines. The bit is at the intersection of wire and strap. This
concept proved to be attractive for low power and mass memory applications, as well as main
memory.
(i) Advances have been reported in optical beam addressed schemes and a number of read/write schemes
using various magneto-optic media have been proposed (see References 4 and 5). The progress in
laser technology makes this scheme rather promising. The sonic deflector seems to be the more
practical deflection method at present. It has been estimated that using such a deflector, a 10^8 bit
semi-permanent memory is technically feasible. (See References 6 and 7.)
(j) Magnetic Bubble Memories. The basic storage medium is a cylindrical region of magnetic energy
called a domain or bubble. It is a whole new class of mass storage, which promises up to a 100-
fold improvement in access time over disks as well as asynchronous, multi-speed operation for greater
flexibility.
The principal development efforts have been aimed at devising the best methods for manipulating
these domains and in formulating magnetic materials that provide optimum performance. Several
suitable circuit techniques for manipulating domains have emerged, but device development has been
delayed by the search for suitable materials, which still remains the most difficult area.
The potential applications for bubble memories are many. They range from fast-access memories
(FAMs) to a repertory dialer for telephones.

The immediate and most readily feasible applications for bubbles will probably be in the replace-
ment of small disc memories. Bubble memories on the order of 1 to 10 million bits are very attractive
because they are economical. The table below, compiled by Dr P.Bailey of Monsanto, shows the
comparison of various types of memories8.

TABLE 1

                            Tape      Disc/Drum   Core      Si        Bubble

 Cost/bit                   10^-4     10^-1       1         1         10^-4
 Average access time (sec)  10        10^-2       10^-6     10^-7     10^-4
 Bits/in^2                  10^4      10^5        10^3      10^5      10^6
 Power consumption
   (Joules/bit)             10^-4     10^-4       10^-7     10^-9     10^-13
 Volatile                   No        No          No        Yes       No
 Logic*                     No        No          No        No        Yes
 Radiation resistance       Fair      Fair        Fair      Poor      Fair**

* Capability of combining logic and memory operations in the same device.

** Based on preliminary data. More tests are required.

10.4 LSI TESTING

Due to the inherent complexity of modern LSI devices, exhaustive testing of all functional and individual para-
meters may require several months to complete, even if the duration of each test is reduced to as little as 10 micro-
seconds or less. Therefore, a full family of a new generation of LSI testers (very complex machines) was born, and
algorithms have been developed to simplify the testing procedures of LSI circuits. Figure 10.5 shows the system
configuration of the Microdata MD-200 MOS tester.

Functional and parametric testing comprise the two general areas of LSI testing. LSI electrical testing is
a particularly problem-ridden area due to high chip complexities, limited internal chip access, a proliferation of
custom logic, mixtures of sequential and combination logic, the continual evolution of new technologies (e.g.,
CMOS, Field Shield, ion implantation, silicon gate, etc.) and a lack of uniformity throughout the industry in testing
of LSI devices.

10.5 FUNCTIONAL TESTING



Functional testing of LSI microcircuits comprises the generation of specific test patterns which, when applied
to the input terminals, will yield information indicating the presence or absence of faults in the device. These test
routines are generally classified as either fault diagnosis or fault detection routines. Fault diagnosis includes the
location and determination of the fault, while fault detection is, in general, the verification of the Boolean response
of the device. Assumptions usually made in functional testing include the following:
— Faults can only occur one at a time,
— Faults are static, not intermittent (i.e., stuck at 1, or stuck at 0),
— Logic is non-redundant.
Functional testing is also categorized according to the type of logic to be tested. The logic type is either combina-
tional or sequential. Combinational logic networks respond to each input pattern independently of the previous
input. Sequential logic networks respond according to their present state and the incoming input pattern. As a result,
sequential logic testing is more complicated than combinational testing. Sequential test routines, however, often
include a capability for testing combinational logic (Reference 4).
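
The single-fault, stuck-at assumptions above lend themselves to a small illustration. The sketch below (in C, using a toy network y = (a AND b) OR c and a hypothetical three-vector test set, neither taken from this chapter) injects one stuck-at fault at a time and reports whether any vector in the set distinguishes the faulty response from the fault-free Boolean response; this is fault detection, and noting which vector failed would be the beginning of fault diagnosis.

#include <stdio.h>

/* Evaluate the toy network y = (a AND b) OR c.  'fault' selects a single
 * stuck-at fault: 0 = fault-free, 1 = input a stuck-at-0,
 * 2 = input b stuck-at-1, 3 = output y stuck-at-0.                      */
static int eval(int a, int b, int c, int fault)
{
    if (fault == 1) a = 0;
    if (fault == 2) b = 1;
    int y = (a & b) | c;
    if (fault == 3) y = 0;
    return y;
}

int main(void)
{
    /* A small, hypothetical test set: one input pattern per row. */
    static const int test[3][3] = { {1,1,0}, {0,1,0}, {1,0,0} };

    for (int f = 1; f <= 3; f++) {               /* one fault at a time */
        int detected = 0;
        for (int t = 0; t < 3 && !detected; t++) {
            int a = test[t][0], b = test[t][1], c = test[t][2];
            detected = (eval(a, b, c, 0) != eval(a, b, c, f));
        }
        printf("fault %d: %s\n", f, detected ? "detected" : "missed");
    }
    return 0;
}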

It is well known that the application of all possible combinations of input patterns to a device for functional
testing is not at all practical, particularly with complex devices. Additionally, inasmuch as most LSI microcircuits
are usually pin-limited, it is often impractical, if not impossible, to provide sufficient external test points for
monitoring the performance of internal circuit elements. The single prohibitive factor in this exhaustive testing
concept is time. As a result, algorithms have been developed outlining more efficient test approaches that will
sufficiently exercise a device.

[Fig. 10.5 System configuration MD-200 (Microdata)]

In the area of LSI testing, this lends itself to more than one discipline of thought.
One testing philosophy dictates that all gates shall be exercised at least once, while another approach is to exercise
most of the gates several times. In the latter technique, a grading system is used to grade each test sequence
according to how many gates were exercised. Testing sequences can be combinations of all 1's followed by all 0's,
or alternate 1's and 0's (checkerboard array), or some other variation of binary elements.

In functional testing, it is desirable to have a test system that permits a variable allocation of input and output
pins and which also possesses the capability of changing all inputs at once. This allows testing flexibility from device
to device and permits testing of various I/O pin configurations with maximum binary exercising.

The length and configuration of data patterns depends on the type of device to be tested. In general, the various
LSI devices fall under the classifications of Random Access Memory (RAM), Read Only Memory (ROM), Shift
Registers, and Logic Arrays.

ROMs require a pattern depth of 2^N, as a minimum, where N is the number of address lines. RAMs
require relatively long, yet simple data patterns (e.g., write l's, write 0's, write checkerboard) and it may be desirable
to use self-generating pattern techniques to produce the large words required. The testing of Shift Registers requires
propagating a logic 1 through all existing logic 0 stages and vice versa. Random logic arrays require the generation of
special complex patterns.
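
By way of illustration, the simple RAM patterns mentioned above can be sketched as follows (in C, against an assumed 256-word software model standing in for the device under test; a real tester would of course drive the device pins rather than an array in store).

#include <stdio.h>
#include <stdint.h>

#define WORDS 256
static uint16_t ram[WORDS];             /* stand-in for the RAM under test */

/* Write the pattern into every location, then verify it by read-back. */
static int write_read_check(uint16_t (*pattern)(unsigned addr))
{
    for (unsigned a = 0; a < WORDS; a++) ram[a] = pattern(a);
    for (unsigned a = 0; a < WORDS; a++)
        if (ram[a] != pattern(a)) return 0;
    return 1;
}

static uint16_t all_ones(unsigned a)  { (void)a; return 0xFFFF; }
static uint16_t all_zeros(unsigned a) { (void)a; return 0x0000; }
static uint16_t checker(unsigned a)   { return (a & 1) ? 0xAAAA : 0x5555; }

int main(void)
{
    printf("write 1's    : %s\n", write_read_check(all_ones)  ? "pass" : "fail");
    printf("write 0's    : %s\n", write_read_check(all_zeros) ? "pass" : "fail");
    printf("checkerboard : %s\n", write_read_check(checker)   ? "pass" : "fail");
    return 0;
}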

10.6 PARAMETRIC TESTING - DC AND AC

Electrical parameter testing of LSI devices falls into two general categories: DC or static parameter measure-
ments, and AC or switching characteristic measurements. Parametric tests relate directly to process verification and
as a result are a mandatory part of LSI device testing. Test time for individual parameters is much longer than
functional test time. This is due to the setup time required for each parameter along with proper sequencing of
current and voltage measurements.

Some useful parametric information may be derived through exhaustive worst-case functional testing where
functional test patterns are applied to the device under test (DUT) for various worst-case input or supply conditions.
Verification of electrical parameters is inferred by the realization of a correct output sequence for all worst-case
situations. This "functional" parametric testing is faster, of course, since measurement time is eliminated.
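
The looping strategy of such worst-case testing might be sketched as follows (in C). The routines set_supply(), apply_pattern() and output_correct() are hypothetical tester hooks, stubbed here only so that the fragment is self-contained, and the 4.5 to 5.5 volt corners are assumed figures for a nominal 5 volt device.

#include <stdio.h>

/* Hypothetical tester hooks, stubbed so the sketch is self-contained. */
static double vdd;
static void set_supply(double volts) { vdd = volts; }
static void apply_pattern(int p)     { (void)p; }
static int  output_correct(int p)    { (void)p; return vdd >= 4.5 && vdd <= 5.5; }

/* Apply every functional pattern at each worst-case supply corner; a
 * correct output sequence throughout lets the DC parameters be inferred
 * without individual measurement.                                       */
static int run_worst_case(int n_patterns)
{
    static const double corner[3] = { 4.5, 5.0, 5.5 };
    for (int c = 0; c < 3; c++) {
        set_supply(corner[c]);
        for (int p = 0; p < n_patterns; p++) {
            apply_pattern(p);
            if (!output_correct(p)) return 0;
        }
    }
    return 1;
}

int main(void)
{
    printf("worst-case functional test: %s\n",
           run_worst_case(64) ? "pass" : "fail");
    return 0;
}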

The extent of AC testing should be related to the intended utilization of the device. Switching characteristics
of devices should be verified at as close to the designed operating speed as practicable. Characterization of circuit
parameters is, at best, a compromise of proper testing procedures. For very high-reliability applications, neither
bipolar nor MOS-LSI processing technology is controlled sufficiently to allow ambient temperature characterization
of DC parameters to preclude actual testing of the device under temperature or speed extremes.

10.7 OPTOELECTRONIC DEVICES

Much literature exists in the fast-growing field of opto-electronics. The purpose of this chapter is to give a
brief "account" of a few significant and promising developments which are finding their way into advanced avionic
computer systems.

Opto-electronic devices as practical operating components are less than a decade old. Yet their application to
computer systems ranges from optical computer-tape and card readers to optical computer keyboards, and from
simple panel indicator lamps to complex alphanumeric displays for radars, computers and aircraft instrumentation,
including devices with a built-in storage capability. Solid-state opto-electronic devices are of special interest to
computer designers in view of their relative simplicity, high reliability, and compactness.

Semiconductor light- (and IR-) emitting diodes are playing a significant part in the rapidly growing field of
opto-electronics. They are designed into equipment such as card readers, encoders, and night vision systems. A
GaAs IR-emitting diode is a p-n junction diode, which can emit photons with energy slightly less than that of the
semiconductor band-gap. With forward bias, light is generated at the junction and is emitted in all directions.
The devices can be operated in either continuous or pulsed mode. A typical application: card readout.

GaAs Laser Diode


The GaAs injection laser is basically a planar p-n junction in a single crystal of GaAs. At low values of forward
diode current, it functions as a conventional GaAs IR emitter. However, if the laser is pulsed beyond a threshold
current, lasing occurs and a marked increase in radiant power is produced. An extremely large literature has been
published on the fundamentals of lasers and GaAs IR emitting diodes, which eliminates the necessity of
further discussion of this subject in this chapter.

Photosensors
There has been a whole family of photosensors developed throughout the industry in recent years. The use
of photoconduction in junction silicon devices dominates the present technology. The devices include photodiodes,
phototransistors and multijunction photosensitive units, such as light activated SCRs (photoswitches). Silicon
FETs are the most sensitive devices because of their relatively high input impedances, which permit generation of
high control voltage from small photocurrents.

Optical Couplers
A host of new components called optically coupled isolators has appeared in advanced computer circuitry,
consisting of a combination of an IR light emitting diode and a photosensitive transistor. The great advantage of
such an optical coupling is an almost perfect isolation between the input and output terminals (typically 10^11 ohms).

An interesting application of opto-electronic devices can be found in solid state DPDT relays utilizing electro-
optical isolation. The circuit has two pairs of NPN transistors, one pair normally conductive so as to provide
openings in current paths to terminals connected to its collectors. Switching action is obtained by means of a
photon-coupling pair (photo transistor and light-emitting diode) connected through other transistors to the bases of
the PNP transistors (Reference 5).

Real time displays for airborne image forming sensors, laser beam recording and display, and many more
topics related to opto-electronics can be found in the AGARD Conference Proceedings No. 50, devoted entirely to
the subject of Opto-Electronic Processing Techniques.

What is the state-of-the-art of light-emitting diodes (LEDs)? Because of the wide variety of structures used for
LEDs, the external efficiency for identical material can vary greatly from diode to diode, depending often on the
ingenuity of the investigator in fabricating the device. Below is a summary table which compares the reported
efficiency and brightness of various LEDs (Reference 6).

TABLE 2

State-of-the-art Performance of p-n Junction LEDs

                Commercially                 Peak           Lum. Eff.    η_ext*       B/J†
Material        Available?    Color          Wavelength, Å  Lumens/Watt  Percent      fL/A-cm^-2  Reference

GaP:Zn,O        yes           red            6900           20 (a)       3-7 (b)      350 (c)     29
Al.3Ga.7As      no            red            6750           16           1.3          140         38
GaAs.6P.4       yes           red            6600           42           0.5          145         32
In.42Ga.58P     no            amber          6170           284          0.1          310 (d)     36
GaAs.5P.5       yes           amber          6100           342          0.013        -           35
GaAs.25P.75:N   no            amber          6100           342          0.04         40-100 (e)  33
SiC             yes           yellow         5900           515          0.003        10          44
In.4Ga.6P       no            yellow-green   5700           648          0.02         115         34
GaP:N           yes           green          5500           677          0.5-0.6 (b)  470 (f)     6

*  Except where noted, efficiencies are for diodes with plastic encapsulants.

†  Except where noted, B/J calculated from Equation (2) using the efficiency for the unencapsulated diode with
   (Aj/As) = 1. Diode efficiencies assumed to be 2.5 times less without encapsulation.

(a) Mean value for nonmonochromatic emission spectrum.
(b) Range between commercially practical and best laboratory results.
(c) Assumed 3% unencapsulated diode efficiency; (Aj/As) assumed to be one third to compensate for significant
    edge emission.
(d) B/J calculated from measured efficiency value of 5.9 x 10^-4 for unencapsulated diode.
(e) Typical values of B/J reported as 40 to 60 fL/A-cm^-2 in Reference 33. Value of 100 fL/A-cm^-2 calculated
    from Equation (1) using efficiency value found in Reference 33.
(f) Calculated for representative dc efficiency of 0.1 per cent for unencapsulated diode. (Aj/As) assumed to be
    one third.

From: IEEE Spectrum, May 1972, pp.28-38, "The Future of LEDs" by C.J.Nuese, H.Kressel, I.Ladany.

REFERENCES

1.  -                 AGARD Lecture Series No.40, Large Scale Integration in Micro-Electronics, July 1970.

2.  -                 AGARD Lecture Series on Air and Spaceborne Computers, 1968, AGARDograph No.127, pp.113-126.

3.  Keonjian, E.      Microelectronics in Perspective, Keynote Address, 1967 WESCON Symposium, San Francisco, California.

4.  Chang, J.T.       Magneto-Optic Variable Memory, J. Appl. Phys., p.1110, 1965.

5.  Mee, C.D.         A Proposed Beam Addressable Memory, IEEE Trans. Mag., p.72, 1967.
    Fan, G.J.

6.  Gordon, E.L.      A Review of Acousto-Optical Deflection and Modulation Devices, Appl. Opt. 5, p.1629, 1966.

7.  Smith, F.M.       Design Considerations for a Semi-Permanent Optical Memory, BSTJ 46, p.1267, 1967.
    Gallaher, L.E.

8.  -                 It's a Year for Bubble Memories; Prototype will Appear Shortly, Electronic Design 3, 1 February 1973.

9.  -                 Amorphous Metallic Films Promise Easy-to-Make Bubble Memories, Electronic Design 5, 1 March 1973.

CHAPTER 11

SPECIFYING THE REQUIREMENTS

A.L.Freedman

11.1 PRACTICAL DEFINITION OF SYSTEM

Many books have been written on the subject of System Engineering but no satisfactory definition of the
word system has apparently materialised so far. The fifth chapter of this book is in fact devoted to the elucidation
of the nature of real time systems and begins by pointing out the distinction between closed and open systems.
The point is made that the latter are really characterised by the purpose which they serve. It is in fact from the
consideration that an open system is there for a given purpose that one may derive a definition of the word system.
It will be noted that the word is used in a great many contexts, not necessarily technical ones. Thus, for instance,
we speak of a system of taxation or of an educational system. Considering this latter example one knows that the
educational system could be sub-divided into a system of primary education, secondary education and so on yet
we still talk about these components as systems even though they are only a part of a larger system. Note, however,
that this is only true as long as the part of the larger system is a complete means for achieving its purpose. Thus
for instance we would not refer to the set of all primary teachers as an educational system. Taking a very simple
technical example, note that one would not talk about a hammer as a system; one would, however, regard a hammer
and nails as a system for fixing together certain items. Hence one concludes that an open system is a complete tool
for the performance of a given activity.

The first section of Chapter 5 also brings out another basic principle of system engineering namely that open
systems are hierarchical. On the one hand a system may itself be made up of a number of sub-systems while on the
other hand the system as a whole is a sub-system of another, higher level system. Since an open system is there for the
purpose which it serves, its essential definition can only be expressed in the terms of this purpose and this purpose
is part of the next higher level in the hierarchy. Thus for instance a definition which embraces all the possible
varieties of tables is only possible in terms of the purpose for which tables are used, that is the functions which they
will perform for their users.

Sections two to five of Chapter 5 are concerned specifically with the derivation of the specification for the
software component of the system and with the general problems of the design and the implementation of this
component. As pointed out in Section four of Chapter five a specification of the software component must be
derived from the functional specification of the system as a whole and it is therefore necessary to investigate first
how the specification of the system as a whole can be arrived at. As will be seen shortly this process is of crucial
importance for the overall success of the whole undertaking.

11.2 DERIVING THE SPECIFICATION OF THE SYSTEM AS A WHOLE

11.2.1 A Procedure for the Derivation of the Specification

The history of real time systems is very much a tale of toil, sweat and disappointments. At best these real
time systems usually come into operation, after varying delays. At worst, they have to be dismantled and removed.
Sometimes, indeed, they are not even put together.

The usual excuse is to present these troubles as the inevitable penalty of pioneering. Admittedly, ten years
ago we did not employ computers to control aircraft, nor to control all the systems within an aircraft. However,
a study of the history of these troubled systems shows that this is merely a convenient excuse. In fact, the true
cause lies not in technology but in management thinking, or, to be more precise, the lack of such thinking. On
investigation it is found that most of the problems had already been built into the project well before any
engineering even started. In this section we shall therefore describe a four step procedure for deriving the specification
of the system as a whole. Such a specification makes it possible to decide firstly whether to go ahead with the
system, and secondly if the decision is taken to go ahead, to eliminate the main sources of the troubles which
bedevilled these projects in the past.

The procedure stems directly from the definition of a real time system as a tool to assist in performing a
given activity. The activity may be the control of interceptor aircraft or the control of the systems on board an
aircraft. Whatever the activity, however, one clearly cannot design a tool to assist in its performance unless one
is quite clear about what one is trying to do and how one intends to achieve one's purpose. Therefore, the first
step in the sequence is: —

11.2.1.1 Analysis of the Activity


The fact that a real time system is a tool for performing an activity means that it is essential to be quite clear
about the nature of the activity to be performed. To achieve this it is necessary to carry out a formal analysis of
this activity. The framework for such an analysis is in three parts: —

— The purpose of the activity;
— The available means;
— The constraints.

It may be thought that when an activity has been going for a long time all this will be known. This is not necessarily
so. To start with, there is the tendency for the activity to become an end in itself. This has been neatly illustrated,
with reference to that famous saying about the advantage of producing a better mousetrap, by pointing out that
actually the real purpose is not to make a better mousetrap but to kill mice. As regards means and constraints, it
is doubtful whether an attempt was ever made to encompass them all, on top of which they change with time.

Where the application is a new one, say the control system for a new air to ground weapon, a precise
definition of the essential purpose must be formulated. Outstanding design breakthroughs have sometimes been due
to a clear realisation of the essential purpose. Where applicable, quantification of the purpose is of the utmost
importance. Thus for instance, in the above case of an air to ground weapon, the degree of precision of the guidance
system which is required may have a decisive impact on the means available for implementation.

Having determined precisely the purpose, it then becomes necessary to consider the means available to achieve
it. Thus, for instance, in considering a system for the control of an air-to-ground weapon, it has to be borne in
mind that the purpose may be achieved either by utilising the pilot in the aircraft or it may be possible to utilise
ground based resources operating from a knowledge of the position and parameters of the flight of the aircraft
relative to the position of the target.

Whichever way one performs an activity there are always constraints on what may or may not be done. In the
case of the air-borne weapon with a control system operated by the pilot of the aircraft, there are a number of
constraints due to the fact that the system is air-borne in an aircraft and a further set of constraints due to the
fact that the human operator also has a number of other tasks to perform.

The analysis of the activity must be fully recorded as a formal report. This will help to ensure that it has
been thoroughly done since thoughts scribbled on backs of envelopes are usually only half-baked. This formal
report then serves as the input to the next step of our procedure.

11.2.1.2 Operational Requirement


On the basis of the analysis of the activity it becomes possible to define a tool or tools which will assist in
its performance. There may be a number of possible tools depending on which of various possible methods may
be employed as well as tools suitable for various phases or aspects of the activity. Each tool demands a separate
definition, which must again be a formal document. This document lists the capabilities required for the tool to
be useful as a tool. It is therefore usually known as the operational requirement.

In preparing the operational requirement it is not enough to list the functions which the tool has to be
capable of performing. Where the tool is intended to be used by a human operator, as is mostly the case, both the
operator and his tool will have functions to perform. The operator has two groups of functions to perform, one is
the group of functions which are complementary to those of the tool and which have to be performed by the
human operator at the same time in order to achieve the required results. The other group are the functions which
the operator has to perform in accepting information from the system and in order to control the operation of the
system. These are additional tasks for the operator, due to the introduction of the new system and unless these
are more than compensated for by the easing of his previous tasks, the use of a real time system may not be
worthwhile, and indeed if the total tasks demanded from the operators exceed their capabilities, the use of the
new system will not be possible. There is in fact a case on record of a real time system for the control of aircraft
which could not be put into use because the total tasks imposed on the operators exceeded their capabilities.

The problem of allocation of functions between man and machine is treated in Chapter 9. It lists the
main points that should be considered as: —

(a) crew work load
(b) new skills
(c) communication among new positions
(d) hand-off of functions from one position to another
(e) possible crew contribution to availability through primary and secondary crew functions allocations.

Methods for determining the optimum man-machine allocation of tasks are described in the references given
in that chapter, as are examples of how the problem was solved in some systems, such as the space shuttle orbiter.
The last section of Chapter nine surveys briefly some of the equipment for the implementation of the man-machine
interface.

Having determined the work load it now becomes possible to quantify the maximum load which may be imposed
on the operator by the need to control the system, or in other words, the limit of the load which may be imposed on
the operator by the man-machine interface. This limit will be a crucial part of the specification of the system.
Following this all the other functions of the tool have to be fully quantified. This being a real time system the first
and major consideration is that of response time. The limits on response time may have to be quoted on a statistical
basis, that is as maximum acceptable response times for various percentages of cases. With some systems there will
be an absolute overall limit which must not be exceeded under any circumstance and this raises problems of system
integrity which will be discussed in 11.2.1.3. Response times are not the only things which have to be quantified. A
limit has also to be computed for the maximum percentage of erroneous results which will be accepted. Again these
results may depend on the type of erroneous results. There may be instances where a certain percentage of corrupt
messages may be acceptable but with a different limit on instances of complete loss of a message. This latter limit
may possibly be nil.
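
A statistically quoted limit of this kind is easily checked mechanically. The sketch below (in C; the 2 second percentile limit, the 90% figure and the 30 second absolute limit are assumed purely for illustration) scans a set of measured response times and reports whether both the percentage limit and the absolute limit are met.

#include <stdio.h>

/* Return 1 if at least 'pct' of the n times are within pct_limit and
 * no time at all exceeds abs_limit.                                   */
static int meets_spec(const double *t, int n,
                      double pct_limit, double pct, double abs_limit)
{
    int within = 0;
    for (int i = 0; i < n; i++) {
        if (t[i] > abs_limit) return 0;     /* absolute limit violated */
        if (t[i] <= pct_limit) within++;
    }
    return (double)within / n >= pct;       /* percentage limit check  */
}

int main(void)
{
    /* Hypothetical measured response times in seconds. */
    double sample[10] = { 0.4, 1.1, 1.9, 2.6, 0.8, 1.2, 1.7, 0.9, 1.5, 1.0 };
    printf("%s\n", meets_spec(sample, 10, 2.0, 0.90, 30.0)
                   ? "meets specification" : "fails specification");
    return 0;
}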

Even the most effective tool will avail but naught unless it is used effectively. The operational requirement must
therefore include six exhaustive forecasts about the future use of the tool, namely:—

(a) How it will be used - this really boils down to the consideration as to how the system will be integrated
with the higher level system of which it forms a sub-system. Problems which have to be considered and
provided for are such points as the impact on the organisation, changes which may have to be introduced
in the organisation, physical arrangements for the system such as the provision of suitable space, the
provision of suitable operators and so on and so on.

(b) Preparations to be made for the use of the new tool — following directly from (a) this requires the planning
of such activities as the education of people in the organisation to accept the new system, training operators
for it, arranging for the provision of all the physical requirements, etc., etc.

(c) How it will be introduced into service — again following from (a) above; plans have to be laid for the
introduction of the system into service. It may be that the system is a tool for an on-going activity
which must not be interrupted while the system is being introduced into service. A way must, therefore,
be prepared of achieving this. Alternatively, it may be a tool for a new activity, in which case preparations
must be made for its integration with whatever it will have an impact on. Thus, for example, if an air
traffic control system is introduced into an area where no such activity was previously being performed,
arrangements have to be made for this new activity to be accepted by the pilots of the aircraft.

(d) How it will be tested for acceptance — when the system is delivered it becomes necessary to determine
within a comparatively short period, whether the system does, in fact, perform to the specification to
which it has been supplied. Since it is by no means easy in the case of more complex systems to test
all the functions of the system under all the conceivable circumstances which may arise, plans must be
carefully worked out well in advance to achieve the most comprehensive testing which is possible within
an acceptable period of time and which will still prove adequate to determine whether the new system
should be accepted as meeting the specification.

(e) How its performance will be monitored — many a system has caused utterly unexpected side effects. It
is, therefore, essential to monitor the actual performance of the system in use in order to determine how
it compares with the envisaged performance both as regards benefits and the expected costs. This again
is something which may not be easy to do unless the facilities for doing this have been provided in
advance both in the design of the system and in the plans for using it.

(f) How it will be maintained — on this point it is necessary to find out in advance what facilities it may be
practicable to provide for the maintenance of the system since the level of such facilities has a direct impact
on the way the system will have to be designed. The extreme case in this respect is that of space-borne
systems where the maintenance facilities are simply nil.

The success of the whole approach clearly depends on the thoroughness with which the work is done.
Indeed, unless the operational requirement is very carefully worked out, trouble will arise very quickly, for there
will be repeated modifications to the operational requirement. If these go on for long enough the project will
tend to go on forever. Such a project may sometimes get into the press and thus give some computer experts
an opportunity to publish articles on the technical reason for the long delay — each expert with his own pet
reason and all of them equally irrelevant to the real problems.

11.2.1.3 Procurement Specification


From the definition of the tool, the procurement specification can be prepared. It is vital that the procure-
ment specification should be both complete and purely functional: that is, it must specify completely all the
functions which the system should provide, but not lay down how the system will be designed internally to
achieve this. In a nutshell, a procurement specification must be all about what and nothing about how. In
practice, procurement specifications tend to go in the opposite direction. It happens like this:— an organisation
which is about to embark on a real time system feels that it would be safer to have some computer engineers
of its own. These engineers are then given the task of preparing the specification. To this end they will
extract a certain amount of information from managers and operators after which they proceed to do precisely
what engineers are supposed to do, that is they go off to work away on how the system may be engineered.
With any luck the result will be a procurement specification which starts out with a couple of paragraphs on
what the system is supposed to do and then goes on to consider at length how such a system may be engineered
— a most enjoyable exercise for the authors, the more so, as they know that it is not they who will have to
implement it.

The correct format for a Procurement Specification is in four parts.

(1) Functional Requirements.


For each function which the system is to perform, a precise definition of the function has to be given. Also
for each function, the response time has to be specified in the manner described in 11.2.1.2. Careful consideration
should be given to the question of whether there is an absolute limit on the response time under all
circumstances and if so whether this limit is of the order of a few tens of seconds or whether it exceeds
two minutes. The reason for this is that while it is possible at the present state of the art to guarantee a
response time as low as fifteen seconds for one or more functions under any circumstances, this is possible
only through the use of special equipment and special design techniques. Such equipment and techniques
are rather expensive so that there is likely to be a very significant jump in the cost if the absolute limit
of the response time, even if this applies to only one function, is less than approximately two minutes. In
addition to the response time it is also necessary to specify the freshness of the data. Consider, for example,
an air defence system which has a response time of thirty seconds. This will then present requested data
within that period. There will, however, be an additional requirement that data so presented must represent
the situation as it existed in the outside world no more than say sixty seconds prior to the presentation.
This is usually referred to as the freshness of the data. For each function one also has to specify the
resolution and accuracy required, on a statistical basis. Where a function has to be performed in conjunction
with existing interfaces, whether human or otherwise and where equipment to be interfaced with is either
existing equipment which is not part of the specification or else where the interface has to comply with
given restrictions as in the case of standard interfaces, the interface characteristics have to be adequately
specified. One of these characteristics of the interface is the load which may be imposed on it. As has
been seen, this is applicable whether the interface is human or otherwise.

(2) Available Inputs


As the system will have to produce the output information from the data which will be available to it, it
is clearly necessary to provide the would-be designers with full information on such data. The format for
this information is similar to that of the required outputs. For each data source it is necessary to define
the information provided by it, the format, average rate and peak rate, number of interface calls per second,
accuracy and resolution, availability and integrity and the full characteristics of the interface across which
it is provided. It may also be necessary to give the freshness of the data, that is the time interval between
the moment at which the data is offered at the interface and the moment at which it was a valid description
of the outside world.

(3) Overall System Constraint Requirements


The six forecasts (a) to (f) in 11.2.1.2 have to be carefully analysed to see what implications they have on the
system. Such implications may well include physical constraints such as maximum weight or size. From
the forecast of how the system will be used can be deduced the time at which the system will be required,
and also such requirements as facilities for training operators to be provided by the potential supplier in
advance of system delivery, and similarly, facilities for training maintenance operatives. From the three
forecasts it may also be possible to specify the provisions which the supplier will have to make for conducting
the acceptance tests and for monitoring the subsequent performance of the system. From forecast (f) it is
possible to specify the limitations on the maintenance facilities. An extreme case occurs in the case of space-
borne systems where maintenance facilities are nil.

There is yet another consideration which is well worth adding to the specification, namely margins on the
capacity of the system. All the quantitative aspects of the forecast use of the system are of necessity estimates
with margins of uncertainty in them. It is therefore a necessary precaution to specify the spare capacity, both
in load and storage, which the proposed system should have. 50% spare capacity is usually regarded as the
minimum at the specification stage and in many cases it may be desirable to go beyond this minimum.
Alternatively, it may be possible to specify a somewhat lower figure, say 30% to 40%, provided the capacity
can be increased at a later stage, when required. In this case a quotation should be requested for such an
extension and if this is necessary, proof that this may be done without interference with the operation of the
system.

(4) Performance Guarantees


From the cost benefit analysis described below in section 11.2.1.4 it is possible to estimate the damage which the
potential user will suffer if the system does not perform as specified and on this basis, specify the guarantees
demanded from the potential supplier that he will meet the specification both with regard to time scale and
performance. It may prove very difficult in practice to find a supplier who will be in a position to provide
adequate guarantees. The cost benefit analysis may also show that the damage can be reduced if advance
notice of pending delays or envisaged difficulties in meeting the performance specification becomes available.
This may make it possible to specify reduced penalties if advance warning of non-performance is given.

It will be seen that nowhere in the specification is there any mention of reliability. The reason for this is
that reliability is not a functional characteristic of the system and therefore, need not be specified. Reliability
affects such functional features as response time and integrity and also maintenance requirements. As these are
really the aspects which are of interest to the future user, it is these that are fully specified and how they are to be
achieved is left to the supplier.

11.2.1.4 Cost-Benefit Analysis


On the basis of the work done in steps 1 and 2 the potential user must now quantify the benefits expected
from the proposed system. This is by no means always an easy task. It must nevertheless be done, as without
it a rational decision is clearly impossible.

In due course, tenders will arrive in response to the procurement specification. These typically contain a great
deal of glossy material about how marvellous is the computer used in the system, how it has sixteen bits or whatnots
to each word and much else which is not of real interest to the potential customer. Somewhere in the proposal it
may even state how the system measures up against the requirements and what guarantees the supplier is prepared
to offer that the system will be delivered on time and perform as specified. From those proposals which provide
this information, the potential user then obtains the cost of acquiring the system. The tenders will also, if they
provide the information which they should, make it possible to prepare firm estimates of the cost of the six
activities (a) to (f) forecast in the operational requirements. It is the cost of these six activities together with the
purchase cost which is the total cost of using the new tool, and the cheapest tender, incidentally, is the one giving
the lowest total cost.

Knowing now the cost benefits and the total cost of the system it becomes possible to determine its profitability.
The result of this may well be an iteration or iterations of steps 1 to 4 of the sequence in order to consider
alternative approaches and means of automation. These iterations may even lead to a conclusion that a real time
system would not be worthwhile. Only if a particular real time system does definitely emerge as worthwhile
should a decision to go ahead be taken.

Some organisations prefer to do their own system design and some projects have indeed been successfully done
in this way. A procedure like the one outlined above is nevertheless still essential.

It is clearly not an easy task to carry out thoroughly the investigations required in the four steps of our
procedure, the more so as this work is almost pure thinking which is hard labour indeed. Not surprisingly, there is
a marked reluctance to adopt this procedure. The excuse most often advanced is that the time scale of the project
does not allow for this work to be done. Yet, the people who advance this excuse often know full well that any
aspect which is skipped and does not come right by sheer good luck will take very much longer to rectify at a
later stage and will cost many times more to do so.

11.3 SYSTEM DESIGN

11.3.1 Overall System Design

We move on now to the problem of designing the optimum system to meet the specification. Chapters 5 to 9
are directly concerned with this task. The first phase is that of establishing the broad outline of the design. The
approach to this is basically the same as that adopted for the problem of deriving the specification, namely to
consider the means available for achieving the purpose while observing the constraints. The purpose is to meet the
specification, which is now clearly defined. One part of the design does in fact, follow directly from this specif-
ication. These are the output interfaces. It is known from the specification, or it is possible to deduce directly
from it, what information will have to be provided. The constraints on the interfaces over which this information
will have to come are also given. A survey must, therefore, be carried out of all relevant available output techniques
and equipments to determine the best interfaces to use.

The next step is to determine firstly whether the required information can in fact be obtained from the
available input information. Assuming that this is so the next step is to choose the interfaces which may be capable
of capturing the input data within the constraints given in the specification. This is done in the manner similar to
that used on the output interfaces. Having made a first choice of the input and output interfaces it now becomes
necessary to produce a first estimate of the total processing task and of the storage which may be required to carry
it out. For this first iteration of estimating the total task it suffices to divide it into two main components, input/
output load and processing load. With regard to the first of these a table has to be compiled which gives the
following three parameters for each input and output: average rate, peak load and response time. That of Chapter
5 is more detailed than is required at this stage of the design work. The benchmark method mentioned in
Chapter 6 is quite adequate and the easiest thing to do is to assume any computer of which experience exists
within the design organisation so as to produce this first estimate quickly. From the loading figures and from a
consideration of the integrity requirements given in the specification it may be possible to determine whether the
system can be a straightforward single processor design or whether a more sophisticated system is required. If the
integrity requirements are such that the maximum break acceptable, even for only one of the functions, is of the
order of a few tens of seconds then a very special system design will have to be adopted. The several approaches
which are available to achieve this are surveyed briefly in Chapter 6. All of these are based on the use of redundant
equipment, automatic fault detection and automatic procedures to overcome the effects of the detected fault, except
that the majority voting method provides fault detection and recovery from the fault combined. The approaches
which involve the use of redundant modules are those for which most experience is available so far. In these auto-
matic recovery systems the supervisor level software has to be specially developed for both the specific hardware
used and the particular application.
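
The majority voting method just mentioned can be illustrated by a 2-out-of-3 voter, sketched below in C for a single computed quantity: three redundant channels deliver their results, the voted output masks a single faulty channel, and any disagreement at the same time constitutes fault detection.

#include <stdio.h>

/* Vote on three redundant results; *fault is set on any disagreement. */
static int vote(int a, int b, int c, int *fault)
{
    *fault = !(a == b && b == c);
    if (a == b || a == c) return a;     /* a agrees with a majority     */
    return b;                           /* otherwise b and c must agree */
}

int main(void)
{
    int fault;
    int out = vote(7, 7, 9, &fault);    /* third channel assumed faulty */
    printf("voted output %d, fault detected: %s\n", out, fault ? "yes" : "no");
    return 0;
}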

Where the integrity requirements are of the order of a couple of minutes or so, dual processor systems with
manual intervention in the case of failure will suffice. It is then possible to manage without the use of specially
designed hardware. Also there is by now a fair amount of experience available for such systems in a variety of
applications.

Where the integrity requirements permit a break of the order of the time required to repair a computer i.e.
about a couple of hours, a single processor configuration may be adequate to provide the required level of integrity.
In applications where this level of integrity is adequate and where a single computer can provide the required
capacity a straightforward single computer system can therefore be used. In these cases the design can then be
completed using the design procedure described in Chapter 4. Situations sometimes arise when a single computer
capable of performing the required tasks on its own is available, but there may be objections to its use. For
instance, it may not meet the constraints regarding size or the environmental specification or it may be very much
more expensive than somewhat slower computers. In such a case, or where a powerful enough computer is not
available, alternative approaches must be considered. These are of three types:— firstly, there are designs which
employ various methods of reducing the computer load so as to enable the system to get by with a single computer;
secondly multi-computer designs; thirdly multi-processor designs. These will be discussed in the following three
sections.

11.3.2 Interfacing to the Computer

A fundamental concept in data acquisition by a real time computer system is that of survival time. Peripherals
like those cited in Chapter 3 deliver data at a certain rate. In most cases, such as those of drums or disks, this rate
is regular. In other cases such as that of the keyboard cited in that Chapter, the rate is irregular. In both cases,
however, the situation is that an item of input data is made available to the system for a certain period and then
superseded by a new item from the same source. The previous item may be lost in the process. The time for
which a particular item of data is available for acquisition by the system is known as the survival time of that item
of data. There is an analogous situation in data output. Once, for instance, a drum or a magnetic tape transport
signals a demand for a word, a limited period of time is available within which this word is to be delivered if it
is to be recorded in its proper position. It is thus seen that a requirement for a given response time applies not
only to requests made by human operators but also to demands made by output equipment. It is essential for
the successful operation of a real time system that input data offered to the system is not lost and that data
demanded is delivered within the appropriate response time. It is the task of the control of the input/output
operation to ensure that this is achieved.

Section 3.5 lists five methods, designated (a) to (e), of communication between the computer complex and
input/output devices. Methods (a) and (b) were quite common in the early days of computers but are no longer
used because of the computer time wastage which they caused. Method (c) is commonly known as polling, while
method (d) is often referred to as program interrupt. Both of these methods involve a time penalty, often referred
to as an overhead in that in both methods the computer has to switch from whatever problem it is doing at the
time to a special, so-called interrupt program which either accepts the input data or delivers the relevant output
data. This overhead is then repeated when the computer returns from the interrupt program, having completed the
input or output process. This overhead in real time computers is typically between 10 and 50 microseconds in
each direction. The difference between the polling and the program interrupt method is that in the case of the
latter the jump to the interrupt program occurs only when a device signals that it has data for or requires data
from the computer, whereas with the polling method, the switch to the interrupt program occurs at a signal given
by a real time clock and the computer then enquires from the input/output device whether it either needs data
from or has data for the computer. Clearly such enquiries have to be made at intervals smaller than the survival
time. The polling method could, therefore, be wasteful in that on a great number of enquiries it may be found
that there is no data available or no data is required. However, this waste may be more than compensated for in
the case where there is a large number of possibly similar input and output devices which can be interrogated once
the computer has switched to the interrupt program. Clearly, if at the expense of a single overhead on jumping
to the interrupt program the computer succeeds in servicing a number of input/output devices, this will be more
efficient than having to suffer an overhead in each individual device as in the case of the program interrupt
method. The choice between the polling method and the program interrupt method, therefore, depends simply on
which would be more efficient in a particular case based on the number of devices, and degree of similarity
between them and on the relative frequencies at which they require attention. In some systems both methods are
in fact used: the polling method for a group of similar devices, in particular data communication devices; while the
program interrupt method is used for other devices.
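
The polling method itself may be sketched as follows (in C; the device array and its status flags are a software stand-in for hardware status registers). The routine is entered once per real time clock interrupt, so that the single program-switching overhead is shared by the whole group of devices; the clock interval must, as noted above, be shorter than the smallest survival time among them.

#include <stdio.h>

#define NDEV 8

struct device {
    int has_data;                /* status flag: a word is on offer */
    int data;                    /* the word itself                 */
};

static struct device dev[NDEV];

/* Entered once per real time clock interrupt. */
static void poll_all(void)
{
    for (int i = 0; i < NDEV; i++) {
        if (dev[i].has_data) {                       /* the enquiry      */
            printf("device %d: captured %d\n", i, dev[i].data);
            dev[i].has_data = 0;                     /* consumed in time */
        }
    }
}

int main(void)
{
    dev[3].has_data = 1;                /* simulate one device offering   */
    dev[3].data = 42;                   /* a word between two clock ticks */
    poll_all();                         /* one overhead, NDEV enquiries   */
    return 0;
}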

The most efficient method of communicating between input/output devices and the computer complex is the
direct memory access (DMA) because in this case there is no overhead. At most, the data processing will be held
up because the computer is prevented from gaining access to its own store, while the input/output is taking place.
The computer will not be held up if it so happens that at that particular moment in time it does not require access
to the store which, for instance, may well be the case during the latter part of, say, a multiplication instruction.
There are also certain ways of designing a system in a manner which will enable direct memory access to go on in
parallel with processing as will be seen below.

The DMA method does, however, incur a hardware penalty in that it requires an additional unit sometimes
known as a DMA interface unit. This unit is also called various other names such as a selector channel. This
method of input/output was developed primarily for such peripheral devices as drums, disks, and tapes and hence,
as pointed out in Section 3.5, for a transfer of blocks containing a large number of words. The rate at which
these words arrive or have to be delivered is determined by the device, and if an incoming word, say, is not acquired
when it is presented and before the next word from the same source arrives, it may be lost. It is in order to ensure
the capture of data within the survival time that a DMA interface has not only the capability of access to the
store but also a modicum of arithmetical capability and a couple of store registers. The transfer of a block of data
is initiated by software which loads one of the registers in the DMA interface with the starting address in the store
of the block to be transferred, or the first address of the store zone which is allocated to a block
which is expected to come in. The other register is loaded with the last address in the store or, in other designs, with
the number of words in the block. The first word of the block on, say, input, is stored in the starting address and
the DMA interface increases this address by one for every word so that incoming words go into successive locations.
It also compares the number of words with the word count, or else compares each successive address with the last
one so as to determine when the transfer of the block has been completed. When the end of a block is detected a
program interrupt is generated. There are thus two overheads for each block: the setting up of the block transfer
and the interrupt generated on its completion.
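
The register behaviour just described can be modelled in a few lines. In the sketch below (in C; register names and sizes are illustrative only) software loads the starting address and the word count, the interface places each incoming word in successive store locations, and exhaustion of the count raises the end-of-block interrupt, reproducing the two overheads per block.

#include <stdio.h>
#include <stdint.h>

#define STORE_SIZE 1024
static uint16_t store[STORE_SIZE];       /* the computer's main store */

struct dma {
    unsigned addr;              /* next store address for the transfer */
    unsigned count;             /* words still to be transferred       */
    int      interrupt;         /* raised when the block is complete   */
};

/* Set up an input block transfer: done by software (first overhead). */
static void dma_start(struct dma *d, unsigned start, unsigned nwords)
{
    d->addr = start; d->count = nwords; d->interrupt = 0;
}

/* Called once per word as the device presents it (one stolen cycle). */
static void dma_word(struct dma *d, uint16_t word)
{
    if (d->count == 0) return;
    store[d->addr++] = word;                /* successive locations            */
    if (--d->count == 0) d->interrupt = 1;  /* end of block (second overhead)  */
}

int main(void)
{
    struct dma d;
    dma_start(&d, 100, 3);
    dma_word(&d, 7); dma_word(&d, 8); dma_word(&d, 9);
    printf("store[100..102] = %d %d %d, interrupt = %d\n",
           store[100], store[101], store[102], d.interrupt);
    return 0;
}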

In some modern real time systems, this method of input/output is also used for devices which transfer words
at a high but not necessarily fixed rate and not necessarily in blocks. In this case the DMA interface unit supplied
by the manufacturer may not be suitable and the system designer has to design a unit of his own.

One further concept of importance in interfacing input/output equipment to computers is that of buffering.
On the input side, these are devices which will accumulate data so as to minimise the interrupt loading on the
computer. As an example, consider a fast data transmission link which delivers a bit at a time. The buffer will, in
this case, accumulate 16 bits in the case of a 16 bit word machine and then interrupt directly to memory to insert
the word it has accumulated. For output, such a buffer will operate to break up a block of data into a stream of
single bits.
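
For a 16 bit word machine such a buffer reduces the interrupt rate sixteen-fold. The sketch below (in C) accumulates incoming bits in a shift register and signals only when a complete word is ready to be inserted into memory.

#include <stdio.h>
#include <stdint.h>

struct bit_buffer {
    uint16_t shift;              /* the word being assembled */
    int      nbits;              /* bits accumulated so far  */
};

/* Returns 1, with the word in *out, when 16 bits have been gathered. */
static int buffer_bit(struct bit_buffer *b, int bit, uint16_t *out)
{
    b->shift = (uint16_t)((b->shift << 1) | (bit & 1));
    if (++b->nbits == 16) {
        *out = b->shift;
        b->shift = 0;
        b->nbits = 0;
        return 1;                /* now interrupt, or DMA the word in */
    }
    return 0;
}

int main(void)
{
    struct bit_buffer b = { 0, 0 };
    uint16_t word;
    for (int i = 0; i < 16; i++)             /* 16 bits off the link */
        if (buffer_bit(&b, i & 1, &word))
            printf("word assembled: 0x%04X\n", word);    /* 0x5555 */
    return 0;
}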

With the extension of real time systems, much of the input data has to be captured at remote locations and
similarly output data may have to be supplied to remote locations, so that data transmission is playing an increasing
role in computer real-time systems. Many methods of data transmission have been developed, with speeds ranging
from 75 bits per second up to 2 million bits per second. By its very nature, data transmission requires international
standards, and there is now quite a large number of such standards for the various methods used. These are surveyed
in Section 3.6. All data transmission devices are prone to errors. The detection of these errors, leading to requests
for retransmission or the correction of errors by the use of error correcting codes may be accomplished by software
inside the computer or by hardware outside it. Because of the specialised nature of the operations which have to
be performed for error detection and correction, the latter method is probably more efficient, and this is also surveyed
briefly in Section 3.6.
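
As an indication of the kind of specialised operation involved, the sketch below (in C) evaluates a bit-serial CRC-16 check sequence; the polynomial x^16 + x^15 + x^2 + 1 is one common choice, assumed here. The same computation can be performed by a shift-register circuit outside the computer or by a software loop inside it, and the resulting check word is appended to the transmitted message and recomputed at the receiving end.

#include <stdio.h>
#include <stdint.h>

/* Shift one message bit into the CRC register (polynomial 0x8005,
 * i.e. x^16 + x^15 + x^2 + 1 with the x^16 term implicit).           */
static uint16_t crc16_bit(uint16_t crc, int bit)
{
    int feedback = ((crc >> 15) & 1) ^ (bit & 1);
    crc = (uint16_t)(crc << 1);
    if (feedback) crc ^= 0x8005;
    return crc;
}

int main(void)
{
    const uint8_t msg[3] = { 0x31, 0x32, 0x33 };     /* example message */
    uint16_t crc = 0;
    for (int i = 0; i < 3; i++)
        for (int b = 7; b >= 0; b--)                 /* MSB first       */
            crc = crc16_bit(crc, (msg[i] >> b) & 1);
    printf("check word = 0x%04X\n", crc);    /* appended for transmission */
    return 0;
}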

11.3.3 Design of the Data Processing Task

11.3.3.1 General
Having chosen the input/output equipment one now has to proceed to the design of the computer complex and
of the tasks which have to be performed by it. The three basic tasks which the computer complex has to perform
are:—
(a) input/output
(b) data processing
(c) maintenance of a data base.

A presentation of these tasks, similar to that of the Phillips diagrams in Chapter 5, is shown in Figure 11.1. Another
presentation of these tasks is that of Figure 7.1 of Chapter 7.

For the design of the computer complex and its tasks, it is necessary as a first step to determine the magnitude
of the task. Section 4 of Chapter 3 describes a method of estimating the processing load and this method is
summarised visually in Figure 7.2. However, it is necessary to estimate also the input and output tasks which
will have to be performed by the computer complex. The relationship between the input/output task and the data
processing task is illustrated in Figure 11.2 which is also a presentation of the four levels of operation of a modern
computer. The level of operation with the highest priority is that of the direct memory access described in the
preceding Section. The reason for this is that the computer is so designed that this takes precedence over anything
else the computer may be doing. The next level of priority is that of program interrupt, again described in the
preceding section. The input and output by the polling method is also done at this level since the polling is
normally achieved by entering an interrupt program as a result of an interruption from a real time clock. Actual
data processing is done at the third level, while a further, lowest priority level is used for self-diagnostic programs
when there is nothing of a higher level to be done.

In a real time computer of straightforward architecture like that of most mini computers, the bulk of the
operation at the direct memory access level and all of the work at the program interrupt level comes out of the
total time available and it is, therefore, necessary to estimate the total load represented by all three tasks. It is
also necessary to estimate the amount of memory which will be required to contain both the data bank and the
program required for the tasks.

11.3.3.2 Design of Simple Systems.


A very thorough method of preparing these estimates is described in Section 4 of Chapter 7 and is summarised
in Figure 7.1. The first step is to break down the three tasks into successively more detailed levels. The
Phillips diagrams for presenting processes described in Chapter 5 are very suitable for this purpose, their big advantage
being that the same method of presentation applies to all the successive levels. Having achieved a detailed enough
level it then becomes possible to actually estimate the numbers and types of instructions which will be required to
execute the various detailed processes. To do so in a machine independent manner, Chapter 7 introduces a new
high level language based on the so-called Reverse Polish Notation. It is pointed out that other high level languages
could also be used for this purpose. In practice estimates are often prepared without actually going through the
process of translating the model into sequences of elementary operations. If programmers are available with
previous experience of programming similar tasks, they are very often in a position to give a first estimate of both
the number and the type of elementary operations which will be required to carry out these tasks. Such estimates
will inevitably be based on experience gained with a particular computer and are thus not machine independent.
This problem can be overcome by means of the bench mark method described in Chapter 5, as will be explained
later.

Let us continue, however, with the review of the method presented in Section 3.4 of Chapter 7. Once the
tasks to be performed have been detailed to the level of elementary operations which effectively means that the tasks
have been programmed in detail (or, to use the expression of Chapter 7, the model has been translated into sections
of elementary operations), it becomes possible to determine the following load parameters.
[Fig. 11.1 Computer complex tasks]

[Fig. 11.2 The four levels of operation: direct memory access, program interrupt, actual processing, self-diagnostic]

(1) Distribution of Operations


This gives the relative frequencies of the various types of elementary operations and makes it possible to choose
the computer with the most suitable order code.

(2) The times to perform the various tasks are given in parametric form, the parameters being the store cycle
time and the central processor's operation time for elementary operations. The total time for the tasks include
the overheads due to program interrupt and the store cycle times required for DMA operations.

(3) The amount of storage required for data


This is given as a histogram by the length of the item of data as shown in Figure 7.4.6.1 and makes it possible
to choose a computer with the optimum word length for the data involved in the task.

(4) Amount of memory storage required for programming


This is the number of instructions plus a 50% allowance for the fact that some instructions may occupy more than
one word length. This 50% allowance is appropriate for machines with a short word length, e.g. 16 bits. In
the case of machines with a word length of 32 bits, this allowance is not really necessary since virtually all
instructions do fit into the single word.

These 4 load parameters are clearly required to make the right choice of the most suitable computer. Since,
however, we are dealing with a real time system there is a further requirement, namely, that the response time should
meet the specification and this means that one has got to go through the individual chains of operations required to
produce each response and see that the time required to generate the response is within the specification as illustrated
in Section 3.4.8 of Chapter 7.
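
The parametric form of such a check can be shown in a few lines (in C below; the instruction counts, the store cycle and operation times and the specified limit are all assumed figures). The point is simply that the response time of a chain is the sum of its elementary operation times, its extra store cycles and its interrupt overheads, compared against the specified limit.

#include <stdio.h>

int main(void)
{
    double t_cycle    = 1.0e-6;   /* store cycle time, seconds (assumed)      */
    double t_op       = 2.5e-6;   /* mean elementary operation time (assumed) */

    long   n_ops      = 40000;    /* elementary operations in the chain       */
    long   n_cycles   = 15000;    /* extra store cycles (operands, DMA)       */
    double t_overhead = 0.004;    /* interrupt overheads along the chain      */

    double response = n_ops * t_op + n_cycles * t_cycle + t_overhead;
    double limit    = 0.5;        /* specified response time, seconds         */

    printf("estimated response %.3f s: %s\n", response,
           response <= limit ? "within specification" : "too slow");
    return 0;
}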

It should, however, be realised that the method described in Section 3.4 contains a number of major simplifi-
cations. For instance, the overheads given in Section 3.4.7 assume that only two registers have to be saved and
restored. In fact, the number of working registers varies from 2 to 16 and if registers are automatically saved and
restored, all of them, rather than just two, are operated on. In machines with a large number of working registers,
the saving and restoring of registers has to be done by software and takes much longer than one store cycle
per register in each direction. Also, when computing the response time, allowance has to be made for the time
which will be taken out by interrupts both at the DMA and the interrupt level for input and output operations
for other tasks. There may also be interaction with other tasks on the actual processing level. It may also be noted
that while the histogram of the length of the various items of data should make it possible to choose a machine
with the optimum word length, the choice of word length in computers which are actually available is rather limited
as most of them have word lengths of either 16 or 32 bits, with only a few computers with word lengths of 8, 12
or 24 bits.

There is also the problem of priority levels for interrupts and interrupt programs. The priority levels for
interrupts are decided on the basis of the survival time of the data. Since the purpose of input/output operations
is to ensure that all data presented to the system is captured and all data delivered within the survival time, priority
levels have to be so arranged that all the actions required to, say, capture a given item of data are performed
within the survival time of the data at lower levels. Otherwise, input data may be lost. This has led to the concept
of differing priority levels for the interrupt programs which capture data. Some computers do, in fact, now provide
a facility for maintaining a record of the priority level of a given interrupt program. With this facility, an interrupt
program will not in turn be interrupted by another interrupt of the same or of a lower level. There is a case on
record of a system which was programmed on two computers, one with this facility and one without
it. In this particular case the load on the computer without the facility turned out to be 20% higher than on the
computer with it. This was probably an extreme case (there were 1100 program interrupts per second), but it
nevertheless shows that it is well worth checking the various facilities for dealing with program interrupts
provided on the computers under consideration.
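The facility just described, by which an interrupt program cannot itself be interrupted by a request of the same
or a lower level, amounts to a test against the level currently being serviced. The sketch below illustrates the
idea in software terms only; in real machines this is a hardware function, and all names here are invented.

    #include <stdio.h>

    static int current_level = -1;   /* -1: base (non-interrupt) level */

    /* Dispatch an interrupt request of the given priority level.
       Requests at or below the level now being serviced are held off
       (here simply refused; hardware would latch them as pending).   */
    static void interrupt_request(int level, void (*handler)(void))
    {
        if (level <= current_level) {
            printf("level %d held pending (servicing level %d)\n",
                   level, current_level);
            return;
        }
        int saved = current_level;   /* save and later restore the old level */
        current_level = level;
        handler();
        current_level = saved;
    }

    static void capture_data(void) { printf("  capturing input datum\n"); }

    int main(void)
    {
        interrupt_request(3, capture_data);  /* accepted: 3 > -1            */
        current_level = 5;                   /* pretend level 5 is running  */
        interrupt_request(4, capture_data);  /* held off: 4 <= 5            */
        return 0;
    }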

11.3.3.3 Design of More Complex Systems


Chapter 7 provided a useful example of a simple system and an explanation of how such systems are designed.
Chapter 5 deals with the design problems of more complex systems. It starts off by pointing out that the design
process may necessitate compromises in the operational requirements. As has been seen in Section 2 of this summary,
what is possible may indeed lead to modifications in the specification. However, it is essential that all such
modifications are made before design begins. We, therefore, assume now that the specification has been settled
and consider the problems of arriving at the optimum configuration of hardware and software components which
will meet the specification.

There are a number of considerations which are of special importance in the design of avionics systems. One
of these is system integrity, since avionics systems often replace a number of separate devices for doing different
tasks and the safety of mission and crew often depends on the continued availability and integrity of the system.

Standardisation is another major consideration in avionics systems, partly for the same reason as it is required in
data communications. The business of flying is an international one and in these days of alliances like NATO this
is true not only of civil aviation.

In discussing the various possible system configurations Chapter 6 employs the PMS Notation proposed by
Bell and Newell. Potentially, such a notation can be very useful because in the computer field there is the problem
not just of technical jargon but of a babel of jargons, the various experts each having their own. Whether the
potential of the PMS Notation will be realised in practice depends on the extent of its acceptance. However, the
very fact of the multiplicity of jargons does not augur well for the chances of its acceptance. It is not only the
language which could do with standardisation but also the interfaces between the various units which make up an
avionics system. A certain amount of standardisation has resulted from the work of such bodies as ARINC but
this has not so far been extensive. Such standardisation would be of great help because of the way avionics
systems are maintained: there is in this field a great deal of first line maintenance, by which is meant the
replacement of a faulty unit by another one. These replaceable units are known as line replaceable units.

The starting point for the design process is an exhaustive presentation of all the tasks which the system has
to perform. Section 3 of Chapter 6 suggests three complementary, implementation independent, representations
of the system task together with a complete list of all inputs and outputs as an appropriate form of such a presentation.
The next step is to estimate the load which will be generated by all the tasks which the system is to perform. The
method for doing this is essentially the same as that described in Chapter 4 and the data on the load and memory
requirements that has to be determined is also the same. Some quick methods for first assessment of the load
and memory requirements are mentioned in Chapter 6. However, to get a more accurate estimate it is necessary
to resort to the use of bench marks. The way this is done is as follows:— the tasks to be performed are detailed
to a level which enables programmers with experience of this type of task to estimate the number of
instructions that will have to be performed and the amount of memory required. These estimates will relate
to the particular computer on which the programmers gained their experience. The programmers then
proceed to identify short programs which are (a) typical of the programs which will have to be written and (b) among
the most repetitive parts, so that the computer or computers will spend a large part of
their time executing these short programs or programs like them. In order to evaluate how well the various computers
under consideration are suited to the particular system, these so called bench marks are sent to the manufacturers
of those computers for their programmers to code on the respective machines. It is important that the
coding of the bench marks is done by the manufacturers of the computers, since they are the people best
qualified to do so. By comparing the bench marks as coded by the manufacturers it is possible to establish
performance ratios between these computers and the computer which was used for estimating the total system load.
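The final comparison step can be suggested by the small calculation below: given the bench mark times coded by
each manufacturer, performance ratios are formed against the reference machine on which the total load was
estimated, and the estimated load is scaled accordingly. All figures are invented for the example.

    #include <stdio.h>

    /* Illustrative bench mark times in milliseconds; machine 0 is the
       reference computer used to estimate the total system load.      */
    int main(void)
    {
        const char  *name[]  = { "reference", "machine A", "machine B" };
        const double bench[] = { 40.0, 25.0, 55.0 };  /* assumed timings */
        const double est_load = 0.60;  /* fraction of reference machine used */

        for (int i = 1; i < 3; i++) {
            double ratio = bench[0] / bench[i];  /* >1: faster than reference */
            printf("%s: performance ratio %.2f, projected load %.0f%%\n",
                   name[i], ratio, 100.0 * est_load / ratio);
        }
        return 0;
    }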

The next step is to consider which type of system would be appropriate. There are four main types of
computer complexes. The first is the straightforward single computer described in Chapter 4.
Another is the multi-computer complex. An example of this is shown in Figure 11.3. This relates to a
simple air traffic control system in which the main computer prepares, from the data provided to it, as complete
a file as possible on all the aircraft in the air space. Each of the display processors analyses the messages given to it
by the air traffic controller, determines what information each particular controller requires, extracts this information
from the main computer, transforms it into the presentation requested by the controller and displays it. This
is a rather simple example of a multi-computer system. Other systems of this type can be very much more complex,
containing a great many computers. A third type of computer complex is that which has a single general purpose
processor with one or more special purpose processors. These latter are usually input/output processors. They
can, however, also be processors which are very fast in one particular task that the system has to perform, e.g.
special hardware for performing the fast Fourier transform. The last type of computer complex is the genuine
multi-processor, in which there are a number of general purpose central processors. The last two types of
computer complex are only possible when the hardware has been specifically designed for this type of system
architecture, since they require so called multiport storage modules and some other hardware facilities. The various
methods for interconnecting the components of such systems are discussed in Chapter 6.

The choice of type of computer complex is determined not only by the need to achieve the specified response
time but also by the availability and integrity requirements. A great deal of development work on high availability
systems has been done in the last 15 years or so and there is now considerable field experience with such systems.
The current state of the art is that, where maximum down times of the order of a few tens of seconds are specified,
it is necessary to resort to one of the methods classified in Chapter 6 as fault masking methods or stand by
redundancy methods with automatic reconfiguration. While the fault masking methods hold out great promise, most
of the actual field experience is with systems using stand by redundancy methods. Where down times of several
minutes are acceptable, multi-computer systems with manual reconfiguration suffice. A straightforward single
computer system like that described in Chapter 7 can only be used if a down time determined by the repair time
is acceptable. The repair time in turn depends on such factors as the availability of on site repair facilities and the
level of spares holdings.

Fig. 11.3 Multi-computer system (secondary radar inputs to the main computer; display computers, each with keyboard, display and rolling ball)



Complex real time systems require software to control the execution of the various tasks within the system.
Such software is often known as the operating system. This term, and the type of software denoted by it, belongs
to the domain of large real time systems such as on line systems for scientific computation, e.g. the well known
project MAC at the Massachusetts Institute of Technology. The purpose of an operating system is to create a
user environment suited to the needs of the user which is almost independent of the hardware. Standard scientific
or EDP operating systems can become very inefficient in real time applications, and a case has been described where only
5% of the time was available for actual productive work, the remainder being consumed by the operating system
itself. Dedicated real time systems almost invariably have control programs which are specially designed for the
particular system.

The simplest dedicated real time systems have no control software at all. Individual interrupts are sequenced
by whatever hardware priority facilities there are, data is processed as it becomes available and whatever processed
information there is, is output on demand. As complexity increases, software has to be added to control the operation
of the system. The first step will usually be to introduce a scheduler. This may be required to prevent one task
from monopolising the computer to the detriment of the other tasks. With a scheduler, a program on completion
no longer calls in another program. Instead each program, as it completes its task, passes control to the scheduler.
To prevent one task from monopolising the computer a method known as time slotting is often resorted to. Using
a real time clock the scheduler will allocate predetermined time periods to various tasks. In a comparatively simple
system the scheduler may control the tasks partly on the basis of interrupt and partly by time slotting. In still more
complex systems the scheduler will evolve into a control program which will also pass messages between programs
and perform the management of data shared by several programs. The tasks of the control program (or executive,
as it is sometimes known) will go on increasing with the complexity of the system. Thus, for instance, in a multi-
processor high availability system the control program will control all input and output, schedule all tasks, manage
all the system resources, such as the allocation of working space in the core store to the various programs at any one
time, and furthermore, because of the special nature of such a system, it will also control the system configuration.
In this capacity it will exclude from the system any module which is found to be faulty and replace it by a
stand-by module. The control program will also, if necessary, recover operation of the system; that is, if for
instance the faulty module was a core store module, it will load the replacement module, from the backing store,
with the program or data contained in the module which has been replaced.
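The time slotting idea described above can be sketched in miniature: a periodic real time clock tick hands the
processor to the next task when its allotted slot expires, and each task returns control to the scheduler rather
than calling another task directly. All names and figures below are invented for the illustration.

    #include <stdio.h>

    #define NTASKS 3

    struct task {
        const char *name;
        int slot_ticks;            /* predetermined time allocation     */
        void (*run)(void);         /* one increment of the task's work  */
    };

    static void track_update(void) { /* process available radar data */ }
    static void display_io(void)   { /* service display requests     */ }
    static void housekeeping(void) { /* self-test, logging, etc.     */ }

    static struct task tasks[NTASKS] = {
        { "track update", 5, track_update },
        { "display i/o",  3, display_io   },
        { "housekeeping", 2, housekeeping },
    };

    int main(void)
    {
        int cur = 0, remaining = tasks[0].slot_ticks;

        for (int tick = 0; tick < 20; tick++) {    /* simulated clock ticks */
            tasks[cur].run();
            if (--remaining == 0) {                /* slot exhausted:        */
                cur = (cur + 1) % NTASKS;          /* scheduler passes       */
                remaining = tasks[cur].slot_ticks; /* control to next task   */
                printf("tick %2d: switch to %s\n", tick, tasks[cur].name);
            }
        }
        return 0;
    }

In a comparatively simple system, as noted above, such a loop would be driven partly by interrupts and partly
by the clock, with the slot lengths chosen from the timing analysis described earlier.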

11.3.3.4 Optimization

The right time at which to start on the optimisation of the design is when the first complete design of the
system is finished, as pointed out in Chapter 4. There is an emotional problem about doing this at that time,
since the natural reaction of a designer who has just completed a design is to breathe a sigh of relief and possibly
also pat himself on the back for having done a good job. It may help to overcome this emotional problem if it
is considered that the first design is of necessity bound to be more a record of the designer's gropings towards
a solution than an optimum design.

Chapter 4 of this book, which is devoted to the problems of design optimisation, rightly points out that the
first step towards optimisation is to become clear on what would be an optimum in the particular case. Clearly
such factors as size or cost will have different weightings attached to them in different applications. Furthermore,
in the process of determining what the optimum would be it is necessary to consider not only the finished product,
but also the way in which it will be implemented. On a system of any size, the implementation work will be
divided between a number of groups of people. Experience has shown that design and implementation errors are
more likely to occur at the seams where the work of the various groups comes together. In deciding on the optimum
it is, therefore, also necessary to bear in mind the way in which implementation will be partitioned, so as to minimise
the seams and, furthermore, to suit the parts of the work to the ability and experience of the groups of people who
will be available to do the work. In brief, what may be an optimum design given one set of implementation
resources may well not be an optimum design given another set. Also, as follows clearly
from what has been said earlier in this summary about the various forecasts of how the system will be used,
it is necessary to bear in mind not only the production of the system but also the manner in which it will
be used. The advantages of modularity in design and its impact on maintenance and ease of modification are well
known. What is less frequently remembered is that this also applies to the software. Like the hardware, the
software has to be robust, modular and easy to maintain and modify.

There are a number of well known trade-offs in the course of optimisation. One of them is speed versus
complexity. Thus, for instance, a central computer complex which has to carry a very high load may be implemented
either by using one very fast computer or a number of slower ones, in either a multi-computer or multi-processor
type of configuration. Normally the rule is that the faster computer will give a simpler and therefore better
design. A further reason for this is that the price of computers does not go up in proportion with the speed; that is,
the through-put per dollar increases the faster the computer. There is, however, with currently available computers
a point of discontinuity in this respect. The so called mini computers nowadays offer excellent value for money
hardware-wise. Where there are no special environmental or availability requirements, these mini computers,
which are available from a great many suppliers, offer outstanding value for money, not only in processing power
but also in their input and output facilities. However, their range of speed currently goes up to about a million
instructions per second. Where a higher speed is required, the use of computers designed for large data processing

systems will typically increase the cost of acquisition of the processing power by a factor of 10. Therefore,
where the processing power required exceeds something like a million instructions per second, it may be
economically advantageous to use more than one mini computer in spite of the extra complexity of a multi-computer
or multi-processor complex.

Another of the well known trade-offs is that between hardware and software. Many functions in real time
systems, like the error correcting function in data transmission mentioned in Chapter 7 or the fast Fourier trans-
form function mentioned in Chapter 5, are better performed by special hardware than by software. Another
application where the trade-off is between software and (usually analogue) hardware is the expansion and off-centering
of graphic presentations on CRTs. The worst aspect of the optimisation of the hardware/software trade-off is the
dichotomy between hardware and software people in the computer field. The fact that these two groups of designers
are indeed different groups, with inadequate communication between them, makes it difficult to establish where the
optimum trade-off lies. Yet another of these well known trade-offs is that between the choice of proven but
less advanced equipment, as against more advanced and perhaps more suitable equipment which is still at the
drawing board stage. Apart from pointing out the dangers of relying on equipment which is still on the drawing
board, it is difficult to give guidance on this trade-off. It is, however, well worth bearing in mind that there are
ways of quantifying, to some extent at least, the degree of risk in relying on as yet unproven equipment.
The three main points to consider in such a case are as follows: —

(a) The amount of experience of the team designing the new equipment. Here it is important to ascertain
the experience of the actual team engaged in the design work rather than that of the organisation in
question as a whole.
(b) The extent of advance of the new equipment being designed over previous equipment designed by that
team. Clearly, the greater the jump from the previous model, the greater the likelihood of unforeseen
problems.
(c) The extent to which the new equipment being designed comes up against the limits of the technology which
it employs. Any new equipment which is very close to the limits of the technology it employs will
usually require a long development period before it operates satisfactorily.

Finally, the most important and decisive trade-off of all: simplicity versus anything else. There is no substitute
for simplicity, and the slogan KISS, the acronym of "Keep It Simple, Stupid", applies to real time systems just
as much as, and perhaps even more than, to anything else.

In searching for the optimum design it would, of course, be nice if some method were available to evaluate
the effectiveness of the design before implementation starts. The method available for this purpose is simulation,
as described in Chapter 11. There are, however, a great many reservations about this method. The basic problem
is that any simulation is only as good as the assumptions on which it is based. There is a case on record of a system
which was simulated at great cost and effort prior to implementation, the simulation having proved, among
other things, that the proposed system had ample processing power, including provision for further expansion for
many years ahead. Yet it was discovered that the processing power was totally inadequate even before implementation
was completed. One of the reasons, it was found, was that the load on the system had been badly underestimated.
Simulation has been found to give misleading results in so many cases and at such great cost that it has been called by
some people "a sink for time and money". Simulation may, nevertheless, be useful provided it is borne in mind
that it can be extremely dangerous if it is employed as a substitute for thinking or if the cost or effort required to
carry it out are underestimated.

11.3.4 A Case History


A highly enlightening case history which illustrates the various aspects of system design discussed throughout
this book is described in Chapter 8. The example chosen is that of a computer system for the control of aircraft
power plant. Section 2 of Chapter 8 highlights the fact that the whole system is conditioned by the environment
in which it will operate. The top part of Figure 1 illustrates that the environment is made up of social, economic,
technological and regulatory factors. It is in this environment that the market requirements will arise for an
economically justifiable tool to perform a given function. The lower part of Figure 1 neatly illustrates the
hierarchical nature of the real time systems discussed in this book. The market requirement is for a given mode
of transportation which is met by a certain vehicle. The vehicle is in itself made up of three sub-systems:— the
air frame, the means of propulsion and the flight systems, such as communications and navigation equipment. The
system for controlling the power plant is in itself a sub-system of the propulsion sub-system.

The point is forcibly made in Section 2 that the real purpose of the design and implementation is not just to
obtain the customer's signature on the acceptance chitty. It is the service life of the system which is the real
justification for it. Hence, the importance of the various forecasts of use described in Section 11.2.1.2 of this
summary for the derivation of the specification.

Section 3 of Chapter 8 is concerned with the derivation of the operational requirement. It brings out the
fact that the operational requirements must embrace not only the normal operation of the system but also the
failure modes, not only in the system itself but also in the associated systems, including the human factors in
its environment. These considerations are continued in Section 4 where the point is brought out that not only
must the system continue to operate in a pre-determined manner under all these conditions but that its operation
must also be monitored, that is, it is also essential to know at all times how the system does in fact behave. Section
5 then deals with the considerations arising from the data acquisition, communications and data processing aspects,
while Section 6 is concerned with the considerations of the man/machine interface. Section 7 then goes on to
consider the realisation or implementation of the system. The possibility of giving to the designers not only the
specification arrived at, but also the operational requirements is considered. This has the advantage of improving
the designer's understanding of the specification and should also improve communications across the interface
between the designers and the users. In the case of the control system for an aircraft power plant, there is the
further difficulty that the power plant to be controlled is being designed at the same time as the system for
controlling it. Simulation and emulation methods to help overcome this difficulty are discussed, as is the flexibility
that has to be built into the control system to take care of the modifications which will be made in the power
plant during its development.

The requirements for the software of the system are then considered, and to those listed earlier in the Chapter
on optimisation, is added portability, so that proven modules of software can be carried over into similar systems
at minimum cost and risk.

11.4 NOVEL DEVICES AND TECHNIQUES

11.4.1 General
The last Chapter of this book surveys current developments in advanced technology applicable to data processing
and this last Section of the summary briefly reviews the potential impact of these developments on the design of
data processing equipment in an attempt to forecast the type of processing, storage and input/output equipment
which will be available to the designer of avionic systems in the near and medium term future.

11.4.2 Central Processors


The development which is having, and will continue to have, the greatest impact on central processor design is that of very
high speed read-only memories. The availability of such memories makes it possible to replace the bulk of the assem-
blage of wired gates and bi-stable devices which control the operation of central processors with suitably programmed
ROMs. As an example, in work done for the US Air Force the Burroughs Corporation have developed a very basic
framework which can be converted into various central processors or into special purpose processors simply by
providing the framework with the appropriate ROM. This ease of providing the control function has already led to
the appearance of processors with optional instructions. Such processors provide a more or less extensive standard
instruction repertoire to which the user may add a number of instructions tailored to his application. Such optional
instructions can greatly increase the processing power in specialised applications. A few computers have also been
announced in which the machine language as we now know it no longer exists and is replaced by a suitable higher
level language. In fact, in at least one case a choice of high level languages is offered. There is also at least one
computer where the most commonly used nuclei of the operating system have been built into hardware. There
is no doubt that all three of these trends will continue and indeed gather momentum.
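The principle of ROM-driven control can be suggested in miniature by the sketch below: each operation code
indexes a control store whose contents, rather than wired logic, determine what the processor does, so that
reprogramming the "ROM" changes the instruction repertoire. This is a loose software analogy, invented for
illustration, and not a description of the Burroughs design or any other machine.

    #include <stdio.h>

    /* Each "control word" in the ROM determines the action taken for
       one opcode; the table, not wired gates, supplies the control.  */
    typedef void (*control_word)(int *acc, int operand);

    static void op_load(int *acc, int v) { *acc = v; }
    static void op_add (int *acc, int v) { *acc += v; }
    static void op_out (int *acc, int v) { (void)v; printf("acc = %d\n", *acc); }

    static const control_word rom[] = { op_load, op_add, op_out };

    int main(void)
    {
        /* A tiny program: load 5, add 7, output. */
        const int program[][2] = { {0, 5}, {1, 7}, {2, 0} };
        int acc = 0;

        for (int pc = 0; pc < 3; pc++)
            rom[program[pc][0]](&acc, program[pc][1]); /* ROM supplies control */
        return 0;
    }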

11.4.3 Semi-Conductor Memories


As pointed out in Section 3 of Chapter 10, semi-conductor memories have already superseded magnetic
films and are now the only competitors to core stores. The latter have so far kept up in the race for faster storage
at lower cost per bit. However, semi-conductor stores are at the beginning of their development, whereas this is
not true of core stores; the latter are, therefore, likely to fall back in the race. The drawback of semi-conductor
stores is their volatility, in that their contents are lost with the loss of power. This is overcome by the provision
of stand by power arrangements using an accumulator battery. There may, however, be applications where the
environmental conditions would exclude such back up, giving core stores an advantage. As pointed out in the
last Chapter, it is not only speed and cost which confer an advantage on semi-conductor memories. They can also
provide orthogonal access. This means that they can be so designed that they can be written into and read out of
not only by words but also in the orthogonal mode, that is, the corresponding bits of all the words may also be
written in or read out. This makes possible the design of associative processors, as described in the next section.
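Orthogonal access can be illustrated in software terms: reading "by word" returns all the bits of one word, while
reading "orthogonally" returns the corresponding bit of every word, a bit plane. The sketch below, with invented
names and sizes, mimics the two modes serially; in an orthogonal memory both are single accesses.

    #include <stdio.h>
    #include <stdint.h>

    #define NWORDS 8

    static uint16_t mem[NWORDS];   /* an 8-word, 16-bit illustrative store */

    /* Word access: all bits of word w. */
    static uint16_t read_word(int w) { return mem[w]; }

    /* Orthogonal access: bit b of every word, packed into one result,
       with result bit i holding bit b of word i.                       */
    static uint8_t read_bitplane(int b)
    {
        uint8_t plane = 0;
        for (int w = 0; w < NWORDS; w++)
            plane |= (uint8_t)(((mem[w] >> b) & 1u) << w);
        return plane;
    }

    int main(void)
    {
        for (int w = 0; w < NWORDS; w++) mem[w] = (uint16_t)(w * 3);
        printf("word 5      = 0x%04x\n", read_word(5));
        printf("bit plane 0 = 0x%02x\n", read_bitplane(0)); /* LSB of each word */
        return 0;
    }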

11.4.4 Associative Processors

The name "associative processor" has a historic origin, in that computer users have long been hankering
for the ability to address data by its content rather than by means of an index of where the
required data was located. This is analogous to being able to stand up at a conference, say, and request
Mr Brown to contact Reception instead of having to look up an index, if one exists, to find out where Mr Brown
is sitting. To appreciate the full power of this type of processing, however, it should be realised that not only
access, but also arithmetic and logic operations can be performed on many items of data simultaneously. The
combined effect of the low cost and orthogonal property of semi-conductors, together with the cost-effectiveness
of LSI in arithmetic and logic, is to make it feasible to produce an array of up to several hundred arithmetic and logic
units, each with up to thousands of bits of storage, all controlled by a common control unit. It is thus possible to
execute an instruction on a great many items of data simultaneously. This is now known as the SIMD (Single
Instruction, Multiple Data) mode of operation. So far the cost of such SIMD processors is very much higher than the
cost of the mass produced mini computers. With the ingenious methods which have been developed over the years
to overcome the lack of multiple data operation, there have not so far been many applications which would justify
the use of SIMD mode computers. However, design and cost effectiveness studies on the use of such computers in
air traffic control are now going on in the United States. It is probably safe to say that the combined effect of the
Parkinsonian law that applications expand to fill the available means and of the reduction in the cost of SIMD
computers will result in their progressive introduction into use.
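The SIMD idea, one instruction broadcast over many data items at once, can be mimicked serially as below: a
single "match" instruction is applied to every cell of the array, giving an associative search by content rather
than by address, and all responding cells are then operated on together. The data and names are invented for
the illustration; a true associative processor performs the compare in all cells simultaneously.

    #include <stdio.h>

    #define CELLS 8

    /* Each cell of the associative array holds a record; one broadcast
       instruction operates on every cell (here serially, for clarity). */
    struct cell { int key; int value; };

    int main(void)
    {
        struct cell array[CELLS] = {
            {12, 100}, {7, 200}, {12, 300}, {3, 400},
            {9, 500},  {12, 600}, {1, 700}, {5, 800},
        };
        int search_key = 12;
        unsigned match = 0;                 /* one response bit per cell */

        for (int i = 0; i < CELLS; i++)     /* broadcast compare         */
            if (array[i].key == search_key)
                match |= 1u << i;

        for (int i = 0; i < CELLS; i++)     /* act on all responders     */
            if (match & (1u << i))
                printf("cell %d responds: value %d\n", i, array[i].value);
        return 0;
    }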

11.4.5 Mass Memory


All mass memories available so far are electro-mechanical ones, with all the attendant problems. This presents
a particular problem in avionic systems, since even now no fully rugged disc is available. Magnetic bubble
memories may well, therefore, find their first application in avionic systems, where their low power consumption
would be an additional advantage.

11.4.6 Displays
As mentioned in Section 6 of Chapter 9, CRT displays are used in real time systems for the presentation of
both graphic information and of messages made up of characters. Displays using light emitting diodes afford a
replacement only for the latter application. Graphic presentation will become possible with plasma panels. This
is another current development in display technology and consists of a panel containing a large number of small
cells filled with ionised gas. Each of the cells can be switched on or off by a suitable addressing mechanism and will
emit light when switched on. The main economic impetus for the development of these panels is the possibility of
their replacing the tubes in television sets. However, both light emitting diodes and plasma panels are candidates
for early use in real time systems because CRTs are bulky, have to be carefully, and hence expensively, mounted
for a rugged environment and, in addition, have a low average life time.