
Invitation to Computer Science

CHAPTER 4
KUAN-CHOU LAI

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 1


Objectives
After studying this chapter, students will be able to:
– Translate between base-ten and base-two numbers, and
represent negative numbers using both sign-magnitude
and two’s complement representations
– Explain how floating-point numbers, characters, sounds,
and images are represented inside the computer
– Build truth tables for Boolean expressions and determine
when they are true or false
– Describe the relationship between Boolean logic and
computer hardware/circuits

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 2


Objectives
– Construct circuits using the sum-of-products circuit
design algorithm, and analyze simple circuits to
determine their truth tables
– Explain how the compare-for-equality (CE) circuit
works and its construction from one-bit CE circuits, and
do the same for the adder circuit and its one-bit adder
parts
– Describe the purpose and workings of multiplexor and
decoder control circuits

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 3


Introduction
• All computing devices are built on the ideas in this
chapter
– Laptops, desktops
– Servers, supercomputers
– Game systems, cell phones, MP3 players
– Calculators, singing get-well cards
– Embedded systems, in toys, cars, microwaves, etc.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 4


Introduction
• Focuses on hardware design (also called logic design)
– How to represent and store information inside a
computer
– How to use the principles of symbolic logic to design
gates
– How to use gates to construct circuits that perform
operations such as adding and comparing numbers, and
fetching instructions

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 5


Binary Systems
• How can an electronic (or magnetic) machine represent
information?
• Key requirements: clear, unambiguous, reliable
• External representation is human-oriented
– base-10 numbers
– keyboard characters
• Internal representation is computer-oriented
– base-2 numbers
– base-2 codes for characters

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 6


The Matrix (駭客任務)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 7


Binary Systems
• The meaning of binary information depends on the
internal representation

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 8


Binary Systems
• Binary is the simple idea of On/Off, Yes/No,
True/False, and Positive/Negative.
• Binary is important to computing systems because of
its stability and reliability.
• Even when an electrical system degrades, there is still a
clear “On/Off.”
• All data stored inside a computer is stored in binary
(also called machine language) and interpreted to
display on the screen in human language.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 9


Binary Systems
• A computer’s internal storage techniques are different
from the way people represent information in daily
lives
• Information inside a digital computer is stored as a
collection of binary data
– Numbers
• Integer: unsigned / signed
• Floating point
– Text: ASCII / UNICODE
– Audio
– Images and graphics
– Video
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 10
Binary Systems
• Binary numbering system
– Base-2
– Built from ones and zeros
– Each position is a power of 2
1101₂ = 1 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰
• Decimal numbering system
– Base-10
– Each position is a power of 10
3052₁₀ = 3 × 10³ + 0 × 10² + 5 × 10¹ + 2 × 10⁰

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 11


Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 12


Binary Systems
• Number Systems
– The value of a number in base r:

aₙ × rⁿ + aₙ₋₁ × rⁿ⁻¹ + … + a₂ × r² + a₁ × r¹ + a₀
+ a₋₁ × r⁻¹ + a₋₂ × r⁻² + … + a₋ₘ × r⁻ᵐ
– Examples
(4021.2)₅ = 4 × 5³ + 0 × 5² + 2 × 5¹ + 1 × 5⁰ + 2 × 5⁻¹ = (511.4)₁₀

(110101)₂ = 1 × 2⁵ + 1 × 2⁴ + 0 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰
= 32 + 16 + 4 + 1 = (53)₁₀

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 13
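As a rough illustration of the positional formula above, here is a minimal Python sketch (the function name digits_to_decimal is just for illustration, not from the textbook) that evaluates whole-part and fractional-part digit lists in an arbitrary base r:

def digits_to_decimal(whole, frac, r):
    """Evaluate whole-part and fractional-part digit lists in base r."""
    value = 0
    for d in whole:            # a_n*r^n + ... + a_1*r^1 + a_0
        value = value * r + d
    scale = 1 / r
    for d in frac:             # a_-1*r^-1 + a_-2*r^-2 + ...
        value += d * scale
        scale /= r
    return value

# (4021.2) in base 5 -> 511.4
print(digits_to_decimal([4, 0, 2, 1], [2], 5))
# (110101) in base 2 -> 53
print(digits_to_decimal([1, 1, 0, 1, 0, 1], [], 2))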


Binary Systems
• Decimal
– is base 10 and has 10 digits:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9
• Binary
– is base 2 and has 2 digits:
0, 1
• For a number to exist in a given number system, the
number system must include those digits.
– For example, the number 284 only exists in base 9 and
higher.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 14


Binary Systems
• Converting from binary to decimal
– Add up powers of two where a 1 appears in the binary
number
• Converting from decimal to binary
– Repeatedly divide by two and record the remainder
– Example: convert 9:
• 9/2 = 4 remainder 1, binary number = 1
• 4/2 = 2 remainder 0, binary number = 01
• 2/2 = 1 remainder 0, binary number = 001
• 1/2 = 0 remainder 1, binary number = 1001

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 15


Binary Systems
• Positional Notation
What if 642 is in base 13?
+ 6 × 13² = 6 × 169 = 1014
+ 4 × 13¹ = 4 × 13 = 52
+ 2 × 13⁰ = 2 × 1 = 2
= 1068 in base 10
642₁₃ in base 13 is equivalent to 1068₁₀ in base 10

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 16


Binary Systems
• Bases Higher than 10
How are digits in bases higher than 10 represented?
– With distinct symbols for 10 and above.
– Base 16 has 16 digits:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 17


BIOS CODE

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 18


Converting Octal to Decimal
• What is the decimal equivalent of the octal number 642?

  6 × 8² = 6 × 64 = 384
+ 4 × 8¹ = 4 × 8 = 32
+ 2 × 8⁰ = 2 × 1 = 2
= 418 in base 10

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 19


Converting Hexadecimal to Decimal
• What is the decimal equivalent of the hexadecimal
number DEF?

  D × 16² = 13 × 256 = 3328
+ E × 16¹ = 14 × 16 = 224
+ F × 16⁰ = 15 × 1 = 15
= 3567 in base 10
• Remember, the digits in base 16 are
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 20


Converting Binary to Decimal
• What is the decimal equivalent of the binary number
1101110?
  1 × 2⁶ = 1 × 64 = 64
+ 1 × 2⁵ = 1 × 32 = 32
+ 0 × 2⁴ = 0 × 16 = 0
+ 1 × 2³ = 1 × 8 = 8
+ 1 × 2² = 1 × 4 = 4
+ 1 × 2¹ = 1 × 2 = 2
+ 0 × 2⁰ = 0 × 1 = 0
= 110 in base 10

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 21


Representation
– 345.678₁₀ = 3×10² + 4×10¹ + 5×10⁰ + 6×10⁻¹ + 7×10⁻² + 8×10⁻³
– 1010.11₂ = 1×2³ + 0×2² + 1×2¹ + 0×2⁰ + 1×2⁻¹ + 1×2⁻²
– 123.45₈ = 1×8² + 2×8¹ + 3×8⁰ + 4×8⁻¹ + 5×8⁻²
– 89A.BC₁₆ = 8×16² + 9×16¹ + 10×16⁰ + 11×16⁻¹ + 12×16⁻²

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 22


Converting Binary to Octal
• Groups of Three (from right)
• Convert each group

10101011 → 10 101 011


2 5 3
10101011 is 253 in base 8

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 23


Converting Binary to Hexadecimal
• Groups of Four (from right)
• Convert each group

10101011 → 1010 1011


A B
10101011 is AB in base 16

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 24


Converting Decimal to Other Bases
• Algorithm for converting base 10 to other bases

While the quotient is not zero


– Divide the decimal number by the new base
– Make the remainder the next digit to the left in the
answer
– Replace the original dividend with the quotient

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 25
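A minimal Python sketch of the repeated-division algorithm above (the function name to_base is illustrative, not from the textbook):

DIGITS = "0123456789ABCDEF"

def to_base(n, base):
    """Convert a non-negative decimal integer to a string in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:                         # while the quotient is not zero
        n, remainder = divmod(n, base)
        digits.append(DIGITS[remainder])  # remainder becomes the next digit to the left
    return "".join(reversed(digits))

print(to_base(3567, 16))   # DEF
print(to_base(9, 2))       # 1001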


Converting Decimal to Hexadecimal
Try a Conversion

The base 10 number 3567 is what number in base 16?

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 26


Converting Decimal to Hexadecimal

3567 ÷ 16 = 222, remainder 15 → F
 222 ÷ 16 = 13, remainder 14 → E
  13 ÷ 16 = 0, remainder 13 → D

Reading the remainders from last to first:

D E F

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 27


Converting
– Octal → Decimal
• 51763.2₈ = 5×8⁴ + 1×8³ + 7×8² + 6×8¹ + 3×8⁰ + 2×8⁻¹
= 20480₁₀ + 512₁₀ + 448₁₀ + 48₁₀ + 3₁₀ + 0.25₁₀
= 21491.25₁₀
– The same approach works for
Binary → Decimal / Hexadecimal → Decimal

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 28


Converting
• Decimal → Binary

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 29


Converting
– Decimal → Binary
59.75₁₀ = 59₁₀ + 0.75₁₀
– Whole part
59 ÷ 2 = 29, remainder 1
29 ÷ 2 = 14, remainder 1
14 ÷ 2 = 7, remainder 0
7 ÷ 2 = 3, remainder 1
3 ÷ 2 = 1, remainder 1
1 ÷ 2 = 0, remainder 1
→ 59₁₀ = 111011₂ (remainders read from last to first)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 30


Converting
– Fractional part
Multiply by 2 and take the digit before the point,
repeating until the fraction cycles or reaches zero
0.75 × 2 = 1.5 → 1
0.50 × 2 = 1.0 → 1
0.75₁₀ = 0.11₂
– Merge the whole part and the fractional part →
59.75₁₀ = 111011.11₂

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 31
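The fractional-part rule above can be sketched in Python as follows (fraction_to_base is an illustrative name; it stops after max_digits in case the fraction cycles):

def fraction_to_base(frac, base, max_digits=16):
    """Convert a decimal fraction (0 <= frac < 1) by repeated multiplication."""
    digits = []
    while frac != 0 and len(digits) < max_digits:
        frac *= base
        digit = int(frac)          # the digit before the point
        digits.append("0123456789ABCDEF"[digit])
        frac -= digit
    return "." + "".join(digits)

print(fraction_to_base(0.75, 2))       # .11
print(fraction_to_base(0.15625, 16))   # .28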


Converting
– Decimal → Octal
• Same as Decimal → Binary
• Whole part: divide by 8 instead of by 2
• Fractional part: multiply by 8 instead of by 2
• 5176.65₁₀ = 12070.51463₈
– Decimal → Hexadecimal
• Same as Decimal → Binary
• Whole part: divide by 16 instead of by 2
• Fractional part: multiply by 16 instead of by 2
• 4877.15625₁₀ = 130D.28₁₆

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 32


Converting
• Octal / Hexadecimal → Binary

5762.13₈ = 101 111 110 010.001 011₂

E8C4.B₁₆ = 1110 1000 1100 0100.1011₂

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 33


Converting
– Binary → Octal / Hexadecimal
Group the bits by 3 or by 4
011 010 111.101 100₂ = 327.54₈
0010 1101 0111 1010.1111 0010₂ = 2D7A.F2₁₆

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 34
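A small Python sketch of the grouping trick above (binary_to_hex is an illustrative name); it converts a binary whole-number string to hexadecimal by padding on the left to a multiple of 4 bits and translating each group:

def binary_to_hex(bits):
    """Group a binary string into 4-bit chunks (from the right) and convert each."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad on the left to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(g, 2)] for g in groups)

print(binary_to_hex("10101011"))          # AB
print(binary_to_hex("10110101111010"))    # 2D7A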


Arithmetic in Binary
• Remember that there are only 2 digits in binary,
– 0 and 1
– Position is key, carry values are used:

1 1 1 1 1            Carry values

    1 0 1 0 1 1 1
+   1 0 0 1 0 1 1
  1 0 1 0 0 0 1 0

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 35


Subtracting Binary Numbers
• Remember borrowing? Apply that concept here:
  1 0 1 0 1 1 1
-   1 1 1 0 1 1
  0 0 1 1 1 0 0

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 36


Power of 2 Number System
Binary Octal Decimal
000 0 0
001 1 1
010 2 2
011 3 3
100 4 4
101 5 5
110 6 6
111 7 7
1000 10 8
1001 11 9
1010 12 10
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 37
Binary and Computers
• Binary computers have storage units
– called binary digits or bits

Low voltage = 0
High voltage = 1          (every bit holds either a 0 or a 1)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 38


Binary and Computers
• Byte = 8 bits
• The number of bits in a word determines the word
length of the computer; it is usually a multiple of 8
– 32-bit machines
– 64-bit machines, etc.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 39


Binary and Computers
• Computers use fixed-length binary numbers for
integers; e.g., 4 bits can represent 0 to 15
• Arithmetic overflow: when computer tries to make a
number that is too large, e.g. 14 + 2 with 4 bits
• Binary addition:
– 0+0=0, 0+1=1, 1+0=1
– 1+1=0 with carry of 1
• Example: 0101 + 0011 = 1000

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 40


Binary Systems
• Signed Binary Numbers (-9₁₀)
– Signed-magnitude representation: 10001001
– Signed-1’s complement representation: 11110110
– Signed-2’s complement representation: 11110111
Is “-0 = +0”?

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 41


Binary Systems
• Signed integers include negative numbers
• Sign/magnitude notation uses 1 bit for sign, the rest for
value
– +5 = 0101, -5 = 1101
– 0 = 0000 and 1000!
• Two’s complement representation: to make the
negative of a number, flip every bit and add one
– +5 = 0101, -5 = 1010 + 1 = 1011
– 0 = 0000, -0 = 1111 + 1 = 0000

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 42
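The flip-every-bit-and-add-one rule can be checked with a short Python sketch (twos_complement is an illustrative name; width is the fixed number of bits):

def twos_complement(value, width=4):
    """Return the two's complement bit pattern of -value in the given width."""
    flipped = (~value) & ((1 << width) - 1)   # flip every bit, masked to the width
    return format((flipped + 1) & ((1 << width) - 1), f"0{width}b")

print(twos_complement(5))   # 1011  (-5 in 4 bits)
print(twos_complement(0))   # 0000  (-0 equals +0)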


Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 43


Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 44


Binary Systems
• Complements
– Diminished radix complement: (r-1)’s complement
• For a number N in base r having n digits:
(r-1)’s complement = (rⁿ - 1) - N
• Ex.
– 9’s complement of 546700 is 999999 - 546700 =
453299
– 1’s complement of 0101101 is 1010010

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 45


Binary Systems
– Radix complement: r’s complement
• For a number N in base r having n digits:
r’s complement = rⁿ - N if N ≠ 0, and 0 if N = 0
• Ex.
– 10’s complement of 012398 is 987602
– 2’s complement of 0110111 is 1001001

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 46


Binary Systems
• Subtraction with Complements
– Recall the borrow concept from elementary school
– Subtraction of two n-digit unsigned numbers M - N in
base r:
• Add the minuend M to the r’s complement of the
subtrahend N: M + (rⁿ - N)
• If M ≥ N, the sum produces an end carry, rⁿ
– Discard it, leaving M - N
• If M < N, the sum does not produce an end carry; it
equals rⁿ - (N - M), which is the r’s complement of
(N - M), so attach a negative sign.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 47


Binary Systems
– Ex. Using 10’s complement, subtract 72532 - 3250
M = 72532
10’s complement of N = + 96750
Sum = 169282
Discard end carry 10⁵ = -100000
Ans = 69282

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 48
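The M + (rⁿ - N) procedure above can be sketched in Python (subtract_with_complement is an illustrative name, not from the textbook):

def subtract_with_complement(m, n, digits, r=10):
    """Compute m - n using the r's complement of n, as on the slide above."""
    complement = r ** digits - n
    total = m + complement
    if total >= r ** digits:          # end carry produced: M >= N
        return total - r ** digits    # discard the carry
    return -(r ** digits - total)     # no end carry: negate the r's complement

print(subtract_with_complement(72532, 3250, 5))   # 69282
print(subtract_with_complement(3250, 72532, 5))   # -69282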


Binary Systems
– Arithmetic Addition

– Arithmetic Subtraction
(±A) – (+B) = (±A) + (-B)
(±A) – (-B) = (±A) + (+B)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 49


Binary Systems
• Number Overflow
– Overflow occurs when the value that we compute cannot
fit into the number of bits we have allocated for the
result. For example, if each value is stored using eight
bits, adding 127 to 3 overflows.
  01111111
+ 00000011
  10000010
– Overflow is a classic example of the type of problems we
encounter by mapping an infinite world onto a finite
machine.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 50


Binary Systems
• Representing real numbers
– Floating point numbers
– Real numbers may be put into binary scientific notation:
a × 2ᵇ
• Scientific notation, base 10: 1.35 × 10⁻⁵
base 2: 3.25₁₀ = 11.01₂ = 1.101 × 2¹
– The number is then normalized so that the first significant
digit is immediately to the right of the binary point
• Example: .10111 × 2³
– The mantissa and exponent are then stored

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 51


Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 52


Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 53


Binary Systems
The examples above exist only in this textbook

In practice, the format actually used for floating-point numbers is

IEEE-754

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 54


IEEE-754 Floating Point Formats

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 55


Normalization

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 56


IEEE-754 Floating Point Formats

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 57


IEEE-754 Floating Point Formats

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 58


IEEE-754 Examples

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 59


IEEE-754 Conversion Example
– Represent -12.625₁₀ in single-precision IEEE-754 format.
• Step #1: Convert to the target base. -12.625₁₀ = -1100.101₂
• Step #2: Normalize. -1100.101₂ = -1.100101₂ × 2³
• Step #3: Fill in the bit fields. The sign is negative, so the sign bit is
1. The exponent is in excess 127 (not excess 128!), so the exponent is
represented as the unsigned integer 3 + 127 = 130. The leading
1 of the significand is hidden, so the final bit pattern is:
– 1 1000 0010 1001 0100 0000 0000 0000 000

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 60
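The bit pattern derived above can be checked with Python's standard struct module, which packs a float in IEEE-754 single precision (a minimal sketch, not part of the textbook):

import struct

# Pack -12.625 as a big-endian IEEE-754 single-precision float and show the bits.
bits = struct.unpack(">I", struct.pack(">f", -12.625))[0]
print(f"{bits:032b}")
# 11000001010010100000000000000000
# sign = 1, exponent = 10000010 (130), fraction = 10010100000000000000000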


Binary Systems
• To represent a text document in digital form, we need
to be able to represent every possible character that
may appear.
• There are a finite number of characters to represent, so
the general approach is to list them all and assign each
a binary string.
• A character set is a list of characters and the codes used
to represent each one.
• By agreeing to use a particular character set, computer
manufacturers have made the processing of text data
easier.
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 61
Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 62


Binary Systems
• Text alphanumeric characters
– ASCII (American Standard Code for Information
Interchange)
– 7 bits for 128 characters
• 94 graphic characters that can be printed
• 34 nonprinting characters for control functions
– Format effectors
» Backspace, carriage return
– Information separators
» Record separator
– Communication-control characters
» Start of text, end of text, acknowledge
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 63
ASCII Character Set
– ASCII stands for American Standard Code for
Information Interchange. The ASCII character set
originally used seven bits to represent each character,
allowing for 128 unique characters.
– Later ASCII evolved so that all eight bits were used
which allows for 256 characters

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 64


High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 65
Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 66


Binary Systems

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 67


High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 68
Binary Systems
• UNICODE code set
– 16 bits per character; 65,536 character codes
– Unicode was designed to be a superset of ASCII. That is,
the first 256 characters in the Unicode character set
correspond exactly to the extended ASCII character set.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 69


Binary Systems
• Error-Detecting Code
– Parity bit is an extra bit to make the total number of 1’s
either even or odd.
– ASCII A=1000001
• Even parity 01000001
• Odd parity 11000001
– Detects 1, 3, or any odd number of errors

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 70
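A minimal Python sketch of adding a parity bit to a 7-bit ASCII code (add_parity is an illustrative name, not from the textbook):

def add_parity(code7, even=True):
    """Prefix a parity bit so the total number of 1s is even (or odd)."""
    ones = code7.count("1")
    bit = "0" if (ones % 2 == 0) == even else "1"
    return bit + code7

a = format(ord("A"), "07b")      # 'A' = 1000001
print(add_parity(a))             # 01000001 (even parity)
print(add_parity(a, even=False)) # 11000001 (odd parity)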


Binary Systems
• Binary information
– A group of binary cells
– ex. 1100001111001001
the content of the register represents
• Integer 50121
• Two characters, C and I

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 71


Analog and Digital Information
• Computers are finite
– Computer memory and other hardware devices have
only so much room to store and manipulate a certain
amount of data.
– The goal is to represent enough of the world to satisfy
our computational needs and our senses of sight and
sound.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 72


Analog and Digital Information
• Information can be represented in one of two ways:
analog or digital.
– Analog data
A continuous representation, analogous to the actual
information it represents.
– Digital data
A discrete representation, breaking the information up
into separate elements.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 73


Analog and Digital Information
• A mercury thermometer is an analog device. The
mercury rises in a continuous flow in the tube in direct
proportion to the temperature.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 74


Analog and Digital Information
• Computers cannot work well with analog information,
so we digitize information by breaking it into pieces
and representing those pieces separately.
• Why do we use binary?
– Modern computers are designed to use and manage
binary values because the devices that store and manage
the data are far less expensive and far more reliable if
they only have to represent one of two possible values.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 75


Electronic Signals
• An analog signal continually fluctuates in voltage up
and down. But a digital signal has only a high or low
state, corresponding to the two binary digits.
• All electronic signals (both analog and digital) degrade
as they move down a line. That is, the voltage of the
signal fluctuates due to environmental effects.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 76


Electronic Signals
• Periodically, a digital signal is reclocked to regain its
original shape. (Pulse-code modulation (PCM))

An analog and a digital signal

Degradation of analog and digital signals

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 77


Binary Representations
• One bit can be either 0 or 1.
– Therefore, one bit can represent only two things.
• To represent more than two things, we need multiple
bits.
– Two bits can represent four things because there are
four combinations of 0 and 1 that can be made from two
bits: 00, 01, 10, 11.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 78


Binary Representations
• If we want to represent more than four things, we need
more than two bits.
– Three bits can represent eight things because there are
eight combinations of 0 and 1 that can be made from
three bits.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 79


Binary Representations
• In general, n bits can represent 2ⁿ things because there
are 2ⁿ combinations of 0 and 1 that can be made from n
bits.
• Note that every time we increase the number of bits by
1, we double the number of things we can represent.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 80


Binary Representations

Bit combinations

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 81


Representing Audio Information
• Quantisation and sampling
• Discrete and continuous
• Continuous function mapping
• Discrete data
– Finite domain (set of possible values)
• e.g. Integers between 0 and 100
• Continuous data
– Infinite domain
• e.g. Numbers between 0 and 1
– 0.1, 0.01, 0.001, 0.0001

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 82


Quantisation
• Problem: Storage limitations
• Computers cannot store truly continuous data
• Solution: Quantisation
• Fit continuous data to nearest discrete quantum
• Finite number of quanta (therefore discrete)
• Interval: width of each quantum
• e.g. time, quantised into months

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 83


Quantisation
• Approximation of absolute values to given accuracy

Continuous     Discrete (quantised)
0.452          0
0.562          1
1.22           1
2.92           3          Interval = 1.00

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 84


Sampling
• To digitize the signal we periodically measure the
voltage of the signal and record the appropriate
numeric value. The process is called sampling.
• In general, a sampling rate of around 40,000 times per
second is enough to create a reasonable sound
reproduction.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 85


Sampling
• Taking a reading or sample
• Sampling rate: frequency of samples
• e.g. Movies
• Real life is continuous
• Movie cameras sample (take a still picture) 25 times a
second (f = 25Hz)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 86


Sampling

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 87


Sampling

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 88


Sampling
• Sound waves are characterized by:
– amplitude: height of the wave at a moment in time
– period: length of time until the wave pattern repeats
– frequency: number of periods per time unit
• Compact disc digital storage
– Quantised and sampled
• 16-bit quantisation (65536 quanta)
• 44100 samples per second

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 89


Sampling

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 90


Sampling
• Quality is determined by:
– Sampling rate: number of samples per second
• More samples = more accurate wave form
– Bit depth: number of bits per sample
• More bits = more accurate amplitude
• Time-based media
– Audio files (.wav, .aiff)
• volume = bit_depth * sampling_rate * duration
– Video streams (.mov, .avi)
• volume = w * h * frames
or, more commonly,
• volume = w * h * frame_rate * duration
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 91
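Using the volume formula above, one minute of CD-quality audio can be estimated with a short Python sketch (the factor of 2 channels is an assumption about stereo CDs, not stated on the slide):

bit_depth = 16          # bits per sample
sampling_rate = 44100   # samples per second
duration = 60           # seconds
channels = 2            # assumption: stereo

bits = bit_depth * sampling_rate * duration * channels
print(bits // 8 // 1024, "KiB per minute")   # about 10 MB of data per minute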
Data and Computers
• Data compression
– Reduction in the amount of space needed to store a piece
of data.
• Compression ratio
– The size of the compressed data divided by the size of the
original data.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 92


Compression
• It is important that we find ways to store and transmit
data efficiently, which means we must find ways to
compress text.
– keyword encoding
– run-length encoding
– variable length encoding
• Huffman encoding
• A data compression techniques can be
– lossless, which means the data can be retrieved without
any loss of the original information,
– lossy, which means some information may be lost in the
process of compaction.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 93


Keyword Encoding
• Frequently used words are replaced with a single
character. For example,

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 94


Keyword Encoding
• Given the following paragraph,
– The human body is composed of many independent
systems, such as the circulatory system, the respiratory
system, and the reproductive system. Not only must all
systems work independently, they must interact and
cooperate as well. Overall health is a function of the
well-being of separate systems, as well as how these
separate systems work in concert.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 95


Keyword Encoding
• The encoded paragraph is
– The human body is composed of many independent
systems, such ^ ~ circulatory system, ~ respiratory
system, + ~ reproductive system. Not only & each system
work independently, they & interact + cooperate ^ %.
Overall health is a function of ~ %- being of separate
systems, ^ % ^ how # separate systems work in concert.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 96


Keyword Encoding
• There are a total of 349 characters in the original
paragraph including spaces and punctuation. The
encoded paragraph contains 314 characters, resulting
in a savings of 35 characters. The compression ratio for
this example is 314/349 or approximately 0.9.
• The characters we use to encode cannot be part of the
original text.
Problem: what if a $ in the original text really means $?

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 97


Run-Length Encoding
• A single character may be repeated over and over
again in a long sequence. This type of repetition doesn’t
generally take place in English text, but often occurs in
large data streams.
• In run-length encoding, a sequence of repeated
characters is replaced by a flag character, followed by
the repeated character, followed by a single digit that
indicates how many times the character is repeated.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 98


Run-Length Encoding
• AAAAAAA would be encoded as *A7
• *n5*x9ccc*h6 some other text *k8eee would be decoded into
the following original text
nnnnnxxxxxxxxxccchhhhhh some other text kkkkkkkkeee
• The original text contains 51 characters, and the encoded
string contains 35 characters, giving us a compression ratio
in this example of 35/51 or approximately 0.68.
• Since we are using one character for the repetition count, it
seems that we can’t encode repetition lengths greater than
nine. Instead of interpreting the count character as an
ASCII digit, we could interpret it as a binary number.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 99
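A minimal Python sketch of the flag-character scheme described above (rle_encode is an illustrative name; it only encodes runs longer than 3 and assumes the flag character * does not occur in the data):

def rle_encode(text, flag="*"):
    """Replace runs longer than 3 with flag + character + single-digit count."""
    out, i = [], 0
    while i < len(text):
        run = 1
        while i + run < len(text) and text[i + run] == text[i] and run < 9:
            run += 1
        if run > 3:
            out.append(flag + text[i] + str(run))
        else:
            out.append(text[i] * run)
        i += run
    return "".join(out)

print(rle_encode("AAAAAAA"))                   # *A7
print(rle_encode("nnnnnxxxxxxxxxccchhhhhh"))   # *n5*x9ccc*h6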


Run-Length Encoding

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 100


Variable Length Code Sets
• Huffman Encoding
• Why should the character “X”, which is seldom used in
text, take up the same number of bits as the blank,
which is used very frequently? Huffman codes use
variable-length bit strings to represent each character.
• A few characters may be represented by five bits, and
another few by six bits, and yet another few by seven
bits, and so forth.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 101


Huffman Encoding
• If we use only a few bits to represent characters that
appear often and reserve longer bit strings for
characters that don’t appear often, the overall size of
the document being represented is smaller.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 102


Huffman Encoding

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 103


Huffman Encoding
• DOORBELL would be encoded in binary as
1011110110111101001100100.
• If we used a fixed-size bit string to
represent each character (say, 8 bits), then
the binary form of the original string
would be 64 bits. The Huffman encoding for that string
is 25 bits long, giving a compression ratio of 25/64, or
approximately 0.39.
• An important characteristic of any Huffman encoding
is that no bit string used to represent a character is the
prefix of any other bit string used to represent a
character.
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 104
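The prefix property makes decoding unambiguous: read bits until they match a code, emit that character, and start again. A minimal Python sketch with a small hypothetical code table (these codes are illustrative only, not the textbook's DOORBELL table):

# Hypothetical prefix code: no code is a prefix of any other.
codes = {"A": "00", "E": "01", "L": "10", "O": "110", "R": "111"}

def huffman_decode(bits, codes):
    """Decode a bit string by matching successive prefixes against the code table."""
    reverse = {v: k for k, v in codes.items()}
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in reverse:          # a complete code has been read
            out.append(reverse[buffer])
            buffer = ""
    return "".join(out)

print(huffman_decode("1111101001", codes))   # ROLE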
Compression
• Compression
– Reduces data volume
• Data
– winzip, winrar
• Image compression
– JPEG and Frequency space
• Audio compression
– MiniDisc, MP3
• Video compression
– MJPEG, MPEG2

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 105


Audio Formats
• Audio Formats
– WAV(Windows), AU(Sun), AIFF(Apple), VQF, and
MP3.
• MP3 is dominant
– MP3 is short for MPEG-1 Audio Layer 3.
– MP3 employs both lossy and lossless compression.
First it analyzes the frequency spread and compares it to
mathematical models of human psychoacoustics (the
study of the interrelation between the ear and the brain),
then it discards information that can’t be heard by
humans. Then the bit stream is compressed using a form
of Huffman encoding to achieve additional compression.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 106


Digitized Images and Graphics
• Digitizing a picture is the act of representing it as a
collection of individual dots called pixels.
• The number of pixels used to represent a picture is
called the resolution.
• The storage of image information on a pixel-by-pixel
basis is called a raster-graphics format. Several
popular raster file formats exist, including bitmap (BMP),
GIF, and JPEG.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 107


Digitized Images and Graphics

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 108


Digitized Images and Graphics

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 109


Digitized Images and Graphics
• GIF / JPG / BMP
– Graphics Interchange Format (GIF)
– Joint Photographic Experts Group (JPEG)
– Bit-mapped graphics (BMP)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 110


Images and Graphics
• Color is our perception of the various frequencies of
light that reach the retinas of our eyes.
• Our retinas have three types of color photoreceptor
cone cells that respond to different sets of frequencies.
These photoreceptor categories correspond to the
colors of red, green, and blue.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 111


Images and Graphics
• Color is often expressed in a computer as an RGB (red-
green-blue) value, which is actually three numbers that
indicate the relative contribution of each of these three
primary colors.
• For example, an RGB value of (255, 255, 0) maximizes
the contribution of red and green, and minimizes the
contribution of blue, which results in a bright yellow.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 112


Images and Graphics
• The amount of data that is used to represent a color is
called the color depth.
– HiColor is a term that indicates a 16-bit color depth. Five
bits are used for each number in an RGB value and the
extra bit is sometimes used to represent transparency.
– TrueColor indicates a 24-bit color depth. Therefore, each
number in an RGB value gets eight bits.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 113
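A small Python sketch of how a 24-bit TrueColor RGB value packs into three 8-bit fields (pack_rgb is an illustrative name):

def pack_rgb(r, g, b):
    """Pack three 8-bit color components into one 24-bit TrueColor value."""
    return (r << 16) | (g << 8) | b

yellow = pack_rgb(255, 255, 0)
print(hex(yellow))                         # 0xffff00 (bright yellow)
print(yellow.bit_length(), "bits used")    # at most 24 bits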


Images and Graphics

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 114


Indexed Color
• A particular application such as a browser may
support only a certain number of specific colors,
creating a palette from which to choose. For example,
the Netscape Navigator’s color palette is

The Netscape color palette

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 115


Representing Images and Graphics
• Mono
– 1 bit (black or white)
• 8 Gray level
• 256 Gray level
– 8 bits (2⁸ = 256)
• 16 colors
– 4 bits
• 256 colors
– 8 bits
• Hi Color
– 16 bits
• True Color
– 24 bits

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 116


Representing Images and Graphics
• Mono vs. 256 Gray level

• 16 colors vs. Hi-Color

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 117


Vector Graphics
• Instead of assigning colors to pixels as we do in raster
graphics, a vector-graphics format describes an image
in terms of lines and geometric shapes. A vector
graphic is a series of commands that describe a line’s
direction, thickness, and color. The file sizes for these
formats tend to be small because every pixel does not
have to be accounted for.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 118


Vector Graphics
• Vector graphics can be resized mathematically, and
these changes can be calculated dynamically as needed.
• However, vector graphics are not good for representing
real-world images.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 119


Representing Video
• A video codec (COmpressor/DECompressor) refers to
the methods used to shrink the size of a movie to allow
it to be played on a computer or over a network.
Almost all video codecs use lossy compression to
minimize the huge amounts of data associated with
video.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 120


Representing Video
• Two types of compression, temporal and spatial.
– Temporal compression
A technique based on differences between consecutive
frames. If most of an image in two frames hasn’t
changed, why should we waste space to duplicate all of
the similar information?
– Spatial compression
A technique based on removing redundant information
within a frame. This problem is essentially the same as
that faced when compressing still images.

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 121


Representing Video

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 122


Reliability of Binary System
• Electronic devices are most reliable in a bistable
environment
• Bistable environment
– Distinguishing only two electronic states
• Current flowing or not
• Direction of flow
• Computers are bistable
– hence binary representations

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 123


Binary Devices
• A binary storage device meets four criteria
– Two stable energy states
– Two states are separated by a large energy barrier
– Possible to sense which state without permanently
destroying the stored value
– Possible to switch the state from 0 to 1, or 1 to 0, by
applying a sufficient amount of energy
• Two hardware techniques
– Magnetic cores (no longer in use)
– transistors

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 124


Binary Devices
• Magnetic core
– Historic device for computer memory (1955-1975)
– Tiny magnetized rings: flow of current sets the direction
of magnetic field (1/50 of an inch)
– Binary values 0 and 1 are represented using the direction
of the magnetic field

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 125


Binary Devices
• Transistors
– Solid-state switches: either permits
or blocks current flow (faster than
mechanical parts)
– Change on/off when given power on
control line
– A control input causes state change
– Extremely small (billions per chip)
– Constructed from semiconductors

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 126


Binary Devices

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 127


Binary Devices

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 128


Binary Devices
• Transistors, Chips, Circuit Boards

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 129


Boolean Logic and Gates
• Boolean logic
– rules for manipulating true/false
• Boolean expressions
– can be converted to circuits
• Hardware design/logic design pertains to the design
and construction of new circuits
• Note that 1/0 of binary representations maps to
true/false of Boolean logic

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 130


Boolean Logic and Gates
• Truth tables lay out true/false values for Boolean
expressions, for each possible true/false input
• Example: (a · b) + (a · ~b)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 131


Boolean Logic and Gates

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 132


Binary Systems
• Binary Logic
– Two discrete values, 0/1, false/true
– Definition
• Binary logic consists of binary variables and logical
operations (AND / OR / NOT)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 133


Boolean Logic
• Boolean logic describes operations on true/false values
• True/false maps easily onto a bistable environment
• Boolean logic operations on electronic signals may be
built out of transistors and other electronic devices

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 134


Boolean Logic
• Boolean expressions
– Constructed by combining together Boolean operations
• Example: (a AND b) OR ((NOT b) AND (NOT a))
• Truth tables capture the output/value of a Boolean
expression
– A column for each input plus the output
– A row for each combination of input values

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 135


Boolean Logic
• Example:
(a AND b) OR ((NOT b) AND (NOT a))

a b Value
0 0 1
0 1 0
1 0 0
1 1 1

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 136
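A minimal Python sketch that builds the truth table above by enumerating every combination of input values (the expression is written directly as a lambda; nothing here comes from the textbook):

from itertools import product

expr = lambda a, b: (a and b) or ((not b) and (not a))

print("a b  value")
for a, b in product([0, 1], repeat=2):
    print(a, b, "", int(expr(a, b)))
# 0 0 -> 1, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1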


Gates
• Gate
– an electronic device that operates on inputs to produce
outputs; each gate corresponds to a Boolean operator
– Hardware devices built from transistors to mimic
Boolean logic

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 137


Gates
• AND gate
– Two input lines, one output line
– Outputs a 1 when both inputs are 1
• OR gate
– Two input lines, one output line
– Outputs a 1 when either input is 1
• NOT gate
– One input line, one output line
– Outputs a 1 when input is 0 and vice versa

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 138


Gates
• Abstraction in hardware design
– Map hardware devices to Boolean logic
– Design more complex devices in terms of logic, not
electronics
– Conversion from logic to hardware design may be
automated
• Logic gate
– Voltage-operated circuits respond to two separate
voltage levels (0/1)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 139


Gates
• State transition

0 → 1 → 0

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 140


Gates
• NOT gate

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 141


Gates
• NAND and AND gates

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 142


Gates
• NOR gate

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 143


Gates
• OR gate

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 144


Computer Circuits
• A circuit is a collection of logic gates:
– Transforms a set of binary inputs into a set of binary
outputs
– Values of the outputs depend only on the current values
of the inputs

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 145


Computer Circuits

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 146


Computer Circuits

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 147


Computer Circuits
• To convert a circuit to a Boolean expression:
– Start with output and work backwards
• Find next gate back, convert to Boolean operator
• Repeat for each input, filling in left and/or right side
• To convert a Boolean expression to a circuit:
– Similar approach
• To build a circuit from desired outcomes:
– Use standard circuit construction algorithm:
• e.g., sum-of-products algorithm

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 148


Computer Circuits
• Circuit types
– Combinational circuits
• have no cycles in them (no outputs feed back into
their own inputs)
– Sequential circuits
• Circuits contain feedback loops in which the output of
a gate is fed back as input to an earlier gate. The
output of these circuits depends not only on the
current input values but also on previous inputs
• Used to build memory units

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 149


Circuit Construction
• Sum-of-products algorithm is one way to design
circuits:
– Truth table to Boolean expression to gate layout

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 150


Circuit Construction
• Step 1. Truth Table Construction

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 151


Circuit Construction
• Step 2. Subexpression construction using AND and
NOT gates

– a = 0, b = 1, c = 0 equals (~a · b · ~c)


– a = 1, b = 1, c = 0 equals (a · b · ~c)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 152


Circuit Construction
• Step 3. Subexpression combination using OR gates

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 153


Circuit Construction
• Step 4. Circuit Diagram Production

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 154
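The four steps above can be sketched in Python: given a truth table, AND together the (possibly negated) inputs for every row whose output is 1, then OR those subexpressions (sum_of_products is an illustrative name, and the example table only lists a few rows for brevity):

def sum_of_products(table, names=("a", "b", "c")):
    """Build a sum-of-products expression from rows mapped to outputs."""
    terms = []
    for inputs, output in table.items():
        if output == 1:                       # one AND term per true row
            term = [n if v else "~" + n for n, v in zip(names, inputs)]
            terms.append("(" + " AND ".join(term) + ")")
    return " OR ".join(terms)

table = {(0, 1, 0): 1, (1, 1, 0): 1,          # rows with output 1
         (0, 0, 0): 0, (1, 0, 0): 0}          # remaining rows omitted
print(sum_of_products(table))
# (~a AND b AND ~c) OR (a AND b AND ~c)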


Circuit Construction
– The above procedure does not always produce an optimal circuit
– 7 gates → 2 gates
– Cost / space / power / heat

• Practice exercise
– Label Output-2
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 155
Circuit Construction
• Example of compare-for-equality (CE) circuit
– Goal
test two unsigned binary numbers for exact equality
output 1 → if two numbers are equal
output 0 → if they are not equal
– First, construct a 1-bit circuit
– Built by combining together 1-bit comparison circuits (1-
CE)
– Integers are equal if corresponding bits are equal (AND
together the 1-CE circuits for each pair of bits)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 156


Circuit Construction
– Truth table

– Case 1 + Case 2 =

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 157


Circuit Construction
• N-bit CE circuit
• Input: a₀a₁…aₙ₋₁ and b₀b₁…bₙ₋₁, where aᵢ and bᵢ are
individual bits
• Pair up corresponding bits: a₀ with b₀, a₁ with b₁, etc.
• Run a 1-CE circuit on each pair
• AND the results

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 158


Circuit Construction
• N-bits compare-for-equality circuit

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 159
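Simulating the N-bit compare-for-equality circuit above in Python: one 1-CE stage per bit pair, then AND all the stage outputs (one_ce and compare_equal are illustrative names):

def one_ce(a, b):
    """1-bit compare-for-equality: 1 when the bits match."""
    return (a & b) | ((1 - a) & (1 - b))      # (a AND b) OR (NOT a AND NOT b)

def compare_equal(a_bits, b_bits):
    """AND together the 1-CE output for every corresponding pair of bits."""
    result = 1
    for a, b in zip(a_bits, b_bits):
        result &= one_ce(a, b)
    return result

print(compare_equal([1, 0, 1, 1], [1, 0, 1, 1]))   # 1 (equal)
print(compare_equal([1, 0, 1, 1], [1, 1, 1, 1]))   # 0 (not equal)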


Circuit Construction
• Addition Circuit
– Adds two unsigned binary integers, setting output bits
and an overflow
– Built from 1-bit adders (1-ADD)
• Starting with the rightmost bits, each pair produces
– A value for that position
– A carry bit for the next place to the left

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 160


Circuit Construction
– 1-ADD truth table
• Input
– One bit from each input integer
– One carry bit (always zero for rightmost bit)
• Output
– One bit for output place value
– One “carry” bit

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 161


Circuit Construction
The 1-ADD Circuit
and Truth Table

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 162


Circuit Construction

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 163


Circuit Construction

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 164


Circuit Construction
– Building the full adder
• Put rightmost bits into 1-ADD, with zero for the input
carry
• Send 1-ADD’s output value to output, and put its
carry value as input to 1-ADD for next bits to left
• Repeat process for all bits

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 165


Circuit Construction
• N-bit adder circuit
• Input: a₀a₁…aₙ₋₁ and b₀b₁…bₙ₋₁, where aᵢ and bᵢ are
individual bits
• a₀ and b₀ are the least significant digits: the ones place
• Pair up corresponding bits: a₀ with b₀, a₁ with b₁, etc.
• Run 1-ADD on a₀ and b₀, with fixed carry-in c₀ = 0
• Feed the carry-out c₁ to the next 1-ADD and repeat

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 166
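A Python sketch of the ripple-carry construction above: a 1-ADD produces a sum bit and a carry bit, and the carry ripples to the next stage (one_add and ripple_add are illustrative names; the bit lists are written least significant bit first):

def one_add(a, b, carry_in):
    """1-bit adder: return (sum bit, carry out)."""
    total = a + b + carry_in
    return total % 2, total // 2

def ripple_add(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = one_add(a, b, carry)
        out.append(s)
    return out, carry          # carry out of the last stage signals overflow

# 0101 (5) + 0011 (3), least significant bit first:
print(ripple_add([1, 0, 1, 0], [1, 1, 0, 0]))   # ([0, 0, 0, 1], 0) -> 1000 = 8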


Circuit Construction

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 167


Circuit Construction
• Many different ways to express the same algorithm
– Page 8, Fig. 1.2 pseudocode
– Page 205, Fig. 4.27 Hardware circuit
– Regardless of whether we use English, pseudocode,
mathematics, or transistors to describe an algorithm, the
fundamental properties are the same:
algorithmic problem solving

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 168


Circuit Construction
• 32-bit adder
– 32 × (3 NOT gates + 16 AND gates + 6 OR gates) = 32 × 25
= 800 gates
– Each AND and OR gate uses 3 transistors
– Each NOT gate uses 1 transistor
– Total: 2208 transistors

• Optimized 32-bit addition circuits need only 500-600
transistors

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 169


Circuit Construction
• Control Circuits
– Do not perform computations
– Choose order of operations or select among data values
– Major types of control circuits
• Multiplexors
– Select one of 2^N inputs to be sent to 1 output
– 2^N regular input lines
– N selector input lines
– 1 output line
• Decoders
– Send a 1 on one output line, based on what the
input lines indicate
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 170
High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 171
Circuit Construction
– Multiplexors
• Select one of 2^N inputs to be sent to 1 output

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 172


Circuit Construction
• 2-1 multiplexor

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 173


Circuit Construction
– Multiplexor purpose
• Given a code number for some input, selects that
input to pass along to its output
• Used to choose the right input value to send to a
computational circuit

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 174
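A Python sketch of a multiplexor with 2^N data lines and N selector lines (mux is an illustrative name, not a textbook circuit):

def mux(inputs, selectors):
    """Pass along the input whose index matches the binary number on the selector lines."""
    index = 0
    for s in selectors:            # selector bits, most significant first
        index = index * 2 + s
    return inputs[index]

# 4 data lines, 2 selector lines: selectors 1,0 choose input line 2.
print(mux([7, 8, 9, 10], [1, 0]))   # 9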


Circuit Construction
– Multiplexor application
supply the correct data

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 175


Circuit Construction
• Decoder
– N input lines
– 2^N output lines
– N input lines indicate a binary number, which is used to
select one of the output lines
– Selected output sends a 1, all others send 0

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 176
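A matching Python sketch of an N-to-2^N decoder (decode is an illustrative name):

def decode(selectors):
    """Set exactly one of the 2^N output lines to 1, chosen by the input number."""
    n = len(selectors)
    index = 0
    for s in selectors:            # input bits, most significant first
        index = index * 2 + s
    outputs = [0] * (2 ** n)
    outputs[index] = 1
    return outputs

# A 2-to-4 decoder: input 1,0 (binary 2) activates output line 2.
print(decode([1, 0]))   # [0, 0, 1, 0]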


Circuit Construction
• A 2-to-4 Decoder Circuit

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 177


Circuit Construction
– Decoder purpose
• Given a number code for some operation, trigger just
that operation to take place
• Numbers might be codes for arithmetic: add, subtract,
etc.
• Decoder signals which operation takes place next

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 178


Circuit Construction
– Decoder application
Select the correct instruction

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 179


Circuit Construction
• Decoder circuit uses
– To select a single arithmetic instruction, given a code for
that instruction
– Code activates one output line, that line activates
corresponding arithmetic circuit
• Multiplexor circuit uses
– To choose one data value from among a set, based on
selector pattern
– Many data values flow into the multiplexor, only the
selected one comes out

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 180


Summary
• Computers use binary representations because they
maximize reliability for electronic systems
• Many kinds of data may be represented at least in an
approximate digital form using binary values
• Boolean logic describes how to build and manipulate
expressions that are true/false
• We can build logic gates that act like Boolean operators
using transistors
• Circuits may be built from logic gates: circuits
correspond to Boolean expressions

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 181


Summary
• Sum-of-products is a circuit design algorithm: takes a
specification and ends with a circuit
• We can build circuits for basic algorithmic tasks:
– Comparisons (compare-for-equality circuit)
– Arithmetic (adder circuit)
– Control (multiplexor and decoder circuits)

High Performance Computing Laboratory@NTCU © Lai, Kuan-Chou 182
