
Data Structures

and

Algorithm Analysis in C

(second edition)
Solutions Manual

Mark Allen Weiss


Florida International University
Preface
Included in this manual are answers to most of the exercises in the textbook Data Structures and
Algorithm Analysis in C, second edition, published by Addison-Wesley. These answers reflect
the state of the book in the first printing.
Specifically omitted are likely programming assignments and any question whose solution is
pointed to by a reference at the end of the chapter. Solutions vary in degree of completeness;
generally, minor details are left to the reader. For clarity, programs are meant to be pseudo-C
rather than completely perfect code.
Errors can be reported to weiss@fiu.edu. Thanks to Grigori Schwarz and Brian Harvey
for pointing out errors in previous incarnations of this manual.
Table of Contents

1. Chapter 1: Introduction
2. Chapter 2: Algorithm Analysis
3. Chapter 3: Lists, Stacks, and Queues
4. Chapter 4: Trees
5. Chapter 5: Hashing
6. Chapter 6: Priority Queues (Heaps)
7. Chapter 7: Sorting
8. Chapter 8: The Disjoint Set ADT
9. Chapter 9: Graph Algorithms
10. Chapter 10: Algorithm Design Techniques
11. Chapter 11: Amortized Analysis
12. Chapter 12: Advanced Data Structures and Implementation

Chapter 1: Introduction

1.3 Because of round-off errors, it is customary to specify the number of decimal places that
should be included in the output and round up accordingly. Otherwise, numbers come out
looking strange. We assume error checks have already been performed; the routine
Separate is left to the reader. Code is shown in Fig. 1.1.
1.4 The general way to do this is to write a procedure with heading

void ProcessFile( const char *FileName );

which opens FileName, does whatever processing is needed, and then closes it. If a line of
the form

#include SomeFile

is detected, then the call

ProcessFile( SomeFile );

is made recursively. Self-referential includes can be detected by keeping a list of files for
which a call to ProcessFile has not yet terminated, and checking this list before making a
new call to ProcessFile.
1.5 (a) The proof is by induction. The theorem is clearly true for 0 < X ≤ 1, since it is true for
X = 1, and for X < 1, log X is negative. It is also easy to see that the theorem holds for
1 < X ≤ 2, since it is true for X = 2, and for X < 2, log X is at most 1. Suppose the theorem
is true for p < X ≤ 2p (where p is a positive integer), and consider any 2p < Y ≤ 4p
(p ≥ 1). Then log Y = 1 + log(Y/2) < 1 + Y/2 < Y/2 + Y/2 ≤ Y, where the first
inequality follows by the inductive hypothesis.
(b) Let 2^X = A. Then A^B = (2^X)^B = 2^(XB). Thus log A^B = XB. Since X = log A, the
theorem is proved.
1.6 (a) The sum is 4/3 and follows directly from the formula.
(b) S = 1/4 + 2/4² + 3/4³ + ..., so 4S = 1 + 2/4 + 3/4² + ... . Subtracting the first equation from
the second gives 3S = 1 + 1/4 + 1/4² + ... . By part (a), 3S = 4/3, so S = 4/9.
(c) S = 1/4 + 4/4² + 9/4³ + ..., so 4S = 1 + 4/4 + 9/4² + 16/4³ + ... . Subtracting the first
equation from the second gives 3S = 1 + 3/4 + 5/4² + 7/4³ + ... . Rewriting, we get
3S = 2 Σ i/4^i + Σ 1/4^i (both sums taken over i ≥ 0). Thus 3S = 2(4/9) + 4/3 = 20/9, so
S = 20/27.
(d) Let S_N = Σ i^N/4^i (over i ≥ 0). Follow the same method as in parts (a)-(c) to obtain a
formula for S_N in terms of S_(N−1), S_(N−2), ..., S_0 and solve the recurrence. Solving the
recurrence is very difficult.

_______________________________________________________________________________

double RoundUp( double N, int DecPlaces )
{
    int i;
    double AmountToAdd = 0.5;

    for( i = 0; i < DecPlaces; i++ )
        AmountToAdd /= 10;
    return N + AmountToAdd;
}

void PrintFractionPart( double FractionPart, int DecPlaces )
{
    int i, ADigit;

    for( i = 0; i < DecPlaces; i++ )
    {
        FractionPart *= 10;
        ADigit = IntPart( FractionPart );
        PrintDigit( ADigit );
        FractionPart = DecPart( FractionPart );
    }
}

void PrintReal( double N, int DecPlaces )
{
    int IntegerPart;
    double FractionPart;

    if( N < 0 )
    {
        putchar( '-' );
        N = -N;
    }
    N = RoundUp( N, DecPlaces );
    IntegerPart = IntPart( N );
    FractionPart = DecPart( N );
    PrintOut( IntegerPart );  /* Using routine in text */
    if( DecPlaces > 0 )
        putchar( '.' );
    PrintFractionPart( FractionPart, DecPlaces );
}
Fig. 1.1.
_______________________________________________________________________________
1.7 The sum of 1/i for i from ⌈N/2⌉ to N equals the sum of 1/i for i from 1 to N minus the sum
of 1/i for i from 1 to ⌈N/2⌉ − 1, which is approximately ln N − ln(N/2) = ln 2.

1.8 2^4 = 16 ≡ 1 (mod 5). (2^4)^25 ≡ 1^25 (mod 5). Thus 2^100 ≡ 1 (mod 5).
1.9 (a) Proof is by induction. The statement is clearly true for N = 1 and N = 2. Assume true
for N = 1, 2, ..., k. Then the sum of F_i for i from 1 to k+1 equals the sum of F_i for i from 1
to k, plus F_(k+1). By the induction hypothesis, the value of the sum on the right is
F_(k+2) − 2 + F_(k+1) = F_(k+3) − 2, where the latter equality follows from the definition
of the Fibonacci numbers. This proves the claim for N = k + 1, and hence for all N.
(b) As in the text, the proof is by induction. Observe that φ + 1 = φ². This implies that
φ^(−1) + φ^(−2) = 1. For N = 1 and N = 2, the statement is true. Assume the claim is true for
N = 1, 2, ..., k. Then
F_(k+1) = F_k + F_(k−1)
by the definition, and we can use the inductive hypothesis on the right-hand side, obtaining
F_(k+1) < φ^k + φ^(k−1)
= φ^(−1)φ^(k+1) + φ^(−2)φ^(k+1)
= (φ^(−1) + φ^(−2))φ^(k+1) = φ^(k+1),
proving the theorem.
(c) See any of the advanced math references at the end of the chapter. The derivation
involves the use of generating functions.
1.10 (a) Σ (2i − 1) = 2 Σ i − Σ 1 = N(N+1) − N = N² (all sums for i from 1 to N).
(b) The easiest way to prove this is by induction. The case N = 1 is trivial. Otherwise, with
all sums taken for i from 1 to N+1,
Σ i³ = (N+1)³ + N²(N+1)²/4
= (N+1)² [(N+1) + N²/4]
= (N+1)² (N² + 4N + 4)/4
= (N+1)²(N+2)²/2²
= [(N+1)(N+2)/2]²
= [Σ i]²

Chapter 2: Algorithm Analysis

2.1 2/N, 37, √N, N, N log log N, N log N, N log(N²), N log²N, N^1.5, N², N² log N, N³,
2^(N/2), 2^N. N log N and N log(N²) grow at the same rate.
2.2 (a) True.
(b) False. A counterexample is T1(N) = 2N, T2(N) = N, and f(N) = N.
(c) False. A counterexample is T1(N) = N², T2(N) = N, and f(N) = N².
(d) False. The same counterexample as in part (c) applies.
2.3 We claim that N log N is the slower growing function. To see this, suppose otherwise.
Then, N^(ε/√(log N)) would grow slower than log N. Taking logs of both sides, we find that,
under this assumption, (ε/√(log N)) log N grows slower than log log N. But the first
expression simplifies to ε√(log N). If L = log N, then we are claiming that ε√L grows slower
than log L, or equivalently, that ε²L grows slower than log²L. But we know that
log²L = o(L), so the original assumption is false, proving the claim.
2.4 Clearly, log^(k1) N = o(log^(k2) N) if k1 < k2, so we need to worry only about positive
integers. The claim is clearly true for k = 0 and k = 1. Suppose it is true for k < i. Then, by
L'Hospital's rule,
lim (N→∞) log^i N / N = lim (N→∞) i log^(i−1) N / N
The second limit is zero by the inductive hypothesis, proving the claim.
2.5 Let f(N) = 1 when N is even, and N when N is odd. Likewise, let g(N) = 1 when N is
odd, and N when N is even. Then the ratio f(N) / g(N) oscillates between 0 and ∞.
2.6 For all these programs, the following analysis will agree with a simulation:
(I) The running time is O(N).
(II) The running time is O(N²).
(III) The running time is O(N³).
(IV) The running time is O(N²).
(V) j can be as large as i², which could be as large as N². k can be as large as j, which is
N². The running time is thus proportional to N·N²·N², which is O(N⁵).
(VI) The if statement is executed at most N³ times, by previous arguments, but it is true
only O(N²) times (because it is true exactly i times for each i). Thus the innermost loop is
only executed O(N²) times. Each time through, it takes O(j²) = O(N²) time, for a total of
O(N⁴). This is an example where multiplying loop sizes can occasionally give an
overestimate.
2.7 (a) It should be clear that all algorithms generate only legal permutations. The first two
algorithms have tests to guarantee no duplicates; the third algorithm works by shuffling an
array that initially has no duplicates, so none can occur. It is also clear that the first two
algorithms are completely random, and that each permutation is equally likely. The third
algorithm, due to R. Floyd, is not as obvious; the correctness can be proved by induction. See
J. Bentley, "Programming Pearls," Communications of the ACM 30 (1987), 754-757.
Note that if the second line of algorithm 3 is replaced with the statement
Swap( A[i], A[ RandInt( 0, N-1 ) ] );
then not all permutations are equally likely. To see this, notice that for N = 3, there are 27
equally likely ways of performing the three swaps, depending on the three random integers.
Since there are only 6 permutations, and 6 does not evenly divide 27, each permutation
cannot possibly be equally represented.
(b) For the first algorithm, the time to decide if a random number to be placed in A[i] has
not been used earlier is O(i). The expected number of random numbers that need to be
tried is N/(N − i). This is obtained as follows: i of the N numbers would be duplicates.
Thus the probability of success is (N − i)/N, and the expected number of independent
trials is N/(N − i). The time bound is thus
Σ (i=0 to N−1) iN/(N − i) < Σ (i=0 to N−1) N²/(N − i) = N² Σ (j=1 to N) 1/j = O(N² log N)
The second algorithm saves a factor of i for each random number, and thus reduces the time
bound to O(N log N) on average. The third algorithm is clearly linear.
(c, d) The running times should agree with the preceding analysis if the machine has enough
memory. If not, the third algorithm will not seem linear because of a drastic increase for
large N.
(e) The worst-case running time of algorithms I and II cannot be bounded because there is
always a finite probability that the program will not terminate by some given time T. The
algorithm does, however, terminate with probability 1. The worst-case running time of the
third algorithm is linear; its running time does not depend on the sequence of random
numbers.
2.8 Algorithm 1 would take about 5 days for N = 10,000, 14.2 years for N = 100,000 and 140
centuries for N = 1,000,000. Algorithm 2 would take about 3 hours for N = 100,000 and
about 2 weeks for N = 1,000,000. Algorithm 3 would use 1½ minutes for N = 1,000,000.
These calculations assume a machine with enough memory to hold the array. Algorithm 4
solves a problem of size 1,000,000 in 3 seconds.
2.9 (a) O(N²).
(b) O(N log N).
2.10 (c) The algorithm is linear.
2.11 Use a variation of binary search to get an O(log N) solution (assuming the array is preread).
2.13 (a) Test to see if N is an odd number (or 2) and is not divisible by 3, 5, 7, ..., √N.
(b) O(√N), assuming that all divisions count for one unit of time.
(c) B = O(log N).
(d) O(2^(B/2)).
(e) If a 20-bit number can be tested in time T, then a 40-bit number would require about T²
time.
(f) B is the better measure because it more accurately represents the size of the input.

2.14 The running time is proportional to N times the sum of the reciprocals of the primes less
than N. This is O(N log log N). See Knuth, Volume 2, page 394.
2.15 Compute X², X⁴, X⁸, X¹⁰, X²⁰, X⁴⁰, X⁶⁰, and X⁶².
2.16 Maintain an array PowersOfX that can be filled in a for loop. The array will contain X, X²,
X⁴, up to X^(2^⌊log N⌋). The binary representation of N (which can be obtained by testing
even or odd and then dividing by 2, until all bits are examined) can be used to multiply the
appropriate entries of the array.
2.17 For N = 0 or N = 1, the number of multiplies is zero. If b(N) is the number of ones in the
binary representation of N, then if N > 1, the number of multiplies used is
⌊log N⌋ + b(N) − 1

2.18 (a) A.
(b) B.
(c) The information given is not sufficient to determine an answer. We have only worst-
case bounds.
(d) Yes.
2.19 (a) Recursion is unnecessary if there are two or fewer elements.
(b) One way to do this is to note that if the first N−1 elements have a majority, then the last
element cannot change this. Otherwise, the last element could be a majority. Thus if N is
odd, ignore the last element. Run the algorithm as before. If no majority element emerges,
then return the Nth element as a candidate.
(c) The running time is O(N), and satisfies T(N) = T(N/2) + O(N).
(d) One copy of the original needs to be saved. After this, the B array, and indeed the
recursion, can be avoided by placing each B_i in the A array. The difference is that the
original recursive strategy implies that O(log N) arrays are used; this guarantees only two
copies.
2.20 Otherwise, we could perform operations in parallel by cleverly encoding several integers
into one. For instance, if A = 001, B = 101, C = 111, D = 100, we could add A and B at the
same time as C and D by adding 00A00C + 00B00D. We could extend this to add N pairs
of numbers at once in unit cost.
2.22 No. If Low = 1, High = 2, then Mid = 1, and the recursive call does not make progress.
2.24 No. As in Exercise 2.22, no progress is made.

Chapter 3: Lists, Stacks, and Queues

3.2 The comments for Exercise 3.4 regarding the amount of abstractness used apply here. The
running time of the procedure in Fig. 3.1 is O(L + P).

_______________________________________________________________________________

void
PrintLots( List L, List P )
{
int Counter;
Position Lpos, Ppos;

Lpos = First( L );
Ppos = First( P );
Counter = 1;
while( Lpos != NULL && Ppos != NULL )
{
if( Ppos->Element == Counter++ )
{
printf( "%? ", Lpos->Element );
Ppos = Next( Ppos, P );
}
Lpos = Next( Lpos, L );
}
}
Fig. 3.1.
_______________________________________________________________________________

3.3 (a) For singly linked lists, the code is shown in Fig. 3.2.

_______________________________________________________________________________

/* BeforeP is the cell before the two adjacent cells that are to be swapped. */
/* Error checks are omitted for clarity. */

void
SwapWithNext( Position BeforeP, List L )
{
Position P, AfterP;

P = BeforeP->Next;
AfterP = P->Next; /* Both P and AfterP assumed not NULL. */

P->Next = AfterP->Next;
BeforeP->Next = AfterP;
AfterP->Next = P;
}
Fig. 3.2.
_______________________________________________________________________________

(b) For doubly linked lists, the code is shown in Fig. 3.3.
_______________________________________________________________________________

/* P and AfterP are cells to be switched. Error checks as before. */

void
SwapWithNext( Position P, List L )
{
Position BeforeP, AfterP;

BeforeP = P->Prev;
AfterP = P->Next;

P->Next = AfterP->Next;
BeforeP->Next = AfterP;
AfterP->Next = P;
P->Next->Prev = P;
P->Prev = AfterP;
AfterP->Prev = BeforeP;
}
Fig. 3.3.
_______________________________________________________________________________

3.4 Intersect is shown below.
_______________________________________________________________________________

/* This code can be made more abstract by using operations such as */


/* Retrieve and IsPastEnd to replace L1Pos->Element and L1Pos != NULL. */
/* We have avoided this because these operations were not rigorously defined. */

List
Intersect( List L1, List L2 )
{
List Result;
Position L1Pos, L2Pos, ResultPos;

L1Pos = First( L1 ); L2Pos = First( L2 );


Result = MakeEmpty( NULL );
ResultPos = First( Result );
while( L1Pos != NULL && L2Pos != NULL )
{
if( L1Pos->Element < L2Pos->Element )
L1Pos = Next( L1Pos, L1 );
else if( L1Pos->Element > L2Pos->Element )
L2Pos = Next( L2Pos, L2 );
else
{
Insert( L1Pos->Element, Result, ResultPos );
L1Pos = Next( L1Pos, L1 ); L2Pos = Next( L2Pos, L2 );
ResultPos = Next( ResultPos, Result );
}
}
return Result;
}
_______________________________________________________________________________

3.5 Fig. 3.4 contains the code for Union.

3.7 (a) One algorithm is to keep the result in a sorted (by exponent) linked list. Each of the MN
multiplies requires a search of the linked list for duplicates. Since the size of the linked list
is O(MN), the total running time is O(M²N²).
(b) The bound can be improved by multiplying one term by the entire other polynomial, and
then using the equivalent of the procedure in Exercise 3.2 to insert the entire sequence.
Then each sequence takes O(MN), but there are only M of them, giving a time bound of
O(M²N).
(c) An O(MN log MN) solution is possible by computing all MN pairs and then sorting by
exponent using any algorithm in Chapter 7. It is then easy to merge duplicates afterward.
(d) The choice of algorithm depends on the relative values of M and N. If they are close,
then the solution in part (c) is better. If one polynomial is very small, then the solution in
part (b) is better.

_______________________________________________________________________________

List
Union( List L1, List L2 )
{
List Result;
ElementType InsertElement;
Position L1Pos, L2Pos, ResultPos;

L1Pos = First( L1 ); L2Pos = First( L2 );


Result = MakeEmpty( NULL );
ResultPos = First( Result );
while ( L1Pos != NULL && L2Pos != NULL ) {
if( L1Pos->Element < L2Pos->Element ) {
InsertElement = L1Pos->Element;
L1Pos = Next( L1Pos, L1 );
}
else if( L1Pos->Element > L2Pos->Element ) {
InsertElement = L2Pos->Element;
L2Pos = Next( L2Pos, L2 );
}
else {
InsertElement = L1Pos->Element;
L1Pos = Next( L1Pos, L1 ); L2Pos = Next( L2Pos, L2 );
}
Insert( InsertElement, Result, ResultPos );
ResultPos = Next( ResultPos, Result );
}
/* Flush out remaining list */
while( L1Pos != NULL ) {
Insert( L1Pos->Element, Result, ResultPos );
L1Pos = Next( L1Pos, L1 ); ResultPos = Next( ResultPos, Result );
}
while( L2Pos != NULL ) {
Insert( L2Pos->Element, Result, ResultPos );
L2Pos = Next( L2Pos, L2 ); ResultPos = Next( ResultPos, Result );
}
return Result;
}
Fig. 3.4.
_______________________________________________________________________________
3.8 One can use the Pow function in Chapter 2, adapted for polynomial multiplication. If P is
small, a standard method that uses O(P) multiplies instead of O(log P) might be better
because the multiplies would involve a large number with a small number, which is good
for the multiplication routine in part (b).
3.10 This is a standard programming project. The algorithm can be sped up by setting
M′ = M mod N, so that the hot potato never goes around the circle more than once, and
then if M′ > N/2, passing the potato appropriately in the alternative direction. This
requires a doubly linked list. The worst-case running time is clearly O(N min(M, N)),
although when these heuristics are used, and M and N are comparable, the algorithm might
be significantly faster. If M = 1, the algorithm is clearly linear. The VAX/VMS C
compiler's memory management routines do poorly with the particular pattern of frees in
this case, causing O(N log N) behavior.

3.12 Reversal of a singly linked list can be done nonrecursively by using a stack, but this
requires O(N) extra space. The solution in Fig. 3.5 is similar to strategies employed in
garbage collection algorithms. At the top of the while loop, the list from the start to
PreviousPos is already reversed, whereas the rest of the list, from CurrentPos to the end, is
normal. This algorithm uses only constant extra space.


_______________________________________________________________________________

/* Assuming no header and L is not empty. */

List
ReverseList( List L )
{
Position CurrentPos, NextPos, PreviousPos;

PreviousPos = NULL;
CurrentPos = L;
NextPos = L->Next;
while( NextPos != NULL )
{
CurrentPos->Next = PreviousPos;
PreviousPos = CurrentPos;
CurrentPos = NextPos;
NextPos = NextPos->Next;
}
CurrentPos->Next = PreviousPos;
return CurrentPos;
}
Fig. 3.5.
_______________________________________________________________________________

3.15 (a) The code is shown in Fig. 3.6.


(b) See Fig. 3.7.
(c) This follows from well-known statistical theorems. See Sleator and Tarjan’s paper in
the Chapter 11 references.
3.16 (c) Delete takes O(N) and is in two nested for loops each of size N, giving an obvious
O(N³) bound. A better bound of O(N²) is obtained by noting that only N elements can be
deleted from a list of size N, hence O(N²) is spent performing deletes. The remainder of
the routine is O(N²), so the bound follows.
(d) O(N²).

_______________________________________________________________________________

/* Array implementation, starting at slot 1 */

Position
Find( ElementType X, List L )
{
int i, Where;

Where = 0;
for( i = 1; i <= L.SizeOfList; i++ )
if( X == L[i].Element )
{
Where = i;
break;
}

if( Where ) /* Move to front. */


{
for( i = Where; i > 1; i-- )
L[i].Element = L[i-1].Element;
L[1].Element = X;
return 1;
}
else
return 0; /* Not found. */
}
Fig. 3.6.
_______________________________________________________________________________
(e) Sort the list, and make a scan to remove duplicates (which must now be adjacent).
3.17 (a) The advantages are that it is simpler to code, and there is a possible savings if deleted
keys are subsequently reinserted (in the same place). The disadvantage is that it uses more
space, because each cell needs an extra bit (which is typically a byte), and unused cells are
not freed.
3.21 Two stacks can be implemented in an array by having one grow from the low end of the
array up, and the other from the high end down.
3.22 (a) Let E be our extended stack. We will implement E with two stacks. One stack, which
we'll call S, is used to keep track of the Push and Pop operations, and the other, M, keeps
track of the minimum. To implement Push(X,E), we perform Push(X,S). If X is smaller
than or equal to the top element in stack M, then we also perform Push(X,M). To
implement Pop(E), we perform Pop(S). If the popped value X is equal to the top element in
stack M, then we also Pop(M). FindMin(E) is performed by examining the top of M. All
these operations are clearly O(1).
(b) This result follows from a theorem in Chapter 7 that shows that sorting must take
Ω(N log N) time. O(N) operations in the repertoire, including DeleteMin, would be
sufficient to sort.

_______________________________________________________________________________

/* Assuming a header. */

Position
Find( ElementType X, List L )
{
Position PrevPos, XPos;

PrevPos = FindPrevious( X, L );
if( PrevPos->Next != NULL ) /* Found. */
{
XPos = PrevPos ->Next;
PrevPos->Next = XPos->Next;
XPos->Next = L->Next;
L->Next = XPos;
return XPos;
}
else
return NULL;
}
Fig. 3.7.
_______________________________________________________________________________
3.23 Three stacks can be implemented by having one grow from the bottom up, another from the
top down, and a third somewhere in the middle growing in some (arbitrary) direction. If the
third stack collides with either of the other two, it needs to be moved. A reasonable strategy
is to move it so that its center (at the time of the move) is halfway between the tops of the
other two stacks.
3.24 Stack space will not run out because only 49 calls will be stacked. However, the running
time is exponential, as shown in Chapter 2, and thus the routine will not terminate in a
reasonable amount of time.
3.25 The queue data structure consists of pointers Q->Front and Q->Rear, which point to the
beginning and end of a linked list. The programming details are left as an exercise because
it is a likely programming assignment.
3.26 (a) This is a straightforward modification of the queue routines. It is also a likely
programming assignment, so we do not provide a solution.

Chapter 4: Trees

4.1 (a) A.
(b) G, H, I, L, M, and K.
4.2 For node B:
(a) A.
(b) D and E.
(c) C.
(d) 1.
(e) 3.
4.3 4.
4.4 There are N nodes. Each node has two pointers, so there are 2N pointers. Each node but
the root has one incoming pointer from its parent, which accounts for N − 1 pointers. The
rest are NULL.

4.5 Proof is by induction. The theorem is trivially true for H = 0. Assume true for H = 1, 2, ...,
k. A tree of height k+1 can have two subtrees of height at most k. These can have at most
2^(k+1) − 1 nodes each by the induction hypothesis. These 2^(k+2) − 2 nodes plus the root
prove the theorem for height k+1 and hence for all heights.

4.6 This can be shown by induction. Alternatively, let N = number of nodes, F = number of
full nodes, L = number of leaves, and H = number of half nodes (nodes with one child).
Clearly, N = F + H + L. Further, 2F + H = N − 1 (see Exercise 4.4). Subtracting yields
L − F = 1.

4.7 This can be shown by induction. In a tree with no nodes, the sum is zero, and in a one-node
tree, the root is a leaf at depth zero, so the claim is true. Suppose the theorem is true for all
trees with at most k nodes. Consider any tree with k+1 nodes. Such a tree consists of an
i-node left subtree and a (k − i)-node right subtree. By the inductive hypothesis, the sum for
the left subtree leaves is at most one with respect to the left subtree root. Because all leaves are
one deeper with respect to the original tree than with respect to the subtree, the sum is at
most 1/2 with respect to the root. Similar logic implies that the sum for leaves in the right
subtree is at most 1/2, proving the theorem. The equality is true if and only if there are no
nodes with one child. If there is a node with one child, the equality cannot be true because
adding the second child would increase the sum to higher than 1. If no nodes have one
child, then we can find and remove two sibling leaves, creating a new tree. It is easy to see
that this new tree has the same sum as the old. Applying this step repeatedly, we arrive at a
single node, whose sum is 1. Thus the original tree had sum 1.
4.8 (a) - * * a b + c d e.
(b) ( ( a * b ) * ( c + d ) ) - e.
(c) a b * c d + * e -.

4.9 [Figures of the resulting binary search trees omitted; the diagrams do not survive
plain-text extraction.]

4.11 This problem is not much different from the linked list cursor implementation. We maintain
an array of records consisting of an element field, and two integers, left and right. The free
list can be maintained by linking through the left field. It is easy to write the CursorNew
and CursorDispose routines, and substitute them for malloc and free.

4.12 (a) Keep a bit array B. If i is in the tree, then B[i] is true; otherwise, it is false. Repeatedly
generate random integers until an unused one is found. If there are N elements already in
the tree, then M − N are not, and the probability of finding one of these is (M − N)/M.
Thus the expected number of trials is M/(M − N) = α/(α − 1).
(b) To find an element that is in the tree, repeatedly generate random integers until an
already-used integer is found. The probability of finding one is N/M, so the expected
number of trials is M/N = α.
(c) The total cost for one insert and one delete is α/(α − 1) + α = 1 + α + 1/(α − 1). Setting
α = 2 minimizes this cost.
4.15 (a) N(0) = 1, N(1) = 2, N(H) = N(H−1) + N(H−2) + 1.
(b) The heights are one less than the Fibonacci numbers.

4.16 [Figure of the resulting tree omitted; the diagram does not survive plain-text extraction.]

4.17 It is easy to verify by hand that the claim is true for 1 ≤ k ≤ 3. Suppose it is true for k = 1,
2, 3, ..., H. Then after the first 2^H − 1 insertions, 2^(H−1) is at the root, and the right
subtree is a balanced tree containing 2^(H−1) + 1 through 2^H − 1. Each of the next
2^(H−1) insertions, namely, 2^H through 2^H + 2^(H−1) − 1, inserts a new maximum and
gets placed in the right subtree, eventually forming a perfectly balanced right subtree of
height H−1. This follows by the induction hypothesis because the right subtree may be
viewed as being formed from the successive insertion of 2^(H−1) + 1 through
2^H + 2^(H−1) − 1. The next insertion forces an imbalance at the root, and thus a single
rotation. It is easy to check that this brings 2^H to the root and creates a perfectly balanced
left subtree of height H−1. The new key is attached to a perfectly balanced right subtree of
height H−2 as the last node in the right path. Thus the right subtree is exactly as if the
nodes 2^H + 1 through 2^H + 2^(H−1) were inserted in order. By the inductive hypothesis,
the subsequent successive insertion of 2^H + 2^(H−1) + 1 through 2^(H+1) − 1 will create a
perfectly balanced right subtree of height H−1. Thus after the last insertion, both the left
and the right subtrees are perfectly balanced, and of the same height, so the entire tree of
2^(H+1) − 1 nodes is perfectly balanced (and has height H).

4.18 The two remaining functions are mirror images of the text procedures. Just switch Right
and Left everywhere.

4.20 After applying the standard binary search tree deletion algorithm, nodes on the deletion path
need to have their balance changed, and rotations may need to be performed. Unlike insertion,
more than one node may need rotation.
4.21 (a) O(log log N).
(b) The minimum AVL tree of height 255 (a huge tree).

4.22
_______________________________________________________________________________

Position
DoubleRotateWithLeft( Position K3 )
{
Position K1, K2;

K1 = K3->Left;
K2 = K1->Right;

K1->Right = K2->Left;
K3->Left = K2->Right;
K2->Left = K1;
K2->Right = K3;
K1->Height = Max( Height(K1->Left), Height(K1->Right) ) + 1;
K3->Height = Max( Height(K3->Left), Height(K3->Right) ) + 1;
K2->Height = Max( K1->Height, K3->Height ) + 1;

return K2;  /* K2 is the new root of this subtree */
}
_______________________________________________________________________________

4.23 After accessing 3,

2 10

1 4 11

6 12

5 8 13

7 9

After accessing 9,

3 10

2 4 11

1 8 12

6 13

5 7

After accessing 1,

2 10

3 11

4 12

8 13

5 7

After accessing 5,

1 9

2 6 10

4 8 11

3 7 12

13

4.24

1 9

2 8 10

4 7 11

3 12

13

4.25 (a) 523776.


(b) 262166, 133114, 68216, 36836, 21181, 13873.
(c) After Find(9).

4.26 (a) An easy proof by induction.


4.28 (a-c) All these routines take linear time.
_______________________________________________________________________________

/* These functions use the type BinaryTree, which is the same */
/* as TreeNode *, in Fig. 4.16. */

int
CountNodes( BinaryTree T )
{
if( T == NULL )
return 0;
return 1 + CountNodes(T->Left) + CountNodes(T->Right);
}

int
CountLeaves( BinaryTree T )
{
if( T == NULL )
return 0;
else if( T->Left == NULL && T->Right == NULL )
return 1;
return CountLeaves(T->Left) + CountLeaves(T->Right);
}
_______________________________________________________________________________

_______________________________________________________________________________

/* An alternative method is to use the results of Exercise 4.6. */

int
CountFull( BinaryTree T )
{
if( T == NULL )
return 0;
return ( T->Left != NULL && T->Right != NULL ) +
CountFull(T->Left) + CountFull(T->Right);
}
_______________________________________________________________________________
4.29 We assume the existence of a function RandInt(Lower,Upper), which generates a uniform
random integer in the appropriate closed interval. MakeRandomTree returns NULL if N is
not positive, or if N is so large that memory is exhausted.

_______________________________________________________________________________

SearchTree
MakeRandomTree1( int Lower, int Upper )
{
SearchTree T;
int RandomValue;

T = NULL;
if( Lower <= Upper )
{
T = malloc( sizeof( struct TreeNode ) );
if( T != NULL )
{
T->Element = RandomValue = RandInt( Lower, Upper );
T->Left = MakeRandomTree1( Lower, RandomValue - 1 );
T->Right = MakeRandomTree1( RandomValue + 1, Upper );
}
else
FatalError( "Out of space!" );
}
return T;
}

SearchTree
MakeRandomTree( int N )
{
return MakeRandomTree1( 1, N );
}
_______________________________________________________________________________

4.30
_______________________________________________________________________________

/* LastNode is the address containing last value that was assigned to a node */

SearchTree
GenTree( int Height, int *LastNode )
{
SearchTree T;

if( Height >= 0 )
{
T = malloc( sizeof( *T ) ); /* Error checks omitted; see Exercise 4.29. */
T->Left = GenTree( Height - 1, LastNode );
T->Element = ++*LastNode;
T->Right = GenTree( Height - 2, LastNode );
return T;
}
else
return NULL;
}

SearchTree
MinAvlTree( int H )
{
int LastNodeAssigned = 0;
return GenTree( H, &LastNodeAssigned );
}
_______________________________________________________________________________
4.31 There are two obvious ways of solving this problem. One way mimics Exercise 4.29 by
replacing RandInt(Lower,Upper) with (Lower+Upper) / 2. This requires computing
2^(H+1) − 1, which is not that difficult. The other mimics the previous exercise by noting
that the heights of the subtrees are both H−1. The solution follows:

_______________________________________________________________________________

/* LastNode is the address containing last value that was assigned to a node. */

SearchTree
GenTree( int Height, int *LastNode )
{
SearchTree T = NULL;

if( Height >= 0 )
{
T = malloc( sizeof( *T ) ); /* Error checks omitted; see Exercise 4.29. */
T->Left = GenTree( Height - 1, LastNode );
T->Element = ++*LastNode;
T->Right = GenTree( Height - 1, LastNode );
}
return T;
}

SearchTree
PerfectTree( int H )
{
int LastNodeAssigned = 0;
return GenTree( H, &LastNodeAssigned );
}
_______________________________________________________________________________
4.32 This is known as one-dimensional range searching. The time is O(K) to perform the
inorder traversal, if a significant number of nodes are found, and also proportional to the
depth of the tree, if we get to some leaves (for instance, if no nodes are found). Since the
average depth is O(log N), this gives an O(K + log N) average bound.
_______________________________________________________________________________

void
PrintRange( ElementType Lower, ElementType Upper, SearchTree T )
{
if( T != NULL )
{
if( Lower <= T->Element )
PrintRange( Lower, Upper, T->Left );
if( Lower <= T->Element && T->Element <= Upper )
PrintLine( T->Element );
if( T->Element <= Upper )
PrintRange( Lower, Upper, T->Right );
}
}
_______________________________________________________________________________

4.33 This exercise and Exercise 4.34 are likely programming assignments, so we do not provide
code here.
4.35 Put the root on an empty queue. Then repeatedly Dequeue a node and Enqueue its left and
right children (if any) until the queue is empty. This is O(N) because each queue operation
is constant time and there are N Enqueue and N Dequeue operations.
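This level-order traversal can be sketched as follows (the array-based queue and the Out
parameter are illustrative simplifications, not the book's declarations):

```c
#include <stdlib.h>

struct TreeNode
{
    int Element;
    struct TreeNode *Left;
    struct TreeNode *Right;
};

/* Visit the nodes level by level: Dequeue a node, record it, Enqueue its children. */
/* Writes the elements into Out and returns the number of nodes visited. */
int
LevelOrder( struct TreeNode *Root, int Out[ ], int MaxSize )
{
    struct TreeNode **Queue = malloc( MaxSize * sizeof( struct TreeNode * ) );
    int Front = 0, Back = 0, Count = 0;

    if( Root != NULL )
        Queue[ Back++ ] = Root;
    while( Front < Back )
    {
        struct TreeNode *T = Queue[ Front++ ];      /* Dequeue */
        Out[ Count++ ] = T->Element;
        if( T->Left != NULL )
            Queue[ Back++ ] = T->Left;              /* Enqueue */
        if( T->Right != NULL )
            Queue[ Back++ ] = T->Right;
    }
    free( Queue );
    return Count;
}
```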

4.36 (a)

6:-

2:4 8:-

0,1 2, 3 4, 5 6, 7 8, 9

(b)

4:6

1,2,3 4, 5 6,7,8

4.39

B C G

D E F N

H I J K L M

O P Q R

4.41 The function shown here is clearly a linear time routine because in the worst case it does a
traversal on both T1 and T2.

_______________________________________________________________________________

int
Similar( BinaryTree T1, BinaryTree T2 )
{
if( T1 == NULL || T2 == NULL )
return T1 == NULL && T2 == NULL;
return Similar( T1->Left, T2->Left ) && Similar( T1->Right, T2->Right );
}
_______________________________________________________________________________

4.43 The easiest solution is to compute, in linear time, the inorder numbers of the nodes in both
trees. If the inorder number of the root of T2 is x, then find x in T1 and rotate it to the root.
Recursively apply this strategy to the left and right subtrees of T1 (by looking at the values
in the root of T2's left and right subtrees). If d_N is the depth of x, then the running time
satisfies T(N) = T(i) + T(N−i−1) + d_N, where i is the size of the left subtree. In the worst
case, d_N is always O(N), and i is always 0, so the worst-case running time is quadratic.
Under the plausible assumption that all values of i are equally likely, then even if d_N is
always O(N), the average value of T(N) is O(N log N). This is a common recurrence that
was already formulated in the chapter and is solved in Chapter 7. Under the more reasonable
assumption that d_N is typically logarithmic, the running time is O(N).

4.44 Add a field to each node indicating the size of the tree it roots. This allows computation of
its inorder traversal number.
4.45 (a) You need an extra bit for each thread.
(c) You can do tree traversals somewhat easier and without recursion. The disadvantage is
that it reeks of old-style hacking.

Chapter 5: Hashing

5.1 (a) On the assumption that we add collisions to the end of the list (which is the easier way if
a hash table is being built by hand), the separate chaining hash table that results is shown
here.

0
1 4371
2
3 1323 6173
4 4344
5
6
7
8
9 4199 9679 1989

(b)

0 9679
1 4371
2 1989
3 1323
4 6173
5 4344
6
7
8
9 4199

(c)

0 9679
1 4371
2
3 1323
4 6173
5 4344
6
7
8 1989
9 4199

(d) 1989 cannot be inserted into the table because hash2(1989) = 6, and the alternative
locations 5, 1, 7, and 3 are already taken. The table at this point is as follows:

0
1 4371
2
3 1323
4 6173
5 9679
6
7 4344
8
9 4199

5.2 When rehashing, we choose a table size that is roughly twice as large and prime. In our
case, the appropriate new table size is 19, with hash function h(x) = x (mod 19).

(a) Scanning down the separate chaining hash table, the new locations are 4371 in list 1,
1323 in list 12, 6173 in list 17, 4344 in list 12, 4199 in list 0, 9679 in list 8, and 1989 in list
13.
(b) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in
bucket 12, 6173 in bucket 17, 4344 in bucket 14 because both 12 and 13 are already
occupied, and 4199 in bucket 0.
(c) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in
bucket 12, 6173 in bucket 17, 4344 in bucket 16 because both 12 and 13 are already
occupied, and 4199 in bucket 0.
(d) The new locations are 9679 in bucket 8, 4371 in bucket 1, 1989 in bucket 13, 1323 in
bucket 12, 6173 in bucket 17, 4344 in bucket 15 because 12 is already occupied, and 4199
in bucket 0.
5.4 We must be careful not to rehash too often. Let p be the threshold (fraction of table size) at
which we rehash to a smaller table. Then if the new table has size N, it contains 2pN
elements. This table will require rehashing after either 2N − 2pN insertions or pN deletions.
Balancing these costs suggests that a good choice is p = 2/3. For instance, suppose we
have a table of size 300. If we rehash at 200 elements, then the new table size is N = 150,
and we can do either 100 insertions or 100 deletions until a new rehash is required.
If we know that insertions are more frequent than deletions, then we might choose p to be
somewhat larger. If p is too close to 1.0, however, then a sequence of a small number of
deletions followed by insertions can cause frequent rehashing. In the worst case, if p = 1.0,
then alternating deletions and insertions both require rehashing.
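The balancing argument can be sanity-checked with the cost model above; this throwaway
sketch (mine, not the manual's) computes how many operations of the worse kind are
guaranteed before the next rehash:

```c
/* With shrink threshold p, a freshly rehashed table of size N holds 2pN elements; */
/* it must rehash after 2N - 2pN insertions or after pN deletions. We want the    */
/* smaller of the two guarantees to be as large as possible.                      */
double
OperationsUntilRehash( double P, double N )
{
    double Grow = 2.0 * N - 2.0 * P * N;
    double Shrink = P * N;

    return Grow < Shrink ? Grow : Shrink;
}
```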


5.5 (a) Since each table slot is eventually probed, if the table is not empty, the collision can be
resolved.
(b) This seems to eliminate primary clustering but not secondary clustering because all
elements that hash to some location will try the same collision resolution sequence.
(c, d) The running time is probably similar to quadratic probing. The advantage here is that
the insertion can't fail unless the table is full.
(e) A method of generating numbers that are not random (or even pseudorandom) is given
in the references. An alternative is to use the method in Exercise 2.7.
5.6 Separate chaining hashing requires the use of pointers, which costs some memory, and the
standard implementation calls on memory allocation routines, which typically are
expensive. Linear probing is easily implemented, but performance degrades severely as the
load factor increases because of primary clustering. Quadratic probing is only slightly more
difficult to implement and gives good performance in practice. An insertion can fail if the
table is half empty, but this is not likely. Even if it were, such an insertion would be so
expensive that it wouldn't matter and would almost certainly point up a weakness in the
hash function. Double hashing eliminates primary and secondary clustering, but the
computation of a second hash function can be costly. Gonnet and Baeza-Yates [8] compare
several hashing strategies; their results suggest that quadratic probing is the fastest method.
5.7 Sorting the MN records and eliminating duplicates would require O(MN log MN) time
using a standard sorting algorithm. If terms are merged by using a hash function, then the
merging time is constant per term for a total of O(MN). If the output polynomial is small
and has only O(M + N) terms, then it is easy to sort it in O((M + N) log (M + N)) time,
which is less than O(MN). Thus the total is O(MN). This bound is better because the
model is less restrictive: Hashing is performing operations on the keys rather than just
comparison between the keys. A similar bound can be obtained by using bucket sort instead
of a standard sorting algorithm. Operations such as hashing are much more expensive than
comparisons in practice, so this bound might not be an improvement. On the other hand, if
the output polynomial is expected to have only O(M + N) terms, then using a hash table
saves a huge amount of space, since under these conditions, the hash table needs only
O(M + N) space.

Another method of implementing these operations is to use a search tree instead of a hash
table; a balanced tree is required because elements are inserted in the tree with too much
order. A splay tree might be particularly well suited for this type of a problem because it
does well with sequential accesses. Comparing the different ways of solving the problem is
a good programming assignment.
5.8 The table size would be roughly 60,000 entries. Each entry holds 8 bytes, for a total of
480,000 bytes.
5.9 (a) This statement is true.
(b) If a word hashes to a location with value 1, there is no guarantee that the word is in the
dictionary. It is possible that it just hashes to the same value as some other word in the
dictionary. In our case, the table is approximately 10% full (30,000 words in a table of
300,007), so there is a 10% chance that a word that is not in the dictionary happens to hash
out to a location with value 1.
(c) 300,007 bits is 37,501 bytes on most machines.
(d) As discussed in part (b), the algorithm will fail to detect one in ten misspellings on
average.
(e) A 20-page document would have about 60 misspellings. This algorithm would be
expected to detect 54. A table three times as large would still fit in about 100K bytes and
reduce the expected number of errors to two. This is good enough for many applications,
especially since spelling detection is a very inexact science. Many misspelled words
(especially short ones) are still words. For instance, typing them instead of then is a
misspelling that won't be detected by any algorithm.


5.10 To each hash table slot, we can add an extra field that we'll call WhereOnStack, and we can
keep an extra stack. When an insertion is first performed into a slot, we push the address (or
number) of the slot onto the stack and set the WhereOnStack field to point to the top of the
stack. When we access a hash table slot, we check that WhereOnStack points to a valid part
of the stack and that the entry in the (middle of the) stack that is pointed to by the
WhereOnStack field has that hash table slot as an address.
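This is the classic trick for using uninitialized memory safely. A small sketch (the table
size and routine names are illustrative; in real use the two arrays would be malloc'ed,
genuinely uninitialized memory):

```c
#define TableSize 16

static int WhereOnStack[ TableSize ];  /* may be garbage until a slot is first used */
static int Stack[ TableSize ];         /* Stack[ i ] holds the number of a used slot */
static int Top = 0;                    /* number of slots pushed so far */

/* A slot is known to be initialized iff its WhereOnStack entry points into the */
/* valid part of the stack AND that stack entry points back at the slot.        */
int
SlotIsUsed( int Slot )
{
    int W = WhereOnStack[ Slot ];

    return 0 <= W && W < Top && Stack[ W ] == Slot;
}

/* Mark a slot used without ever having to clear the whole table. */
void
MarkSlotUsed( int Slot )
{
    if( !SlotIsUsed( Slot ) )
    {
        Stack[ Top ] = Slot;
        WhereOnStack[ Slot ] = Top++;
    }
}
```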

5.14

000 001 010 011 100 101 110 111

(2) (2) (3) (3) (2)


00000010 01010001 10010110 10111101 11001111
00001011 01100001 10011011 10111110 11011011
00101011 01101111 10011110 11110000
01111111

Chapter 6: Priority Queues (Heaps)

6.1 Yes. When an element is inserted, we compare it to the current minimum and change the
minimum if the new element is smaller. DeleteMin operations are expensive in this
scheme.
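A sketch of this augmentation (array-backed and fixed-capacity for brevity; the names are
illustrative): Insert and FindMin are O(1), while DeleteMin would have to scan the array to
recompute the minimum.

```c
#include <limits.h>

#define Capacity 100

struct FastFindMin
{
    int Items[ Capacity ];
    int Size;
    int Min;    /* running minimum, maintained on Insert */
};

void
FFMInit( struct FastFindMin *Q )
{
    Q->Size = 0;
    Q->Min = INT_MAX;
}

void
FFMInsert( struct FastFindMin *Q, int X )
{
    Q->Items[ Q->Size++ ] = X;
    if( X < Q->Min )        /* compare against the current minimum */
        Q->Min = X;
}

int
FFMFindMin( const struct FastFindMin *Q )
{
    return Q->Min;
}
```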

6.2

1 1

3 2 3 2

6 7 5 4 12 6 4 8

15 14 12 9 10 11 13 8 15 14 9 7 5 11 13 10

6.3 The result of three DeleteMins, starting with both of the heaps in Exercise 6.2, is as
follows:

4 4

6 5 6 5

13 7 10 8 12 7 10 8

15 14 12 9 11 15 14 9 13 11

6.4
6.5 These are simple modifications to the code presented in the text and meant as programming
exercises.
6.6 225. To see this, start with i = 1 and position at the root. Follow the path toward the last
node, doubling i when taking a left child, and doubling i and adding one when taking a
right child.
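The doubling rule just reads off the binary representation of the node's position; a sketch
(the 'L'/'R' path-string encoding is an assumption for illustration):

```c
/* Follow a path of 'L'/'R' moves from the root (position 1) of a binary heap, */
/* doubling i for a left child and doubling and adding one for a right child.  */
int
HeapPositionFromPath( const char *Path )
{
    int I = 1;

    while( *Path != '\0' )
    {
        I = 2 * I + ( *Path == 'R' );
        Path++;
    }
    return I;
}
```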

6.7 (a) We show that H(N), which is the sum of the heights of nodes in a complete binary tree
of N nodes, is N − b(N), where b(N) is the number of ones in the binary representation of
N. Observe that for N = 0 and N = 1, the claim is true. Assume that it is true for values of
k up to and including N−1. Suppose the left and right subtrees have L and R nodes,
respectively. Since the root has height ⌊log N⌋, we have
H(N) = ⌊log N⌋ + H(L) + H(R)
= ⌊log N⌋ + L − b(L) + R − b(R)
= N − 1 + (⌊log N⌋ − b(L) − b(R))
The second line follows from the inductive hypothesis, and the third follows because
L + R = N − 1. Now the last node in the tree is in either the left subtree or the right
subtree. If it is in the left subtree, then the right subtree is a perfect tree, and
b(R) = ⌊log N⌋ − 1. Further, the binary representations of N and L are identical, with the
exception that the leading 10 in N becomes 1 in L. (For instance, if N = 37 = 100101, L =
10101.) It is clear that the second digit of N must be zero if the last node is in the left
subtree. Thus in this case, b(L) = b(N), and
H(N) = N − b(N)
If the last node is in the right subtree, then b(L) = ⌊log N⌋. The binary representation of R
is identical to N, except that the leading 1 is not present. (For instance, if N = 27 = 11011,
R = 1011.) Thus b(R) = b(N) − 1, and again
H(N) = N − b(N)
(b) Run a single-elimination tournament among eight elements. This requires seven
comparisons and generates ordering information indicated by the binomial tree shown here.

e c b

g f d

The eighth comparison is between b and c. If c is less than b, then b is made a child of c.
Otherwise, both c and d are made children of b.
(c) A recursive strategy is used. Assume that N = 2^k. A binomial tree is built for the N
elements as in part (b). The largest subtree of the root is then recursively converted into a
binary heap of 2^(k−1) elements. The last element in the heap (which is the only one on an
extra level) is then inserted into the binomial queue consisting of the remaining binomial
trees, thus forming another binomial tree of 2^(k−1) elements. At that point, the root has a
subtree that is a heap of 2^(k−1) − 1 elements and another subtree that is a binomial tree of
2^(k−1) elements. Recursively convert that subtree into a heap; now the whole structure is a
binary heap. The running time for N = 2^k satisfies T(N) = 2T(N/2) + log N. The base
case is T(8) = 8.
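The identity in part (a) is easy to verify by brute force; a sketch (1-based array indexing of
the complete tree, as in the chapter):

```c
/* Height of the subtree rooted at position I in a complete tree of N nodes: */
/* the leftmost path from I reaches the deepest level of that subtree.       */
int
SubtreeHeight( int I, int N )
{
    int H = 0;

    while( 2 * I <= N )
    {
        I *= 2;
        H++;
    }
    return H;
}

/* H(N): the sum of the heights of all nodes. */
int
SumOfHeights( int N )
{
    int I, Sum = 0;

    for( I = 1; I <= N; I++ )
        Sum += SubtreeHeight( I, N );
    return Sum;
}

/* b(N): the number of ones in the binary representation of N. */
int
OneBits( int N )
{
    int B = 0;

    while( N > 0 )
    {
        B += N & 1;
        N >>= 1;
    }
    return B;
}
```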

6.8 Let D_1, D_2, ..., D_k be random variables representing the depths of the smallest, second
smallest, and kth smallest elements, respectively. We are interested in calculating E(D_k).
In what follows, we assume that the heap size N is one less than a power of two (that is, the
bottom level is completely filled) but sufficiently large so that terms bounded by O(1/N)
are negligible. Without loss of generality, we may assume that the kth smallest element is
in the left subheap of the root. Let p_{j,k} be the probability that this element is the jth
smallest element in the subheap.

Lemma: For k > 1, E(D_k) = Σ_{j=1}^{k−1} p_{j,k} (E(D_j) + 1).

Proof: An element that is at depth d in the left subheap is at depth d + 1 in the entire
heap. Since E(D_j + 1) = E(D_j) + 1, the lemma follows.
Since by assumption the bottom level of the heap is full, each of the second, third, ...,
(k−1)th smallest elements is in the left subheap with probability 0.5. (Technically, the
probability should be 1/2 − 1/(N−1) of being in the right subheap and 1/2 + 1/(N−1) of
being in the left, since we have already placed the kth smallest in the right. Recall that we
have assumed that terms of size O(1/N) can be ignored.) Thus

p_{j,k} = p_{k−j,k} = C(k−2, j−1) / 2^(k−2)

Theorem: E(D_k) ≤ log k.

Proof: The proof is by induction. The theorem clearly holds for k = 1 and k = 2. We then
show that it holds for arbitrary k > 2 on the assumption that it holds for all smaller k. Now,
by the inductive hypothesis, for any 1 ≤ j ≤ k−1,
E(D_j) + E(D_{k−j}) ≤ log j + log (k−j)
Since f(x) = log x is concave for x > 0,
log j + log (k−j) ≤ 2 log (k/2)
Thus
E(D_j) + E(D_{k−j}) ≤ log (k/2) + log (k/2)
Furthermore, since p_{j,k} = p_{k−j,k},
p_{j,k} E(D_j) + p_{k−j,k} E(D_{k−j}) ≤ p_{j,k} log (k/2) + p_{k−j,k} log (k/2)
From the lemma,
E(D_k) = Σ_{j=1}^{k−1} p_{j,k} (E(D_j) + 1)
= 1 + Σ_{j=1}^{k−1} p_{j,k} E(D_j)
Thus
E(D_k) ≤ 1 + Σ_{j=1}^{k−1} p_{j,k} log (k/2)
≤ 1 + log (k/2) Σ_{j=1}^{k−1} p_{j,k}
≤ 1 + log (k/2)
≤ log k
completing the proof.
It can also be shown that asymptotically, E(D_k) ∼ log (k−1) − 0.273548.
6.9 (a) Perform a preorder traversal of the heap.
(b) Works for leftist and skew heaps. The running time is O(Kd) for d-heaps.
6.11 Simulations show that the linear-time algorithm is faster, not only on worst-case inputs,
but also on random data.
6.12 (a) If the heap is organized as a (min) heap, then starting at the hole at the root, find a path
down to a leaf by taking the minimum child. This requires roughly log N comparisons. To
find the correct place where to move the hole, perform a binary search on the log N
elements. This takes O(log log N) comparisons.
(b) Find a path of minimum children, stopping after log N − log log N levels. At this point,
it is easy to determine if the hole should be placed above or below the stopping point. If it
goes below, then continue finding the path, but perform the binary search on only the last
log log N elements on the path, for a total of log N + log log log N comparisons. Otherwise,
perform a binary search on the first log N − log log N elements. The binary search
takes at most log log N comparisons, and the path finding took only log N − log log N, so
the total in this case is log N. So the worst case is the first case.
(c) The bound can be improved to log N + log* N + O(1), where log* N is the inverse
Ackermann function (see Chapter 8). This bound can be found in reference [16].
6.13 The parent is at position ⌊(i + d − 2)/d⌋. The children are in positions (i − 1)d + 2, ...,
id + 1.
6.14 (a) O((M + dN) log_d N).
(b) O((M + N) log N).
(c) O(M + N^2).
(d) d = max(2, M / N).
(See the related discussion at the end of Section 11.4.)
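The index arithmetic of 6.13 can be sketched directly (1-based array, as in the text; the
function names are illustrative):

```c
/* For a d-heap stored 1-based in an array. */
int
DHeapParent( int I, int D )
{
    return ( I + D - 2 ) / D;
}

int
DHeapFirstChild( int I, int D )
{
    return ( I - 1 ) * D + 2;
}

int
DHeapLastChild( int I, int D )
{
    return I * D + 1;
}
```

For d = 2 these reduce to the familiar ⌊i/2⌋, 2i, and 2i + 1.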

6.16

2
4 11
9 5 12 17
18 10 8 6 18
31 15 11
21

6.17

2 3

7 6 5 4

8 9 10 11 12 13 14 15

6.18 This theorem is true, and the proof is very much along the same lines as Exercise 4.17.
6.19 If elements are inserted in decreasing order, a leftist heap consisting of a chain of left
children is formed. This is the best because the right path length is minimized.
6.20 (a) If a DecreaseKey is performed on a node that is very deep (very left), the time to
percolate up would be prohibitive. Thus the obvious solution doesn't work. However, we can
still do the operation efficiently by a combination of Delete and Insert. To Delete an
arbitrary node x in the heap, replace x by the Merge of its left and right subheaps. This might
create an imbalance for nodes on the path from x's parent to the root that would need to be
fixed by a child swap. However, it is easy to show that at most log N nodes can be affected,
preserving the time bound. This is discussed in Chapter 11.
6.21 Lazy deletion in leftist heaps is discussed in the paper by Cheriton and Tarjan [9]. The
general idea is that if the root is marked deleted, then a preorder traversal of the heap is
formed, and the frontier of marked nodes is removed, leaving a collection of heaps. These can
be merged two at a time by placing all the heaps on a queue, removing two, merging them,
and placing the result at the end of the queue, terminating when only one heap remains.
6.22 (a) The standard way to do this is to divide the work into passes. A new pass begins when
the first element reappears in a heap that is dequeued. The first pass takes roughly
2·1·(N/2) time units because there are N/2 merges of trees with one node each on the
right path. The next pass takes 2·2·(N/4) time units because of the roughly N/4 merges
of trees with no more than two nodes on the right path. The third pass takes 2·3·(N/8)
time units, and so on. The sum converges to 4N.
(b) It generates heaps that are more leftist.

6.23

4 11

5 9 12 17

6 8 18 10 18

11 15 31

21

6.24

3 2

7 5 6 4

15 11 13 9 14 10 12 8

6.25 This claim is also true, and the proof is similar in spirit to Exercise 4.17 or 6.18.
6.26 Yes. All the single operation estimates in Exercise 6.22 become amortized instead of
worst-case, but by the definition of amortized analysis, the sum of these estimates is a
worst-case bound for the sequence.
6.27 Clearly the claim is true for k = 1. Suppose it is true for all values i = 1, 2, ..., k. A B_{k+1}
tree is formed by attaching a B_k tree to the root of a B_k tree. Thus by induction, it contains
a B_0 through B_{k−1} tree, as well as the newly attached B_k tree, proving the claim.
6.28 The proof is by induction. Clearly the claim is true for k = 1. Assume it is true for all
values i = 1, 2, ..., k. A B_{k+1} tree is formed by attaching a B_k tree to the original B_k tree.
The original tree thus had C(k, d) nodes at depth d. The attached tree had C(k, d−1) nodes
at depth d−1, which are now at depth d. Adding these two terms and using a well-known
formula establishes the theorem.

6.29

4
13 15 23 12
18 51 24 21 24 14
65 65 26 16
18

11 29

55

6.30 This is established in Chapter 11.


6.31 The algorithm is to do nothing special; merely Insert them. This is proved in Chapter 11.
6.35 Don't keep the key values in the heap, but keep only the difference between the value of the
key in a node and the value of the key in its parent.
6.36 O(N + k log N) is a better bound than O(N log k). The first bound is O(N) if
k = O(N / log N). The second bound is more than this as soon as k grows faster than a
constant. For the other values Ω(N / log N) = k = o(N), the first bound is better. When
k = Θ(N), the bounds are identical.

Chapter 7: Sorting

7.1
_______________________________________________________________________________

Original:     3 1 4 1 5 9 2 6 5
after P=2:    1 3 4 1 5 9 2 6 5
after P=3:    1 3 4 1 5 9 2 6 5
after P=4:    1 1 3 4 5 9 2 6 5
after P=5:    1 1 3 4 5 9 2 6 5
after P=6:    1 1 3 4 5 9 2 6 5
after P=7:    1 1 2 3 4 5 9 6 5
after P=8:    1 1 2 3 4 5 6 9 5
after P=9:    1 1 2 3 4 5 5 6 9
_______________________________________________________________________________

7.2 O(N) because the while loop terminates immediately. Of course, accidentally changing the
test to include equalities raises the running time to quadratic for this type of input.
7.3 The inversion that existed between A[i] and A[i + k] is removed. This shows at least one
inversion is removed. For each of the k − 1 elements A[i + 1], A[i + 2], ..., A[i + k − 1],
at most two inversions can be removed by the exchange. This gives a maximum of
2(k − 1) + 1 = 2k − 1.
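The 2k − 1 bound can be observed by counting inversions before and after such an
exchange; a brute-force sketch (the sample array and the choice of k are arbitrary
illustrations):

```c
/* Count inversions (pairs i < j with A[i] > A[j]) by brute force. */
int
CountInversions( const int A[ ], int N )
{
    int I, J, Inv = 0;

    for( I = 0; I < N; I++ )
        for( J = I + 1; J < N; J++ )
            if( A[ I ] > A[ J ] )
                Inv++;
    return Inv;
}
```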
7.4
_______________________________________________________________________________

Original:        9 8 7 6 5 4 3 2 1
after 7-sort:    2 1 7 6 5 4 3 9 8
after 3-sort:    2 1 4 3 5 7 6 9 8
after 1-sort:    1 2 3 4 5 6 7 8 9
_______________________________________________________________________________

7.5 (a) Θ(N^2). The 2-sort removes at most only three inversions at a time; hence the algorithm
is Ω(N^2). The 2-sort is two insertion sorts of size N/2, so the cost of that pass is O(N^2).
The 1-sort is also O(N^2), so the total is O(N^2).
7.6 Part (a) is an extension of the theorem proved in the text. Part (b) is fairly complicated; see
reference [11].
7.7 See reference [11].
7.8 Use the input specified in the hint. If the number of inversions is shown to be Ω(N^2), then
the bound follows, since no increments are removed until an h_{t/2} sort. If we consider the
pattern formed h_k through h_{2k−1}, where k = t/2 + 1, we find that it has length
N = h_k(h_k + 1) − 1, and the number of inversions is roughly h_k^4 / 24, which is Ω(N^2).
7.9 (a) O(N log N). No exchanges, but each pass takes O(N).
(b) O(N log N). It is easy to show that after an h_k sort, no element is farther than h_k from
its rightful position. Thus if the increments satisfy h_{k+1} ≤ c·h_k for a constant c, which
implies O(log N) increments, then the bound is O(N log N).

7.10 (a) No, because it is still possible for consecutive increments to share a common factor. An
example is the sequence 1, 3, 9, 21, 45, with h_{t+1} = 2h_t + 3.
(b) Yes, because consecutive increments are relatively prime. The running time becomes
O(N^{3/2}).
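The shared-factor failure in part (a) is easy to exhibit with a gcd check (this sketch is
illustrative, not part of the manual's solution):

```c
/* Euclid's algorithm. */
int
Gcd( int A, int B )
{
    while( B != 0 )
    {
        int T = A % B;
        A = B;
        B = T;
    }
    return A;
}
```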
7.11 The input is read in as
142, 543, 123, 65, 453, 879, 572, 434, 111, 242, 811, 102
The result of the heapify is
879, 811, 572, 434, 543, 123, 142, 65, 111, 242, 453, 102
879 is removed from the heap and placed at the end. We’ll place it in italics to signal that it
is not part of the heap. 102 is placed in the hole and bubbled down, obtaining
811, 543, 572, 434, 453, 123, 142, 65, 111, 242, 102, 879
Continuing the process, we obtain
572, 543, 142, 434, 453, 123, 102, 65, 111, 242, 811, 879
543, 453, 142, 434, 242, 123, 102, 65, 111, 572, 811, 879
453, 434, 142, 111, 242, 123, 102, 65, 543, 572, 811, 879
434, 242, 142, 111, 65, 123, 102, 453, 543, 572, 811, 879
242, 111, 142, 102, 65, 123, 434, 453, 543, 572, 811, 879
142, 111, 123, 102, 65, 242, 434, 453, 543, 572, 811, 879
123, 111, 65, 102, 142, 242, 434, 453, 543, 572, 811, 879
111, 102, 65, 123, 142, 242, 434, 453, 543, 572, 811, 879
102, 65, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879
65, 102, 111, 123, 142, 242, 434, 453, 543, 572, 811, 879
7.12 Heapsort uses at least (roughly) N log N comparisons on any input, so there are no
particularly good inputs. This bound is tight; see the paper by Schaeffer and Sedgewick [16].
This result applies for almost all variations of heapsort, which have different rearrangement
strategies. See Y. Ding and M. A. Weiss, "Best Case Lower Bounds for Heapsort,"
Computing 49 (1992).
7.13 First the sequence {3, 1, 4, 1} is sorted. To do this, the sequence {3, 1} is sorted. This
involves sorting {3} and {1}, which are base cases, and merging the result to obtain {1, 3}.
The sequence {4, 1} is likewise sorted into {1, 4}. Then these two sequences are merged to
obtain {1, 1, 3, 4}. The second half is sorted similarly, eventually obtaining {2, 5, 6, 9}.
The merged result is then easily computed as {1, 1, 2, 3, 4, 5, 6, 9}.
7.14 Mergesort can be implemented nonrecursively by first merging pairs of adjacent elements,
then pairs of two elements, then pairs of four elements, and so on. This is implemented in
Fig. 7.1.
7.15 The merging step always takes Θ(N) time, so the sorting process takes Θ(N log N) time on
all inputs.
7.16 See reference [11] for the exact derivation of the worst case of mergesort.
7.17 The original input is
3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5
After sorting the first, middle, and last elements, we have
3, 1, 4, 1, 5, 5, 2, 6, 5, 3, 9
Thus the pivot is 5. Hiding it gives
3, 1, 4, 1, 5, 3, 2, 6, 5, 5, 9
The first swap is between two fives. The next swap has i and j crossing. Thus the pivot is

_______________________________________________________________________________

void
Mergesort( ElementType A[ ], int N )
{
    ElementType *TmpArray;
    int SubListSize, Part1Start, Part2Start, Part2End;

    TmpArray = malloc( sizeof( ElementType ) * N );

    for( SubListSize = 1; SubListSize < N; SubListSize *= 2 )
    {
        Part1Start = 0;
        while( Part1Start + SubListSize < N )  /* a second sublist exists */
        {
            Part2Start = Part1Start + SubListSize;
            Part2End = min( N - 1, Part2Start + SubListSize - 1 );
            Merge( A, TmpArray, Part1Start, Part2Start, Part2End );
            Part1Start = Part2End + 1;
        }
    }
    free( TmpArray );  /* release the scratch array */
}
Fig. 7.1.
_______________________________________________________________________________
swapped back with i:
3, 1, 4, 1, 5, 3, 2, 5, 5, 6, 9
We now recursively quicksort the first eight elements:
3, 1, 4, 1, 5, 3, 2, 5
Sorting the three appropriate elements gives
1, 1, 4, 3, 5, 3, 2, 5
Thus the pivot is 3, which gets hidden:
1, 1, 4, 2, 5, 3, 3, 5
The first swap is between 4 and 3:
1, 1, 3, 2, 5, 4, 3, 5
The next swap crosses pointers, so is undone; i points at 5, and so the pivot is swapped:
1, 1, 3, 2, 3, 4, 5, 5
A recursive call is now made to sort the first four elements. The pivot is 1, and the partition
does not make any changes. The recursive calls are made, but the subfiles are below the
cutoff, so nothing is done. Likewise, the last three elements constitute a base case, so noth-
ing is done. We return to the original call, which now calls quicksort recursively on the
right-hand side, but again, there are only three elements, so nothing is done. The result is
1, 1, 3, 2, 3, 4, 5, 5, 5, 6, 9
which is cleaned up by insertion sort.
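The first partitioning step of this trace can be reproduced with the standard median-of-three scheme (order the left, center, and right elements, hide the pivot at position right − 1, then scan inward with i and j). This sketch follows that scheme for int keys:

```c
static void Swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Order A[l], A[c], A[r]; hide the pivot at A[r-1] and return it */
static int Median3(int A[], int l, int r)
{
    int c = (l + r) / 2;
    if (A[c] < A[l]) Swap(&A[l], &A[c]);
    if (A[r] < A[l]) Swap(&A[l], &A[r]);
    if (A[r] < A[c]) Swap(&A[c], &A[r]);
    Swap(&A[c], &A[r - 1]);
    return A[r - 1];
}

/* One partition step; returns the pivot's final position */
int Partition(int A[], int l, int r)
{
    int pivot = Median3(A, l, r);
    int i = l, j = r - 1;
    for (;;) {
        while (A[++i] < pivot) {}
        while (A[--j] > pivot) {}
        if (i < j)
            Swap(&A[i], &A[j]);
        else
            break;
    }
    Swap(&A[i], &A[r - 1]);   /* restore the pivot */
    return i;
}
```

On the input 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5 this yields 3, 1, 4, 1, 5, 3, 2, 5, 5, 6, 9 with the pivot at position 7, matching the trace above (including the swap of the two fives).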
7.18 (a) O(N log N) because the pivot will partition perfectly.
(b) Again, O(N log N) because the pivot will partition perfectly.
(c) O(N log N); the performance is slightly better than the analysis suggests because of the
median-of-three partition and cutoff.

7.19 (a) If the first element is chosen as the pivot, the running time degenerates to quadratic in
the first two cases. It is still O(N log N) for random input.
(b) The same results apply for this pivot choice.
(c) If a random element is chosen, then the running time is O(N log N) expected for all
inputs, although there is an O(N^2) worst case if very bad random numbers come up. There
is, however, an essentially negligible chance of this occurring. Chapter 10 discusses the
randomized philosophy.
(d) This is a dangerous road to go; it depends on the distribution of the keys. For many distributions,
such as uniform, the performance is O(N log N) on average. For a skewed distribution,
such as with the input {1, 2, 4, 8, 16, 32, 64, ... }, the pivot will be consistently terrible,
giving quadratic running time, independent of the ordering of the input.
7.20 (a) O(N log N) because the pivot will partition perfectly.
(b) Sentinels need to be used to guarantee that i and j don’t run past the end. The running
time will be Θ(N^2) since, because i won’t stop until it hits the sentinel, the partitioning step
will put all but the pivot in S1.
(c) Again a sentinel needs to be used to stop j. This is also Θ(N^2) because the partitioning
is unbalanced.
7.21 Yes, but it doesn’t reduce the average running time for random input. Using median-of-
three partitioning reduces the average running time because it makes the partition more bal-
anced on average.
7.22 The strategy used here is to force the worst possible pivot at each stage. This doesn’t necessarily
give the maximum amount of work (since there are few exchanges, just lots of comparisons),
but it does give Ω(N^2) comparisons. By working backward, we can arrive at the
following permutation:
20, 3, 5, 7, 9, 11, 13, 15, 17, 19, 4, 10, 2, 12, 6, 14, 1, 16, 8, 18
A method to extend this to larger numbers when N is even is as follows: The first element is
N, the middle is N − 1, and the last is N − 2. Odd numbers (except 1) are written in
decreasing order starting to the left of center. Even numbers are written in decreasing order
by starting at the rightmost spot, always skipping one available empty slot, and wrapping
around when the center is reached. This method takes O(N log N) time to generate the permutation,
but is suitable for a hand calculation. By inverting the actions of quicksort, it is
possible to generate the permutation in linear time.
7.24 This recurrence results from the analysis of the quick selection algorithm. T(N) = O(N).
7.25 Insertion sort and mergesort are stable if coded correctly. Any of the sorts can be made
stable by the addition of a second key, which indicates the original position.
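The second-key trick can be sketched with the standard library’s qsort, which is itself not guaranteed stable: tag each record with its original position and break key ties on that tag. The Record type and field names here are illustrative:

```c
#include <stdlib.h>

typedef struct {
    int Key;
    int OrigPos;   /* secondary key: position before sorting */
} Record;

/* Compare by Key; break ties by original position to force stability */
static int CmpStable(const void *a, const void *b)
{
    const Record *x = a, *y = b;
    if (x->Key != y->Key)
        return x->Key < y->Key ? -1 : 1;
    if (x->OrigPos != y->OrigPos)
        return x->OrigPos < y->OrigPos ? -1 : 1;
    return 0;
}

void StableSort(Record A[], int N)
{
    int i;
    for (i = 0; i < N; i++)
        A[i].OrigPos = i;   /* record the original order */
    qsort(A, N, sizeof(Record), CmpStable);
}
```

Equal keys now come out in their original relative order, whatever internal strategy qsort uses.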
7.26 (d) f(N) can be O(N / log N). Sort the f(N) elements using mergesort in
O(f(N) log f(N)) time. This is O(N) if f(N) is chosen using the criterion given. Then
merge this sorted list with the already sorted list of N numbers in O(N + f(N)) = O(N)
time.
7.27 A decision tree would have N leaves, so ⌈log N⌉ comparisons are required.
7.28 log N! ≈ N log N − N log e.

7.29 (a) (2N choose N).
(b) The information-theoretic lower bound is log (2N choose N). Applying Stirling’s formula, we
can estimate the bound as 2N − (1/2) log N. A better lower bound is known for this case:
2N − 1 comparisons are necessary. Merging two lists of different sizes M and N likewise
requires at least log ((M + N) choose N) comparisons.

7.30 It takes O(1) to insert each element into a bucket, for a total of O(N). It takes O(1) to
extract each element from a bucket, for O(M). We waste at most O(1) examining each
empty bucket, for a total of O(M). Adding these estimates gives O(M + N).
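That accounting can be seen directly in a minimal bucket sort for integer keys in the range 0..M−1. This counting variant keeps only a count per bucket, which is enough when the keys themselves are the records:

```c
#include <stdlib.h>

/* Bucket sort for integer keys in the range 0 .. M-1.
   Each insertion and each extraction is O(1); scanning the
   M buckets costs O(M), for O(M + N) total. */
void BucketSort(int A[], int N, int M)
{
    int *count = calloc(M, sizeof(int));
    int i, j, k = 0;

    for (i = 0; i < N; i++)        /* O(N): drop each key into its bucket */
        count[A[i]]++;
    for (j = 0; j < M; j++)        /* O(M + N): empty the buckets in order */
        while (count[j]-- > 0)
            A[k++] = j;
    free(count);
}
```

With full records instead of bare keys, each bucket would hold a list of records, but the operation counts are the same.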
7.31 We add a dummy (N+1)th element, which we’ll call Maybe. Maybe satisfies
false < Maybe < true. Partition the array around Maybe, using the standard quicksort techniques.
Then swap Maybe and the (N+1)th element. The first N elements are then
correctly arranged.
7.32 We add a dummy (N+1)th element, which we’ll call ProbablyFalse. ProbablyFalse
satisfies false < ProbablyFalse < Maybe. Partition the array around ProbablyFalse as in
the previous exercise. Suppose that after the partition, ProbablyFalse winds up in position
i. Then place the element that is in the (N+1)th slot in position i, place ProbablyTrue
(defined the obvious way) in position N + 1, and partition the subarray from position i
onward. Finally, swap ProbablyTrue with the element in the (N+1)th location. The first N
elements are now correctly arranged. These two problems can be done without the assumption
of an extra array slot; assuming so simplifies the presentation.
7.33 (a) ⌈log 4!⌉ = 5.
(b) Compare and exchange (if necessary) a1 and a2 so that a1 ≥ a2, and repeat with a3 and
a4. Compare and exchange a1 and a3. Compare and exchange a2 and a4. Finally, compare
and exchange a2 and a3.
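The five compare-exchange steps in (b) form a sorting network. This sketch orients each comparator smallest-first, so it sorts into increasing order (the mirror image of the text's decreasing order); any of the 24 input permutations comes out sorted after exactly five comparisons:

```c
/* Compare-exchange so that *a <= *b */
static void CompEx(int *a, int *b)
{
    if (*a > *b) { int t = *a; *a = *b; *b = t; }
}

/* Sort 4 elements with exactly 5 comparisons, following the
   network in 7.33(b): (a1,a2), (a3,a4), (a1,a3), (a2,a4), (a2,a3). */
void Sort4(int a[4])
{
    CompEx(&a[0], &a[1]);   /* a1 vs a2 */
    CompEx(&a[2], &a[3]);   /* a3 vs a4 */
    CompEx(&a[0], &a[2]);   /* a1 vs a3 */
    CompEx(&a[1], &a[3]);   /* a2 vs a4 */
    CompEx(&a[1], &a[2]);   /* a2 vs a3 */
}
```

Five is optimal here, matching the ⌈log 4!⌉ = 5 lower bound from part (a).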
7.34 (a) ⌈log 5!⌉ = 7.
(b) Compare and exchange (if necessary) a1 and a2 so that a1 ≥ a2, and repeat with a3 and
a4 so that a3 ≥ a4. Compare and exchange (if necessary) the two winners, a1 and a3.
Assume without loss of generality that we now have a1 ≥ a3 ≥ a4, and a1 ≥ a2. (The other
case is obviously identical.) Insert a5 by binary search in the appropriate place among a1,
a3, a4. This can be done in two comparisons. Finally, insert a2 among a3, a4, a5. If it is
the largest among those three, then it goes directly after a1 since it is already known to be
smaller than a1. This takes two more comparisons by a binary search. The total is thus seven
comparisons.
7.38 (a) For the given input, the pivot is 2. It is swapped with the last element. i will point at
the second element, and j will be stopped at the first element. Since the pointers have
crossed, the pivot is swapped with the element in position 2. The input is now 1, 2, 4, 5, 6,
..., N − 1, N, 3. The recursive call on the right subarray is thus on an increasing sequence
of numbers, except for the last number, which is the smallest. This is exactly the same form
as the original. Thus each recursive call will have only two fewer elements than the previous.
The running time will be quadratic.
(b) Although the first pivot generates equal partitions, both the left and right halves will
have the same form as part (a). Thus the running time will be quadratic because after the
first partition, the algorithm will grind slowly. This is one of the many interesting tidbits in
reference [20].

7.39 We show that in a binary tree with L leaves, the average depth of a leaf is at least log L.
We can prove this by induction. Clearly, the claim is true if L = 1. Suppose it is true for
trees with up to L − 1 leaves. Consider a tree of L leaves with minimum average leaf
depth. Clearly, the root of such a tree must have non-NULL left and right subtrees. Suppose
that the left subtree has L_L leaves, and the right subtree has L_R leaves. By the inductive
hypothesis, the total depth of the leaves (which is their average times their number) in
the left subtree is L_L(1 + log L_L), and the total depth of the right subtree’s leaves is
L_R(1 + log L_R) (because the leaves in the subtrees are one deeper with respect to the root of
the tree than with respect to the root of their subtree). Thus the total depth of all the leaves
is L + L_L log L_L + L_R log L_R. Since f(x) = x log x is convex for x ≥ 1, we know that
f(x) + f(y) ≥ 2f((x + y) / 2). Thus, the total depth of all the leaves is at least
L + 2(L / 2) log (L / 2) ≥ L + L(log L − 1) ≥ L log L. Thus the average leaf depth is at least
log L.

Chapter 8: The Disjoint Set ADT

8.1 We assume that unions are performed on the roots of the trees containing the arguments. Also, in
case of ties, the second tree is made a child of the first. Arbitrary union and union by height
give the same answer (shown as the first tree) for this problem. Union by size gives the
second tree.

[Two tree diagrams appeared here: the first is the forest produced by arbitrary union and by
union by height; the second is the forest produced by union by size. The node labels 1-17
survive, but the two-dimensional structure of the figures is not recoverable in this rendering.]

8.2 In both cases, have nodes 16 and 17 point directly to the root.
8.4 Claim: A tree of height H has at least 2^H nodes. The proof is by induction. A tree of
height 0 clearly has at least 1 node, and a tree of height 1 clearly has at least 2. Let T be the
tree of height H with fewest nodes. Thus at the time of T’s last union, it must have been a
tree of height H − 1, since otherwise T would have been smaller at that time than it is now
and still would have been of height H, which is impossible by assumption of T’s minimality.
Since T’s height was updated, it must have been as a result of a union with another tree
of height H − 1. By the induction hypothesis, we know that at the time of the union, T had
at least 2^{H−1} nodes, as did the tree attached to it, for a total of 2^H nodes, proving the claim.
Thus an N-node tree has depth at most ⌊log N⌋.
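The claim can be exercised with a standard array-based disjoint-set implementation using union by height. The struct layout here is a sketch (the book stores heights as negative entries in a single array; this version keeps a separate Height array for clarity):

```c
#include <stdlib.h>

typedef struct {
    int *Parent;   /* Parent[i] < 0 means i is a root */
    int *Height;   /* meaningful only for roots */
} DisjSet;

DisjSet *Initialize(int N)
{
    DisjSet *s = malloc(sizeof(DisjSet));
    int i;
    s->Parent = malloc(N * sizeof(int));
    s->Height = calloc(N, sizeof(int));
    for (i = 0; i < N; i++)
        s->Parent[i] = -1;   /* N singleton trees of height 0 */
    return s;
}

int Find(DisjSet *s, int x)   /* no path compression here */
{
    while (s->Parent[x] >= 0)
        x = s->Parent[x];
    return x;
}

/* Union by height: the shallower tree becomes a child of the deeper.
   The height grows only when two trees of EQUAL height merge, which
   is exactly why a tree of height H needs at least 2^H nodes. */
void Union(DisjSet *s, int a, int b)
{
    int ra = Find(s, a), rb = Find(s, b);
    if (ra == rb)
        return;
    if (s->Height[ra] < s->Height[rb])
        s->Parent[ra] = rb;
    else {
        if (s->Height[ra] == s->Height[rb])
            s->Height[ra]++;
        s->Parent[rb] = ra;
    }
}
```

Merging 8 = 2^3 singletons pairwise, then the pairs, then the fours, yields one tree of height exactly 3, the extreme case of the claim.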
8.5 All answers are O(M) because in all cases α(M, N) = 1.
8.6 Assuming that the graph has only nine vertices, then the union/find tree that is formed is
shown here. The edge (4,6) does not result in a union because at the time it is examined, 4
and 6 are already in the same component. The connected components are {1,2,3,4,6} and

various pieces arranged in rustic fashion, very quaint to look upon.
There have been many fountains of this kind recently set up at Rome,
which by their charm have incited the minds of countless persons to
be lovers of such work. Another kind of ornament entirely rustic is
also used now-a-days for fountains, and is applied in the following
manner. First the skeleton of the figure or any other object desired is
made and plastered over with mortar or stucco, then the exterior is
covered in the fashion of mosaic, with pieces of white or coloured
marble, according to the object designed, or else with certain little
many coloured pebbles: and these when carefully worked have a long
life. The stucco with which they build up and work these things is the
same that we have before described, and when once set it holds them
securely on the walls. To such fountains pavements are made of
sling-stones, that is, round and flat river pebbles, set on edge and in
ripples as water goes, with excellent effect. Others, for the finer
fountains, make pavements with little tiles of terra cotta in various
divisions and glazed in the fire, as in clay vases painted in various
colours and with painted ornaments and leafage; but this sort of
pavement is more suitable for hot-air chambers and baths than for
fountains.[141]

Plate IV

INTERIOR OF GROTTO IN BOBOLI GARDENS, FLORENCE

Showing an unfinished statue ascribed to Michelangelo


CHAPTER VI.
On the manner of making Pavements of Tesselated Work.

§ 32. Mosaic Pavements.

There are no possible devices in any department that the ancients
did not find out or at any rate try very hard to discover,—devices I
mean that bring delight and refreshment to the eyes of men. They
invented then, among other beautiful things, stone pavements
diversified with various blendings of porphyry, serpentine, and
granite, with round and square or other divisions, whence they went
on to conceive the fabrication of ornamental bands, leafage, and
other sorts of designs and figures. Therefore to prepare the work the
better to receive such treatment, they cut the marble into little
pieces, so that these being small they could be turned about for the
background and the field, in round schemes or lines straight or
twisted, as came most conveniently. From the joining together of
these pieces they called the work mosaic,[142] and used it in the
pavements of many of their buildings, as we still see in the Baths of
Caracalla in Rome and in other places, where the mosaic is made
with little squares of marble, that form leaves, masks, and other
fancies, while the background for these is composed of squares of
white marble and other small squares of black. The work was set
about in the following manner. First was spread a layer of fresh
stucco of lime and marble dust thick enough to hold firmly in itself
the pieces fitting into each other, so that when set they could be
polished smooth on the top; these in the drying make an admirably
compacted concrete, which is not hurt by the wear of footsteps nor
by water. Therefore this work having come into the highest
estimation, clever people set themselves to study it further, as it is
always easy to add something valuable to an invention already found
out. So they made the marble mosaics finer, and of these, laid
pavements both for baths and for hot rooms, and with the most
subtle mastery and diligence they delicately fashioned various fishes
in them, and imitated painting with many colours suitable for that
work, and with many different sorts of marbles, introducing also
among these some pieces cut into little mosaic squares of the bones
of fishes which have a lustrous surface.[143] And so life-like did they
make the fishes, that water placed above them, veiling them a little,
even though clear, made them appear actually alive in the
pavements; as is seen in Parione in Rome, in the house of Messer
Egidio and Fabio Sasso.[144]

§ 33. Pictorial Mosaics for Walls, etc.

Therefore, this mosaic work appearing to them a picture, capable
of resisting to all eternity water, wind, and sunshine, and because
they considered such work much more effective far off than near, the
ancients disposed it so as to decorate vaults and walls, where such
things had to be seen at a distance, for at a distance one would not
perceive the pieces of mosaic which when near are easily
distinguished. Then because the mosaics were lustrous and
withstood water and damp, it was thought that such work might be
made of glass, and so it was done, and producing hereby the most
beautiful effect they adorned their temples and other places with it,
as we still see in our own days at Rome in the Temple of Bacchus[145]
and elsewhere.[146] Just as from marble mosaics are derived those
which we now call in our time glass mosaics, so from the mosaic of
glass we have passed on to egg-shell mosaic,[147] and from this to the
mosaic in which figures and groups in light and shade are formed
entirely of tesserae, though the effect is like painting; this we shall
describe in its own place in the chapters on that art.[148]
CHAPTER VII.
How one is to recognize if a Building have good Proportions, and of
what Members it should generally be composed.

§ 34. The principles of Planning and Design.

But since talking of particular things would make me turn aside
too much from my purpose, I leave this minute consideration to the
writers on architecture, and shall only say in general how good
buildings can be recognized, and what is requisite to their form to
secure both utility and beauty. Suppose then one comes to an edifice
and wishes to see whether it has been planned by an excellent
architect and how much ability he has shown, also whether the
architect has known how to accommodate himself to the site, as well
as to the wishes of him who ordered the structure to be built, one
must consider the following questions. First, whether he who has
raised it from the foundation has thought if the spot were a suitable
one and capable of receiving buildings of that style and extent, and
(granted that the site is suitable) how the building should be divided
into rooms, and how the enrichment on the walls be disposed in view
of the nature of the site which may be extensive or confined, elevated
or low-lying. One must consider also whether the edifice has been
tastefully arranged and in convenient proportion, and whether there
has been furnished and distributed the proper kind and number of
columns, windows, doors, and junctions of wall-faces, both within
and without, in the given height and thickness of the walls; in short
whether every detail is suitable in and for its own place. It is
necessary that there should be distributed throughout the building,
rooms which have their proper arrangement of doors, windows,
passages, secret staircases, anterooms, lavatories, cabinets, and that
no mistakes be apparent therein. For example there should be a large
hall, a small portico or lesser apartments, which being members of
the edifice, must necessarily, even as members of the human body,
be equally arranged and distributed according to the style and
complexity of the buildings; just as there are temples round, or
octagonal, or six sided, or square, or in the form of a cross, and also
various Orders, according to the position and rank of the person who
has the buildings constructed, for when designed by a skilful hand
these exhibit very happily the excellence of the workman and the
spirit of the author of the fabric.

§ 35. An ideal Palace.

To make the matter clearer, let us here imagine a palace,[149] and
this will give us light on other buildings, so that we may be able to
recognize, when we see them, whether they are well fashioned or no.
First, then, if we consider the principal front, we shall see it raised
from the ground either above a range of outside stairs or basement
walls, so that standing thus freely the building should seem to rise
with grandeur from the ground, while the kitchens and cellars under
ground are more clearly lighted and of greater elevation. This also
greatly protects the edifice from earthquakes and other accidents of
fortune. Then it must represent the body of a man in the whole and
similarly in the parts; and as it has to fear wind, water, and other
natural forces it should be drained with sewers, that must be all in
connection with a central conduit that carries away all the filth and
smells that might generate sickness. In its first aspect the façade
demands beauty and grandeur, and should be divided as is the face
of a man. The door must be low down and in the middle, as in the
head the mouth of the man, through which passes every sort of food;
the windows for the eyes, one on this side, one on that, observing
always parity, that there be as much ornament, and as many arches,
columns, pilasters, niches, jutting windows, or any other sort of
enrichment, on this side as on that; regard being had to the
proportions and Orders already explained, whether Doric, Ionic,
Corinthian, or Tuscan. The cornice which supports the roof must be
made proportionate to the façade according to its size, that rainwater
may not drench the façade and him who is seated at the street front.
The projection must be in proportion to the height and breadth of
the façade. Entering within, let the first vestibule have a great
amplitude, and let it be arranged to join fittingly with the entrance
corridor, through which everything passes; let it be free and wide, so
that the press of horses or of crowds on foot, that often congregate
there, shall not do themselves any hurt in the entrance on fête days
or on other brilliant occasions. The courtyard, representing the
trunk, should be square and equal, or else a square and a half, like all
the parts of the body, and within there should be doors and well-
arranged apartments with beautiful decoration. The public staircase
needs to be convenient and easy to ascend, of spacious width and
ample height, but only in accordance with the proportion of the other
parts. Besides all this, the staircases should be adorned or copiously
furnished with lights, and, at least over every landing-place where
there are turns, should have windows or other apertures. In short,
the staircases demand an air of magnificence in every part, seeing
that many people see the stairs and not the rest of the house. It may
be said that they are the arms and legs of the body, therefore as the
arms are at the sides of a man so ought the stairs to be in the wings
of the edifice. Nor shall I omit to say that the height of the risers
ought to be one fifth of a braccio at least,[150] and every tread two
thirds wide,[151] that is, as has been said, in the stairs of public
buildings and in others in proportion; because when they are steep
neither children nor old people can go up them, and they make the
legs ache. This feature is most difficult to place in buildings, and
notwithstanding that it is the most frequented and most common, it
often happens that in order to save the rooms the stairs are spoiled.
It is also necessary that the reception rooms and other apartments
downstairs should form one common hall for the summer, with
chambers to accommodate many persons, while upstairs the
parlours and saloons and the various apartments should all open
into the largest one. In the same manner should be arranged the
kitchens and other places, because if there were not this order and if
the whole composition were broken up, one thing high, another low,
this great and that small, it would represent lame men, halt,
distorted, and maimed. Such works would merit only blame, and no
praise whatever. When there are decorated wall-faces either external
or internal, the compositions must follow the rules of the Orders in
the matter of the columns, so that the shafts of the columns be not
too long nor slender, not over thick nor short, but that the dignity of
the several Orders be always observed. Nor should a heavy capital or
base be connected with a slender column, but in proportion to the
body must be the members, that they may have an elegant and
beautiful appearance and design. All these things are best
appreciated by a correct eye, which, if it have discrimination, can
hold the true compasses and estimate exact measurements, because
by it alone shall be awarded praise or blame. And this is enough to
have said in a general sense of architecture, because to speak of it in
any other way is not matter for this place.
NOTES ON ‘INTRODUCTION’ TO
ARCHITECTURE

PORPHYRY AND PORPHYRY QUARRIES.

[See § 2, Of Porphyry, ante, p. 26.]


Porphyry, which is mineralogically described as consisting of
crystals of plagioclase felspar in a purple felspathic paste, is a very
hard stone of beautiful colour susceptible of a high polish. ‘No
material,’ it has been said, ‘can approach it, either in colour, fineness
of grain, hardness or toughness. When used alone its colour is always
grand; and in combination with any other coloured material,
although displaying its nature conspicuously, it is always
harmonious’ (Transactions, Royal Institute of British Architects,
1887, p. 48). Though obtained, as Vasari knew, from Egypt, it was
not known to the dynastic Egyptians, but was exploited with avidity
by the Romans of the later imperial period. The earliest mention of it
seems to be in Pliny, Hist. Nat., XXXVI, 11, under the name
‘porphyrites’ and statues in the material were according to this
author sent for the first time to Rome from Egypt in the reign of
Claudius. The new material was however not approved of, and for
some time was by no means in fashion. It was not indeed till the age
of the Antonines that as Helbig remarks ‘the preference for costly
and rare varieties of stone, without reference to their adaptability for
sculpture, began to spread.’ After this epoch, the taste for porphyry
and other such strongly marked or else intractable materials grew till
it became a passion, and the Byzantine emperors carried on the
tradition of its use inherited by them from the later days of
paganism. The material was quarried in the mountains known as
Djebel Duchan near the coast of the Red Sea, almost opposite the
southern point of the peninsula of Sinai, and the Romans carried the
blocks a distance of nearly 100 miles to Koptos on the Nile whence
they were transported down stream to Alexandria, where Mr
Brindley thinks there would be reserve dépôts where lapidaries and
artists resided, a source of supply for the large quantities used by
Constantine. The same authority estimates that there must be about
