Lecture 11: Splay Trees and the Memory Hierarchy


The Splay Tree Idea

If you're forced to make a really deep access: since you're down there anyway, fix up a lot of deep nodes!

(diagram: a deep access path in an unbalanced BST)

Find/Insert in Splay Trees


1. Find or insert a node k
2. Splay k to the root using zig-zag, zig-zig, or plain old zig rotations

Why could this be good?

1. Helps the new root, k
   - Great if k is accessed again
2. And helps many others!
   - Great if many others on the path are accessed

Splay: Zig-Zag*

(diagram: node k with parent p and grandparent g and subtrees W, X, Y, Z; a double rotation brings k above both p and g)

*Just like an AVL double rotation

Splay: Zig-Zig*

(diagram: k, p, and g all leaning the same way with subtrees W, X, Y, Z; two rotations bring k to the top and push g below p)

Which nodes improve depth?

*Is this just two AVL single rotations in a row?

Special Case for Root: Zig

(diagram: the root p with child k and subtrees X, Y, Z; a single rotation makes k the new root)

Relative depth of p, Y, Z? Relative depth of everyone else?
Why not drop zig-zig and just zig all the way?
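The three cases can be put together into one short routine. The following is a minimal C sketch, not taken from the slides: the Node struct and the rotate/splay helpers are illustrative assumptions for a bottom-up splay with parent pointers. Note that in the zig-zig case the parent is rotated over the grandparent first and k second, which is what distinguishes it from simply doing two bottom-up single rotations on k.

    /* Bottom-up splay sketch (illustrative, not from the slides): nodes carry
       parent pointers, and splay(k) applies zig, zig-zig, or zig-zag steps
       until k is the root. */
    typedef struct Node {
        int key;
        struct Node *left, *right, *parent;
    } Node;

    /* Rotate n up over its parent (one AVL-style single rotation). */
    static void rotate(Node **root, Node *n) {
        Node *p = n->parent, *g = p->parent;
        if (p->left == n) {                    /* n is a left child: right rotation */
            p->left = n->right;
            if (n->right) n->right->parent = p;
            n->right = p;
        } else {                               /* n is a right child: left rotation */
            p->right = n->left;
            if (n->left) n->left->parent = p;
            n->left = p;
        }
        p->parent = n;
        n->parent = g;
        if (!g)                *root = n;      /* n replaced the old root */
        else if (g->left == p) g->left = n;
        else                   g->right = n;
    }

    /* Splay k to the root using zig-zag, zig-zig, or plain old zig. */
    static void splay(Node **root, Node *k) {
        while (k->parent) {
            Node *p = k->parent, *g = p->parent;
            if (!g) {
                rotate(root, k);               /* zig: p is the root */
            } else if ((g->left == p) == (p->left == k)) {
                rotate(root, p);               /* zig-zig: rotate p over g first, */
                rotate(root, k);               /* then k over p */
            } else {
                rotate(root, k);               /* zig-zag: rotate k over p, */
                rotate(root, k);               /* then k over g */
            }
        }
    }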

Splaying Example: Find(6)

(diagrams: Find(6) on a degenerate six-node tree containing 1 through 6; the first zig-zig step rotates 6 up two levels)

Still Splaying 6

(diagrams: another zig-zig step continues moving 6 up two levels at a time)

Finally…

(diagram: a final rotation brings 6 to the root; the nodes that were on the access path are now much shallower)

Another Splay: Find(4)

(diagram: Find(4) in the tree produced by the previous splay; 4 is splayed to the root)

Example Splayed Out

(diagram: the resulting tree with 4 at the root)

But Wait… What happened here?

(diagrams: the Original Tree, the degenerate chain of 1 through 6, next to the Tree after two finds, which is much shallower)

Didn't the two find operations take linear time instead of logarithmic? What about the amortized O(log n) guarantee?


Why Splaying Helps

- If a node n on the access path is at depth d before the splay, it's at about depth d/2 after the splay
- Overall, nodes which are low on the access path tend to move closer to the root
- Splaying gets amortized O(log n) performance (maybe not now, but soon, and for the rest of the operations)

Practical Benefit of Splaying

- No heights to maintain, no imbalance to check for
  - Less storage per node, easier to code
- Often data that is accessed once is soon accessed again!
  - Splaying does implicit caching by bringing it to the root


Splay Operations: Find

- Find the node in the normal BST manner
- Splay the node to the root
  - If the node is not found, splay what would have been its parent
- What if we didn't splay? (a sketch follows below)
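Continuing the hedged C sketch from earlier (same assumed Node struct and splay() helper, not from the slides), find can splay either the node it located or the last node it touched on the way down:

    #include <stddef.h>   /* NULL */

    /* Search in the normal BST manner, then splay the found node, or, if the
       key is absent, splay the last node visited (the would-be parent). */
    static Node *splay_find(Node **root, int key) {
        Node *cur = *root, *last = NULL;
        while (cur) {
            last = cur;
            if (key < cur->key)      cur = cur->left;
            else if (key > cur->key) cur = cur->right;
            else break;                        /* found the key */
        }
        if (last) splay(root, last);           /* splay node or would-be parent */
        return cur;                            /* NULL if the key is not present */
    }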

Splay Operations: Insert

- Insert the node in the normal BST manner
- Splay the new node to the root
- What if we didn't splay? (a sketch follows below)
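In the same hedged sketch (again reusing the assumed Node, rotate, and splay from above), insert is an ordinary BST insert followed by a splay of the new node:

    #include <stdlib.h>   /* malloc, NULL */

    /* Insert in the normal BST manner, then splay the new node to the root.
       If the key is already present, that node is splayed instead. */
    static void splay_insert(Node **root, int key) {
        Node *parent = NULL, *cur = *root;
        while (cur) {                          /* ordinary BST descent */
            parent = cur;
            if (key < cur->key)      cur = cur->left;
            else if (key > cur->key) cur = cur->right;
            else { splay(root, cur); return; } /* duplicate key */
        }
        Node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        n->parent = parent;
        if (!parent)                *root = n; /* tree was empty */
        else if (key < parent->key) parent->left = n;
        else                        parent->right = n;
        splay(root, n);                        /* splay the new node to the root */
    }

Without that final splay this would be a plain BST insert, and a bad access pattern could keep the tree, and therefore later operations, linear.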


Splay Operations: Remove

- find(k): splay k to the root, then delete it, leaving two subtrees L (everything < k) and R (everything > k)
- Now what?

Join

- Join(L, R): given two trees such that (stuff in L) < (stuff in R), merge them
- Splay on the maximum element in L, then attach R as the new root's right subtree

Delete Example

Delete(4) on a tree containing 1, 2, 4, 6, 7, 9:

(diagrams: find(4) splays 4 to the root; deleting 4 leaves subtrees {1, 2} and {6, 7, 9}; the maximum of the left subtree, 2, is splayed to its root, and the right subtree is attached; a sketch of the whole operation follows below)
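Continuing the same hedged C sketch (reusing the assumed Node, splay, and splay_find from above, plus free from <stdlib.h>), remove is find + delete + join, where join splays the maximum of the left subtree and hangs the right subtree off it:

    /* Remove key: splay it to the root, delete it, then Join the two subtrees
       by splaying the maximum of L and attaching R as its right child. */
    static void splay_remove(Node **root, int key) {
        Node *k = splay_find(root, key);       /* if found, k is now the root */
        if (!k) return;                        /* key not present */
        Node *L = k->left, *R = k->right;      /* L holds < key, R holds > key */
        if (L) L->parent = NULL;
        if (R) R->parent = NULL;
        free(k);
        if (!L) { *root = R; return; }         /* no left subtree: R is the tree */
        Node *m = L;                           /* find the maximum of L ... */
        while (m->right) m = m->right;
        *root = L;
        splay(root, m);                        /* ... and splay it to the top */
        m->right = R;                          /* m has no right child, so attach R */
        if (R) R->parent = m;
    }

Typical use is splay_insert(&root, key), splay_find(&root, key), and splay_remove(&root, key), always passing the address of the root pointer because every operation may change which node is the root.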

Splay Tree Summary

- All operations are in amortized O(log n) time
- Splaying can be done top-down; this may be better because:
  - only one pass
  - no recursion or parent pointers necessary
  - (we didn't cover top-down in class)
- Splay trees are very effective search trees
  - Relatively simple
  - No extra fields required
  - Excellent locality properties: frequently accessed keys are cheap to find

Why do we need to know about the memory hierarchy/locality?

The Memory Hierarchy & Locality

One of the assumptions that Big-Oh makes is that all operations take the same amount of time. Is that really true?


Definitions

- Cycle (for our purposes): the time it takes to execute a single simple instruction (e.g. add two registers together)
- Memory Latency: the time it takes to access memory

The memory hierarchy (rough sizes and access times):

- CPU (has registers): time to access about 1 ns per instruction
- Cache (SRAM): 8 KB - 4 MB, 2-10 ns to access
- Main Memory (DRAM): up to 10 GB, 40-100 ns to access
- Disk: many GB, a few milliseconds (5-10 million ns) to access
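As a rough back-of-the-envelope comparison using the numbers above: one disk access at about 5-10 million ns costs as much time as roughly 5-10 million simple 1 ns instructions, and even a single trip to main memory costs about 40-100 of them, which is why the later slides focus on avoiding the slower levels.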



Moore's Law

(chart: "Processor-Memory Performance Gap", relative performance on a log scale from 1 to 1000 versus year, 1989-2001; x86 CPU speed grows about 100x over 10 years, marked by the 386, Pentium, Pentium Pro, Pentium III, and Pentium IV, while memory speed improves far more slowly, opening a widening memory gap, the so-called memory wall)


What can be done?


Goal: Attempt to reduce the number of accesses to the slower levels. How?

Locality

- Temporal Locality (locality in time): if an item is referenced, it will tend to be referenced again soon.
- Spatial Locality (locality in space): if an item is referenced, items whose addresses are close by will tend to be referenced soon.


Caches

- Each level is a subset of the level below
- Cache Hit: the address requested is in the cache
- Cache Miss: the address requested is NOT in the cache
- Cache line size (chunk size): the number of contiguous bytes that are moved into the cache at one time

Examples

Temporal locality: the same variable a is referenced over and over.

    x = a + 6;
    y = a + 5;
    z = 8 * a;

Spatial locality: neighboring array elements a[0], a[1], a[2] are referenced one after another.

    x = a[0] + 6;
    y = a[1] + 5;
    z = 8 * a[2];
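To connect the cache-line idea to code, here is a small, hedged C sketch (not from the slides; the array name and size are made up). Both loops compute the same sum, but the first walks the array in row-major order, so each cache line it pulls in is fully used, while the second jumps a whole row between accesses and wastes most of each line.

    #include <stdio.h>

    #define N 1024
    static int grid[N][N];              /* rows are contiguous in memory in C */

    int main(void) {
        long sum = 0;

        /* Good spatial locality: consecutive accesses are adjacent bytes,
           so each loaded cache line is used many times. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += grid[i][j];

        /* Poor spatial locality: consecutive accesses are N ints apart,
           so nearly every access can touch a different cache line. */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += grid[i][j];

        printf("%ld\n", sum);
        return 0;
    }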

