CACHE Technical Discussion: sina@冰砖帮帮忙
CACHE
INTRO
1. Users will get upset, complain, and may even stop using the application.
2. The backing store will be overwhelmed and may become unavailable to the application, which is a big problem (no place to store or retrieve data).
1. When a client sends a request (say, to view product information) and our application needs to access the product data in its backing storage (the database), it first checks the cache.
2. If an entry is found with a tag matching that of the desired data (say, the product id), that entry is used instead. This is known as a cache hit (the cache hit is the primary measurement of caching effectiveness; we will discuss that later on).
3. The percentage of accesses that result in cache hits is known as the hit rate or hit ratio of the cache.
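The lookup flow above can be sketched in a few lines of Python. This is a minimal illustration, assuming a plain dict-backed cache; the `fetch` callback standing in for the database call is hypothetical.

```python
class SimpleCache:
    def __init__(self):
        self.entries = {}   # tag (e.g. product id) -> cached data
        self.hits = 0
        self.misses = 0

    def get(self, tag, fetch):
        if tag in self.entries:          # cache hit: use the cached entry
            self.hits += 1
            return self.entries[tag]
        self.misses += 1                 # cache miss: go to back storage
        data = fetch(tag)
        self.entries[tag] = data         # place it in the cache for next time
        return data

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = SimpleCache()
cache.get(42, lambda pid: {"id": pid, "name": "demo product"})  # miss
cache.get(42, lambda pid: {"id": pid, "name": "demo product"})  # hit
print(cache.hit_ratio())  # 0.5
```

The first access misses and populates the cache; the second finds the tag and hits, giving a hit ratio of 0.5.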
On the contrary, when the tag is not found in the cache (no match was found), this is known as a cache miss. A request to the backing storage is made, the data is fetched back and placed in the cache, so future requests for it will result in a cache hit. When a cache miss occurs, there are two possible scenarios:
1. There is free space in the cache (the cache has not reached its limit), so the object that caused the cache miss is retrieved from our storage and inserted into the cache.
2. There is no free space in the cache (the cache has reached its capacity), so the object that caused the cache miss is fetched from storage, and we have to decide which object in the cache to remove in order to place our newly retrieved object. This is done by a replacement policy (caching algorithm) that decides which entry to evict to make room, which is discussed below.
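The two miss scenarios can be sketched as follows. This is an illustration only, using a fixed capacity and simple FIFO eviction as a stand-in for a real replacement policy; the `fetch` callback is hypothetical.

```python
from collections import OrderedDict

class BoundedCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order lets us evict FIFO

    def get(self, tag, fetch):
        if tag in self.entries:
            return self.entries[tag]                   # hit
        data = fetch(tag)                              # miss: fetch from storage
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)           # full: evict an entry first
        self.entries[tag] = data                       # free space: just insert
        return data

cache = BoundedCache(capacity=2)
cache.get("a", str.upper)  # miss, free space -> insert
cache.get("b", str.upper)  # miss, free space -> insert
cache.get("c", str.upper)  # miss, cache full -> evict, then insert
```

After the third miss the cache is full, so one entry is evicted before "c" is inserted.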
When a cache miss occurs, the data is fetched from the backing storage, loaded, and placed in the cache. But how much space does the data we just fetched take up in cache memory? This is known as the storage cost.
When we need to load the data, we also need to know how much it costs to load it. This is known as the retrieval cost.
When a cache miss happens and there is not enough room, the cache evicts some other entry in order to make space for the previously uncached data. The heuristic used to select the entry to evict is known as the replacement policy.
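One widely used replacement policy is LRU (least recently used). A minimal sketch, assuming an `OrderedDict` to track recency:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # oldest (least recently used) first

    def get(self, tag):
        if tag not in self.entries:
            return None                      # miss
        self.entries.move_to_end(tag)        # mark as most recently used
        return self.entries[tag]

    def put(self, tag, data):
        if tag in self.entries:
            self.entries.move_to_end(tag)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used entry
        self.entries[tag] = data

lru = LRUCache(capacity=2)
lru.put("a", 1)
lru.put("b", 2)
lru.get("a")        # "a" becomes most recently used
lru.put("c", 3)     # cache full: "b" is the LRU entry and gets evicted
```

Touching "a" before inserting "c" makes "b" the least recently used entry, so "b" is the one evicted.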
I am Two Queues (2Q): I add entries to an LRU cache as they are accessed; if an entry is accessed again, I move it to a second, larger LRU cache. Because I need to track the two most recent accesses, my access overhead increases with cache size; applied to a large cache, that can be a disadvantage. In addition, I have to keep track of some items that are not yet in the cache (they have not been requested twice yet). I perform better than plain LRU, and I am also adaptive to access patterns.
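A simplified sketch of the Two Queues idea described above: first accesses land in a small probationary LRU; a second access promotes the entry to a larger, protected LRU. The queue sizes here are illustrative, and this omits details of the full 2Q algorithm.

```python
from collections import OrderedDict

class TwoQueues:
    def __init__(self, probation_size=4, protected_size=16):
        self.probation = OrderedDict()   # entries seen once
        self.protected = OrderedDict()   # entries seen at least twice
        self.probation_size = probation_size
        self.protected_size = protected_size

    def access(self, tag, data):
        if tag in self.protected:                    # repeat access: refresh LRU order
            self.protected.move_to_end(tag)
            return self.protected[tag]
        if tag in self.probation:                    # second access: promote
            self.probation.pop(tag)
            if len(self.protected) >= self.protected_size:
                self.protected.popitem(last=False)   # evict LRU of protected queue
            self.protected[tag] = data
            return data
        if len(self.probation) >= self.probation_size:
            self.probation.popitem(last=False)       # evict LRU of probation queue
        self.probation[tag] = data                   # first access
        return data

tq = TwoQueues()
tq.access("product:1", "v1")   # first access -> probation queue
tq.access("product:1", "v1")   # second access -> promoted to protected queue
```

Items touched only once stay in the small probationary queue, so a burst of one-off requests cannot flush the frequently used entries out of the protected queue.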
Distributed caching:
1. Cached data can be stored in a memory area separate from the caching directory itself (which manages the cache entries), for example across the network or on disk.
2. Distributing the cache allows the cache size to increase.
3. In this case the retrieval cost will also increase, due to network request time.
4. The larger cache size also leads to a higher hit ratio.
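One common way to spread cached entries across several machines is to hash the tag and pick a node from the result. A minimal sketch; the node names are hypothetical, and a production system would typically use consistent hashing so that adding a node does not remap most tags.

```python
import hashlib

NODES = ["cache-node-1", "cache-node-2", "cache-node-3"]

def node_for(tag):
    # Hash the tag and map it deterministically onto one of the cache nodes.
    digest = hashlib.md5(str(tag).encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

print(node_for("product:42"))
```

Every client computes the same node for the same tag, so lookups for a given entry always go to the machine that holds it.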
Thank You