Name: Arko

Abstract: A computer has three logical systems. It stores frequently accessed instructions and data
for the CPU. ... "Cache" comes from the French word "cacher," which means "to hide." This is a
reference to early cache implementations, in which the cache was invisible to the user and the CPU.

Introduction: Cache memory is an extremely fast type of memory that acts as a buffer
between RAM and the CPU. ... Cache memory is used to reduce the average time to access data
from main memory. The cache is a smaller, faster memory that stores copies of the data
from frequently used main memory locations.

1. Redis

Redis (REmote DIctionary Server) is a free and open-source, fast, high-performance, and
flexible distributed in-memory computing system that can be used from most, if not all,
programming languages.

It is an in-memory data structure store that works as a caching engine, in-memory persistent on-disk
database, and message broker. Although it is developed and tested on Linux (the recommended
platform for deploying) and OS X, Redis also works in other POSIX systems such as *BSD,
without any external dependencies.

Redis supports numerous data structures such as strings, hashes, lists, sets, sorted sets, bitmaps,
streams, and more. This enables programmers to use a specific data structure for solving a specific
problem. It supports automatic operations on its data structure such as appending to a string,
pushing elements to a list, incrementing the value of a hash, computing set intersection, and more.

Its key features include Redis master-slave replication (which is asynchronous by default), high


availability and automatic failover offered using Redis Sentinel, Redis cluster (you can scale
horizontally by adding more cluster nodes) and data partitioning (distributing data among multiple
Redis instances). It also features support for transactions, Lua scripting, a range of persistence
options, and encryption of client-server communication.
Being an in-memory but persistent-on-disk database, Redis offers its best performance when
working with a dataset that fits in memory. However, you can use it alongside an on-disk database
such as MySQL, PostgreSQL, and many others. For example, you can keep small, very write-heavy
data in Redis and leave the other chunks of the data in an on-disk database.

Redis supports security in several ways: a "protected-mode" feature secures Redis
instances from being accessed from external networks; client-server authentication
(where a password is configured on the server and provided by the client); and TLS on all
communication channels, such as client connections, replication links, and the Redis Cluster bus
protocol.

Redis has very many use cases, including database caching, full-page caching, user session data
management, API response storage, publish/subscribe messaging, message queues, and
more. These can be applied in games, social networking applications, RSS feeds, real-time data
analytics, user recommendations, and so on.

2. Memcached

Memcached is a free and open-source, simple yet powerful, distributed memory object caching
system. It is an in-memory key-value store for small chunks of data such as results of database calls,
API calls, or page rendering. It runs on Unix-like operating systems including Linux and OS X and
also on Microsoft Windows.

Being a developer tool, it is intended for use in boosting speeds of dynamic web applications by
caching content (by default, a Least Recently Used (LRU) cache) thus reducing the on-disk
database load – it acts as a short term memory for applications. It offers an API for the most popular
programming languages.

Memcached supports strings as the only data type. It has a client-server architecture, where half of
the logic happens on the client side and the other half on the server side. Importantly, clients
understand how to pick which server to write to or read from for a given item. A client also knows
what to do in case it cannot connect to a server.
Although it is a distributed caching system and thus supports clustering, the Memcached servers are
disconnected from each other (i.e., they are unaware of one another). This means that there is no
replication support as in Redis. Servers understand how to store and fetch items and manage when
to evict or reuse memory. You can increase available memory by adding more servers.

It supports authentication and encryption via TLS as of Memcached 1.5.13, but this feature is still in
the experimental phase.

3. Apache Ignite

Apache Ignite is also a free and open-source, horizontally scalable, distributed in-memory key-value
store, cache, and multi-model database system that provides powerful processing APIs for
computing on distributed data. It is also an in-memory data grid that can be used either purely in
memory or with Ignite native persistence. It runs on UNIX-like systems such as Linux, and also on Windows.

It features multi-tier storage, complete SQL support, and ACID (Atomicity, Consistency,
Isolation, Durability) transactions (supported only at the key-value API level) across multiple cluster
nodes, as well as co-located processing and machine learning. It supports automatic integration with
third-party databases, including any RDBMS (such as MySQL, PostgreSQL, or Oracle Database) or
NoSQL store.

It is important to note that although Ignite works as an SQL data store, it is not fully an SQL
database. It handles constraints and indexes differently from traditional databases: it supports
primary and secondary indexes, but only the primary indexes are used to enforce uniqueness,
and it has no support for foreign key constraints.

Ignite also supports security by allowing you to enable authentication on the server and provide
user credentials on clients. There is also support for SSL socket communication to provide secure
connections among all Ignite nodes.

Ignite has many use cases, including caching systems, system workload acceleration, real-time
data processing, and analytics. It can also be used as a graph-centric platform.

4. Couchbase Server

Couchbase Server is also an open-source, distributed, NoSQL document-oriented engagement
database that stores data as items in a key-value format. It works on Linux and other operating
systems such as Windows and Mac OS X. It uses a feature-rich, document-oriented query language
called N1QL, which provides powerful querying and indexing services to support sub-millisecond
operations on data.

Its notable features are a fast key-value store with managed cache, purpose-built indexers, a
powerful query engine, scale-out architecture (multi-dimensional scaling), big data and SQL
integration, full-stack security, and high-availability.

Couchbase Server comes with native multi-instance cluster support, where a cluster manager
tool coordinates all node activities and provides a single cluster-wide interface to clients.
Importantly, you can add, remove, or replace nodes as required, with no downtime. It also supports
data replication across the nodes of a cluster, and selective data replication across data centers.

It implements security through TLS using dedicated Couchbase Server ports, different
authentication mechanisms (using either credentials or certificates), role-based access control
(checking each authenticated user for the system-defined roles they are assigned), auditing, logs,
and sessions.

Its use cases include a unified programming interface, full-text search, parallel query processing,
document management, indexing, and much more. It is specifically designed to provide low-latency
data management for large-scale interactive web, mobile, and IoT applications.

5. Hazelcast IMDG

Hazelcast IMDG (In-Memory Data Grid) is an open-source, lightweight, fast, and extendable
in-memory data grid middleware that provides elastically scalable distributed in-memory
computing. Hazelcast IMDG runs on Linux, Windows, Mac OS X, and any other platform
with Java installed. It supports a wide variety of flexible and language-native data structures such as
Map, Set, List, MultiMap, RingBuffer, and HyperLogLog.

Hazelcast is peer-to-peer; it supports simple scalability, cluster setup (with options to gather
statistics, monitor via the JMX protocol, and manage the cluster with useful utilities), distributed
data structures and events, data partitioning, and transactions. It is also redundant, keeping a backup
of each data entry on multiple members. To scale your cluster, simply start another instance; data
and backups are automatically and evenly balanced.

It provides a collection of useful APIs to access the CPUs in your cluster for maximum processing
speed. It also offers distributed implementations of a large number of developer-friendly interfaces
from Java such as Map, Queue, ExecutorService, Lock, and JCache.

Its security features include cluster member and client authentication, and access-control checks on
client operations via JAAS-based security. It also allows intercepting socket connections and remote
operations executed by clients, socket-level communication encryption between cluster members,
and enabling SSL/TLS socket communication. According to the official documentation, however,
most of these security features are offered only in the Enterprise version.

Its most popular use case is distributed in-memory caching and data storage. But it can also be
deployed for web session clustering, NoSQL replacement, parallel processing, easy messaging, and
much more.

6. Mcrouter

Mcrouter is a free and open-source Memcached protocol router for scaling Memcached
deployments, developed and maintained by Facebook. It features Memcached ASCII protocol,
flexible routing, multi-cluster support, multi-level caches, connection pooling, multiple hashing
schemes, prefix routing, replicated pools, production traffic shadowing, online reconfiguration, and
destination health monitoring/automatic failover.

Additionally, it supports cold cache warm-up, rich stats and debug commands, reliable delete
stream quality of service, large values, and broadcast operations, and comes with IPv6 and SSL support.

It is used at Facebook and Instagram as a core component of the cache infrastructure, handling
almost 5 billion requests per second at peak.

7. Varnish Cache

Varnish Cache is an open-source, flexible, modern, and multi-purpose web application accelerator
that sits between web clients and an origin server. It runs on all modern Linux, FreeBSD, and
Solaris (x86 only) platforms. It is an excellent caching engine and content accelerator that you can
deploy in front of a web server such as NGINX or Apache to listen on the default HTTP port,
receive and forward client requests to the web server, and deliver the web server's response to the
client.

While acting as a middleman between clients and the origin servers, Varnish Cache offers several
benefits, the most fundamental being caching web content in memory, which alleviates your web
server's load and improves delivery speeds to clients.

After receiving an HTTP request from a client, it forwards the request to the backend web server.
Once the web server responds, Varnish caches the content in memory and delivers the response to
the client. When the client requests the same content again, Varnish serves it from the cache,
boosting application response time. If it cannot serve the content from the cache, the request is
forwarded to the backend, and the response is cached and delivered to the client.

Varnish features VCL (Varnish Configuration Language, a flexible domain-specific language)
used to configure how requests are handled, and Varnish Modules (VMODs), which are
extensions for Varnish Cache.

Security-wise, Varnish Cache supports logging, request inspection, throttling, authentication,
and authorization via VMODs, but it lacks native support for SSL/TLS. You can
enable HTTPS for Varnish Cache using an SSL/TLS proxy such as Hitch or NGINX.

You can also use Varnish Cache as a web application firewall, DDoS attack defender, hotlinking
protector, load balancer, integration point, single sign-on gateway, authentication and authorization
policy mechanism, quick fix for unstable backends, and HTTP request router.

8. Squid Caching Proxy


Squid is another free and open-source, outstanding, and widely used proxy and caching solution for
Linux. It is feature-rich web proxy cache server software that provides proxy and caching
services for popular network protocols, including HTTP, HTTPS, and FTP. It also runs on other
UNIX platforms and on Windows.

Just like Varnish Cache, it receives requests from clients and passes them to specified backend
servers. When the backend server responds, it stores a copy of the content in a cache and passes it to
the client. Future requests for the same content will be served from the cache, resulting in faster
content delivery to the client. So it optimizes the data flow between client and server to improve
performance and caches frequently-used content to reduce network traffic and save bandwidth.

Squid comes with features such as distributing the load over intercommunicating hierarchies of
proxy servers and producing data about web usage patterns (e.g., statistics about the most-visited
sites), and it enables you to analyze, capture, block, replace, or modify the messages being proxied.

It also offers security features such as rich access control, authorization and authentication,
SSL/TLS, and activity logging.

9. NGINX

NGINX (pronounced Engine-X) is an open-source, high-performance, full-featured, and very
popular consolidated solution for setting up web infrastructure. It is an HTTP server, a reverse proxy
server, a mail proxy server, and a generic TCP/UDP proxy server.

NGINX offers basic caching capabilities where cached content is stored in a persistent cache on
disk. The fascinating part about content caching in NGINX is that it can be configured to deliver
stale content from its cache when it can’t fetch fresh content from the origin servers.

NGINX offers a multitude of security features to secure your web systems; these include SSL
termination, restricting access with HTTP basic authentication, authentication based on the
subrequest result, JWT authentication, restricting access to proxied HTTP resources, restricting
access by geographical location, and much more.

It is commonly deployed as a reverse proxy, load balancer, SSL terminator/security gateway,
application accelerator/content cache, and API gateway in an application stack. It is also used for
streaming media.

10. Apache Traffic Server

Last but not least, we have Apache Traffic Server, an open-source, fast, scalable, and extensible
caching proxy server with support for HTTP/1.1 and HTTP/2.0. It is designed to improve network
efficiency and performance by caching frequently accessed content at the edge of a network, for
enterprises, ISPs (Internet Service Providers), backbone providers, and more.

It supports both forward and reverse proxying of HTTP/HTTPS traffic and can be configured to
run in either mode, or in both modes simultaneously. It features persistent caching, plugin APIs,
support for ICP (Internet Cache Protocol) and ESI (Edge Side Includes), Keep-Alive, and more.

In terms of security, Traffic Server supports controlling client access by allowing you to configure
which clients are allowed to use the proxy cache, and SSL termination both for connections between
clients and itself and for connections between itself and the origin server. It also supports
authentication and basic authorization via a plugin, logging (of every request it receives and every
error it detects), and monitoring.

Traffic Server can be used as a web proxy cache, forward proxy, reverse proxy, transparent proxy,
load balancer, or in a cache hierarchy.

My main work: The cache holds frequently requested data and instructions so that they are
immediately available to the CPU when needed. Cache memory is used to reduce the average
time to access data from main memory. The cache is a smaller and faster memory that
stores copies of the data from frequently used main memory locations.

Simulation results: You're supposed to get:

hits: 604037
misses: 138349
writes: 239269
reads: 138349

But I get:

Hits: 587148
Misses: 155222
Writes: 239261
Reads: 155222
If anyone could at least point me in the right direction it would be greatly appreciated. I've been
stuck on this for about 12 hours.

#include <stdio.h>
#include <stdlib.h> /* was the typo <stdlio.h>; stdlib.h is needed for atoi() and malloc() */
#include <string.h>
#include <math.h>

struct myCache
{
int valid;
char *tag;
char *block;
};

/*
sim [-h] <cache size><associativity><block size><replace alg><write policy>
<trace file>
*/

//God willing I come up with a better Hex-to-Bin conversion that preserves the leading 0s...
void hex2bin(char input[], char output[])
{
int i;
int a = 0;
int b = 1;
int c = 2;
int d = 3;
int x = 4;
int size;
size = strlen(input);

for (i = 0; i < size; i++)


{
if (input[i] =='0')
{
output[i*x +a] = '0';
output[i*x +b] = '0';
output[i*x +c] = '0';
output[i*x +d] = '0';
}
else if (input[i] =='1')
{
output[i*x +a] = '0';
output[i*x +b] = '0';
output[i*x +c] = '0';
output[i*x +d] = '1';
}
else if (input[i] =='2')
{
output[i*x +a] = '0';
output[i*x +b] = '0';
output[i*x +c] = '1';
output[i*x +d] = '0';
}
else if (input[i] =='3')
{
output[i*x +a] = '0';
output[i*x +b] = '0';
output[i*x +c] = '1';
output[i*x +d] = '1';
}
else if (input[i] =='4') //was 'x', so the hex digit '4' was never converted
{
output[i*x +a] = '0';
output[i*x +b] = '1';
output[i*x +c] = '0';
output[i*x +d] = '0';
}
else if (input[i] =='5')
{
output[i*x +a] = '0';
output[i*x +b] = '1';
output[i*x +c] = '0';
output[i*x +d] = '1';
}
else if (input[i] =='6')
{
output[i*x +a] = '0';
output[i*x +b] = '1';
output[i*x +c] = '1';
output[i*x +d] = '0';
}
else if (input[i] =='7')
{
output[i*x +a] = '0';
output[i*x +b] = '1';
output[i*x +c] = '1';
output[i*x +d] = '1';
}
else if (input[i] =='8')
{
output[i*x +a] = '1';
output[i*x +b] = '0';
output[i*x +c] = '0';
output[i*x +d] = '0';
}
else if (input[i] =='9')
{
output[i*x +a] = '1';
output[i*x +b] = '0';
output[i*x +c] = '0';
output[i*x +d] = '1';
}
else if (input[i] =='a')
{
output[i*x +a] = '1';
output[i*x +b] = '0';
output[i*x +c] = '1';
output[i*x +d] = '0';
}
else if (input[i] =='b')
{
output[i*x +a] = '1';
output[i*x +b] = '0';
output[i*x +c] = '1';
output[i*x +d] = '1';
}
else if (input[i] =='c')
{
output[i*x +a] = '1';
output[i*x +b] = '1';
output[i*x +c] = '0';
output[i*x +d] = '0';
}
else if (input[i] =='d')
{
output[i*x +a] = '1';
output[i*x +b] = '1';
output[i*x +c] = '0';
output[i*x +d] = '1';
}
else if (input[i] =='e')
{
output[i*x +a] = '1';
output[i*x +b] = '1';
output[i*x +c] = '1';
output[i*x +d] = '0';
}
else if (input[i] =='f')
{
output[i*x +a] = '1';
output[i*x +b] = '1';
output[i*x +c] = '1';
output[i*x +d] = '1';
}
}

output[32] = '\0';
}

int main(int argc, char* argv[])


{
FILE *tracefile;
char readwrite;
int trash;
int cachesize;
int blocksize;
int setnumber;
int blockbytes;
int setbits;
int blockbits;
int tagsize;
int m;
int count = 0;
int count2 = 0;
int count3 = 0;
int i;
int j;
int xindex;
int jindex;
int kindex;
int lindex;
int setadd;
int totalset;
int writeMiss = 0;
int writeHit = 0;
int cacheMiss = 0;
int cacheHit = 0;
int read = 0;
int write = 0;
int size;
int extra;

char bbits[100];
char sbits[100];
char tbits[100];
char output[100];
char input[100];
char origtag[100];

if (argc != 7)
{
if (argc == 2 && strcmp(argv[1], "-h") == 0) //strcmp returns 0 on a match; argv[0] is the program name, so check argv[1]
{
printf("./sim2 <cache size><associativity><block size><replace alg><write policy><trace file>\n");
return 0;
}
else
{
fprintf(stderr, "Error: wrong number of parameters.\n");
return -1;
}
}

tracefile = fopen(argv[6], "r");

if(tracefile == NULL)
{
fprintf(stderr, "Error: File is NULL.\n");
return -1;
}

//Determining size of sbits, bbits, and tag


cachesize = atoi(argv[1]);
blocksize = atoi(argv[3]);
setnumber = (cachesize/blocksize);
printf("setnumber: %d\n", setnumber);
setbits = (round((log(setnumber))/(log(2))));
printf("sbits: %d\n", setbits);
blockbits = log(blocksize)/log(2);
printf("bbits: %d\n", blockbits);
tagsize = 32 - (blockbits + setbits);
printf("t: %d\n", tagsize);

struct myCache newCache[setnumber];

//Allocating Space for Tag Bits, initiating tag and valid to 0s


for(i=0;i<setnumber;i++)
{
newCache[i].tag = (char *)malloc(sizeof(char)*(tagsize+1));
for(j=0;j<tagsize;j++)
{
newCache[i].tag[j] = '0';
}
newCache[i].valid = 0;
}

while(fgetc(tracefile)!='#')
{
setadd = 0;
totalset = 0;
//read in file
fseek(tracefile,-1,SEEK_CUR);
fscanf(tracefile, "%x: %c %s\n", &trash, &readwrite, origtag);

//shift input Hex


size = strlen(origtag);
extra = (10 - size);
for(i=0; i<extra; i++)
input[i] = '0';

for(i=extra, j=0; i<(size-(2-extra)); j++, i++)


input[i]=origtag[j+2];

input[8] = '\0';

// Convert Hex to Binary


hex2bin(input, output);

//Resolving the Address into tbits, sbits, bbits


for (xindex=0, jindex=(32-blockbits); jindex<32; jindex++, xindex++)
{
bbits[xindex] = output[jindex];
}
bbits[xindex]='\0';

for (xindex=0, kindex=(32-(blockbits+setbits)); kindex<32-(blockbits); kindex++, xindex++){


sbits[xindex] = output[kindex];
}
sbits[xindex]='\0';

for (xindex=0, lindex=0; lindex<(32-(blockbits+setbits)); lindex++, xindex++){


tbits[xindex] = output[lindex];
}
tbits[xindex]='\0';
//Convert set bits from char array into ints
for(xindex = 0, kindex = (setbits -1); xindex < setbits; xindex ++, kindex--)
{
if (sbits[xindex] == '1')
setadd = 1;
if (sbits[xindex] == '0')
setadd = 0;
setadd = setadd * pow(2, kindex);
totalset += setadd;
}

//Calculating Hits and Misses


if (newCache[totalset].valid == 0)
{
//cold miss: count it like any other miss (the expected results have reads == misses)
cacheMiss++;
read++;
if (readwrite == 'W')
write++;
newCache[totalset].valid = 1;
strcpy(newCache[totalset].tag, tbits);
}

else if (newCache[totalset].valid == 1)
{
if(strcmp(newCache[totalset].tag, tbits) == 0)
{
if (readwrite == 'W')
{
cacheHit++;
write++;
}
if (readwrite == 'R')
cacheHit++;
}
else
{
if (readwrite == 'R')
{
cacheMiss++;
read++;
}
if (readwrite == 'W')
{
cacheMiss++;
read++;
write++;
}
strcpy(newCache[totalset].tag, tbits);
}
}
}
printf("Hits: %d\n", cacheHit);
printf("Misses: %d\n", cacheMiss);
printf("Writes: %d\n", write);
printf("Reads: %d\n", read);
}
Conclusion:

This work presents an easy mechanism for reducing the memory traffic between the cache and the
next level of the memory hierarchy. The idea requires minimal hardware support. It is based on the
particular behaviour of some writes to memory that do not change its contents, and it can be applied
to any of the current cache organizations. These particular stores are what we call redundant stores.
We have shown that a significant reduction in memory traffic can be achieved: on average, close to
7% for a cache with CB-WA (copy-back, write-allocate) and 19% with WT-NWA (write-through,
no-write-allocate).

