
Topic

Assignment Grading

Course: Name

Your Name

Professor’s Name [optional]

University


Consider two cloud service systems: the Google File System and Amazon S3. Explain how each achieves its design goals of securing data integrity and maintaining data consistency in the face of hardware failures, especially concurrent hardware failures.

Google File System (GFS): GFS is a distributed file system. Google uses it to store the data generated and processed by its services and to support research on very large data sets. Because the system continuously monitors itself, it can detect failures of disks, machines, and network links and recover data from the point of failure. If the master server fails, GFS maintains high availability through shadow masters, which hold up-to-date metadata and can recover state by replaying the operation log.
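The log-replay idea behind the shadow masters can be illustrated with a minimal sketch. This is a hypothetical simplification, not the real GFS implementation: the class names, the dictionary-based metadata, and the mutation format are all invented for illustration.

```python
# Hypothetical sketch of operation-log replay: the primary master logs
# every metadata mutation, and a shadow master replays the log to keep
# its own copy of the metadata current, ready for promotion on failure.

class Master:
    def __init__(self):
        self.metadata = {}   # e.g. file path -> list of chunk handles
        self.op_log = []     # append-only operation log

    def mutate(self, key, value):
        # Record the mutation in the log before applying it.
        self.op_log.append((key, value))
        self.metadata[key] = value

class ShadowMaster:
    def __init__(self):
        self.metadata = {}
        self.applied = 0     # index of the last replayed log entry

    def replay(self, op_log):
        # Apply, in order, any log entries not yet seen.
        for key, value in op_log[self.applied:]:
            self.metadata[key] = value
        self.applied = len(op_log)

master = Master()
master.mutate("/data/file1", ["chunk-1", "chunk-2"])
master.mutate("/data/file2", ["chunk-3"])

shadow = ShadowMaster()
shadow.replay(master.op_log)

# Because the shadow applied the same mutations in the same order,
# its metadata matches the primary's and it can be promoted on failure.
assert shadow.metadata == master.metadata
```

Because replay is driven purely by the ordered log, a shadow that falls behind can always catch up by replaying the entries it has not yet applied.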

After reading the operation log, a shadow master applies the same sequence of mutations to its own data structures, and when the master fails the system promotes one of the shadows to become the new master. GFS also protects data integrity through its chunkservers, which use checksumming to detect data corruption and then recover the affected data from another replica (each chunk is broken into 64 KB blocks, and each block carries a 32-bit checksum). Moreover, GFS defines a consistency model under which concurrent mutations are applied successfully and in the same order across all replicas of the data, wherever those replicas are located ("GFS - Google File System").
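The block-level checksumming described above can be sketched in a few lines. This is an illustrative toy, not chunkserver code: it assumes CRC32 as the 32-bit checksum (the function names and the in-memory "chunk" are invented for the example).

```python
import zlib

BLOCK_SIZE = 64 * 1024  # each chunk is broken into 64 KB blocks

def checksum_blocks(chunk: bytes):
    # Compute one 32-bit CRC checksum per 64 KB block.
    return [zlib.crc32(chunk[i:i + BLOCK_SIZE])
            for i in range(0, len(chunk), BLOCK_SIZE)]

def find_corrupt_blocks(chunk: bytes, checksums):
    # Recompute each block's checksum and report mismatches; a real
    # chunkserver would then re-fetch those blocks from another replica.
    return [i for i, c in enumerate(checksum_blocks(chunk))
            if c != checksums[i]]

chunk = bytes(200 * 1024)          # a 200 KB chunk -> 4 blocks
sums = checksum_blocks(chunk)

# Flip one byte inside the second 64 KB block to simulate disk corruption.
corrupted = chunk[:70 * 1024] + b"\xff" + chunk[70 * 1024 + 1:]
print(find_corrupt_blocks(corrupted, sums))   # -> [1]
```

The key point is that verification is local: a chunkserver can detect corruption in its own blocks without comparing data against other replicas, and only the corrupt block needs to be repaired.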

Amazon S3: Amazon S3 is the Simple Storage Service for the internet, offering highly scalable, reliable, low-latency cloud data storage. It is exposed as a web service interface through which an individual can store and retrieve data at any time, from anywhere, over the internet. Amazon maintains replicas of each stored object across the availability zones of a region, and this replication keeps S3 available in the event of a system failure. However, replication by itself does not protect against accidental deletion or loss of data integrity, because the replicas are simply copies of whatever is written. Depending on the account and storage class purchased, S3 offers standard-redundancy and reduced-redundancy options that meet different durability objectives. Additionally, S3 offers a versioning option that provides one more level of data protection: customers who accidentally delete or overwrite an object can recover the data they lost to an unintended user action or an application failure ("Amazon Simple Storage Service (S3) FAQs").
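The protection that versioning adds can be shown with a toy model. This is not the S3 API: the class, its methods, and the integer version IDs are invented here purely to illustrate why an overwrite or delete under versioning is recoverable.

```python
# Toy versioned object store: every put appends a new version, and a
# delete only appends a "delete marker" rather than destroying data,
# so any earlier version can still be retrieved.

class VersionedBucket:
    def __init__(self):
        self.versions = {}  # key -> list of (version_id, value or None)

    def put(self, key, value):
        history = self.versions.setdefault(key, [])
        history.append((len(history), value))

    def delete(self, key):
        # A delete writes a marker instead of removing the history.
        self.put(key, None)

    def get(self, key, version_id=None):
        history = self.versions.get(key, [])
        if not history:
            return None
        if version_id is None:
            return history[-1][1]   # latest version (may be a marker)
        return history[version_id][1]

bucket = VersionedBucket()
bucket.put("report.txt", "v1: final numbers")
bucket.put("report.txt", "v2: accidental overwrite")
bucket.delete("report.txt")

print(bucket.get("report.txt"))                 # None (delete marker)
print(bucket.get("report.txt", version_id=0))   # "v1: final numbers"
```

The design choice is that versioning trades storage space for safety: old versions accumulate until explicitly purged, which is exactly what makes recovery from unintended actions possible.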


References

GFS - Google File System. Retrieved from http://google-file-system.wikispaces.asu.edu/

Amazon Simple Storage Service (S3) FAQs. Retrieved from https://aws.amazon.com/s3/faqs/
