
International Journal of Computer Engineering & Technology (IJCET)
ISSN 0976-6367 (Print), ISSN 0976-6375 (Online)
Volume 5, Issue 2, February (2014), pp. 66-70
IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2014): 4.4012 (Calculated by GISI)
www.jifactor.com

SURVEY ON THE PERFORMANCE ANALYSIS OF CLOUD COMPUTING SERVICES


A. Lourdes Mary¹, Dr. R. Ravi²

¹ Associate Professor, Department of Computer Science and Engineering, SCAD College of Engineering and Technology, Cheranmahadevi, India
² Professor & Head, Department of Computer Science and Engineering, Francis Xavier Engineering College, Tirunelveli, India

ABSTRACT

Cloud computing is a cutting-edge computing technology, and even people who are not tech-savvy know the buzzword. It is now emerging rapidly in the research arena. It is an aggregation of the pay-per-use computing paradigm and utility computing. Voluminous data storage and timely provision of needed resources make the cloud data center an attractive computing platform. Numerous cloud providers exist, each with its own set of features and service level agreements. Users are increasing day by day because of the demand for processing and retrieving large amounts of data, so the performance of cloud services needs attention. Performance may be computed from metrics such as latency, response time, throughput and data retrieval time. For scientific computing the cloud is an excellent choice because of the availability and reliability of data over long periods. Petabyte-scale workloads can be processed in very short times with appropriate cloud resource management system (CRMS) policies.

Keywords: Cloud Computing, Utility Computing, Data Centers, Petabytes, CRMS.

I. INTRODUCTION

Cloud computing is not at all a new term, and even a computer-illiterate user knows that data can be stored in and retrieved from the cloud. Cloud computing refers both to the applications delivered as services over the Internet and to the hardware and software in the data centers that provide those services [1]. The data center hardware and software is what we call a cloud. When a cloud is made available in a pay-per-use manner to the general public, we call it a public cloud; the service being sold is utility computing.


A private cloud, by contrast, refers to the internal data centers of a business or other organization, not made available to the general public. Cloud computing is thus the aggregation of utility computing and SaaS, excluding private clouds [2]. From the hardware point of view, three aspects of cloud computing stand out:

1. The illusion of infinite computing resources available on demand, so cloud users need not plan far ahead for resource provisioning.
2. The elimination of an up-front commitment by cloud users, allowing organizations to start small and increase hardware resources only when their needs grow.
3. The ability to pay for computing resources on a short-term basis as needed and to release them when no longer required, thereby rewarding conservation by letting machines and storage go when they are no longer in use [2].

Critical obstacles to the growth of cloud computing fall into the areas of adoption, growth, and policy and business. Current examples of public utility computing include Amazon Web Services (AWS), Google App Engine and Microsoft Azure. As a successful example, Amazon Elastic Compute Cloud (EC2) sells 1.0 GHz x86 ISA slices for 10 cents per hour, and a new slice (instance) can be added in 2 to 5 minutes. Amazon S3 charges $0.12 to $0.15 per gigabyte of storage per month, with additional bandwidth charges of $0.10 to $0.15 per gigabyte to move data into and out of AWS over the Internet. Amazon, eBay, Google, Microsoft and others operate scalable infrastructures such as MapReduce, the Google File System, BigTable and Dynamo. To avail a cloud service like Amazon's, the customer needs only a credit card. Selling hardware-level virtual machine cycles lets users choose their own software stack without disturbing others sharing the same hardware, reducing costs further. The cloud offers an excellent and unique opportunity for parallel batch processing jobs that analyze terabytes of data and can take hours to finish; a rough cost model for such a job is sketched below.
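To make the pay-per-use arithmetic concrete, the sketch below estimates the cost of a hypothetical batch job from the indicative prices quoted above. The prices follow the figures in the text; the job size, duration and instance count are invented illustration values, not data from any provider's price list.

```python
# Rough pay-per-use cost model for a batch job, using the indicative prices
# quoted in the text (compute per instance-hour, storage per GB-month,
# transfer per GB moved in or out of the cloud).
COMPUTE_PER_INSTANCE_HOUR = 0.10  # dollars, EC2-style slice pricing
STORAGE_PER_GB_MONTH = 0.15       # dollars, S3-style storage pricing
TRANSFER_PER_GB = 0.10            # dollars, bandwidth into/out of the cloud

def batch_job_cost(instances, hours, dataset_gb, months_stored):
    """Estimate total cost: compute + storage + moving the data in and out."""
    compute = instances * hours * COMPUTE_PER_INSTANCE_HOUR
    storage = dataset_gb * months_stored * STORAGE_PER_GB_MONTH
    transfer = 2 * dataset_gb * TRANSFER_PER_GB  # upload once, download once
    return compute + storage + transfer

# Hypothetical job: 20 instances for 5 hours over a 100 GB dataset kept one month.
print("Estimated cost: $%.2f" % batch_job_cost(20, 5, 100, 1))  # $45.00
```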
II. SCIENTIFIC COMPUTING

A. Importance

Scientific computing needs ever more resources to produce results for larger problem sizes within a reasonable time frame. In earlier years, while large research projects could afford supercomputers, other projects were forced onto lower-cost resources such as commodity clusters and grids. A scientific workflow is concerned with the automation of scientific processes in which tasks are structured according to their control and data dependencies. Thanks to a major paradigm shift, cloud computing now offers the leasing of data center capacity on demand, and the cloud remains an available and reliable platform compared with pools of resources or commodity clusters [3]. Scientific workloads differ from the cloud's initial target workloads in both size and performance demand: they usually require top performance and high performance computing capability. Scientific computing is a high-utilization workload, traditionally run on parallel production infrastructures (PPIs). Among the three service models, SaaS, PaaS and IaaS, it is IaaS, raw infrastructure and its associated middleware, that scientific computing adopts. There are a number of cloud vendors; among them, Amazon is a commercial IaaS provider whose infrastructure size can accommodate entire grids and PPI workloads. In EC2, the EC2 Compute Unit (ECU) is the unit used for measuring CPU power: one ECU is equivalent to a 1.0-1.2 GHz 2007 Opteron or Xeon processor, and at peak performance one ECU equals about 4.4 GFLOPS [4]. A small worked example of this conversion follows.
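As a minimal illustration of the ECU-to-GFLOPS conversion above, the sketch below estimates the theoretical peak floating-point capacity of a virtual cluster from its ECU ratings. The instance names and ECU counts are hypothetical examples, not an official Amazon instance table.

```python
# Theoretical peak capacity from ECU ratings, using 1 ECU ~= 4.4 GFLOPS [4].
GFLOPS_PER_ECU = 4.4

def peak_gflops(ecu_by_instance):
    """Sum peak GFLOPS over a {instance_name: ecu_rating} mapping."""
    return sum(ecu * GFLOPS_PER_ECU for ecu in ecu_by_instance.values())

# Hypothetical virtual cluster: four 2-ECU nodes and two 8-ECU nodes.
cluster = {"small-1": 2, "small-2": 2, "small-3": 2, "small-4": 2,
           "large-1": 8, "large-2": 8}
print("Peak capacity: %.1f GFLOPS" % peak_gflops(cluster))  # (4*2 + 2*8) * 4.4 = 105.6
```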


To create an infrastructure from EC2 resources, the user requests the creation of one or more instances, specifying for each the instance type and the VM image; the user can specify a VM image that has already been registered. Once the VM image is deployed on a running machine where the resource is available, the instance is booted, and at the end of the boot process the resource status becomes installed. An installed resource can be used as a regular computing node via an SSH connection. By default a user may run a maximum of 20 instances. Amazon EC2 does not itself provide job execution services; a resource management system (RMS) can act as middleware between the user and Amazon EC2 to hide resource complexity. A sketch of this provisioning cycle appears below.
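The sketch below illustrates the provisioning cycle just described: launch an instance from a registered image, wait for it to boot, then reach it over SSH. It uses the modern boto3 library for Python as an assumed client (the paper predates boto3), and the AMI ID, instance type and key-pair name are placeholder values.

```python
# Sketch of the EC2 provisioning cycle: request an instance, wait for the
# boot to finish, then use the node over SSH. Assumes boto3 and credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one instance from an already-registered VM image (placeholder IDs).
resp = ec2.run_instances(
    ImageId="ami-00000000",   # hypothetical registered VM image
    InstanceType="m1.small",  # instance type chosen by the user
    KeyName="my-keypair",     # hypothetical SSH key pair
    MinCount=1, MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# Block until the boot process completes and the resource is usable.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

desc = ec2.describe_instances(InstanceIds=[instance_id])
ip = desc["Reservations"][0]["Instances"][0]["PublicIpAddress"]
print("Instance %s booted; connect with: ssh -i my-keypair.pem ec2-user@%s"
      % (instance_id, ip))
```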
III. LITERATURE SURVEY

Wei Huang, Jiuxing Liu et al. (2006) revisit virtual machine technologies [5], which provide advantages such as ease of management, system security, performance isolation, checkpointing and live migration. The challenges addressed are CPU virtualization, memory virtualization and I/O virtualization. VM image management cost can be reduced by minimizing VM images, by fast and scalable image distribution, and by VM image caching. HPC applications can achieve performance close to that of applications running in native, non-virtualized environments.

Jianfeng Zhan, Lei Wang et al. (2007) designed and implemented an innovative cloud computing software system called PhoenixCloud [6] to consolidate parallel batch jobs and web services that cooperatively share cluster resources. Cooperative resource provisioning and management policies are used to share the cluster. To assess the performance of this cloud software, the number of completed jobs and the reciprocal of the turnaround time per job were measured.

M. R. Palankar, A. Lamnitchi, M. Ripeanu and S. Garfinkel (2008) evaluate Amazon S3 [7] as a data storage service for science grids, using the REST protocol and examining failure detection. S3 is suitable for reading objects of 16 MB and larger. Storage cost can be reduced by archiving cold data on low-cost storage and keeping online only the data most likely to be used, and transfer costs can be reduced by using local caches. S3 offers low latency, very high data durability and high availability, together with unlimited storage capacity, open protocols and a simple API for easy integration.

Rajkumar Buyya (2008) proposes grid computing [8] as a global cyber-infrastructure for future e-Science applications, integrating large-scale, distributed and heterogeneous resources. Its uses include (a) utilizing resources located in a particular domain to increase throughput or reduce execution costs, (b) execution spanning multiple administrative domains to obtain specific processing capabilities, and (c) integration of the multiple teams involved in managing different parts of an experiment workflow, thus promoting inter-organizational collaboration.

Nezih Yigitbasi, Alexandru Iosup et al. (2009) propose a new framework called C-Meter [9] for generating and submitting test workloads to computing clouds. IaaS providers are responsible for maintaining the underlying infrastructure while minimizing administration and maintenance costs for users. GrenchMark is the tool previously used for generating and submitting synthetic or real workloads to grids, but it cannot be used for experiments with computing clouds, so C-Meter was developed as an extension of GrenchMark. Performance analysis of cloud computing requires that the framework be able to generate and submit both real and synthetic workloads, gather statistics on resource acquisition and release overheads, and compare results with other computing environments; in addition, the framework should provide basic resource management functionality. The scheduling algorithms implemented are round robin and a simple heuristic that selects the resource with the minimum predicted response time; a sketch of the latter appears after this paragraph. The performance metrics taken for the evaluation are instance throughput and network latency; other metrics considered are response time, waiting time in the queue, and bounded slowdown with a threshold.
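As a minimal sketch of the two C-Meter scheduling policies named above, round robin and minimum predicted response time, the code below assumes a simple predictor in which a resource's response time is estimated from its queue length and mean job service time. The predictor and the resource fields are illustrative assumptions, not C-Meter's actual internals.

```python
# Sketch of two scheduling policies from the C-Meter discussion [9]:
# round robin, and picking the resource with minimum predicted response time.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Resource:
    name: str
    queue_length: int         # jobs currently waiting (illustrative state)
    mean_service_time: float  # assumed-known seconds per job on this resource

    def predicted_response_time(self):
        # Naive predictor: drain the current queue, then run the new job.
        return (self.queue_length + 1) * self.mean_service_time

resources = [Resource("vm-a", 3, 2.0), Resource("vm-b", 0, 5.0), Resource("vm-c", 1, 3.0)]

# Policy 1: round robin ignores load and simply cycles through the resources.
rr = cycle(resources)
print("round robin picks:", [next(rr).name for _ in range(4)])

# Policy 2: the heuristic picks the minimum predicted response time.
best = min(resources, key=Resource.predicted_response_time)
print("heuristic picks:", best.name)  # vm-b: (0 + 1) * 5.0 s is the smallest
```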


Ewa Deelman, Gurmeet Singh et al. (2008) examine the cost of doing science on the cloud using the Montage tool [10]. When a science project purchases a cluster, the cluster may be cost-effective, but it is fully dedicated to the needs of that project. TeraGrid [11] is a national-level effort to provide a large-scale computation platform for science; reservations and urgent computing are available, but not all resources are available at any one time. Montage is a data-intensive application, which raises the question: how can an application use the cloud in a way that balances cost against performance? It is desirable to run Montage in a resource-rich environment where storage resources are assured, and storing large data sets in the cloud is needed for science. In Amazon Web Services [12], all resources are available for immediate occupancy and there are no advance reservations.

IV. PERFORMANCE EVALUATION

Cloud computing offers a new business model for supporting data-intensive applications and provides a new horizon for scientific applications, giving them on-demand access to potentially large amounts of storage and computing services. Performance evaluation is necessary in the cloud because many vendors offer computing and storage services. Points of comparison include provisioning plans, workflow execution modes, and the start-up cost of an application, which combines launching and configuring a virtual machine with its eventual teardown. The reliability and availability of the storage and compute resources are also important, and scalability is a big issue that needs to be explored. A workload trace [3] consists of a number of jobs; for each job, the trace records a description that includes the size of the job and its submission, start and end times. Performance metrics include resource acquisition time and experiment cost, together with traditional metrics [13] such as wait time (WT), response time (ReT) and bounded slowdown (BSD); a worked sketch of these metrics appears at the end of this section. Performance analysis tools applied in cloud environments include the Grid Workloads Archive and DGSim. Provisioning of resources is also a major task, and appropriate scheduling algorithms have to be found to get the work done quickly and correctly in the cloud.
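As a worked sketch of the traditional metrics cited above [13], the code below computes wait time, response time and bounded slowdown from per-job trace records. The record fields and the 10-second BSD threshold are illustrative assumptions in the spirit of the parallel job scheduling literature, not values taken from any specific trace.

```python
# Wait time (WT), response time (ReT) and bounded slowdown (BSD) per job.
# BSD = max(1, ReT / max(run_time, tau)): the threshold tau keeps very short
# jobs from producing arbitrarily large slowdown values (cf. [13]).
TAU = 10.0  # assumed threshold, in seconds

def job_metrics(submit, start, end):
    wait = start - submit                     # WT: time spent queued
    run = end - start                         # actual execution time
    response = end - submit                   # ReT = WT + run time
    bsd = max(1.0, response / max(run, TAU))  # bounded slowdown
    return {"WT": wait, "ReT": response, "BSD": round(bsd, 2)}

# Hypothetical trace records: (submit, start, end) timestamps in seconds.
trace = [(0.0, 5.0, 65.0), (10.0, 40.0, 45.0), (20.0, 21.0, 321.0)]
for submit, start, end in trace:
    print(job_metrics(submit, start, end))
```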

V. CONCLUSION

This paper has presented a comprehensive survey of the performance analysis of large computing clouds, in particular science clouds. The existing clouds are still insufficient for scientific computing and storage, even though the IaaS model can provide the needed resources temporarily and immediately.

REFERENCES

[1] Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy Katz et al., Above the Clouds: A Berkeley View of Cloud Computing, UC Berkeley Reliable Adaptive Distributed Systems Laboratory, Feb 10, 2009.
[2] http://radlab.cs.berkeley.edu/.
[3] http://www.pds.ewi.tudelft.nl/Iosup/.
[4] A. Iosup, O. O. Sonmez, S. Anoep and D. H. J. Epema, The Performance of Bags-of-Tasks in Large-Scale Distributed Systems, in Proceedings of the ACM International Symposium on High-Performance Distributed Computing (HPDC), 2008, pp. 97-108.
[5] Wei Huang, Jiuxing Liu et al., A Case for High Performance Computing with Virtual Machines, technical report, 2006.
[6] Jianfeng Zhan, Lei Wang et al., PhoenixCloud: Consolidating Heterogeneous Workloads of Large Organizations on Cloud Computing Platforms, 2007.
[7] M. R. Palankar, A. Lamnitchi, M. Ripeanu and S. Garfinkel, Amazon S3 for Science Grids: A Viable Solution?, in DADC '08: Proceedings of the 2008 International Workshop on Data-Aware Distributed Computing, New York, ACM, 2008, pp. 55-64.
[8] Jia Yu, Rajkumar Buyya and Kotagiri Ramamohanarao, Workflow Scheduling Algorithms for Grid Computing, in Metaheuristics for Scheduling in Distributed Computing Environments, Springer, 2008, pp. 173-214.
[9] Nezih Yigitbasi, Alexandru Iosup and Dick Epema, C-Meter: A Framework for Performance Analysis of Computing Clouds, IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid), 2009.
[10] Ewa Deelman, Gurmeet Singh et al., The Cost of Doing Science on the Cloud: The Montage Example, USC Information Sciences Institute, technical report, 2008.
[11] TeraGrid. http://www.teragrid.org.
[12] http://aws.amazon.com.
[13] D. G. Feitelson, L. Rudolph et al., Theory and Practice in Parallel Job Scheduling, in Proceedings of the Job Scheduling Strategies for Parallel Processing Workshop, LNCS vol. 1291, Springer-Verlag, 1997, pp. 1-34.
[14] Sujay Pawar and U. M. Patil, A Survey on Secured Data Outsourcing in Cloud Computing, International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 3, 2013, pp. 70-76, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[15] Abhishek Pandey, R. M. Tugnayat and A. K. Tiwari, Data Security Framework for Cloud Computing Networks, International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, 2013, pp. 178-181, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[16] Gurudatt Kulkarni, Jayant Gambhir and Amruta Dongare, Security in Cloud Computing, International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2012, pp. 258-265, ISSN Print: 0976-6367, ISSN Online: 0976-6375.
[17] A. Madhuri and T. V. Nagaraju, Reliable Security in Cloud Computing Environment, International Journal of Information Technology and Management Information Systems (IJITMIS), Volume 4, Issue 2, 2013, pp. 23-30, ISSN Print: 0976-6405, ISSN Online: 0976-6413.
