Cloud Computing Paper Titles and Abstracts
IEEE PAPERS:
1. Compatibility-Aware Cloud Service Composition under Fuzzy Preferences of
Users January 2014
When a single Cloud service (i.e., a software image and a virtual machine), on its own,
cannot satisfy all the user requirements, a composition of Cloud services is required. Cloud
service composition, which includes several tasks such as discovery, compatibility checking,
selection, and deployment, is a complex process and users find it difficult to select the best
one among the hundreds, if not thousands, of possible compositions available. Service
composition in the Cloud raises new challenges caused by the diversity of users with different
levels of expertise who require their applications to be deployed across different geographical locations
with distinct legal constraints. The main difficulty lies in selecting a combination of virtual
appliances (software images) and infrastructure services that are compatible and satisfy a user
with vague preferences. Therefore, we present a framework and algorithms which simplify
Cloud service composition for unskilled users. We develop an ontology-based approach to
analyze Cloud service compatibility by applying reasoning on expert knowledge. In
addition, to minimize the effort users spend in expressing their preferences, we apply a
combination of evolutionary algorithms and fuzzy logic for composition optimization. This
lets users express their needs in linguistic terms, which is far more comfortable for them
than systems that force users to assign exact weights to all preferences.
2. Dynamic Heterogeneity-Aware Resource Provisioning in the Cloud April 2014
Data centers consume tremendous amounts of energy in terms of power distribution and
cooling. Dynamic capacity provisioning is a promising approach for reducing energy
consumption by dynamically adjusting the number of active machines to match resource
demands. However, despite extensive studies of the problem, existing solutions have not fully
considered the heterogeneity of both workload and machine hardware found in production
environments. In particular, production data centers often comprise heterogeneous machines
with different capacities and energy consumption characteristics. Meanwhile, the production
cloud workloads typically consist of diverse applications with different priorities,
performance and resource requirements. Failure to consider the heterogeneity of both
machines and workloads will lead to both sub-optimal energy-savings and long scheduling
delays, due to incompatibility between workload requirements and the resources offered by
the provisioned machines. To address this limitation, we present Harmony, a Heterogeneity-Aware
dynamic capacity provisioning scheme for cloud data centers. Specifically, we first use
the K-means clustering algorithm to divide the workload into distinct task classes with similar
characteristics in terms of resource and performance requirements. Then we present a
technique that dynamically adjusts the number of machines to minimize total energy
consumption and scheduling delay. Simulations using traces from a Google compute cluster
demonstrate that Harmony can reduce energy consumption by 28 percent compared to
heterogeneity-oblivious solutions.
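The task-classification step the Harmony abstract describes can be sketched with a plain K-means implementation. The two-dimensional task features (CPU and memory demand) and the number of classes are illustrative assumptions, not values from the paper.

```python
import random

def kmeans(points, k, iters=20):
    # Initialize centroids from randomly chosen tasks.
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each task to the nearest centroid (squared Euclidean distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, c in enumerate(clusters):
            if c:  # Recompute each centroid as the mean of its members.
                centroids[i] = tuple(sum(dim) / len(c) for dim in zip(*c))
    return centroids, clusters

# Each task: (cpu_demand, memory_demand), normalized to [0, 1].
tasks = [(0.1, 0.1), (0.12, 0.09), (0.8, 0.7), (0.82, 0.75), (0.5, 0.2)]
centroids, classes = kmeans(tasks, k=2)
```

The resulting task classes would then feed the provisioning step, which sizes the active machine pool per class.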
3.
by analyzing keyword clusters. The results of this study provide a better understanding of
patterns, trends and other important factors as a basis for directing research activities, sharing
knowledge and collaborating in the area of cloud computing research.
9. A Self-Scalable and Auto-Regulated Request Injection Benchmarking Tool for
Automatic Saturation Detection June 2014
Software application providers have always been required to perform load testing prior to
launching new applications. This crucial test phase is expensive in human and hardware
terms, and the solutions generally used would benefit from further development. In particular,
designing an appropriate load profile to stress an application is difficult and must be done
carefully to avoid skewed testing. In addition, static testing platforms are exceedingly
complex to set up. New opportunities to ease load testing solutions are becoming available
thanks to cloud computing. This paper describes a Benchmark-as-a-Service platform based
on: (i) intelligent generation of traffic to the benched application without inducing thrashing
(avoiding predefined load profiles), (ii) a virtualized and self-scalable load injection system.
The platform developed was evaluated using two use cases based on the reference JEE
benchmark RUBiS. This involved detecting bottleneck tiers, and tuning servers to improve
performance. This platform was found to reduce the cost of testing by 50 percent compared to
more commonly used solutions.
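The automatic saturation-detection idea above, ramping injected load only while throughput keeps improving, can be sketched as follows. Here `measure_throughput`, the step size, and the gain threshold are hypothetical stand-ins for the platform's actual probes and tuning, not details from the paper.

```python
def find_saturation(measure_throughput, start=10, step=10,
                    gain_threshold=0.05, max_load=10000):
    """Increase load until throughput stops growing meaningfully."""
    load, best = start, measure_throughput(start)
    while load + step <= max_load:
        t = measure_throughput(load + step)
        # Stop once extra load no longer yields a meaningful throughput gain,
        # which avoids pushing the benched application into thrashing.
        if t < best * (1 + gain_threshold):
            return load
        load, best = load + step, t
    return load

# Toy system under test: throughput saturates near 50 concurrent clients.
saturation = find_saturation(lambda n: min(n, 50) * 0.9)
```

A real injector would replace the lambda with a measurement over live traffic and scale its injection VMs as the load grows.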
10. An Autonomic Approach to Risk-Aware Data Center Overbooking 2014
Elasticity is a key characteristic of cloud computing that increases the flexibility for cloud
consumers, allowing them to adapt the amount of physical resources associated with their
services over time on an on-demand basis. However, elasticity creates problems for cloud
providers as it may lead to poor resource utilization, especially in combination with other
factors, such as user overestimations and pre-defined VM sizes. Admission control
mechanisms are thus needed to increase the number of services accepted, raising the
utilization without affecting services performance. This work focuses on implementing an
autonomic risk-aware overbooking architecture capable of increasing the resource utilization
of cloud data centers by accepting more virtual machines than physical available resources.
Fuzzy logic functions are used to estimate the associated risk to each overbooking decision.
By using a distributed PID controller approach, the system is capable of self-adapting over
time, changing the acceptable level of risk depending on the current status of the cloud data
center. The suggested approach is extensively evaluated using a combination of simulations
and experiments executing real cloud applications with real-life available workloads. Our
results show a 50 percent increase in both resource utilization and capacity allocated, with
acceptable performance degradation and more stable resource utilization over time.
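The self-adaptation loop described above can be illustrated with a textbook PID controller that nudges the acceptable risk level toward a utilization target. The gains, target, and utilization trace below are illustrative assumptions; the paper's fuzzy-logic risk estimation is not reproduced here.

```python
class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

target_utilization = 0.8
risk = 0.3  # Current acceptable overbooking risk in [0, 1].
pid = PID(kp=0.5, ki=0.05, kd=0.1)
for utilization in [0.55, 0.60, 0.68, 0.74, 0.79]:
    # Positive error (data center under-utilized) pushes the acceptable risk up,
    # allowing more overbooking; overshoot would push it back down.
    risk += pid.update(target_utilization - utilization)
    risk = max(0.0, min(1.0, risk))
```

In the paper's distributed setting, one such controller would run per node, with the risk level gating each overbooking decision.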
11. Performance and cost evaluation of an adaptive encryption architecture for
cloud database services July 2014
The cloud database as a service is a novel paradigm that can support several Internet-based
applications, but its adoption requires the solution of information confidentiality problems.
We propose a novel architecture for adaptive encryption of public cloud databases that offers
an interesting alternative to the tradeoff between the required data confidentiality level and
the flexibility of the cloud database structures at design time. We demonstrate the feasibility
and performance of the proposed solution through a software prototype. Moreover, we
propose an original cost model that is oriented to the evaluation of cloud database services in
plain and encrypted instances and that takes into account the variability of cloud prices and
tenant workloads during a medium-term period.
analysis focuses on exposing and quantifying the diversity of behavioral patterns for users
and tasks, as well as identifying model parameters and their values for the simulation of the
workload created by such components. Our derived model is implemented by extending the
capabilities of the CloudSim framework and is further validated through empirical
comparison and statistical hypothesis tests. We illustrate several examples of this work's
practical applicability in the domain of resource management and energy-efficiency.
17. Deadline based Resource Provisioning and Scheduling Algorithm for Scientific
Workflows on Clouds April 2014
Cloud computing is the latest distributed computing paradigm and it offers tremendous
opportunities to solve large-scale scientific problems. However, it presents various challenges
that need to be addressed in order to be efficiently utilized for workflow applications.
Although the workflow scheduling problem has been widely studied, there are very few
initiatives tailored for cloud environments. Furthermore, the existing works fail to either meet
the user's quality of service (QoS) requirements or to incorporate some basic principles of
cloud computing such as the elasticity and heterogeneity of the computing resources. This
paper proposes a resource provisioning and scheduling strategy for scientific workflows on
Infrastructure as a Service (IaaS) clouds. We present an algorithm based on the meta-heuristic
optimization technique, particle swarm optimization (PSO), which aims to minimize the
overall workflow execution cost while meeting deadline constraints. Our heuristic is
evaluated using CloudSim and various well-known scientific workflows of different sizes.
The results show that our approach performs better than the current state-of-the-art
algorithms.
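The PSO step this abstract relies on can be sketched as below: particles search a real-valued encoding of the VM assignment for a minimum-cost solution. The quadratic toy cost function stands in for the paper's actual workflow execution-cost model, and the swarm parameters are conventional defaults, not the authors' settings.

```python
import random

def pso(cost, dim, n_particles=20, iters=50):
    random.seed(1)  # Deterministic for this sketch.
    pos = [[random.uniform(0, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest

# Toy cost: minimized when every task lands on "VM index" 3.
best = pso(lambda x: sum((xi - 3) ** 2 for xi in x), dim=4)
```

In the paper's setting the cost function would also encode the deadline constraint, e.g. as a penalty term.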
18. Extending MapReduce across Clouds with BStream April 2014
Today, batch processing frameworks like Hadoop MapReduce are difficult to scale to
multiple clouds due to latencies involved in inter-cloud data transfer and synchronization
overheads during the shuffle phase. This inhibits the MapReduce framework from guaranteeing
performance at variable load surges without over-provisioning in the internal cloud (IC). We
propose BStream, a cloud bursting framework for MapReduce that couples stream-processing
in the external cloud (EC) with Hadoop in the internal cloud (IC). Stream processing in EC
enables pipelined uploading, processing and downloading of data to minimize network
latencies. We use this framework to meet job deadlines. BStream uses an analytical model to
minimize the usage of EC. We propose different checkpointing strategies that overlap output
transfer with input transfer/processing and simultaneously reduce the computation involved
in merging the results from EC and IC. Checkpointing further reduces job completion time.
We experimentally compare BStream with other related works and illustrate performance
benefits due to stream processing and checkpointing strategies in EC. Lastly, we characterize
the operational regime of BStream.
19. Virtualization Technology for TCP/IP Offload Engine February 2014
Network I/O virtualization plays an important role in cloud computing. This paper addresses
the system-wide virtualization issues of TCP/IP Offload Engine (TOE) and presents the
architectural designs. We identify three critical factors that affect the performance of a TOE:
I/O virtualization architectures, quality of service (QoS), and virtual machine monitor
(VMM) scheduler. In our device emulation based TOE, the VMM manages the socket
connections in the TOE directly and thus can eliminate packet copy and demultiplexing
overheads that appear in the virtualization of a layer 2 network card. To further reduce
hypervisor intervention, the direct I/O access architecture provides a per-VM
physical control interface that helps remove most of the VMM interventions. The direct I/O
access architecture outperforms the device emulation architecture by as much as 30 percent, or
achieves 80 percent of the native 10 Gbit/s TOE system. To continue serving the TOE
commands for a VM, regardless of whether the VM is idle or has been switched out by the VMM, we decouple the
TOE I/O command dispatcher from the VMM scheduler. We found that a VMM scheduler
with preemptive I/O scheduling and a programmable I/O command dispatcher with deficit
weighted round robin (DWRR) policy are able to ensure service fairness and at the same time
maximize the TOE utilization.
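The deficit weighted round robin (DWRR) policy named above can be sketched as follows: each VM's command queue accumulates a quantum proportional to its weight and dispatches commands while its deficit covers their cost. The queue contents, weights, and quantum are illustrative, not values from the paper.

```python
from collections import deque

def dwrr(queues, weights, quantum=100, rounds=3):
    deficits = {vm: 0 for vm in queues}
    dispatched = []
    for _ in range(rounds):
        for vm, q in queues.items():
            if not q:
                continue
            # Each backlogged VM earns a weight-proportional quantum per round.
            deficits[vm] += quantum * weights[vm]
            # Serve commands while the accumulated deficit covers their cost.
            while q and q[0] <= deficits[vm]:
                deficits[vm] -= q.popleft()
                dispatched.append(vm)
    return dispatched

# Two VMs; vm_a has twice vm_b's weight. Queue entries are command costs.
queues = {"vm_a": deque([80, 80, 80]), "vm_b": deque([80, 80, 80])}
order = dwrr(queues, {"vm_a": 2, "vm_b": 1})
```

The higher-weight VM drains its queue faster, yet the lower-weight VM is never starved, which matches the fairness-plus-utilization goal stated in the abstract.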
20. Workload-Aware Credit Scheduler for Improving Network I/O Performance in
Virtualization Environment April 2014
Single-root I/O virtualization (SR-IOV) has become the de facto standard of network
virtualization in cloud infrastructure. Owing to the high interrupt frequency and heavy cost
per interrupt in high-speed network virtualization, the performance of network virtualization
is closely correlated to the computing resource allocation policy in Virtual Machine Manager
(VMM). Therefore, more sophisticated methods are needed to handle the irregular, high-frequency
network interrupts in high-speed network virtualization environments.
However, the I/O-intensive and CPU-intensive applications in virtual machines are treated in
the same manner since application attributes are transparent to the scheduler in hypervisor,
and this unawareness of workload makes virtual systems unable to take full advantage of high
performance networks. In this paper, we discuss the SR-IOV networking solution and show
by experiment that the current credit scheduler in Xen does not utilize high performance
networks efficiently. Hence we propose a novel workload-aware scheduling model with two
optimizations to eliminate the bottleneck caused by the scheduler. In this model, guest domains
are divided into I/O-intensive domains and CPU-intensive domains according to their
monitored behaviour. I/O-intensive domains can obtain extra credits that CPU-intensive
domains are willing to share. In addition, the total number of credits available is adjusted to
accelerate I/O responsiveness. Our experimental evaluations show that the new
scheduling model improves bandwidth and reduces response time while maintaining fairness
between I/O-intensive and CPU-intensive domains. This enables virtualization infrastructure
to provide cloud computing services more efficiently and predictably.
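The credit-sharing idea above can be sketched as follows: domains are classified by monitored I/O rate, and CPU-bound domains donate a fraction of their credits to I/O-bound ones. The classification threshold and donation ratio are illustrative assumptions, not Xen's actual scheduler parameters.

```python
def rebalance_credits(domains, io_threshold=1000, share_ratio=0.25):
    # Classify domains by their monitored I/O event rate.
    io_bound = [d for d in domains if d["io_events_per_sec"] > io_threshold]
    cpu_bound = [d for d in domains if d["io_events_per_sec"] <= io_threshold]
    if not io_bound or not cpu_bound:
        return domains
    # CPU-bound domains donate a share of their credits into a pool...
    pool = 0
    for d in cpu_bound:
        donation = int(d["credits"] * share_ratio)
        d["credits"] -= donation
        pool += donation
    # ...which is split evenly among the I/O-bound domains.
    for d in io_bound:
        d["credits"] += pool // len(io_bound)
    return domains

domains = [
    {"name": "web", "io_events_per_sec": 5000, "credits": 300},
    {"name": "batch", "io_events_per_sec": 50, "credits": 300},
]
rebalance_credits(domains)
```

The extra credits let the I/O-bound domain be scheduled promptly when network interrupts arrive, which is the mechanism the abstract credits for the improved responsiveness.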
21. Decreasing Impact of SLA Violations: A Proactive Resource Allocation Approach
for Cloud Computing Environments February 2014
User satisfaction, as a significant antecedent of user loyalty, has been highlighted by many
researchers in the market-based literature. SLA violations are an important factor that can
decrease users' satisfaction level. The amount of this decrease depends on the user's
characteristics. Some of these characteristics are related to QoS requirements and are
announced to the service provider through SLAs. But some of them are unknown to the service
provider, and selfish users are not willing to reveal them truthfully. Most of the works in
the literature ignore such characteristics and treat users based on SLA parameters alone.
As a result, two users with different characteristics but similar SLAs have equal importance
for the service provider. In this paper, we use two hidden user characteristics, namely
willingness to pay for service and willingness to pay for certainty, to present a new
proactive resource allocation approach with the aim of decreasing the impact of SLA
violations. New methods based on learning automata for the estimation of these
characteristics are provided as well. To validate our approach we conducted numerical
simulations of critical situations. The results confirm that our approach can improve users'
satisfaction level, which in turn leads to gains in profitability.
arbitrarily modify the cloud data without being detected by the auditor in the auditing
process. We also suggest a solution to fix the problem while preserving all the properties of
the original protocol.
25. Panda: Public Auditing for Shared Data with Efficient User Revocation in the
Cloud
With data storage and sharing services in the cloud, users can easily modify and share data as
a group. To ensure shared data integrity can be verified publicly, users in the group need to
compute signatures on all the blocks in shared data. Different blocks in shared data are
generally signed by different users due to data modifications performed by different users.
For security reasons, once a user is revoked from the group, the blocks which were
previously signed by this revoked user must be re-signed by an existing user. The
straightforward method, which allows an existing user to download the corresponding part of
shared data and re-sign it during user revocation, is inefficient due to the large size of shared
data in the cloud. In this paper, we propose a novel public auditing mechanism for the
integrity of shared data with efficient user revocation in mind. By utilizing the idea of proxy
re-signatures, we allow the cloud to re-sign blocks on behalf of existing users during user
revocation, so that existing users do not need to download and re-sign blocks by themselves.
In addition, a public verifier is always able to audit the integrity of shared data without
retrieving the entire data from the cloud, even if some part of shared data has been re-signed
by the cloud. Moreover, our mechanism is able to support batch auditing by verifying
multiple auditing tasks simultaneously. Experimental results show that our mechanism can
significantly improve the efficiency of user revocation.
26. Shared Authority Based Privacy-Preserving Authentication Protocol in Cloud
Computing
Cloud computing is an emerging data interactive paradigm to realize users' data remotely
stored in an online cloud server. Cloud services provide great conveniences for the users to
enjoy the on-demand cloud applications without considering the local infrastructure
limitations. During the data accessing, different users may be in a collaborative relationship,
and thus data sharing becomes significant to achieve productive benefits. The existing
security solutions mainly focus on authentication to ensure that a user's private data
cannot be illegally accessed, but neglect a subtle privacy issue that arises when a user
challenges the cloud server to request data sharing from other users. The challenged access request itself may
reveal the user's privacy no matter whether or not it can obtain the data access permissions. In
this paper, we propose a shared authority based privacy-preserving authentication protocol
(SAPA) to address the above privacy issue for cloud storage. In the SAPA, 1) shared access
authority is achieved by anonymous access request matching mechanism with security and
privacy considerations (e.g., authentication, data anonymity, user privacy, and forward
security); 2) attribute based access control is adopted to realize that the user can only access
its own data fields; 3) proxy re-encryption is applied to provide data sharing among the
multiple users. Meanwhile, universal composability (UC) model is established to prove that
the SAPA theoretically has the design correctness. It indicates that the proposed protocol is
attractive for multi-user collaborative cloud applications.
27. Enabling Efficient Multi-Keyword Ranked Search Over Encrypted Mobile
Cloud Data Through Blind Storage
In mobile cloud computing, a fundamental application is to outsource the mobile data to
external cloud servers for scalable data storage. The outsourced data, however, need to be
encrypted due to the privacy and confidentiality concerns of their owner. This makes
accurate search over the encrypted mobile cloud data particularly difficult. To
tackle this issue, in this paper, we develop the searchable encryption for multi-keyword
ranked search over the storage data. Specifically, by considering the large number of
outsourced documents (data) in the cloud, we utilize the relevance score and k-nearest
neighbor techniques to develop an efficient multi-keyword search scheme that can return the
ranked search results based on the accuracy. Within this framework, we leverage an efficient
index to further improve the search efficiency, and adopt the blind storage system to conceal
access pattern of the search user. Security analysis demonstrates that our scheme can achieve
confidentiality of documents and index, trapdoor privacy, trapdoor unlinkability, and
concealing access pattern of the search user. Finally, using extensive simulations, we show
that our proposal can achieve much improved efficiency in terms of search functionality and
search time compared with the existing proposals.
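The ranking logic underlying such a scheme can be illustrated in plaintext: relevance scores (here simple term-frequency counts) combined with top-k selection. This toy version deliberately omits the encryption, k-nearest-neighbor, and blind-storage layers that are the paper's actual contribution; document contents and keywords are invented for illustration.

```python
import heapq

def ranked_search(documents, keywords, k=2):
    def relevance(doc):
        # Term-frequency relevance: total keyword occurrences in the document.
        words = doc.lower().split()
        return sum(words.count(kw) for kw in keywords)
    # Return the k documents with the highest relevance score.
    return heapq.nlargest(k, documents, key=relevance)

docs = [
    "cloud storage pricing cloud",
    "mobile cloud search over encrypted data",
    "weather report",
]
top = ranked_search(docs, ["cloud", "encrypted"], k=2)
```

In the encrypted setting, the server would compute comparable scores over encrypted index vectors without learning the keywords or the access pattern.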
28. Secure, Efficient and Fine-Grained Data Access Control Mechanism for P2P
Storage Cloud
By combining cloud computing and Peer-to-Peer computing, a P2P storage cloud can be
formed to offer highly available storage services, lowering the economic cost by exploiting
the storage space of participating users. However, since cloud servers and users are usually
outside the trusted domain of data owners, P2P storage cloud brings forth new challenges for
data security and access control when data owners store sensitive data for sharing in the
trusted domain. Moreover, there are no mechanisms for access control in P2P storage cloud.
To address this issue, we design a ciphertext-policy attribute-based encryption (ABE) scheme
and a proxy re-encryption scheme. Based on them, we further propose a secure, efficient and
fine-grained data Access Control mechanism for P2P storage Cloud named ACPC. We
enforce access policies based on user attributes, and integrate P2P reputation system in
ACPC. ACPC enables data owners to delegate most of the laborious user revocation tasks to
cloud servers and reputable system peers. Our security analysis demonstrates that ACPC is
provably secure. The performance evaluation shows that ACPC is highly efficient under
practical settings, and it significantly reduces the computation overheads brought to data
owners and cloud servers during user revocation, compared with other state-of-the-art
revocable ABE schemes.
29. SeDaSC: Secure Data Sharing in Clouds
Cloud storage is an application of clouds that liberates organizations from establishing in-house data storage systems. However, cloud storage gives rise to security concerns. In the case of
group-shared data, the data face both cloud-specific and conventional insider threats. Secure
data sharing among a group that counters insider threats of legitimate yet malicious users is
an important research issue. In this paper, we propose the Secure Data Sharing in Clouds
(SeDaSC) methodology that provides: 1) data confidentiality and integrity; 2) access control;
3) data sharing (forwarding) without using compute-intensive reencryption; 4) insider threat
security; and 5) forward and backward access control. The SeDaSC methodology encrypts a
file with a single encryption key. Two different key shares for each of the users are generated,
with the user only getting one share. The possession of a single share of a key allows the
SeDaSC methodology to counter the insider threats. The other key share is stored by a trusted
third party, which is called the cryptographic server. The SeDaSC methodology is applicable
to conventional and mobile cloud computing environments. We implement a working
prototype of the SeDaSC methodology and evaluate its performance based on the time
consumed during various operations. We formally verify the working of SeDaSC by using
high-level Petri nets, the Satisfiability Modulo Theories Library, and a Z3 solver. The results
proved to be encouraging and show that SeDaSC has the potential to be effectively used for
secure data sharing in the cloud.
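The key-splitting idea at the heart of this design can be sketched with standard two-party XOR secret sharing: the file key K is split into two shares, so neither the user's share nor the cryptographic server's share alone reveals K. This is a generic construction in the same spirit; SeDaSC's actual scheme may differ in its details.

```python
import secrets

def split_key(key: bytes):
    # One share is uniformly random; the other is key XOR random-share.
    user_share = secrets.token_bytes(len(key))
    server_share = bytes(a ^ b for a, b in zip(key, user_share))
    return user_share, server_share

def recover_key(user_share: bytes, server_share: bytes) -> bytes:
    # XOR of the two shares reconstructs the original key.
    return bytes(a ^ b for a, b in zip(user_share, server_share))

key = secrets.token_bytes(32)  # e.g., a 256-bit file encryption key
u, s = split_key(key)
assert recover_key(u, s) == key  # Both shares together reconstruct K.
```

Because each share is individually uniform random, a revoked insider holding only their own share learns nothing about the file key, which is how a scheme of this shape counters the insider threat.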
30. Secure Auditing and Deduplicating Data in Cloud
As the cloud computing technology develops during the last decade, outsourcing data to
cloud service for storage becomes an attractive trend, which benefits in sparing efforts on
heavy data maintenance and management. Nevertheless, since the outsourced cloud storage is
not fully trustworthy, it raises security concerns on how to realize data deduplication in cloud
while achieving integrity auditing. In this work, we study the problem of integrity auditing
and secure deduplication on cloud data. Specifically, aiming at achieving both data integrity
and deduplication in cloud, we propose two secure systems, namely SecCloud and
SecCloud+. SecCloud introduces an auditing entity that maintains a MapReduce
cloud, which helps clients generate data tags before uploading as well as audit the integrity of
data having been stored in cloud. Compared with previous work, the computation by user in
SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud+ is
motivated by the fact that customers always want to encrypt their data before
uploading, and enables integrity auditing and secure deduplication on encrypted data.
31. DROPS: Division and Replication of Data in Cloud for Optimal Performance
and Security
Outsourcing data to a third-party administrative control, as is done in cloud computing, gives
rise to security concerns. The data compromise may occur due to attacks by other users and
nodes within the cloud. Therefore, high security measures are required to protect data within
the cloud. However, the employed security strategy must also take into account the
optimization of the data retrieval time. In this paper, we propose Division and Replication of
Data in the Cloud for Optimal Performance and Security (DROPS) that collectively
approaches the security and performance issues. In the DROPS methodology, we divide a file
into fragments, and replicate the fragmented data over the cloud nodes. Each of the nodes
stores only a single fragment of a particular data file that ensures that even in case of a
successful attack, no meaningful information is revealed to the attacker. Moreover, the nodes
storing the fragments are separated by a certain distance by means of graph T-coloring to
prevent an attacker from guessing the locations of the fragments. Furthermore, the DROPS
methodology does not rely on the traditional cryptographic techniques for the data security;
thereby relieving the system of computationally expensive methodologies. We show that the
probability to locate and compromise all of the nodes storing the fragments of a single file is
extremely low. We also compare the performance of the DROPS methodology with ten other
schemes. A higher level of security with only a slight performance overhead was observed.
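The placement constraint above can be sketched as a greedy separation rule in the spirit of graph T-coloring: fragments of one file are assigned only to nodes whose pairwise hop distance meets a minimum. The greedy selection and the four-node toy topology are illustrative, not the paper's algorithm.

```python
def fragment(data: bytes, n: int):
    size = -(-len(data) // n)  # Ceiling division: bytes per fragment.
    return [data[i * size:(i + 1) * size] for i in range(n)]

def place_fragments(fragments, distances, min_dist=2):
    chosen = []
    for node in sorted(distances):
        # Accept a node only if it is far enough from all already-chosen nodes.
        if all(distances[node][c] >= min_dist for c in chosen):
            chosen.append(node)
        if len(chosen) == len(fragments):
            break
    return dict(zip(chosen, fragments))

# Toy 4-node topology: pairwise hop distances.
distances = {
    "n1": {"n1": 0, "n2": 1, "n3": 2, "n4": 3},
    "n2": {"n1": 1, "n2": 0, "n3": 1, "n4": 2},
    "n3": {"n1": 2, "n2": 1, "n3": 0, "n4": 1},
    "n4": {"n1": 3, "n2": 2, "n3": 1, "n4": 0},
}
placement = place_fragments(fragment(b"secret-file-contents", 2), distances)
```

Since each chosen node holds a single fragment and compromised neighbors are kept apart, an attacker must locate and breach well-separated nodes to reassemble the file.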
32. Enabling Fine-grained Multi-keyword Search Supporting Classified Sub-dictionaries over Encrypted Cloud Data
Using cloud computing, individuals can store their data on remote servers and allow data
access to public users through the cloud servers. As the outsourced data are likely to contain
sensitive privacy information, they are typically encrypted before being uploaded to the cloud. This,
however, significantly limits the usability of outsourced data due to the difficulty of searching
over the encrypted data. In this paper, we address this issue by developing the fine-grained
multi-keyword search schemes over encrypted cloud data. Our original contributions are
three-fold. First, we introduce the relevance scores and preference factors upon keywords
which enable the precise keyword search and personalized user experience. Second, we
develop a practical and very efficient multi-keyword search scheme. The proposed scheme
can support complicated logic searches, i.e., mixed AND, OR, and NOT operations on
keywords. Third, we further employ the classified sub-dictionaries technique to achieve better
efficiency on index building, trapdoor generating and query. Lastly, we analyze the security
of the proposed schemes in terms of confidentiality of documents, privacy protection of index
and trapdoor, and unlinkability of trapdoor. Through extensive experiments using a real-world dataset, we validate the performance of the proposed schemes. Both the security
analysis and experimental results demonstrate that the proposed schemes can achieve the
same security level as the existing ones, with better performance in terms of
functionality, query complexity and efficiency.
33. CloudArmor: Supporting Reputation-based Trust Management for Cloud
Services
Trust management is one of the most challenging issues for the adoption and growth of cloud
computing. The highly dynamic, distributed, and non-transparent nature of cloud services
introduces several challenging issues such as privacy, security, and availability. Preserving
consumers' privacy is not an easy task due to the sensitive information involved in the
interactions between consumers and the trust management service. Protecting cloud services
against their malicious users (e.g., such users might give misleading feedback to disadvantage
a particular cloud service) is a difficult problem. Guaranteeing the availability of the trust
management service is another significant challenge because of the dynamic nature of cloud
environments. In this article, we describe the design and implementation of CloudArmor, a
reputation-based trust management framework that provides a set of functionalities to deliver
Trust as a Service (TaaS), which includes i) a novel protocol to prove the credibility of trust
feedbacks and preserve users' privacy, ii) an adaptive and robust credibility model for
measuring the credibility of trust feedbacks to protect cloud services from malicious users
and to compare the trustworthiness of cloud services, and iii) an availability model to manage
the availability of the decentralized implementation of the trust management service. The
feasibility and benefits of our approach have been validated by a prototype and experimental
studies using a collection of real-world trust feedbacks on cloud services.
individual privacy for the adoption of cloud computing technologies. Existing privacy
protection research can be classified into three categories, i.e., privacy by policy, privacy by
statistics, and privacy by cryptography. However, the privacy concerns and data utilization
requirements on different parts of the medical data may be quite different. The solution for
medical dataset sharing in the cloud should support multiple data accessing paradigms with
different privacy strengths. The statistics or cryptography technology alone cannot enforce
the multiple privacy demands, which blocks their application in the real-world cloud. This
paper proposes a practical solution for privacy preserving medical record sharing for cloud
computing. Based on the classification of the attributes of medical records, we use vertical
partition of medical dataset to achieve the consideration of different parts of medical data
with different privacy concerns. It mainly includes four components, i.e., (1) vertical data
partition for medical data publishing, (2) data merging for medical dataset accessing, (3)
integrity checking, and (4) hybrid search across plaintext and ciphertext, where the statistical
analysis and cryptography are innovatively combined together to provide multiple paradigms
of balance between medical data utilization and privacy protection. A prototype system for
large-scale medical data access and sharing is implemented. Extensive experiments show
the effectiveness of our proposed solution.
3. Healing on the cloud: Secure cloud architecture for medical wireless sensor
networks
There has been a host of research works on wireless sensor networks (WSN) for medical
applications. However, the major shortcoming of these efforts is a lack of consideration of
data management. Indeed, the huge amount of high sensitive data generated and collected by
medical sensor networks introduces several challenges that existing architectures cannot
solve. These challenges include scalability, availability and security. Furthermore, WSNs for
medical applications provide useful, real-time information about patients' health state. This
information should be available for healthcare providers to facilitate response and to improve
the rescue process of a patient during emergency. Hence, emergency management is another
challenge for medical wireless sensor networks. In this paper, we propose an innovative
architecture for collecting and accessing large amounts of data generated by medical sensor
networks. Our architecture overcomes all the aforementioned challenges and facilitates
information sharing between healthcare professionals in normal and emergency situations.
Furthermore, we propose an effective and flexible security mechanism that guarantees
confidentiality, integrity as well as fine-grained access control to outsourced medical data.
This mechanism relies on Ciphertext Policy Attribute-based Encryption (CP-ABE) to achieve
high flexibility and performance. Finally, we carry out extensive simulations showing that
our scheme provides efficient, fine-grained and scalable access control in both normal and
emergency situations.
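CP-ABE ties decryption to an access policy over user attributes. As a toy, non-cryptographic sketch (policy shape and attribute names are my own assumptions; real CP-ABE enforces this check inside the ciphertext via pairing-based cryptography rather than in plain code), the access decision it enforces looks like this:

```python
# Toy sketch of the access decision CP-ABE enforces cryptographically.
# A policy is an AND/OR tree over attributes; decryption succeeds only
# if the user's attribute set satisfies the tree.

def satisfies(policy, attributes):
    """Recursively evaluate an AND/OR policy tree against an attribute set."""
    kind = policy[0]
    if kind == "attr":
        return policy[1] in attributes
    children = policy[1:]
    if kind == "and":
        return all(satisfies(c, attributes) for c in children)
    if kind == "or":
        return any(satisfies(c, attributes) for c in children)
    raise ValueError(f"unknown node type: {kind}")

# Hypothetical policy: a doctor, or an on-duty emergency nurse, may decrypt.
policy = ("or", ("attr", "doctor"),
                ("and", ("attr", "nurse"), ("attr", "emergency_shift")))

print(satisfies(policy, {"doctor"}))                    # True
print(satisfies(policy, {"nurse"}))                     # False
print(satisfies(policy, {"nurse", "emergency_shift"}))  # True
```

The fine-grained and emergency-aware access control claimed in the abstract corresponds to choosing such policies per data item, with emergency attributes widening access when needed.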
4. A scalable and dynamic application-level secure communication framework for
inter-cloud services
Most of the current cloud computing platforms offer Infrastructure as a Service (IaaS) model,
which aims to provision basic virtualized computing resources as on-demand and dynamic
services. Nevertheless, a single cloud does not have limitless resources to offer to its users,
hence the notion of an Inter-Cloud environment where a cloud can use the infrastructure
resources of other clouds. However, there is no common framework in existence that allows
the service owners to seamlessly provision even some basic services across multiple cloud
service providers, albeit not due to any inherent incompatibility or proprietary nature of the
foundation technologies on which these cloud platforms are built. In this paper we present a
novel solution which aims to cover a gap in a subsection of this problem domain. Our
solution offers a security architecture that enables service owners to provision a dynamic and
service-oriented secure virtual private network on top of multiple cloud IaaS providers. It
does this by leveraging the scalability, robustness and flexibility of peer-to-peer overlay
techniques to eliminate the manual configuration, key management and peer churn problems
encountered in setting up the secure communication channels dynamically, between different
components of a typical service that is deployed on multiple clouds. We present the
implementation details of our solution as well as experimental results carried out on two
commercial clouds.
5. Achieving security, robust cheating resistance, and high-efficiency for
outsourcing large matrix multiplication computation to a malicious cloud
Computation outsourcing to the cloud has become a popular application in the age of cloud
computing. This computing paradigm brings in new security concerns and challenges, such
as input/output privacy and result verifiability. Given that matrix multiplication computation
(MMC) is a ubiquitous scientific and engineering computational task, we are motivated in
this paper to design a protocol enabling secure, robust cheating-resistant, and efficient
outsourcing of MMC to a malicious cloud. The main idea for protecting privacy is to apply
transformations to the original MMC problem to obtain an encrypted MMC problem, which
is sent to the cloud; the result returned from the cloud is then transformed back to recover the
correct result of the original problem. Next, a randomized Monte Carlo verification
algorithm with one-sided error is introduced to handle result verification. We analytically
show that the proposed protocol is correct, secure, and robust against cheating. Extensive
theoretical analysis and experimental evaluation also show its high efficiency and immediate
practicability. Finally, comparisons between the proposed protocol and previous protocols
demonstrate the improvements of the proposed protocol.
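A classic instance of the randomized Monte Carlo verification step the abstract describes is Freivalds' algorithm (the paper's own verifier may differ in detail): to check C = A·B without redoing the O(n³) multiplication, test A(Bx) = Cx for random 0/1 vectors x. Each round costs O(n²), a correct C always passes (one-sided error), and an incorrect C is caught with probability at least 1/2 per round.

```python
# Freivalds' algorithm: Monte Carlo verification of C == A * B.
import random

def freivalds_check(A, B, C, rounds=20):
    """Return True if C == A*B with probability >= 1 - 2**(-rounds)."""
    n = len(A)
    for _ in range(rounds):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx  = [sum(B[i][j] * x[j]  for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx  = [sum(C[i][j] * x[j]  for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False   # a mismatch proves C is definitely wrong
    return True            # correct answers are never rejected

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad  = [[19, 22], [43, 51]]
print(freivalds_check(A, B, C_good))  # True
print(freivalds_check(A, B, C_bad))   # False (except with negligible probability)
```

The one-sided error property matters for outsourcing: an honest cloud is never falsely accused, while a cheating cloud escapes detection only with probability 2^(-rounds).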
6. Toward a secure and usable cloud-based password manager for web browsers
Web users are confronted with the daunting challenges of creating, remembering, and using
more, and stronger, passwords than ever before in order to protect their valuable assets on
different websites. A password manager, particularly a Browser-based Password Manager (BPM),
is one of the most popular approaches designed to address these challenges by saving users'
passwords and later automatically filling the login forms on behalf of users. Fortunately, all
five of the most popular Web browsers provide password managers as a useful built-in
feature. In this paper, we uncover the vulnerabilities of existing BPMs and analyze how they
can be exploited by attackers to crack users' saved passwords. Moreover, we propose a novel
Cloud-based Storage-Free BPM (CSF-BPM) design to achieve a high level of security with
the desired confidentiality, integrity, and availability properties. We have implemented a
CSF-BPM system in Firefox and evaluated its correctness, performance, and usability. Our
evaluation results and analysis demonstrate that CSF-BPM can be efficiently and
conveniently used. We believe CSF-BPM is a rational design that can also be integrated into
other popular browsers to make the online experience of Web users more secure, convenient,
and enjoyable.
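One well-known way to make a password manager "storage-free" (a hedged sketch of the general idea, not necessarily CSF-BPM's actual construction) is to derive each site password deterministically from a master secret and the site's domain, so no password database needs to exist at all:

```python
# Hedged sketch of a storage-free password scheme (not CSF-BPM's actual
# design): derive each site password with HMAC-SHA256 from a master secret
# and the site domain. The same inputs always regenerate the same password,
# so nothing has to be stored locally or in the cloud.
import hmac, hashlib, base64

def derive_site_password(master_secret: bytes, domain: str, length: int = 16) -> str:
    digest = hmac.new(master_secret, domain.encode(), hashlib.sha256).digest()
    # Base64-encode and truncate to obtain a printable password.
    return base64.urlsafe_b64encode(digest).decode()[:length]

master = b"correct horse battery staple"
pw1 = derive_site_password(master, "example.com")
pw2 = derive_site_password(master, "example.com")
pw3 = derive_site_password(master, "other.org")
assert pw1 == pw2   # deterministic: regenerated on demand, never stored
assert pw1 != pw3   # each site gets an unrelated password
print(len(pw1))     # 16
```

The trade-off in such designs is that the master secret becomes a single point of failure, which is why confidentiality and availability properties like those the abstract claims must be analyzed carefully.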
7. SECO: Secure and scalable data collaboration services in cloud computing
Cloud storage services enable users to store their data remotely and eliminate excessive local
installation of software and hardware. There is an increasing trend of outsourcing enterprise
data to the cloud for efficient data storage and management. However, this introduces many
new challenges for data security. One critical issue is how to enable a secure data
collaboration service, including data access and update, in cloud computing. A data
collaboration service supports the availability and consistency of data shared among
multiple users. In this paper, we propose a secure, efficient and scalable data collaboration
scheme, SECO. In SECO, we employ multi-level hierarchical identity-based encryption
(HIBE) to guarantee data confidentiality against an untrusted cloud. This paper is the first
attempt to explore secure cloud data collaboration services that preclude information
leakage and simultaneously enable a one-to-many encryption paradigm, data writing
operations and fine-grained access control. Security analysis indicates that SECO is
semantically secure against adaptive chosen ciphertext attacks (IND-ID-CCA) in the random
oracle model, and enforces fine-grained access control, collusion resistance and backward
secrecy. Extensive performance analysis and experimental results show that SECO is highly
efficient, with low overhead in computation, communication and storage.
8. The method to secure scalability and high density in cloud data-center
As IT infrastructures shift to cloud computing, the demand for cloud data centers has
increased. Cloud computing is a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of computing resources that can be rapidly provisioned and
released with minimal management effort, and economic interest in data centers that provide
cloud computing services is growing. This study analyzes the factors for improving
the power efficiency while securing scalability of data centers and presents the considerations
for cloud data center construction in terms of power distribution method, power density per
rack and expansion unit separately. The result of this study may be used for making rational
decisions concerning the power input, voltage transformation and unit of expansion when
constructing a cloud data center or migrating an existing data center to a cloud data center.
9. SABA: A security-aware and budget-aware workflow scheduling strategy in
clouds
High quality of security service is increasingly critical for Cloud workflow applications.
However, existing scheduling strategies for Cloud systems disregard the security requirements
of workflow applications and consider only CPU time, neglecting other resources such as
memory and storage capacity. This resource competition can noticeably affect the computation
time and monetary cost of both submitted tasks and their required security services. To address
this issue, in this paper we introduce the immovable dataset concept, which constrains the
movement of certain datasets due to security and cost considerations, and propose a new
scheduling model in the context of Cloud systems. Based on this concept, we propose a
Security-Aware and Budget-Aware workflow scheduling strategy (SABA), which maintains an
economical distribution of tasks among the available CSPs (Cloud Service Providers) in the
market to provide customers with shorter makespan as well as security services. We
conducted extensive simulation studies using six different workflows from real-world
applications as well as synthetic ones. Results indicate that scheduling performance is
affected by immovable datasets in Clouds and that the proposed scheduling strategy is highly
effective for a wide spectrum of workflow applications.
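The flavor of security-and-budget-aware placement can be illustrated with a deliberately simple greedy sketch (the cost model, provider list, and per-task security levels are my own assumptions and are far simpler than SABA's actual strategy): each task goes to the cheapest provider whose security level suffices, subject to a total budget.

```python
# Illustrative sketch only (not SABA's algorithm): greedy security- and
# budget-aware task placement across hypothetical cloud service providers.

providers = [
    {"name": "CSP-A", "cost": 1.0, "security": 1},
    {"name": "CSP-B", "cost": 2.5, "security": 2},
    {"name": "CSP-C", "cost": 4.0, "security": 3},
]

def schedule(tasks, providers, budget):
    """Assign each task (dict with 'id' and required 'security') greedily."""
    plan, spent = {}, 0.0
    for task in tasks:
        eligible = [p for p in providers if p["security"] >= task["security"]]
        if not eligible:
            raise ValueError(f"no provider can host task {task['id']}")
        cheapest = min(eligible, key=lambda p: p["cost"])
        if spent + cheapest["cost"] > budget:
            raise ValueError("budget exceeded")
        plan[task["id"]] = cheapest["name"]
        spent += cheapest["cost"]
    return plan, spent

tasks = [{"id": "t1", "security": 1}, {"id": "t2", "security": 3}]
plan, spent = schedule(tasks, providers, budget=6.0)
print(plan)   # {'t1': 'CSP-A', 't2': 'CSP-C'}
print(spent)  # 5.0
```

A real strategy like SABA must additionally account for makespan, data movement constraints (the immovable datasets above), and resource contention, which a per-task greedy pass cannot capture.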
10. Improved security of a dynamic remote data possession checking protocol for
cloud storage
Cloud storage offers users high-quality, on-demand data storage services and frees them from
the burden of maintenance. However, cloud servers are not fully trusted. Whether the data
stored in the cloud are intact or not becomes a major concern of the users. Recently, Chen et
al. proposed a remote data possession checking protocol to address this issue. One distinctive
feature of their protocol is its support for data dynamics, meaning that users are allowed to
modify, insert and delete their outsourced data without re-running the whole protocol.
Unfortunately, in this paper we find that this protocol fails to achieve its purpose, since it is
vulnerable to a forgery attack and a replace attack launched by a malicious server.
Specifically, we show how a malicious cloud server can deceive the user into believing that
the entire file is well maintained by using the metadata related to the file alone, or with only
part of the file and its metadata. Then, we propose an improved protocol to fix the security
flaws and formally prove that our proposal is secure under a well-known security model. In
addition, our improvement keeps all the desirable features of the original protocol.
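The core shape of remote possession checking is a challenge-response exchange. The minimal sketch below (much simpler than the paper's protocol, and requiring the verifier to keep a local copy, which real PDP schemes avoid via homomorphic tags) shows why a fresh random nonce matters: a stale, precomputed digest cannot answer it, so the server must actually hold the file.

```python
# Minimal challenge-response integrity check (illustrative, not the
# paper's protocol): the verifier sends a fresh nonce; the server must
# hash the actual file together with it.
import hashlib, os

def server_respond(file_bytes: bytes, nonce: bytes) -> bytes:
    """Server proves possession by hashing nonce || file."""
    return hashlib.sha256(nonce + file_bytes).digest()

def verifier_check(file_bytes: bytes, nonce: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response and compares."""
    return hashlib.sha256(nonce + file_bytes).digest() == response

data = b"outsourced archive v1"
nonce = os.urandom(16)        # fresh per challenge, defeats replayed answers
resp = server_respond(data, nonce)
assert verifier_check(data, nonce, resp)             # honest server passes
assert not verifier_check(b"tampered", nonce, resp)  # modified file fails
```

The attacks described above exploit exactly the gap this sketch ignores: when verification depends only on reusable metadata, a server that has discarded (part of) the file can still answer challenges, which is what the improved protocol must prevent.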
11. A stochastic evolutionary coalition game model of secure and dependable virtual
service in Sensor-Cloud
Various intrusion detection systems (IDSs) have been proposed in recent years to provide safe
and reliable services in cloud computing. However, few of them have considered the
existence of service attackers who can adapt their attacking strategies to the topology-varying
environment and to service providers' strategies. In this paper, we investigate security and
dependability mechanisms for service providers facing software and hardware service
attacks, and propose a stochastic evolutionary coalition game (SECG) framework for
secure and reliable defenses in virtual sensor services. At each stage of the game, service
providers observe the resource availability, the quality of service (QoS), and the attackers'
strategies from cloud monitoring systems (CMSs) and IDSs. Based on these observations,
they decide how evolutionary coalitions should be dynamically formed for reliable
virtual-sensor-service composites to deliver data, and how to adaptively defend in the face of
uncertain attack strategies. Using the evolutionary coalition game, virtual-sensor-service
nodes can form a reliable service composite via a reliability update function. With the
constructed Markov chain, virtual-sensor-service nodes can gradually learn the optimal
strategy and evolutionary coalition structure through minimax-Q learning, which maximizes
the expected sum of discounted payoffs, defined as the QoS of virtual-sensor-service
composites. The proposed SECG strategy in the virtual-sensor-service attack-defense game is
shown to achieve much better performance than strategies obtained from the evolutionary
coalition game or the stochastic game alone, which only maximize each stage's payoff, since
it successfully accommodates the environment dynamics and the strategic behavior of the
service attackers.
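Minimax-Q learning, which the abstract relies on, extends Q-learning to zero-sum stage games: the state value is the defender's maximin over the attacker's responses. The stripped-down sketch below is a simplification of my own (the real algorithm computes a *mixed* maximin strategy via a linear program; here the maximin is taken over pure actions only, and the actions and rewards are invented):

```python
# Stripped-down minimax-Q-style update (pure-strategy maximin, a
# simplification of the LP-based mixed-strategy version).
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9
ACTIONS = ["defend_hw", "defend_sw"]   # defender (service provider) actions
OPP = ["attack_hw", "attack_sw"]       # attacker actions

# Q[state][(defender_action, attacker_action)] -> value
Q = defaultdict(lambda: {(a, o): 0.0 for a in ACTIONS for o in OPP})

def value(state):
    """Maximin value: best defender action against the worst-case attack."""
    return max(min(Q[state][(a, o)] for o in OPP) for a in ACTIONS)

def update(state, a, o, reward, next_state):
    """One temporal-difference update toward reward + discounted maximin value."""
    td_target = reward + GAMMA * value(next_state)
    Q[state][(a, o)] += ALPHA * (td_target - Q[state][(a, o)])

# One simulated stage: defending software against a software attack pays off.
update("s0", "defend_sw", "attack_sw", reward=1.0, next_state="s0")
print(round(Q["s0"][("defend_sw", "attack_sw")], 3))  # 0.1
```

Iterating such updates over observed stages is what lets the service nodes in the abstract converge toward a defense strategy that is robust to an adaptive attacker rather than merely greedy per stage.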
12. A cloud-based secure authentication (CSA) protocol suite for defense against
Denial of Service (DoS) attacks
Cloud-based services have become part of our day-to-day software solutions. The identity
authentication process is considered to be the main gateway to these services. As such, these
gates have become increasingly susceptible to aggressive attackers, who may use Denial of
Service (DoS) attacks to close these gates permanently. There are a number of authentication
protocols that are strong enough to verify identities and protect traditional networked
applications. However, these authentication protocols may themselves introduce DoS risks
when used in cloud-based applications. This risk arises from the use of a heavy verification
process that may consume the cloud's resources and disable the application service. In this
work, we propose a novel cloud-based authentication protocol suite that is not only aware of
internal DoS threats but is also capable of defending against external DoS attackers. The
proposed solution uses a multilevel adaptive technique to dictate the efforts of the protocol
participants. This technique is capable of identifying a legitimate user's requests and placing
them at the front of the authentication process queue. The authentication process was
designed in such a way that the cloud-based servers become footprint-free and completely
aware of the risks of any DoS attack.
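The multilevel queueing idea can be sketched with a priority queue (an illustration of the general technique only; the criterion for recognizing legitimate clients and the two-level split are assumptions of mine, not the paper's protocol):

```python
# Sketch of multilevel request prioritization: clients already recognized
# as legitimate are served before unknown (possibly DoS) traffic.
import heapq
from itertools import count

LEGIT_PRIORITY, UNKNOWN_PRIORITY = 0, 1   # lower number = served first
seen_legit = {"alice"}                    # e.g., passed an earlier cheap check

queue, tie = [], count()                  # tie-breaker keeps FIFO within a level

def enqueue(client, request):
    prio = LEGIT_PRIORITY if client in seen_legit else UNKNOWN_PRIORITY
    heapq.heappush(queue, (prio, next(tie), client, request))

for client in ["bot-1", "bot-2", "alice", "bot-3"]:
    enqueue(client, "auth-request")

served = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(served)   # ['alice', 'bot-1', 'bot-2', 'bot-3']
```

The point of such a scheme is that a flood of unverified requests can exhaust only the low-priority level, so legitimate users still reach the (expensive) verification step.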
13. An integrated task computation and data management scheduling strategy for
workflow applications in cloud environments
A workflow is a systematic computation or a data-intensive application with regular
computation and data access patterns. Addressing these runtime regularities effectively is
key to designing scalable scheduling algorithms in Cloud environments. Existing research
treats task scheduling and data management optimization for workflows separately; little
attention has been paid so far to combining the two. The proposed scheme indicates that
coordinating task computation and data management can improve scheduling performance.
Our model considers data management to obtain a satisfactory makespan across multiple
datacenters. At the same time, our adaptive data-dependency analysis can reveal
parallelization opportunities. In this paper, we introduce an adaptive data-aware scheduling
(ADAS) strategy for workflow applications. It consists of a set-up stage, which builds
clusters of workflow tasks and datasets, and a run-time stage, which overlaps the execution
of the workflows. Through rigorous performance evaluation studies, we demonstrate that our
strategy can effectively improve workflow completion time and resource utilization in a
Cloud environment.
15. DynamicCloudSim: Simulating heterogeneity in computational clouds
Simulation has become a commonly employed first step in evaluating novel approaches
towards resource allocation and task scheduling on distributed architectures. However,
existing simulators fall short in their modeling of the instability common to shared
computational infrastructure, such as public clouds. In this work, we present
DynamicCloudSim, which extends the popular simulation toolkit CloudSim with several
factors of instability, including inhomogeneity and dynamic changes of performance at
Solving this model, we observe a tradeoff between energy consumption and performance.
We therefore develop a BFGS-based algorithm to optimize this tradeoff by searching for the
optimal system parameter values that let data center operators right-size their data centers.
We implement our Stochastic Right-sizing Model (SRM) and deploy it in a real-world cloud
data center. Experiments with two real-world workload traces show that SRM can
significantly reduce energy consumption while maintaining high performance.
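The energy-performance tradeoff behind right-sizing can be made concrete with a toy cost model (entirely my own assumptions: linear energy in the number of active servers, an M/M/1-style performance penalty that blows up as utilization approaches 1, and a brute-force scan standing in for the paper's BFGS search):

```python
# Toy right-sizing sketch (not SRM's model): minimize energy cost plus a
# queueing-style performance penalty over the number of active servers.

def total_cost(n_servers, load=100.0, p_server=0.2, penalty=50.0):
    energy = p_server * n_servers                 # energy grows with servers
    utilization = load / (n_servers * 10.0)       # assume 10 units/server
    if utilization >= 1.0:
        return float("inf")                       # overloaded: unusable
    # M/M/1-style penalty: delay explodes as utilization -> 1.
    perf = penalty * utilization / (1.0 - utilization)
    return energy + perf

# Brute-force scan in place of BFGS (fine for a 1-D integer search).
best = min(range(1, 200), key=total_cost)
print(best)  # 60: fewer servers save energy but queueing delay dominates
```

Under this model the minimum sits where the marginal energy saved by removing a server equals the marginal delay penalty incurred, which is exactly the balance a right-sizing algorithm like the one described above must find.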