A Systematic Survey on Cloud Forensics Challenges,


Solutions, and Future Directions

BHARAT MANRAL and GAURAV SOMANI, Central University of Rajasthan, India


KIM-KWANG RAYMOND CHOO, University of Texas at San Antonio, USA
MAURO CONTI, University of Padua, Italy
MANOJ SINGH GAUR, Indian Institute of Technology, India

The challenges of cloud forensics have been well-documented by both researchers and government agencies
(e.g., U.S. National Institute of Standards and Technology), although many of the challenges remain unre-
solved. In this article, we perform a comprehensive survey of cloud forensic literature published between
January 2007 and December 2018, categorized using a five-step forensic investigation process. We also present
a taxonomy of existing cloud forensic solutions, with the aim of better informing both the research and prac-
titioner communities, as well as an in-depth discussion of existing conventional digital forensic tools and
cloud-specific forensic investigation tools. Based on the findings from the survey, we present a set of design
guidelines to inform future cloud forensic investigation processes, and a summary of digital artifacts that can
be obtained from different stakeholders in the cloud computing architecture/ecosystem.
CCS Concepts: • Applied computing → Evidence collection, storage and analysis;
Additional Key Words and Phrases: Cloud forensics, digital forensics in cloud, forensic challenges in cloud,
cloud storage artifacts forensics, cloud network forensics, VM forensics in cloud, SDN forensics
ACM Reference format:
Bharat Manral, Gaurav Somani, Kim-Kwang Raymond Choo, Mauro Conti, and Manoj Singh Gaur. 2019. A
Systematic Survey on Cloud Forensics Challenges, Solutions, and Future Directions. ACM Comput. Surv. 52,
6, Article 124 (November 2019), 38 pages.
https://doi.org/10.1145/3361216

1 INTRODUCTION
Cloud forensics deals with incidents involving cloud infrastructure and services in both
criminal investigations and civil litigation. Despite the online nature of cloud computing, the inci-
dent being investigated does not necessarily have to be purely online. For example, the cloud infras-
tructure and their services may be used in the commission of a conventional physical crime (e.g.,
storage of incriminating materials in the cloud, relating to a physical terrorist attack) and hence,
subject to a forensic investigation. Cloud forensics is more complex than the typical computing

Authors’ addresses: B. Manral and G. Somani (corresponding author), Central University of Rajasthan, Ajmer, India; emails:
justbharatmanral@gmail.com, gaurav@curaj.ac.in; K.-K. R. Choo, University of Texas at San Antonio, San Antonio, TX,
USA; email: raymond.choo@fulbrightmail.org; M. Conti, University of Padua, Padua, Veneto, Italy; email: conti@math.
unipd.it; M. S. Gaur, Indian Institute of Technology, Jammu, Jammu and Kashmir, India; email: manoj.gaur@iitjammu.ac.in.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee
provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and
the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be
honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
0360-0300/2019/11-ART124 $15.00
https://doi.org/10.1145/3361216


Table 1. Existing Cloud Forensic Reviews/Surveys: A Comparative Summary

Contribution | Solution taxonomy? | Artifact identification? | Coverage of tools? | Research gap identification? | Scope of literature review
Our Survey | ✓ | ✓ | ✓ | ✓ | 01/2009–06/2019
[14] | × | × | ✓ | ✓ | 11/2010–03/2017
[6] | × | × | × | × | 01/2011–11/2017
[21] | × | × | × | ✓ | 09/2011–03/2017
[58] | × | × | × | × | 10/2011–10/2014
[88] | × | × | × | × | 04/2010–09/2015
[45] | ✓ | × | × | ✓ | 01/2011–04/2015
[9] | × | × | × | ✓ | 05/2010–06/2014
[87] | × | × | × | × | 01/2011–10/2014
[65] | × | × | × | × | 01/2010–08/2014
[53] | × | × | × | ✓ | 03/2011–09/2014
[89] | × | × | × | × | 04/2010–06/2013
[7] | × | × | ✓ | ✓ | 11/2010–10/2014
[82] | × | × | × | ✓ | 01/2011–12/2012
[104] | × | × | × | ✓ | 05/2010–06/2012
[26] | × | × | × | × | 05/2010–08/2012
[36] | × | × | × | ✓ | 02/2010–03/2011
[83] | × | × | × | × | 2011

device and mobile application (app) forensics, partly due to the architectural complexity of the
cloud (an amalgamation of diverse technologies, such as virtualization of resources, utility com-
puting based on remotely available resources, and distributed systems) [57]. There have been a
number of challenges noted in the literature, such as lack of compatibility and reliability of existing
tools and mechanisms to facilitate cloud forensic investigations, and evidence sources can be at the
client-side, network-side (e.g., for data-in-transit), and server-side (cloud service provider (CSP))
[21]. Most research efforts have focused on client-side forensic analysis, partly due to pragmatic
constraints as it is challenging for researchers, if not impractical, to conduct forensic analysis of
commercial cloud servers, such as those belonging to Amazon and Google. Due to the widespread
use of cloud services [56, 67] and the current privacy-conscious climate (e.g., the introduction of
the General Data Protection Regulation (GDPR)) [24], there is an increasing focus on cloud foren-
sics. Virtualization and distributed environment are two of several factors that may hinder an
investigator’s capability to timely identify and acquire relevant evidential artifacts, for example,
due to the challenge in pinpointing the precise physical location of the hardware resources. Juris-
diction is another factor that may hinder an investigation, as the investigator may not have
jurisdiction over data stored in a foreign nation.
While cloud forensics is a relatively mature research area, as evidenced by the number of published literature reviews/surveys, a number of ongoing challenges have not been addressed.
First, in Table 1, we present a comparative summary of existing cloud forensic review/survey ar-
ticles. From this summary, we observe that there is a lack of taxonomy focusing on cloud forensic
solutions, as well as coverage of cloud forensic tools and artifact identification at various lay-
ers/locations in the cloud ecosystem. Thus, in this article, we conduct a systematic literature review
relating to the various practical challenges as illustrated in Figure 1 while executing the five-stage
digital forensic process, similar to the process used by Pichan et al. [65]. These challenges also high-
light the requirements of cloud-specific solutions. This taxonomy is the first attempt to provide


Fig. 1. Major challenges at each stage of cloud forensic investigation.

detailed coverage and classification of existing cloud forensic solutions, where we also discuss the
strengths and weaknesses of these solutions. As part of this study, we also identify the relevant
stakeholders or actors in the cloud computing environment, which are potential sources of evi-
dential artifacts. In particular, we also describe the dependencies between the artifact collection
processes of different stakeholders (e.g., cloud end-user, service owner, and CSP). Last, we present
a set of cloud forensic investigation considerations in the form of a guideline.
The remainder of the article is organized as follows: Section 2 describes our literature review method-
ology (e.g., search methods, and inclusion criteria) and introduces our solution taxonomy used
to categorize existing cloud forensic solutions. In Sections 3–6, we present existing advances in
incident-driven cloud forensics, provider-driven cloud forensics, consumer-driven cloud foren-
sics, and resource-driven cloud forensics, respectively. In Section 7, we discuss the limitations of
existing forensics tools and the need for cloud forensic investigation metrics. Section 8 introduces
a set of cloud forensic investigation guidelines along with a set of future directions. Finally,
we conclude this article in Section 9.

2 SURVEY METHODOLOGY AND SOLUTION TAXONOMY


In this section, we describe the methodology used to guide this study, such as our search strategy
and inclusion criteria for the final set of papers. In addition, we present the solution taxonomy,
based on our systematic literature review.

2.1 Systematic Literature Review


We used Google Scholar and existing databases (e.g., ACM Digital Library, ScienceDirect, IEEEX-
plore and Springerlink) to search for existing literature, using keywords such as “cloud forensics,”
(“cloud computing” AND “digital forensics”). In the first phase, we found publications whose focus
is mainly on cloud resources (e.g., storage and network). Then, in our second search phase, we fo-
cused on individual resources and their underlying technology. Further scrutiny of the publications


Table 2. Research Evolution in the Area of Cloud Forensics

Year 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019
Contributions 1 1 7 7 11 11 8 10 14 5 5

Fig. 2. Proposed cloud forensics solution taxonomy.

revealed that most existing research focuses on logs generated either by the client or the server. This
then resulted in the next phase of the literature search, where we focused on keywords such as
(“network log” AND forensics). Our search process was iterative, and we also narrowed our search
scope to individual journals (e.g., Digital Investigation) and repeated the same search process. We
also repeated the procedure with various journals and conferences dedicated to the parent fields of
cloud computing and digital forensics.
Based on the outcomes of these searches, we removed duplicate publications and obtained our
first set of more than 450 publications. Then, we reviewed the publications’ abstract, introduction,
and conclusion, and categorized these publications as “directly related” or “not directly related”
to cloud forensics. We also tabularized directly related papers in Table 2 to present a trend and
research evolution in the area of cloud forensics over the years. In the second phase, we studied
these publications and categorized them accordingly. In the third and final phase of the literature
review, we excluded the publications if they do not satisfy the inclusion criteria below.
(1) Contributions directly related to the forensic study of cloud computing.
(2) Contributions focusing on individual forensics of the various resources in a cloud.
(3) Contributions focusing on digital forensics of the underlying cloud technologies.
(4) Contributions relevant to both cloud computing and digital forensics.
Finally, we selected 80 publications on which we based our taxonomy, as presented in
Figure 2, where each layer is color-coded to distinguish each level of the taxonomy. For example,
the root "Cloud Forensics" represents level 0 of the hierarchical taxonomy and is shown in
blue. Similarly, the four major categories (incident-driven, provider-driven, consumer-
driven, and resource-driven) represent level 1 in light brown, and so forth.


2.2 Proposed Cloud Forensics Solution Taxonomy


In this section, we categorize existing cloud forensic research into incident-driven (Section 3),
provider-driven (Section 4), consumer-driven (Section 5), and resource-driven (Section 6). With
the exception of the first category (which focuses on security incidents), the categories focus on
the different actors and resources in the cloud. Incidents are the starting point of every investiga-
tion. Contributions in cloud forensics can be models, systems, frameworks, and solutions relating
to the security incident that triggers the forensic investigation. The two most highlighted chal-
lenges identified in our literature survey are CSPs’ control over the hardware infrastructure and
consumers’ degree of control over the working environment, followed by forensic-as-a-service
(provider-driven) and CSP independence (consumer-driven). CSPs provide a range of hardware
and software resources to cloud consumers, such as compute and memory resources in the form of
VMs, storage, network, and applications. Hence, it is not surprising that a number of forensic pro-
cesses, methods, and tools focusing on these individual resources were presented in the literature.
The taxonomy includes solutions that analyze data at each of three stages: client-side, network-
side (data-in-transit), and server-side [65]. One would also observe that the solutions may belong
to more than one category. Thus, we focus on their perspective in the respective stages. For example, if a single solution fits in multiple categories, say both the incident-driven and provider-driven categories, then in the incident-driven category we examine whether it performs continuous or post-incident forensics, while in the provider-driven category we focus on the concept it introduces, the issue it addresses, and/or its practical feasibility.

3 INCIDENT-DRIVEN CLOUD FORENSICS


In today’s IT-driven environment, security incidents are becoming a norm, and examples include
violation of security policies and their implementation or enforcement measures. For example,
according to a study by McAfee [55], organizations reportedly faced more than 23 distinct threats
monthly on average, and these threats are increasing at a rate of more than 18% per year.
Security incidents are generally the trigger of a digital investigation, and hence the base of the
first category. In our classification, we further divide the forensic analysis of incidents into two
sub-categories: pre-incident forensics and post-incident forensics. We consider pre-incident forensics
as continuous, which is often referred to as forensic readiness in the literature. In the continuous sub-category, we
focus on the forensic capable cloud architecture and related solutions. In a post-incident analy-
sis, we examine the forensic models and procedures regarding the digital investigation involving
cloud infrastructure and services. From here onwards, we use “continuous forensics” and “foren-
sic readiness” interchangeably, since both terminologies refer to the same concept of “enabling
cloud infrastructure for forensic purposes.” Figure 3 depicts a simplified view of incident-driven
forensics. In the figure, the first part is an example scenario of continuous forensics (right) through
centralized logging and the second part depicts post-incident evaluation (left). Both parts regard
logs as forensic artifacts in the investigation.

3.1 Continuous Forensics


Continuous cloud forensics includes the evaluation of forensic capabilities for an organization’s
cloud infrastructure. Normally, the forensic process is a post-incident evaluation process. How-
ever, due to the popularity of cloud services and the constantly evolving technological and cyber
threat landscape, it is pivotal that forensically friendly cloud architectures are in place for continuous evidence
collection, aggregation, and storage (coined as “forensic-by-design” [1, 2]). Hence, there has been
ongoing work in ensuring forensic competence in cloud infrastructure, as evidenced by the existing
literature [27, 40–42, 50, 63, 103]. However, the nature of cloud computing and the ecosystem can


Fig. 3. Incident-driven cloud forensics: This subcategory includes methods following continuous forensics
through central logging. The other subcategory (post-incident forensics) has two options for artifact collec-
tion, namely: (1) cloud consumer credentials and (2) by request to the CSP.

complicate forensic readiness, for example due to location of the data, multi-tenant environment,
heterogeneous log-formats, and the significantly increasing data (e.g., network traffic) [40, 71].
We will discuss some of these contributions here, although we remark that they also overlap with
other categories. For example, Kebande and Venter [41, 42] proposed a distributed agent-based so-
lution, where CSPs infect virtual instances with a botnet, to facilitate the collection, preservation,
and storing of evidence. Liu and Zou [50] proposed a forensic agent-based solution for proactive
forensics. There have also been focus on designing secure log capturing or storage mechanisms,
given the importance of logs in digital forensics and digital investigations [4, 98, 103].
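To illustrate what “secure log capture” can mean in practice, the following is a minimal sketch (our own illustration, not a mechanism from the cited works) of tamper-evident logging via hash chaining: each record commits to the hash of its predecessor, so any retrospective modification or deletion breaks verification.

```python
import hashlib
import json
import time

def _digest(record: dict) -> str:
    # Deterministic serialization so the hash is reproducible at verification time.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class HashChainedLog:
    """Append-only log where each entry commits to its predecessor's hash."""
    def __init__(self):
        self.entries = []

    def append(self, message: str, source: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"ts": time.time(), "source": source, "msg": message, "prev": prev_hash}
        entry = {**body, "hash": _digest(body)}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev_hash = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("ts", "source", "msg", "prev")}
            if e["prev"] != prev_hash or e["hash"] != _digest(body):
                return False  # chain broken: evidence of tampering
            prev_hash = e["hash"]
        return True

if __name__ == "__main__":
    log = HashChainedLog()
    log.append("user alice logged in", source="vm-42")
    log.append("outbound connection to 10.0.0.7:4444", source="vm-42")
    assert log.verify()
```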

3.2 Post Incident Forensics


Post incident forensics has been extensively studied, although a number of challenges remain. For
example, Dykstra and Sherman [38] developed FROST, a forensic toolkit, which focuses on evidence
acquisition for virtual disks, API logs, and guest firewall logs. Alex and Kishore [5] also
focused on the evidence collection stage through a centralized server, since investigators within
an organization may have access to the organization’s server to collect evidential artifacts. Other
researchers, such as those in References [8, 16], focused on collecting relevant information from
VM files for regenerating events and further analysis. Qi et al. [68] developed a forensic hypervi-
sor to perform reliable collection and preservation of evidential data from a compromised system,
including malicious guest OS. Other solutions proposed in the literature will be discussed later
in Sections 5 and 6, due to the overlaps with other categories.

Table 3. Provider-driven Cloud Forensics: An Overview

Log-based solutions
[54] Focused on cloud-based logging and presented logging guidelines for software-as-a-service
(SaaS) model.
[97] Proposed a central logging model for cloud forensic readiness.
[63] Presented a logging framework for forensic-enabled cloud architecture through cloud forensic
modules that gather forensic and log data from the VMs.
[40] Introduced a functional architecture for timely and efficient digital investigation through
MapReduce.
[103] Proposed Secure-Logging-as-a-Service (SecLaaS) for forensic-enabled cloud infrastructure. The
logs are available to cloud consumers and investigators through RESTful APIs.
[76] Proposed a blockchain technology-based secure logging-as-a-service (BlockSLaaS) for the
cloud environment.
[98] Proposed a public auditing model for the verification of integrity of cloud storage logs using
third-party auditor.
[73] Presented a framework named, CLOSER (Cloud Services-based Event Reconstruction) for
event reconstruction, underlined the role of aggregation algorithms for effective event
reconstruction, and proposed two variants of Leader-Follower (LF) algorithm.
[74] Proposed a framework and an algorithm for event reconstruction in the cloud environment.
Agent-based solutions
[50] Introduced a forensic agent in the VM that gathers necessary information and sends it to the
forensic center for storing and further processing.
[41] Proposed to use bots as forensic agents for continuous forensics in the cloud and implemented
the proposal in SaaS as Botnet-as-a-Service (BaaS).
[42] Discussed an agent-based solution to harvest information from VMs and provide it as
evidence to investigators as Agent-Based-Solution-as-a-Service (ABSaaS). It also ensured
evidence integrity through hashing.

4 PROVIDER-DRIVEN CLOUD FORENSICS


The role of CSPs in forensic investigations is crucial, particularly when they host and control the
underlying hardware infrastructure upon which consumers’ applications and data reside. Service
models also affect the degree of control over a working environment and play an important role in
forensics as well. In this category, we discuss CSP dependent solutions, which can be categorized
into log-based solutions and agent-based solutions—see also Tables 3 and 4. We also observed
that most provider-driven solutions emphasized the forensic readiness of the underlying cloud
infrastructure and on the continuous collection of evidence in a centralized manner. A major focus
of most approaches in this category is on continuous evaluation by CSPs, which they provide as a
service to the consumers and investigators. We visually depict provider-driven cloud forensics in
Figure 4.

4.1 Log-based Solutions


Logging is a continuous activity that documents every event of the system, including both hard-
ware and software components in a set of files referred to as log files. The purpose of log files differs
and depends on the software and applications that generate the events. In addition to fault tol-
erance, logs are an essential aspect of security control and digital investigations as they can help
the organizations to detect incidents, security violations, and malicious activities. For example,
investigators can determine the source and timeline of relevant events including reconstruction
of events [73, 74] for forensic purposes. Cloud architecture generates and stores various types of


Table 4. Provider-driven Cloud Forensics: A Comparative Summary

Log-based solutions
[54] Strengths: Proactive framework for application logging. Weaknesses: Focus was on particular applications.
[97] Strengths: Data acquisition through remote and central log server. Weaknesses: Additional requirement of a central log server; single point of failure.
[63] Strengths: A single forensic module (FM) that gathers all evidence and log information. Weaknesses: Requires hypervisor access; performance/time penalties due to interception.
[40] Strengths: A model to minimize forensic evidence analysis time in the cloud environment through MapReduce. Weaknesses: Lacks implementation and model evaluation.
[103] Strengths: Logging scheme with non-trusted user, CSP, and investigators; RESTful APIs for investigators to access forensic logs. Weaknesses: Lacks security assessment of APIs.
[76] Strengths: Blockchain used for integrity verification of cloud logs. Weaknesses: No consideration of CSPs being malicious.
[98] Strengths: Storage of tags, instead of actual log information, on the blockchain results in reduced computation and storage overhead. Weaknesses: Vague discussion on non-trusted CSP; no consideration of log validation prior to integrity verification.
Agent-based solutions
[50] Strengths: Uses existing tools to gather forensic information. Weaknesses: No security consideration regarding the forensic center, or discussion of malicious instances.
[41] Strengths: No major infrastructure modification required. Weaknesses: Lacks implementation and model evaluation.
[42] Strengths: Does not require modification of cloud infrastructure. Weaknesses: Lacks prototype implementation.

Fig. 4. Provider-driven cloud forensics, such as solutions based on forensic-as-a-service through central log
management.


logs including system, network, application, virtual machine, setup, security, web-server, and audit
logs [45] at multiple levels (VM, VMM, cloud) and access to these logs depends upon the particular
service model.
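Because these logs arrive in heterogeneous formats, a recurring first step in timeline or event reconstruction is to normalize them into a single time-ordered stream. The sketch below is a generic illustration (not the algorithms of References [73, 74]); the two parsers and their input formats are hypothetical examples.

```python
import datetime as dt

def parse_syslog(line):
    # e.g. "2019-03-02T10:15:30Z vm-42 sshd: Accepted password for alice"
    ts, host, msg = line.split(" ", 2)
    return {"time": dt.datetime.fromisoformat(ts.replace("Z", "+00:00")),
            "source": host, "event": msg}

def parse_webserver(line):
    # e.g. "10.0.0.7 [02/Mar/2019:10:15:31 +0000] GET /admin 403"
    ip, rest = line.split(" ", 1)
    raw_ts = rest[rest.index("[") + 1:rest.index("]")]
    msg = rest[rest.index("]") + 2:]
    return {"time": dt.datetime.strptime(raw_ts, "%d/%b/%Y:%H:%M:%S %z"),
            "source": ip, "event": msg}

def build_timeline(syslog_lines, web_lines):
    # Normalize heterogeneous records, then merge them into one time-ordered view.
    events = [parse_syslog(l) for l in syslog_lines] + [parse_webserver(l) for l in web_lines]
    return sorted(events, key=lambda e: e["time"])
```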
Raju et al. [73, 74] attempted to address challenges associated with event reconstruction in cloud
environment due to multi-tenancy and privacy violation, multiple heterogeneous data sources, and
large number of events. Specifically, they proposed frameworks and algorithms for effective cloud
event reconstruction based on log aggregation algorithm Leader-Follower (LF) [73] and traditional
digital event reconstruction [74]. In the cloud, log analysis is not trivial due to log decentralization,
formats, volatility, and accessibility [54]. Marty [54] proposed using log management architecture
where logging, log transport, and log tuning features are enabled, and presented the guidelines for
application logging in SaaS service model using technologies such as Django,1 Javascript, Apache,
and MySQL. Trenwith and Venter [97] posited the potential of logging all activities centrally to ex-
pedite the post-incident investigation. They also identified the requirements for forensic readiness
and proposed a model comprising a communication channel, encryption, compression, authenti-
cation and integrity of log data, authentication of client and server, and time-stamping. Patrascu
and Patriciu [63] presented a forensic-enabled cloud architecture to monitor different activities
in a cloud environment. They introduced a cloud forensic module that gathered all the forensic
evidence including logs from virtual machines by interacting with the underlying hypervisor. For
timely and efficient evidence analysis, Kebande and Venter [40] proposed Cloud Forensic Readi-
ness Evidence Analysis System (CFREAS) that utilizes distributed hardware clusters in the cloud
environment through MapReduce. The system comprises modules including Forensic Database
(FD) and Forensic MapReduce Task (FMT) modules. FMT retrieved the potential digital evidence
through MapReduce process, including forensic logs, virtual images, and hypervisor error logs and
stored them in the Forensic Database.
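The MapReduce pattern underlying CFREAS can be illustrated with a small, self-contained sketch (our own simplification, not the CFREAS design): the map step emits a key for each log record (here, the originating source) and the reduce step aggregates counts that an analyst could use to flag unusually active sources.

```python
from collections import defaultdict
from typing import Iterable

def map_phase(log_lines: Iterable[str]):
    # Emit (key, 1) pairs; here the key is the first token (e.g., a VM or host identifier).
    for line in log_lines:
        source = line.split(" ", 1)[0]
        yield source, 1

def reduce_phase(pairs):
    # Aggregate counts per key, as a reducer would across distributed workers.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

if __name__ == "__main__":
    lines = ["vm-01 login failed", "vm-01 login failed", "vm-07 disk attached"]
    print(reduce_phase(map_phase(lines)))   # {'vm-01': 2, 'vm-07': 1}
```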
In cloud forensics, confidentiality and integrity of all generated logs are crucial to ensure reliable
investigations as corrupted logs can have a significant impact on the entire discourse and outcome.
A number of authors have discussed the security aspects and forensic soundness of different logs
in the cloud computing environment [4, 76, 98, 103]. Zawoad et al. [103], for example, proposed a
SecLaaS scheme that collects logs for persistent storage and minimizes the potential for tampering
using Proof of Past Log (PPL). The authors stated that the investigators could access the logs
through RESTful APIs to ensure confidentiality. In addition, they proposed a Bloom filter variation
and Bloom tree for integrity verification process to achieve improved performance in terms of
time and space. Ahsan et al. [4] proposed the Cloud Log Assuring Soundness and Secrecy (CLASS)
process to secure logs in the cloud environment through asymmetric encryption. The authors
generated PPL using Rabin’s fingerprint method and Bloom filter. Rane and Dixit [76] leveraged
the immutable property of Blockchain to ensure the confidentiality and integrity of cloud logs and
proposed secure logging-as-a-service for the cloud environment. The scheme includes steps such
as extraction of logs from a virtual environment, creation of encrypted log entries for each log
using public key encryption, and storage of encrypted log entries on the blockchain. In a similar
way, Wang et al. [98] proposed a public verification model via Blockchain to ensure the integrity of
logs in cloud storage using a third-party auditor. The authors utilized homomorphic and one-way
hash functions to generate the tags for log entries and a Merkle tree structure for storage.
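To make the flavor of these integrity schemes concrete, the following sketch computes a Merkle root over per-entry log hashes; publishing only the root (e.g., on a blockchain) later allows verification that a presented log set matches what was committed. This is a generic illustration, not the exact constructions of References [76, 98, 103].

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Compute a Merkle root over per-log-entry hashes (tags)."""
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    logs = [b"2019-03-02T10:15:30Z vm-42 login alice",
            b"2019-03-02T10:16:02Z vm-42 sudo su"]
    root = merkle_root(logs)
    # Publishing only `root` commits to the log set without storing the logs themselves.
    print(root.hex())
```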
Central logging [97] is another potential solution owing to the distributed nature of cloud com-
puting. In the cloud, identification and collection of data is not a trivial task due to virtualization,
data distribution, and replication. Central logging allows investigators to have a timely and effi-
cient identification and collection of evidence artifacts using a single storage location. A number of

1 https://www.djangoproject.com/.


authors have also highlighted the importance of forensic-enabled cloud and the need for a separate
module that collects forensic evidence [40, 63]. For example, Kebande and Venter [40] proposed
using MapReduce to facilitate timely analysis of large evidence-set in the cloud environment and
introduced a functional architecture of a system utilizing data analytics.
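As a concrete illustration of the ingredients identified in Reference [97] (compression, integrity, authentication, and time-stamping), the sketch below forwards a batch of log lines to a central collector over HTTPS. It is our own illustrative client, not the authors’ prototype, and the collector URL and token are placeholders.

```python
import gzip
import hashlib
import json
import time
import urllib.request

COLLECTOR_URL = "https://logs.example.org/ingest"   # placeholder endpoint
API_TOKEN = "REPLACE_ME"                             # placeholder credential

def forward_logs(lines, source_id):
    record = {
        "source": source_id,
        "collected_at": time.time(),                 # client-side timestamp
        "lines": lines,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    body = gzip.compress(payload)                    # compression before transport
    digest = hashlib.sha256(payload).hexdigest()     # integrity value sent alongside
    req = urllib.request.Request(
        COLLECTOR_URL,
        data=body,
        headers={
            "Content-Encoding": "gzip",
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
            "X-Payload-SHA256": digest,
        },
        method="POST",
    )
    # TLS protects the records in transit; the collector re-checks the digest on receipt.
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```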
Log-correlation is a major consideration in cloud log analysis, although this may not be widely
considered in existing works. For example, Marty [54] only focused on the logging of particular
applications and did not consider the correlation of the different logs. The prototype implementa-
tion described in Reference [97] was designed for Windows event logs, and did not consider the
security of the central log server. Kernel modification [63] for forensic enabled cloud architecture
can be costly due to the scale of the cloud’s hardware infrastructure, and may not be feasible for
CSPs as it will require the shutdown and restart of current cloud services (i.e., financial and legal
implications due to the shutdown). In the approach of Kebande and Venter [40], the authors did
not present any functional prototype of their proposed concept and assumed that the CSP can be
fully trusted.

4.2 Agent-based Solutions


A number of agent-based solutions have also been proposed in the literature, such as the (con-
troversial) use of bots and botnets as a forensic agent. Bots are a type of malware that exploits
the vulnerabilities of network devices and tries to infect as many systems as possible by creating a
chain of infection from one compromised computing device (e.g., personal computer or Internet of
Things (IoT) device) to another computing device. These compromised machines can then form a
botnet, a network of (malicious) agents, which performs a set of predefined functions at predefined
times or on the orders of its Command and Control (C&C) server/master [20]. Publications in this
category include those that introduced pro-active measures to extract information in a non-
malicious manner to facilitate post-incident forensic investigations. For example, Kebande and
Venter [41, 42] proposed infecting each virtual instance with a bot agent, to harvest information
from these machines and store them in a database. Kebande and Venter [42] gathered both volatile
and non-volatile data from infected VMs, including network, and sent them to the centralized evi-
dence preservation system. Later, investigators may access the information as Botnet-as-a-Service
(BaaS) [41] or as Agent-Based-Solution-as-a-Service (ABSaaS) [42] for forensic purposes. Liu and
Zou [50] also proposed a solution based on forensic agents and proposed a system model compris-
ing a forensic center and a forensic query server. The forensic center is responsible for processing
raw forensic data and converting them into a series of tuples for the database storage. Forensic
query server provided an interface to query and analyze the evidential data such as logged users,
opened files, network connections, process information, running and auto-run services, and sys-
tem logs. However, the scenario of a compromised VM instance and the collection of tampered evidence
through a forensic agent were not considered by the authors. We also remark that BaaS [41, 42]
may be questioned in a court of law, due to the use of bot malware to compromise the suspect’s
or target’s machine. Hence, it is highly recommended that the legal team be consulted prior to
using such an approach.
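As an illustration of the kind of volatile data such an agent could gather benignly (rather than via bot malware), the following sketch uses the psutil library to collect logged-in users, processes, and network connections. It is our own example, not the implementation from References [41, 42, 50]; shipping the snapshot to a forensic center is left as a comment.

```python
import json
import time
import psutil  # third-party: pip install psutil

def snapshot_volatile_state():
    """Collect volatile artifacts similar to those listed in the surveyed agent-based work."""
    return {
        "collected_at": time.time(),
        "users": [u._asdict() for u in psutil.users()],
        "processes": [p.info for p in psutil.process_iter(["pid", "name", "username"])],
        "connections": [
            {"laddr": str(c.laddr), "raddr": str(c.raddr), "status": c.status}
            for c in psutil.net_connections(kind="inet")
        ],
    }

if __name__ == "__main__":
    state = snapshot_volatile_state()
    # In a deployment, this JSON would be hashed/signed and sent to the forensic center.
    with open("volatile_snapshot.json", "w") as fh:
        json.dump(state, fh, indent=2, default=str)
```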

5 CONSUMER-DRIVEN CLOUD FORENSICS


It is generally acknowledged that CSP plays a crucial role in cloud forensics, as discussed earlier.
In other words, because they have less visibility/control in the cloud environment, particularly in Platform as
a Service (PaaS) and Software as a Service (SaaS), forensic investigators have to collaborate with
CSPs during the collection of evidence. There are a number of legal and operational challenges [17].
For example, CSPs have the right to decline a request from a foreign investigator (e.g., legal juris-
dictional restrictions) or consumer (e.g., if the request is not part of the service level agreements


Fig. 5. Consumer-driven cloud forensics: Showing two CSP independent solutions, management plane and
central forensic server for evidence collection.

(SLAs)). Even if the CSP cooperates, we have to consider issues such as evidence integrity and reli-
ability and chain of custody. Hence, there is a need to devise solutions that are consumer-driven, as
compared to CSP-driven or CSP-dependent, and designed with trust in mind. In Figure 5, we present an example
of a consumer-driven forensic solution involving a forensic server [5], a centralized location for
evidence artifact collection and storage.

5.1 Physical Acquisition-based Solutions


Physical acquisition is a fundamental part of the conventional digital investigation process. Inves-
tigation teams identify and acquire potential sources of digital artifacts that may provide useful
insights into the on-going investigation. However, cloud computing differs from traditional com-
puting environment, and one such example is the multi-tenancy environment through resource
virtualization. In addition, the distributed nature of resources and jurisdiction further complicate
the physical acquisition of cloud resources.
In this section, we shift our focus from cloud-side forensics to client-side forensics. In Section 1,
we discussed the stages for forensics in cloud environment [21]: client-side, network, and server-
side. Most of our discussion in this survey is on server-side forensic solutions. However, it is es-
sential to realize that client-side machines also present a rich set of evidence artifacts for forensic
analysis. Client-side forensics involves minimal CSP dependency and is entirely consumer-driven. We would also like
to highlight that solutions between this category and storage forensics (Section 6.2) overlap as the
physical acquisition-based contributions focus on forensic artifacts on an end-user machine with
physical access. Therefore, we discuss the overlapped contributions briefly here (see also Tables 5
and 6), since we will be describing them in more depth in Section 6.2.
Client-side forensic artifacts can be broadly categorized into memory, disk, and network. Each
artifact provides information about user behavior and/or activity in the form of volatile, non-
volatile, and communication data. However, in this category, we limit our discussion to contri-
butions that address client-side forensics involving cloud applications and services. These contri-
butions mostly focus on the identification stage of the digital investigation process and require
physical access to client’s devices. For example, Chung et al. [22] presented a procedure for the
digital investigation of cloud storage applications with identification of potential artifacts on desk-
tops and mobile phones. Hale [37] identified forensic remnants residing on client desktop utilizing
Amazon Cloud Drive during upload, download, and delete operations. The remnants included
browser history and cache files, registry, installation paths, and remnants after an un-installation.


Table 5. Consumer-driven Cloud Forensics: An Overview

Physical acquisition-based solutions


[30] Presented live Android forensic technique and marked the potential forensic artifacts for
Dropbox cloud storage service.
[18] Discussed Windows mobile forensics with multiple scenarios including logical acquisition
with phone on/off, acquisition effect on obtained data with locked phone screen, and reset
operation effect on obtained data.
[32] Discussed circumvention of SSL/TLS validation on iOS devices through HTTPS traffic
interception and proxy server.
[72] Focused on bulk data analysis of different devices including challenges at each stage of
forensic process. Presented disparate device analysis process based on the authors’ digital
forensic intelligence analysis cycle.
Management plane
[38] Suggested API-based forensic acquisition through CSP independent management plane.
[5] Introduced a monitoring plane for continuous evidence collection and storage in a server
outside of cloud infrastructure.

Table 6. Consumer-driven Cloud Forensics: Strengths and Challenges/Weaknesses

Physical acquisition-based solutions
[30] Strengths: Live Android forensics without flashing a device. Weaknesses: Requires identification of a flaw in the bootloader; flaws can be bootloader-version specific.
[18] Strengths: Evaluated three existing tools for Windows mobile forensics; also evaluated the impact of device settings (including screen lock and device reset operation) on forensic results. Weaknesses: The findings could be specific to the tested models and may vary with other Windows mobile phones.
[32] Strengths: Presented multiple methods for circumvention of SSL/TLS validation on iOS devices to facilitate data collection from the devices for forensic analysis. Weaknesses: Exposes weaknesses and vulnerabilities of various apps that may lead to a number of exploits and data security violations.
[72] Strengths: Discussed the positive aspect of bulk data from various IoT devices for forensic analysis, as more data means more case-related knowledge and evidence; presented a digital forensic process for disparate data analysis. Weaknesses: Identification of potential evidence from different IoT devices is still an unexplored area, including data reduction methods for forensic analysis.
Management plane
[38] Strengths: Management console as a CSP-independent solution. Weaknesses: Requirement of establishing trust in the CSP.
[5] Strengths: Centralized and separate storage for evidence collection. Weaknesses: Additional need and cost of a forensic server; single point of failure; lacks security consideration for the FS.

Quick and Choo [69] demonstrated that one can forensically recover client data artifacts from a
client’s machine amid various interactions between client devices and Dropbox cloud storage. The
authors also discussed the preservation, analysis, and presentation of those artifacts.
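A simple way to appreciate what client-side collection involves is the sketch below, which inventories and hashes files under candidate cloud-client locations. The paths are illustrative guesses only (real artifact locations vary by operating system, client version, and user), and the sketch is not drawn from the cited studies.

```python
import hashlib
import os
from pathlib import Path

# Illustrative candidate locations only; real artifact paths vary by OS and client version.
CANDIDATE_PATHS = [
    Path.home() / "Dropbox",
    Path.home() / ".dropbox",
    Path(os.environ.get("LOCALAPPDATA", "")) / "Dropbox",
]

def inventory_artifacts(paths=CANDIDATE_PATHS):
    """Walk candidate locations and record path, size, mtime, and SHA-256 for later preservation."""
    records = []
    for root in paths:
        if not root.exists():
            continue
        for f in root.rglob("*"):
            if f.is_file():
                digest = hashlib.sha256(f.read_bytes()).hexdigest()
                stat = f.stat()
                records.append({"path": str(f), "size": stat.st_size,
                                "mtime": stat.st_mtime, "sha256": digest})
    return records
```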
Do et al. [30] discussed existing mobile forensic techniques, including flashing a device and de-
vice exploitation for root access privileges. They presented a forensic methodology for Android


devices through live analysis with Dropbox as a case study. The authors exploited a flaw in the
bootloader to load a custom image directly into the memory that further collects artifacts from the
internal memory of the device. Cahyani et al. [18] evaluated three mobile forensic tools: Paraben
Device Seizure,2 XRY Forensic Pack,3 and Cellebrite UFED Touch,4 in terms of their
forensic capability in analyzing three Windows smartphones (i.e., Nokia Lumia 900, 265, and 735).
They experimented with three scenarios and evaluated the comprehensiveness and applicability
of these forensic tools during acquisition. In addition, they also evaluated the effects of different
mobile settings on the final acquisition. The three scenarios covered initial power state (on/off), screen
state (unlocked/locked), and a remote-reset operation with storage media acquisition.
Call logs, contacts, locations, SMS, MMS, and many other artifacts were ac-
quired during the acquisition process. D’Orazio and Choo [32] discussed the forensic importance
of circumventing the SSL/TLS validations on iOS devices. The authors proposed a technique to by-
pass both default and built-in SSL/TLS validations through network traffic interception to/from a
device. They used a proxy server that acts as a MITM (man in the middle) and bypasses the system’s
default certificate validations through a self-signed SSL/TLS CA certificate. To bypass built-in vali-
dations, the authors proposed five methods (i.e., certificate common name (CN) field modification,
insertion of a proxy CA certificate in the app bundle of root CAs, the SSL Kill Switch tool, app binary tam-
pering at runtime, and app executable tampering). Quick and Choo [72] described how one may
automate the forensic process in seven stages for disparate device data and cross-device analysis.
The authors evaluated using real-world (big) data and reviewed the processing time for both full
imaging process and data reduction by selective imaging (DRbSI) method with commercial tools
such as NUIX,5 EnCase, RegRipper,6 IEF,7 NetAnalysis,8 and Bulk Extractor.9 They
also listed the time difference for both the processes with a much shorter time frame for DRbSI.
A major challenge in this category is the need for physical access to the user devices, or the ex-
istence of some model- and/or version-specific vulnerabilities (e.g., in Reference [30], the authors
exploited the vulnerabilities of bootloader). Such vulnerabilities or circumvention of existing se-
curity measures (e.g., circumvention of SSL/TLS validations in Reference [32] for various apps on
iOS devices) can, however, be exploited to facilitate cyber attacks.

5.2 Management Plane


The management plane is an interface/dashboard that provides consumers access to cloud services
without the CSP’s intervention. It manages every aspect of cloud infrastructure, including mon-
itoring, resource usage, and security credentials. This is clearly CSP independent, since it allows
one to centralize the management of cloud resources through an API and web browser and assists
evidence collection. Tables 5 and 6 present a comparative summary of the benefits and shortcom-
ings of management plane solutions, respectively.
There are only two major solutions in this category, which suggested management console as a
CSP independent forensic solution [5, 38]. Dykstra and Sherman [38] designed and implemented a
solution named FROST, which comprised forensic tools for the public cloud platform OpenStack to
gather evidence in the form of virtual disks, API logs, and guest firewall logs. The authors added

2 https://paraben.com/.
3 https://www.msab.com/products/xry/.
4 https://www.cellebrite.com/en/press/cellebrite-introduces-ufed-touch2-platform/.
5 https://www.nuix.com/.
6 https://github.com/keydet89/RegRipper2.8.
7 https://www.magnetforensics.com/magnet-ief/.
8 https://www.digital-detective.net/digital-forensic-software/netanalysis/.
9 https://github.com/simsong/bulk_extractor.


a new set of APIs to the NOVA project to enable consumers/investigators to perform forensic
acquisition from the cloud infrastructure independent of CSP. The authenticated logging service
segregated the logs for each user by storing them in different sub-trees of a single hash tree. The
authors used cryptographic hashes for the integrity of the collected evidence. A distinct server [5]
for evidence collection is a novel idea and solves many issues, including distributed evidence. Alex
and Kishore [5] proposed a Forensic Monitoring Plane (FMP), placed between a consumer and a
provider and a Forensic Server, both placed outside of cloud infrastructure. All the communication
between consumer and provider first has to pass through the FMP, which monitors both in-stream
and out-stream traffic using the Forensic Toolkit (FTK) analyzer and E-detection, and stores
it in a forensic server using bit-by-bit stream encryption. The FMP also monitors VMs and saves a
current state in another forensic server. Investigators could directly access the server using user
credentials and acquire the evidence. FROST [38] eliminates the CSP dependency over evidence
acquisition in the cloud as it provides the APIs as a part of a management console. As detailed in
Section 4, central logging and evidence collection are among the most discussed topics in cloud
forensics. Alex and Kishore [5] also emphasized a single place for all the evidence artifacts, which
overcomes various issues regarding identification and collection.
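The following is a minimal sketch of what consumer-driven acquisition through a management-plane API might look like. The endpoint path, artifact names, and token here are hypothetical placeholders, not FROST’s actual interface; the point is that the consumer/investigator pulls the artifact directly and records its hash at acquisition time.

```python
import hashlib
import urllib.request

# Hypothetical management-plane endpoint and token; FROST's real API differs.
BASE_URL = "https://cloud.example.org/forensics/v1"
TOKEN = "REPLACE_ME"

def acquire_artifact(instance_id: str, artifact: str, out_path: str) -> str:
    """Download an artifact (e.g., 'disk', 'api-logs', 'firewall-logs') and return its SHA-256."""
    url = f"{BASE_URL}/instances/{instance_id}/{artifact}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    sha256 = hashlib.sha256()
    with urllib.request.urlopen(req, timeout=60) as resp, open(out_path, "wb") as out:
        while chunk := resp.read(1 << 20):       # stream in 1 MiB chunks
            sha256.update(chunk)
            out.write(chunk)
    # Recording the digest at acquisition time supports later chain-of-custody verification.
    return sha256.hexdigest()
```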
The management console is a set of software APIs to provision, monitor, and manage the cloud
services over the Internet. Software APIs are becoming a de facto standard for organizations to
deliver their product and services to their consumers, since APIs allow applications to communi-
cate with each other. However, APIs may have vulnerabilities that can be exploited. For example, a
report by SmartBear identified API security as one of the biggest challenges for organizations, and
the Cloud Security Alliance (CSA) also identified insecure interfaces and APIs as one of the key
security threats in a cloud computing environment. Security assessment regarding APIs is crucial
as APIs can be exploited as an attack-vector. In Reference [38], however, there was no discussion
or evaluation of API security. FROST also requires trust in the underlying cloud infrastructure and
did not consider insider threats, including a malicious CSP. In Reference [5], both the FMP and the forensic
server are vulnerable to various attacks, including MITM and eavesdropping, due to their placement,
and can have serious security and forensic implications if compromised and controlled by mali-
cious entities. In addition, the FMP and the forensic server can also become single points of failure,
leading to service unavailability.

6 RESOURCE-DRIVEN CLOUD FORENSICS


Utility computing and resource metering are fundamental to cloud computing’s business-oriented
pay-as-you-go model that enables it to provide resources as services to the consumers. In the
cloud, virtualization is not restricted to servers; storage and network virtualization are also
considered key components of a cloud environment. However, the cloud’s gain is forensics’ loss:
virtualization gives the cloud better hardware utilization, but introduces new obstacles to the foren-
sic process. Multi-tenancy, data distribution, and redundancy are features of the cloud, yet
these features become challenges for forensics. There are three fundamental services most
CSPs deliver: compute, storage, and networking. In the cloud, “compute” does not refer to bare
physical hardware, but rather to virtual hardware in the form of virtual machines with vCPUs, while
“storage” and “networking” relate to virtual disks and virtual networks. For forensics, we focus on
these three fundamental resources and categorize them into three categories of Virtual Machine
Forensics, Storage Forensics, and Network Forensics. Figure 6 depicts the resource-driven foren-
sics, and we can dissect this diagram into three parts: the first part focuses on SDN (software-defined
networking) forensics, the second part focuses on storage forensics with client-side artifacts, and
the third part focuses on VM forensics, including VM isolation, VMI (Virtual Machine Introspection),
and VM snapshots.


Fig. 6. Resource-driven cloud forensics: Marks solution contributions, which include client and server-side
artifact analysis, VM instance isolation, VM snapshot, VMI, and potential forensic locations in SDN archi-
tecture.

6.1 Virtual Machine Forensics


We further categorized virtual machine forensics into two sub-categories: static analysis and live
analysis. Static analysis is a post-incident analysis process, performed by the forensic investigators
once they gather all the relevant pieces of evidence and preserve them in persistent storage. In the
cloud, live analysis is also a preferred forensic method as it works without shutting down the
cloud/VMs. Table 7 and Table 8 provide a brief overview of each of the contributions related to the
static and live analysis.

6.1.1 Static Analysis. Static analysis is a process of examination and analysis of data, stored and
preserved in permanent storage. The primary objective of this approach is to perform a critical
evaluation of collected data and evidence to facilitate a digital investigation that can stand in
a court of law. Static analysis may aid investigators by tracing the origin of an incident or by
constructing a timeline for the whole incident. Static analysis in itself represents a set of solutions;
however, in our taxonomy, we focus on static analysis concerning virtual machines.
VM snapshot. VM snapshot is a process of preserving the runtime state of a virtual machine at a
particular point in time. It creates a copy of the VM that records the state and the data to safeguard a
runtime condition of the VM for later restoration. The state of the VM includes its operational state,
including all the configuration parameters, and the data includes the disk, memory, and
network files. All these contents are fundamental elements of almost all digital investigation
cases, which makes the VM snapshot a viable source of forensic evidence.
To evaluate the forensic soundness of VM snapshot, Almulla et al. [8] proposed a model to exam-
ine snapshot files without altering their original content. They listed the digital forensic requirements
and checked them against a Copy-on-write (COW) snapshot method. Belorkar and Geethakumari
[16] extended the snapshot analysis from a single VM to multiple VMs connected together,
referred to as the Virtual Network Infrastructure (VNI). They used VNsnap [39], a distributed VNI
snapshot tool, to save periodic snapshots for later analysis and event regeneration. There is a
possibility of compromise by a malicious administrator or a privileged VM; in that case, the tam-
pered VM can subvert the investigation. Rani and Geethakumari [77] proposed a forensic model
for cloud forensics that leverages a conventional intrusion detection system (IDS) to identify the

Table 7. Virtual Machine Forensics: An Overview

Static analysis
[8] Proposed forensic model for the analysis of VM snapshots in a private cloud
utilizing existing digital forensic tools.
[16] Presented an approach for event regeneration through system snapshots utilizing
VNsnap [39].
[77] Proposed a model for forensic investigation in cloud using VM snapshots and IDSs
on VM and VMM level.
[86] Proposed a cloud forensic model using VM Snapshot Server for continuous
storage of snapshots of complete cloud environment.
[75] Proposed a framework SNAPS (Snapshots-based Provenance Aware System) to
overcome the challenges and obstacles regarding VM snapshot forensics including
size.
[39] A snapshot tool to capture the entire virtual network infrastructure.
[92] A hypervisor-based system, HyperShot, to capture un-compromised VM
snapshots with non-trusted cloud administrator.
Live analysis
[29] Identified six conditions for successful instance isolation as location, incoming
and outgoing blocking, collection, non-contamination, and separation.
[28] Suggested isolation of compromised instance to a controlled environment for
further analysis in forensic investigation.
[13] Discussed the issue regarding VMI in public cloud platforms and presented
VMI-as-a-Service solution, CloudVMI.
[84] Introduced a framework for memory analysis for VMs through independent
agency. The framework contains transmission, evidence storage, operation
logging, and forensic analysis component.
[68] Discussed the security issues of current hypervisors and proposed a new dynamic
forensic hypervisor, ForenVisor.

malicious VM. The authors also explained the importance of taking snapshots of suspected VM(s)
and isolating such VM(s) for further analysis. Srivastava et al. [92] proposed a hypervisor-based
system, HyperShot, that protects the integrity of the snapshot files from various attacks, including
attacks on the snapshot service, through hashing. The hypervisor signs the snapshot using a TPM
to establish trust between a consumer and a provider. Sharmila and Aparna [86] proposed
having continuous snapshots of the complete cloud environment using VM snapshot servers to
facilitate forensic investigation in the cloud. Seeking to deal with the significant size of VM snapshot
data, Raju and Geethakumari [75] proposed a framework, named SNAPS (Snapshots-based
Provenance Aware System), which has a number of modules to facilitate the monitoring, snapshot
accumulation, effective storage of VM snapshots using spatio-temporal model, and regeneration
of VM snapshots.
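A recurring requirement across these snapshot-based proposals is demonstrating that preserved snapshot files are unaltered. The sketch below is a generic illustration (not HyperShot or SNAPS): it hashes every file in a snapshot directory into a manifest at preservation time and re-checks the manifest before analysis.

```python
import hashlib
import json
import time
from pathlib import Path

def preserve_snapshot(snapshot_dir: Path, manifest_path: Path):
    """Hash every file belonging to a VM snapshot so later analysis can show it is unaltered."""
    manifest = {"created_at": time.time(), "files": {}}
    for f in sorted(snapshot_dir.rglob("*")):
        if f.is_file():
            # read_bytes loads the whole file; chunked hashing would be preferable for large disks.
            manifest["files"][str(f)] = hashlib.sha256(f.read_bytes()).hexdigest()
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_snapshot(manifest_path: Path) -> bool:
    manifest = json.loads(manifest_path.read_text())
    return all(
        Path(p).is_file() and hashlib.sha256(Path(p).read_bytes()).hexdigest() == digest
        for p, digest in manifest["files"].items()
    )
```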

6.1.2 Live Analysis. Live analysis is the process of examining the current state of a running system.
Live analysis overcomes the shortcomings of traditional static analysis that fails to gather volatile
and continuously changing information, including process list, memory contents, open ports, and
connections. Live analysis in a cloud environment is possible due to the remote accessibility feature
and does not require any shutdown of VMs. In the cloud model, service availability is an
important factor, and shutting down VMs for each occurrence of a security incident is not feasible.


Table 8. Virtual Machine Forensics: Strengths and Challenges/Weaknesses

Static analysis
[8] Strengths: Assessed forensic soundness of the COW snapshot method using the Digital Forensic Framework (DFF). Weaknesses: Focused only on the analysis stage; excluded important forensic artifacts in the form of logs.
[16] Strengths: Focused on a partially CSP-independent solution, VM snapshots through event regeneration. Weaknesses: Lacks practical evaluation of the proposed method.
[77] Strengths: VM snapshot analysis is a viable solution for cloud forensics. Weaknesses: Not applicable to IaaS due to consumer control over the VMs; lacks experimental evaluation of the model.
[86] Strengths: VM snapshot servers would contain the footprints of every activity in the cloud, including malicious ones. Weaknesses: Continuous storage of snapshots requires a vast amount of extra resources.
[75] Strengths: Proposed prior creation of provenance information to tackle the challenges associated with conventional VM snapshot forensics, including snapshot size and regeneration and analysis time. Weaknesses: CSP-dependent solution due to the usage of VMI (Virtual Machine Introspection) to detect suspicious VMs.
Live analysis
[29] Strengths: Discussion of instance isolation in different cloud service models and deployment models. Weaknesses: Lacks practical scenarios of instance isolation methods.
[28] Strengths: Proposed various instance isolation methods. Weaknesses: Lacks evaluation of methods through experiments/case studies.
[13] Strengths: Proposed VMI-as-a-service through the LibVMI library. Weaknesses: High resource requirement due to monitoring VMs per user request; lacks discussion on privacy issues.
[84] Strengths: Utilization of existing forensic tools for memory analysis with a cloud-independent agency. Weaknesses: Not enough details on framework components; based on many assumptions.
[68] Strengths: Live analysis through a dynamic hypervisor without system reboot. Weaknesses: Limited to the PaaS cloud service model with a conditional Docker environment.

In this category of live analysis, we discuss multiple alternatives to static analysis and VM
shutdown. These include VM isolation, VMI, and the forensic hypervisor. Each sub-category focuses
on distinct methods for live VM analysis, their shortcomings, and challenges.
VM isolation. VM isolation is the process of separating the resources and environment of a compromised
or malicious VM from its neighboring VMs, both as a security measure and to facilitate forensic analysis.
If a single VM is compromised and remains in a running state, it may also lead to
the compromise of other co-hosted VMs sharing the same hardware.


Detaching a virtual instance from its host environment into a safer, controlled environment for later examination is a sound measure, as it limits the security implications to the affected VM only. Delport et al. [28] discussed the need to isolate a VM and proposed distinct instance relocation methods, including server farming, address relocation, sandboxing, and fail-over.
Server farming refers to creating multiple copies of a single instance over different nodes for con-
tinuous service availability even if one instance gets compromised and isolated. Address relocation
and fail-over methods are similar to a server farm where a single instance has multiple replicas,
but in address relocation, backup instances are used with the same IP address as the real server.
Sandboxing prevents one instance from interacting with other instances and provides isolation.
Delport and Olivier [29] identified the impact of deployment and service models on instance relo-
cation. They also proposed conditions for successful VM isolation such as VM location, blocking
of incoming and outgoing traffic, data collection, non-contamination of data, and VM separation.
Contributions such as References [28, 29] also addressed the conditions and methods for instance isolation. However, these contributions do not provide working case studies of instance isolation methods in a forensic setting. Delport and Olivier [29] limited the practical scenario to the identification of the node on which the instance under consideration resides.
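A simple way to realize the traffic-blocking condition for isolation, assuming hypervisor-level (libvirt) access rather than any of the specific methods proposed in References [28, 29], is to detach the virtual NICs of the suspect VM while it keeps running. The sketch below uses placeholder names and is illustrative only.

```python
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("suspect-vm")     # placeholder domain name

# Locate the virtual NIC(s) in the live domain definition and detach them,
# cutting incoming/outgoing traffic while the VM itself keeps running.
tree = ET.fromstring(dom.XMLDesc(0))
for iface in tree.findall("./devices/interface"):
    iface_xml = ET.tostring(iface).decode()
    dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    print("Detached interface of type:", iface.get("type"))

conn.close()
```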
Virtual Machine Introspection (VMI). Garfinkel et al. introduced VMI [34] as an intrusion detec-
tion technique using a hypervisor. VMI is an introspection technique to monitor the run time state
of a virtual machine at the hypervisor level outside the monitored VM. VMI gained ample atten-
tion in the field of computer security, including forensics, and became an important method for
virtual machine analysis. Numerous studies discussed the application of VMI in digital forensics,
including the semantic gap problem and its solutions [64, 66].
Some studies that considered VMI as a forensic solution proposed the notion of VMI-as-a-service
[13]. Baek et al. [13] proposed CloudVMI that allows consumers to introspect their own virtual
machines by monitoring VMs. They also addressed the issue pertaining to the inaccessibility of
hypervisor in public cloud platforms using LibVMI10 library. The consumers could issue a mon-
itoring request through a server module that handles the invocations of monitoring VMs using
LibVMI. Rui et al. [84] introduced a framework for memory forensics utilizing out-of-the-box VM introspection. This framework has four components, including transmission, data storage, forensic
analysis, and operation-logging. In Reference [84], independent agencies can acquire the runtime
state information of particular VM such as process list and socket connections by creating a secure
connection through transmission component, and save the information in a storage component
using NoSQL database.
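The storage component of such a framework can be approximated with an off-the-shelf NoSQL database. The sketch below is not the implementation of Reference [84]; it simply stores a timestamped runtime-state record in MongoDB via pymongo, where psutil on the analysis host stands in for the out-of-VM introspection step, and the host name and VM identifier are placeholders.

```python
import datetime
import psutil                      # stand-in source of process/socket data
from pymongo import MongoClient    # NoSQL storage component

client = MongoClient("mongodb://forensic-store.example.org:27017")  # placeholder host
collection = client["vm_forensics"]["runtime_state"]

# In an out-of-VM setup, the process list and socket data would come from
# introspection (e.g., LibVMI); psutil here merely illustrates the record format.
record = {
    "vm_id": "suspect-vm",                         # placeholder identifier
    "captured_at": datetime.datetime.utcnow(),
    "processes": [p.info for p in psutil.process_iter(["pid", "name", "username"])],
    "connections": [str(c) for c in psutil.net_connections(kind="inet")],
}
collection.insert_one(record)      # preserve the volatile state with a timestamp
```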
Forensic-as-a-service through VMI is a promising forensic aid, but access to the hypervisor
(CSP-owned) is a practical concern. Baek et al. [13] addressed the access concern by restricting the
consumer’s access to memory page mapping functionality and required consumers to bring their
own monitoring software. The proposed solution requires a lot of resources just for VM monitoring, considering the ever-increasing number of users and security incidents in public cloud platforms. The
main focus of Reference [84] was to shift the forensic analysis of VM from the cloud platform
to an external investigation agency. They also presented a concept of connection establishment
between a cloud provider and an independent agency. However, the authors in Reference [84] did
not provide details about the CSP side component and evidence acquisition.
Forensic hypervisor. A hypervisor is the middleware component that allows abstraction to
host multiple virtual machines isolated from one another while sharing physical resources. The
security potential of hypervisor is already a part of numerous contributions and commercial

10 https://github.com/libvmi/libvmi.


implementations. Bitdefender Hypervisor Introspection (HVI),11 which monitors VM memory to detect real-time exploits, is one such implementation. The hypervisor, as a forensic solution, also has ample potential to aid forensic investigation.
A live analysis is a preferred way of forensics in cloud computing, and hypervisors may also
help here. However, conventional hypervisors have a huge code base that makes them vulnerable
to various exploits and can have severe security and forensic implications. The reliability of the
collected evidence is also a factor to consider as falsified evidence may negatively impact the dig-
ital investigation. Qi et al. [68] proposed a new hypervisor, referred as ForenVisor, to address
reliability and security concerns. ForenVisor had a small Trusted Computing Base (TCB) and was
loaded as a dynamic driver, i.e., it did not require VMs to reboot and preserved the volatile infor-
mation for live forensic analysis. To ensure reliability, ForenVisor collected evidence directly from
the hardware and used the Filesafe [99] module to ensure the integrity of the collected evidence.
The forensic hypervisor represents another step toward enhancing the forensic capabilities of the cloud environment. The approach of Qi et al. [68] is more applicable to a non-virtualized environment and may have limited adoption in an IaaS service model where a hypervisor is already available. It may be applicable to a PaaS model based on Docker technology, rather than hypervisor technology.

6.2 Storage Forensics


Storage-as-a-service (StaaS) is a prominent cloud service model for cloud consumers. In Reference
[78], Forrester surveyed and showed more than 70% cost reduction using cloud storage when com-
pared to on-premise storage. The cost and ubiquitous access of public cloud storage are appealing; however, malicious entities can also exploit cloud storage for many criminal activities, such as illegal content hosting. Personal and business data in public cloud storage also attract attackers, as we are witnessing more and more cases of data breaches and identity theft. Forensics of the crimes associated with cloud storage services presents new challenges to investigators due to the manner in which these services work. We further classify storage forensics into two categories: first, artifact analysis, which covers the collection, preservation, and analysis of evidential artifacts; and second, data provenance methods, which emphasize the importance of provenance in cloud forensics.
6.2.1 Artifacts Analysis. Artifacts are the snippets of data left behind by the interactions between storage users (such as client applications) and storage services. These artifacts may
comprise log files, registry entries, meta-data information, and many others, residing either on the
client-side or server-side. Storage-as-a-service operates on a client-server model where consumers
connect to the services facilitated on the cloud using different client devices, including desktop,
laptop, and mobile devices. It is essential to identify and analyze the numerous digital artifacts
residing on different client devices for proper investigation of cloud storage services.
We provide an overview of a range of artifact analysis methods in the storage space in Table 9. Table 10 shows the strengths and weaknesses of different solutions providing artifact analysis. Chung et al. [22]
contended the possibility of a criminal investigation involving cloud storage despite inaccessibility
to the cloud servers. They provided potential forensic traces of four storage services, Amazon S3,12
DropBox,8 Google Docs,13 and Evernote,14 that existed in the client-side devices and introduced
a procedure to investigate these devices. Browser log files and client application artifacts on both PC and mobile devices were the preferential elements of their forensic investigation. Hale

11 https://www.bitdefender.com/business/enterprise-products/hypervisor-introspection.html.
12 https://aws.amazon.com/s3/.
13 https://docs.google.com/.
14 https://evernote.com/.


Table 9. Storage Forensics: An Overview

Artifacts analysis
[22] Identified potential digital artifacts on distinct accessible devices including PC-based
systems (Windows and Mac) and smartphones (iPhone and Android smartphone).
Detailed a case study on information leakage using identified artifacts.
[37] Identified forensic remnants residing on client PC utilizing Amazon Cloud Drive during
upload, download, and delete operations. Introduced Perl-based scripts to automate the
artifact collection.
[69] Determined client data remnants on client’s Windows 7 machine and Apple iPhone 3G
amid various interactions between client devices and Dropbox cloud storage.
[81] Proposed API-based forensic acquisition of cloud-side artifacts using Google Docs as a
case study.
[79] Introduced cloud storage forensic tools suite including kumodd, kumodocs, and kumofs.
[33] Presented remote data acquisition tool, Cloud Data Imager (CDI) for cloud storage
services with two main features: directory browsing and logical copy of selected folder to
local repository.
[70] Discussed whether data and meta-data information change amid upload, storage, and
download operations.
[25] Focused on artifacts identification on Android and iOS platform for cloud storage
application, MEGA.
[1] Proposed an incident handling model with a forensic-by-design concept. The model consists
of six iterative phases with integrated forensic principles and practices such as readiness,
collection and analysis, and presentation.
[94] Identified potential forensic artifacts on both mobile and personal computers including
Windows, Mac OS, Ubuntu, iOS, and Android. Presented Symform, a cooperative cloud
storage service as a case study.
[59] Marked the locations of distinct forensic artifacts on Windows machines and iOS devices
with SpiderOak, JustCloud, and pCloud storage services.
[95] Determined the data remnants from the operational use of BitTorrent Sync on OS
platforms such as Windows, MacOS, Ubuntu, iOS, and Android. Also proposed
investigation methodology using five-step digital forensic framework [52].
[93] Presented Syncany as case study to determine the data remnants for big data storage
forensics to reduce investigation time and resources.
Data provenance
[3] Discussed the challenges for the provenance due to the architectural nature of the cloud.
[61] Made arguments about utility of data provenance to the storage providers.
[10] Listed the properties of a secure provenance scheme. Introduced a scheme that ensured
confidentiality of data provenance through encrypted search.
[47] Proposed a scheme for secure provenance based on group signature and attribute-based
signature techniques.
[51] Presented a secure provenance scheme based on a bi-linear pairing technique.
[60] Discussed the properties and application of data provenance in the cloud. Designed three
protocols for provenance preservation by using same properties as metrics.
[105] Surveyed provenance mechanisms in the cloud and proposed DataPROVE approach for
integrity and security of consumer’s data.


Table 10. Storage Forensics: Strengths and Challenges/Weaknesses

Artifacts analysis
Contributions | Strengths | Challenges/Weaknesses
[22] | Identification of potential forensic artifacts on distinct client devices. | Focused only on disk artifacts; excludes memory content. Lacks server-side artifact identification. Requires physical acquisition of the client machine.
[37] | Two Perl-based scripts to ease and automate the parsing process for digital investigators. | Lacks server-side artifact analysis. Requires physical acquisition of the client machine.
[69] | Potential locations of artifacts for the DropBox storage service on the client machine. Considered anti-forensics techniques too. | Focused only on client-side artifacts. Physical acquisition required.
[81] | Focused on cloud-native artifacts through the service provider's API and introduced the new tool kumodocs. | Assumed the possession of user credentials by investigators.
[70] | Assessed data and meta-data change through comparison during access, storage, and download operations. Identified timestamp variation. | Conclusions are application and version specific.
[33] | A remote data acquisition tool, Cloud Data Imager (CDI), that leverages RESTful web APIs. No physical acquisition required. | Requires a reliable network connection. Process interruption may result in data content change and requires a restart.
[79] | Cloud storage forensics tools: kumodd, kumodocs, and kumofs. Server-side artifact collection and analysis. | Lacks security assessment of public and private service APIs.
[1] | Integration of forensic principles with existing incident handling practices would ease the investigative process in terms of time, cost, and resources. | Cloud users require CSP support to implement incident handling strategies, regarding SaaS services in particular.
[25] | Identified and documented artifacts related to various operations, including login, uploading, and downloading, for the MEGA storage service. | Requires physical acquisition to obtain images of internal memory for particular devices. Focused only on smartphone platforms.
[94] | Focused on a new form of cloud storage solution, the cooperative storage service. | Live network traffic capturing is difficult to attain. Artifacts can be OS and application version specific.
[59] | Detailed documentation of data remnants for SpiderOak, JustCloud, and pCloud. | Capturing live network traffic is difficult. Results can be version specific.
[95] | Focused on a P2P cloud storage service. Identified artifacts on a range of OSes, including Windows, Mac OS, Ubuntu, Android, and iOS. | Requires physical acquisition of devices.
[93] | Focused on green forensics owing to prior identification and documentation of forensic artifacts. | Dependency on user (suspect) cooperation for effective investigation. Requires physical acquisition of devices. In addition, requires installation of a network capturing tool on the host machine to acquire client-server communication.

Data provenance
[51] | Unforgeability and conditional privacy preservation of data in the cloud through secure provenance. | Lacks multi-layer access policy.
[47] | Low computational overhead on the user side. | Requires trust establishment in a Third Party Auditor (TPA) for dispute resolution.


[37] conducted a test on Amazon Cloud Drive15 via an online interface and a client application
to determine the artifacts generated on the local machine during multiple upload, download, and
delete operations. The authors conducted the operation cycle for a period spanning two weeks.
To determine the file transfers to or from Cloud Drive, the authors also suggested three options
for forensic examination. These methods included access with the user's credentials, extraction and parsing of browser cache files, and extraction and parsing of the ADriveNativeClientService.log file from the local Windows machine. The authors of Reference [37] emphasized the parsing of browser cache files even if investigators were able to acquire the user credentials.
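Client-side artifact collection of this kind can often be scripted. The following sketch is an illustration rather than any of the surveyed tools: it extracts cloud-storage-related entries from a copy of a Chrome History database, where the profile path and keyword list are assumptions.

```python
import shutil
import sqlite3
from datetime import datetime, timedelta

# Chrome stores browsing history in an SQLite database named "History";
# the path below is a typical Windows location and is only an assumption.
HISTORY_DB = r"C:\Users\suspect\AppData\Local\Google\Chrome\User Data\Default\History"
CLOUD_KEYWORDS = ("dropbox.com", "drive.google.com", "amazon", "evernote.com")

shutil.copy2(HISTORY_DB, "History_copy")   # always work on a copy, never the original
conn = sqlite3.connect("History_copy")

for url, title, visit_time in conn.execute(
        "SELECT url, title, last_visit_time FROM urls"):
    if any(keyword in url for keyword in CLOUD_KEYWORDS):
        # Chrome timestamps count microseconds since 1601-01-01 (WebKit epoch).
        ts = datetime(1601, 1, 1) + timedelta(microseconds=visit_time)
        print(ts.isoformat(), url, title)

conn.close()
```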
Quick and Choo [69] presented a DropBox case study for the identification, preservation, and
analysis of client-side artifacts. The authors created 28 virtual machines to examine DropBox
for numerous scenarios with different browsers, namely Internet Explorer,16 Mozilla Firefox,17
Apple Safari,18 and Google Chrome.19 They identified and generated a forensic copy of VMDK
and VMEM files for the hard disk and memory, and PCAP files for the network captures using
FTK imager.20 The authors also examined these copies using tools, including Guidance EnCase,16
AccessData FTK,17 and Magnet Software Internet Evidence Finder.21
Daryabar et al. [25] focused on Android and iOS platforms for cloud storage service, MEGA.22
They identified data artifacts related to various operations such as login, uploading, downloading,
deletion, and sharing. Mohtasebi et al. [59] identified the memory, disk, and network artifacts
on user devices that include Windows 8.1 and iPhone 5S. The authors implemented Windows-
based experiments through virtual machines using a number of configurations involving different
desktop applications and distinct browsers such as Firefox, Internet Explorer, and Google Chrome.
They showed various files collected and analyzed (.vmem file and .vmdk files) for memory and disk
artifacts for each VM configuration. In addition, they used Wireshark for network traffic analysis.
The authors also deduced that no change in content or meta-data information occurs during the download operation, except for the timestamp and ADS (alternate data stream).
Teing et al. [94] focused on a cooperative storage service, Symform. A cooperative storage service is a peer-to-peer model in which each user contributes a portion of their unused local storage to form a large aggregated storage pool. The authors marked artifacts of forensic value with
sources on both mobile and personal computers with Windows, iOS, Ubuntu, and Android. Teing
et al. [95] identified data remnants related to memory and storage from the usage of BitTorrent
Sync’s client and web applications. These artifacts consisted various .dat files including settings,
sync, and history with files in storage, data, and “.sync” directory. They also focused on remnants
that were part of iOS and Android devices. In addition, the authors provided a research method-
ology for forensic practitioners to examine BitTorrent Sync applications. Prior-identification and
documentation of potential forensic artifacts reduce the investigation time and resources when
it comes to real-world investigation. Teing et al. [93] focused on cloud-enabled big data storage
service, Syncany to extract relevant forensic artifacts on both client device and server. The client-
side artifacts included metadata related to sync, file management, authentication, and encryption,
cloud transaction logs, network captures, and memory dumps. The authors marked administra-
tive and file management metadata, cloud logging and authentication data, and data repositories

15 https://www.amazon.com/drive.
16 https://microsoft.com/ie.
17 https://www.mozilla.org/firefox/.
18 https://apple.com/safari.
19 https://www.google.com/chrome/.
20 https://marketing.accessdata.com/ftkimager4.2.0.
21 https://www.magnetforensics.com/magnet-ief/.
22 https://mega.nz/.


as server artifacts. Ab Rahman et al. [1] proposed an integrated cloud incident handling model,
which is a “forensic-by-design” model. This model allows the integration of forensic capabilities
with existing incident handling strategies. The authors evaluated their model with multiple cloud
applications such as Google Drive, Dropbox, and OneDrive to determine the cloud user security
responsibility and validity of the model. They also listed the forensic meta-data artifacts for each
application.
Most of these studies focused only on the client side of the storage service and tried to identify the potential evidence on client-side devices. However, the authors of References [79–81] questioned the traditional approach of client-side storage analysis and shifted the focus to cloud-side analysis. They raised important issues such as partial replication, artifact revisions, and cloud-native artifacts. They also emphasized API-based acquisition to address the problems of client-side artifact analysis. Roussev et al. [80] developed the kumodd tool as a proof-of-concept with three logical
layers including dispatcher, drivers, and user interface. The authors further used Google Drive,9
Microsoft OneDrive,11 Dropbox,10 and Box23 cloud services for the validation. In Reference [81],
the authors highlighted the shortcoming of their earlier work [80], as the kumodd tool could not acquire cloud artifacts in their original form due to the lack of API support. They introduced a Python-based tool, kumodocs, to examine Google Docs artifacts, inspired by a browser extension, DraftBack,24 that can replay the entire history of documents. Roussev et al. [79] demonstrated the incompati-
bility of traditional tools with StaaS applications and proposed three new forensic tools, including
kumodd, kumodocs, and kumofs.
Federici [33] gave a list of requirements for cloud storage forensic applications. These require-
ments include logical acquisitions, performed functions, low-level interface, read-only access, of-
ficially supported interfaces, on-demand folder browsing, native logging, and folder imaging. The author developed the Cloud Data Imager (CDI) tool based on the CDI library. Quick and Choo [70] analyzed whether data collection from cloud storage could lead to a change in data and its meta-data information. The authors compared the original content with the downloaded content using hashing methods (MD5 and SHA-1) and found no change in data or meta-data, with the exception that timestamp information changes with the access and download method.
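The hash-based comparison used in such studies is straightforward to reproduce. The sketch below computes MD5 and SHA-1 digests of an original file and a downloaded copy; the file paths are placeholders.

```python
import hashlib

def file_digests(path, chunk_size=1 << 20):
    """Compute MD5 and SHA-1 of a file in a streaming fashion."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

# Placeholder paths: the file as uploaded vs. the copy retrieved from the cloud.
original = file_digests("evidence/original.docx")
downloaded = file_digests("evidence/downloaded.docx")
print("Content unchanged:", original == downloaded)
```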
Different authors presented case studies of popular cloud storage services, including DropBox, Amazon Cloud Drive, Amazon S3, and Google Docs. They identified potential evidence artifacts on the client side and the server side (cloud-native). Several papers utilized the service provider's
APIs to acquire forensic artifacts [33, 70, 80, 81] and aimed to overcome the shortcomings of client-
side analysis.
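As an illustration of what API-based, cloud-side acquisition looks like in practice (this is not the kumodd implementation itself), the following sketch lists files and server-held metadata through the Google Drive v3 API using google-api-python-client; the token file, scopes, and the assumption that authorized credentials are available are placeholders.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumes an authorized OAuth2 token obtained beforehand (e.g., with the account
# owner's consent or appropriate legal authority); "token.json" is a placeholder.
creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/drive.readonly"])
service = build("drive", "v3", credentials=creds)

# Cloud-native metadata (checksums, timestamps, owners) never fully reaches the client.
response = service.files().list(
    pageSize=100,
    fields="files(id, name, mimeType, md5Checksum, modifiedTime, owners)").execute()

for item in response.get("files", []):
    print(item.get("modifiedTime"), item.get("name"), item.get("md5Checksum"))

# Revision history is itself a cloud-native artifact.
if response.get("files"):
    file_id = response["files"][0]["id"]
    revisions = service.revisions().list(fileId=file_id).execute()
    print("Revisions of first file:", len(revisions.get("revisions", [])))
```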
The above-mentioned contributions touch all three forensic aspects of cloud storage applica-
tions: client devices, network, and cloud servers. However, many of the contributions focused
only on client-side data artifacts [22, 37, 69] without server-side forensic artifact identification
and analysis. In addition, most of the contributions focus only on the identification stage of the overall forensic process. The authors in References [79–81] discussed server-side artifacts through the provider's API; however, these contributions also required prior knowledge of user credentials to conduct an investigation.
6.2.2 Data Provenance. Data provenance refers to the documentation of all the activities and
entities that influence the data over its lifetime, i.e., it is a historical record of data and its origin.
Provenance is a crucial aspect as it provides a view to the sequence of developments regarding a

23 https://www.box.com/.
24 http://draftback.com/.


particular data object that explains its current status. It is meta-data that describes the data object, including its identity, ownership, and dependencies. It is also
a way to differentiate between two data objects. Data provenance has a long history as a topic
of academic research. Various contributions discussed the significance of the data provenance in
general, and also in the context of forensics. There are many contributions in the past that focused
on secure data provenance schemes. However, we do not include them in our study as we limit
our survey to the cloud data provenance only.
Data provenance in the context of the cloud is part of several contributions, such as Reference [3], which described the challenges for provenance in the cloud with a focus on virtualization, the distributed nature, and the scale of a cloud environment. The authors emphasized the collection of linkable data from all the layers, including physical, virtual, and application. Muniswamy-Reddy et al. [60] discussed
the importance of provenance for cloud storage and identified multiple properties of a data prove-
nance system. These properties deal with coupling of data, ordering of objects, persistent states,
and query efficiency. They proposed three new protocols to maintain provenance in the cloud and
pointed out further research challenges, including issues related to system architecture, security,
provenance storage, and economics. Muniswamy-Reddy and Seltzer [61] highlighted the importance of data provenance to storage providers. The authors also highlighted application anomaly detection, object clustering, and content-based search as utilities of provenance in the cloud. In addition, they presented a set of requirements, including cooperation between storage and compute facilities and allowing users to record provenance information related to the consistency, persistence, and security of data objects. Asghar et al. [10] identified a set of challenges based on the architec-
tural complexity of the cloud. The authors discussed the importance of securing the provenance
information, especially if the stored data is sensitive, for example, health records of patients.
In the context of forensics, Lu et al. [51] proposed a provenance scheme utilizing bi-linear pairings that consists of five algorithms. These algorithms deal with setup, generation of keys, tracking,
and access authorization for the data forensics in the cloud. The authors considered cloud architec-
ture with actors such as trusted system manager (SM), service provider, and the number of users.
Li et al. [47] also introduced a provenance scheme based on group signature and attribute-based
signature. They extended the system model of Reference [51] by introducing multiple authorities
and a third party auditor (TPA). The purpose of TPA was to address the disputes that arise in stored
documents exchanged between a consumer and a provider.
Provenance is a vital source of information, and the security of provenance data is essential to the success of data forensics [47, 51]. Lu et al. [51] identified two basic requirements for secure provenance: unforgeability and conditional privacy preservation. However, Li et al. [47] identified five essential properties: confidentiality, unforgeability, anonymity, traceability, and fine-grained access control. Li et al. [47] focused on shifting the computational overhead of cryptographic operations to the server side to utilize the abundant computational resources of a cloud environment.
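The core idea behind tamper-evident provenance can be illustrated with a much simpler construction than the pairing- and signature-based schemes of References [47, 51]: a hash-chained, append-only record. The sketch below only illustrates that idea, and all field names are hypothetical.

```python
import hashlib
import json
import time

def provenance_entry(prev_hash, actor, action, object_id, metadata=None):
    """Append-only provenance record whose integrity is protected by hash chaining."""
    record = {
        "prev": prev_hash,            # links the record to its predecessor
        "actor": actor,               # who touched the data object
        "action": action,             # e.g., create / read / update / share
        "object": object_id,
        "metadata": metadata or {},
        "timestamp": time.time(),
    }
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["hash"] = digest
    return record

genesis = provenance_entry("0" * 64, "alice", "create", "doc-42")
update = provenance_entry(genesis["hash"], "bob", "update", "doc-42",
                          {"bytes_changed": 1024})

# Any later tampering with "genesis" breaks the chain and becomes detectable here.
recomputed = hashlib.sha256(json.dumps(
    {k: v for k, v in genesis.items() if k != "hash"}, sort_keys=True
).encode()).hexdigest()
print("Chain intact:", recomputed == genesis["hash"] == update["prev"])
```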

6.3 Network Forensics


Network data between different entities is a crucial element for both security and forensics as it
provides a live perspective of the network data stream. Network forensics focuses on the monitor-
ing and analysis of the network traffic for detection, prevention, and a possible reconstruction of
the security events. Network forensics is a broad field, as researchers have introduced various techniques and tools over time to investigate the expanding number of criminal activities, as summarized in Reference [46]. Cloud computing architecture introduces new challenges to network forensics due to the virtual network, multi-tenancy, and remote location, as discussed in References [35, 91]. In the cloud, we identified two places for network forensic examination: in-cloud and outside-cloud network forensics. By in-cloud forensics, we refer to techniques involving network traffic inside


Table 11. Network Forensics: An Overview

In-cloud forensics : SDN-based


[44] Identified the potential locations for evidence collection in SDN architecture
including application, control, and infrastructure layer and northbound-
southbound interfaces.
[43] Introduced the forensic management layer (FML) in SDN architecture to find the
root cause of cyber-attacks.
[15] Discussed forensics in the context of data center networks and presented a
solution for forensic-enabled SDN architecture through Provenance Verification
Points (PVP) middleware.
[106] Discussed design goals and technical requirements of SDN forensics. Proposed
SDN forensics framework with six components: data acquisition, extraction,
fusion, anomaly detection, security alarm, and evidence conservation.
Outside-cloud forensics
[91] Identified cloud usage information through analysis of raw network data.

Table 12. Network Forensics: Strengths and Challenges/Weaknesses

In-cloud forensics: SDN-based
Contributions | Strengths | Challenges/Weaknesses
[44] | Marked potential evidential locations in SDN architecture. | Focused only on the identification stage of the digital investigative process.
[43] | Root cause identification of malicious attacks through traffic monitoring. | The solution required modification of the SDN architecture.
[15] | Forensic system to detect covert communication by capturing a copy of all the traffic. | The system required modification of the SDN architecture. No security considerations for PVP.
[106] | Identified various forensic artifacts at each SDN layer. | Lacks implementation and framework evaluation.

Outside-cloud forensics
[91] | Identified cloud usage information from raw network data for cloud services including DropBox (SaaS), Google App Engine (PaaS), and Amazon EC2 (IaaS). | Required capture of all the network data first; not suitable for live network forensics.

cloud infrastructure. It may also consist of a private network of the particular user or underlying
network infrastructure of the cloud. Outside-cloud forensics refers to techniques involving analy-
sis of communication between a provider and a consumer over the Internet. The former category
constitutes a number of contributions whereas the latter has a single contribution, as shown in
Table 11 and Table 12.
6.3.1 In-cloud Forensics. In-cloud forensics covers forensic investigation of various compo-
nents inside the cloud provider infrastructure. It covers all the communication inside the cloud among various hardware and software (virtual) components. The scope of in-cloud forensics varies from
a single data center network to the network of multiple data centers, dispersed geographically
and connected through high-speed WANs utilizing virtual private network (VPN). Cloud does not
restrict resource scaling and on-demand availability just to compute resources. Virtualization plays a significant role in enabling cloud providers to treat compute, storage, and network as a pool of resources and provision them dynamically. Major cloud service providers such as Amazon and Google already have a networking infrastructure based on SDN to create public, private, and hybrid networks. Amazon's Virtual Private Cloud (VPC), which provides network isolation, is an example of software working on top of the hardware stack. SDN is an emerging technology and serves as a solution to combat various security attacks, including DDoS and TCP SYN flooding attacks [102]. In this category, we focus on forensics in SDN, including challenges and potential locations for evidence.
SDN-based forensics. Software-defined networking [101] came as an innovative solution to pluck
the intelligence out of network devices and centralize that intelligence for dynamic, automated,
and flexible networking infrastructure. SDN architecture has three functionality-driven layers,
including infrastructure, control, and application. Discussion of SDN in the context of the cloud
is already a part of various contributions [11, 12, 90]. In Reference [90], Son and Buyya categorize the different ways cloud computing encompasses SDN in its architecture and present them as a taxonomy. Yan et al. [102] proposed an SDN-based solution to defeat DDoS attacks with software-based traffic analysis, centralized control, and a global view of the network.
Due to its centralized control, SDN provides a means to configure and manage all the network devices from a single place, but it can also become a single point of breach and failure. Forensics in SDN is still in its infancy and currently has a limited set of contributions. Khan et al. [43] proposed
a forensics management layer (FML) to investigate attacks in infrastructure and control layer. The
authors further divided the FML into two parts: centralized-FML and distributed-FML for network
traffic monitoring and investigation through various interfaces (east, west, north, and south) in
SDN. Khan et al. [44] reviewed the differences between traditional network (TN) forensics and SDN forensics with parameters including network device, forensic approach, and traceback. They reviewed
the challenges in SDN forensics and highlighted different components in SDN architecture that are
vulnerable to attacks. The authors also provided potential locations for evidence collection and a
generalized model for the forensics in SDN.
Zhang et al. [106] reviewed the security threats to SDN and divided current solutions into three categories: controller security, application security, and DoS/DDoS attack defense. They stated a set of goals and requirements for forensics in SDN and also presented a forensic frame-
work with six components, including data acquisition, extraction, fusion, anomaly detection, se-
curity alarm, and evidence conservation. Bates et al. [15] designed a SDN-based forensic system
for data center networks by considering SDN itself as a point of observation. The authors intro-
duced “Provenance Verification Points (PVP)” as middleboxes in SDN to collect forensic informa-
tion. They discussed the placement of the PVPs in SDN architecture and the ways network traffic
would be routed through these PVPs.
A major shortcoming of past solutions such as References [15, 44, 106] is their theoretical, conceptual design without any practical implementation. Zhang et al. [106] proposed a solution based on assumptions about the security of the SDN architecture by placing trust in both the network devices and the controller. The security of SDN is itself a question that remains to be addressed.
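As an illustration of the kind of controller-level artifact such frameworks would collect, the sketch below pulls per-switch flow statistics over REST, assuming a Ryu controller running its ofctl_rest application; the controller address and datapath ID are placeholders, and none of the surveyed frameworks is implied to work this way.

```python
import json
import requests

CONTROLLER = "http://sdn-controller.example.org:8080"   # placeholder address
DPID = 1                                                # placeholder datapath (switch) ID

# Ryu's ofctl_rest application exposes per-switch flow statistics over REST;
# periodically archiving these gives a coarse, controller-level traffic record.
resp = requests.get(f"{CONTROLLER}/stats/flow/{DPID}", timeout=5)
flows = resp.json().get(str(DPID), [])

for flow in flows:
    print(flow.get("match"), "packets:", flow.get("packet_count"),
          "bytes:", flow.get("byte_count"))

# Preserve the raw snapshot so it can later be entered into the evidence chain.
with open(f"flow_stats_dpid{DPID}.json", "w") as f:
    json.dump(flows, f, indent=2)
```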
6.3.2 Outside-cloud Forensics. Outside-cloud forensics consists of the examination of communication between a consumer (client) and a provider. It mostly covers conventional network forensics
and is discussed in numerous past contributions over the years. Client-side network forensics is
in itself a broad area and not part of our contributions in this survey.
Even a single user generates a huge amount of network traffic that interacts with various services over the Internet. It is difficult to identify the communication that refers to a user's interaction with
their cloud services. In Reference [91], Spiekermann et al. examined raw network data to identify
the information related to the different cloud services and evaluated the applicability of current
network tools including Wireshark,25 Wildpackets Omnipeek Enterprise,26 and NetworkMiner.27
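A small example of this kind of outside-cloud analysis, not tied to the tooling evaluated in Reference [91], is to scan a packet capture for DNS queries that point to well-known cloud services; the capture file and domain list below are placeholders.

```python
from scapy.all import DNSQR, rdpcap

# Domains of well-known cloud services; the list is illustrative, not exhaustive.
CLOUD_DOMAINS = (b"dropbox.com", b"drive.google.com", b"appspot.com",
                 b"amazonaws.com", b"onedrive.live.com")

packets = rdpcap("capture.pcap")        # placeholder capture file

for pkt in packets:
    if pkt.haslayer(DNSQR):             # DNS queries reveal which services were contacted
        qname = pkt[DNSQR].qname        # bytes, e.g., b"www.dropbox.com."
        if any(domain in qname for domain in CLOUD_DOMAINS):
            print(pkt.time, qname.decode(errors="replace"))
```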

7 FORENSIC TOOLS AND METRICS


7.1 Forensic Tools
Development of digital forensic tools is an evolving process due to the dynamic nature of emerging
technologies and an increasing number of devices utilizing these technologies. NIST prepared a
catalog of various digital forensics tools and their functionalities [62]. However, the category for
cloud services includes only a set of six tools in Reference [62]. In this section, we classify the forensic tools into the following two major categories: conventional digital forensics tools and cloud-
specific tools. We present a list of these tools in Table 13.
7.1.1 Conventional Digital Forensics Tools. In this category, we discuss only those tradi-
tional digital forensics tools that are part of the cloud forensics’ literature. Digital Forensic
Framework (DFF) is an open source tool for forensic investigation that covers the identification, collection, and preservation of evidence artifacts. DFF extends its functionality to the vir-
tual environment by supporting VM disk reconstruction and is VMware (VMDK) compatible [8].
Wireshark, Wildpackets Omnipeek Enterprise, and NetworkMiner are network packet captur-
ing, protocol analysis, traffic filtering, and visualization tools [91]. EnCase covers the complete investigation life-cycle. File and folder recovery, file signature analytics, hash analysis, email and Internet artifact discovery, and data and meta-data indexing are the main features of the EnCase forensic solutions. AccessData Forensic ToolKit (FTK) is an entire suite of investigative tools for conducting digital investigations, collected in a single place. It contains email analysis, file decryption, data carving, data visualization, a web viewer, Cerberus (malware triage), and OCR. Many forensic studies used these tools in their forensic solutions for the
cloud environment. Dykstra and Sherman [31] evaluated the compatibility of EnCase and FTK
for the cloud environment for remote acquisition. Alex and Kishore [5] also used FTK to capture,
store, and analyze forensic images that included network traffic and current state of VM. X-Ways
Forensic provides an integrated forensic environment with Windows compatibility and is used by contributions such as Reference [69].
7.1.2 Cloud-specific Tools. In this category, we discuss forensic tools that were designed specif-
ically for the cloud environment. The cloud introduces new challenges to forensic examiners when it comes to conducting an investigation involving the cloud due to its architectural and remote nature.
Compatibility and reliability are the main issues with conventional DF tools when it comes to the
cloud.
FROST is a collection of API-based forensic tools for the OpenStack cloud platform [38]. It con-
sists of three API-based tools for the trustworthy forensic acquisition of virtual disks, API logs,
and guest firewall logs and supports IaaS service model. Trust in the CSP is the main limitation of
the FROST tool as it requires trust in underlying levels below the guest operating system such as
CSP-owned hypervisor, hardware, and networking infrastructure. Kumodd is a service provider’s
API-based forensic tool for the analysis of the cloud-side artifacts to overcome the shortcomings
of client-side analysis that includes partial data replication and revision retrieval regarding cloud
storage [79]. Kumodocs is an artifact analysis tool for Google Docs and is based on the DraftBack browser extension. Roussev et al. [79] also developed the kumofs tool for the meta-data information

25 https://www.wireshark.org/.
26 https://www.savvius.com/product/omnipeek/.
27 https://www.netresec.com/?page=NetworkMiner.


Table 13. Forensics Tools

Conventional digital forensics tools
Tool | Functions | Service model
DFF [8] | Forensic tool to identify, collect, and preserve evidence with a chain of custody. | IaaS
EnCase Forensic [31] | Forensic solution in the form of a collection of software to collect, preserve, analyze, and report evidence in a court-validated format. | IaaS
AccessData Forensic ToolKit (FTK) [5] [31] | An aggregation of forensic tools for email analysis, data carving, and others, including FTK Imager for disk imaging. | IaaS
Wireshark [91] | Network protocol analyzer. | IaaS
Wildpackets Omnipeek [91] | Enterprise solution for network packet and protocol analysis. | All
NetworkMiner [91] | Open source Network Forensics Analysis Tool (NFAT). | All
X-Ways Forensic [69] | Integrated forensic environment with a variety of features, including disk imaging and cloning, file and directory cataloging, and access to file system structures with deleted partitions. | SaaS

Cloud-specific tools
FROST [38] | Forensic toolkit for the OpenStack cloud platform to gather forensic evidence without the CSP's intervention. | IaaS
Kumodd [81] [79] | Cloud storage forensic tool for cloud drive acquisition, including snapshots of cloud-native artifacts in formats such as PDF. | SaaS
Kumodocs [79] | Analysis tool for Google Docs based on DraftBack, a browser extension that replays the complete history of documents residing in the Document folder. | SaaS
Kumofs [79] | Forensic tool for acquisition/analysis of file meta-data residing in the cloud. | SaaS
VNsnap [8] | Snapshot tool for virtual network infrastructures in the cloud. | IaaS
Cloud Data Imager [33] | Novel tool for the remote acquisition of cloud storage with two main features: directory browsing and logical copy of a selected folder tree. | SaaS
LINEA [19] | A forensic tool for live network evidence acquisition from online services. | SaaS
ForenVisor [68] | A tool for live forensic analysis in the form of a dynamic hypervisor. | IaaS

stored in the file system of a cloud storage. Timelining and correlating events with the help of
distinct logs generated in the cloud environment is not a trivial task due to the number of logs
generated, a vast number of sources, and distinct time zones. Thorpe et al. [96] introduced VM Log
Auditor to create a timeline of VM hypervisor log events from various physical operating sys-
tems. VNsnap is a snapshot tool for virtual network infrastructures (VNI). The snapshot includes
VM execution, communication, and storage images. LINEA [19] is a live network evidence acquisition tool for World Wide Web (WWW) services that assumes the availability of a Trusted Third Party (TTP), which acts as an interface and collects all the information for the forensic investigators. Qi et al. [68] introduced a dynamic hypervisor, ForenVisor, as a tool for live (volatile) evidence
collection and preservation. It overcomes the reliability issue for live forensics by reducing the Trusted Computing Base (TCB) size and by collecting evidence directly from the hardware.

7.2 Forensic Metrics


Metrics are measurable quantities that assist in quantifying the property under consideration of
a particular process. Metrics present a perspective that highlights various aspects, including the
effectiveness and efficiency of any process, system, or organization. They also help in providing
feedback for further improvement, comparison, and future innovations.
In the context of forensic investigations, Cohen [23] mentioned different categories of metrics
such as metrics on witnesses (expert and non-expert) and metrics on evidence (reliable, relevant,
authentic, and original writing). In addition, the author focused on evidence level metrics for digital
forensics and discussed the issues regarding each metric. For example, it is not a trivial task to determine the reliability of digital evidence, and an even more difficult task is to measure that reliability.
In another viewpoint, quantity and quality are two performance metrics that can be useful for
investigative authorities. Quantity refers to a number of completed investigations, and quality
refers to a number of investigations that lead to prosecution or penalty.
However, our focus in this section is on metrics that allow the comparison between two forensic
solutions. In the course of our literature survey, we noticed that there is no such discussion of metrics in either digital or cloud forensics. This represents a research gap, as the evaluation of two methods may
provide insight to forensic investigators about their effectiveness and impact on the duration of
the investigation. For example, in the cloud, log forensics is fundamental and part of a number of
forensic solutions. However, there should be metrics such as “analysis time” and “effort to evaluate”
for comparison as there is a large amount of log information. In the case of live analysis, it is crucial
that the system should generate quick alerts with minimum false positives, which are popular
metrics in the case of IDS and firewalls. Forensic metrics may not always be directly comparable, as cybercrimes differ in scale, method, and target. Each has a significant impact on forensics, as the scale and method of a cyber attack may influence the duration of the investigation.
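As a sketch of how such comparative metrics could be recorded and used, the snippet below ranks two hypothetical log-analysis solutions by false-positive rate and analysis time; all figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ForensicRun:
    solution: str
    analysis_seconds: float     # the "analysis time" metric
    alerts_raised: int
    false_positives: int

    @property
    def false_positive_rate(self) -> float:
        return self.false_positives / self.alerts_raised if self.alerts_raised else 0.0

# Hypothetical measurements from evaluating two solutions on the same incident.
runs = [ForensicRun("solution-A", 540.0, 120, 30),
        ForensicRun("solution-B", 960.0, 80, 4)]

for run in sorted(runs, key=lambda r: (r.false_positive_rate, r.analysis_seconds)):
    print(f"{run.solution}: {run.analysis_seconds}s, "
          f"FP rate {run.false_positive_rate:.2%}")
```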

8 CLOUD FORENSIC INVESTIGATION GUIDELINES


In the course of our literature survey, we realized that investigators confront multiple issues regarding forensic investigations in a cloud environment. In this section, we list the various forensic artifacts available at each stakeholder or contributor in the cloud ecosystem. Later, we give a
comprehensive list of important considerations in the form of investigation guidelines for cloud
forensics.

8.1 Forensic Stakeholders/Contributors


We believe that the collection of artifacts at different layers will provide a comprehensive view to
the forensic investigators about where and how to find valuable evidential artifacts. We present an
analysis to discuss the role of different stakeholders amid the collection, evaluation, and presen-
tation of collectible evidence. In particular, we evaluate and mark the position of cloud end-user,
service owner, and CSP while collecting different artifacts on one or more of the stakeholders.
We also highlight different access mechanisms for the related artifacts that include a command-
line interface, a management console/dashboard, and APIs. Most of these mechanisms exclude the
physical acquisition of underlying resources, which highlights the compulsory need for alternate
access methods due to remote location and vast geographical boundaries in cloud computing. For a
simplified view, we designed and marked all the stakeholders of a cloud environment in Figure 7.
We provide a detailed listing of forensic artifacts at each stakeholder, including resources and
access mechanisms, in Table 14. This table also marks the dependencies on one or more


Fig. 7. This figure marks different forensic stakeholders/contributors in cloud eco-system.

particular stakeholders during the collection of potential forensic artifacts related to a specific
stakeholder.
8.1.1 Cloud End-user. Cloud end-users (service users) utilize cloud applications, which include
all the web-services hosted in a cloud. Examples include cloud-hosted websites, e-commerce por-
tals, email, cloud storage, and content sharing applications. End-users utilize the services over
the Internet through a browser, desktop application, or a phone-based app/browser. Each access
method leaves data artifacts/remnants on the end-user machine due to the various interactions
with cloud-based services. Identification of these artifacts may provide forensic investigators with potential evidence on an end-user machine. As the digital artifacts reside with the cloud end-user itself, the dependency for collecting these forensic artifacts lies with the cloud end-user only.
8.1.2 Cloud Service Owner. Cloud service owners are VM owners who host a particular server/service and have complete control over the execution environment. Cloud service owners
usually manage their cloud services through three CSP-provided mechanisms: CLI (command-
line interface), management console, and APIs. Service owners have access to all these artifacts,
so we show a dependency on the cloud service owner in Table 14. In most cases, each access method is CSP independent. However, the CSP generates all related information on behalf of cloud service owners. In the scenario of a non-cooperative service owner, forensic investigators may have to rely upon the CSP to acquire the artifacts. Therefore, it is also apt to mark a CSP dependency for the collection of the artifacts related to the cloud service owner.
8.1.3 Virtual Machine. Virtual machines provide a rich source of artifacts to the investigators, as they have a complete operating system and all the related services. VM owners can access these
artifacts using three methods: CLI, dashboard, and APIs. There are guest OS files that provide
all the information about the user’s activities and data. There are numerous logs associated with
various applications and background processes, including databases and transaction logs. Logs
associated with different security measures such as firewall, anti-virus software, and IDS/IPS may
provide information about malicious incoming and outgoing activities of a virtual machine. All
artifacts listed in Table 14 are easily accessible to the VM owners, which results in a dependency
on the service owner. The CSP is another major actor that forensic investigators may have to rely upon for these artifacts in cases where access via the service owner is unavailable or in other related emergent situations.


Table 14. Location of Forensic Artifacts in Each Forensic Stakeholder/Contributor with Dependency

Stakeholder/Contributor | Access mechanism | Forensic artifacts and resources | Dependency
Cloud end-user | Browser | Browsing history, cache, and cookies | U
Cloud end-user | Desktop application | Application logs, database files, network captures, registry, and syslog/event logs | U
Cloud end-user | Mobile | Log and database files | U
Cloud end-user | APIs | Directory listing, data revision/changelog, and metadata information | P
Cloud service owner | CLI | Account activities, service, application, and data logs | O, P
Cloud service owner | Management console/dashboard | Account activities, service, application, and data logs | O, P
Cloud service owner | APIs | Account activities, service, application, and data logs | O, P
Virtual machine | CLI | Guest OS, firewall, antivirus/anti-malware, and IDS/IPS logs; VM snapshots (disk, memory) and clones; application log files; database and application backups; transaction logs; and network/flow logs | O, P
Virtual machine | Management console/dashboard | Same artifacts as for the CLI | O, P
Virtual machine | APIs | Same artifacts as for the CLI | O, P
CSP (Hypervisor) | CLI | Event logs/Syslog, audit logs and messages, /var/log/ and /proc folders | P
CSP (Hypervisor) | Management console/dashboard | Same artifacts as for the CLI | P
CSP (Hypervisor) | APIs | Same artifacts as for the CLI | P
CSP (Cloud management software) | CLI | Various logs associated with different components of the cloud environment, including VMs, hypervisor, cloud storage, and cloud network | P
CSP (Cloud management software) | Management console/dashboard | Same artifacts as for the CLI | P
CSP (Cloud management software) | APIs | Same artifacts as for the CLI | P
CSP (Cloud storage management) | CLI | Backups, archives, snapshots, replicas, databases, and logs | P
CSP (Cloud storage management) | Management console/dashboard | Same artifacts as for the CLI | P
CSP (Cloud storage management) | APIs | Same artifacts as for the CLI | P
CSP (Cloud network management) | CLI | Router, switch, and gateway logs; SDN controller logs; and northbound and southbound APIs' logs | P
CSP (Cloud network management) | Management console/dashboard | Same artifacts as for the CLI | P
CSP (Cloud network management) | APIs | Same artifacts as for the CLI | P
Dependency — U: Cloud End-user, O: Cloud Service Owner, and P: Cloud Service Provider.


8.1.4 CSP (Hypervisor). Many of the artifacts that are part of traditional OS environments are
also available in hypervisors. Event logs/Syslog, audit logs, and messages are the fundamental ar-
tifacts available in a hypervisor. These logs provide various information about individual events of
the VM instances. In the cloud, hypervisor access is not available to cloud consumers and is restricted to the CSP only. To collect low-level system information, forensic investigators must depend on the cooperation of the CSP.
8.1.5 CSP (Cloud Management Software). Examples of cloud management software include
OpenStack28 and Eucalyptus.29 These software suites help in establishing cloud platforms that
manage computing, networking, and storage resources in a data center along with features such
as accounting, elasticity, and resource allocation. All these software suites are part of the cloud infrastructure and are under the control of the CSP, which results in their CSP dependency.
8.1.6 CSP (Cloud Storage Management). Logs associated with various storage management
software may aid the investigators with access information of various storage devices. As part
of storage devices, backups, archives, VM snapshots, and replicas/clones may provide information
about the residing content for each user. Database information is another important source of ar-
tifacts in cloud storage. Forensic investigators must acquire these artifacts from the CSP, which marks the CSP dependency for artifact acquisition in this category.
8.1.7 CSP (Cloud Network Management). Today, a CSP's infrastructure spans tens of data centers consisting of hundreds of devices, including compute, storage, and network devices.
From the forensics’ point of view, logs associated with each network device are fundamental arti-
facts in the CSP network. In the case of SDN, the SDN controller logs are important, as the controller manages the complete networking infrastructure from a single location, including the northbound and southbound APIs. Logs associated with these APIs may also provide a detailed history of communication between various devices. As for the dependency, it again lies with the CSP, as the CSP owns and controls the complete cloud-side network infrastructure.

8.2 Solution Considerations


As an important outcome of our survey, we must acknowledge that cloud forensics in its current state lacks comprehensive, reliable, and compatible forensic solutions. In this section, we provide some essential directions that may help the field of cloud forensics as guidelines for future solutions. We believe and anticipate that these guidelines may help forensic practitioners and CSPs bring effective forensic solutions to the cloud environment.
8.2.1 Live and Remote Analysis. While using conventional digital forensic tools and procedures,
investigators face compatibility and reliability issues in a cloud environment. It is impossible to acquire physical access to the underlying hardware resources in the cloud owing to the unknown physical location, contrary to the conventional on-site computing environment. We believe that forensic examiners should emphasize the compulsory need for live analysis in the cloud environment. Due to geographically distributed resources with no physical access, it is only logical to deduce the remote nature of live system analysis in cloud forensics as well. Another argument in support of live and remote analysis is its practicality in the cloud: in the case of compromised systems, investigators could lose valuable information with VM shutdown or deletion. The level of sophistication of current attacks is another major concern, as many leave no trace in persistent storage.

28 https://openstack.org/.
29 http://www.eucalyptus.com/.


One example is file-less malware, which creates itself in the victim computer's main memory rather than using the typical installation approach [100].
8.2.2 Log Aggregation and Correlation for Forensics. Log aggregation and correlation is crucial;
however, it is not a trivial task, owing to the disparate types and formats of logs with different
and incompatible structures, and the sheer volume of logging data. Many enterprise SIEM (security
information and event management) solutions, such as Splunk Enterprise Security (ES),30 LogRhythm
SIEM,31 Micro Focus ArcSight,32 and Elastic Stack,33 provide log collection, aggregation, and
correlation for the cloud environment. However, these are security solutions and focus on incident
detection and handling. We believe researchers must evaluate SIEM solutions for the post-incident
part of forensics in the cloud environment. CSPs may also need to extend their logging capabilities
to enable logging for forensics by default.
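To make the normalization problem concrete, the sketch below parses two differently formatted log sources into a common record schema and groups them by a shared request identifier; the log formats and field names are invented for illustration and stand in for whatever the CSP and guest systems actually emit.

```python
import re
from collections import defaultdict

# Invented example formats standing in for heterogeneous cloud log sources.
HYPERVISOR_RE = re.compile(
    r"(?P<ts>\S+) hv\[\d+\]: req=(?P<req>\S+) vm=(?P<vm>\S+) action=(?P<action>\S+)")
STORAGE_RE = re.compile(
    r"(?P<ts>\S+),storage,(?P<req>[^,]+),(?P<object>[^,]+),(?P<action>\S+)")

def normalize(line):
    """Map a raw log line from either source onto one common schema."""
    for source, pattern in (("hypervisor", HYPERVISOR_RE), ("storage", STORAGE_RE)):
        match = pattern.match(line)
        if match:
            record = match.groupdict()
            record["source"] = source
            return record
    return None  # unknown format; retain the raw line elsewhere for completeness

def correlate(lines):
    """Group normalized records by request ID to reconstruct one event's trail."""
    timeline = defaultdict(list)
    for line in lines:
        record = normalize(line)
        if record:
            timeline[record["req"]].append(record)
    return timeline

if __name__ == "__main__":
    sample = [
        "2019-07-01T10:00:01Z hv[42]: req=r-17 vm=vm-9 action=snapshot",
        "2019-07-01T10:00:03Z,storage,r-17,volume-9,read",
    ]
    for req, events in correlate(sample).items():
        print(req, [e["source"] + ":" + e["action"] for e in events])
```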
8.2.3 CSP as a Forensic Contributor. CSPs own and control the underlying hardware infrastructure
of a cloud. Dependence on the CSP for potential evidential data is therefore a significant issue
for both forensic practitioners and investigators. Artifact acquisition may quickly become a time-
consuming process if a CSP is non-cooperative due to internal policies, privacy issues, or
jurisdictional boundaries, which may hamper time-sensitive and real-time criminal investigations.
Ab Rahman et al. [2] proposed a forensic-by-design framework that integrates forensic requirements
into the development of cyber-physical cloud systems to mitigate risks and enable forensic
capabilities. We believe it is high time that CSPs evolve methods to provide forensic capabilities to
investigators that are not limited to logs; these may take the form of “forensic-as-a-service.” There
are further issues when we consider how evidential data residing within a cloud infrastructure is
accessed. Trust in the CSP is one of them, since forensic artifacts within the cloud architecture are
vulnerable to insider threats such as a malicious administrator or a current or former employee [31].
We stress that CSPs must provide security mechanisms for the artifacts that are even “CSP proof.”
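One direction for making artifacts harder for any single insider, or the CSP itself, to alter silently is tamper-evident logging; secure-logging schemes in the literature (e.g., [4, 103]) pursue this direction. The fragment below is a minimal, generic sketch of a keyed hash chain over log entries, not a reimplementation of those schemes; key handling (who holds the key, e.g., an external auditor) is deliberately left out and is the hard part in practice.

```python
import hashlib
import hmac
import json

def chain_entries(entries, key, seed=b"genesis"):
    """Attach to each log entry an HMAC that also covers the previous tag,
    so editing or removing any entry breaks every tag that follows it."""
    previous_tag = hmac.new(key, seed, hashlib.sha256).hexdigest()
    chained = []
    for entry in entries:
        payload = json.dumps(entry, sort_keys=True).encode()
        tag = hmac.new(key, previous_tag.encode() + payload, hashlib.sha256).hexdigest()
        chained.append({"entry": entry, "tag": tag})
        previous_tag = tag
    return chained

def verify_chain(chained, key, seed=b"genesis"):
    """Recompute the chain and report the first entry whose tag no longer matches."""
    previous_tag = hmac.new(key, seed, hashlib.sha256).hexdigest()
    for index, item in enumerate(chained):
        payload = json.dumps(item["entry"], sort_keys=True).encode()
        expected = hmac.new(key, previous_tag.encode() + payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, item["tag"]):
            return index  # first tampered position
        previous_tag = expected
    return None

if __name__ == "__main__":
    key = b"held-by-external-auditor"   # assumption: key escrowed outside the CSP
    log = [{"event": "vm.start", "vm": "vm-9"}, {"event": "volume.attach", "vm": "vm-9"}]
    sealed = chain_entries(log, key)
    sealed[0]["entry"]["vm"] = "vm-7"    # simulate tampering
    print("first bad entry:", verify_chain(sealed, key))
```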
8.2.4 Privacy and Evidence Isolation. Virtualization allows CSPs to create a multi-tenant
environment that hosts multiple users on a single server for effective and efficient resource
utilization. However, resource virtualization may lead to privacy violations. Privacy and forensics
inherently conflict with each other: the former is concerned with the concealment of information,
while the latter depends on its disclosure. In the cloud, privacy and evidence isolation are points
of concern owing to the multi-tenant environment and the overlapping of memory and storage regions.
This is a leading issue especially in live forensics. Forensic practitioners perform live analysis of
a system through memory dumps; however, the memory contents of other users/VMs may also be part of
that dump, and the analysis stage may later reveal sensitive information about users who are not
under investigation, thereby violating their privacy. We believe that cloud forensics requires
serious effort on evidence-isolation methods to overcome the privacy concerns surrounding current
forensic solutions.
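A simple illustration of evidence isolation is to carve only the suspect VM's pages out of a host-level memory capture before analysis. The sketch below copies a list of (offset, length) regions from a dump into a new file; how those regions are identified (e.g., from hypervisor metadata) is the hard, platform-specific part and is assumed as given input here.

```python
import hashlib

def carve_regions(dump_path, regions, out_path, chunk_size=1 << 20):
    """Copy only the byte ranges attributed to the suspect VM into a new file,
    keeping co-tenant memory out of the evidence handed to analysts."""
    digest = hashlib.sha256()
    with open(dump_path, "rb") as src, open(out_path, "wb") as dst:
        for offset, length in regions:
            src.seek(offset)
            remaining = length
            while remaining > 0:
                chunk = src.read(min(chunk_size, remaining))
                if not chunk:
                    break  # dump shorter than the expected region
                dst.write(chunk)
                digest.update(chunk)
                remaining -= len(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical regions belonging to the suspect VM, e.g., exported by the hypervisor.
    suspect_regions = [(0x0010_0000, 0x0040_0000), (0x1000_0000, 0x0200_0000)]
    print(carve_regions("host_memory.raw", suspect_regions, "vm9_only.raw"))
```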
8.2.5 Role of Trusted Computing. Trust is one of the main concerns in cloud forensics because
control over both data and computation shifts from consumers to the CSP. Trusting the CSP means
trusting the underlying infrastructure and the artifacts it yields, and it is a daunting task to ensure
that evidential data within the cloud infrastructure is authentic and has not been modified or
subverted by any entity, including the CSP.

30 https://www.splunk.com/.
31 http://www.logrhythm.com/.
32 https://software.microfocus.com/en-us/products/siem-security-information-event-management/overview.
33 https://www.elastic.co/elk-stack.

To establish trust in the cloud architecture, many authors have explored trusted computing for
the cloud environment [48, 49, 85]. We believe that Trusted Platform Module (TPM34)-based
solutions are a viable way to enable verifiable trust in a multi-tenant environment such as a
public cloud, as trusted computing directly addresses the problems of evidence integrity and authenticity.

9 CONCLUSION
Cloud forensics will be increasingly important as more of the things around us become cloud-
dependent (e.g., Cloud-of-Things). In this article, we surveyed existing literature and presented
a cloud forensic investigation solution taxonomy, focusing on incident-driven, provider-driven,
consumer-driven, and resource-driven investigations. For example, in incident-driven cloud foren-
sics, we considered measures and methods that focus on “pre” and “post” part of security inci-
dent detection. In both provider-driven and consumer-driven cloud forensics, we focused on the
two principal actors of cloud architecture, namely, CSP and consumer. In provider-driven cloud
forensics, we focused on solutions primarily from the CSP’s perspective given the nature of such
investigations (and potential evidential source). Not surprisingly, in consumer-driven cloud foren-
sics, we focused on solutions at the client-side for the collection of potential evidential artifacts.
The resource-driven cloud forensics category focused on forensic methods of individual cloud re-
sources such as virtual machine, storage, and network.
The challenges and discussions presented in this article (e.g., the detailed list of potential digital
artifacts available from each stakeholder, the tools available to access and collect these artifacts, and
the dependencies between the various stakeholders in the cloud computing ecosystem) contribute to a better
understanding of cloud forensics. We also presented cloud forensic investigation guidelines relat-
ing to live analysis, log aggregation and correlation from a forensic perspective, CSP as a forensic
contributor, user privacy and evidence isolation, and the role of trusted technologies in a cloud en-
vironment. We also highlighted a number of research opportunities, such as the need to develop
cloud forensic tools and solutions with less CSP dependence, especially with a focus on public
cloud platforms. In addition, cloud forensic readiness, the security of cloud logs, forensic-compliant
SLAs, Forensic-as-a-Service (FaaS), big data, and compliance with data protection and privacy
regulations such as the GDPR are aspects that require significant effort with respect to cloud forensics.
REFERENCES
[1] Nurul Hidayah Ab Rahman, Niken Dwi Wahyu Cahyani, and Kim-Kwang Raymond Choo. 2017. Cloud incident
handling and forensic-by-design: Cloud storage as a case study. Concurr. Comput.: Pract. Exper. 29, 14 (2017), e3868.
[2] Nurul Hidayah Ab Rahman, William Bradley Glisson, Yanjiang Yang, and Kim-Kwang Raymond Choo. 2016.
Forensic-by-design framework for cyber-physical cloud systems. IEEE Cloud Comput. 3, 1 (2016), 50–59.
[3] Imad M. Abbadi, John Lyle et al. 2011. Challenges for provenance in cloud computing. In Proceedings of the Interna-
tional Workshop on Theory and Practice of Provenance (TaPP’11).
[4] MA Manazir Ahsan, Ainuddin Wahid Abdul Wahab, Mohd Yamani Idna Idris, Suleman Khan, Eric Bachura, and
Kim-Kwang Raymond Choo. 2018. CLASS: Cloud log assuring soundness and secrecy scheme for cloud forensics.
IEEE Trans. Sustain. Comput. 60, C (2018), 193–205.
[5] M. Edington Alex and R. Kishore. 2017. Forensics framework for cloud computing. Comput. Electr. Eng. 60 (2017),
193–205.
[6] Syed Ahmed Ali, Shahzad Memon, and Farhan Sahito. 2018. Challenges and solutions in cloud forensics. In Pro-
ceedings of the 2nd International Conference on Cloud and Big Data Computing (ICCBDC’18). ACM, New York, NY,
6–10.
[7] Sameera Almulla, Youssef Iraqi, and Andrew Jones. 2014. A state-of-the-art review of cloud forensics. J. Dig. Forens.
Secur. Law 9, 4 (2014), 2.
[8] Sameera Almulla, Youssef Iraqi, and Andrew Jones. 2016. Digital forensic of a cloud-based snapshot. In Proceedings
of the 6th International Conference on Innovative Computing Technology (INTECH’16). 724–729.

34 https://trustedcomputinggroup.org/resource/trusted-multi-tenant-infrastructure-reference-framework/.

[9] Saad Alqahtany, Nathan Clarke, Steven Furnell, and Christoph Reich. 2015. Cloud forensics: A review of challenges,
solutions and open problems. In Proceedings of the International Conference on Cloud Computing (ICCC’15). IEEE,
1–9.
[10] Muhammad Rizwan Asghar, Mihaela Ion, Giovanni Russello, and Bruno Crispo. 2012. Securing data provenance in
the cloud. In Open Problems in Network Security. Springer, 145–160.
[11] Siamak Azodolmolky, Philipp Wieder, and Ramin Yahyapour. 2013. SDN-based cloud computing networking. In
Proceedings of the 15th International Conference on Transparent Optical Networks (ICTON’13). IEEE, 1–4.
[12] Siamak Azodolmolky, Philipp Wieder, and Ramin Yahyapour. 2013. Cloud computing networking: Challenges
and opportunities for innovations. IEEE Commun. Mag. 51, 7 (2013), 54–62.
[13] Hyun Baek, Abhinav Srivastava, and Jacobus Van der Merwe. 2014. Cloudvmi: Virtual machine introspection as a
cloud service. In Proceedings of the International Conference on Cloud Engineering (IC2E’14). IEEE, 153–158.
[14] James Baldwin, Omar M. K. Alhawi, Simone Shaughnessy, Alex Akinbi, and Ali Dehghantanha. 2018. Emerging
from the cloud: A bibliometric analysis of cloud forensics studies. Cyber Threat Intell. (2018), 311–331. DOI:10.1007/
978-3-319-73951-9_16
[15] Adam Bates, Kevin Butler, Andreas Haeberlen, Micah Sherr, and Wenchao Zhou. 2014. Let SDN be your eyes: Secure
forensics in data center networks. In Proceedings of the NDSS Workshop on Security of Emerging Network Technologies
(SENT’14).
[16] Abha Belorkar and G. Geethakumari. 2011. Regeneration of events using system snapshots for cloud forensic anal-
ysis. In Proceedings of the India Conference (INDICON’11). IEEE, 1–4.
[17] Adam J. Brown, William Bradley Glisson, Todd R. Andel, and Kim-Kwang Raymond Choo. 2018. Cloud forecasting:
Legal visibility issues in saturated environments. Comput. Law Secur. Rev. 34, 6 (2018), 1278–1290.
[18] Niken Dwi Wahyu Cahyani, Ben Martini, Kim-Kwang Raymond Choo, and AKBP Muhammad Nuh Al-Azhar. 2017.
Forensic data acquisition from cloud-of-things devices: Windows smartphones as a case study. Concurr. Comput.:
Pract. Exper. 29, 14 (2017), e3855.
[19] Aniello Castiglione, Giuseppe Cattaneo, Giancarlo De Maio, Alfredo De Santis, and Gianluca Roscigno. 2017. A
novel methodology to acquire live big data evidence from the cloud. IEEE Trans. Big Data 99 (2017), 1–14. DOI:10.
1109/TBDATA.2017.2683521
[20] Kim-Kwang Raymond Choo. 2007. Zombies and botnets. Trends Iss. Crime Crim. Just. 333 (2007), 1–6.
[21] Kim-Kwang Raymond Choo, Christian Esposito, and Aniello Castiglione. 2017. Evidence and forensics in the cloud:
Challenges and future research directions. IEEE Cloud Comput. 4, 3 (2017), 14–19.
[22] Hyunji Chung, Jungheum Park, Sangjin Lee, and Cheulhoon Kang. 2012. Digital forensic investigation of cloud
storage services. Dig. Investig. 9, 2 (2012), 81–95.
[23] Fred Cohen. 2011. Metrics for digital forensics. Retrieved on October 12, 2018 from http://securitymetrics.org/
attachments/Metricon-5.5-Cohen-Metrics-in-Digital-Forensics.pdf.
[24] Gartner Risk Management Leadership Council. 2018. Top 10 Emerging Risks of Q2 2018. Retrieved on September
17, 2018 from https://www.gartner.com/en/audit-risk/trends/top-ten-emerging-risks.
[25] Farid Daryabar, Ali Dehghantanha, and Kim-Kwang Raymond Choo. 2017. Cloud storage forensics: MEGA as a case
study. Austral. J. Forens. Sci. 49, 3 (2017), 344–357.
[26] Farid Daryabar, Ali Dehghantanha, Nur Izura Udzir, Solahuddin bin Shamsuddin, Farhood Norouzizadeh et al. 2013.
A survey about impacts of cloud computing on digital forensics. Int. J. Cyber-Secur. Dig. Forens. 2, 2 (2013), 77–95.
[27] Lucia De Marco, M. Tahar Kechadi, and Filomena Ferrucci. 2013. Cloud forensic readiness: Foundations. In Proceed-
ings of the International Conference on Digital Forensics and Cyber Crime. Springer, 237–244.
[28] Waldo Delport, Michael Köhn, and Martin S. Olivier. 2011. Isolating a cloud instance for a digital forensic investi-
gation. In Proceedings of the International Information Security South Africa Conference (ISSA’11).
[29] Waldo Delport and Martin Olivier. 2012. Isolating instances in cloud forensics. In Proceedings of the IFIP International
Conference on Digital Forensics. Springer, 187–200.
[30] Quang Do, Ben Martini, and Kim-Kwang Raymond Choo. 2015. A cloud-focused mobile forensics methodology.
IEEE Cloud Comput. 2, 4 (2015), 60–65.
[31] Josiah Dykstra and Alan T. Sherman. 2012. Acquiring forensic evidence from infrastructure-as-a-service cloud com-
puting: Exploring and evaluating tools, trust, and techniques. Dig. Investig. 9 (2012), S90–S98.
[32] Christian J. D’Orazio and Kim-Kwang Raymond Choo. 2017. A technique to circumvent SSL/TLS validations on iOS
devices. Future Gen. Comput. Syst. 74 (2017), 366–374.
[33] Corrado Federici. 2014. Cloud data imager: A unified answer to remote acquisition of cloud storage areas. Dig.
Investig. 11, 1 (2014), 30–42.
[34] Tal Garfinkel, Mendel Rosenblum et al. 2003. A virtual machine introspection-based architecture for intrusion de-
tection. In Proceedings of the Network and Distributed System Security Symposium (NDSS’03), Vol. 3. 191–206.

[35] Tobias Gebhardt and Hans P. Reiser. 2013. Network forensics for cloud computing. In Proceedings of the IFIP Inter-
national Conference on Distributed Applications and Interoperable Systems. Springer, 29–42.
[36] George Grispos, Tim Storer, and William Bradley Glisson. 2012. Calm before the storm: The challenges of cloud
computing in digital forensics. Int. J. Dig. Crime Forens. 4, 2 (2012), 28–48.
[37] Jason S. Hale. 2013. Amazon cloud drive forensic analysis. Dig. Investig. 10, 3 (2013), 259–265.
[38] Josiah Dykstra and Alan T. Sherman. 2013. Design and implementation of FROST: Digital forensic tools for the
OpenStack cloud computing platform. Dig. Investig. 10 (2013), S87–S95.
[39] Ardalan Kangarlou, Patrick Eugster, and Dongyan Xu. 2009. Vnsnap: Taking snapshots of virtual networked en-
vironments with minimal downtime. In Proceedings of the Conference on Dependable Systems and Networks. IEEE,
524–533.
[40] Victor Kebande and H. S. Venter. 2015. A functional architecture for cloud forensic readiness large-scale poten-
tial digital evidence analysis. In Proceedings of the European Conference on Cyber Warfare and Security. Academic
Conferences Int’l Limited, 373.
[41] Victor R. Kebande and Hein S. Venter. 2014. A cloud forensic readiness model using a Botnet as a Service. In Proceed-
ings of the International Conference on Digital Security and Forensics. The Society of Digital Information and Wireless
Communication, 23–32.
[42] Victor R. Kebande and Hein S. Venter. 2018. On digital forensic readiness in the cloud using a distributed agent-based
solution: Issues and challenges. Austral. J. Forens. Sci. 50, 2 (2018), 209–238.
[43] Suleman Khan, Abdullah Gani, Ainuddin Wahid Abdul Wahab, Ahmed Abdelaziz, and Mustapha Aminu Bagiwa.
2016. FML: A novel forensics management layer for software defined networks. In Proceedings of the 6th International
Conference on Cloud System and Big Data Engineering (Confluence’16). IEEE, 619–623.
[44] Suleman Khan, Abdullah Gani, Ainuddin Wahid Abdul Wahab, Ahmed Abdelaziz, Kwangman Ko, Muhammad Khur-
ram Khan, and Mohsen Guizani. 2016. Software-defined network forensics: Motivation, potential locations, require-
ments, and challenges. IEEE Netw. 30, 6 (2016), 6–13.
[45] Suleman Khan, Abdullah Gani, Ainuddin Wahid Abdul Wahab, Mustapha Aminu Bagiwa, Muhammad Shiraz, Samee
U. Khan, Rajkumar Buyya, and Albert Y. Zomaya. 2016. Cloud log forensics: Foundations, state of the art, and future
directions. ACM Comput. Surveys 49, 1 (2016), 7.
[46] Suleman Khan, Abdullah Gani, Ainuddin Wahid Abdul Wahab, Muhammad Shiraz, and Iftikhar Ahmad. 2016. Net-
work forensics: Review, taxonomy, and open challenges. J. Netw. Comput. Appl. 66 (2016), 214–235.
[47] Jin Li, Xiaofeng Chen, Qiong Huang, and Duncan S. Wong. 2014. Digital provenance: Enabling secure data forensics
in cloud computing. Future Gen. Comput. Syst. 37 (2014), 259–266.
[48] Xiao-Yong Li, Li-Tao Zhou, Yong Shi, and Yu Guo. 2010. A trusted computing environment model in cloud archi-
tecture. In Proceedings of the International Conference on Machine Learning and Cybernetics (ICMLC’10), Vol. 6. IEEE,
2843–2848.
[49] Dongxi Liu, Jack Lee, Julian Jang, Surya Nepal, and John Zic. 2010. A cloud architecture of virtual trusted platform
modules. In Proceedings of the IEEE/IFIP International Conference on Embedded and Ubiquitous Computing. IEEE,
804–811.
[50] Zhenbang Liu and Hengming Zou. 2014. Poster: A proactive cloud-based cross-reference forensic framework. In
Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. ACM, 1475–1477.
[51] Rongxing Lu, Xiaodong Lin, Xiaohui Liang, and Xuemin Sherman Shen. 2010. Secure provenance: The essential of
bread and butter of data forensics in cloud computing. In Proceedings of the 5th ACM Symposium on Information,
Computer and Communications Security. ACM, 282–292.
[52] Ben Martini and Kim-Kwang Raymond Choo. 2012. An integrated conceptual digital forensic framework for cloud
computing. Dig. Investig. 9, 2 (2012), 71–80.
[53] Ben Martini and Kim-Kwang Raymond Choo. 2014. Cloud forensic technical challenges and solutions: A snapshot.
IEEE Cloud Comput. 1, 4 (2014), 20–25.
[54] Raffael Marty. 2011. Cloud application logging for forensics. In Proceedings of the ACM Symposium on Applied Com-
puting. ACM, 178–184.
[55] McAfee. 2016. Cloud Adoption and Risk Report. Retrieved on July 12, 2018 from https://www.skyhighnetworks.
com/cloud-report/.
[56] McAfee. 2018. Navigating a Cloudy Sky: Practical Guidance and the State of Cloud Security. Retrieved on November
04, 2018 from https://www.mcafee.com/enterprise/en-us/solutions/lp/cloud-security-report.html.
[57] P. Mell and T. Grance. 2014. NIST cloud computing forensic science challenges. Draft Nistir 8006 (2014).
[58] Shaik Khaja Mohiddin, Suresh Babu Yalavarthi, and Shaik Sharmila. 2017. A complete ontological survey of cloud
forensic in the area of cloud computing. In Proceedings of the 6th International Conference on Soft Computing for
Problem Solving. Springer, 38–47.

[59] SeyedHossein Mohtasebi, Ali Dehghantanha, and Kim-Kwang Raymond Choo. 2017. Cloud storage forensics: Analysis of data
remnants on SpiderOak, JustCloud, and pCloud. In Contemporary Digital Forensic Investigations of Cloud and Mobile
Applications. Elsevier, 205–246.
[60] Kiran-Kumar Muniswamy-Reddy, Peter Macko, and Margo I Seltzer. 2010. Provenance for the cloud. In Proceedings
of the USENIX Conference on File and Storage Technologies (FAST’10), Vol. 10.
[61] Kiran-Kumar Muniswamy-Reddy and Margo Seltzer. 2010. Provenance as first class cloud data. ACM SIGOPS Operat.
Syst. Rev. 43, 4 (2010), 11–16.
[62] National Institute of Standards and Technology. 2018. Computer Forensics Tool Catalog. Retrieved from https://
toolcatalog.nist.gov.
[63] Alecsandru Patrascu and Victor-Valeriu Patriciu. 2014. Logging system for cloud computing forensic environments.
J. Control Eng. Appl. Info. 16, 1 (2014), 80–88.
[64] Jonas Pfoh, Christian Schneider, and Claudia Eckert. 2009. A formal model for virtual machine introspection. In
Proceedings of the 1st ACM Workshop on Virtual Machine Security. ACM, 1–10.
[65] Ameer Pichan, Mihai Lazarescu, and Sie Teng Soh. 2015. Cloud forensics: Technical challenges, solutions and com-
parative analysis. Dig. Investig. 13 (2015), 38–57.
[66] James Poore, Juan Carlos Flores, and Travis Atkison. 2013. Evolution of digital forensics in virtualization by using
virtual machine introspection. In Proceedings of the 51st ACM Southeast Conference. ACM, 30.
[67] PricewaterhouseCoopers. 2016. Financial Services Technology 2020 and Beyond: Embracing disruption. Retrieved on
November 01, 2018 from https://www.pwc.com/gx/en/financial-services/assets/pdf/technology2020-and-beyond.
pdf.
[68] Zhengwei Qi, Chengcheng Xiang, Ruhui Ma, Jian Li, Haibing Guan, and David S. L. Wei. 2017. ForenVisor: A tool
for acquiring and preserving reliable data in cloud live forensics. IEEE Trans. Cloud Comput. 5, 3 (2017), 443–456.
DOI:10.1109/tcc.2016.2535295
[69] Darren Quick and Kim-Kwang Raymond Choo. 2013. Dropbox analysis: Data remnants on user machines. Dig.
Investig. 10, 1 (2013), 3–18.
[70] Darren Quick and Kim-Kwang Raymond Choo. 2013. Forensic collection of cloud storage data: Does the act of
collection result in changes to the data or its metadata? Dig. Investig. 10, 3 (2013), 266–277.
[71] Darren Quick and Kim-Kwang Raymond Choo. 2014. Impacts of increasing volume of digital forensic data: A survey
and future research challenges. Dig. Investig. 11, 4 (2014), 273–294.
[72] Darren Quick and Kim-Kwang Raymond Choo. 2018. IoT device forensics and data reduction. IEEE Access 6 (2018),
47566–47574.
[73] B. K. S. P. Raju, Nikhil Bharadwaj Gosala, and G. Geethakumari. 2017. CLOSER: Applying aggregation for effec-
tive event reconstruction of cloud service logs. In Proceedings of the 11th International Conference on Ubiquitous
Information Management and Communication. ACM, 62.
[74] B. K. S. P. Kumar Raju and G. Geethakumari. 2018. Timeline-based cloud event reconstruction framework for virtual
machine artifacts. In Progress in Intelligent Computing Techniques: Theory, Practice, and Applications. Springer, 31–42.
[75] B. K. S. P. Kumar Raju and G. Geethakumari. 2019. SNAPS: Towards building snapshot-based provenance system
for virtual machines in the cloud environment. Comput. Secur. 86 (2019), 92–111. DOI:https://doi.org/10.1016/j.cose.
2019.05.020
[76] Sagar Rane and Arati Dixit. 2019. BlockSLaaS: Blockchain assisted secure logging-as-a-service for cloud forensics.
In Proceedings of the International Conference on Security and Privacy. Springer, 77–88.
[77] Deevi Radha Rani and G. Geethakumari. 2015. An efficient approach to forensic investigation in cloud using VM
snapshots. In Proceedings of the International Conference on Pervasive Computing (ICPC’15). IEEE, 1–5.
[78] Andrew Reichman. 2011. File Storage Costs Less In The Cloud Than In-House. Retrieved on August 11, 2018 from
https://media.amazonwebservices.com/Forrester_File_Storage_Costs_Less_In_The_Cloud.pdf.
[79] Vassil Roussev, Irfan Ahmed, Andres Barreto, Shane McCulley, and Vivek Shanmughan. 2016. Cloud forensics–Tool
development studies & future outlook. Dig. Investig. 18 (2016), 79–95.
[80] Vassil Roussev, Andres Barreto, and Irfan Ahmed. 2016. Forensic acquisition of cloud drives. arXiv preprint arXiv:
1603.06542 (2016).
[81] Vassil Roussev and Shane McCulley. 2016. Forensic analysis of cloud-native artifacts. Dig. Investig. 16 (2016), S104–
S113.
[82] Keyun Ruan, Joe Carthy, Tahar Kechadi, and Ibrahim Baggili. 2013. Cloud forensics definitions and critical criteria
for cloud forensic capability: An overview of survey results. Dig. Investig. 10, 1 (2013), 34–43.
[83] Keyun Ruan, Joe Carthy, Tahar Kechadi, and Mark Crosbie. 2011. Cloud forensics. In Proceedings of the IFIP Interna-
tional Conference on Digital Forensics. 35–46.
[84] Yang Rui, Jiang-chun Ren, Bai Shuai, and Tang Tian. 2017. A digital forensic framework for cloud based on VMI.
DEStech Trans. Comput. Sci. Eng. 868–878. DOI:10.12783/dtcse/cst2017/12595

[85] Nuno Santos, Krishna P. Gummadi, and Rodrigo Rodrigues. 2009. Towards trusted cloud computing. In Proceedings of
the 2009 Conference on Hot Topics in Cloud Computing (HotCloud’09). USENIX Association. http://dl.acm.org/citation.
cfm?id=1855533.1855536.
[86] Shaik Sharmila and Ch Aparna. 2019. VMSSS: A proposed model for cloud forensic in cloud computing using VM
snapshot server. In Soft Computing for Problem Solving. Springer, 483–493.
[87] George Sibiya, Hein S. Venter, and Thomas Fogwill. 2015. Digital forensics in the cloud: The state of the art. In
Proceedings of the IST-Africa Conference. IEEE, 1–9.
[88] Stavros Simou, Christos Kalloniatis, Stefanos Gritzalis, and Haralambos Mouratidis. 2016. A survey on cloud foren-
sics challenges and solutions. Secur. Commun. Netw. 9, 18 (2016), 6285–6314.
[89] Stavros Simou, Christos Kalloniatis, Evangelia Kavakli, and Stefanos Gritzalis. 2014. Cloud forensics solutions: A
review. In Proceedings of the International Conference on Advanced Information Systems Engineering. Springer, 299–
309.
[90] Jungmin Son and Rajkumar Buyya. 2018. A taxonomy of software-defined networking (SDN)-enabled cloud com-
puting. ACM Comput. Surveys 51, 3 (2018), 59.
[91] Daniel Spiekermann, Tobias Eggendorfer, and Jörg Keller. 2015. Using network data to improve digital investigation
in cloud computing environments. In Proceedings of the International Conference on High Performance Computing
and Simulation. IEEE, 98–105.
[92] Abhinav Srivastava, Himanshu Raj, Jonathon Giffin, and Paul England. 2012. Trusted VM snapshots in untrusted
cloud infrastructures. In Proceedings of the International Workshop on Recent Advances in Intrusion Detection. Springer,
1–21.
[93] Yee-Yang Teing, Ali Dehghantanha, Kim-Kwang Raymond Choo, Mohd T. Abdullah, and Zaiton Muda. 2019. Greening
cloud-enabled big data storage forensics: Syncany as a case study. IEEE Trans. Sustain. Comput. 4, 2 (2019), 204–216.
[94] Yee-yang Teing, Ali Dehghantanha, Kim-Kwang Raymond Choo, Tooska Dargahi, and Mauro Conti. 2017. Forensic
investigation of cooperative storage cloud service: Symform as a case study. J. Forensic Sci. 62, 3 (2017), 641–654.
[95] Yee-Yang Teing, Ali Dehghantanha, Kim-Kwang Raymond Choo, and Laurence T. Yang. 2017. Forensic investigation
of P2P cloud storage services and backbone for IoT networks: BitTorrent Sync as a case study. Comput. Electr. Eng.
58 (2017), 350–363.
[96] Sean Thorpe, Indrajit Ray, Tyrone Grandison, and Abbie Barbir. 2011. The virtual machine log auditor. In Proceedings
of the IEEE 1st International Workshop on Security and Forensics in Communication Systems. 1–7.
[97] Philip M. Trenwith and Hein S. Venter. 2013. Digital forensic readiness in the cloud. In Proceedings of the Conference
on Information Security for South Africa. IEEE, 1–5.
[98] Jia Wang, Fang Peng, Hui Tian, Wenqi Chen, and Jing Lu. 2019. Public auditing of log integrity for cloud storage
systems via blockchain. In Proceedings of the International Conference on Security and Privacy in New Computing
Environments. Springer, 378–387.
[99] Junqing Wang, Miao Yu, Bingyu Li, Zhengwei Qi, and Haibing Guan. 2012. Hypervisor-based protection of sensitive
files in a compromised system. In Proceedings of the 27th Annual ACM Symposium on Applied Computing. ACM,
1765–1770.
[100] Shaun Waterman. 2018. New malware works only in memory, leaves no trace—Cyberscoop. Retrieved from https://
www.cyberscoop.com/kaspersky-fileless-malware-memory-attribution-detection/.
[101] Wenfeng Xia, Yonggang Wen, Chuan Heng Foh, Dusit Niyato, and Haiyong Xie. 2015. A survey on software-defined
networking. IEEE Commun. Surveys Tutor. 17, 1 (2015), 27–51.
[102] Qiao Yan, F. Richard Yu, Qingxiang Gong, and Jianqiang Li. 2016. Software-defined networking (SDN) and distributed
denial of service (DDoS) attacks in cloud computing environments: A survey, some research issues, and challenges.
IEEE Commun. Surveys Tutor. 18, 1 (2016), 602–622.
[103] Shams Zawoad, Amit Dutta, and Ragib Hasan. 2016. Towards building forensics enabled cloud through secure
logging-as-a-service. IEEE Trans. Depend. Secure Comput. 1 (2016), 1–1.
[104] Shams Zawoad and Ragib Hasan. 2013. Cloud forensics: A meta-study of challenges, approaches, and open problems.
arXiv preprint arXiv:1302.6312 (2013).
[105] Olive Qing Zhang, Markus Kirchberg, Ryan K. L. Ko, and Bu Sung Lee. 2011. How to track your data: The case for
cloud computing provenance. In Proceedings of the 3rd IEEE International Conference on Cloud Computing Technology
and Science. IEEE, 446–453.
[106] Shu-hui Zhang, Xiang-xu Meng, and Lian-hai Wang. 2017. SDNForensics: A comprehensive forensics framework
for software defined network. Development 3, 4 (2017), 5.

Received February 2019; revised July 2019; accepted September 2019
