
2019 22nd International Conference on Computer and Information Technology (ICCIT), 18-20 December, 2019

SMIFD: Novel Social Media Image Forgery Detection Database

Md. Mehedi Rahman*¹, Jannatul Tajrin*²,³, Abul Hasnat²,³, Naushad Uzzaman² and G. M. Atiqur Rahaman¹
¹ Computer Science and Engineering Discipline, Khulna University, Khulna, Bangladesh.
² blackbird.ai
³ Cognitive Insight Limited.

Abstract—Image forgery or manipulation changes the contents of a set of original images to create a new image. Unfortunately, manipulated images have become a growing concern with respect to spreading misinformation via image sharing in social media. Despite the availability of a large number of automatic Image Forgery Detection (IFD) methods, their evaluation on real-world benchmarks has been limited due to the lack of diverse datasets. Moreover, the motives behind the manipulations remain unclear. This research aims to address these issues by proposing a novel social media IFD database, called SMIFD-500, to evaluate the efficiency and generalizability of IFD methods. The unique property of this dataset is the availability of technical and social attributes in its ground truth annotations. These will help the scientific community to develop efficient methods by exploiting such annotations. Moreover, the dataset provides interesting statistics which highlight the motives of image manipulation from a social science perspective.

Index Terms—Image Forgery, Image Manipulation, Computer Vision Database.

Fig. 1: Illustration of image forgery: three original images (within the black box) and the Photoshop CS6 software have been used to create the manipulated image.
I. INTRODUCTION

Image forgery/manipulation/tampering refers to the process of changing the content of one or more original images in order to create a new image [2], [24]. It causes little to large differences in appearance and content between the source images and the manipulated one. Fig. 1 illustrates an example¹ of image forgery/manipulation. The goal of IFD is to identify a manipulated image and to localize the manipulated region within it. In the early days, image forgery was mostly performed to enhance the original image quality and appearance using basic tools. Today, a large number of advanced tools, computer software and web applications exist that make image forgery easy to perform. This has caused an increased use of manipulated images in news portals and particularly in social media.

Unfortunately, manipulated images have become a major source for spreading fake news [12], [22], [25] and create an extremely negative impact. Therefore, detecting manipulated images to verify authentic content has become a significantly demanding [3], [4], [8], [23] task for the Artificial Intelligence (AI) community. This research aims to aid the development of the automatic IFD task by providing a novel and challenging social media manipulated image dataset and hence to fight against the spread of misinformation.

Labeled image databases are one of the most important elements in developing and evaluating powerful AI-based methods that exploit advanced machine learning algorithms. Large public image datasets, such as ImageNet [16] and Microsoft COCO [13], significantly contributed to the development of powerful AI-enabled computer vision systems in two ways: (a) training and evaluating different deep convolutional neural network [9], [10] based learning methods with these datasets [16] and (b) using the trained weights to perform transfer learning on domain/task-specific smaller datasets [15]. Therefore, it is evident that datasets are an essential prerequisite to practically realizing computer vision based automatic systems. This research is motivated by the above facts and collects a real-world, practical dataset for the problem of social media IFD.

Numerous IFD datasets have already been proposed in the literature; see Section II-B for a detailed study and overview. These datasets can be characterized from different aspects, such as data source (artificial/real-world), image quality (high/low), image format (raw/compressed), etc. From a detailed study, it is evident that web-collected, low-quality and compressed images are very challenging for state-of-the-art forgery detection methods. This naturally makes many of the existing datasets an inappropriate choice for evaluation. Besides, they lack clear insights into numerous essential details, such as the motives and forgery type information.

* Denotes equal contribution.
¹ This example is created from the original shared image available at https://www.pinterest.fr/pin/317081629995063665/

978-1-7281-5842-6/19/$31.00 ©2019 IEEE
Authorized licensed use limited to: Murdoch University. Downloaded on June 16,2020 at 01:53:28 UTC from IEEE Xplore. Restrictions apply.
We believe that these missing attributes can play an important role in developing advanced and specialized forgery detection methods. For example, if the forgery involves the face of a person, then a method can particularly focus on detecting it. Similarly, if manipulated images often carry additional text, then the presence of scene text in an image should indicate an increased forgery probability. Considering these facts, we particularly focus on the development of a real-world dataset with the following properties:
• most appropriate image sources – the dataset must appropriately address the real-world problem and hence should be collected from relevant sources and saved without any further modification.
• low quality and compressed images – images should be collected in this format in order to represent the real challenges of forgery detection.
• labelled database – the dataset should provide ground truth with pixel-level details so that it can better discriminate the performance of different methods.
• additional attributes – the dataset should provide additional details which can be exploited to develop advanced methods.

This research provides a novel IFD dataset of 250 manipulated and 250 non-manipulated images. Our primary goal is to help the community evaluate IFD methods on social media manipulated images. Therefore, this dataset is called the SMIFD-500 dataset. SMIFD respects all the above-mentioned properties by collecting the images from appropriate sources and without applying any further modifications. Indeed, images shared in social media usually pass through a certain level of compression according to the different image uploading protocols [23]. Furthermore, SMIFD-500 provides an appropriate level of annotation by including additional attributes for each manipulated image, provided by multiple human IFD experts. Finally, the contributions of this research can be summarized as follows:
• We revisit and study IFD from the perspective of current challenges, available resources and future requirements to develop advanced methods.
• We collect a novel IFD dataset, called SMIFD-500, which appropriately addresses the existing challenges of the task.
• We provide ground truths from two different perspectives, which are not only useful for proper evaluation of the state-of-the-art methods, but will also be helpful for developing future specialized forgery detection methods.
• We provide an in-depth analysis of the collected dataset, which yields very interesting statistics on real-world image forgery in social media.
SMIFD-500 will be released publicly².

In the remaining part of this paper, we first provide the background of image forgery and study the state-of-the-art forgery detection datasets in Section II, describe our forgery dataset construction strategy in Section III, present a statistical analysis of our dataset and discuss it in Section IV, and finally draw conclusions in Section V.

² https://github.com/Rana110223/SMIFD-500

II. BACKGROUND AND THE STUDY OF RELATED WORK

In this section, we first provide the details of IFD and then briefly study the available datasets (related work). Besides, we draw the distinctions and indicate the novelty of our dataset.

A. Background of IFD

The idea of image forgery is to change the appearance and content of one or more original images in order to create a new image; see Fig. 1 for an example. Numerous different approaches have been used to perform image forgery using digital photo editing applications. IFD is primarily decomposed into active and passive forgery detection [22], [24]. Active forgery detection refers to the process of detecting image forgery based on the presence of a previously inserted invisible mark, such as a watermark, within the image. In contrast, passive forgery detection performs the task without any pre-specified information about the image and hence poses a significant challenge for IFD methods. This research only considers passive forgery detection.

The image forgery types have been categorized differently in the literature, e.g., 6 types in [24] and 4 types in [22]. An analysis of both categorizations provides a unified taxonomy of the forgery types: (a) splicing or cut-paste + erase-fill; (b) copy-move; (c) in-painting or image generation and (d) broad enhancement. Among these, while the first two categories unambiguously qualify as true forgeries [22], we also consider the third category a true forgery type based on our findings from the collected dataset (see Section IV). Fig. 2 illustrates examples of the different types of forgeries from our proposed SMIFD dataset.

Splicing [21], [25] is the most important type of image forgery found in web-collected images. It refers to the process of composing the contents of multiple images to create a single image. This is accomplished by copying the contents from one or more original images and then placing them into the target one; see Fig. 2(a) and (b) for examples. Copy-move [17] refers to the process of copying part of an image and placing this copy at one or more different locations within the same image; see Fig. 2(c) for an example. In-painting [2], [24] refers to the task of drawing over an image using different tools. This research considers the insertion of text as an in-painting forgery since it resembles the drawing of text patterns over an original image; see Fig. 2(d) for an example. Note that in-painting may equally be referred to as splicing [22].

Numerous techniques have been proposed over the years for automatic IFD, such as traditional image processing based [2], [8], [17], [21]–[24] and advanced deep convolutional neural network based [12], [20], [25] methods. While [2], [24] provide brief surveys of the earlier techniques for detecting a large number of forgery types, [22] provides a more recent review of the most prominent forgery types and benchmarks different recent traditional methods on the proposed Wild Web dataset [23].

B. IFD Datasets

A number of different labeled, publicly available datasets exist for performing image forensics tasks and detecting manipulated images. [24] and [22] provided a brief overview of
those datasets. Fig. 3 illustrates several samples from different datasets. One of the primary distinctions among these datasets is the nature of the generation or collection of the images, i.e., whether they are artificially created or collected from the web. Indeed, the web-collected datasets provide more realistic cases and hence impose enormous challenges on the forgery detection methods [22]. Therefore, based on this distinction, we subdivide the datasets into two initial categories: (1) artificial and (2) web-collected. Fig. 3(a)-(b) provide artificially created samples and Fig. 3(c)-(d) provide web-collected samples. The second distinction is based on the compression applied to the collected images. It is evident that, while uncompressed images make it relatively easy to apply basic image processing based algorithms to detect forgery, images compressed multiple times are significantly harder [22]. The third distinction is based on the resolution of the image: while high resolution/quality images are more appropriate for forgery detection, low resolution/quality images are very challenging. The fourth distinction is based on whether the image size is fixed in the collected dataset: while a fixed image size provides flexibility to design an algorithm, variable image sizes provide additional challenges. Next, we provide a detailed study of several publicly available datasets.

Fig. 2: Examples of different types of manipulated images (Column 1) from the proposed SMIFD database. The manipulated regions (Column 2) associated with each image are shown as binary masks. Rows (a) and (b) illustrate image splicing, row (c) illustrates copy-move based forgery and row (d) illustrates in-painting based forgery. The motives of manipulation in these images are (a) celebrity-movie; (b) celebrity-sports; (c) celebrity-social-media and (d) celebrity-sports.

Fig. 3: Examples of manipulated image samples from different datasets: (a) splicing from Columbia color [11]; (b) copy-move from COVERAGE [19]; (c) splicing from WildWeb [21] and (d) splicing from our proposed SMIFD.

a) The Columbia gray and color datasets [11], [14]: these are the earliest datasets in the field of image tampering with online access. In 2004, the dataset named Columbia Gray was released [14]. The images of this dataset are only grayscale blocks. Among the 322 photos, the authors captured 10 images and 310 are taken from the CalPhotos dataset. The images all have the same size of 128×128 and are stored in BMP format. The dataset has 933 authentic blocks and 912 cut-paste image blocks, created using Adobe Photoshop without any post-processing. However, this dataset has a main limitation: it does not provide the tampering masks required for the tampering localization task. The Columbia color and uncompressed image splicing detection and evaluation dataset [11] was published in 2006 to overcome the limitations of the previous dataset. It contains 183 original color images and 180 cut-paste images. The size of the 183 color images ranges from 757×568 to 1152×768, all saved in uncompressed TIFF format. These images capture both complete indoor and outdoor scenes. This dataset is better suited to realistic tampering detection; see Fig. 3(a) for an example.

b) The CASIA datasets [6]: two tampering datasets (v1.0 and v2.0) were publicly released on a website by the CASIA team in 2009. They contain both copy-move and cut-paste images. CASIA v1.0 has 1721 color images of a fixed size of 384×256, of which 800 are original and 921 are tampered. CASIA v2.0 has 12614 color images of sizes from 240×160 to 900×600, of which 7491 are authentic and 5123 are tampered. Two file formats are used: TIFF for uncompressed and JPEG for compressed images.

c) The MICC datasets [1]: three datasets named MICC-F220, MICC-F2000 and MICC-F600 have been released by the MICC team for copy-move IFD. In the
MICC-F220 dataset, 220 images of size 722×480 to 800×600 are found, half tampered and half original. The MICC-F2000 dataset contains 2000 JPEG images of size 2048×1536, where 1300 images are original and 700 are tampered. The above two datasets, much like the Columbia datasets, did not put much effort into selecting realistic tampering regions. To overcome these issues, another dataset, MICC-F600, was released in 2013; it contains 440 original and 160 tampered images saved in JPEG or PNG format.

d) The IMD dataset [5]: it was published in 2012 in order to study visually imperceptible copy-move tampering. More semantically meaningful regions, named "snippets", were manually selected by skilled photo editors instead of using rectangular regions. It contains 48 image pairs, where each pair of size 3000×2300 (on average) is formed from an original and its copy-move tampered version.

e) The CoMoFoD dataset [18]: it was proposed to overcome the shortcomings of the previously proposed copy-move forgery detection datasets, such as [1], [5]. It consists of 200 original images of size 512×512. For each image, copy-move and snippet transformations are performed. Additionally, 25 different post-processed versions have been synthesized, augmenting the total number of tampered images to 5200.

f) The COVERAGE dataset [19]: it is perhaps the most challenging copy-move based forgery detection dataset, released in 2016. It contains 100 original images together with their 100 tampered versions, saved in TIFF format. See Fig. 3(b) for an example from this dataset.

g) The Wild Web dataset [21]: released in 2015, this dataset collected all its images from the Web, in contrast to the previous datasets. Therefore, it represents the real-world scenario, and identifying the tampered images in this dataset is more difficult and challenging. It contains more than 13000 tampered images, established through careful verification, found from the investigation of 90 such forgery cases. Most of the images are in JPEG format, with some PNG, GIF or TIFF images. While most of the images are cut-paste images, a few copy-move and erase-fill images also exist in this dataset; see Fig. 3(c) for an example.

Besides these datasets, several other datasets are publicly available. However, we do not discuss them here as they are not as frequently used for evaluation as many of the above-mentioned datasets. Moreover, access to those datasets is often limited beyond academic use.

III. METHODOLOGY: DATABASE CONSTRUCTION

In this section, we briefly present our methodology to construct the proposed dataset. Fig. 4 provides an illustration of our SMIFD-500 construction strategy, which follows the steps below:
• Step 1: Collect images from the popular social media.
• Step 2: Categorize images as original and manipulated.
• Step 3: Perform fine and coarse annotation.
• Step 4: Perform attribute-level annotation.
• Step 5: Generate ground truth (binary mask and attributes) and save to database.

Fig. 4: Workflow of the SMIFD database construction method.

a) Collect images from popular social media: We initiate the database construction process by focusing on the most appropriate media where manipulated images are widely shared and available. Therefore, we primarily consider two popular social media, Facebook³ and Twitter⁴, as the sources for collecting images. Before collecting the images, we trained multiple experts with sufficient knowledge of image forgery from both technical and social perspectives. The technical knowledge helps us to better understand the image contents and collect probable manipulated images. The social perspective helps us to identify and select certain user groups which share such manipulated images. After collecting the images, we cross-check/validate each image with three expert observers. Note that, in order to support proper evaluation, we have collected both manipulated and original images of equivalent properties with respect to image quality and size.

b) Categorize images into original and manipulated images: This step applies an additional verification to categorize the images as original or manipulated. It is performed through the agreement of three observers, with a further reverse image search (using Google image search) in case an image is suspected by any observer of being original or manipulated. We simply discard an image if it remains suspect after performing the different verification steps.

c) Perform fine and coarse annotation of the manipulated images: The next step is to annotate the manipulated images to provide different levels of image detail, suitable for different types of manipulation detection methods as well as evaluation procedures. We provide two types of detail: fine and coarse. The fine annotation consists of pixel-level details, meaning that each image pixel is labelled as either original or manipulated. The coarse annotation consists of simply drawing a bounding box around the manipulated region. We use the popular VGG annotation tool [7] to accomplish these tasks. While the fine annotation is performed by drawing polygons, the coarse annotation is performed with rectangles. In Fig. 2, the examples in the left column show the overlaid annotated polygons and the right column shows their outcome as a binary mask, where the white pixels indicate the manipulated region. Alternatively, one can exploit the binary mask from the pixel-level annotation and extract the region properties of the image connected components to automatically obtain the rectangles or bounding boxes. Note that the original images do not need annotations, as they do not contain any manipulation, and hence their ground truth will contain a single-label image and an empty bounding box.

³ www.facebook.com
⁴ www.twitter.com
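The fine and coarse annotations described above are mechanically related: the coarse rectangle can be derived from the fine polygon. The sketch below is a minimal illustration of that relation, assuming plain NumPy and integer pixel coordinates; the function names are ours, not part of any released SMIFD-500 tooling. It rasterizes a polygon into a binary mask using the classic even-odd ray-casting test and then reads the bounding box off the mask.

```python
import numpy as np

def polygon_to_mask(height, width, xs, ys):
    """Rasterize a polygon (vertex lists xs, ys) into a boolean mask
    using the even-odd ray-casting (pnpoly) test, evaluated for every
    pixel centre at once."""
    yy, xx = np.mgrid[0:height, 0:width]
    inside = np.zeros((height, width), dtype=bool)
    n = len(xs)
    j = n - 1
    for i in range(n):
        if ys[i] != ys[j]:  # horizontal edges never toggle the parity
            t = (yy - ys[i]) / (ys[j] - ys[i])
            crossing = ((ys[i] > yy) != (ys[j] > yy)) & \
                       (xx < xs[i] + t * (xs[j] - xs[i]))
            inside ^= crossing
        j = i
    return inside

def mask_to_bbox(mask):
    """Coarse annotation from the fine one: the tightest rectangle
    (x_min, y_min, x_max, y_max) enclosing all manipulated pixels."""
    ys_idx, xs_idx = np.nonzero(mask)
    return (int(xs_idx.min()), int(ys_idx.min()),
            int(xs_idx.max()), int(ys_idx.max()))

# A rectangular "manipulated" region on a toy 10x10 image:
mask = polygon_to_mask(10, 10, xs=[2, 8, 8, 2], ys=[2, 2, 6, 6])
print(mask.sum())          # → 24 pixels flagged as manipulated
print(mask_to_bbox(mask))  # → (2, 2, 7, 5)
```

The same mask-to-box derivation is what makes the bounding boxes recoverable automatically from the pixel-level ground truth, so both granularities stay consistent by construction.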
d) Perform attribute-level annotation of the manipulated images: Attribute-level annotation is a unique property of our proposed SMIFD dataset. It requires high-level technical expertise as well as sufficient general knowledge from social and political perspectives. We perform two types of attribute annotation: a technical perspective – labelling an image with the appropriate manipulation type (i.e., spliced, copy-move or in-painting) – and a social perspective – labelling an image with the motive of the manipulation, i.e., political, celebrity, general or natural. Therefore, these types of annotation provide very rich information about the image manipulation and hence can aid the development of advanced IFD methods.

e) Generate ground truth (binary mask and attributes) and save to database: This step reforms and accumulates all types of annotations into an appropriate format, combines them, associates them with the image and finally stores them in the dataset. The annotation reform is applied to the polygons in order to generate binary masks from them. This is simply done by labelling each image pixel as original or manipulated based on its location, i.e., inside or outside the polygons. Indeed, this binary mask can also be exploited to extract the region properties of the image connected components and automatically obtain the rectangles or bounding boxes. Finally, these fine and coarse annotations are associated with the attribute-level annotations for each image.

Fig. 2 provides examples from the SMIFD-500 database and its associated annotations, such as the fine labeling, manipulation type and motive attributes.

Fig. 5: Several examples of manipulated images from the proposed SMIFD-500 dataset.
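The accumulation in Step 5 — one record combining the fine mask, the derived coarse box and the two attribute labels — can be sketched as follows. The field names and JSON layout below are illustrative assumptions on our part (the paper does not specify the released file format), but the logic mirrors the text: a manipulated image gets a mask-derived box plus the two attribute labels, while an original image gets a single label and an empty bounding box.

```python
import json
import numpy as np

def build_ground_truth(image_id, mask, manipulation_type, motive):
    """Accumulate the fine mask, the derived coarse box and the two
    attribute labels into one serializable record (hypothetical schema,
    not the released SMIFD-500 format)."""
    ys, xs = np.nonzero(mask)
    if xs.size:  # manipulated image: box derived from the mask
        bbox = [int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]
        label = "manipulated"
    else:        # original image: single label, empty bounding box
        bbox = []
        label = "original"
    return {
        "image_id": image_id,
        "label": label,
        "mask_path": f"{image_id}_mask.png",     # mask stored alongside (assumed)
        "bbox": bbox,
        "manipulation_type": manipulation_type,  # spliced / copy-move / in-painting
        "motive": motive,                        # political / celebrity / general / natural
    }

mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
record = build_ground_truth("img_0007", mask, "spliced", "celebrity")
print(json.dumps(record))  # one JSON line per annotated image
```

Storing both granularities in one record keeps the pixel-level and bounding-box ground truths, together with the attributes, in sync for every image.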

IV. FINAL DATASET, STATISTICS AND DISCUSSION

This section provides the details and relevant statistics of the finalized dataset by analyzing its properties based on the ground truths obtained from the labels and attributes.

The proposed SMIFD-500 dataset finally constitutes 500 images, selected from a set of 800 initially collected images. This is a balanced dataset, where the original and manipulated image sets each contain 250 images. Fig. 5 illustrates several example images from this dataset.

First, we analyse the dataset based on the image-level annotations. We compute the manipulated region coverage as a measure for each image and then compute the histogram over the entire database. For a given image (with the binary mask as ground truth), the region coverage is simply computed by counting the total number of manipulated pixels and dividing it by the total number of pixels in the image. Fig. 6 illustrates the resulting histogram.

Fig. 6: Histogram of the manipulated region coverage.

We observe that the manipulation in the collected dataset mostly occurs in a relatively small area (less than 30%) of the image. There are very few images with a relatively larger manipulated region, caused by the insertion of larger objects within the image. The smaller manipulation regions often consist of the insertion of smaller objects such as human faces.

Next, we analyse the dataset based on the attributes concerning the forgery types and the motives/targets of manipulation. We compute the distributions of the relevant information from the attribute-based ground truth. Fig. 7(a) illustrates the distribution of the forgery types, from which it is evident that most (59.08%) of the manipulations concern image splicing. The next category of forgery is in-painting (37.62%). Very few images are manipulated with copy-move forgery. Fig. 7(b) provides the distribution of the motives/targets of manipulation. It is clear that while celebrities (50.50%) are the primary target, the general category (40.59%) also appears widely in the dataset. There are few manipulated images of natural scenes (6.93%) and politicians (1.98%). Besides, it is clearly observed that humans (celebrities and politicians) are the primary targets of manipulation, with the aim of creating entertaining content as well as spreading misinformation.

In the future, we aim to extend this work as follows: (a) apply the publicly available image manipulation tools and state-of-the-art IFD methods on our dataset to provide a benchmark and (b) extend this dataset by regularly collecting more images to construct a set of 10000 images.

Fig. 7: Distribution of the (a) forgery types and (b) motives of manipulation in the dataset.

V. CONCLUSION

In this paper, we proposed a novel social media IFD database to evaluate the efficiency and generalizability of IFD methods. Besides providing a set of challenging manipulated images, it provides different types of ground truth annotations from both technical and social communication perspectives. Therefore, it will be beneficial for the community in several ways: (a) the new benchmark helps to understand the efficiency and generalizability of IFD methods; (b) the attribute-based annotations help to develop efficient methods by exploiting the information extracted from them and (c) the interesting statistics highlighting the motives of image manipulation help to understand the trend of image-based misinformation spreading in social media.

REFERENCES

[1] I. Amerini, L. Ballan, R. Caldelli, A. Del Bimbo, L. Del Tongo, and G. Serra. Copy-move forgery detection and localization by means of robust clustering with J-linkage. Signal Processing: Image Communication, 28(6):659–669, 2013.
[2] G. K. Birajdar and V. H. Mankar. Digital image forgery detection using passive techniques: A survey. Digital Investigation, 10(3):226–245, 2013.
[3] C. Boididou, S. E. Middleton, Z. Jin, S. Papadopoulos, D.-T. Dang-Nguyen, G. Boato, and Y. Kompatsiaris. Verifying information with multimedia content on Twitter. Multimedia Tools and Applications, 77(12):15545–15571, 2018.
[4] C. Boididou, S. Papadopoulos, Y. Kompatsiaris, S. Schifferes, and N. Newman. Challenges of computational verification in social multimedia. In Proceedings of the 23rd International Conference on World Wide Web, pages 743–748. ACM, 2014.
[5] V. Christlein, C. Riess, J. Jordan, C. Riess, and E. Angelopoulou. An evaluation of popular copy-move forgery detection approaches. IEEE Transactions on Information Forensics and Security, 7(6):1841–1854, 2012.
[6] J. Dong, W. Wang, and T. Tan. CASIA image tampering detection evaluation database. In 2013 IEEE China Summit and International Conference on Signal and Information Processing, pages 422–426. IEEE, 2013.
[7] A. Dutta and A. Zisserman. The VIA annotation software for images, audio and video. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, New York, NY, USA, 2019. ACM.
[8] S. Elkasrawi, A. Dengel, A. Abdelsamad, and S. S. Bukhari. What you see is what you get? Automatic image verification for online news content. In 2016 12th IAPR Workshop on Document Analysis Systems (DAS), pages 114–119. IEEE, 2016.
[9] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016.
[10] J. Gu, Z. Wang, J. Kuen, L. Ma, A. Shahroudy, B. Shuai, T. Liu, X. Wang, G. Wang, J. Cai, et al. Recent advances in convolutional neural networks. Pattern Recognition, 2017.
[11] Y.-F. Hsu and S.-F. Chang. Detecting image splicing using geometry invariants and camera characteristics consistency. In 2006 IEEE International Conference on Multimedia and Expo, pages 549–552. IEEE, 2006.
[12] M. Huh, A. Liu, A. Owens, and A. A. Efros. Fighting fake news: Image splice detection via learned self-consistency. In Proceedings of the European Conference on Computer Vision (ECCV), pages 101–117, 2018.
[13] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[14] T.-T. Ng, S.-F. Chang, and Q. Sun. A data set of authentic and spliced image blocks. Columbia University, ADVENT Technical Report 203-2004, 2004.
[15] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1717–1724, 2014.
[16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
[17] S. Sadeghi, S. Dadkhah, H. A. Jalab, G. Mazzola, and D. Uliyan. State of the art in passive digital image forgery detection: copy-move image forgery. Pattern Analysis and Applications, 21(2):291–306, 2018.
[18] D. Tralic, I. Zupancic, S. Grgic, and M. Grgic. CoMoFoD—new database for copy-move forgery detection. In Proceedings ELMAR-2013, pages 49–54. IEEE, 2013.
[19] B. Wen, Y. Zhu, R. Subramanian, T.-T. Ng, X. Shen, and S. Winkler. COVERAGE—a novel database for copy-move forgery detection. In IEEE International Conference on Image Processing (ICIP), pages 161–165, 2016.
[20] Y. Wu, W. AbdAlmageed, and P. Natarajan. ManTra-Net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
[21] M. Zampoglou, S. Papadopoulos, and Y. Kompatsiaris. Detecting image splicing in the wild (Web). In 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pages 1–6. IEEE, 2015.
[22] M. Zampoglou, S. Papadopoulos, and Y. Kompatsiaris. Large-scale evaluation of splicing localization algorithms for web images. Multimedia Tools and Applications, 76(4):4801–4834, 2017.
[23] M. Zampoglou, S. Papadopoulos, Y. Kompatsiaris, R. Bouwmeester, and J. Spangenberg. Web and social media image forensics for news professionals. In Tenth International AAAI Conference on Web and Social Media, 2016.
[24] L. Zheng, Y. Zhang, and V. L. Thing. A survey on image tampering and its detection in real-world photos. Journal of Visual Communication and Image Representation, 58:380–399, 2019.
[25] P. Zhou, X. Han, V. I. Morariu, and L. S. Davis. Learning rich features for image manipulation detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1053–1061, 2018.
