
Journal Pre-proofs

High Capacity Adaptive Image Steganography with Cover Region Selection using Dual-Tree Complex Wavelet Transform

Inas Jawad Kadhim, Prashan Premaratne, Peter James Vial

PII: S1389-0417(19)30511-X
DOI: https://doi.org/10.1016/j.cogsys.2019.11.002
Reference: COGSYS 916

To appear in: Cognitive Systems Research

Received Date: 3 April 2019


Revised Date: 11 July 2019
Accepted Date: 11 November 2019

Please cite this article as: Jawad Kadhim, I., Premaratne, P., James Vial, P., High Capacity Adaptive Image
Steganography with Cover Region Selection using Dual-Tree Complex Wavelet Transform, Cognitive Systems
Research (2019), doi: https://doi.org/10.1016/j.cogsys.2019.11.002

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover
page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version
will undergo additional copyediting, typesetting and review before it is published in its final form, but we are
providing this version to give early visibility of the article. Please note that, during the production process, errors
may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

© 2019 Elsevier B.V. All rights reserved.



High Capacity Adaptive Image Steganography with Cover Region Selection using Dual-Tree Complex Wavelet Transform

Inas Jawad Kadhim1,2 [0000-0001-9404-5653], Prashan Premaratne 1, Peter James Vial1


1 School of Electrical, Computer and Telecommunications Engineering
University of Wollongong, North Wollongong, NSW 2522 Australia
2 Electrical Engineering Technical College, Middle Technical University, Baghdad, Iraq

ijk720@uowmail.edu.au

Abstract. The importance of image steganography is unquestionable in the field of secure multimedia
communication. Imperceptibility and high payload capacity are among the crucial requirements of any mode
of steganography. The proposed work modifies edge-based image steganography to provide higher payload
capacity and imperceptibility by making use of machine learning techniques. The approach uses an adaptive
embedding process over Dual-Tree Complex Wavelet Transform (DT-CWT) subband coefficients. Machine
learning based optimization techniques are employed to embed the secret data over optimal cover-image
blocks with minimal retrieval error. The embedding process creates a unique secret key which is imperative
for the retrieval of the data and needs to be transmitted to the receiver side via a secure channel. This
enhances security and prevents data hacking by intruders. The algorithm performance is evaluated with
standard benchmark parameters such as PSNR, SSIM, CF, retrieval error, BPP and histogram analysis. The
results of the proposed method show a stego-image with PSNR above 50 dB even with a dense embedding
of up to 7.87 BPP. This clearly indicates that the proposed work surpasses state-of-the-art image
steganographic systems significantly.

Keywords: Image steganography • Adaptive embedding • DT-CWT • Edge detection • K-NN method

1 Introduction

Digital communication currently spans many different data formats and structures. This has led to
an unparalleled quest for secure communication. Encryption and Information Hiding are the two major
categories in information security systems (Kadhim, Premaratne, Vial, & Halloran, 2018; Premaratne &
Premaratne, 2012b). Both approaches are geared towards securing information but their techniques differ.
Hence, a lot of innovative information hiding techniques have been proposed to conceal information in
various data types such as image, video, audio, text etc. (Joseph & Vishnukumar, 2015; Kadhim, 2012).
Image Steganography has emerged as an exciting and significant research field. Watermarking is another
class of information hiding mechanism, where multimedia data is involved for copyrighting or
authentication purposes (Kadhim, Premaratne, Vial, et al., 2018). In a generic manner, image
steganography can be defined as a process of hiding private, secret data in images, without creating any
noticeable changes over the host image (Ogihara, Nakamura, & Yokoya, 1996). The embedded information
in the cover image can be text messages, binary bits or even another secret image. The embedding should
be performed in such a manner as to reduce the cover image distortion as much as possible (Al-Dmour &
Al-Ani, 2016; R. A. Wazirali & Chaczko, 2015). This is critical since the details of the cover image need to
be preserved to resemble the stego-image similar to the cover image. So, eavesdroppers cannot detect the
presence of any added information inside the stego-image. Another important factor needs to be considered
is the payload capacity. It defines the amount of secret data that can be embedded inside the cover image.
Generally, it is measured in terms of Bits Per Pixel (BPP). BPP denotes the average number of bits that can
be embedded in a single pixel of the cover media. The ability of the steganographic algorithm to withstand
different attacks is collectively termed its ‘robustness’. While designing a steganographic
system, a trade-off among the above requirements needs to be achieved.
Steganographic techniques are broadly divided into two categories based on the nature of the embedding
process: spatial domain and transform domain (Cheddad, Condell, Curran, & Mc Kevitt, 2010; Kadhim,
Premaratne, Vial, et al., 2018; Provos & Honeyman, 2003). In the spatial domain approach, the information
bits are directly hidden over the cover image pixel values. In the transform domain approach, the message
is embedded after applying a suitable transformation of the image, allowing more bits to be embedded
without altering the spatial domain pixel values of the stego-image. Even though the process is more
complicated than the spatial domain method, the hidden data resides in more robust areas providing a
superior resistance over statistical attacks and is unlikely to be decrypted by unintended recipients (Chu,
You, Kong, & Ba, 2004; Siddharth Singh & Siddiqui, 2014).
There are many transform domain techniques that can be widely used in many image processing areas
including data hiding (Patel & Patel, 2015). Some of the most popular and widely used transforms include
various forms of the discrete cosine transform (DCT), the discrete Wavelet Transform (DWT) and the
Discrete Fourier Transform (DFT). DCT is a widely used transform domain technique in the first
generation transform-based embedding systems and later was replaced by DWT due to its better
embedding capacity and imperceptibility. The DWT has been used as a popular tool for embedding bits in
the frequency coefficients of higher subbands of the cover image. However, DWT has some drawbacks as
well: it lacks directional selectivity for diagonal features and shift invariance. The Complex Wavelet
Transform (CWT) is a complex valued extension to DWT (Mabtoul & Aboutajdine, 2008). The CWT
utilizes complex value filtering that decomposes a signal into real and imaginary parts. The dual-tree
complex wavelet transform (DT-CWT) is another modified version of CWT which employs dual CWT
transformation using two separate sets of filter coefficients (Kadhim, Premaratne, Vial, et al., 2018; S.
Kumar & Muttoo, 2013; Selesnick, Baraniuk, & Kingsbury, 2005). The DT-CWT has a modest amount of
redundancy, but it provides shift invariance and good directional selectivity. Another advantage of DT-CWT
is that the number of transform coefficients is higher than in other transforms, which helps in
hiding more secret bits.
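To make the subband structure referred to above concrete, the following minimal sketch performs a one-level DT-CWT of an image and inspects its six directional, complex-valued highpass subbands before reconstructing the image. It assumes the open-source `dtcwt` Python package, which is an illustrative choice and not necessarily the implementation used in this work.

```python
# Minimal DT-CWT round-trip sketch (assumes the open-source `dtcwt` package;
# the implementation used in this paper may differ).
import numpy as np
import dtcwt

cover_plane = np.random.rand(256, 256)      # stand-in for one grayscale cover plane

transform = dtcwt.Transform2d()
pyramid = transform.forward(cover_plane, nlevels=1)

# The level-1 highpass array holds six directional, complex-valued subbands,
# so both real and imaginary coefficient planes are available for hiding data.
print("lowpass shape: ", pyramid.lowpass.shape)
print("highpass shape:", pyramid.highpasses[0].shape)   # (rows, cols, 6), complex

# Near-perfect reconstruction back to the spatial domain.
reconstructed = transform.inverse(pyramid)
print("max reconstruction error:", np.abs(reconstructed - cover_plane).max())
```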
Machine learning has become one of the most popular tools in image processing (Han & Huang, 2006;
Kadhim, Premaratne, Vial, et al., 2018; Premaratne, Nguyen, & Premaratne, 2010). In recent years, intense
research in artificial intelligence has led to many advanced machine learning techniques suitable for
various kinds of applications and data types.
learning was initially implemented for recognition (Zhao, Huang, & Sun, 2004), optimization (Huang &
Du, 2008) and object retrieval purposes. Later on, it was further used for image processing applications
such as enhancement, segmentation, classification etc. (X.-F. Wang & Huang, 2009; X.-F. Wang, Huang,
& Xu, 2010). For standard image steganographic techniques, there is always a tradeoff between embedding
capacity and stego-image distortion. It is also important to maintain minimum retrieval error and resistance
to data manipulation by an intruder. In machine learning based steganographic techniques, advanced
machine learning methods are employed to balance these tradeoffs while maintaining high efficiency in the
entire process (Atee, Ahmad, Noor, Rahma, & Aljeroudi, 2017).
This work aims for an advanced steganographic approach that maintains high imperceptibility of the
stego-image relative to the cover image together with a high payload capacity. The main contribution in this
paper is an adaptive embedding procedure over optimal regions in the cover image based on a machine
learning based region selection. In this work, adaptive patch selection from the cover data and expert
embedding process is proposed. The cover image is divided into multiple non-overlapping blocks and
secret data patches are indirectly concealed in the subbands of DT-CWT coefficients of the selected cover
block. Edge detection techniques along with texture features are used to determine high fidelity and smooth
patches. This helps to embed bits only over highly textured regions and smooth regions are discarded from
the embedding process and this helps to reduce the distortion of the stego-image. The indirectly embedding
process helps to prevent unauthorized decoding. Section 2 presents a brief survey of recent state-of-the-art
image steganography techniques. Section 3 contains a detailed description of the proposed method.
Section 4 provides a detailed overview of the experimental results and performance analysis. Finally,
concluding remarks are drawn in section 5.

2 Related works

The characteristics of the human visual system (HVS) can be exploited to model a highly imperceptible
image steganographic system. The main idea behind this scheme is that changing edge pixels during
embedding is safer than altering the smooth, low-frequency pixels. Edge detectors such
as Canny, Sobel, Prewitt and Laplacian of Gaussian have been widely used to identify the edge pixels from
images (Atta & Ghanbari, 2018; Dadgostar & Afsari, 2016; Ghosal, Mandal, & Sarkar, 2018; Khan &
Bianchi, 2018; Lee, Chang, Xie, Mao, & Shi, 2018; Smitha & Baburaj, 2018; R. A. Wazirali & Chaczko,
2015).
The authors of (Al-Dmour & Al-Ani, 2016) proposed an edge-based image steganography scheme
using exclusive-OR (XOR) coding. The cover images are initially divided into 3×3 blocks and secret data bits are
embedded in each block except in the corner and central pixels. In order to utilize human visual sensitivity,
the cover image is classified into different categories based on the sharpness content and high sharp blocks
are preferred for the embedding. Islam, Modi, and Gupta (2014) proposed a steganography
technique where edges in the cover image are exclusively chosen to hide messages using a two
Least Significant Bit substitution (2LSB) scheme; the more data to be embedded, the
weaker the edges that must be used for embedding. The method performs better than, or at least on par with, the state-of-the-
art steganography techniques with higher embedding capacity. A spatial domain steganographic approach
using Octonary Pixel Value Differencing (O-PVD) is narrated in an article (Balasubramanian, Selvakumar,
& Geetha, 2014). Under this approach, each pixel is paired with all the 8-neighbourhood locations to boost
the embedding capacity. An adaptive decision making process is used to limit the number of embedding bits
in each pixel location based on the nature of the cover image pixels. To reduce the risk of detecting the cover
image pixel changes, a re-adjustment stage is also employed to keep the modified pixels close to their original
intensity levels. The approach shows much improvement in carrier capacity compared with most of
the PVD based steganographic systems.
Dadgostar and Afsari (2016) proposed a technique that hides secret data into cover media by altering its
most insignificant components such that an unauthorized user will not be aware of the existence of the
secret data. In this scheme, more bits were embedded into the edge areas rather than in the smooth areas.
This scheme uses the interval valued intuitive fuzzy edge detection method along with the modified LSB
substitution (MLSB-IVFE). The modified LSB substitution reduces the distortion in the stego-image and
allows larger payloads of secret data with minimal degradation. Another adaptive embedding
technique based on edge selection and hybrid Hamming coding is discussed in the article (Lee et al., 2018).
Canny edge detection is used here to detect the amount of sharpness in the cover image regions. In another
article (Khan & Bianchi, 2018), an Ant Colony Optimization (ACO) scheme is used to identify the target
region and LSB based embedding is utilized. Wazirali and Chachzo (2015) described a gradient based
edge detection approach which clustered image regions into edge and non-edge classes. The embedding
density per pixel is decided based on the edge content, and the image imperceptibility is high in these
systems. Yu (2015) proposed an adaptive edge-based steganography method where median
edge detection is used to identify the flat and complex regions and LSB and the Optimal Pixel Adjustment
(OPA) based embedding is utilized.
In order to improve the payload capacity and security along with high imperceptibility, several
frequency domain techniques have also been discussed here. Atta and Ghanbari (2018) proposed an
adaptive technique based on Wavelet Packet (WP) Decomposition and Neutrosophic Set (NS). Initially,
data is decomposed into subbands using WPD. A gradient based edge detection approach is utilized in the
NS domain (NSED). This approach is used to classify each WP tree into edge and non-edge classes. The
main drawback of this work is that the distortion of the stego-image is relatively high for higher payloads
and allows an intruder to easily tamper with the stego-image, which makes it unfavorable for the actual receiver.
Kumar and Kumar (2017) introduced a modified DWT steganographic system that relies on hiding blocks
of the secret data into areas of the cover image based on matching. This approach resulted in limited payload capacity,
although it achieved better imperceptibility and overall better visual quality compared to the state-of-
the-art approaches. Singh and Siddiqui (2013) proposed a steganography technique based on Complex
Wavelet Transform (CWT), singular value decomposition and chaotic sequences. The researchers claim
that due to its shift invariance and perfect reconstruction property, complex wavelet transform improves
robustness against geometrical attacks. The secret bits are indirectly embedded over the low frequency
subbands of the CWT using singular value decomposition. The secret data is scrambled using a chaotic
sequence to increase resistance to intruder attacks. The main disadvantage of this method is that the decryption
accuracy degrades for higher payloads. Kumar and Muttoo (2013) explored the applications of Dual-Tree
Complex Wavelet Transforms (DT-CWT) to steganography and compared it with other similar transforms
like the DWT, the wavelet-like Slantlet domain and the Double Density Dual-Tree Wavelet
(DDDWT) domain. Their work states that the DT-CWT domain outperforms the others in terms of
imperceptibility and embedding capacity. Another similar approach is explained in (Sathisha et al.,
2013). Here a new embedding and retrieval method based on coefficient replacement and adaptive scaling
is explained for hiding secret bits indirectly on the cover image and to prevent data loss of secret image at
the receiver end. The DT-CWT is applied on the cover image and a level-2 Lifting Wavelet Transform
(LWT-2) is applied on the secret image. In an article by Kadhim, Premaratne, and Vial (2018b) a novel
steganographic approach is proposed based on secret data hiding over DT-CWT subbands of the cover
image. The algorithm selected an adaptive block embedding system which separately conceals low and
high frequency information from the secret data into corresponding low or high frequency subbands. The
block embedding helps to increase the payload capacity to a fairly high level. In an article by Muhammad,
Bibi, Mahmood, Akram, and Naqvi (2017) a reversible blind data-hiding methodology is proposed. Firstly,
data is decomposed into subbands using the integer wavelets. Fresnelet transform is then utilized to encrypt
the secret data by choosing a unique key parameter to construct a dummy pattern. This dummy pattern is
then embedded into an approximated subband of the cover image. The main drawback of this work is that
the distortion of the stego-image is relatively high and allows an intruder to easily tamper with the
stego-image, which makes it useless for the actual receiver.
In order to make the steganographic system more imperceptible to visual and computer-based analysis, several
additional techniques were proposed. Selecting good cover images from database and optimal cover region
identification are some of the major concerns in that research area (Hamid, Yahya, Ahmad, & Al-Qershi,
2012; Sajedi & Jamzad, 2008; M S Subhedar & Mankar, 2017).
A thorough review of the above literature leads one to conclude that many researchers are content with using
edge areas to embed the secret data, as the human visual system is less sensitive to
changes in these areas. Unfortunately, edge-based steganography has resulted in very limited
embedding capacity. Also, the stego-image may be distorted, especially over the smooth areas, and becomes
vulnerable to attack. Our proposed method aims for an adaptive embedding procedure over optimal regions
in the cover image based on machine learning based region selection and DT-CWT.

3 Proposed Method

3.1 Motivation

The proposed work is an expansion of the research done by Kadhim, Premaratne, and Vial (2018). The
paper by Kadhim et al. (2018) described a steganographic system over the DT-CWT domain using Canny
edge detection and adaptive bit-by-bit embedding over the subbands. Even though the
experimental results showed that the achieved stego-image has good PSNR, high capacity and a high
similarity index compared with the state-of-the-art, we found a few flaws, which are listed
below:

 With only Canny edge detection, the selection of optimum regions will not be perfect.
 Bit-by-bit embedding over the coefficient plane results in higher retrieval error.
 The embedding process degrades the cover image coefficients due to the use of large offsets.
In order to solve the above issues, we have made a number of modifications to achieve a high-capacity
steganographic system with good imperceptibility and security. To reduce the embedding error, we
introduced a region selection stage before the embedding stage. The machine learning based block
selection helps to select optimal cover image regions with high textural content. For proper
texture region selection, we use a K-NN based machine learning algorithm that is fed with texture features
from the cover image blocks and statistical GLCM features from the corresponding edge-detected blocks,
and compares them against a trained model. In addition, the algorithm uses a patch-by-patch pixel value
embedding over the DT-CWT subbands of the selected cover image blocks using a template matching
approach. This boosts the payload capacity and delivers higher imperceptibility in the stego-image.

3.2 Algorithm

The proposed method includes a training phase as well as a testing phase. During the training phase, a
K-NN based machine learning model is generated. During the testing phase, the trained model is used to
classify the cover image patches as favorable or unfavorable by evaluating the textural content of each
block. The stages are detailed below:

3.2.1 Training phase

1. Collection of training data: The ultimate aim of the training phase is to classify the cover patches as
textured or smooth. Highly textured image portions are labeled as positive-class data and
low-frequency plain image blocks are used as negative-class training data. Here we used 100
images in each class for training purposes (Liu & Chen, 2015; Olmos & Kingdom, 2004). To
generate our smooth image database, we cropped only smooth regions from some images in
the above databases. A few samples of positive- and negative-class training data are shown in
Figure 1.

2. Feature extraction: Here, a hybrid feature extraction is adopted to evaluate the texture content of
the image. Segmentation-based Fractal Texture Analysis (SFTA) features (Costa, Humpire-Mamani,
& Traina, 2012) are extracted from the cover image blocks, along with GLCM-based features from the
edge image. Canny edge detection is used for the edge detection process. The GLCM features used are
energy, entropy, homogeneity and correlation.
3. K-NN training: K-NN is a supervised learning system and a widely accepted classifier in machine
learning applications, especially when the features are linearly separable. The classifier makes its
prediction based on the K nearest trained feature samples. Here, the feature space composed
of SFTA and GLCM vectors from the positive- and negative-class training images is used along
with the proper ground truth. These trained features are used as a reference in the testing phase
(a simplified sketch of this training pipeline is given after Figure 1).

(a) High frequency (texture) positive images

(b) Low frequency (smooth) negative images

Fig. 1. A sample of Training images
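As a rough illustration of the training phase described above, the sketch below extracts GLCM statistics from Canny edge maps together with a simple texture measure standing in for the SFTA descriptor, and fits a K-NN classifier with K = 5. The scikit-image and scikit-learn calls, the toy training data and the SFTA stand-in are assumptions made for illustration; the actual features and parameters may differ.

```python
# Sketch of the training pipeline: GLCM statistics of the Canny edge map plus a
# crude texture measure standing in for SFTA, fed to a K-NN classifier (K = 5).
# Library choices and the SFTA substitute are illustrative assumptions.
import numpy as np
from skimage.feature import canny, graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def block_features(block):
    """Hybrid feature vector for one grayscale cover block (values in [0, 1])."""
    edges = canny(block).astype(np.uint8)                 # binary edge map
    glcm = graycomatrix(edges, distances=[1], angles=[0],
                        levels=2, symmetric=True, normed=True)
    energy = graycoprops(glcm, 'energy')[0, 0]
    homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]
    correlation = graycoprops(glcm, 'correlation')[0, 0]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p * np.log2(p + 1e-12))             # GLCM entropy
    texture = block.std()                                  # crude stand-in for SFTA
    return np.array([energy, entropy, homogeneity, correlation, texture])

# Toy training data: 100 textured (positive) and 100 smooth (negative) blocks.
rng = np.random.default_rng(0)
textured = [rng.random((32, 32)) for _ in range(100)]
smooth = [0.5 + 0.01 * rng.random((32, 32)) for _ in range(100)]

X = np.array([block_features(b) for b in textured + smooth])
y = np.array([1] * 100 + [0] * 100)                        # 1 = textured, 0 = smooth

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
# At test time, knn.predict(block_features(cover_block)[None, :]) flags the block
# as favorable (1) or unfavorable (0) for embedding.
```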

3.2.2 Testing phase

Both the Embedding and Decoding stages come under this phase. The main steps in the embedding and
decoding stages are explained in this section.

3.2.2.1 Secret Data Embedding

The flow diagram in Figure 2 depicts the proposed embedding approach. The steps of embedding
process are described below:

1. Selection of the carrier image and secret data: The cover image can be either a colour image or a
grayscale image. Here, a colour image with 24-bit colour map is chosen as the cover media. The
secret data can be an image, a text message or any sort of binary data. In case of non-image secret
data, it should be converted to a matrix format.

2. Pre-processing: The carrier image needs to be divided into non-overlapping blocks of a specific
size. For example, if using a block size of 32 × 32, the cover image should be resized to 256 ×
256, 384 × 384, 512 × 512 etc. If both the cover image and secret data are colour images, the
individual colour planes are separated and the colour planes of the secret images are embedded
over corresponding colour planes of the cover image. At the same time secret data will be divided
into 2×2 pixel patches and saved as a patch array.

3. Detect edge details of the cover image: The edge details in the cover image are calculated using
Canny edge detector (Canny, 1986) with a suitable threshold. In the Canny algorithm, the edge
detection is based on a set of criteria that maximize the probability of detecting true edges while
minimizing the probability of irrelevant edges. The detection largely depends on the threshold
used: the lower the threshold, the weaker the edges that will be detected. Here we use a threshold
of 0.5 to select moderately strong edges from the cover image.

4. Divide the cover image into non-overlapping patches and extract GLCM features: The cover
image is divided into square blocks (e.g. 32 × 32). The GLCM of each edge block is then
evaluated to find statistical features such as entropy, energy, homogeneity and correlation. The
GLCM features from the edge image depend on the pixel arrangement in the image, and hence
smooth image regions are represented with values quite different from those of a high-frequency
textured region.

5. Extracting SFTA features: Segmentation based Fractal Texture Analysis (SFTA) features (Costa
et al., 2012) are good enough to determine the texture content of an image. Hence SFTA feature
extraction is applied over the gray level version of the cover image. The SFTA features are highly
sensitive to the texture content in an image, and hence the feature vector corresponding to a highly
textured region will differ markedly from that of a smooth region.

6. Block classification using K-NN classifier: If we embed data in a low-frequency smooth region,
the embedding error creates noticeable changes and reduces the imperceptibility of the
embedding scheme. The main aim behind the selection of suitable blocks is to exclude low-frequency
cover regions from the embedding process. Only the textured regions are used for the embedding
process, and such embedding will not create noticeable changes in the stego-image. To
predict whether a cover patch is a smooth or high-frequency region, a K-NN classifier is used.
The K-NN is fed with two kinds of features from each block: the combined feature vector of the
extracted GLCM and SFTA features is used to classify the cover image blocks into favorable
(textured) or unfavorable (smooth) using a pre-trained K-NN model (Parthasarathy & Chatterji,
1990). A ‘K’ value of 5 is chosen based on the test results.

7. Embedding secret patches on DT-CWT subbands: The DT-CWT employs two separate discrete
wavelet filter bank trees that decompose a signal into real and imaginary parts (Kadhim,
Premaratne, Vial, & Halloran, 2017). Hence this provides more subbands for real and imaginary
coefficients than other transforms, which helps in hiding more secret bits.
After classifying every cover patch, the DT-CWT of individual cover blocks is obtained separately
and the coefficient planes corresponding to the selected patches are saved as a cell array. Here, the
algorithm concentrates on embedding secret data patches on DT-CWT subband coefficients of
individual cover image patches separately. Each secret patch is template matched over the selected
DT-CWT coefficient planes and gets embedded over the position where the embedding error is
minimum. The appropriate locations are vectored together and are used as the key/position vector
to recover the secret data at the receiver side. The position vector stores the row and column index
corresponding to each secret image block together with the DT-CWT plane index. The size of this key is
comparatively small relative to the secret data size. This position vector is essential for extracting
the embedded secret data and needs to be transmitted to the receiver side via a secure channel
(a simplified sketch of this embedding step is given after Figure 2).

8. Inverse DT-CWT to create the stego-image: The embedded wavelet-domain coefficients need to be
transformed back to the spatial domain to create the stego-image. This stego-image will be passed to
the receiver side for retrieving the secret data along with the position vector key.

Fig. 2. Embedding scheme
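A simplified sketch of the template-matching embedding over one selected cover block (steps 6 and 7 above) is given below. It is one plausible reading of the procedure, again assuming the open-source `dtcwt` package; the single-subband choice, the absence of coefficient scaling and the exhaustive search are illustrative assumptions rather than the exact method of the paper.

```python
# Sketch of the indirect, template-matched embedding of 2x2 secret patches into
# one real DT-CWT subband of a selected cover block (illustrative assumptions:
# `dtcwt` package, level-1 decomposition, a single subband, no coefficient scaling).
import numpy as np
import dtcwt

def embed_block(cover_block, secret_patches, subband=0):
    """Embed 2x2 secret patches into one cover block; returns stego block + key."""
    t = dtcwt.Transform2d()
    pyr = t.forward(cover_block.astype(float), nlevels=1)
    coeffs = pyr.highpasses[0]                    # complex array, shape (H/2, W/2, 6)
    plane = coeffs[:, :, subband].real.copy()     # real part of one directional subband
    key = []                                      # (row, col) per patch -> position vector
    for patch in secret_patches:
        best_pos, best_err = (0, 0), np.inf
        for r in range(plane.shape[0] - 1):       # exhaustive template matching: place the
            for c in range(plane.shape[1] - 1):   # patch where it disturbs coefficients least
                err = np.sum((plane[r:r + 2, c:c + 2] - patch) ** 2)
                if err < best_err:
                    best_pos, best_err = (r, c), err
        r, c = best_pos
        plane[r:r + 2, c:c + 2] = patch           # overwrite coefficients with the patch
        key.append(best_pos)
    # Write the modified real part back, keep the imaginary part, and invert.
    coeffs[:, :, subband] = plane + 1j * coeffs[:, :, subband].imag
    stego_block = t.inverse(pyr)
    return stego_block, key
```

In a complete system the per-patch positions, together with the block and subband indices, form the position vector (secret key) that must reach the receiver over a secure channel; a full implementation would also avoid re-using the same position for different patches.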

3.2.2.2 Secret Data Retrieving

At the receiver side, inverse operations need to be used to recover the embedded secret patches. The
block diagram corresponding to the retrieval scheme is shown in Figure 3. The important steps in the
retrieval algorithm are listed below:
1- Split the stego-image into patches as explained in section 3.2.2.1, step 2.
2- Detect texture details of the stego-image.
3- Divide the stego-image into non-overlapping blocks and extract GLCM and SFTA features as
explained in section 3.2.2.1, steps 3 to 5.
4- Identify the texture patches from the stego-image using K-NN classification.
5- Find the DT-CWT subband coefficients of every selected block and extract the secret patches from
the subband blocks using the position vector (a simplified sketch is given after Figure 3).
6- Merge the recovered patches together and combine the colour planes to build the secret image.

Fig. 3. Secret data retrieval scheme
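The matching retrieval of the hidden patches (steps 5 and 6 above) can be sketched as follows; it pairs with the illustrative embed_block() sketch given earlier and carries the same assumptions.

```python
# Sketch of patch retrieval using the shared position vector (key); pairs with the
# illustrative embed_block() sketch above and assumes the `dtcwt` package.
import numpy as np
import dtcwt

def extract_block(stego_block, key, subband=0):
    """Read the 2x2 secret patches back out of one stego block."""
    t = dtcwt.Transform2d()
    pyr = t.forward(stego_block.astype(float), nlevels=1)
    plane = pyr.highpasses[0][:, :, subband].real
    return [plane[r:r + 2, c:c + 2].copy() for (r, c) in key]
```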



4 Experimental results

The performance is evaluated based on many state-of-the-art benchmark systems (Kong, Chu, Ba,
Zhang, & Yang, 2004; Mansi S Subhedar & Mankar, 2013) and is explained in the sub-sections below.

4.1 Performance metrics

In evaluating the performance, it is essential to outline the definitions of the metrics so that others can
verify research reported here. Retrieval Bit Error Rate (BER) represents the extent of error while retrieving
the hidden bits from the stego-image. Mathematically, it is represented as shown in Eq. (1).
\text{BER (in \%)} = \frac{N_s - N_c}{N_s} \times 100 \qquad (1)

where N_s is the number of secret bits embedded and N_c is the number of bits correctly retrieved.
For an efficient steganographic system, the error rate should be as low as possible.

Another metric namely Payload Capacity evaluates the amount of information that can be hidden in the
cover image. It is usually represented in terms of Bits Per Pixel (BPP), where BPP represents the number
of secret bits embedded per pixel of the cover image.
\text{payload capacity (in BPP)} = \frac{\text{effective embedded bits}}{\text{number of pixels in the cover image}} \qquad (2)

where

\text{effective embedded bits} = \text{total number of embedded secret bits} - \text{number of bits required for the secret key}

Similarly, the Peak Signal to Noise Ratio (PSNR) measures the quality of the stego-image by comparing it
with the cover image. It is a measure of the difference between the cover image and the stego-image, and
is popular for evaluating image distortions that occur in the stego-image during the embedding
process. For 8-bit images, the PSNR value can be evaluated as:
\text{PSNR (in dB)} = 10 \log_{10}\!\left(\frac{255^2}{MSE}\right) \qquad (3)

In Eq. (3), MSE is the Mean Squared Error between the stego and cover image and can be
mathematically represented as:
MSE = \frac{1}{N}\sum_{i=1}^{N}\left(C_i - C'_i\right)^2 \qquad (4)

Here, N is the number of pixels in the cover image, C_i is the value of the i-th pixel in the cover image and
C'_i is the value of the i-th pixel in the stego-image. Many studies have shed light on the difference between
human visual perception and metrics such as MSE in assessing the quality of the recovered image. The recently
introduced Structural Similarity Index Measure (SSIM) is another metric used for measuring
the similarity between two images (Premaratne & Premaratne, 2012a; Z. Wang, Bovik, Sheikh, &
Simoncelli, 2004). It is calculated using the expression

SSIM(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)} \qquad (5)

where \mu_x and \mu_y are the means of the input images x and y, \sigma_x^2 and \sigma_y^2 are their variances, and
\sigma_{xy} is their covariance. c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 are constants that stabilize the division with a
weak denominator, where L = 255 is the dynamic range of the pixel values (2^{\text{bits per pixel}} - 1),
k_1 = 0.01 and k_2 = 0.03.
Correlation factor (CF) is another versatile metric that calculates the correlation coefficient between two
images as denoted in Eq. (6):
CF = \frac{\sum_{i=1}^{N}(C'_i - \mu_s)(C_i - \mu_c)}{\sqrt{\left(\sum_{i=1}^{N}(C'_i - \mu_s)^2\right)\left(\sum_{i=1}^{N}(C_i - \mu_c)^2\right)}} \qquad (6)
Here \mu_s and \mu_c are the mean pixel values of the stego-image and the cover image, respectively.
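The benchmark metrics above can be computed as in the sketch below, which assumes 8-bit images stored as NumPy arrays; PSNR and SSIM come from scikit-image, while BER and CF follow Eqs. (1) and (6) directly. This is an illustrative computation, not the authors' evaluation code.

```python
# Sketch of the evaluation metrics for 8-bit images held as NumPy arrays
# (scikit-image assumed for PSNR/SSIM; BER and CF follow Eqs. (1) and (6)).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def ber_percent(embedded_bits, retrieved_bits):
    """Eq. (1): percentage of embedded bits that were not correctly retrieved."""
    s = np.asarray(embedded_bits)
    r = np.asarray(retrieved_bits)
    return 100.0 * np.count_nonzero(s != r) / s.size

def correlation_factor(cover, stego):
    """Eq. (6): correlation coefficient between cover and stego pixels."""
    c = cover.astype(float).ravel() - cover.mean()
    s = stego.astype(float).ravel() - stego.mean()
    return np.sum(s * c) / np.sqrt(np.sum(s ** 2) * np.sum(c ** 2))

# For a cover/stego pair of identical shape (dtype uint8):
# psnr = peak_signal_noise_ratio(cover, stego, data_range=255)                  # Eq. (3)
# ssim = structural_similarity(cover, stego, data_range=255, channel_axis=-1)   # Eq. (5), colour
```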

4.2 Results and performance analysis

The proposed method has been applied to different colour images from USC-SIPI (Viterbi, 1981)
and the SEAM CARVING ORIGINAL Q75 database (Liu & Chen, 2015). The cover images are resized to
256 × 256, 384 × 384 or 512 × 512, and then random colour secret images with sizes 64 × 64, 128 × 128, 192
× 192, 256 × 256, 384 × 384 etc. are used for the performance evaluation process. Ten randomly selected cover
images from the SEAM CARVING ORIGINAL Q75 database are shown in Figure 4.

Image 1 Image 2 Image 3 Image 4 Image 5

Image 6 Image 7 Image 8 Image 9 Image 10

Fig. 4. Sample cover images

In order to test the performance of the proposed method and to compare with other state-of-the-art
systems, we applied the system to different cover images and secret data sizes. Figure 5 represents the
relationship between cover patch size, secret image size and retrieval error rate. As per the results obtained from
the analysis, the retrieval error rate reduces when the cover patch size increases. This is because
the quantization error in the forward and inverse DT-CWT is inversely proportional to the number of patches
in the cover region. However, if the patch size is too large, the separation of homogeneous and textured
regions becomes inefficient and may lead to degradation of the stego-image. So, we selected a patch size of
128 × 128 to get a better tradeoff between the secret image retrieval accuracy and the stego-image
imperceptibility.

[Figure 5 panels: retrieval error rate (in percentage) plotted against secret image size, with one curve per cover image size (256 × 256, 384 × 384 and 512 × 512).]
Fig. 5. Secret image size vs. retrieval error rate when the patch size is (a) 32 × 32, (b) 64 × 64, and (c) 128 × 128.

The results observed at two different payload capacities (1.9688 and 7.875 BPP), together with other quality
metrics over different test images, are shown in Tables 1 and 2. The analysis of the tables shows that both the
stego-image and the recovered secret image quality are much higher when compared with state-of-the-art
techniques, even at a dense embedding of 7.875 BPP. Another observation from the tables and
Figure 5 is that the error rate increases with the payload. This is due to the increase in
quantization error with increasing payload.

Table 1. Performance statistics for a 192 × 192 secret image and various 256 × 256 cover images, with a cover
block size of 128 × 128.

Cover Image | BPP | Stego-image PSNR | MSE | SSIM | CF | Retrieved secret PSNR | SSIM | CF | BER
Lena | 1.9688 | 54.5826 | 0.2264 | 0.9997 | 1 | 43.1602 | 0.9609 | 0.9996 | 0.1890
Mandrill | 1.9688 | 54.5614 | 0.2275 | 0.9998 | 1 | 43.9582 | 0.9695 | 0.9997 | 0.1694
Pepper | 1.9688 | 55.1833 | 0.1971 | 0.9996 | 1 | 47.5505 | 0.9949 | 0.9999 | 0.0760
Airplane | 1.9688 | 54.2616 | 0.2437 | 0.9998 | 0.9999 | 36.6290 | 0.9099 | 0.9983 | 0.4949
Lake | 1.9688 | 55.3342 | 0.13242 | 0.9997 | 1 | 46.9114 | 0.9903 | 0.9998 | 0.1060
Car | 1.9688 | 55.5543 | 0.1810 | 0.9998 | 1 | 45.6677 | 0.9851 | 0.9998 | 0.1140
Splash | 1.9688 | 52.1817 | 0.3935 | 0.9992 | 1 | 37.5633 | 0.9582 | 0.9986 | 0.3118
House | 1.9688 | 52.5004 | 0.3051 | 0.9996 | 0.9999 | 37.3564 | 0.9221 | 0.9986 | 0.3970
Tree | 1.9688 | 54.8993 | 0.2104 | 0.9997 | 1 | 42.6375 | 0.9525 | 0.9996 | 0.2082
Couple | 1.9688 | 52.7335 | 0.3465 | 0.9997 | 0.9998 | 29.1665 | 0.8747 | 0.9903 | 0.4694
1 | 1.9688 | 53.3812 | 0.2985 | 0.9997 | 0.9999 | 41.6394 | 0.9522 | 0.9995 | 0.2344
2 | 1.9688 | 53.9404 | 0.2624 | 0.9997 | 0.9999 | 46.9245 | 0.9919 | 0.9998 | 0.0759
3 | 1.9688 | 52.6658 | 0.3520 | 0.9994 | 0.9999 | 39.2361 | 0.9524 | 0.9990 | 0.2741
4 | 1.9688 | 53.1171 | 0.3172 | 0.9997 | 0.9999 | 41.5574 | 0.9751 | 0.9994 | 0.1695
5 | 1.9688 | 53.7246 | 0.3120 | 0.9997 | 1 | 41.2229 | 0.9374 | 0.9994 | 0.2933
6 | 1.9688 | 54.0038 | 0.2586 | 0.9997 | 1 | 42.5719 | 0.9750 | 0.9995 | 0.1864
7 | 1.9688 | 52.0763 | 0.4031 | 0.9995 | 0.9999 | 30.3723 | 0.8478 | 0.9932 | 0.9852
8 | 1.9688 | 53.5282 | 0.2886 | 0.9997 | 0.9999 | 41.5362 | 0.9560 | 0.9994 | 0.2005
9 | 1.9688 | 52.5818 | 0.3588 | 0.9996 | 0.9999 | 35.8674 | 0.9106 | 0.9981 | 0.4960
10 | 1.9688 | 53.5217 | 0.2890 | 0.9996 | 0.9999 | 43.4210 | 0.9585 | 0.9997 | 0.1799
Average | 1.9688 | 53.7166 | 0.28019 | 0.9996 | 0.9999 | 40.7474 | 0.9487 | 0.9985 | 0.2815



Table 2. Performance statistics for a 384 × 384 secret image and various 256 × 256 cover images, with a cover
block size of 128 × 128.

Cover Image | BPP | Stego-image PSNR | MSE | SSIM | CF | Retrieved secret PSNR | SSIM | CF | BER
Lena | 7.875 | 51.4018 | 0.4709 | 0.9993 | 0.9999 | 42.4682 | 0.9618 | 0.9996 | 0.2113
Mandrill | 7.875 | 51.8281 | 0.4268 | 0.9995 | 0.9999 | 42.4333 | 0.9526 | 0.9996 | 0.2622
Pepper | 7.875 | 51.7956 | 0.4300 | 0.9991 | 1 | 45.1870 | 0.9992 | 0.9998 | 0.1284
Airplane | 7.875 | 53.3150 | 0.3031 | 0.9997 | 0.9999 | 34.6919 | 0.8809 | 0.9974 | 0.6499
Lake | 7.875 | 52.1711 | 0.5407 | 0.9994 | 1 | 44.0435 | 0.9644 | 0.9997 | 0.2003
Car | 7.875 | 52.4299 | 0.3716 | 0.9996 | 0.9999 | 42.9163 | 0.9560 | 0.9996 | 0.2474
Splash | 7.875 | 51.2588 | 0.4866 | 0.9990 | 1 | 37.5736 | 0.9545 | 0.9986 | 0.3550
House | 7.875 | 51.7072 | 0.4389 | 0.9993 | 0.9999 | 34.7657 | 0.8977 | 0.9975 | 0.5506
Tree | 7.875 | 52.1248 | 0.3987 | 0.9995 | 1 | 39.2873 | 0.9264 | 0.9992 | 0.3927
Couple | 7.875 | 52.0589 | 0.4022 | 0.9995 | 0.9998 | 27.2659 | 0.8256 | 0.9850 | 0.7432
1 | 7.875 | 51.4226 | 0.4686 | 0.9995 | 0.9999 | 42.7552 | 0.9519 | 0.9995 | 0.2220
2 | 7.875 | 50.5710 | 0.5701 | 0.9993 | 0.9999 | 43.8351 | 0.9597 | 0.9997 | 0.2012
3 | 7.875 | 50.9041 | 0.5280 | 0.9991 | 0.9999 | 39.2365 | 0.9505 | 0.9990 | 0.2970
4 | 7.875 | 50.8896 | 0.57173 | 0.9994 | 0.9998 | 40.5589 | 0.9579 | 0.9993 | 0.2263
5 | 7.875 | 51.2927 | 5.5832 | 0.9995 | 0.9999 | 40.6620 | 0.9307 | 0.9994 | 0.3442
6 | 7.875 | 51.7051 | 0.4391 | 0.9994 | 0.9999 | 40.2815 | 0.9352 | 0.9993 | 0.3374
7 | 7.875 | 50.3544 | 0.5993 | 0.9993 | 0.9999 | 28.1972 | 0.8130 | 0.9909 | 1.6142
8 | 7.875 | 51.5980 | 0.4501 | 0.9994 | 0.9999 | 38.1835 | 0.9231 | 0.9989 | 0.4267
9 | 7.875 | 50.2207 | 0.6180 | 0.9993 | 0.9999 | 35.0241 | 0.9002 | 0.9979 | 0.6626
10 | 7.875 | 49.3868 | 0.4791 | 0.9993 | 0.9999 | 39.2340 | 0.9312 | 0.9992 | 0.3550
Average | 7.875 | 51.4218 | 0.7288 | 0.9993 | 0.9999 | 36.7922 | 0.9286 | 0.9979 | 0.4213

A sample output while using Lena as the cover image for visual quality inspection is shown in Figure 6.
The images clearly indicate good visual quality and offer no clues to the presence of any hidden
information even with statistical analysis. Figure 7 compares the histograms of cover-stego and secret-
retrieved images. It is clear that the distortion and the pixel contrast between the cover and the stego-image
are not altered in a detectable manner.


Fig. 6. (a) Cover image, (b) stego-image, (c) secret image and (d) retrieved secret image

During the experiments, we found that for some smooth cover images the performance is somewhat
low due to the limited number of optimal cover blocks. Hence, highly textured cover images are preferable
when using this steganographic method. Even though the payload capacity could be increased by using
deeper levels of DT-CWT, this would cause remarkable degradation in the stego-image quality and retrieval
accuracy. So, in this algorithm, we limit the DT-CWT to level 1 of the decomposition.
For comparison of our method with popular methods, Table 3 shows the performance comparison of the
proposed method with various well-known edge-based image steganography schemes mentioned in section
2. The results are analyzed in terms of PSNR and data capacity (BPP). In Table 3, the proposed
algorithm shows stable high performance both in terms of payload and stego-image quality. The algorithm
in (Lee et al., 2018) is marginally better in terms of stego-image PSNR; however, the payloads
it offers are nearly one eighth of those of the proposed method, which confirms the overall
superiority of the proposed method. Table 4 shows the performance comparison of the proposed method
with methods proposed by (Dadgostar & Afsari, 2016) and (Yu, 2015) in terms of PSNR and MSE on
randomly selected cover images from SEAM CARVING ORIGINAL Q75 database. Also, the proposed
algorithm is compared with other similar transform based steganographic approaches mentioned in section
2 in terms of full image quality assessment of both ‘Cover image- Stego-image’ and ‘Secret image–
Retrieved image’, and the results are given in Table 5. The proposed approach is superior to other techniques found in
the literature from different perspectives such as payload capacity and stego-image quality measures. Even
though one existing approach (V. Kumar & Kumar, 2017) shows better results in terms of retrieved
secret image quality, our method still holds a leading edge in the overall analysis.



Fig. 7. Histograms of (a) the Lena cover image, (b) the stego-image, (c) the original secret image and (d) the retrieved secret image

Table 3. Comparison with various edge/region-based steganographic schemes

Cover Image | (Ghosal et al., 2018) BPP | PSNR | (Yu, 2015) BPP | PSNR | (Lee et al., 2018) BPP | PSNR | (Atta & Ghanbari, 2018) BPP | PSNR | Proposed method BPP | PSNR
Lena | 3.09 | 38.45 | 4.68 | 30.02 | 0.7425 | 54.9516 | 3.3577 | 31.2724 | 7.875 | 51.4018
Mandrill | 3.13 | 37.64 | 4.64 | 30.19 | 1.1747 | 52.6235 | 3.3438 | 32.4385 | 7.875 | 51.8281
Pepper | 3.08 | 38.58 | 4.68 | 29.98 | 0.7075 | 55.2946 | 3.3225 | 31.7221 | 7.875 | 51.7956
Airplane | 3.08 | 38.65 | 4.67 | 30.07 | 0.7705 | 54.9195 | 3.3303 | 31.4595 | 7.875 | 53.315
Lake | 3.10 | 38.57 | 4.66 | 30.10 | 0.7985 | 54.6775 | 3.3508 | 31.9394 | 7.875 | 52.1711

Table 4. Comparison of PSNR and MSE on randomly selected cover images

Cover Image | (Yu, 2015) MSE | PSNR | (Dadgostar & Afsari, 2016) MSE | PSNR | Proposed method MSE | PSNR
1 | 3.9993 | 42.1110 | 1.1885 | 47.3810 | 0.4686 | 51.4226
2 | 4.7291 | 41.3830 | 1.3597 | 46.7963 | 0.5701 | 50.571
3 | 4.4840 | 41.6141 | 1.1409 | 47.5584 | 0.528 | 50.9041
4 | 4.1675 | 41.9321 | 1.6614 | 45.9261 | 0.57173 | 50.8896
5 | 4.0163 | 42.0925 | 1.7416 | 45.7212 | 5.5832 | 51.2927
6 | 4.1871 | 41.9117 | 1.4857 | 46.4116 | 0.4391 | 51.7051
7 | 4.3362 | 41.7597 | 1.2810 | 47.0554 | 0.5993 | 50.3544
8 | 4.2981 | 41.7981 | 1.4588 | 46.4907 | 0.4501 | 51.598
9 | 4.1800 | 41.9190 | 1.7242 | 45.7650 | 0.618 | 50.2207
10 | 4.1115 | 41.9908 | 1.3847 | 46.7172 | 0.4791 | 49.3868

Table 5. Comparison with similar transform-domain based steganographic schemes

Method | BPP | Stego-image PSNR | SSIM | CF | Retrieved secret PSNR | SSIM | CF
(S Singh & Siddiqui, 2013) | 2 | 34.83 | 0.941 | 0.9983 | 27.55 | 0.8989 | 0.981
(V. Kumar & Kumar, 2017) | 2 | 42.87 | 0.9625 | 0.9174 | 42.46 | 0.9572 | 0.8476
(Sathisha et al., 2013) | 4.6 | 40.57 | 0.945 | 0.9986 | 25.2 | 0.876 | 0.946
(Kadhim, Premaratne, & Vial, 2018a) | 5.4 | 47.08 | 0.975 | 0.9998 | 31.08 | 0.9124 | 0.987
(Kadhim, Premaratne, & Vial, 2018b) | 3.24 | 48.175 | 0.9825 | 0.9981 | 31.8276 | 0.9295 | 0.9950
Proposed method | 7.875 | 51.4218 | 0.9993 | 0.9999 | 36.7922 | 0.9286 | 0.9979

5 Conclusion and Future work

The proposed algorithm mainly focused on an indirect embedding scheme on DT-CWT subbands with
an intention to get high payload capacity without suffering noticeable degradations in the stego-image. The
proposed template-matching based block embedding scheme over the random positions in the coefficient
subbands has been successful in achieving such high imperceptibility and payload capacity and also
improves the security of the system. The selection of optimal blocks from the cover data by using machine
learning helps to maintain the high visual quality even when the embedding density is high. The machine
learning based cover block selection yields much better results than generic edge detection
systems. Experimental results also show that the performance is much better from different perspectives
such as stego-image imperceptibility, payload capacity and retrieval accuracy. This makes the stego-image
undetectable to an unauthorized intruder, and hence the hidden secret data remains safe. Future work is
planned to incorporate more security measures to defend against attacks from intruders.

Acknowledgments

The first author would like to acknowledge the Higher Committee for Education Development in Iraq (HCED)
for the scholarship funding.

References

Al-Dmour, H., & Al-Ani, A. (2016). A steganography embedding method based on edge identification and XOR
coding. Expert Systems with Applications, 46, 293–306. https://doi.org/10.1016/j.eswa.2015.10.024
Atee, H. A., Ahmad, R., Noor, N. M., Rahma, A. M. S., & Aljeroudi, Y. (2017). Extreme learning machine based
optimal embedding location finder for image steganography. PloS One, 12(2), e0170329.
Atta, R., & Ghanbari, M. (2018). A high payload steganography mechanism based on wavelet packet transformation
and neutrosophic set. Journal of Visual Communication and Image Representation, 53, 42–54.
Balasubramanian, C., Selvakumar, S., & Geetha, S. (2014). High payload image steganography with reduced distortion
using octonary pixel pairing scheme. Multimedia Tools and Applications, 73(3), 2223–2245.
Canny, J. (1986). A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine
Intelligence, PAMI-8(6), 679–698. https://doi.org/10.1109/TPAMI.1986.4767851


Cheddad, A., Condell, J., Curran, K., & Mc Kevitt, P. (2010). Digital image steganography: Survey and analysis of
current methods. Signal Processing, 90(3), 727–752. https://doi.org/10.1016/j.sigpro.2009.08.010
Chu, R., You, X., Kong, X., & Ba, X. (2004). A DCT-based image steganographic method resisting statistical attacks.
In Acoustics, Speech, and Signal Processing, 2004. Proceedings.(ICASSP’04). IEEE International Conference
on (Vol. 5, pp. V–953). IEEE.
Costa, A. F., Humpire-Mamani, G., & Traina, A. J. M. (2012). An efficient algorithm for fractal analysis of textures. In
Graphics, Patterns and Images (SIBGRAPI), 2012 25th SIBGRAPI Conference on (pp. 39–46). IEEE.
Dadgostar, H., & Afsari, F. (2016). Image steganography based on interval-valued intuitionistic fuzzy edge detection
and modified LSB. Journal of Information Security and Applications, 30, 94–104.
https://doi.org/10.1016/j.jisa.2016.07.001
Ghosal, S. K., Mandal, J. K., & Sarkar, R. (2018). High payload image steganography based on Laplacian of Gaussian
(LoG) edge detector. Multimedia Tools and Applications, 1–16.
Hamid, N., Yahya, A., Ahmad, R. B., & Al-Qershi, O. (2012). Characteristic region based image steganography using
Speeded-Up Robust Features technique. In 2012 International Conference on Future Communication Networks,
ICFCN 2012 (pp. 141–146). https://doi.org/10.1109/ICFCN.2012.6206858
Han, F., & Huang, D.-S. (2006). Improved extreme learning machine for function approximation by encoding a priori
information. Neurocomputing, 69(16–18), 2369–2373.
Huang, D.-S., & Du, J.-X. (2008). A constructive hybrid structure optimization methodology for radial basis
probabilistic neural networks. IEEE Transactions on Neural Networks, 19(12), 2099–2115.
Islam, S., Modi, M. R., & Gupta, P. (2014). Edge-based image steganography. Eurasip Journal on Information
Security, 2014. https://doi.org/10.1186/1687-417X-2014-8
Joseph, P., & Vishnukumar, S. (2015). A study on steganographic techniques. In Global Conference on
Communication Technologies, GCCT 2015 (pp. 206–210). https://doi.org/10.1109/GCCT.2015.7342653
Kadhim, I. J. (2012). A New Audio Steganography System Based on Auto-Key Generator. AL-Khwarizmi Engineering
Journal, 8(1).
Kadhim, I. J., Premaratne, P., & Vial, P. J. (2018a). Adaptive Image Steganography Based on Edge Detection Over
Dual-Tree Complex Wavelet Transform. In International Conference on Intellgent Computing Methodologies
(pp. 544–550). Springer, Cham. https://doi.org/10.1007/978-3-319-95957-3_57
Kadhim, I. J., Premaratne, P., & Vial, P. J. (2018b). Secure Image Steganography Using Dual-Tree Complex Wavelet
Transform Block Matching. 2018 Second International Conference on Electronics, Communication and
Aerospace Technology (ICECA), 41–47. https://doi.org/10.1109/ICECA.2018.8474616
Kadhim, I. J., Premaratne, P., Vial, P. J., & Halloran, B. (2017). A Comparative Analysis Among Dual Tree Complex
Wavelet and Other Wavelet Transforms Based on Image Compression. In International Conference on
Intelligent Computing (pp. 569–580). Springer.
Kadhim, I. J., Premaratne, P., Vial, P. J., & Halloran, B. (2018). Comprehensive Survey of Image Steganography:
Techniques, Evaluations, and Trends in Future Research. Neurocomputing.
Khan, S., & Bianchi, T. (2018). Ant Colony Optimization (ACO) based Data Hiding in Image Complex Region.
International Journal of Electrical and Computer Engineering (IJECE), 8(1), 379–389.
Kong, X., Chu, R., Ba, X., Zhang, T., & Yang, D. (2004). A perception evaluation scheme for steganography. Lecture
Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics). Retrieved from https://www.scopus.com/inward/record.uri?eid=2-s2.0-
35048869660&partnerID=40&md5=657aa6df6309999f96f8aaa39d81ab99
Kumar, S., & Muttoo, S. K. (2013). Image Steganogaraphy Based on Complex Double Dual Tree Wavelet Transform.
International Journal of Computer and Electrical Engineering, 5(2), 147.
Kumar, V., & Kumar, D. (2017). A modified DWT-based image steganography technique. Multimedia Tools and
Applications, 1–30. https://doi.org/10.1007/s11042-017-4947-8
Lee, C.-F., Chang, C.-C., Xie, X., Mao, K., & Shi, R.-H. (2018). An Adaptive High-Fidelity Steganographic Scheme
Using Edge Detection and Hybrid Hamming Codes. Displays.
Liu, Q., & Chen, Z. (2015). Improved approaches with calibrated neighboring joint density to steganalysis and seam-
carved forgery detection in JPEG images. ACM Transactions on Intelligent Systems and Technology (TIST),
5(4), 63.
Mabtoul, S., & Aboutajdine, D. (2008). A Robust Digital Image Watermarking Method Using Dual Tree Complex
Wavelet Transform. In Proceedings of the IEEE Symposium on Computers and Communications (pp. 1000–
1004).
Muhammad, N., Bibi, N., Mahmood, Z., Akram, T., & Naqvi, S. R. (2017). Reversible integer wavelet transform for
blind image hiding method. PLoS One, 12(5), e0176979. Retrieved from
https://doi.org/10.1371/journal.pone.0176979
Ogihara, T., Nakamura, D., & Yokoya, N. (1996). Data embedding into pictorial images with less distortion using
discrete cosine transform. In Proceedings - International Conference on Pattern Recognition (Vol. 2, pp. 675–
679). https://doi.org/10.1109/ICPR.1996.546908
Olmos, A., & Kingdom, F. A. A. (2004). A biologically inspired algorithm for the recovery of shading and reflectance
images. Perception, 33(12), 1463–1473. Retrieved from http://tabby.vision.mcgill.ca/
Parthasarathy, G., & Chatterji, B. N. (1990). A class of new KNN methods for low sample problems. IEEE
Transactions on Systems, Man, and Cybernetics, 20(3), 715–718.
Patel, P., & Patel, Y. (2015). Secure and authentic DCT image steganography through DWT - SVD based digital
watermarking with RSA encryption. In Proceedings - 2015 5th International Conference on Communication
Systems and Network Technologies, CSNT 2015 (pp. 736–739). https://doi.org/10.1109/CSNT.2015.193
Premaratne, P., Nguyen, Q., & Premaratne, M. (2010). Human computer interaction using hand gestures. In
International Conference on Intelligent Computing (pp. 381–386). Springer.
Premaratne, P., & Premaratne, M. (2012a). Image similarity index based on moment invariants of approximation level
of discrete wavelet transform. Electronics Letters, 48(23), 1465–1467.
Premaratne, P., & Premaratne, M. (2012b). Key-based scrambling for secure image communication. In International
Conference on Intelligent Computing (pp. 259–263). Springer.
Provos, N., & Honeyman, P. (2003). Hide and seek: An introduction to steganography. IEEE Security & Privacy,
99(3), 32–44.
Sajedi, H., & Jamzad, M. (2008). Cover selection steganography method based on similarity of image blocks. In
Proceedings - 8th IEEE International Conference on Computer and Information Technology Workshops, CIT
Workshops 2008 (pp. 379–384). https://doi.org/10.1109/CIT.2008.Workshops.34
Sathisha, N., Priya, R., Babu, K. S., Raja, K. B., Venugopal, K. R., & Patnaik, L. M. (2013). DTCWT based high
capacity steganography using coefficient replacement and adaptive scaling. In Proceedings of SPIE - The
International Society for Optical Engineering (Vol. 9067). https://doi.org/10.1117/12.2051889
Selesnick, I. W., Baraniuk, R. G., & Kingsbury, N. G. (2005). The dual-tree complex wavelet transform. IEEE Signal
Processing Magazine, 22(6), 123–151. https://doi.org/10.1109/MSP.2005.1550194
Singh, S, & Siddiqui, T. J. (2013). Robust image steganography using complex wavelet transform. In IMPACT 2013 -
Proceedings of the International Conference on Multimedia Signal Processing and Communication
Technologies (pp. 56–60). https://doi.org/10.1109/MSPCT.2013.6782087
Singh, Siddharth, & Siddiqui, T. J. (2014). Transform domain techniques for image steganography. Information
Security in Diverse Computing Environments, 245–259.
Smitha, G. L., & Baburaj, E. (2018). Sobel edge detection technique implementation for image steganography analysis.
Biomedical Research, 29.
Subhedar, M S, & Mankar, V. H. (2017). Curvelet transform and cover selection for secure steganography. Multimedia
Tools and Applications. https://doi.org/10.1007/s11042-017-4706-x
Subhedar, Mansi S, & Mankar, V. H. (2013). Performance Evaluation of Image Steganography based on Cover
Selection and Contourlet Transform. In Cloud & Ubiquitous Computing & Emerging Technologies (CUBE),
2013 International Conference on (pp. 172–177). IEEE.
Viterbi, U. (1981). USP-SIPI image database, USC University of Southren California. Retrieved from
http://sipi.usc.edu/database/database.php?volume=misc
Wang, X.-F., & Huang, D.-S. (2009). A novel density-based clustering framework by using level set method. IEEE
Transactions on Knowledge and Data Engineering, 21(11), 1515–1531.

Wang, X.-F., Huang, D.-S., & Xu, H. (2010). An efficient local Chan–Vese model for image segmentation. Pattern
Recognition, 43(3), 603–618.
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to
structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.
Wazirali, R. A., & Chaczko, Z. (2015). Data hiding based on intelligent optimized edges for secure multimedia
communication. Journal of Networks.
Wazirali, R., & Chachzo, Z. (2015). Hyper edge detection with clustering for data hiding. Journal of Information
Hiding and Multimedia Signal Processing (JIHMSP).
Yu, W. (2015). The lsb-based high payload information steganography. Methods, 1(2), 2.
Zhao, Z.-Q., Huang, D.-S., & Sun, B.-Y. (2004). Human face recognition based on multi-features using neural
networks committee. Pattern Recognition Letters, 25(12), 1351–1358.
