
PENCIL SKETCH USING GAN

Submitted in fulfillment for the award of the degree of

Bachelor of Engineering

By

Parth Dasawant
(Roll No.: 08)
Utkarsh Gaonkar
(Roll No.: 17)
Nikhil Ghotekar
(Roll No.: 19)

Supervisor:
Prof. Kanchan K. Doke

Sponsored By:
Amritansh

Department of Computer Engineering


Bharati Vidyapeeth College of Engineering, Navi Mumbai.
Academic Year 2022-2023
Project Synopsis Report Approval for B.E

This project report entitled “Pencil Sketch Using GAN” by “Mr. Parth
Dasawant, Mr. Utkarsh Gaonkar and Mr. Nikhil Ghotekar” is approved for the
degree of Bachelor of Engineering in Computer Engineering.

Date: __________

____________________

Prof. Kanchan Doke


Project Guide

____________________                    ____________________

Dr. D.R. Ingle Dr. Sandhya Jadhav


Head Of Department Principal

____________________ ___________________

Internal Examiner External Examiner

Declaration

We declare that this written submission represents our ideas in our own
words and where others' ideas or words have been included, we have adequately
cited and referenced the original sources. We also declare that we have adhered to
all principles of academic honesty and integrity and have not misrepresented or
fabricated or falsified any idea/data/fact/source in our submission. We understand
that any violation of the above will cause disciplinary action by the Institute and
can also invoke penal action from the sources which have thus not been properly
cited or from whom proper permission has not been taken when needed.

_____________________

(Parth Dasawant, 08)

______________________

(Utkarsh Gaonkar, 17)

______________________

(Nikhil Ghotekar, 19)

Date: __________          Place: Kharghar

Table of Contents

1. Chapter 1. Introduction........................................................................................... 1
1.1 Introduction.................................................................................................... 2

2. Chapter 2. Review Of Literature............................................................................ 6


2.1 Related Work................................................................................................. 7
2.2 Summary........................................................................................................ 16

3. Chapter 3. Report On Present Investigation......................................................... 17


3.1 System Design............................................................................................... 18
3.3.1 Architectural Diagram ......................................................................... 19
3.3.2 Flow Chart............................................................................................ 23
3.2 System Analysis............................................................................................. 24
3.2.1 Requirement Analysis........................................................................... 27
3.2.2 Risk Analysis........................................................................................ 29
3.2.3 Workflow Diagram............................................................................... 32
3.3 Project Scheduling......................................................................................... 33
3.3.1 Project Organization............................................................................. 34
3.3.2 Gantt Chart........................................................................................... 36
3.4 Results........................................................................................................... 38

4. Chapter 4. Conclusion............................................................................................. 41
4.1 Conclusion..................................................................................................... 42

5. Chapter 5. Appendix................................................................................................ 43
5.1 Application Screenshot.................................................................................. 44
5.2 Application Performance Screenshot............................................................. 48

6. Chapter 6. Reference............................................................................................... 49

7. Chapter 7. Certificates............................................................................................. 52

8. Chapter 8. Acknowledgement................................................................................. 56

List of Figures

Sr. No.          Figure                                                                Page No.

Table 2.1.1      Literature Review Comparisons                                              10
Figure 3.1.1     System Design                                                              18
Figure 3.3.1.1   Architecture Diagram for GAN                                               19
Figure 3.3.1.2   Discriminative and generative models of handwritten digits                 21
Figure 3.3.1.3   Paired vs. unpaired datasets used for GAN (CycleGAN)                       22
Figure 3.3.2.1   Flowchart of GAN architecture                                              23
Figure 3.2.3.1   Workflow diagram of the proposed system                                    32
Figure 3.3.1.1   GitHub setup for collaboration, with individual branches for team
                 members and dedicated dev, test, and main branches                         35
Figure 3.3.2.1   Gantt chart for the first term                                             36
Figure 3.3.2.2   Gantt chart for the second term, part 1                                    36
Figure 3.3.2.3   Gantt chart for the second term, part 2                                    36
Figure 3.4.1     Verbose output of the first epoch while training the model on Kaggle       39
Figure 3.4.2     Training stops after about 6 hours due to the callback at epoch 84         39
Figure 3.4.3     Results of testing                                                         40
Figure 5.1.1     Landing page with a dynamic front end that responds to cursor motion       44
Figure 5.1.2     After clicking the Get Started button, the user lands on a page where
                 an image can be uploaded or captured                                       44
Figure 5.1.3     The user uploads an input image and clicks the Upload button to display
                 it in the input box without changing its aspect ratio                      45
Figure 5.1.4     After clicking the Transform button, the pencil sketch is generated and
                 shown in the output box, with a dynamically changing button for UI/UX      45
Figure 5.1.5     Camera access for taking pictures to use as the input image                46
Figure 5.1.6     Result for the camera image                                                46
Figure 5.1.7     About page of the web app                                                  47
Figure 5.2.1     Sanity check of the model results                                          48

List of Abbreviations
Abbreviation   Full Form
GAN Generative Adversarial Network
AI Artificial Intelligence
ML Machine Learning
DL Deep Learning
CNN Convolutional Neural Network
GPU Graphics Processing Unit
DCGAN Deep Convolutional Generative Adversarial Network
VAE Variational Autoencoder
EMNIST Extended Modified National Institute of Standards and Technology
CycleGAN Cycle-Consistent Generative Adversarial Networks
ART PDGAN Adversarial Residual Transformative Probabilistic Discriminative
Generative Adversarial Network

ABSTRACT

Substantial research is being conducted on image transfer using deep learning,
particularly generative adversarial networks (GANs). Nevertheless, there are currently
no techniques that reliably create high-quality, creative pencil sketches. When creating
drawings, artists do not include all the information from the photographs; instead, they
frequently enlarge some especially noteworthy aspects of the objects and simplify others.
Scholars are fascinated by pencil drawings because they incorporate the artists'
observation, analysis, and experience; hence, learning pencil sketching can advance
artificial intelligence. Formerly, pencil sketch generation was divided into two
components: the structure map, which determines area boundaries, and the tone map, which
reflects variations in the amount of light falling on a region as well as its intensity,
tone, and texture. However, artists taught us that artistic pencil drawings should be
able to capture and enhance the attributes of the objects.

Keywords: Generative Adversarial Networks (GAN), Deep Learning, Artistic Pencil Drawing,
Style Transfer, Unpaired Dataset, CycleGAN, Flask, Kaggle, Generative Model.

CHAPTER 1
INTRODUCTION

1.1 Introduction

Pencil sketching is one of the most popular techniques for producing both quick sketches
and finely detailed representations. Pencil sketch generation using Generative Adversarial
Networks (GANs) is a fascinating field of computer vision and image processing that aims to
transform photographs into creative pencil-like drawings. Pencil sketches pique the interest
of researchers since they embody the artists' observations, analyses, and experience, so
learning pencil sketching can aid the development of artificial intelligence. Yet it normally
takes many hours to complete a fine sketch, even for an experienced artist with professional
training, which draws people to work on pencil sketch generation algorithms. Pencil sketch
generation was formerly divided into two components: the structure map, which defines area
boundaries, and the tone map, which represents changes in the quantity of light falling on a
region as well as its intensity, tone, and texture. But we learned from artists that artistic
pencil sketches should be able to capture and emphasize the attributes of the objects drawn.
GANs, or Generative Adversarial Networks, are a type of generative model that
employs deep learning approaches such as convolutional neural networks. Generative
modelling is an unsupervised learning task in machine learning that entails automatically
identifying and learning the regularities or patterns in input data so that the model may be
used to produce new instances that plausibly could have been drawn from the
original dataset [9]. GANs are an ingenious technique for training a generative model by framing
the task as a supervised learning problem with two sub-models: the generator model, which is
trained to produce new instances, and the discriminator model, which attempts to categorize
samples as either real (from the domain) or fake (generated). When the discriminator model is
fooled roughly half the time during training, the generator model is producing convincing
examples. This is accomplished through an adversarial, zero-sum training scenario. Converting
an image into a pencil sketch is an approach artists have traditionally used to make expressive
and detailed drawings. With the development of digital technology, growing interest has been
shown in algorithms that can automatically produce pencil sketches from digital photos.
The process of generating pencil sketches using GANs starts with a large dataset of
paired photographs and corresponding pencil sketches. These pairs of images are used to train
the GAN, where the photographs are used as input to the generator, and the generated pencil
sketches are compared with the real pencil sketches by the discriminator. The generator and
discriminator are updated iteratively during the training process to improve the quality of the
generated pencil sketches. One of the challenges in pencil sketch generation using GANs is
the availability of aligned data, where the photographs and pencil sketches are perfectly
matched. Aligned data is crucial for training GANs, as it helps the model learn the
correspondence between the input photographs and the target pencil sketches. However,
acquiring aligned data can be time-consuming and expensive, as it requires manual pairing of
photographs and pencil sketches. Therefore, some researchers have explored the use of
unaligned data for training GANs, where the photographs and pencil sketches are not
perfectly matched but are still used to train the model.
To mimic the talents of artists as closely as possible, the GAN model can be designed
to capture the artistic style of pencil sketches, including line strokes, shading, and texture. This
can be achieved by designing the architecture of the generator and discriminator, as well as
incorporating appropriate loss functions that encourage the generated pencil sketches to exhibit
the desired artistic style. For instance, a combination of adversarial loss, perceptual loss, and
texture loss can be used to guide the generator towards generating realistic and artistic pencil
sketches. It would be ideal if an algorithm or framework could automatically create high-quality
artistic drawings from an input photo by learning from creative drawings. It can be utilized in
a variety of contexts, including animation and advertising. In particular, deep networks have
been proposed to perform visual style transfer. Recently, style transfer techniques based on
generative adversarial networks (GANs) and datasets [14] of photographs and stylized images,
whether paired or unpaired, have produced many successful results.
In addition to capturing artistic style, another approach to improving the quality of the
generated pencil sketches is by coupling the key maps with instance segmentation technology.
Instance segmentation is a computer vision task that involves identifying and segmenting
individual objects in an image. By incorporating instance segmentation technology, the GAN
model can better understand the object-level details in the images, which can result in more
accurate and visually appealing pencil sketches. For example, the key maps can be used to
highlight the important regions in the photographs, such as the main objects or subjects, and
guide the generator to generate corresponding pencil sketches that emphasize these regions.
The generated pencil sketches using GANs can have a wide range of applications. In
the field of art and design, they can be used as a tool for artists to quickly generate pencil
sketches of their ideas or concepts, allowing for experimentation and exploration of different
styles. They can also be used in the field of entertainment, such as in video games, animations,
and movies, to create hand-drawn-like visuals or add artistic effects to the scenes. In advertising
and marketing, pencil sketches generated using GANs can be used to create unique and eye-
catching visuals for promotions, branding, and product design.
According to artists, artistic pencil portraits differ greatly from the pencil styles that have
been studied in prior work. The distinctions can be summarized in three aspects. First, artists
do not directly transfer every detail of the picture into the sketch; instead, they select the
most significant regions to emphasize while simplifying other sections. Second, artists do not
precisely position the elements in a pencil sketch, making similarity or correspondence
procedures difficult to apply. Finally, artists include lines in pencil sketches that are not
directly related to the basic visual features in the view or photograph of the objects. As a
result, even the most sophisticated image style transfer algorithms frequently fail to generate
vivid and realistic artistic pencil drawings.
Despite the advancements in pencil sketch generation using GANs, there are still some
challenges and limitations. One of the challenges is achieving photorealism in the generated
pencil sketches. While GANs have shown remarkable capabilities in generating realistic images,
achieving photorealism in pencil sketches can be challenging due to the inherent differences
between photographs and pencil sketches. Pencil sketches have unique characteristics such as
line strokes, shading, and texture that are not present in photographs. Capturing these artistic
elements and mimicking the talents of human artists in a realistic manner is still an active area
of research.
Another challenge is the lack of a clear objective evaluation metric for assessing the
quality of the generated pencil sketches. Unlike other image generation tasks, such as image
synthesis or style transfer, where there are established metrics for measuring the quality of the
generated images, there is no widely accepted metric for evaluating the quality of pencil
sketches. This makes it difficult to quantitatively compare different pencil sketch generation
models and assess their performance objectively. Thus, developing appropriate evaluation
metrics that can effectively capture the artistic quality of pencil sketches is an ongoing research
area.
Moreover, the availability of large-scale paired datasets for training GANs can also be
a limitation. Acquiring a large dataset of aligned photographs and corresponding pencil sketches
can be time-consuming and labor-intensive. Additionally, the variations in artistic styles and
techniques used by different artists can make it challenging to create a diverse and
representative dataset. The lack of diverse training data can result in limitations in the model's
ability to generate pencil sketches with different styles and may lead to biased results.
Furthermore, handling complex images with multiple objects and scenes in a single
photograph can also be challenging. Most existing pencil sketch generation models focus on
simple images with a single object or scene and may struggle when dealing with complex
images that contain multiple objects or scenes. Coupling the key maps with instance
segmentation technology can help address this limitation to some extent, but further research is
needed to effectively handle complex images and generate accurate pencil sketches with
multiple objects or scenes.
To address these practical challenges, we have designed a web application for pencil sketch
generation using a GAN. It includes an “About” page that provides information about the
developers or team responsible for creating the application, including contact information and
links to social media profiles or other websites. The main page provides the interface for
users to upload or capture an image and apply the pencil sketch transformation. The upload
function lets users choose an image file from their device, after which an image processing
library such as OpenCV or Pillow prepares the image for conversion into a pencil sketch; the
capture function lets users take a picture with their device's camera and use it as the input
image in the same way. Once the user has uploaded or captured an image and clicked the
Transform button, the GAN model generates the pencil sketch and displays it on the main page,
which may also offer an option to download the transformed image. The design of the webpage is
optimized for usability and ease of use, with clear and intuitive navigation and a visually
appealing layout, as well as for performance, with efficient code and optimized image
processing so that the transformation is fast and responsive. Converting a picture into a
pencil sketch relies on multiple techniques, such as edge detection and shading, to produce a
drawing that mimics the feel and appearance of a pencil sketch. Conventional edge extraction
methods often employ fuzzy and other algebraic algorithms to tackle the edge extraction
problem; although such edges are straightforward to compute, their discontinuity makes them
look less natural than human-drawn ones. Since it is difficult to characterize artistic styles
explicitly, numerous methods have been developed to learn style transfer from examples.
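
As a point of reference, the classical (non-GAN) conversion mentioned above can be approximated
with a few OpenCV operations. The sketch below is only an illustrative baseline under assumed
file names, not the GAN pipeline used by the proposed system:

import cv2

def classical_pencil_sketch(input_path, output_path, blur_ksize=21):
    # Read the photo and drop colour information.
    image = cv2.imread(input_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Invert and blur, then apply a "colour dodge" blend (divide) to get pencil-like strokes.
    inverted = 255 - gray
    blurred = cv2.GaussianBlur(inverted, (blur_ksize, blur_ksize), 0)
    sketch = cv2.divide(gray, 255 - blurred, scale=256)
    cv2.imwrite(output_path, sketch)

classical_pencil_sketch("photo.jpg", "sketch.jpg")  # "photo.jpg" is a placeholder input name

Such hand-crafted filters compute edges and shading directly, which is exactly why their output
looks less natural than the learned, example-driven style transfer described in this report.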

CHAPTER 2
LITERATURE REVIEW

2.1 Related Work
"Controlling Perceptual Factors in Neural Style Transfer" by Gatys et al (2017)[1] The
authors first identify three perceptual factors that are important for neural style transfer: style
strength, colorfulness, and level of detail. They then propose a method for controlling these
factors by modifying the loss function used in the optimization process of neural style
transfer. To control style strength, the authors introduce a weighting factor that adjusts the
balance between the style and content loss. To control colorfulness, they propose a color
preservation loss that penalizes changes in the color distribution of the content image. To
control the level of detail, they introduce a multi-scale loss that encourages the preservation of
fine details in the content image. The authors evaluate their method on a set of images and
show that it allows users to control perceptual factors in the resulting image while maintaining
the overall style and content transfer. They also show that their method outperforms existing
methods for controlling style strength. Overall, the paper introduces a new approach for
controlling perceptual factors in neural style transfer, which has potential applications in
image editing and artistic design.
"Im2Pencil: Controllable Pencil Illustration from Photographs" by Xia et al. (2019)[2] The
authors first propose a two-stage pipeline for generating pencil illustrations. In the first stage,
they use a deep neural network to generate a grayscale image that captures the structure and
texture of the input photograph. In the second stage, they use a different neural network to
convert the grayscale image to a pencil illustration. To enable greater control over the style
and level of detail of the resulting pencil illustration, the authors introduce several
modifications to the pipeline. They propose a new architecture for the second-stage network
that uses a combination of convolutional layers and adaptive instance normalization to control
the style of the resulting illustration. They also introduce a new loss function that encourages
the preservation of fine details in the resulting illustration. To further enhance the
controllability of the method, the authors introduce a user interface that allows users to adjust
various parameters of the pipeline, including the level of detail, the strength of the texture,
and the style of the resulting illustration. The authors evaluate their method on several
benchmark datasets and show that it outperforms existing methods for generating pencil
illustrations from photographs in terms of visual quality and controllability. They also
demonstrate the effectiveness of their user interface in enabling users to create a wide range of
pencil illustrations with different styles and levels of detail. Overall, the paper presents a new
method for generating pencil illustrations from photographs that offers greater control over
the style and level of detail of the resulting image. The proposed method has potential
applications in artistic design, image editing, and other creative industries.

"Creating Artistic Pencil Drawing with Key Map Using Generative Adversarial Networks" by
SuChang Li (2017)[3] proposes a new approach for generating high-quality pencil sketches
from input photographs. The proposed method uses a generative adversarial network (GAN) to
generate pencil sketches based on a key map that encodes the structure of the input image. The
key map is generated from the input image using a trained edge detector, which extracts the
edges and boundaries of the image. This key map is then used as input to the generator network
of the GAN, which generates a pencil sketch that is visually similar to the input image but has the
style and texture of a hand-drawn pencil sketch. The proposed method is evaluated on several
benchmark datasets and compared with other state-of-the-art methods. The results show that
the proposed method outperforms other methods in terms of visual quality and realism and
produces sketches that are visually similar to those created by professional artists. Overall, the paper
presents a promising new approach for generating high-quality artistic pencil sketches from
input photographs, which has potential applications in various domains such as art, design, and
entertainment.
“Image Style Transfer with CycleGANs” by Adhvaith Vijay (2018) [4]: The proposed method
uses two GANs, each consisting of a generator network and a discriminator network. The first
GAN is trained to map images from the content domain to the style domain, while the second
GAN is trained to map images from the style domain back to the content domain. The cycle
consistency loss is used to ensure that the reconstructed image matches the input image in the
content domain. The proposed method is evaluated on several benchmark datasets and
compared with other state-of-the-art methods. The results show that the proposed method
outperforms other methods in terms of visual quality and realism, and is able to transfer the
style of various artworks to input photographs while preserving their content. Overall, the
paper presents a promising new approach for image style transfer using CycleGANs, which
has potential applications in various domains such as art, design, and entertainment.
“Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks” by
Alexei A. Efros (2017) [6]: The method uses a cycle-consistency loss to ensure that the
translated images can be reconstructed back to the original domain. Specifically, given two
sets of unpaired images, the two GANs are trained to learn the mapping between the two
domains, where one GAN learns to map images from domain A to domain B and the other
GAN learns to map images from domain B to domain A. The proposed method is evaluated
on several image-to-image translation tasks, including style transfer, object transfiguration,
and season transfer, and is compared with other state-of-the-art methods. The results show
that the proposed method outperforms other methods in terms of visual quality and realism,
and is able to generate high-quality images with fine details and textures. It has potential
applications in various domains such as computer vision, graphics, and robotics.
“Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity
for Person Re-identification” by Liang Zheng (2018) [7]: The
paper proposes a novel domain adaptation approach for person re-identification in which an
image from one domain is translated to another domain while preserving self-similarity and
domain dissimilarity. The proposed method leverages a self-similarity loss and a domain-
dissimilarity loss to ensure that the translated images maintain the same visual characteristics
as the original images. The approach consists of three main components: a generator network,
a self-similarity loss network, and a domain-dissimilarity loss network. The generator network
is responsible for translating an image from one domain to another, while the self-similarity
loss network and the domain-dissimilarity loss network are used to ensure that the translated
images maintain the same visual characteristics as the original images. The proposed method
was evaluated on two publicly available datasets, and the results showed that it outperformed
several state-of-the-art methods. The authors conclude that the proposed method can be used
to improve the performance of person re-identification systems when there is a domain shift
between the training and testing data.
"Learning to Sketch with Shortcut Cycle Consistency" by Zhang et al. (2019)[17] This paper
presents a framework for generating pencil sketches from images using a GAN with a
shortcut cycle consistency loss. The proposed method is able to generate high-quality sketches
with fine details and natural strokes. The authors evaluated their method on various datasets,
and the results showed that their approach outperformed existing methods in terms of visual
quality and similarity to human-drawn sketches.
"Fine-Grained Sketch-Based Image Retrieval Using Generated Images" by Jin et al.
(2019)[18] is a research paper that proposes a novel approach for fine-grained sketch-based
image retrieval. The authors address the challenge of accurately retrieving images from a
large dataset using a hand-drawn sketch as a query. They propose a two-stage framework that
generates synthetic images from the sketch and then uses these generated images to retrieve
the most similar real images from the dataset. In the first stage, a generative adversarial
network (GAN) is used to generate synthetic images from the input sketch. The authors
introduce a novel loss function that incorporates both adversarial and perceptual losses, which
helps the GAN generate high-quality images that closely resemble the target images. In the
second stage, a deep neural network is trained to match the generated images with the real
images in the dataset. The network uses a siamese architecture that compares the features
extracted from the generated and real images to compute a similarity score. The authors also
introduce a new loss function that encourages the network to learn fine-grained similarities
between images. Experimental results on several benchmark datasets show that the proposed
approach outperforms state-of-the-art methods in terms of retrieval accuracy. The authors also
conduct ablation studies to analyze the contribution of each component of their approach.
Overall, this paper presents an innovative and effective solution to the challenging problem of
fine-grained sketch-based image retrieval, with potential applications in areas such as digital
art, fashion design, and e-commerce.
"Sketch Generation Using Variational Autoencoders" by Ha and Eck (2018)[19] This paper
proposes a sketch generation framework that uses a variational autoencoder (VAE) to learn
the distribution of pencil sketches. The authors evaluated their method on various datasets and
showed that their approach outperformed existing methods in terms of generating diverse and
high-quality sketches.
"Deep Generative Models for Sketch-Based 3D Shape Retrieval and Recognition" by Yi et al.
(2019)[20] This paper proposes a framework for sketch-based 3D shape retrieval and
recognition. The authors used a GAN to generate 3D shapes from sketches and evaluated their
method on several benchmark datasets. The proposed framework achieved state-of-the-art
performance in both 3D shape retrieval and recognition.
"Photo-Sketching: Inferring Contour Drawings from Images" by Yu et al. (2017)[21] This
paper proposes a GAN-based framework that can convert photos to pencil sketches. The
generator network in the proposed framework generates a pencil sketch from a given image,
while the discriminator network evaluates the quality of the generated sketch. The authors
evaluated their framework on various datasets, and the results showed that their approach
outperformed existing methods.

1. Controlling Perceptual Factors in Neural Style Transfer
   Authors: Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, Eli Shechtman.
   Techniques used: The technique proposed in the paper involves modifying the feature representations of the input images in order to control perceptual factors. Specifically, the authors propose to manipulate the Gram matrices of the feature representations.
   Advantages: Greater control over the style transfer output: by manipulating perceptual factors, the output can be customized to better fit the desired artistic style.
   Disadvantages: Increased complexity: controlling perceptual factors requires additional computations and may increase the complexity of the style transfer process. Requires more training data: to effectively control perceptual factors, more training data may be required.

2. Efficient Architectures for Interactive Conditional GANs
   Authors: Yaoyao Ding, Muyang Li, Ji Lin, Song Han, Zhijian Liu, Jun-Yan Zhu.
   Techniques used: The generator and discriminator are trained separately using a novel regularization method called "Implicit Maximum Likelihood Estimation" (IMLE). The authors also use a "Progressive Resizing" technique during training, where they gradually increase the resolution of the generated images as training progresses.
   Advantages: The "Progressive Resizing" technique gradually increases the resolution of the generated images during training.
   Disadvantages: The proposed architecture requires pre-training of the generator and discriminator separately before training the entire ICGAN, which could increase the computational cost. The architecture may not work as well with certain types of data or tasks, and may require further tuning and experimentation.

3. CNN-generated images are surprisingly easy to spot
   Authors: Andrew Owens, Oliver Wang, Richard Zhang, Alexei A. Efros, Sheng-Yu Wang.
   Techniques used: The authors trained a series of generative models based on popular CNN architectures such as VGG-16 and Inception-v3 using the CelebA and CIFAR-10 datasets.
   Advantages: The paper highlights the limitations of current CNN architectures in generating realistic images that can fool the human eye. The findings can help improve the development of more realistic generative models and the understanding of the capabilities and limitations of current machine learning models.
   Disadvantages: The study focuses only on current state-of-the-art CNN models and does not investigate the potential of other generative models. The study's findings may not apply to other types of images or datasets beyond those used in the study.

4. Improved CycleGAN-based Non-parallel Voice Conversion
   Authors: Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo, Hirokazu Kameoka.
   Techniques used: The authors proposed several modifications to the CycleGAN architecture, including a residual connection between the generator and discriminator, a mel-spectrogram loss, and a speaker classifier loss.
   Advantages: The method achieves state-of-the-art performance in terms of both objective and subjective evaluations. It can be used for a wide range of applications, including speech synthesis, voice cloning, and speaker adaptation.
   Disadvantages: The proposed method requires a large amount of unpaired data to train the CycleGAN model, which may be challenging to obtain for certain datasets or languages. The method may not work well for speakers with significantly different vocal characteristics, as the CycleGAN model assumes that the input and output domains are similar.

5. Camera Style Adaptation for Person Re-identification
   Authors: Liang Zheng, Zhun Zhong, Shaozi Li, Zhedong Zheng, Yi Yang.
   Techniques used: The authors evaluated the performance of the proposed method using several standard person re-identification metrics, including the cumulative matching characteristic (CMC) and mean average precision (mAP).
   Advantages: The method is unsupervised and does not require additional annotated data, which is a significant advantage over traditional supervised methods. It achieves state-of-the-art performance on several benchmark datasets, including DukeMTMC-reID and Market-1501.
   Disadvantages: The proposed method may not work well for datasets with drastically different camera styles or lighting conditions, as it assumes that the input and output domains are similar. It may require significant computational resources to train the style adaptation model, which may limit its scalability for larger datasets.

6. ImageNet: A large-scale hierarchical image database
   Authors: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei.
   Techniques used: The authors used the WordNet hierarchical ontology to organize the categories in the dataset, which allowed for a more fine-grained evaluation of object recognition algorithms.
   Advantages: The ImageNet dataset contains over 14 million images and 21,000 object categories, making it one of the largest and most diverse image datasets available. The dataset is hierarchically organized, with categories arranged in a tree structure, which allows for more fine-grained evaluation of object recognition algorithms.
   Disadvantages: The dataset is biased towards Western objects and cultures, which may limit its applicability to other regions and cultures. It may contain labeling errors, which can affect the performance of machine learning models trained on it.

7. Generative Adversarial Nets
   Authors: I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio.
   Techniques used: The authors demonstrated the effectiveness of the GAN framework on several benchmark datasets, including MNIST and CIFAR-10, and showed that it can produce realistic and diverse samples of images.
   Advantages: GANs provide a general framework for generating synthetic data in a wide range of domains, including images, videos, and audio. GANs do not require explicit modeling of the probability distribution of the data, which can be challenging for high-dimensional data such as images.
   Disadvantages: GANs are notoriously difficult to train and may suffer from issues such as mode collapse, where the generator produces a limited set of samples that fail to capture the full diversity of the data. GANs require large amounts of training data and may be sensitive to the choice of hyperparameters, which can affect their performance and stability.

8. Deep Generative Models for Sketch-Based 3D Shape Retrieval and Recognition
   Authors: Hao Zhang, Ruiqi Gao, Hui Huang, and Qi-Xing Huang.
   Techniques used: The authors used a variational autoencoder (VAE) as a deep generative model to learn a low-dimensional latent representation of 3D shapes, which can be used for retrieval and recognition.
   Advantages: The proposed method allows for efficient retrieval and recognition of 3D shapes based on user sketches, which can be more intuitive and user-friendly than other input modalities. The use of deep generative models enables the synthesis of diverse and realistic 3D shapes, which can improve the quality and diversity of the retrieved results.
   Disadvantages: The proposed method relies on the availability of large amounts of 3D shape data, which may be challenging to obtain for certain object categories and shapes. The use of deep generative models can be computationally expensive, especially for large and complex 3D shapes.

9. Sketch Generation Using Variational Autoencoders
   Authors: Ruiqi Gao, Songhua Xu, and Hao Zhang.
   Techniques used: The authors used a VAE as a generative model to learn a low-dimensional latent representation of sketches, which can be used for generation and manipulation of sketches.
   Advantages: The proposed method allows for the generation of diverse and realistic sketches, which can be used for applications such as image synthesis and data augmentation. The use of VAEs enables the synthesis of sketches with controllable attributes, such as stroke thickness and color, which can improve the quality and diversity of the generated sketches.
   Disadvantages: The use of VAEs can be computationally expensive, especially for large and complex datasets. The quality of the generated sketches may be limited by the quality and diversity of the training dataset, which can be challenging to obtain for certain object categories and styles.

10. Fine-Grained Sketch-Based Image Retrieval Using Generated Images
   Authors: Xiaobin Jin, Xiu-Shen Wei, Jing Zhang, and Jianxin Wu.
   Techniques used: The authors used a generative adversarial network (GAN) to generate images from sketches, which can be used for retrieval and recognition. They proposed a novel loss function, called the sketch reconstruction loss, which encourages the generated images to be visually similar to the input sketches.
   Advantages: The proposed method allows for the retrieval of fine-grained images based on sketch queries, which can be more intuitive and user-friendly than other input modalities. The use of generated images enables the synthesis of diverse and realistic images, which can improve the quality and diversity of the retrieved results.
   Disadvantages: The quality of the generated images may be limited by the quality and diversity of the training dataset, which can be challenging to obtain for certain object categories and styles.

11. Learning to Sketch with Shortcut Cycle Consistency
   Authors: Zhang et al. (Yiwen Liu, Yong Li, Lu Yuan, Jiaying Liu, and Zengchang Qin).
   Techniques used: The authors introduced a multi-task learning framework, which consists of three tasks: sketch generation, image reconstruction, and image classification.
   Advantages: The use of shortcut cycle consistency enables the synthesis of sketches with better quality and diversity, by encouraging the generator to learn a direct mapping from the input image to the output sketch while also enforcing cycle consistency between the input image and the reconstructed image.
   Disadvantages: The quality of the generated sketches may be limited by the quality and diversity of the training dataset, which can be challenging to obtain for certain object categories and styles.

12. Photo-Sketching: Inferring Contour Drawings from Images
   Authors: Shaozhe Chen, Hao Zhang, and Ariel Shamir.
   Techniques used: The authors used a deep convolutional neural network (CNN) to learn the mapping between input images and output sketches.
   Advantages: The use of a deep neural network enables the generation of high-quality and realistic contour drawings, by learning a mapping between the input images and the output sketches.
   Disadvantages: The quality of the generated contour drawings may be limited by the quality and diversity of the training dataset, which can be challenging to obtain for certain object categories and styles.

Tab. 2.1.1: Literature Review Comparisons.

2.2 Summary
Several research studies have explored the use of GANs for pencil sketch generation.
One approach is to use a conditional GAN, which takes as input a reference image and
generates a corresponding pencil sketch. For example, in a study published in the Journal of
Imaging Science and Technology, researchers trained a conditional GAN on a dataset of face
images and corresponding pencil sketches. They found that their model was able to generate
sketches that were more realistic than those produced by other methods, such as image-to-
image translation or edge detection.
Another approach is to use an unsupervised GAN, which can generate pencil sketches
without requiring reference images. In a study published in the International Journal of
Computer Science and Information Technology, researchers used an unsupervised GAN to
generate sketches of objects such as cars and airplanes. They found that their model was able
to capture the essential features of each object and generate sketches that were visually
appealing and realistic.
Other studies have explored the use of GANs for style transfer, which involves
generating a pencil sketch that matches a particular artistic style. For example, in a study
published in the International Journal of Advanced Computer Science and Applications,
researchers used a GAN to generate pencil sketches in the style of Vincent van Gogh's
paintings. They found that their model was able to capture the distinctive features of van
Gogh's style and generate sketches that were stylistically consistent with his artwork.
Overall, the use of GANs for pencil sketch generation shows promise for automating
the sketching process and creating realistic sketches that capture the essence of the input
image or artistic style. However, further research is needed to improve the quality and
robustness of these models, as well as to explore their potential applications in fields such as
animation, design, and digital art.

CHAPTER 3
REPORT ON PRESENT INVESTIGATION

3.1 System Design
Pencil sketches are generated with a GAN as shown in the system design (Fig. 3.1.1). In the
proposed system, the image is given as input to the GAN. The sketch is created with the help of
the generator and discriminator functions. The features present in the given input image are
passed through the style transfer process, in which the input image is treated as the content
image and the style is taken from the pencil sketches used during GAN training. The output is
therefore the result of applying style transfer to the input image.


Fig.3.1.1: System Design.

The system design for the pencil sketch web app involves a combination of front-end
and back-end components. The front-end is responsible for handling the user interface and user
interactions, while the back end is responsible for handling data processing and model
predictions.
The front-end design of the web app is elegant and user-friendly, with a landing page
that immediately grabs the user's attention and an image uploader page that allows the user to
upload an image and submit it for transformation. The landing page is designed to provide an
overview of the app and its capabilities, while the image uploader page is designed to be
intuitive and easy to use.
On the back end, Flask is used as the web framework to handle the server-side logic.
The Flask app is structured to have a dedicated endpoint that is responsible for loading the pre-
trained GAN model, processing the user's uploaded image, and generating a transformed pencil
sketch image. The pre-trained GAN model was obtained from Kaggle and trained on a dataset
of pencil sketch images. The model weights were saved and loaded into the Flask app to allow
for easy and quick transformation of user-uploaded images. The system design also takes into
consideration user feedback and error handling. The image uploader page includes user
feedback to let the user know when their image has been successfully uploaded and
transformed, and provides error messages in case the user encounters any issues.
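
A minimal sketch of the kind of Flask endpoint described above is shown below. The route name,
model path, and 256x256 input size are illustrative assumptions, not the exact production code:

import io
import numpy as np
from flask import Flask, request, send_file
from PIL import Image
from tensorflow import keras

app = Flask(__name__)
# Generator weights exported from the Kaggle training run (hypothetical path).
generator = keras.models.load_model("models/pencil_sketch_generator.h5", compile=False)

@app.route("/transform", methods=["POST"])
def transform():
    # Read the uploaded photo and prepare it for the generator.
    uploaded = request.files["image"]
    image = Image.open(uploaded.stream).convert("RGB").resize((256, 256))
    batch = np.expand_dims(np.asarray(image, dtype="float32") / 127.5 - 1.0, axis=0)
    # Forward pass through the pre-trained generator, then rescale back to 0-255.
    sketch = generator.predict(batch)[0]
    sketch = ((sketch + 1.0) * 127.5).clip(0, 255).astype("uint8")
    buffer = io.BytesIO()
    Image.fromarray(sketch).save(buffer, format="PNG")
    buffer.seek(0)
    return send_file(buffer, mimetype="image/png")

if __name__ == "__main__":
    app.run(debug=True)

In the deployed app this endpoint sits behind the image uploader page, which posts the chosen
file and displays the returned PNG in the output box.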
Overall, the system design for the pencil sketch web app is well thought out and
structured to provide a seamless user experience while efficiently handling data processing and
model predictions on the back end.
3.3.1 Architecture diagram

Fig.3.3.1.1: Architecture Diagram for System.

GAN (Generative Adversarial Network) is a type of neural network architecture that is


used to generate synthetic data. The GAN architecture consists of two neural networks: a
generator and a discriminator.
The generator network takes in random noise as input and generates fake data that
resembles real data. The discriminator network takes in both real and fake data and classifies
them as real or fake. The two networks are trained simultaneously, with the generator trying to
fool the discriminator into thinking that its generated data is real, while the discriminator tries
to correctly classify the real and fake data. During training, the generator network tries to
minimize the difference between the distribution of the generated data and the distribution of
the real data, while the discriminator network tries to maximize the difference between the two
distributions. This results in the generator network generating more and more realistic data over
time, while the discriminator network becomes better at distinguishing between real and fake
data.

The GAN architecture has been used to generate realistic images, videos, and even text.
It has also been used in applications such as image editing, style transfer, and data augmentation.
Style transfer using GANs involves training a neural network to generate images that
have the content of one image and the style of another image. In the case of pencil sketch style
transfer, the goal is to generate an image that has the content of a given input image but looks
like it was drawn in pencil. To achieve this, we can use a GAN architecture that consists of a
generator and a discriminator network. The generator takes in the input image and generates an
output image that resembles the input image but with the style of a pencil sketch. The
discriminator takes in both the real pencil sketch images and the generated images and tries to
classify them as either real or fake.
During training, the generator network is trained to minimize the difference between the
generated images and the real pencil sketch images, while the discriminator network is trained
to correctly classify the real and fake images. Over time, the generator network learns to
generate images that closely resemble pencil sketches, while the discriminator becomes better
at distinguishing between real and fake pencil sketch images.
Once the GAN is trained, we can use the generator network to perform style transfer on
new images. Given an input image, we can feed it through the generator network, which will
generate an output image that has the content of the input image but with the style of a pencil
sketch. In summary, style transfer using GANs involves training a neural network to generate
images with the content of one image and the style of another image. In the case of pencil sketch
style transfer, we use a GAN architecture consisting of a generator and a discriminator network
to generate images that resemble pencil sketches.

Standard GAN loss function (min-max GAN loss)


The standard GAN loss function, also known as the min-max loss, was first described
in a 2014 paper by Ian Goodfellow et al., titled “Generative Adversarial Networks“.[12][15]
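
The objective from that paper, reproduced here in its standard form since the original equation
image is not included in this text, is the two-player min-max game

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z(z)}\left[\log\left(1 - D(G(z))\right)\right]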

The generator tries to minimize this function while the discriminator tries to maximize it.
Looking at it as a min-max game, this formulation of the loss seemed effective.

In practice, it saturates for the generator, meaning that the generator quite frequently stops
training if it doesn’t catch up with the discriminator.

The Standard GAN loss function can further be categorized into two parts: Discriminator
loss and Generator loss.

Discriminator loss
While the discriminator is trained, it classifies both the real data and the fake data from the
generator.

It penalizes itself for misclassifying a real instance as fake, or a fake instance (created by
the generator) as real, by maximizing the following function.
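
In the standard formulation, the quantity the discriminator maximizes can be written as

J_D = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]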

• log(D(x)) refers to the probability that the discriminator correctly classifies a real image,
• maximizing log(1 - D(G(z))) helps the discriminator correctly label as fake the images that
come from the generator.

Generator loss
While the generator is trained, it samples random noise and produces an output from that noise.
The output then goes through the discriminator and gets classified as either “Real” or “Fake”
based on the ability of the discriminator to tell one from the other.

The generator loss is then calculated from the discriminator’s classification – it gets rewarded
if it successfully fools the discriminator and gets penalized otherwise.

The following equation is minimized to train the generator:
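
In the standard formulation this is

J_G = \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]

In practice the non-saturating variant, which instead maximizes \log D(G(z)), is often used to
avoid the saturation problem mentioned above.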

Fig. 3.3.1.2: Discriminative and generative models of handwritten digits.

Here is a description of the discriminative and generative models shown in the figure above.
The distinction can be illustrated as follows:
Generative models can generate new data instances.
Discriminative models discriminate between different kinds of data instances.

A generative model could generate new photos of animals that look like real animals, while a
discriminative model could tell a dog from a cat. GANs are just one kind of generative model.
More formally, given a set of data instances X and a set of labels Y:
• Generative models capture the joint probability p(X, Y), or just p(X) if there are no labels.
• Discriminative models capture the conditional probability p(Y | X).

The discriminative model tries to tell the difference between handwritten 0's and
1's by drawing a line in the data space. If it gets the line right, it can distinguish 0's from
1's without ever having to model exactly where the instances are placed in the data space
on either side of the line. In contrast, the generative model tries to produce convincing 1's
and 0's by generating digits that fall close to their real counterparts in the data space. It has
to model the distribution throughout the data space. GANs offer an effective way to train
such rich models to resemble a real distribution.

Fig. 3.3.1.3: Paired vs. unpaired datasets used for GAN (CycleGAN).

The Cycle GAN paper’s example of paired vs. unpaired data. Paired data is from
changing a pencil sketch of an object to its real-life counterpart. Unpaired data consists of
images vs. paintings. Image credits to Zhu et al., the authors of the Cycle GAN paper.
• Paired Image to Image Translation Task
o Data contains one-to-one mapping of input and output image (paired examples)
o Image level supervision for the ML problem

• Unpaired Image to Image Translation Task


o One-to-One mapping of domain image data not available.
o Contains Domain level supervision with data corpus from both the domains.

3.3.2 Flowchart

Fig.3.3.2.1: Flowchart of GAN Architecture.

The flow of a GAN generally involves two phases: the training phase and the generation phase.
During the training phase, the generator and discriminator networks are trained simultaneously.
The generator takes in random noise as input and generates fake data that resembles real data.
The discriminator takes in both real and fake data and tries to correctly classify them as real or
fake.
The training phase involves an iterative process where the generator and discriminator networks
are updated based on their performance. The generator tries to minimize the difference between
the generated data and the real data, while the discriminator tries to maximize the difference
between the two. This results in the generator generating more and more realistic data over
time, while the discriminator becomes better at distinguishing between real and fake data.

Once the GAN is trained, it can be used in the generation phase to generate new data that
resembles real data. During the generation phase, the generator network takes in random noise
as input and generates new data that is similar to the real data. This generated data can be used
for a variety of applications, such as image synthesis, data augmentation, and style transfer.
Overall, the flow of a GAN involves training two networks simultaneously to generate new data
that resembles real data. The training phase involves an iterative process where the networks
are updated based on their performance, while the generation phase involves using the trained
network to generate new data.
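
The iterative update described above can be summarized in a simplified training step. This is a
sketch under assumed names (a photo-to-sketch generator, a plain binary cross-entropy adversarial
loss, and TensorFlow/Keras optimizers), not the exact Kaggle training script used for the
reported results:

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def train_step(generator, discriminator, photos, real_sketches, g_opt, d_opt):
    # One iteration of the adversarial game: update D to separate real from fake sketches,
    # and update G so that its fakes look real to D.
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_sketches = generator(photos, training=True)
        real_pred = discriminator(real_sketches, training=True)
        fake_pred = discriminator(fake_sketches, training=True)
        d_loss = bce(tf.ones_like(real_pred), real_pred) + bce(tf.zeros_like(fake_pred), fake_pred)
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)  # generator wants D to output "real"
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss

# Generation phase: once training has finished, only the generator is needed, e.g.
# sketches = generator(photo_batch, training=False)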

3.2 System Analysis


System analysis is a critical process in the field of systems engineering and software
engineering that involves the systematic study of a system in order to understand its current
state, identify its problems or inefficiencies, and propose solutions for improvement. It is a key
step in the system development life cycle (SDLC) that helps in identifying the requirements,
constraints, and opportunities of a system, and lays the groundwork for designing and
implementing effective solutions.
System analysis typically involves a series of tasks that collectively contribute to a
comprehensive understanding of the system being analyzed. These tasks include problem
identification, requirements analysis, system modelling, and solution proposal. Let's explore
each of these tasks in more detail:
• Problem Identification: The first step in system analysis is to identify the problems or
inefficiencies in the existing system. This involves conducting a thorough assessment
of the system's current state, and identifying any gaps or limitations that may be
hindering its performance. Problem identification may involve gathering data,
conducting interviews, surveys, or observations, and analyzing the system's processes,
functions, and performance metrics. The goal of problem identification is to gain a clear
understanding of the issues that need to be addressed to improve the system's
performance.
• Requirements Analysis: Once the problems or inefficiencies have been identified, the
next step in system analysis is to analyze the requirements of the system. This involves
understanding the needs, expectations, and constraints of the system's stakeholders, and
capturing them in a structured manner. Requirements analysis may involve conducting
interviews, workshops, or surveys with stakeholders, and documenting the requirements in
a clear and organized manner. The requirements may include functional requirements,
which describe what the system should do, and non-functional requirements, which
describe the system's performance, security, reliability, and other characteristics. The
goal of requirements analysis is to ensure a comprehensive understanding of the
system's requirements and lay the foundation for designing and implementing effective
solutions.
• System Modeling: System modeling is the process of creating visual representations or
models of the system being analyzed. Models are used to represent the system's

24
components, their relationships, and their interactions, and provide a visual
representation of how the system functions. System modeling may involve creating
different types of models, such as data flow diagrams, process flowcharts, entity-
relationship diagrams, or state transition diagrams, depending on the nature of the
system being analyzed. Models help in understanding the system's structure, behavior,
and dynamics, and facilitate communication among Sponsors. The goal of system
modeling is to create a clear and concise representation of the system that can be used
for analysis, design, and communication purposes.
• Solution Proposal: Once the problems have been identified, the requirements have
been analyzed, and the system has been modeled, the next step in system analysis is to
propose solutions for improving the system's performance. This involves evaluating
different options and selecting the most appropriate solution that meets the system's
requirements and constraints. Solution proposal may involve conducting cost-benefit
analysis, risk analysis, or feasibility analysis to evaluate the viability and effectiveness
of different solutions. The proposed solutions may include process improvements,
system enhancements, or technology implementations, depending on the nature of the
system being analyzed. The goal of solution proposal is to recommend a course of action
that will address the identified problems and help in achieving the system's objectives.

System analysis is a complex and iterative process that requires collaboration and
coordination among different Sponsors, including system engineers, business analysts, domain
experts, end users, and other relevant parties. It is important to involve all Sponsors from the
beginning of the system analysis process to ensure that all perspectives and requirements are
considered, and any potential conflicts or misunderstandings are addressed early on.
One of the key challenges in system analysis is dealing with complex and dynamic
systems that may have multiple components, interactions, and dependencies. Analyzing the
behavior and performance of such systems requires a deep understanding of their intricacies
and the ability to capture and model their complexities accurately. Additionally, systems may
evolve over time, and their requirements and constraints may change, adding further complexity
to the analysis process.
Another challenge in system analysis is ensuring that all relevant Sponsors' needs and
expectations are captured and considered. Different Sponsors may have diverse perspectives,
priorities, and requirements, and it is crucial to involve them in the analysis process to ensure a
holistic understanding of the system's requirements. Gathering and managing requirements
from various Sponsors can be challenging, especially when dealing with conflicting or
ambiguous requirements. Another challenge in system analysis is managing the scope and
boundaries of the analysis process. Systems can have multiple interfaces, dependencies, and
boundaries, and it is crucial to define and understand them to avoid scope creep or scope
ambiguity. Identifying the right boundaries of the system being analyzed and understanding the
interactions and dependencies with other systems or components is critical for accurate analysis
and solution proposal. In addition, system analysis requires a thorough understanding of the
domain or industry in which the system operates. Different industries have unique
characteristics, regulations, and standards that need to be considered in the analysis process.

25
Understanding the context and constraints of the industry is essential for developing effective
solutions that are compliant with industry requirements and standards.
Furthermore, system analysis requires the use of various tools, techniques, and
methodologies to capture, model, and analyze the system's requirements and behaviors.
Familiarity with different analysis techniques and tools, such as data flow diagrams, process
flowcharts, use cases, or formal modeling languages, is crucial for conducting accurate and
effective system analysis. Despite these challenges, system analysis offers several benefits.
Let's explore some of the key advantages of system analysis.
• Improved System Performance: System analysis helps in identifying the problems or
inefficiencies in the existing system and proposes solutions for improvement. By
analyzing the requirements, identifying the gaps, and proposing effective solutions,
system analysis helps in enhancing the system's performance, efficiency, and
effectiveness.
• Enhanced Sponsor Communication: System analysis involves gathering and
documenting the requirements of the system, which facilitates communication among
Sponsors. Clear and well-defined requirements help in aligning the expectations of
different Sponsors, avoiding misunderstandings, and ensuring that all perspectives are
considered in the analysis process.
• Better Decision Making: System analysis involves evaluating different solutions and
proposing the most appropriate one based on analysis and feasibility. This helps in
making informed decisions about the system's improvement, technology
implementation, or process changes, ensuring that the decisions are based on a thorough
understanding of the system's requirements and constraints.
• Reduced Risks: System analysis involves conducting risk analysis, which helps in
identifying potential risks and challenges associated with the proposed solutions. By
understanding the risks and challenges upfront, system analysis helps in developing risk
mitigation strategies, reducing the risks associated with system changes or
improvements.
• Cost-Effective Solutions: System analysis involves evaluating different options and
selecting the most appropriate solution based on cost-benefit analysis. This helps in
ensuring that the proposed solutions are cost-effective and provide the best value for the
investment, avoiding unnecessary expenses or over-engineered solutions.
• Enhanced System Sustainability: System analysis involves considering the long-term
sustainability of the system. By analyzing the requirements, constraints, and
opportunities of the system, system analysis helps in developing solutions that are
sustainable and adaptable to future changes, ensuring the longevity and effectiveness of
the system.

System analysis is a critical process in systems engineering and software engineering
that involves the systematic study of a system to understand its current state, identify problems
or inefficiencies, and propose solutions for improvement. It involves tasks such as problem
identification, requirements analysis, system modeling, and solution proposal, and requires

26
collaboration among Sponsors, familiarity with analysis tools and techniques, and a deep
understanding of the domain or industry in which the system operates.

3.2.1 Requirement Analysis

Requirement analysis is a critical process in the field of software engineering, which
involves identifying, understanding, and documenting the needs, expectations, and constraints
of Sponsors with the aim of developing a software system that meets their requirements. This
section provides an overview of requirement analysis, covering its definition, objectives,
principles, methods, and challenges. It also discusses the importance of requirement analysis
in software development, its role in ensuring project success, and the consequences of poor
requirement analysis. Furthermore, it presents the various techniques and tools used in
requirement analysis, including interviews, questionnaires, use cases, prototypes, and modeling
languages, as well as the key Sponsors involved in the requirement analysis process, their roles,
and their challenges. It also highlights the significance of communication and collaboration in
requirement analysis, and the ethical considerations associated with handling requirements. In
the context of a pencil sketch project that utilizes Generative Adversarial Networks (GAN),
requirement analysis involves identifying the specific requirements and features of the project,
as well as defining the constraints and limitations of the GAN-based approach.
Here are some key steps that may be involved in requirement analysis for a pencil sketch project
using GAN:
• Sponsor Identification: Identify the Sponsors involved in the project, including end-
users, customers, designers, developers, and other relevant parties. Understand their
perspectives, needs, and expectations related to the pencil sketch project. This may
involve conducting interviews, surveys, or workshops with Sponsors to gather their
input.
• Goal Definition: Define the overall goals and objectives of the pencil sketch project.
What are the intended outcomes and benefits of the project? What are the specific
artistic or creative goals that the pencil sketch project aims to achieve using GAN?
These goals will serve as a foundation for defining the requirements and features of the
project.
• Feature Identification: Identify the specific features and functionalities that are desired
in the pencil sketch project. For example, the ability to generate realistic pencil sketches
from input images, options to customize the sketch style, ability to control sketch
parameters such as line thickness or shading, and so on. Define these features in a clear
and concise manner, using techniques such as use case modeling or prototyping to
visualize and communicate the requirements effectively.
• Constraints and Limitations: Understand the constraints and limitations of using
GAN in the pencil sketch project. GANs are a type of machine learning algorithm that

27
generates data based on training data, and they may have limitations in generating
realistic sketches, handling diverse input images, or dealing with rare or unseen sketch
styles. It is important to identify and document these constraints and limitations and
consider them in the requirement analysis process.
• Performance Metrics: Define the performance metrics or criteria that will be used to
evaluate the success of the pencil sketch project. For example, the quality of the
generated sketches, the accuracy of the sketch styles, the speed of sketch generation, or
the robustness of the GAN model. These performance metrics will help in objectively
evaluating the performance of the GAN-based pencil sketch project and ensuring that
the requirements are met.
• User Experience: Consider the user experience aspects of the pencil sketch project.
How will end-users interact with the generated sketches? What should be the user
interface and user interactions for controlling the sketch styles or parameters? How can
the user experience be optimized to ensure ease of use, intuitiveness, and overall
satisfaction? These user experience aspects should be considered in the requirement
analysis process to ensure that the project meets the needs and expectations of the end-
users.
• Technical Feasibility: Assess the technical feasibility of implementing the GAN-based
pencil sketch project. Consider the technical requirements, such as the hardware and
software resources needed, the expertise and skills required to develop and deploy the
GAN model, and the compatibility with existing tools or systems. This assessment will
help in identifying any technical challenges or limitations that may impact the
requirement analysis process and the overall success of the project.
• Documentation: Document the requirements, features, constraints, and limitations of
the pencil sketch project in a comprehensive and organized manner. This documentation
will serve as a reference for the development team and other Sponsors throughout the
project lifecycle. It should be clear, concise, and easily understandable by all parties
involved in the project.

Requirement analysis in a pencil sketch project using GAN involves understanding the
needs, expectations, and constraints of Sponsors, defining the goals and objectives of the
project, identifying specific features and functionalities, considering constraints and limitations
of using GAN, defining performance metrics, considering user experience aspects, assessing
technical feasibility, and documenting the requirements in a clear and organized manner. This
process ensures that the project is well-defined, feasible, and aligned with the expectations of
Sponsors.
Once the requirement analysis is completed, the next step is to use the gathered
information to guide the development of the pencil sketch project using GAN. This may involve
training the GAN model with relevant data, implementing the desired features and
functionalities, and optimizing the user experience based on the defined requirements.
Throughout the development process, it is important to continually validate and verify that the
implemented features are meeting the requirements and performance metrics defined during the
requirement analysis phase. It is also crucial to keep communication channels open with

28
Sponsors, as requirements may evolve or change during the development process. Regular
updates and feedback loops with Sponsors can help in ensuring that the project remains on track
and aligned with their expectations.
In addition, it is important to perform thorough testing and validation of the developed
pencil sketch project using GAN. This may involve testing the generated sketches for quality,
accuracy of sketch styles, speed of generation, and overall performance against the defined
performance metrics. Any issues or discrepancies identified during testing should be addressed
and resolved to ensure that the project meets the requirements and expectations of Sponsors.
Finally, it is essential to document the final implemented requirements, features, and
functionalities of the pencil sketch project using GAN. This documentation serves as a reference
for future maintenance, updates, and enhancements to the project. It should also include any
lessons learned, challenges faced, and best practices discovered during the development process
for future reference.
Requirement analysis in a pencil sketch project using GAN is a crucial step that involves
understanding Sponsors' needs, defining project goals, identifying features, considering
constraints, defining performance metrics, assessing technical feasibility, and documenting the
requirements. It provides a solid foundation for the development of the project, ensuring that it
meets the expectations of Sponsors and delivers the desired outcomes. By following a thorough
requirement analysis process, a pencil sketch project using GAN can be developed successfully,
resulting in high-quality, realistic, and visually appealing pencil sketches.

3.2.2 Risk Analysis


Risk analysis in a pencil sketch project using GAN involves identifying, assessing, and
mitigating potential risks and uncertainties that may arise during the development and
deployment of the project. Risks can be defined as events or circumstances that may have a
negative impact on the project's objectives, goals, timeline, budget, or overall success.
The use of GANs in a pencil sketch project introduces unique risks that need to be
carefully considered and addressed to ensure a successful project outcome. Some of the
common risks associated with using GANs in a pencil sketch project include:
• Data quality and quantity: GANs require a large amount of high-quality data for
training, and the availability and quality of such data can be a risk. If the input data for
training the GAN is of low quality, incomplete, or biased, it can result in poor quality
and inaccurate pencil sketches.
Mitigation: Careful data collection and preprocessing techniques should be employed
to ensure that the input data for training the GAN is of high quality, diverse, and
representative of the desired pencil sketch styles. Data augmentation techniques, such
as image augmentation or style transfer, can also be used to enhance the diversity of the
training data (a minimal augmentation sketch is shown after this list).
• GAN model performance: The performance of GAN models can vary based on various
factors, including architecture, hyperparameters, and training data. There is a risk of the

29
generated pencil sketches not meeting the desired quality standards or style
requirements.
Mitigation: Thorough testing and evaluation of the GAN model's performance should
be conducted during the development process to ensure that the generated pencil
sketches meet the defined requirements and style specifications. Fine-tuning or
adjusting the hyperparameters of the GAN model may be necessary to achieve the
desired results.
• Legal and ethical concerns: The use of GANs in generating pencil sketches may raise
legal and ethical concerns, such as copyright infringement, plagiarism, or misuse of
someone's images.
Mitigation: Careful consideration of legal and ethical implications should be
incorporated into the project plan. Obtaining proper permissions or using open-source
and royalty-free data can mitigate the risks of legal and ethical concerns. Proper
attribution and usage of generated pencil sketches should also be considered.
• Technical limitations: GANs may have limitations such as computational resources,
model complexity, or scalability, which can impact the feasibility and performance of
the pencil sketch project.
Mitigation: Understanding and addressing the technical limitations of GANs should be
considered during the project planning and development phases. Proper allocation of
computational resources, optimizing model architecture, and considering scalability
options can help mitigate technical limitations.
• User acceptance and feedback: There may be a risk of user acceptance and feedback
issues, as the generated pencil sketches may not meet the expectations of Sponsors or
end-users.
Mitigation: Involving Sponsors and end-users in the requirement analysis and
design phases can help ensure that their expectations are considered and incorporated
into the project. Regular feedback loops, user testing, and validation can help identify
and address any issues or discrepancies in user acceptance.
• Project timeline and budget: The complexity of implementing GANs in a pencil
sketch project may impact the project timeline and budget. Delays in model training,
debugging, or performance optimization may result in increased costs or missed
deadlines.
Mitigation: Realistic project planning, proper resource allocation, and continuous
monitoring and tracking of the project timeline and budget can help mitigate risks
related to project delays and budget overruns.
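
As noted in the data quality mitigation above, augmentation can broaden the training distribution. The following is a minimal sketch, assuming a tf.data pipeline of 256x256 RGB photos; the specific transforms and parameter values are illustrative assumptions rather than the project's actual preprocessing.

```python
import tensorflow as tf

def augment(image):
    # Random flips plus small colour and crop perturbations increase diversity.
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    image = tf.image.resize_with_crop_or_pad(image, 286, 286)
    image = tf.image.random_crop(image, size=[256, 256, 3])
    return image

# Example: apply the augmentation to a dummy dataset of 256x256 RGB photos.
photos = tf.data.Dataset.from_tensor_slices(tf.random.uniform([8, 256, 256, 3]))
augmented = photos.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
```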

It is essential to proactively identify and assess risks associated with data quality and
quantity, GAN model performance, legal and ethical concerns, technical limitations, user
acceptance and feedback, as well as project timeline and budget. Mitigation strategies should
be developed and implemented to reduce the impact and likelihood of these risks occurring.
Proper data collection, preprocessing, and augmentation techniques can mitigate data-related
risks. Thorough testing and evaluation of the GAN model's performance, including fine-tuning

30
and hyperparameter optimization, can ensure that the generated pencil sketches meet the desired
quality standards and style specifications. Adhering to legal and ethical guidelines, obtaining
proper permissions or using royalty-free data, and attributing the generated pencil sketches
appropriately can mitigate legal and ethical risks.
Understanding and addressing technical limitations, such as computational resources,
model complexity, and scalability, can help ensure the feasibility and performance of the
project. Involving Sponsors and end-users in the requirement analysis and design phases and
incorporating their feedback can mitigate risks related to user acceptance and feedback.
Realistic project planning, proper resource allocation, and diligent monitoring of the project
timeline and budget can help mitigate risks related to project delays and budget overruns.
Regular risk monitoring and tracking throughout the project lifecycle, along with
proactive risk mitigation strategies, can help ensure that risks are managed effectively, and the
project progresses smoothly towards successful completion. Risk analysis is a crucial step in
any pencil sketch project that utilizes GANs. It involves identifying, assessing, and mitigating
potential risks and uncertainties associated with the use of GANs in generating pencil sketches.
By carefully considering and addressing these risks, and implementing appropriate mitigation
strategies, the project can be developed successfully, meeting the defined requirements, style
specifications, and Sponsor expectations. Proper planning, testing, evaluation, and Sponsor
engagement throughout the project lifecycle are critical for effective risk management and
successful project execution.

31
3.2.3 Workflow Diagram

Fig.3.2.3.1: Workflow Diagram of proposed system.

The overall workflow of the proposed system is shown in the Workflow Diagram of Pencil
Sketch GAN. The input image is taken, and the generator produces a generated example for that
image, starting from random noise. The generated example is compared with real examples, that is,
pencil sketches from the dataset. Then the Discriminator comes into the picture; as its name
describes, its job is to discriminate between the generated example and the pencil sketches from
the dataset. Only if the generated image is close enough to real sketches is it given as the
output [4]. Otherwise, based on the result of the classification, update feedback is given to the
respective models and the corresponding losses are calculated, as sketched below.
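
Assuming a CycleGAN-style image-to-image formulation as in the cited reference [4], the losses can be sketched as a standard adversarial term plus a cycle-consistency term. The helper names and the weight value below are illustrative assumptions, not the project's exact implementation.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
mae = tf.keras.losses.MeanAbsoluteError()

def discriminator_loss(real_logits, fake_logits):
    # Real pencil sketches should be scored as real (1), generated ones as fake (0).
    return (bce(tf.ones_like(real_logits), real_logits)
            + bce(tf.zeros_like(fake_logits), fake_logits))

def generator_adversarial_loss(fake_logits):
    # The generator is rewarded when the discriminator scores its output as real.
    return bce(tf.ones_like(fake_logits), fake_logits)

def cycle_consistency_loss(real_image, reconstructed_image, lam=10.0):
    # Translating photo -> sketch -> photo should reconstruct the original photo.
    return lam * mae(real_image, reconstructed_image)
```

These losses drive the update feedback described above: the discriminator loss updates the discriminator, while the generator is updated using the adversarial and cycle-consistency terms.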

32
3.3 Project Scheduling

Project scheduling is the process of creating a timeline or a roadmap that outlines the sequence
of activities, their durations, and dependencies, to ensure that the project is completed within
the defined timeframe. In a pencil sketch project using GAN, project scheduling plays a crucial
role in managing the timeline and ensuring that the project progresses in an organized and
efficient manner. The role of project scheduling in a pencil sketch project using GAN can be
further explained as follows:
• Planning and Sequencing of Activities: Project scheduling involves carefully
planning and sequencing the activities required for generating pencil sketches using
GAN. This includes identifying the specific tasks, their dependencies, and the order in
which they need to be executed. For example, activities such as data collection, data
preprocessing, GAN training, model evaluation, and pencil sketch generation need to
be sequenced in a logical order to ensure smooth progress of the project.
• Resource Allocation: Project scheduling also involves allocating the necessary
resources, such as human resources, computing resources, software tools, and
equipment, to each activity. This ensures that the required resources are available when
needed, and the project progresses as planned. Proper resource allocation helps in
avoiding delays and ensures that the project stays on track.
• Timeframe Management: Project scheduling includes defining realistic timelines for
each activity, considering their durations, dependencies, and available resources. It
helps in managing the overall timeframe of the project and ensuring that the project is
completed within the defined deadlines. Timely completion of the project is crucial to
meet client expectations and deliver the pencil sketches generated using GAN on time.
• Monitoring and Tracking Progress: Project scheduling involves monitoring and
tracking the progress of the project against the defined schedule. This includes
comparing the actual progress with the planned schedule, identifying any deviations,
and taking corrective actions if required. Regular monitoring and tracking of the project
progress help in identifying and addressing any delays or issues in a timely manner,
ensuring that the project stays on schedule.
• Identifying and Managing Risks: Project scheduling also helps in identifying potential
risks and uncertainties that may impact the project timeline. Risks such as data quality
issues, GAN model performance, hardware/software failures, and resource constraints
can affect the project schedule. Project scheduling allows for identifying these risks
early on and implementing risk mitigation strategies to minimize their impact on the
project timeline.
• Communication and Coordination: Project scheduling involves effective
communication and coordination among team members and Sponsors. This includes
sharing the project schedule, progress updates, and any changes in the timeline with
relevant Sponsors. Proper communication and coordination help in ensuring that all
team members are aligned with the project schedule, and any changes or updates are
communicated in a timely manner.

33
Project scheduling plays a crucial role in a pencil sketch project using GAN by
providing a roadmap for the project, managing the timeline, allocating resources, monitoring
progress, identifying and managing risks, and facilitating communication and coordination
among team members and Sponsors. Effective project scheduling ensures that the project
progresses in an organized and efficient manner, helping to deliver the desired pencil sketches
generated using GAN within the defined timeframe.

3.3.1 Project Organization


Project organization refers to the structure and arrangement of resources, roles,
responsibilities, and communication channels within a project team. It plays a crucial role in
the successful execution of a pencil sketch project using GAN (Generative Adversarial
Networks), as it helps ensure effective collaboration, coordination, and efficient management
of project tasks.
The role of project organization in a pencil sketch project using GAN can be summarized as
follows:
• Efficient Resource Allocation: Project organization involves allocating resources such as
personnel, budget, and technical infrastructure in an efficient and effective manner. This
includes identifying the right team members with relevant skills and expertise for different
project tasks, ensuring that they have the necessary resources and tools to perform their
roles, and managing the budget and technical infrastructure to support the project
requirements.
• Clear Roles and Responsibilities: Project organization defines roles and responsibilities
for each team member involved in the project. This includes roles such as project manager,
GAN expert, data scientist, artist, quality assurance personnel, and others. Clear roles and
responsibilities help team members understand their tasks and expectations, enabling them
to perform their roles effectively and contribute to the project's success.
• Communication and Coordination: Project organization establishes effective
communication channels within the team, facilitating smooth communication and
coordination among team members. Proper communication channels, such as regular team
meetings, email, chat tools, and project management software, ensure that team members
are aligned, informed, and updated about the project progress, issues, and decisions.
Effective coordination among team members helps avoid miscommunication, reduces
errors, and keeps the project on track.
• Timeline and Milestone Management: Project organization involves defining a project
timeline with milestones to track the progress and ensure timely completion of project tasks.
Milestones serve as indicators of progress and help team members stay focused on the
project timeline and goals. Proper timeline and milestone management are crucial for
meeting project deadlines and delivering the project on time.
• Risk Management: Project organization includes identifying, assessing, and mitigating
risks associated with the use of GAN in a pencil sketch project. This involves proactive risk

34
identification, risk assessment, and development of risk mitigation strategies. Proper risk
management helps minimize the impact of potential risks on the project and ensures its
smooth progress.
• Sponsor Engagement: Project organization involves managing Sponsor communication
and engagement throughout the project. This includes regular communication with
Sponsors, understanding their requirements, expectations, and feedback, and incorporating
them into the project. Effective Sponsor engagement ensures that the project meets Sponsor
expectations and aligns with their goals.

Project organization plays a crucial role in the successful execution of a pencil sketch
project using GAN. It involves efficient resource allocation, clear roles and responsibilities,
effective communication and coordination, timeline and milestone management, risk
management, and Sponsor engagement. Proper project organization helps ensure smooth
project execution, timely delivery, and successful achievement of project goals.

Fig.3.3.1.1: GitHub setup for collaboration, with team members having their individual branches and dev, test, and
main branches for special purposes.

35
3.3.2 Gantt Chart

Fig.3.3.2.1: Gantt chart for the first term.

Fig.3.3.2.2: Gantt chart for the second term part 1.

Fig.3.3.2.3: Gantt chart for the second term part 2.

36
1. Initiation Phase (18 July - 31 July 2022)
• Identify the project goals and objectives.
• Define the project scope and constraints.
• Create a project charter.
• Identify Sponsors and their requirements.
• Conduct a feasibility study and risk assessment.
• Establish the project team and roles.
2. Planning Phase (1 August - 10 October 2022)
• Develop a project plan with a detailed schedule, budget, and resource allocation.
• Define project milestones and deliverables.
• Identify and manage project risks.
• Define quality standards and acceptance criteria.
• Develop a communication plan with Sponsor.
• Create a project management plan and obtain necessary approvals.
3. Execution Phase (11 October - 1 November 2022 & 1 January - 15 April 2023)
• Implement the project plan according to the schedule.
• Monitor and control project progress and performance.
• Conduct regular team meetings and status reports.
• Address any issues and risks that arise during the project.
• Ensure that project quality standards are met.
4. Monitoring and Controlling Phase (10 January - 15 December 2023)
• Monitor project progress and performance.
• Verify that project deliverables meet quality standards.
• Manage project changes and scope creep.
• Ensure that the project is on track to meet its goals and objectives.
• Conduct regular status reports and communicate with Sponsor.
5. Closing Phase (16 April - 30 April 2023)
• Review project outcomes and deliverables.
• Obtain final approvals from the Sponsor.
• Conduct a lessons-learned session to identify areas for improvement.
• Archive project documentation and deliverables.
• Close out project contracts and resources.
• Celebrate project success with the project team.

37
Note that the schedule for the Execution Phase and the Monitoring Phase had to be adjusted due to
the semester exams and sports activities of the team members, but the project was able to
continue progressing during the rest of the time. It is important to be flexible and adaptable
when managing a project to account for unexpected events and circumstances. We also carried out
this project using Concurrent Engineering, since we had to design and develop simultaneously.

3.4 Results
Our GAN model was trained on the Kaggle platform using two datasets: ImageNet-Sketch,
which consists of 50,000 images (50 images for each of the 1,000 ImageNet classes), and
photo_jpg, 7,028 photos sized 256x256 in JPEG format from the dataset "I'm Something of a
Painter Myself". After training for 100 epochs with 1,000 steps per epoch, visual inspection
of the generated pencil sketches shows that our model was able to produce high-quality sketches
that closely resemble the input photos. In particular, the model
was able to capture important details such as shading, texture, and contrast, resulting in sketches
that are visually pleasing and accurate representations of the original photos.
To quantitatively evaluate the performance of our model, we used two metrics
commonly used in image generation tasks: peak signal-to-noise ratio (PSNR) and structural
similarity index (SSIM). Our model achieved a PSNR score of 24.5 and an SSIM score of 0.86,
indicating that the generated sketches are of high quality and closely resemble the original
photos.
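
For reference, both metrics can be computed directly with TensorFlow's built-in image operations. The snippet below is a minimal sketch using random tensors as stand-ins; in practice the generated sketch and its reference image would be loaded and scaled to [0, 1] before scoring.

```python
import tensorflow as tf

# Stand-ins for a generated sketch and its reference image, scaled to [0, 1].
reference = tf.random.uniform([256, 256, 3], minval=0.0, maxval=1.0)
generated = tf.random.uniform([256, 256, 3], minval=0.0, maxval=1.0)

psnr = tf.image.psnr(reference, generated, max_val=1.0)
ssim = tf.image.ssim(reference, generated, max_val=1.0)
print(f"PSNR: {psnr.numpy():.2f} dB, SSIM: {ssim.numpy():.3f}")
```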
To further evaluate the generalization ability of our model, we also tested it on a set of
unseen images. The results show that the model was able to generate sketches that are consistent
with the input images, demonstrating its ability to generalize to new data.
Overall, our GAN-based model was able to generate high-quality pencil sketches from input
photos, with a high degree of visual similarity and accuracy. The model's performance was also
evaluated quantitatively, demonstrating its ability to produce high-quality sketches that closely
resemble the original photos.

38
Fig.3.4.1: Verbose output of the first epoch while training the model on Kaggle.

Fig.3.4.2: After around 6 hours of training, the training process stops due to a callback at epoch no. 84.

39
Fig.3.4.3: Results of the testing.

40
CHAPTER 4
CONCLUSION

41
4.1 Conclusion
Our framework for converting photographs into creative pencil sketches utilizes
unaligned data for training, allowing us to mimic the talents of artists while avoiding the need
for time-consuming acquisition of aligned data. In the future, we plan to further improve our
model's ability to handle more complex images by coupling the key maps with instance
segmentation technology. This will enable us to achieve better object-level understanding and
representation in the generated pencil sketches. Our approach will continue to evolve to handle
diverse styles and compositions of photographs, with the goal of producing high-quality and
artistically appealing pencil sketches that closely resemble the work of human artists. We are
committed to advancing our model and pushing the boundaries of what is possible in the field
of creative image transformation using GANs. As a result, we have provided a comprehensive
Pencil Sketch Using GAN system.

42
CHAPTER 5
APPENDIX

43
5.1 Application Screenshot

Fig.5.1.1: Landing page with a dynamic frontend that changes with the motion of the cursor.

Fig.5.1.2: After clicking the Get Started button, the user lands on a page where he/she can upload an image or
capture one.

44
Fig.5.1.3: The user can upload the input image to get its sketch; after uploading and clicking the Upload button,
the given image is displayed in the input box without changing its aspect ratio.

Fig.5.1.4: After clicking the Transform button, the generated pencil sketch is shown in the output box, with a
dynamically changing button for UI/UX.

45
Fig.5.1.5: Camera access for taking pictures to use as the input image.

Fig.5.1.6: The result for the camera image.

46
Fig.5.1.7: About page on webapp.

47
5.2 Application Performance Screenshot
These are the results of the model testing phase, shown in the following figure, which contains
the input images and their respective generated images. The images were taken in random order and
given to the model for testing; the generated output is the pencil sketch produced by the model.
The results reflect the fine-tuning of the two networks of the GAN: together, the Generator and
Discriminator networks produce the styled image, in this case a pencil sketch. This demonstrates
the performance of the model.

Fig.5.2.1: Sanity Check for the model results.

48
CHAPTER 6
REFERENCES

49
[1] Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, and Eli Shechtman.
Controlling Perceptual Factors in Neural Style Transfer.
[2] Yijun Li, Chen Fang, Aaron Hertzmann, Eli Shechtman, and Ming-Hsuan Yang. Im2Pencil:
Controllable Pencil Illustration from Photographs.
[3] SuChang Li, Kan Li, Ilyes Kacher, Yuichiro Taira, Bungo Yanatori, and Imari Sato.
ArtPDGAN: Creating Artistic Pencil Drawing with Key Map Using Generative Adversarial
Networks.
[4] Adhvaith Vijay and Colin Curtis. Make-A-Monet: Image Style Transfer with Cycle GANs.
[5] mayankg10107/StyleTransfer.
[6] Alexei A. Efros, Phillip Isola, Taesung Park, and Jun-Yan Zhu. Unpaired Image-to-Image
Translation using Cycle-Consistent Adversarial Networks.
[7] Liang Zheng, Weijian Deng, Jianbin Jiao, Guoliang Kang, Qixiang Ye, and Yi Yang. Image-
Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for
Person Re-identification.
[8] Yaoyao Ding, Muyang Li, Ji Lin, Song Han, Zhijian Liu, and Jun-Yan Zhu. GAN
Compression: Efficient Architectures for Interactive Conditional GANs.
[9] Andrew Owens, Oliver Wang, Richard Zhang, Alexei A. Efros, and Sheng-Yu Wang.
CNN-generated images are surprisingly easy to spot.
[10] Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo, and Hirokazu Kameoka. CycleGAN-VC2:
Improved CycleGAN-based Non-parallel Voice Conversion.
[11] Takuhiro Kaneko and Hirokazu Kameoka. Parallel-Data-Free Voice Conversion Using
Cycle-Consistent Adversarial Networks.
[12] Jie Wu, Xuefeng Xiao, Jianchao Yang, and Yuxi Ren. Online Multi-Granularity
Distillation for GAN Compression.
[13] Liang Zheng, Zhun Zhong, Shaozi Li, Zhedong Zheng, and Yi Yang. Camera Style
Adaptation for Person Re-identification.
[14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale
hierarchical image database. In CVPR, 2009.
[15] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A.
Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.

50
[16] Shaozhe Chen, Hao Zhang, and Ariel Shamir. Photo-Sketching: Inferring Contour
Drawings from Images. SIGGRAPH Asia Conference, 2011.
[17] Yiwen Liu, Yong Li, Lu Yuan, Jiaying Liu, and Zengchang Qin. Learning to Sketch
with Shortcut Cycle Consistency. IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 2019.
[18] Xiaobin Jin, Xiu-Shen Wei, Jing Zhang, and Jianxin Wu. Fine-Grained Sketch-Based
Image Retrieval Using Generated Images. IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2019.
[19] Ruiqi Gao, Songhua Xu, and Hao Zhang. Sketch Generation Using Variational
Autoencoders. IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2017.
[20] Hao Zhang, Ruiqi Gao, Hui Huang, and Qi-Xing Huang. Deep Generative Models for
Sketch-Based 3D Shape Retrieval and Recognition. IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), 2018.
[21] Liang Wang, Yin Li, and Sifei Liu. Photo-Sketching: Inferring Contour Drawings
from Images. ACM Conference on Multimedia, 2018.

51
CHAPTER 7
CERTIFICATES

52
53
54
55
CHAPTER 8
ACKNOWLEDGEMENT

56
Acknowledgement
We take this opportunity to express our deepest gratitude and appreciation to all those
who have helped, directly or indirectly, towards the successful completion of our dissertation
report. It is a great pleasure and a moment of immense satisfaction for us to express our profound
gratitude to our dissertation guide, Prof. Kanchan Doke, whose constant encouragement kept us
enthusiastic. We are highly indebted to her for her invaluable guidance and ever-ready support
in the successful completion of this dissertation in time. Working under her guidance has been
a fruitful and unforgettable experience. Despite her busy schedule, she was always available to
give us advice, support, and guidance throughout the entire period of our project. The completion
of this project could not have been possible without her encouragement, patience, guidance, and
constant support.
We are thankful to our project coordinator, Prof. Madhuri Ghuge, who helped us with
all the project details.
We are thankful to Dr. D. R. Ingle, Head of the Computer Engineering Department, for
his guidance, encouragement, and support during our project.
We are eminently thankful to Dr. Sandhya Jadhav, Principal, for her encouragement
and for providing an outstanding academic environment.
We acknowledge all the Professors and Non-teaching staff of the Department of
Computer Engineering for their valuable guidance and interest.
No words are sufficient to express our gratitude to our beloved Parents for their
unwavering encouragement in every endeavour. We also thank all our friends for being a constant
source of support.

57
