Veera Pandi Project Document
CHAPTER 1
1. INTRODUCTION
1.1 ORGANIZATION PROFILE
The Francis Xavier Engineering College popularly known as FX Engineering College
was established in the year 2000 with the vision to empower budding engineers in technical
and entrepreneurial training and to contribute to the socio-economic augmentation of the
nation. The college is located in the heart of the city of Tirunelveli heralded worldwide as the
Oxford of South India, and is well connected by Road, Rail, and Air. Students of FX
Engineering College are given the opportunity to pursue first-rate and advanced technical
education regardless of background, gender, or financial constraints. The commitment to
effective learning and service is expressed by enabling students to plan their education and
pursue it competently. Curricular and co-curricular programs form an integral part of the
curriculum and help students integrate theory with practical knowledge. Each individual is
motivated to take on greater responsibility and to be committed to serving the nation. The
college offers ample opportunities for students' talents to thrive, and it systematically
transforms a graduate into a future-ready professional. The institution is open to
the exchange of ideas, where discovery, creativity, and personal and professional development
can flourish. It is a responsive, student-oriented institution that is committed to the creation,
dissemination, and acquisition of knowledge through teaching, research and service.
• Vision
To provide education in Engineering with excellence and ethics and to reach the
unreached.
• Mission
To create innovative and vibrant young leaders and entrepreneurs in Engineering and
Technology for building India as a super knowledge power and blossom into a university of
excellence recognized globally.
CHAPTER 2
LITERATURE REVIEW
2.1 INTRODUCTION
As a medium of information transmission, photos record people's meaningful moments
and carry many people's cherished memories. But decades ago, people could only preserve
their few printed photos in photo frames and photo albums [1][2]. With the passage of time, many
old photos have turned yellow due to oxidation, and they are even easily torn apart by accident
[5]. With the advent of image restoration technology, photos that have been yellowed or even
damaged can be virtually restored on a computer to preserve people’s memories of the past.
Deep learning is a branch of machine learning [6]. It is an algorithm that uses artificial neural
networks as the architecture to characterize and learn data. In recent years, deep learning has
been widely used in computer vision tasks due to its powerful representation of image data [3].
Some researchers have applied deep learning to image restoration to improve the content,
structure, and texture capabilities of restoration algorithms [4]. In particular, deep
convolutional neural networks and generative adversarial networks perform well in feature
extraction and image generation and achieve good restoration results on old photos [8].
Therefore, more and more people use deep neural networks for image restoration
of old photos.
Traditionally, image restoration can use diffusion-based methods to deal with missing
scene information and perceptual information [7]. These methods propagate the local structure
into the missing region, or reconstruct the missing part one pixel at a time, while maintaining
consistency with the surrounding pixels [10]. Although traditional algorithms can solve some
of the hole-repair and crease-repair problems of old photos, they focus only on defect
filling and have difficulty addressing the fading and blur of old photos [11].
Therefore, an additional component is needed to provide reasonable imagination (the illusion
from the machine). This additional information may be provided by high-level models of
natural images, such as deep neural network calculations and other neural network methods,
which mainly rely on the illusion of a pretrained neural network to fill in the missing parts of
the image. Repair technology based on deep neural networks saves time and effort [6, 9]. As
one branch of image-processing technology, it can be used to repair damaged photographs and
paintings and to remove redundant text.
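As an illustration of the diffusion idea described above, a minimal sketch (not the project's actual algorithm) can propagate surrounding pixel values into a missing region by repeated neighbour averaging:

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbours,
    propagating the surrounding structure into the hole while leaving
    known pixels untouched."""
    img = image.astype(float).copy()
    for _ in range(iterations):
        # Average of the four neighbours (borders padded by replication).
        padded = np.pad(img, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        img[mask] = neighbours[mask]  # update only the missing region
    return img
```

With enough iterations the masked region converges to a smooth interpolation of its surroundings, which is exactly why such methods fill holes well but cannot recover texture, fading, or blur.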
The innovation of this project is a new image repair model; the design of this model is
based on the convolutional neural network and the generative adversarial network from deep
learning, and the model is the result of improvements to both [4]. Two experiments are
designed, namely blur repair and damage repair. The algorithm in this project is compared
with other algorithms, and it is verified to be superior to them in both image blur repair and
damage repair.
➢ X. Wang et al. [8] present non-local operations as a generic family of building blocks
for capturing long-range dependencies. This component may be used in a variety of
computer vision architectures. On the Kinetics and Charades datasets, their non-local
models can rival or surpass existing competition winners on the task of video
categorization, even without any bells and whistles.
➢ T.-Y. Lin et al. [9] report that, on driving videos acquired by self-driving vehicles,
their proposed network performs well in terms of rain removal and object recognition.
➢ R. Zhang et al. [10] find that their result is not limited to ImageNet-trained VGG
features but applies to a variety of deep architectures and supervision levels. Their
findings imply that perceptual similarity is an emergent trait that may be found in deep
visual representations.
➢ Mittal et al. [11] offer a natural-scene-statistics-based, distortion-generic
blind/no-reference (NR) image quality assessment (IQA) model in the spatial domain.
Instead of computing distortion-specific features such as ringing, blur, or blocking,
the new model, dubbed BRISQUE (blind/referenceless image spatial quality evaluator),
uses scene statistics of locally normalised luminance coefficients to quantify possible
losses of "naturalness" in the image due to the presence of distortions, resulting in a
holistic measure of quality.
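The locally normalised luminance coefficients that BRISQUE builds on can be sketched as follows; this is a simplified illustration that uses a plain box filter rather than the Gaussian weighting of the actual BRISQUE model:

```python
import numpy as np

def mscn_coefficients(image, window=7, C=1.0):
    """Simplified mean-subtracted contrast-normalised (MSCN) coefficients:
    subtract a local mean and divide by a local standard deviation.
    (BRISQUE proper uses a Gaussian-weighted window; a box window is used
    here to keep the sketch dependency-free.)"""
    img = image.astype(float)
    pad = window // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    mu = np.zeros_like(img)
    sq = np.zeros_like(img)
    # Accumulate local sums and sums of squares over the sliding window.
    for dy in range(window):
        for dx in range(window):
            patch = padded[dy:dy + H, dx:dx + W]
            mu += patch
            sq += patch ** 2
    n = window * window
    mu /= n
    sigma = np.sqrt(np.maximum(sq / n - mu ** 2, 0.0))
    return (img - mu) / (sigma + C)
```

For a pristine natural image the MSCN coefficients follow a characteristic distribution; distortions such as blur or blocking change that distribution, which is what BRISQUE measures.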
CHAPTER 3
SYSTEM SPECIFICATION
3.1 HARDWARE REQUIREMENTS
Hardware : Intel i5 or above
Speed : 2.4 GHz
RAM : Minimum 4 GB
Hard Disk : Minimum 500 GB
GPU : AMD or NVIDIA
3.2 SOFTWARE REQUIREMENTS
Language : Python 3.8 and above
Operating System : Windows / Linux / macOS
Tools : Anaconda Navigator / Google Colab Notebook
3.3 SOFTWARE REQUIREMENT SPECIFICATION
A Software requirements specification (SRS), a requirements specification for a
software system, is a description of the behaviour of a system to be developed and may include
a set of use cases that describe interactions the users will have with the software.
3.3.1 PROGRAMMING LANGUAGE - PYTHON
Python is a high-level programming language with dynamic semantics; it is interpreted
and object-oriented. Its high-level built-in data structures, together with dynamic typing and
dynamic binding, make it ideal for rapid application development as well as for use as a
scripting or glue language to link existing components together. Python supports modules and
packages, which encourage program modularity and code reuse. The Python interpreter and its
extensive standard library are available in source and binary form for all major platforms.
Python is garbage-collected and dynamically typed. It supports multiple programming
paradigms, including structured, object-oriented, and functional programming. Because of
its extensive standard library, it is frequently described as a batteries-included language.
Guido van Rossum began working on Python in the late 1980s as a successor to the ABC
programming language and first released it in 1991. Python 2.0, released in 2000, added
capabilities such as list comprehensions and a garbage collection mechanism that detects
reference cycles. Python 3.0, published in 2008, was a substantial revision of the language
that is not fully backward compatible. Python 2 was retired in 2020 with version 2.7.18.
Python is one of the most widely used programming languages.
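A few of the language features mentioned above (dynamic typing, list comprehensions, and the cycle-detecting garbage collector) can be shown in a short snippet:

```python
import gc

# Dynamic typing: the same name can rebind to values of different types.
value = 42
value = "forty-two"

# List comprehension, one of the capabilities added in Python 2.0.
squares = [n * n for n in range(5)]
print(squares)  # [0, 1, 4, 9, 16]

# The cycle-detecting garbage collector reclaims self-referencing objects
# that plain reference counting alone could never free.
cycle = []
cycle.append(cycle)  # the list now refers to itself
del cycle
collected = gc.collect()  # returns the number of unreachable objects found
print(collected >= 0)
```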
CHAPTER 4
PROJECT DESCRIPTION
4.1 OVERVIEW OF THE PROJECT
Images have become an essential medium for communicating information as deep-
learning technology continues to advance. It is crucial to work out how to extract high-quality
photos from huge volumes of image data. Image restoration, a crucial technique in image
processing, has been one of the most significant research topics in the field of feature
extraction. Because old images frequently hold priceless historical recollections, it is critical
to apply image restoration techniques when restoring them. Traditional approaches such as
diffusion-based and patch-based methods are among the various image restoration methods
available. In recent years, as a result of advances in artificial intelligence and deep learning,
people have adopted deep neural networks for picture restoration, and a number of picture
restoration models have been presented, with promising results. The goal of this project is to
enhance the deep neural network and offer a novel image restoration technique.
4.2 MODULES DESCRIPTION
• Upload a Damaged Image
• Scratch Detection
• Global Restoration
• Face Enhancement
• Getting Result
4.2.1 UPLOAD A DAMAGED IMAGE
This module allows the user to upload their damaged images into the process. The
uploaded image then passes through the requirements-gathering step at the basic level of the
project, and the damaged image moves on to the next step, the restoration process.
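The upload step can be sketched as a small validation helper; the function name, the set of allowed extensions, and the use of Pillow are illustrative assumptions, not the project's actual code:

```python
import os
from PIL import Image  # Pillow, as used in the appendix source code

# Hypothetical whitelist of photo formats accepted for upload.
ALLOWED = {".jpg", ".jpeg", ".png", ".bmp"}

def load_damaged_image(path):
    """Validate a user-uploaded damaged photo and load it for restoration
    (hypothetical helper; names are illustrative)."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED:
        raise ValueError(f"Unsupported file type: {ext}")
    # Normalise the colour mode so later stages see consistent input.
    return Image.open(path).convert("RGB")
```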
4.2.2 SCRATCH DETECTION
At present, the scratched-old-photos dataset is not released with labels directly. To
obtain paired data, the pretrained model can be run on the collected images to generate the
labels. This module also imports the libraries needed for the scratch-detection step.
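Since the pretrained scratch-detection model itself is not reproduced here, a toy stand-in illustrates the idea of producing a scratch mask; the brightness-thresholding rule below is an assumption for illustration only, not the project's detector:

```python
import numpy as np

def naive_scratch_mask(gray, threshold=2.0):
    """Toy stand-in for a learned scratch detector: flag pixels that are
    much brighter than their 3x3 neighbourhood, since scratches on old
    photos often appear as thin bright lines (illustrative heuristic)."""
    img = gray.astype(float)
    padded = np.pad(img, 1, mode="edge")
    H, W = img.shape
    # Mean over the 3x3 neighbourhood of every pixel.
    local_mean = sum(
        padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    # A pixel counts as a scratch if it exceeds its neighbourhood mean
    # by a margin proportional to the global contrast.
    return (img - local_mean) > threshold * img.std()
```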
CHAPTER 5
SYSTEM IMPLEMENTATION
Implementation is the realization of an application, or execution of a plan, idea, model,
design, specification, standard, algorithm, or policy.
5.1 PROGRAM DESIGN LANGUAGE
❖ INPUT DATA:
If user input image == damaged image
{
If damaged image was input
{
While “damages” is not detected
{
Improve image quality rate
}
}
Else
{
Request for damaged image
}
}
❖ IMAGE TRAINING:
While a new "damaged image" is uploaded in the process
{
Extract the damaged parts from images
Classify from damaged part
Train the model
}
❖ UPLOAD THE DAMAGED IMAGES:
If damaged part is detected
{
Extract the image's damaged part
While "damage" is not matching
{
Check the training set
}
}
Else
{
Return to the upload process
}
❖ DATA STORAGE:
While “data” is incoming
{
If data from upload process
{
Store the data in variable
Store the image in array
}
}
❖ RETRIEVE OUTPUT IMAGES:
If retrieval of the image occurs
{
Display the damaged image
Display the retrieved image
}
Else
{
Download the retrieved image
}
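The program design language above can be sketched in ordinary Python; every function and variable name here is illustrative, not the project's actual API:

```python
def restoration_pipeline(image, detector, restorer, storage):
    """Illustrative flow for the design above: request an image, detect
    its damage, restore it, and store both versions. `detector`,
    `restorer`, and `storage` are hypothetical callables/containers."""
    if image is None:
        # INPUT DATA, else branch: request a damaged image from the user.
        raise ValueError("Request for damaged image")
    # IMAGE TRAINING: extract and classify the damaged parts.
    damage_mask = detector(image)
    if not damage_mask:
        return image  # nothing to repair, pass the image through
    # Repair the damaged parts using the (trained) restorer.
    restored = restorer(image, damage_mask)
    # DATA STORAGE: keep the input and output for later retrieval.
    storage.append((image, restored))
    # RETRIEVE OUTPUT IMAGES: hand the restored image back to the caller.
    return restored
```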
CHAPTER 6
SYSTEM TESTING
6.1 SOFTWARE TESTING
Testing is usually performed for the following purposes:
➢ To improve quality.
➢ For Verification & Validation (V&V)
➢ For reliability estimation
6.1.1 TO IMPROVE QUALITY
Quality means conformance to the specified design requirements. Correctness, the
minimum requirement of quality, means performing as required under specified circumstances.
6.1.2 FOR VERIFICATION & VALIDATION (V&V)
Testing can serve as metrics. It is heavily used as a tool in the V&V process. Testers
can make claims, based on interpretations of the testing results, that the product either works
under certain conditions or does not. We can also compare the quality of different products
under the same specification, based on results from the same test.
➢ Verification
Verification is the process of evaluating a system or component to determine whether
the products of a given development phase satisfy the conditions imposed at the start of
that phase.
➢ Validation
Validation is the process of evaluating a system or component during or at the end of
the development process to determine whether it satisfies specified requirements.
6.1.3 FOR RELIABILITY ESTIMATION
Software reliability has important relations with many aspects of software, including
its structure and the amount of testing it has been subjected to. Based on an operational
profile (an estimate of the relative frequency of use of various inputs to the program), testing
can serve as a statistical sampling method to gain failure data for reliability estimation.
Each program is tested individually at the time of development using sample data, and it is
verified that the programs link together in the way specified in the program specification. The
final stage is to document the entire system, covering its components and the operating
procedures of the system.
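The statistical-sampling idea above can be sketched as follows; the operational profile and the program under test are hypothetical examples, not part of the project:

```python
import random

def estimate_failure_rate(program, operational_profile, trials=10000, seed=0):
    """Reliability estimation by statistical sampling: draw inputs with the
    relative frequency given by the operational profile, run the program,
    and count failures. `operational_profile` maps input -> usage weight."""
    rng = random.Random(seed)  # seeded for a reproducible estimate
    inputs, weights = zip(*operational_profile.items())
    failures = 0
    for _ in range(trials):
        x = rng.choices(inputs, weights=weights)[0]
        try:
            program(x)
        except Exception:
            failures += 1
    return failures / trials
```

The estimate converges on the true in-use failure probability as the number of trials grows, which is why the operational profile (not uniform sampling) matters.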
CHAPTER 7
CONCLUSION & FUTURE ENHANCEMENTS
7.1 CONCLUSION
Image restoration technology is the core technology for restoring old photos and
images. Old photos are often blurred and damaged in various ways, so a suitable restoration
method is needed to reproduce the scenes they depict. Based on the deep neural network, this
project proposes a new method for repairing old photos. The project first introduces the
background and significance of the topic. Then the general convolutional neural network and
the nearest-neighbour classification are introduced, and their structure and principles are
explained. Next, the project designs a new image restoration method based on the
convolutional neural network and the nearest-neighbour classification in the deep neural
network, and the loss function is designed. Finally, the project runs an experiment to test the
performance of this algorithm in image restoration. The results show that the peak signal-to-
noise ratio and structural similarity of the algorithm in this project are higher than those of
other algorithms. Therefore, compared with other algorithms, this project's algorithm
performs better in both blur repair and damage repair. The model in this report is more
suitable for the field of old-photo restoration. From the experimental point of view, the
restoration effect is good, but there is still room for improvement, especially in the restoration
of damaged photos, which is not yet ideal; the focus of future work will therefore be on
repairing broken photos.
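The peak signal-to-noise ratio used to evaluate the results can be computed as in this sketch (the project's exact evaluation code is not shown in the report):

```python
import numpy as np

def psnr(reference, restored, max_value=255.0):
    """Peak signal-to-noise ratio between a reference image and its
    restoration, in dB; higher values mean the restoration is closer
    to the reference."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

For 8-bit images a uniform error of 10 grey levels gives an MSE of 100 and hence a PSNR of about 28.1 dB, which is the kind of value being compared across algorithms in the results above.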
The future of image processing will involve scanning the heavens for other intelligent
life in space. New intelligent digital species, created entirely by research scientists in various
nations of the world, will also draw on advances in image processing applications. Owing to
advances in image processing and related technologies, there will be millions of robots in the
world within a few decades, transforming the way the world is managed.
Advances in image processing and artificial intelligence will involve spoken
commands, anticipating the information requirements of governments, translating languages,
recognizing and tracking people and things, diagnosing medical conditions, performing
surgery, reprogramming defects in human DNA, and automatically driving all forms of
transport. With the increasing power and sophistication of modern computing, the concept of
computation can go beyond present limits; in future, image processing technology will
advance, and the visual system of man may be replicated.
The future trend in remote sensing will be towards improved sensors that record the
same scene in many spectral channels. Graphics data is becoming increasingly important in
image processing applications. Future image processing applications of satellite-based
imaging range from planetary exploration to surveillance. Future research can investigate
whether this architecture can be adapted to other imaging problems, such as deblurring or
inpainting, through old-image restoration.
CHAPTER 8
APPENDICES
8.1 APPENDIX 1-SOURCE CODE
import os
import io
import shutil

import IPython.display
import numpy as np
import PIL.Image
from google.colab import files

input_folder = "test_images/old"
output_folder = "output"

basepath = os.getcwd()
input_path = os.path.join(basepath, input_folder)
output_path = os.path.join(basepath, output_folder)
if not os.path.isdir(output_path):
    os.mkdir(output_path)


def make_grid(I1, I2, resize=True):
    # Place the original image (I1) and the restored image (I2) side by side.
    I1 = np.asarray(I1)
    H, W = I1.shape[0], I1.shape[1]
    if I1.ndim >= 3:
        I2 = np.asarray(I2.resize((W, H)))
        I_combine = np.zeros((H, W * 2, 3))
        I_combine[:, :W, :] = I1[:, :, :3]
        I_combine[:, W:, :] = I2[:, :, :3]
    else:
        I2 = np.asarray(I2.resize((W, H)).convert('L'))
        I_combine = np.zeros((H, W * 2))
        I_combine[:, :W] = I1[:, :]
        I_combine[:, W:] = I2[:, :]
    I_combine = PIL.Image.fromarray(np.uint8(I_combine))

    # Optionally shrink the side-by-side grid to a fixed display width.
    W_base = 600
    if resize:
        ratio = W_base / (W * 2)
        H_new = int(H * ratio)
        I_combine = I_combine.resize((W_base, H_new), PIL.Image.LANCZOS)
    return I_combine


# Display the restoration results for the default test images.
filenames = os.listdir(input_path)
filenames.sort()
display(make_grid(image_original, image_restore))

# Repeat for the test images that contain scratches.
input_folder = "test_images/old_w_scratch"
output_folder = "output"
input_path = os.path.join(basepath, input_folder)
output_path = os.path.join(basepath, output_folder)
filenames = os.listdir(input_path)
filenames.sort()
display(make_grid(image_original, image_restore))

# Upload the user's own damaged photos in Colab, resetting old folders first.
if os.path.isdir(upload_output_path):
    shutil.rmtree(upload_output_path)
if os.path.isdir(upload_path):
    shutil.rmtree(upload_path)
os.mkdir(upload_output_path)
os.mkdir(upload_path)
uploaded = files.upload()

display(make_grid(image_original, image_restore))
print("")

# Zip the final restored images and download the archive.
output_folder = os.path.join(upload_output_path, "final_output")
print(output_folder)
os.system(f"zip -r -j download.zip {output_folder}/*")
files.download("download.zip")
CHAPTER 9
REFERENCES
9.1 BOOKS REFERRED
[1] F. Stanco, G. Ramponi, and A. De Polo, "Towards the automated restoration of old
photographic prints: a survey," in The IEEE Region 8 EUROCON 2003: Computer
as a Tool, vol. 2. IEEE, 2003, pp. 370–374.
[2] K. Zhang, W. Zuo, S. Gu, and L. Zhang, "Learning deep CNN denoiser prior for
image restoration," in Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2017.
[3] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint
arXiv:1312.6114, 2013.
[4] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations
over learned dictionaries," IEEE Transactions on Image Processing, vol. 15, no. 12,
pp. 3736–3745, 2006.
[5] J. Xie, L. Xu, and E. Chen, "Image denoising and inpainting with deep neural
networks," in Advances in Neural Information Processing Systems, 2012.
[6] K. Yu, C. Dong, L. Lin, and C. Change Loy, "Crafting a toolchain for image
restoration by deep reinforcement learning," in Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, 2018, pp. 2443–2452.
[7] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation
using cycle-consistent adversarial networks," in Proceedings of the IEEE
International Conference on Computer Vision, 2017, pp. 2223–2232.
[8] X. Wang, R. Girshick, A. Gupta, and K. He, "Non-local neural networks," in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
2018, pp. 7794–7803.
[9] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense
object detection," in Proceedings of the IEEE International Conference on Computer
Vision, 2017.
[10] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, "The unreasonable
effectiveness of deep features as a perceptual metric," in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 2018, pp. 586–595.
[11] A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assessment
in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12,
pp. 4695–4708, 2012.