

CHAPTER 1
1. INTRODUCTION
1.1 ORGANIZATION PROFILE
Francis Xavier Engineering College, popularly known as FX Engineering College, was
established in the year 2000 with the vision of empowering budding engineers through
technical and entrepreneurial training and contributing to the socio-economic development
of the nation. The college is located in the heart of Tirunelveli, a city often called the
Oxford of South India, and is well connected by road, rail, and air. Students of FX
Engineering College are given the opportunity to pursue first-rate, advanced technical
education regardless of background, gender, or financial constraints. Students are
encouraged to plan their education purposefully and to achieve the competencies they need.
Curricular and co-curricular programmes form an integral part of the curriculum and help
students integrate theory with practical knowledge. Each individual is motivated to take on
greater responsibility and to be committed to serving the nation. The college offers
abundant opportunities for students' talents to thrive and systematically transforms a
graduate into a future-ready professional. The institution is open to the exchange of ideas,
where discovery, creativity, and personal and professional development can flourish. It is a
responsive, student-oriented institution committed to the creation, dissemination, and
acquisition of knowledge through teaching, research, and service.

• Vision

To provide education in Engineering with excellence and ethics and to reach the
unreached.

• Mission

To create innovative and vibrant young leaders and entrepreneurs in Engineering and
Technology for building India as a super knowledge power and blossom into a university of
excellence recognized globally.

1.2 OBJECTIVE OF THE PROJECT


The objective of image restoration is to "compensate for" or "undo" defects which
degrade an image. Degradation comes in many forms such as motion blur, noise, and camera
misfocus. In cases like motion blur, it is possible to come up with a very good estimate of the
actual blurring function and "undo" the blur to restore the original image. In cases where the
image is corrupted by noise, the best we may hope to do is to compensate for the degradation
it caused. In this project, we will introduce and implement several of the methods used in the
image processing world to restore images.
The goal of this project is to learn the methods of repairing old or damaged photographs.
There is a large demand in today’s photography market for fixing old, damaged photographs.
In this project we will explore the methods and techniques to make a damaged photograph look
like “new.”
Experiment with the cloning tool, brush size, and opacity; it is important to get comfortable
switching between these tools to achieve the desired effect. The magic wand and rectangular
marquee selection tools may also be needed, depending on the severity of the repair. The key
in this task is to make the restored image look as it did in its undamaged state. For
submission, provide a jpg or pdf of the final restored image.
This project demonstrates visual quality superior to that of state-of-the-art methods for
old photo restoration, using a convolutional neural network, a deep learning technique.
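As a concrete instance of compensating for noise, the sketch below applies a median filter, a classical technique for removing impulse ("salt-and-pepper") noise. The window size and the pure-NumPy implementation are illustrative choices, not requirements of the project:

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood,
    which suppresses isolated impulse-noise pixels while keeping edges."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')  # replicate the borders
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

For Gaussian rather than impulse noise, an averaging or Wiener-style filter is usually the better compensator.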

CHAPTER 2
LITERATURE REVIEW
2.1 INTRODUCTION
As a medium of information transmission, photos record people's meaningful moments and
carry many people's treasured memories. Decades ago, people could only keep their few
printed photos in photo frames and photo albums [1][2]. With the passage of time, many
old photos have turned yellow due to oxidation, and they are easily torn apart by accident
[5]. With the advent of image restoration technology, photos that have yellowed or even
been damaged can be virtually restored on a computer to preserve people's memories of the
past. Deep learning is a branch of machine learning [6]; it is an algorithm that uses
artificial neural networks as the architecture to characterize and learn from data. In
recent years, deep learning has been widely used in computer vision tasks due to its
powerful representation of image data [3]. Some researchers have applied deep learning to
image tasks to improve the ability of restoration algorithms to reproduce high-quality
content, structure, and texture [4]. Deep convolutional neural networks and generative
adversarial networks in particular perform well in feature extraction and image generation
and achieve good restoration results on old photos [8]. Therefore, more and more people use
deep neural networks for the restoration of old photos.
Traditionally, image restoration can use diffusion-based methods to deal with missing
scene information and perceptual information [7]. Such a method spreads the local structure
into the missing region, or synthesizes the missing part one pixel at a time, while
maintaining consistency with the surrounding pixels [10]. Although traditional algorithms
can solve some of the problems of hole repair and crease repair in old photos, these methods
focus only on defect filling and have difficulty solving the fading and blur problems of old
photos [11]. Therefore, an additional component is needed to provide reasonable imagination
(the illusion from the machine). This additional information may be provided by high-level
models of natural images, such as deep neural networks, which mainly rely on the
hallucination ability of a pretrained network to fill in the missing parts of the image.
Repair technology based on deep neural networks saves time and effort [6, 9]. As one of the
branches of image processing technology, it can be used to repair damaged photos and
paintings and to remove redundant text.
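The diffusion-based filling described above can be sketched in a few lines. This is a toy isotropic version (the four-neighbour averaging stencil and the iteration count are assumptions for illustration, not taken from the cited works):

```python
import numpy as np

def diffusion_inpaint(img, mask, iterations=500):
    """Fill pixels where mask is True by repeatedly averaging their four
    neighbours (heat diffusion), keeping the known pixels fixed."""
    out = img.astype(float).copy()
    for _ in range(iterations):
        avg = (np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0) +
               np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 4.0
        out[mask] = avg[mask]  # only the missing region is updated
    return out
```

Each iteration propagates the surrounding intensities one step further into the hole, which is why such methods handle small, smooth regions well but cannot invent texture, the limitation that the deep models discussed here address.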
The innovation of this project is a new image repair model. The design of this model is
based on the convolutional neural network and the generative adversarial network from the
deep learning family; the model is the result of improvements to both [4]. Two experiments
are designed, namely, blur repair and damage repair. The algorithm in this project is
compared with other algorithms, and it is verified to be superior in both image blur repair
and damage repair.

2.2 RESEARCH METHODOLOGY

Figure 2.1: Research Methodology



2.3 EXISTING SYSTEM


Old photos retain precious historical image information, but existing old photos often show
varying degrees of damage. Although these old photos can be digitized and then restored, the
restoration of old photos spans multiple areas of image restoration and involves various
types of degradation. Currently, there is no unified model for repairing the multiple types
of degradation found in old photos, and photo restoration technology still has much room
for development. Traditional image restoration technology mainly repairs the missing areas
of an image based on mathematical formulas or thermal diffusion. This older restoration
process can only repair images with simple structures and small damaged areas and is
difficult to apply in people's daily lives.
2.3.1 DISADVANTAGES
• Edge artifacts in the form of horizontal and vertical stripes on the image, and lack of
information about the optimal number of algorithm iterations.
• Very costly, depending on the system used and the number of detectors purchased.
• Time consuming.
• Lack of qualified professionals.
2.4 PROPOSED SYSTEM
The emergence of deep learning technology has accelerated the pace of research on image
restoration. This project discusses methods of repairing old photos based on deep neural
networks. It aims to propose an image restoration method based on a deep neural network to
enhance the restoration of old photos and provide more possibilities for their recovery.
The project discusses the background and significance of image restoration methods, designs
an image restoration model based on deep neural networks, and introduces the structure,
principle, and loss function of the model.
2.4.1 ADVANTAGES
• The improvement in image quality far outweighs the cost and complexity.
• The method does not require prior knowledge for the removal of noise and blur.
• It efficiently removes blur; it behaves much like the Wiener filter and overcomes the
shortcomings of the inverse filter.
• Low-cost restoration process compared with other tools and approaches.
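Since the Wiener and inverse filters are mentioned above, a minimal frequency-domain Wiener deconvolution sketch is given for reference. The box blur kernel and the regularization constant k are illustrative assumptions; k > 0 is what prevents the division blow-up that plagues the plain inverse filter:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, k=0.01):
    """Restore an image from a known blur kernel with a Wiener-style filter.
    k regularizes frequencies where the kernel response is near zero."""
    kernel_padded = np.zeros_like(blurred, dtype=float)
    kh, kw = kernel.shape
    kernel_padded[:kh, :kw] = kernel               # pad kernel to image size
    H = np.fft.fft2(kernel_padded)                 # blur transfer function
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)  # Wiener estimate
    return np.real(np.fft.ifft2(F_hat))
```

With k = 0 this reduces to the inverse filter, which amplifies noise wherever |H| is small; a small positive k trades a little bias for stability.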

2.5 LITERATURE SURVEY


➢ F. Stanco et al. [1] present a list of the most common faults found in antique
photographs, describe their various sources and characteristics, and propose
corresponding restoration approaches.
➢ K. Zhang et al. [2] aim to train a collection of fast and effective CNN denoisers
and incorporate them into a model-based optimization strategy to handle various
inverse problems. Experiments show that the learnt set of denoisers not only yields
promising Gaussian denoising results but can also be employed as a pre-processor to
offer high performance in a variety of low-level vision applications.
➢ D. P. Kingma et al. [3] introduce a stochastic variational inference and learning
algorithm that scales to large datasets and, under some mild differentiability conditions,
even works in the intractable case. Their contribution is two-fold. First, they show that
a reparameterization of the variational lower bound yields a lower bound estimator that
can be straightforwardly optimized using standard stochastic gradient methods. Second,
they show that for i.i.d. datasets with continuous latent variables per datapoint, posterior
inference can be made especially efficient by fitting an approximate inference model.
➢ M. Elad et al. [4] demonstrate how a Bayesian approach results in a simple and
effective denoising technique. This yields state-of-the-art denoising performance that
is on par with, if not better than, previously published leading denoising algorithms.
➢ The technique suggested by J. Xie et al. [5] performs well in picture denoising and
blind inpainting tasks, as evidenced by experimental findings. They also show that their
new DA training strategy is more effective and can improve unsupervised feature learning
performance.
➢ K. Yu et al. [6] formulate a step-wise reward function proportional to how well the
image is restored at each step in order to learn the action policy. They also devise a
joint learning scheme to train the agent and its tools for better performance in handling
uncertainty.
➢ J.-Y. Zhu et al. [7] provide a method for learning to translate a picture from a
source domain X to a target domain Y in the absence of paired instances. Using an
adversarial loss, they learn a mapping G : X → Y such that the distribution of pictures
from G(X) is indistinguishable from the distribution Y. They link it with an inverse
mapping F : Y → X and create a cycle consistency loss to push F(G(X)) ≈ X, since the
mapping is otherwise extremely under-constrained.

➢ X. Wang et al. [8] present non-local operations as a generic family of building
blocks for capturing long-range dependencies. This component may be used in a variety of
computer vision architectures. On the Kinetics and Charades datasets, their non-local
models rival or surpass existing competition winners on the task of video categorization,
even without any bells and whistles.
➢ The network suggested by T.-Y. Lin et al. [9] performs well in terms of rain removal
and object recognition on driving videos acquired by self-driving vehicles.
➢ The finding of R. Zhang et al. [10] is not limited to ImageNet-trained VGG features
but applies to a variety of deep architectures and supervision levels. Their results imply
that perceptual similarity is an emergent trait that may be found in deep visual
representations.
➢ Mittal et al. [11] offer a distortion-generic blind/no-reference (NR) image quality
assessment (IQA) model based on natural scene statistics in the spatial domain. The new
model, dubbed BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator), instead of
computing distortion-specific features like ringing, blur, or blocking, uses scene
statistics of locally normalised luminance coefficients to quantify possible losses of
"naturalness" in the image due to the presence of distortions, resulting in a holistic
measure of quality.

2.6 FUTURE SCOPE OF THE PROJECT


With the continuous development of computer vision technology, images have become an
important medium for transmitting information, and obtaining the high-quality images people
need from massive image data is particularly important. As a key technology in image
processing, image restoration has become one of the important research topics in computer
vision. Old photos often preserve precious historical memories, so it is of great
significance to apply image restoration technology to them. There are many image restoration
methods, including traditional ones such as diffusion-based methods, texture-synthesis-based
methods, and data-driven methods. In recent years, owing to the development of artificial
intelligence and deep learning, deep neural networks have been used for image restoration;
various restoration models have been proposed and good results achieved. This project sets
out to improve the deep neural network and propose a new image restoration method.

CHAPTER 3
SYSTEM SPECIFICATION
3.1 HARDWARE REQUIREMENTS
Hardware : Intel i5 and above
Speed : 2.4 GHz
RAM : Minimum 4 GB
Hard Disk : Minimum 500 GB
GPU : AMD or NVIDIA
3.2 SOFTWARE REQUIREMENTS
Language : Python 3.8 and above
Operating System : Windows / Linux / macOS
Tools : Anaconda Navigator / Google Colab Notebook
3.3 SOFTWARE REQUIREMENT SPECIFICATION
A Software requirements specification (SRS), a requirements specification for a
software system, is a description of the behaviour of a system to be developed and may include
a set of use cases that describe interactions the users will have with the software.
3.3.1 PROGRAMMING LANGUAGE - PYTHON
Python is a high-level programming language that is dynamically typed, interpreted, and
object-oriented. Its high-level built-in data structures, together with dynamic typing and
dynamic binding, make it ideal for rapid application development as well as for use as a
scripting or glue language to link existing components. Python supports modules and
packages, which encourage program modularity and code reuse. The Python interpreter and its
extensive standard library are available in source and binary form for all major platforms.
Python is garbage collected and dynamically typed. Multiple programming paradigms
are supported, including structured, object-oriented, and functional programming. Because of
its extensive standard library, it is frequently described as a "batteries included"
language. Guido van Rossum started working on Python in the late 1980s as a successor to the
ABC programming language, and it was first released in 1991. Python 2.0, released in 2000,
added capabilities such as list comprehensions and a garbage collector that detects
reference cycles. Python 3.0, published in 2008, was a substantial revision of the language
that was not fully backward compatible. Python 2 was retired in 2020 with version 2.7.18.
Python remains one of the most widely used programming languages.

3.3.2 TOOL – PYTHON NOTEBOOK


A notebook interface (also called a computational notebook) is a virtual notebook
environment used for literate programming, a method of writing computer programs. Some
notebooks are WYSIWYG environments that embed executable calculations in formatted
documents; others separate calculations and text into distinct sections. Modular notebooks
may connect to a variety of computational back ends, called "kernels". Notebook interfaces
are widely used for statistics, data science, machine learning, and computer algebra.
At the notebook's core is the idea of literate programming tools, which "let you arrange
the parts of a program in any order and extract documentation and code from the same source
file"; the notebook takes this approach to a new level, extending it with graphic
functionality and a focus on interactivity. According to Stephen Wolfram, "The idea of a
notebook is to have an interactive document that freely mixes code, results, graphics, text
and everything else," and according to the Jupyter Project documentation, "The notebook
extends the console-based approach to interactive computing in a qualitatively new
direction, providing a web-based application suitable for capturing the whole computation
process: developing, documenting, and executing code, as well as communicating the results."
3.3.3 DATASET
A data set is a collection of related, discrete items of data that may be accessed
individually, in combination, or managed as a whole entity.
A data set is organized into some type of data structure. In a database, for example, a
data set might contain a collection of business data (names, salaries, contact information,
sales figures, and so forth). The database itself can be considered a data set, as can
bodies of data within it that relate to a particular type of information, such as sales
data for a particular corporate department.
The term data set originated with IBM, where its meaning was similar to that of file. In
an IBM mainframe operating system, a data set is a named collection of data that contains
individual data units organized (formatted) in a specific, IBM-prescribed way and accessed
by a specific access method based on the data set organization. Types of data set
organization include sequential, relative sequential, indexed sequential, and partitioned.
Access methods include the Virtual Storage Access Method (VSAM) and the Indexed Sequential
Access Method (ISAM). "Data set" is also an older, now deprecated term for a modem.

3.3.4 BENEFITS OF DATASETS


• Memory based. Client datasets reside completely in memory, making them useful for
temporary tables.
• Fast. Because client datasets are RAM based, they are extremely fast.
• Efficient.
• On-the-fly indexing.
• Automatic undo support.
• Maintained aggregates.

CHAPTER 4
PROJECT DESCRIPTION
4.1 OVERVIEW OF THE PROJECT
Images have become an essential medium for communicating information as deep learning
technology continues to advance, and it is crucial to figure out how to extract
high-quality photos from huge volumes of image data. Image restoration, a crucial technique
in image processing, has been one of the most significant research topics in computer
vision. Because old photographs frequently contain priceless historical recollections, it
is critical to apply image restoration techniques when restoring them. Traditional
approaches such as diffusion-based methods, texture-synthesis-based methods, and
data-driven methods are among the various picture restoration methods available. In recent
years, as a result of advances in artificial intelligence and deep learning, deep neural
networks have been adopted for picture restoration, and a number of restoration models have
been presented with promising results. The goal of this project is to enhance the deep
neural network and offer a novel image restoration technique.
4.2 MODULES DESCRIPTION
• Upload a Damaged Image
• Scratch Detection
• Global Restoration
• Face Enhancement
• Getting Result
4.2.1 UPLOAD A DAMAGED IMAGE
This module is used to user to work their damaged images to uploading our process.
And the image has be moving the requirements gathering process to run the basic level of the
project. Then the damage image has moved the next step of restoring process.
4.2.2 SCRATCH DETECTION
This module performs scratch detection. The scratched old photos dataset with labels is not
released directly; to obtain paired data, the pretrained model can be run on the collected
images to produce the labels. Several libraries are imported to carry out the scratch
detection step.
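The pretrained scratch detector itself is not reproduced here; as a deliberately naive illustration of the underlying idea (thin, bright defects stand out against their local neighbourhood), the following sketch flags pixels that exceed the local 5x5 mean by a threshold. The window size and threshold are arbitrary assumptions:

```python
import numpy as np

def detect_scratches(gray, diff_threshold=0.3):
    """Return a boolean mask of pixels much brighter than their 5x5
    local mean, a crude stand-in for a learned scratch detector."""
    h, w = gray.shape
    padded = np.pad(gray.astype(float), 2, mode='edge')
    acc = np.zeros((h, w))
    for dy in range(5):            # sliding 5x5 sum via shifted views
        for dx in range(5):
            acc += padded[dy:dy + h, dx:dx + w]
    local_mean = acc / 25.0
    return (gray - local_mean) > diff_threshold
```

A real detector must also handle dark scratches, varying illumination, and texture, which is exactly why the project relies on a pretrained model rather than a fixed rule.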

4.2.3 GLOBAL RESTORATION


In this module, a triplet domain translation network is used to solve both the structured
degradation and the unstructured degradation of old photos. Several libraries are imported
to carry out the global restoration step.
4.2.4 FACE ENHANCEMENT
This module uses a progressive generator to refine the face regions of old photos. More
details can be found in the journal submission referenced in the dataset folder. Several
libraries are imported to carry out the face enhancement step.
4.2.5 GETTING RESULT
This module uses the CNN algorithm, which is widely used for image/object recognition and
classification; deep learning recognizes objects in an image by means of a CNN. The
restoration process classifies the damaged image and restores its missing parts through the
modules described above. Once the model has been fully trained, it retrieves a quality
image as the result, improving image quality through the image processing pipeline.
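The convolution operation at the heart of the CNN can be illustrated without any framework. The Laplacian-style kernel below is a hand-picked example that responds to intensity changes (edges, scratch boundaries); it is not a trained weight of the restoration model:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation: the weighted-sum sliding window
    that a CNN layer applies with learned kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Responds with zero on flat regions and non-zero across intensity edges.
laplacian = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])
```

A CNN stacks many such kernels, learned from data rather than hand-picked, and composes them through nonlinearities to recognize progressively larger structures.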

4.3 ARCHITECTURE DIAGRAM

Figure 4.1 Architecture Diagram



4.4 USE CASE DIAGRAM

Figure 4.2 Use Case Diagram



4.5 ACTIVITY DIAGRAM

Figure 4.3 Activity Diagram



4.6 SEQUENCE DIAGRAM

Figure 4.4 Sequence Diagram

4.7 DATA FLOW DIAGRAM (DFD)


4.7.1 DFD LEVEL O

Figure 4.5 DFD Level 0

4.7.2 DFD - LEVEL 1

Figure 4.6 DFD –Level 1



CHAPTER 5
SYSTEM IMPLEMENTATION
Implementation is the realization of an application, or execution of a plan, idea, model,
design, specification, standard, algorithm, or policy.
5.1 PROGRAM DESIGN LANGUAGE
❖ INPUT DATA:
If user input image == damaged image
{
    While "damage" is not detected
    {
        Improve image quality rate
    }
}
Else
{
    Request a damaged image
}
❖ IMAGE TRAINING:
While a new "damaged image" is uploaded to the process
{
    Extract the damaged parts from the image
    Classify the damaged parts
    Train the model
}
❖ UPLOAD THE DAMAGED IMAGES:
If a damaged part is detected
{
    Extract the image's damaged part
    While the damage is not matched
    {
        Check the training set
    }
}
Else
{
    Return to the upload process
}
❖ DATA STORAGE:
While "data" is incoming
{
    If data comes from the upload process
    {
        Store the data in a variable
        Store the image in an array
    }
}
❖ RETRIEVE OUTPUT IMAGES:
If the retrieved image is displayed
{
    Display the damaged image
    Display the retrieved image
}
Else
{
    Download the retrieved image
}

5.2 FEASIBILITY STUDY


• The feasibility study is an evaluation and analysis of the potential of a proposed project
which is based on extensive investigation and research to support the process of
decision making.
• Feasibility studies aim to objectively and rationally uncover the strengths and
weaknesses of an existing business or proposed venture, opportunities and threats
present in the environment, the resources required to carry through, and ultimately the
prospects for success.
• A well-designed feasibility study should provide a historical background of the
business or project, a description of the product or service, accounting statements,
details of the operations and management, marketing research and policies, financial
data, legal requirements and tax obligations.
• A feasibility study evaluates the project's potential for success; therefore, perceived
objectivity is an important factor in the credibility of the study for potential investors
and lending institutions.
Four essential considerations are involved in the feasibility analysis:
▪ Economical feasibility
▪ Operational feasibility
▪ Technical feasibility
▪ Social feasibility
5.2.1 ECONOMICAL FEASIBILITY
The system is justified by cost and time criteria to ensure that it returns the best value
at the earliest opportunity. The factors used for evaluation are:
• Cost of operation of the existing system and the proposed system.
• Cost of development of the proposed system.
Since hardware and software resources are already available, the organization can afford to
allocate the required resources, and there is no need to recruit separate staff to maintain
the system, so little money is spent on maintenance.
This study was conducted to determine the impact of damaged images on the restoration
process. The amount of money the organization can invest in system research and development
is limited, so spending must be justified. As a result, the developed system came in under
budget, which was possible because the majority of the technologies used are freely
available. Only customizable items had to be bought for the restoration process.

5.2.2 OPERATIONAL FEASIBILITY


The resources required for implementation are already available within the organization.
The proposed restoration can repair images with complex structures and large damaged areas
and is easy to apply in people's daily lives. The system must be built in such a way that
real-world image retrieval fits the development, design, and maintenance of the restoration
pipeline, and it should cope with any disruptions or flaws in the planned restoration
method.
5.2.3 TECHNICAL FEASIBILITY
Technical feasibility centres on the existing computer system and the extent to which it
can support the proposed addition. The goal of this study is to assess the restoration of
old images. Many systems have made it difficult to recover old damaged images; with the
deep learning model, damaged images can be restored and returned at a good quality rate.
5.2.4 SOCIAL FEASIBILITY
The system will serve society by restoring damaged images for anyone who needs it. The
image restoration process is grounded in real-time, solution-based development.

CHAPTER 6
SYSTEM TESTING
6.1 SOFTWARE TESTING
Testing is usually performed for the following purposes:
➢ To improve quality.
➢ For Verification & Validation (V&V)
➢ For reliability estimation
6.1.1 TO IMPROVE QUALITY
Quality means the conformance to the specified design requirement. Being correct, the
minimum requirement of quality, means performing as required under specified circumstances.
6.1.2 FOR VERIFICATION & VALIDATION (V&V)
Testing can serve as metrics and is heavily used as a tool in the V&V process. Testers can
make claims based on interpretations of the testing results: either the product works under
certain situations, or it does not. We can also compare the quality of different products
under the same specification, based on results from the same test.
➢ Verification
Verification is the process of evaluating a system or component to determine whether
the products of a given development phase satisfy the conditions imposed at the start of
that phase.
➢ Validation
Validation is the process of evaluating a system or component during or at the end of
the development process to determine whether it satisfies specified requirements.
6.1.3 FOR RELIABILITY ESTIMATION
Software reliability has important relations with many aspects of software, including its
structure and the amount of testing it has been subjected to. Based on an operational
profile (an estimate of the relative frequency of use of the various inputs to the
program), testing can serve as a statistical sampling method to gather failure data for
reliability estimation. Each program is tested individually at development time using test
data, and it is verified that the programs link together in the way specified in the
program specification. The final stage is to document the entire system, covering its
components and operating procedures.

6.2 TESTING METHODS


The Testing Methods are integrated the following:
➢ Unit testing
➢ Integration testing
➢ System testing
➢ Validation testing
6.2.1 UNIT TESTING
Unit testing verification efforts on the smallest unit of software design, module. This is
known as “Module Testing”. The modules are tested separately. This testing is carried out
during programming stage itself. In these testing steps, each module is found to be working
satisfactorily as regard to the expected output from the module.
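As a sketch of what a unit test for one small module might look like, the example below tests a hypothetical mean-squared-error helper in isolation using Python's built-in unittest; the helper and its expected values are assumptions for illustration, not code from this project:

```python
import unittest
import numpy as np

def mse(a, b):
    """Module under test: mean squared error between two images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

class TestMSE(unittest.TestCase):
    def test_identical_images_give_zero(self):
        img = np.ones((4, 4))
        self.assertEqual(mse(img, img), 0.0)

    def test_known_difference(self):
        # every pixel differs by 3, so the mean squared error is 9
        self.assertEqual(mse(np.zeros((2, 2)), np.full((2, 2), 3.0)), 9.0)
```

Such tests are run per module (for example with python -m unittest) before the modules are combined in integration testing.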
6.2.2 INTEGRATION TESTING
Integration testing is a systematic technique for constructing tests to uncover error
associated within the interface. In the project, all the modules are combined and then the entire
programme is tested as a whole. In the integration-testing step, all the error uncovered is
corrected for the next testing steps.
6.2.3 SYSTEM TESTING
Testing is done for each module. After testing all the modules, the modules are integrated
and the final system is tested with test data specially designed to show that the system
will operate successfully under all conditions. Procedure-level testing is performed first:
by giving improper inputs, the errors that occur are noted and eliminated. System testing
is thus a confirmation that all is correct and an opportunity to show the user that the
system works. The final step involves validation testing, which determines whether the
software functions as the user expects. The end user, rather than the system developer,
conducts this test; most software developers run a process called alpha and beta testing to
uncover defects that only the end user seems able to find.
6.2.4 VALIDATION TESTING
In validation testing, the validation requirements established during the planning phase
are tested. Validation testing provides final assurance that the software meets all
functional and performance requirements: functional errors are uncovered by checking
whether the functional characteristics conform to the specification. This is the final step
in the system life cycle. Here the tested, error-free system is implemented in a real-life
environment, necessary changes are made, and it runs in an online fashion.

CHAPTER 7
CONCLUSION & FUTURE ENHANCEMENTS

7.1 CONCLUSION
Image restoration technology is the core technology for restoring old photos and images.
Old photos are often blurred and damaged in various ways, so it is necessary to propose a
suitable restoration method to reproduce the scenes they depict. Based on the deep neural
network, this project proposes a new method for repairing old photos. The project first
introduces the background and significance of the topic. Then the general convolutional
neural network and nearest neighbour classification are introduced, and their structure and
principle are explained. Next, the project designs a new image restoration method, based on
the convolutional neural network and nearest neighbour classification within the deep
neural network, and designs the loss function. Finally, an experiment is carried out to
test the performance of the algorithm in image restoration. The results show that the peak
signal-to-noise ratio and structural similarity of the algorithm in this project are higher
than those of other algorithms. Therefore, compared with other algorithms, the proposed
algorithm performs better not only in blur repair but also in damage repair. The model in
this report is well suited to the restoration of old photos. From the experimental point of
view the restoration effect is good, but there is still room for improvement, especially in
the restoration of badly damaged photos, so the focus of future work will be on repairing
broken photos.
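For reference, the peak signal-to-noise ratio used above as an evaluation metric can be computed in a few lines; the 8-bit maximum value of 255 is the usual assumption:

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer restoration."""
    diff = np.asarray(original, dtype=float) - np.asarray(restored, dtype=float)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Structural similarity (SSIM), the other metric cited, additionally compares local luminance, contrast, and structure rather than raw pixel differences.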

7.2 FUTURE ENHANCEMENTS

Future image processing applications will include scanning the heavens for other intelligent
life in space, and new intelligent, digital species created by research scientists around the
world will build on advances in image processing. Owing to advances in image processing and
related technologies, there may be millions of robots in the world within a few decades,
transforming the way the world is managed.
Advances in image processing and artificial intelligence will involve spoken
commands, anticipating the information requirements of governments, translating languages,
recognizing and tracking people and objects, diagnosing medical conditions, performing
surgery, repairing defects in human DNA, and automatically driving all forms of transport.
With the increasing power and sophistication of modern computing, computation can go
beyond its present limits; in the future, image processing technology will advance to the point
where the human visual system can be replicated.
The future trend in remote sensing is towards improved sensors that record the
same scene in many spectral channels. Graphics data is also becoming increasingly important
in image processing applications, and satellite-based imaging applications range from
planetary exploration to surveillance. Future research can investigate whether this
architecture can be adapted to other imaging problems in old-image restoration, such as
deblurring or inpainting.

CHAPTER 8
APPENDICES
8.1 APPENDIX 1-SOURCE CODE
input_folder = "test_images/old"
output_folder = "output"

import os

basepath = os.getcwd()
input_path = os.path.join(basepath, input_folder)
output_path = os.path.join(basepath, output_folder)
os.makedirs(output_path, exist_ok=True)  # avoid an error if the folder already exists

import io
import IPython.display
import numpy as np
import PIL.Image

def imshow(a, format='png', jpeg_fallback=True):
    a = np.asarray(a, dtype=np.uint8)
    data = io.BytesIO()
    PIL.Image.fromarray(a).save(data, format)
    im_data = data.getvalue()
    try:
        disp = IPython.display.display(IPython.display.Image(im_data))
    except IOError:
        if jpeg_fallback and format != 'jpeg':
            print(('Warning: image was too large to display in format "{}"; '
                   'trying jpeg instead.').format(format))
            return imshow(a, format='jpeg')
        else:
            raise
    return disp

def make_grid(I1, I2, resize=True):
    I1 = np.asarray(I1)
    H, W = I1.shape[0], I1.shape[1]

    if I1.ndim >= 3:
        I2 = np.asarray(I2.resize((W, H)))
        I_combine = np.zeros((H, W * 2, 3))
        I_combine[:, :W, :] = I1[:, :, :3]
        I_combine[:, W:, :] = I2[:, :, :3]
    else:
        I2 = np.asarray(I2.resize((W, H)).convert('L'))
        I_combine = np.zeros((H, W * 2))
        I_combine[:, :W] = I1[:, :]
        I_combine[:, W:] = I2[:, :]
    I_combine = PIL.Image.fromarray(np.uint8(I_combine))

    W_base = 600
    if resize:
        ratio = W_base / (W * 2)
        H_new = int(H * ratio)
        I_combine = I_combine.resize((W_base, H_new), PIL.Image.LANCZOS)

    return I_combine
filenames = os.listdir(input_path)
filenames.sort()

for filename in filenames:
    print(filename)
    image_original = PIL.Image.open(os.path.join(input_path, filename))
    image_restore = PIL.Image.open(os.path.join(output_path, 'final_output', filename))
    display(make_grid(image_original, image_restore))
input_folder = "test_images/old_w_scratch"
output_folder = "output"
input_path = os.path.join(basepath, input_folder)
output_path = os.path.join(basepath, output_folder)

filenames = os.listdir(input_path)
filenames.sort()

for filename in filenames:
    print(filename)
    image_original = PIL.Image.open(os.path.join(input_path, filename))
    image_restore = PIL.Image.open(os.path.join(output_path, 'final_output', filename))
    display(make_grid(image_original, image_restore))
from google.colab import files
import shutil

upload_path = os.path.join(basepath, "test_images", "upload")
upload_output_path = os.path.join(basepath, "upload_output")

if os.path.isdir(upload_output_path):
    shutil.rmtree(upload_output_path)

if os.path.isdir(upload_path):
    shutil.rmtree(upload_path)

os.mkdir(upload_output_path)
os.mkdir(upload_path)

uploaded = files.upload()

for filename in uploaded.keys():
    shutil.move(os.path.join(basepath, filename), os.path.join(upload_path, filename))
filenames_upload = os.listdir(upload_path)
filenames_upload.sort()

filenames_upload_output = os.listdir(os.path.join(upload_output_path, "final_output"))
filenames_upload_output.sort()

for filename, filename_output in zip(filenames_upload, filenames_upload_output):
    image_original = PIL.Image.open(os.path.join(upload_path, filename))
    image_restore = PIL.Image.open(os.path.join(upload_output_path, "final_output", filename_output))
    display(make_grid(image_original, image_restore))
    print("")
output_folder = os.path.join(upload_output_path, "final_output")
print(output_folder)
os.system(f"zip -r -j download.zip {output_folder}/*")
files.download("download.zip")

8.2 APPENDIX 2-SCREEN SHOTS

Figure A 2.1 MODEL TRAINING



Figure A 2.2 UPLOAD IMAGE

Figure A 2.3 OUTPUT IMAGE



8.3 APPENDIX 3-USER MANUAL


The “Old Image Restoration” system is delivered as a complete software package. To run it,
follow these simple steps:
• Open any web browser.
• Open the project notebook in Google Colab.
• A new notebook is created for the process.
• Run the notebook to train the model.
• Upload the damaged image using the upload option.
• The model processes the uploaded image.
• A quality image is produced from the damaged image.
• Download the quality image locally.
• Exit the process.

CHAPTER 9
REFERENCES
9.1 BOOKS REFERRED
[1] F. Stanco, G. Ramponi, and A. De Polo, “Towards the automated restoration of old
photographic prints: a survey,” in The IEEE Region 8 EUROCON 2003: Computer
as a Tool, vol. 2. IEEE, 2019, pp. 370–374
[2] K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep cnn denoiser prior for
image restoration,” in Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2018.
[3] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint
arXiv:1312.6114, 2019
[4] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations
over learned dictionaries,” IEEE Transactions on Image processing, vol. 15, no. 12,
pp. 3736– 3745, 2020
[5] J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with deep neural
networks,” in Advances in neural information processing systems, 2019.
[6] K. Yu, C. Dong, L. Lin, and C. Change Loy, “Crafting a toolchain for image
restoration by deep reinforcement learning,” in Proceedings of the IEEE conference
on computer vision and pattern recognition, 2018, pp. 2443–2452
[7] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation
using cycle-consistent adversarial networks,” in Proceedings of the IEEE
International Conference on Computer Vision, 2020, pp. 2223–2232
[8] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
2018, pp. 7794– 7803.
[9] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense
object detection,” in Proceedings of the IEEE international conference on computer
vision, 2019.
[10] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable
effectiveness of deep features as a perceptual metric,” in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 2018, pp. 586– 595

[11] A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment
in the spatial domain,” IEEE Transactions on Image Processing, vol. 21, no. 12, pp.
4695–4708, 2019
[12] M. Mäkitalo and A. Foi, "Optimal Inversion of the Anscombe Transformation in
Low-Count Poisson Image Denoising," IEEE Transactions on Image Processing,
vol. 20, no. 1, pp. 99- 109, 2021.
[13] B. Zhang, J. M. Fadili and J.-L. Starck, "Wavelets, ridgelets, and curvelets for
Poisson noise removal," IEEE Transactions on Image Processing, vol. 17, no. 7, pp.
1093-1108, 2018.
[14] F. J. Anscombe, "The transformation of Poisson, binomial and negative-binomial
data," Biometrika, vol. 35, no. 3/4, pp. 246–254, 1948; M. Fisz, "The Limiting
Distribution of a Function of Two Independent Random Variables and its Statistical
Application," Colloquium Mathematicum, vol. 3, no. 2, pp. 138–146, 2019.
[15] P. Fryzlewicz and G. P. Nason, "A Haar-Fisz Algorithm for Poisson Intensity
Estimation," Journal of Computational and Graphical Statistics, vol. 13, no. 3, pp.
621-638, 2018.
[16] W. Dong, G. Shi, Y. Ma and X. Li, "Image Restoration via Simultaneous Sparse
Coding: Where Structured Sparsity Meets Gaussian Scale Mixture," International
Journal of Computer Vision, vol. 114, no. 2-3, pp. 217- 232, 2020.
[17] W. Dong, L. Zhang, G. Shi and X. Li, "Nonlocally Centralized Sparse
Representation for Image Restoration," IEEE Transactions on Image Processing,
vol. 22, no. 4, pp. 1620-1630, 2018.
[18] J. Mairal, F. R. Bach, J. Ponce, G. Sapiro and A. Zisserman, "Non-local sparse
models for image restoration," in 2009 IEEE 12th International Conference on
Computer Vision, 2019, pp. 2272-2279.
[19] U. Schmidt, Q. Gao and S. Roth, "A generative perspective on MRFs in low-level
vision," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2020, pp. 1751-1758.
[20] A. Graves, A.-r. Mohamed and G. E. Hinton, "Speech recognition with deep recurrent
neural networks," in IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP), 2018, pp. 6645-6649.
[21] Y. LeCun, Y. Bengio and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553,
p. 436, 2017.

[22] K. Fukushima, "Neocognitron: A Self-organizing Neural Network Model for a
Mechanism of Pattern Recognition Unaffected by Shift in Position," Biological
Cybernetics, vol. 36, no. 4, pp. 193-202, 2020.
[23] K. He, X. Zhang, S. Ren and J. Sun, "Deep Residual Learning for Image
Recognition," in Proceedings of the IEEE conference on computer vision and
pattern recognition, 2019, pp. 770-778.
[24] K. He, X. Zhang, S. Ren and J. Sun, "Identity Mappings in Deep Residual
Networks," in European Conference on Computer Vision, 2017, pp. 630-645.
[25] D. L. Snyder, A. M. Hammoud and R. L. White, "Image recovery from data
acquired with a charge-coupled-device camera," Journal of the Optical Society of
America A, vol. 10, no. 5, pp. 10
9.2 WEBSITES REFERRED
1. http://www.kaggle.com
2. https://analyticsindiamag.com
3. https://www.i-programmer.info
4. https://blog.floydhub.com
5. https://www.hindawi.com
