VOCATIONAL PROJECT REPORT


ON
LEMPEL-ZIV-WELCH (LZW)
SUBMITTED IN PARTIAL FULFILLMENT OF THE INDUSTRIAL
TRAINING PROJECT

BACHELOR OF TECHNOLOGY
(COMPUTER SCIENCE AND ENGINEERING)
Submitted To:
Miss Swati Srivastva (H.O.D., MCA)

Submitted by:
Ajay Thakur (11ucs005)
Ashish Thakur (11ucs020)
Litesh Thakur (11ucs035)
Prannoy Singh (11ucs053)
Vibhuti Chambyal (11ucs082)

DEPARTMENT OF COMPUTER SCIENCE AND TECHNOLOGY


SCHOOL OF ENGINEERING & EMERGING TECHNOLOGIES
VILLAGE: MAKHNUMAJRA, BADDI, DISTT. SOLAN (H.P.) - 173205


JULY, 2013

CANDIDATES' DECLARATION

We hereby certify that the work presented in this report, entitled LEMPEL-ZIV-WELCH, in partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology with specialization in Computer Science and Engineering, submitted in the Department of Computer Science and Engineering of the School of Engineering and Emerging Technologies, Baddi University of Emerging Science & Technologies, Baddi (H.P.), is an authentic record of our original work carried out under the guidance and supervision of Er. Saurabh Sharma (Assistant Professor, C.S.E), Baddi.
Ajay Thakur(11ucs005)
Ashish Thakur(11ucs020)
Litesh Thakur(11ucs035)
Prannoy Singh(11ucs053)
Vibhuti Chambyal(11ucs082)


ACKNOWLEDGMENT

At every step there is a need for proper guidance, support, and motivation. Encouragement enables a person to give his best performance and thus to achieve his goal.
A special word of thanks to the people who have been of great help during the development of this project, LEMPEL-ZIV-WELCH.
We feel honored to express our sincere gratitude to Er. Rohit Handa, Project Guide at the School of Engineering and Emerging Technologies, Baddi, whose guidance and keen interest gave the manuscript its present shape.
We extend our regards to the entire faculty of C.S.E at the SCHOOL OF ENGINEERING AND EMERGING TECHNOLOGIES, from whom we have learnt the basics of C.S.E and whose informal discussions and able guidance were a light for us through the entire duration of this work.
We express our deep sense of gratitude to our parents and thank all our friends for their constant help during our study.


CHAPTER 1
INTRODUCTION
C++
C++ is a general-purpose, statically typed, free-form, multi-paradigm, compiled programming language. It is regarded as an intermediate (middle-level) language, as it comprises both high-level and low-level language features.
C++ is one of the most popular programming languages and is implemented on a wide variety of hardware and operating system platforms. As it compiles efficiently to native code, its application domains include systems software, application software, device drivers, embedded software, high-performance server and client applications, and entertainment software such as video games.

MAIN FEATURES OF C++:


1. Classes
2. Inheritance
3. Data Abstraction and Encapsulation
4. Polymorphism
5. Dynamic Binding
6. Message Passing

1. Classes: By using classes we can create user-defined data types. In other words, a class is a collection of data and the code that operates on that data. Classes enable polymorphism, inheritance, abstraction, and encapsulation, which are our next features. Objects are instances of classes.

The syntax for a class is:

class <class-name>
{
    // body of class
};

2. Data Abstraction and Encapsulation: Encapsulation means the hiding of data behind the data structures; in other words, the wrapping up of data and functions into a single entity is known as encapsulation. The data is not accessible to the outside world, and only the member functions are allowed to access it. (When we want to write a class without prior knowledge of the types used to instantiate it, we can use templates in C++.) Abstraction can be defined as the act of representing essential features without including background details.
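
For example, a minimal sketch of encapsulation (the class and member names here are illustrative, not taken from the project) looks like this:

#include <iostream>

// The data member is private (hidden from the outside world);
// only the public member functions are allowed to access it.
class Account
{
    double balance;                       // not accessible directly
public:
    Account() : balance(0.0) {}
    void deposit(double amount) { balance += amount; }
    double getBalance() const { return balance; }
};

int main()
{
    Account a;
    a.deposit(100.0);
    std::cout << a.getBalance() << std::endl;   // prints 100
    // a.balance = 5;   // error: 'balance' is private
    return 0;
}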

3. Polymorphism: Polymorphism means that one interface can be used for many implementations, so that an object can behave differently for each implementation. The two types of polymorphism are static (compile-time) and dynamic (run-time).
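
For example, static (compile-time) polymorphism can be sketched with function overloading; the function name here is illustrative:

#include <iostream>

// One interface name, two implementations; the compiler picks the
// right one from the argument types at compile time.
int area(int side) { return side * side; }                      // square
int area(int length, int breadth) { return length * breadth; } // rectangle

int main()
{
    std::cout << area(4) << " " << area(3, 5) << std::endl;   // prints 16 15
    return 0;
}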
4. Dynamic Binding: Dynamic binding means that the code executed in response to a procedure call is chosen at run time. Which function a call on a polymorphic reference invokes depends on the dynamic type of that reference: at run time, the code matching the object currently under reference is called.
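
A minimal sketch of dynamic binding (the class names are illustrative) uses a virtual function called through a base-class pointer:

#include <iostream>

class Shape
{
public:
    virtual void draw() const { std::cout << "Shape" << std::endl; }
    virtual ~Shape() {}
};

class Circle : public Shape
{
public:
    void draw() const { std::cout << "Circle" << std::endl; }
};

int main()
{
    Circle c;
    Shape* p = &c;    // static type Shape*, dynamic type Circle
    p->draw();        // prints "Circle" -- the call is bound at run time
    return 0;
}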
5. Message Passing: An object-oriented program consists of a set of objects that communicate with each other. Objects communicate by sending and receiving information, much the same way people pass messages to one another. The concept of message passing makes it easier to directly model or simulate real-world counterparts.
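
As a small illustration (the class names are our own), one object passes a message to another simply by invoking one of its member functions:

#include <iostream>
#include <string>

// Receiver of messages: printing is the action it performs on request.
class Printer
{
public:
    void print(const std::string& message) { std::cout << message << std::endl; }
};

// Sender of messages: it asks the Printer object to act.
class Sender
{
public:
    void notify(Printer& target) { target.print("hello from Sender"); }
};

int main()
{
    Printer printer;
    Sender sender;
    sender.notify(printer);   // Sender sends a message to Printer
    return 0;
}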

6. Inheritance: Inheritance allows one data type to acquire the properties of another data type. Inheritance from a base class may be declared as public, protected, or private. If the access specifier is omitted, a class inherits privately, while a struct inherits publicly. This provides the idea of reusability: we can add new features to an existing class without modifying it.
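
For example (illustrative names again), a derived class publicly inherits the members of its base and adds a new feature without modifying the base:

#include <iostream>

class Base
{
public:
    void greet() const { std::cout << "Base::greet" << std::endl; }
};

// Public inheritance; had we written "class Derived : Base", the
// omitted specifier would default to private (public for a struct).
class Derived : public Base
{
public:
    void extra() const { std::cout << "Derived::extra" << std::endl; }
};

int main()
{
    Derived d;
    d.greet();   // inherited from Base, which was not modified
    d.extra();   // the newly added feature
    return 0;
}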

INTRODUCTION TO DATA COMPRESSION


Compression is the need of the day: multimedia applications, transmission applications, and other applications all need data to be stored compactly, for minimum cost of storage and transmission. A large number of compression techniques are available, capable of application-specific, large compression ratios. Here we limit our scope to the LZW algorithm, which can be considered the industry standard for lossless data compression.

What is 'compression'?
Data compression is the removal of redundant data; this reduces the number of binary bits necessary to represent the information contained within that data. Achieving the best possible compression requires not only an understanding of the nature of data in its binary representation, but also of how we as humans interpret the information that the data represents.

So why is it necessary?
"Compression is the key to the future expansion of the web; it's certainly the key to emerging
multimedia and 3-d technology."( brown, Honeycutt,etal 1998) although we currently exist in a
world of rapidly expanding computing and communication capabilities, with the increase in
computer awareness and, in particular, multimedia, the demand for computer systems and their
application to meet people's needs is also rising. Since every bit incurs a cost when being
transmitted or stored, any technology that can be introduced into our existing systems that can be
introduced into our existing systems that can reduce these costs is essential. When considering
6

Industrial training report 2013


raw data that may contain over50% redundancy, it raises the question- why pay for that
redundant information?

INTRODUCTION TO ENCRYPTION
In cryptography, encryption is the process of encoding messages (or information) in such a way that third parties cannot read them, but authorized parties can. Encryption doesn't prevent hacking, but it prevents the hacker from reading the data that is encrypted. In an encryption scheme, the message or information (referred to as plaintext) is encrypted using an encryption algorithm, turning it into unreadable ciphertext. This is usually done with the use of an encryption key, which specifies how the message is to be encoded. Any adversary who can see the ciphertext should not be able to determine anything about the original message. An authorized party, however, is able to decode the ciphertext using a decryption algorithm that usually requires a secret decryption key that adversaries do not have access to. For technical reasons, an encryption scheme usually needs a key-generation algorithm to randomly produce keys.
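
As a toy illustration of a symmetric scheme (this is not the project's method, and it offers no real security), the same key both encrypts and decrypts when each byte is XORed with a repeating key:

#include <iostream>
#include <string>

// Toy XOR cipher: applying the function twice with the same key
// restores the plaintext, since (x ^ k) ^ k == x.
std::string xorCipher(const std::string& text, const std::string& key)
{
    std::string out = text;
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = out[i] ^ key[i % key.size()];
    return out;
}

int main()
{
    std::string cipher = xorCipher("attack at dawn", "secret");
    std::cout << xorCipher(cipher, "secret") << std::endl;   // prints: attack at dawn
    return 0;
}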

SO LZW IS A TECHNIQUE IN WHICH BOTH ENCRYPTION AND COMPRESSION ARE USED.

Lempel–Ziv–Welch (LZW) INTRODUCTION

Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is simple to implement, and has the potential for very high throughput in hardware implementations. It was the algorithm of the widely used UNIX file compression utility compress, and is used in the GIF image format.
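
For illustration, below is a compact sketch of the textbook LZW encoder over byte strings. It follows the published algorithm, not the project's own code, and the names are our own:

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Classic LZW encoding: start with all 256 single-byte strings in the
// dictionary, then repeatedly emit the code of the longest known prefix
// and add that prefix plus the next byte as a new dictionary entry.
std::vector<int> lzwEncode(const std::string& input)
{
    std::map<std::string, int> dict;
    for (int c = 0; c < 256; ++c)
        dict[std::string(1, static_cast<char>(c))] = c;

    std::vector<int> codes;
    std::string w;                       // longest matched prefix so far
    int nextCode = 256;
    for (char c : input)
    {
        std::string wc = w + c;
        if (dict.count(wc))
            w = wc;                      // keep extending the match
        else
        {
            codes.push_back(dict[w]);    // emit code for the known prefix
            dict[wc] = nextCode++;       // learn the new string
            w = std::string(1, c);
        }
    }
    if (!w.empty())
        codes.push_back(dict[w]);
    return codes;
}

int main()
{
    // The standard worked example: 24 input bytes become 16 codes.
    for (int code : lzwEncode("TOBEORNOTTOBEORTOBEORNOT"))
        std::cout << code << " ";
    std::cout << std::endl;
    return 0;
}

Run on "TOBEORNOTTOBEORTOBEORNOT", this prints 84 79 66 69 79 82 78 79 84 256 258 260 265 259 261 263: nine literal byte codes followed by dictionary codes that stand for progressively longer strings.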

1.1 Overview of Project


This project (LZW) is a complete application of modest size that uses MS Visual C++ with the .NET framework. Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is designed to be fast to implement but is not usually optimal, because it performs only limited analysis of the data. The first compression algorithm to be described was LZ77. The two types of compression, lossy and lossless, differ in one respect: lossy compression accepts a slight loss of data to achieve compression. Lossy compression is done on analog data stored digitally, with the primary applications being graphics and sound files.

Data compression seeks to reduce the number of bits used to store or transmit information. It encompasses a wide variety of software and hardware compression techniques. Data compression consists of taking a stream of symbols and transforming them into codes. For effective compression, the resultant stream of codes will be smaller than the original symbol stream. Data compression is often referred to as coding, where coding is a general term covering any special representation of data which satisfies a given need. Information theory is defined as the study of efficient coding. Data compression may be viewed as a branch of information theory in which the primary objective is to minimize the amount of data to be transmitted. Data compression has an important role in the areas of transmission and storage, and it plays a key role in information technology. The reduction of redundancies in data representation in order to decrease data storage requirements is defined as data compression. It leads to less usage of resources such as memory space or transmission capacity. Data compression is classified as lossless and lossy compression. Lossless compression is used for text and lossy compression for images.

The problem with statistical coding is that it uses an integral number of bits, and also that one must have prior information about the symbol probabilities. The problem of the statistical model is solved by using an adaptive dictionary.


1.2 OBJECTIVES
Making a user-friendly system: The proposed system aims at providing a user-friendly system which helps the user to browse it easily. The new software provides an easy-to-use Windows graphical user interface with options for compression and decompression.
More reliability: The proposed system aims at providing more reliability of data. It is a somewhat time-consuming process. The user can get the entire output file at the specified location.
Reduce the cost of maintaining the system: The proposed system aims at reducing the cost of maintenance of the system, as information related to various aspects is stored on just one system.



Chapter 2
System Analysis
2.1 FEASIBILITY STUDY
Feasibility study describes and evaluates candidate systems and provides for the
selection of the best candidate system that meets the system performance
requirements. Three key considerations are involved in the feasibility analysis:
1. Economic feasibility
2. Technical feasibility
3. Behavioral feasibility
1. Economic Feasibility
Economic feasibility determines the benefits and savings that are expected from the system and compares them with the costs. Cost/benefit analysis has been done on the basis of the total cost of the system and the direct and indirect benefits derived from it. The total cost of the proposed system comprises hardware costs and software costs. The main aim of economic feasibility is to check whether the system is financially affordable or not.

2. Technical Feasibility
Technical feasibility centers on the existing system and on the extent to which it can support the proposed system. In this part of the feasibility analysis we determined the technical possibilities for the implementation of the system. Two major benefits are:
Improving the performance
Minimizing the cost of processing
3. Behavioral Feasibility
Behavioral feasibility estimates the reaction of the user staff towards the development of the computerized system. For the successful implementation of any system, the users must be convinced that the new system is for their benefit. So behavioral feasibility plays a very important role in the development of a new system: it reveals whether the system is acceptable to the user or not. If the user is not ready to use it, then it does not matter how good the system is or how much effort you put into its development.

2.2 SOFTWARE REQUIREMENT SPECIFICATIONS (SRS)


The main objective of LZW here is a computer program dedicated to the task of compressing and decompressing files for storage. It performs the tasks of encryption and decryption as well.
A software requirements specification for a software system is a complete description of the behavior of the system to be developed. It includes a set of use cases that describe all the interactions the users will have with the software. Use cases are also known as functional requirements. In addition to use cases, the SRS also contains non-functional (or supplementary) requirements. Non-functional requirements are requirements which impose constraints on the design or implementation (such as performance engineering requirements, quality standards, or design constraints).
The most general organization of an SRS is as follows:

Introduction
    Purpose
    Scope
    Definitions
    System Overview
    References

Overall Description
    Product Perspective
    Product Functions
    User Characteristics
    Constraints, Assumptions and Dependencies

Specific Requirements
    External Interfaces
    Functions
    Performance Requirements
    Design Constraints

2.3 SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC)


A software development process model is a description of the work practices, tools, and techniques used to develop software. Software models serve as standards as well as provide guidelines while developing software. It is beneficial to follow one or more software development models while developing software.
1) ITERATIVE AND INCREMENTAL DEVELOPMENT:
Iterative and Incremental development is at the heart of a cyclic software
development process developed in response to the weaknesses of the waterfall
model. It starts with an initial planning and ends with deployment with the cyclic
interactions in between. Iterative and incremental developments are essential parts
of the Rational Unified Process, Extreme Programming and generally the
various agile software development frameworks.
It follows a similar process to the plan-do-check-act cycle of business
process improvement.
A common mistake is to consider "iterative" and "incremental" as
synonyms, which they are not. In software/systems development, however, they
typically go hand in hand. The basic idea is to develop a system through repeated
cycles (iterative) and in smaller portions at a time (incremental), allowing software
developers to take advantage of what was learned during development of earlier
parts or versions of the system. Learning comes from both the development and
use of the system, where possible key steps in the process start with a simple
implementation of a subset of the software requirements and iteratively enhance
the evolving versions until the full system is implemented. At each iteration,
design modifications are made and new functional capabilities are added.
The procedure itself consists of the initialization step, the iteration step,
and the Project Control List. The initialization step creates a base version of the
system. The goal for this initial implementation is to create a product to which the
user can react. It should offer a sampling of the key aspects of the problem and
provide a solution that is simple enough to understand and implement easily. To
guide the iteration process, a project control list is created that contains a record of
all tasks that need to be performed. It includes such items as new features to be
implemented and areas of redesign of the existing solution. The control list is
constantly being revised as a result of the analysis phase.
The iteration involves the redesign and implementation of a task from the
project control list, and the analysis of the current version of the system. The goal
for the design and implementation of any iteration is to be simple, straightforward,
and modular, supporting redesign at that stage or as a task added to the project
control list.
The level of design detail is not dictated by the iterative approach. In a
light-weight iterative project the code may represent the major source
of documentation of the system; however, in a critical iterative project a
formal Software Design Document may be used. The analysis of iteration is based
upon user feedback, and the program analysis facilities available. It involves
analysis of the structure, modularity, usability, reliability, efficiency, &
achievement of goals. The project control list is modified in light of the analysis
results.
Phases:
Incremental development slices the system functionality into increments (portions).
In each increment, a slice of functionality is delivered through cross-discipline
work, from the requirements to the deployment. The unified process groups
increments/iterations into phases: inception, elaboration, construction, and
transition.

Inception identifies project scope, risks, and requirements (functional and non-functional) at a high level, but in enough detail that work can be estimated.

Elaboration delivers a working architecture that mitigates the top risks and fulfills the non-functional requirements.

Construction incrementally fills in the architecture with production-ready code produced from analysis, design, implementation, and testing of the functional requirements.

Transition delivers the system into the production operating environment.
Fig. 2.2: Iterative Waterfall Model.

Functional Requirements:
a) Compression and Encryption:
Description: To take an input text file and then compress it.
Input: A text file.
Output: Compressed & encrypted file and dictionary file.

b) Decompression & Decryption:
Description: To take the compressed file and dictionary file as input and then decompress it (a sketch of the decoding side follows below).
Input: Compressed file & dictionary file.
Output: Decompressed file at the specified location.
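
As a minimal sketch of the decoding side (again following the textbook algorithm, not the project's exact code), the decoder rebuilds the same dictionary on the fly from the code stream produced by the encoder shown in Chapter 1:

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Classic LZW decoding: mirror the encoder's dictionary growth.
std::string lzwDecode(const std::vector<int>& codes)
{
    std::map<int, std::string> dict;
    for (int c = 0; c < 256; ++c)
        dict[c] = std::string(1, static_cast<char>(c));

    std::string result, w;
    int nextCode = 256;
    for (int code : codes)
    {
        std::string entry;
        if (dict.count(code))
            entry = dict[code];
        else if (code == nextCode)
            entry = w + w[0];                  // the "code not yet known" special case
        result += entry;
        if (!w.empty())
            dict[nextCode++] = w + entry[0];   // mirror the encoder's new entry
        w = entry;
    }
    return result;
}

int main()
{
    // The codes emitted by the encoder sketch for "TOBEORNOTTOBEORTOBEORNOT".
    std::vector<int> codes = {84, 79, 66, 69, 79, 82, 78, 79, 84,
                              256, 258, 260, 265, 259, 261, 263};
    std::cout << lzwDecode(codes) << std::endl;   // prints TOBEORNOTTOBEORTOBEORNOT
    return 0;
}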

2.4 TECHNOLOGICAL REQUIREMENTS OF SOFTWARE

As per the study conducted, the hardware technology and software required to execute the project are readily available in the market. The project is primarily a data- or file-based application, which requires a standard PC and does not require specific hardware. The application will be developed in Visual C++ as both front end and back end.


2.2 HARDWARE&SOFTWARE REQUIREMENTS OF SOFTWARE
Hardware requirements of software are very affordable. The software is so user friendly that it
can run on a personal computer with medium configuration.
The various system requirements for installing LempelZivWelch (LZW) universal lossless
data compression algorithm application in the system are:LZW for Windows

XP/Vista/Windows 7 (64-bit compatible)


800X600 minimum monitor resolution
Internet connection or CD-ROM drive (for installation)
Minimum 512MB RAM recommended
180MB hard drive space required


Chapter 3
System Design
After the SRS document is finalized, the design phase begins.
The activities carried out during the design phase transform the SRS document into the design document.

Fig. 4.1: The design process transforms the SRS document into a design document.
There are two approaches to software design: function-oriented software design and object-oriented software design.
We followed function-oriented software design.
Function-Oriented Software Design:
In this approach the problem is considered as a function, which is further broken down into smaller functions.
The term top-down decomposition is often used to denote this successive decomposition of a set of high-level functions into more detailed functions.


3.1 Flow Charts

3.1.1 Compression Diagram:

Fig.: Compression of Data




3.1.2 Decompression Diagram:

Fig. Decompression of Data


3.2 Data Flow Diagram


Explained below is the data flow diagram, for understanding the overall picture of data flow in the program:

START
Open a file
Input data from the file
Compare the current word against the words seen so far:
    Flag = 0 (match found): take the existing index and write it into the file.
    Flag = 1 (no match): increment the integer code value and write it into the file.
Show output
STOP

Chapter 4
4.1 TESTING
Software testing

Software testing is the process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness, and security, but it can also include more technical requirements, as described under the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.

White box, black box, and grey box testing




White box and black box testing are terms used to describe the point of view that
a test engineer takes when designing test cases. Black box testing treats the
software as a black-box without any understanding as to how the internals behave.
Thus, the tester inputs data and only sees the output from the test object. This level
of testing usually requires thorough test cases to be provided to the tester who then
can simply verify that for a given input, the output value (or behavior), is the same
as the expected value specified in the test case.
White box testing, however, is when the tester has access to the internal data
structures, code, and algorithms. For this reason, unit testing and debugging can be
classified as white-box testing and it usually requires writing code, or at a
minimum, stepping through it, and thus requires more skill than the black-box
tester. If the software under test is an interface or API of any sort, white-box testing is
almost always required.
In recent years the term grey box testing has come into common usage. This
involves having access to internal data structures and algorithms for purposes of
designing the test cases, but testing at the user, or black-box level. Manipulating
input data and formatting output do not qualify as grey-box because the input and
output are clearly outside of the black-box we are calling the software under test.
This is particularly important when conducting integration testing between two
modules of code written by two different developers, where only the interfaces are
exposed for test.
Grey box testing could be used in the context of testing a client-server
environment when the tester has control over the input, inspects the value in a SQL
database, and the output value, and then compares all three (the input, sql value,
and output), to determine if the data got corrupt on the database insertion or
retrieval.


Fig. 4.1.1
Verification and validation
Software testing is used in association with verification and validation (V&V).
Verification is the checking of or testing of items, including software, for
conformance and consistency with an associated specification. Software testing is
just one kind of verification, which also uses techniques such as reviews,
inspections, and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.
Verification: Have we built the software right? (i.e. does it match the
specification).
Validation: Have we built the right software? (i.e. Is this what the customer
wants?)

4.1.1 Levels of testing


Unit testing tests the minimal software component, or module. Each unit (basic
component) of the software is tested to verify that the detailed design for the
unit has been correctly implemented. In an Object-oriented environment, this is
usually at the class level, and the minimal unit tests include the constructors and
destructors.
Integration testing exposes defects in the interfaces and interaction between
integrated components (modules). Progressively larger groups of tested
software components corresponding to elements of the architectural design are
integrated and tested until the software works as a system.
Functional testing tests at any level (class, module, interface, or system) for
proper functionality as defined in the specification.
System testing tests a completely integrated system to verify that it meets its
requirements.
System integration testing verifies that a system is integrated to any external
or third party systems defined in the system requirements.
Acceptance testing can be conducted by the end-user, customer, or client to
validate whether or not to accept the product. Acceptance testing may be
performed as part of the hand-off process between any two phases of
development.
Alpha testing is simulated or actual operational testing by potential
users/customers or an independent test team at the developers' site. Alpha
testing is often employed for off-the-shelf software as a form of internal
acceptance testing, before the software goes to beta testing.
Beta testing comes after alpha testing. Versions of the software, known as
beta versions, are released to a limited audience outside of the company. The
software is released to groups of people so that further testing can ensure the
product has few faults or bugs. Sometimes, beta versions are made available
to the open public to increase the feedback field to a maximal number of
future users.
It should be noted that although both alpha and beta are referred to as testing, they are in fact use immersion. The rigors that are applied are often unsystematic, and many of the basic tenets of the testing process are not used. The alpha and beta periods provide insight into environmental and utilization conditions that can impact the software.
SMOKE TESTING


Smoke testing is a term used in plumbing, woodwind repair, electronics, and
computer software development. It refers to the first test made after repairs or first
assembly to provide some assurance that the system under test will not
catastrophically fail. After a smoke test proves that the pipes will not leak, the keys
seal properly, the circuit will not burn, or the software will not crash outright, the
assembly is ready for more stressful testing.

In plumbing, a smoke test forces actual smoke through newly plumbed pipes to find leaks, before water is allowed to flow through the pipes.

In woodwind instrument repair, a smoke test involves plugging one end of an instrument and blowing smoke into the other to test for leaks. (This test is no longer in common use.)

Chapter 5
Implementation
This chapter provides the details of the resources used during the development of the program/software.

5.1 Working Environment


INTRODUCTION TO VISUAL C++
Microsoft Visual C++ (often abbreviated as MSVC or VC++) is a commercial integrated development environment (IDE) product from Microsoft for the C, C++, and C++/CLI programming languages. It features tools for developing and debugging C++ code, especially code written for the Microsoft Windows API, the DirectX API, and the Microsoft .NET Framework. The IDE includes an AppWizard, Class Wizard, and testing features to make programming easier.
The characteristics of C++ are listed below:
1. Portability
2. Compatibility
3. Modular Programming
4. Object-Oriented Methodology
5. Speed
6. Dynamic Binding
7. Message Passing
8. Data Security
9. Error Handling Features
10. Less Complexity

5.2 Platform Used

This section describes the MINIMUM hardware requirements of the project.

CPU                  : Pentium IV
RAM                  : 128 MB
Hard Drive           : 512 MB
Resolution of Screen : 1366x768
Display              : Coloured

5.3 Development Tools

Here is the list of the various tools used in developing this project.

C/C++ IDE                  : Visual C++, Code::Blocks
C/C++ Compiler             : gcc
Documentation              : MS Word 2013
Screenshots                : PicPick Editor
Video Tutorials            : Debut Video Capture Software
Project Schedule           : GanttProject
Online Documentation       : WordPress blog
Online Video Documentation : YouTube channel
Utility Software           :
Communication              : Google Groups



5.4 IMPLEMENTATION ISSUES
The implementation phase of software development is concerned with translating the design specifications into source code. After the system has been designed comes the stage of putting it into actual use, known as the implementation of the system. This involves putting the theoretically designed system to actual practical use. The primary goal of implementation is to write the source code and the internal documentation so that conformance of the code to its specifications can easily be verified, and so that debugging, modification, and testing are eased. This goal can be achieved by making the source code as clear and as straightforward as possible. Simplicity, elegance, and clarity are the hallmarks of good programs, whereas complexity is an indication of inadequate design and misdirected thinking. System implementation is a fairly complex and expensive task requiring numerous inter-dependent activities. It involves the effort of a number of groups of people: the users, the programmers, the computer operating staff, etc. This needs proper planning to carry out the task successfully.
Thus it involves the following activities:
Writing and testing of programs individually
Testing the system as a whole using live data
Training and education of the users and supervisory staff
Source code clarity is enhanced by using structured coding techniques, by an efficient coding style, by appropriate supporting documents, by efficient internal comments, and by features provided in modern programming languages.
The following are the structured coding techniques:
1) Single Entry, Single Exit
2) Data Encapsulation
3) Using recursion for appropriate problems

Chapter 6
Coding
This phase consists of all the coding of the software. This section therefore contains the code and the screenshots of all the forms in the project. It lets any programmer understand the project easily: how all the connections are made, opened, and closed, and more. This section provides the code needed for the development of the application and maintains all the required code for it.
This phase is difficult only at the start; once we begin working on it, the coding becomes easier with time, as it is easy to understand due to its English-like wording. There are different types of header files for different purposes.
This phase is one of the most important phases, because any mistake made here can lead to improper working of the software.
The code is given below, followed by snapshots:

CODE:

#include<iostream>
#include<conio.h>
#include<fstream>
#include<string.h>
using namespace std;

int main(int argc,char* argv[])


{
int r=0,j=0,flag=0,count=0;
int size=argc;
//cout<<size;
char* str[30];
ofstream fout;
fout.open("E://abc.txt",ios::out);
fout.write((char*)&count,sizeof(int));
//int a[100];
//cout<<count<<endl;
for(int i=2;i<argc;i++)
{
str[j]=argv[i-1];
for(int q=j;q>=0;q--)
{
//cout<<argv[i]<<"...."<<str[q]<<endl;
if(strcmp(argv[i],str[q]))
{
27

Industrial training report 2013


flag=1;

}
else
{
flag=0;
fout.write((char*)&q,sizeof(int));
break;
}
}
//int a;
//a=count;
if(flag==1)
{
count++;
//cout<<count<<endl;
cout<<(char*)&count;
fout.write((char*)&count,sizeof(int));

}
/*if(flag==0)
{
fout.write(a,sizeof(int));
}*/
j++;
28

Industrial training report 2013


}
fout.close();
ifstream fin;
fin.open("E://abc.txt",ios::in);
fin.read((char*)&count,sizeof(int));
while(!fin.eof())
{
cout<<count<<endl;;
fin.read((char*)&count,sizeof(int));
}
fin.close();
return 0;
}
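
As a usage sketch (the binary name is our choice and the sample words are illustrative), the program takes the words to encode as command-line arguments. Assuming it is compiled to lzw.exe, an invocation such as

lzw.exe ab cd ab cd

should write the integer codes 0 1 0 1 to E://abc.txt, as hard-coded above: the first word gets code 0, each new word gets the next code, and a repeated word gets the index of its earlier occurrence. The program then reads the file back and prints those codes.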
SNAPSHOTS


Chapter 7
CONCLUSION AND FUTURE SCOPE
7.1 Conclusion

We have successfully implemented the different modules of the LZW ALGORITHM. It performs extremely well when compressing text or images: compression levels of 50% or better should be expected.

Increased security of data files: the file compressed through LZW data compression is totally in obscured form, so there is no chance of the data in the file being read by an unauthorized person.

Increased speed: the compression speed of our software is faster than that of other software, as the code is small and no complex code is written; as a result, compression and decompression run at a faster rate.

Compressing saved screens and displays will generally show very good results.

It is user-friendly and gives illustrative warnings on errors.

It is compatible with every Windows platform.

It is fairly low-cost software.

It is secure and reliable.

7.2 FUTURE ENHANCEMENTS

We can have a website covering different technologies through which the application can be made more efficient, more powerful, lighter, and more interactive with the user.

REFERENCES

http://w3schools.com/
http://in3.C++.net/
Blackstock, Steve, "LZW and GIF Explained", manuscript in the public domain, 1987.
Montgomery, Bob, "LZW Compression Used to Encode/Decode a GIF File", manuscript in the public domain, 1988.