
AUTOMATIC RECONFIGURATION FOR LARGE-SCALE

RELIABLE STORAGE SYSTEMS


A project report
submitted for the partial fulfillment of the requirement for the Degree of

Master of Computer Applications


by
J. Janardhan
C19101PCA6001

Under the Guidance of


Dr. S. JAWAHAR, MCA, M.Phil., Ph.D.

INSTITUTE OF DISTANCE EDUCATION


UNIVERSITY OF MADRAS
CHENNAI - 600 005
DECEMBER - 2021
BONAFIDE CERTIFICATE

This is to certify that the report entitled "AUTOMATIC RECONFIGURATION FOR
LARGE-SCALE RELIABLE STORAGE SYSTEMS" being submitted to the University of
Madras, Chennai by JANARDHAN J (C19101PCA6001) in partial fulfillment for the
award of the degree of M.C.A. is a bonafide record of work carried out by him under my
guidance and supervision.

Name of the Guide Co-Ordinator

Dr. S. Jawahar, MCA, M.Phil., Ph.D. Dr. S. Sasikala, MCA, M.Phil., Ph.D.
Assoc. Prof, Dept of MCA Assoc. Prof, Dept of MCA,
Asan Memorial College of Arts & Science. IDE, University of Madras.

Date:

Submitted for the Viva - Voce Examination held on 28-11-2021 at IDE,

University of Madras.

Examiners

1.

2.
DECLARATION

I, JANARDHAN J, declare that this report on "AUTOMATIC
RECONFIGURATION FOR LARGE-SCALE RELIABLE STORAGE
SYSTEMS" is a bonafide report of the project work done by me in partial
fulfillment for the award of the Degree of Master of Computer Applications
(MCA) by the University of Madras, and further that this report is not part of
any other report that formed the basis for the award of any degree in any
discipline in any university.

Place: CHENNAI JANARDHAN J

Date: 28-11-2021
ACKNOWLEDGEMENT

I am grateful to Dr. K. Ravichandran, Ph.D., Director, IDE, University of Madras for


their valuable support in the completion of this project.

I express my sincere thanks to Dr. S. Sasikala, Co-ordinator for giving me an


opportunity to do a real time project during the course of my Master of Computer
Applications degree.

My sincere thanks to my guide Dr. S. Jawahar, Associate Professor who took tireless
effort in furnishing me with all the required information and to make this project a grand
success.

I wish to express my deepest sense of gratitude to Mr. DESCO, TECHNO INFO
SOLUTIONS, Chennai, who has been so kind and helpful to me in the completion of this
project.
ABSTRACT:

Byzantine-fault-tolerant replication enhances the availability and reliability of Internet


services that store critical state and preserve it despite attacks or software errors. However,
existing Byzantine-fault-tolerant storage systems either assume a static set of replicas, or
have limitations in how they handle reconfigurations (e.g., in terms of the scalability of the
solutions or the consistency levels they provide). This can be problematic in long-lived, large-
scale systems where system membership is likely to change during the system lifetime.

In this paper, we present a complete solution for dynamically changing system membership
in a large-scale Byzantine-fault-tolerant system. We present a service that tracks system
membership and periodically notifies other system nodes of membership changes. The
membership service runs mostly automatically, to avoid human configuration errors; is itself
Byzantine-fault-tolerant and reconfigurable; and provides applications with a sequence of
consistent views of the system membership.

We demonstrate the utility of this membership service by using it in a novel distributed hash
table called dBQS that provides atomic semantics even across changes in replica sets. dBQS
is interesting in its own right because its storage algorithms extend existing Byzantine
quorum protocols to handle changes in the replica set, and because it differs from previous
DHTs by providing Byzantine fault tolerance and offering strong semantics.

We implemented the membership service and dBQS. Our results show that the approach
works well in practice: the membership service is able to manage a large system and the cost
to change the system membership is low.
INDEX

S. No.  CONTENT

01.  INTRODUCTION
     1.1 COMPANY PROFILE
     1.2 PROJECT OVERVIEW

02.  SYSTEM ANALYSIS
     2.1 FEASIBILITY STUDY
     2.2 EXISTING SYSTEM
     2.3 PROPOSED SYSTEM

03.  SYSTEM CONFIGURATION
     3.1 HARDWARE SPECIFICATION
     3.2 SOFTWARE SPECIFICATION
     3.3 ABOUT THE SOFTWARE

04.  SYSTEM DESIGN
     4.1 NORMALIZATION
     4.2 TABLE DESIGN
     4.3 INPUT DESIGN
     4.4 SFD/DFD

05.  SYSTEM DESCRIPTION

06.  TESTING AND IMPLEMENTATION

07.  CONCLUSION AND FUTURE SCOPE

08.  FORMS AND REPORT

09.  BIBLIOGRAPHY
1) INTRODUCTION
1.1 COMPANY PROFILE

TECHNO INFO SOLUTIONS is a Chennai-based company with a long standing since 2005
and a nationwide reputation, not only for excellence in academic research and innovation
that benefits society, but also for commercially relevant research that assists companies in
achieving market leadership. The research extends into the doctorate sector as well. Its goals
are to enhance the user experience on computing devices, reduce the cost of writing and
maintaining software, and invent novel computing technologies.

Techno Info Solutions also collaborates openly with colleges and universities
worldwide to broadly advance the field of computer science. Techno Info Solutions (TIS)
is one of South India's leading RPO companies; it envisioned and pioneered the adoption
of the flexible global business practices that today enable companies to operate more
efficiently and produce more value.

We are involved in software development, hardware integration, product development, and
research and development. Our major domains are Healthcare, Logistics, Defense, and RPO.

We are part of one of South India's growing conglomerates, the Techno Info Solutions Group,
which, with its interests in Financial Services, Agriculture, and Engineering & Technology
Development, provides us with a grounded understanding of the specific business challenges
facing global companies.

As industry leaders, we introduced offshore development and product development and
pioneered development and support frameworks, ensuring compressed delivery timeframes.
Today, our solutions provide strategic advantage to several of the most admired organizations
in the world. We have long-standing and vibrant partnerships with many companies across
the globe.
Real Time Projects

Real-time projects are executed in the Research and Development wing of the TIS Group's
software development division, where thorough research of each project is done and
development follows the MVC coding standard (CMM-level standard). Techno Info Solutions
specializes in innovative IT solutions that ease day-to-day operations through cost-effective
software development projects. We employ highly qualified software developers and creative
designers with years of experience to accomplish software projects with utmost satisfaction.
Techno Info Solutions helps clients design, develop, and integrate applications and solutions
based on various platforms such as Microsoft .NET, Java, and PHP.

1.2 PROJECT OVERVIEW:

Byzantine-fault-tolerant replication enhances the availability and reliability of Internet


services that store critical state and preserve it despite attacks or software errors. However,
existing Byzantine-fault-tolerant storage systems either assume a static set of replicas or
have limitations in how they handle reconfigurations (e.g., in terms of the scalability of the
solutions or the consistency levels they provide). This can be problematic in long-lived,
large-scale systems where system membership is likely to change during the system lifetime.

In this paper, we present a complete solution for dynamically changing system membership
in a large-scale Byzantine-fault-tolerant system. We present a service that tracks system
membership and periodically notifies other system nodes of membership changes. The
membership service runs mostly automatically, to avoid human configuration errors; is itself
Byzantine-fault-tolerant and reconfigurable; and provides applications with a sequence of
consistent views of the system membership.

We demonstrate the utility of this membership service by using it in a novel distributed hash
table called dBQS that provides atomic semantics even across changes in replica sets. dBQS
is interesting in its own right because its storage algorithms extend existing Byzantine
quorum protocols to handle changes in the replica set, and because it differs from previous
DHTs by providing Byzantine fault tolerance and offering strong semantics.

We implemented the membership service and dBQS. Our results show that the approach
works well in practice: the membership service can manage a large system and the cost to
change the system membership is low.
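As a hint of how dBQS's quorum protocols tolerate faulty replicas, the sketch below (our own
illustration in C#, not code from the report) shows the standard Byzantine quorum arithmetic:
with n = 3f + 1 replicas, any two quorums of size 2f + 1 intersect in at least f + 1 replicas, so
at least one replica in the intersection is correct.

// Illustrative Byzantine quorum sizing; f is the maximum number of
// Byzantine-faulty replicas tolerated.
public static class ByzantineQuorum
{
    public static int ReplicaCount(int f) { return 3 * f + 1; }

    public static int QuorumSize(int f) { return 2 * f + 1; }

    // An operation completes once matching replies arrive from a full quorum.
    public static bool HasQuorum(int matchingReplies, int f)
    {
        return matchingReplies >= QuorumSize(f);
    }
}

For f = 1 this gives four replicas with quorums of three; any two quorums of three out of four
share at least two replicas, at least one of which is correct.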

MODULES:

1. RELIABLE AUTOMATIC RECONFIGURATION

2. TRACKING MEMBERSHIP SERVICE

3. BYZANTINE FAULT TOLERANCE

4. DYNAMIC REPLICATION

RELIABLE AUTOMATIC RECONFIGURATION:

This module provides the abstraction of a globally consistent view of the system
membership. This abstraction simplifies the design of applications that use it, since it allows
different nodes to agree on which servers are responsible for which subset of the service. It
is designed to work at large scale, e.g., tens or hundreds of thousands of servers. Support
for large scale is essential since systems today are already large and we can expect them to
scale further. It is secure against Byzantine (arbitrary) faults. Handling Byzantine faults is
important because it captures the kinds of complex failure modes that have been reported
for our target deployments.
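To make the abstraction concrete, the following minimal C# sketch (our own illustration; the
names are hypothetical, not taken from the report) models an epoch-numbered membership
view that lets every node make the same local decision about which server is responsible for
a key. In the real system the view also carries a signature from the membership service.

using System;
using System.Collections.Generic;

// Hypothetical sketch of an epoch-numbered membership view.
public sealed class MembershipView
{
    private readonly long epoch;            // monotonically increasing epoch number
    private readonly IList<string> servers; // identities of the current members

    public MembershipView(long epoch, IList<string> servers)
    {
        this.epoch = epoch;
        this.servers = servers;
    }

    public long Epoch { get { return epoch; } }

    // Deterministic assignment: every node holding the same view agrees on
    // which server is responsible for a given key.
    public string ResponsibleServer(int keyHash)
    {
        return servers[(keyHash & 0x7fffffff) % servers.Count];
    }
}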

TRACKING MEMBERSHIP SERVICE:

The membership service (MS) is only part of what is needed for automatic reconfiguration.
We assume nodes are connected by an unreliable asynchronous network like the Internet,
where messages may be lost, corrupted, delayed, duplicated, or delivered out of order. While
we make no synchrony assumptions for the system to meet its safety guarantees, it is
necessary to make partial synchrony assumptions for liveness. The MS describes membership
changes by producing a configuration, which identifies the set of servers currently in the
system, and sending it to all servers. To allow the configuration to be exchanged among
nodes without the possibility of forgery, the MS authenticates it using a signature that can be
verified with a well-known public key.
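As a minimal sketch of this authentication step (a single-key stand-in only: the real MS uses
a proactive threshold signature, so no single replica ever holds the whole signing key), the
following C# uses the .NET Framework's RSACryptoServiceProvider:

using System.Security.Cryptography;

// Sketch only: sign and verify a serialized configuration.
public static class ConfigurationSigner
{
    public static byte[] Sign(byte[] configuration, RSACryptoServiceProvider msPrivateKey)
    {
        // Sign a SHA-1 digest of the serialized configuration.
        return msPrivateKey.SignData(configuration, new SHA1CryptoServiceProvider());
    }

    public static bool Verify(byte[] configuration, byte[] signature,
                              RSACryptoServiceProvider wellKnownPublicKey)
    {
        // Any node can check the configuration against the well-known public key.
        return wellKnownPublicKey.VerifyData(configuration,
            new SHA1CryptoServiceProvider(), signature);
    }
}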

BYZANTINE FAULT TOLERANCE:

To provide Byzantine fault tolerance for the MS, we implement it with a group of replicas
executing the PBFT state machine replication protocol. These MS replicas can run on server
nodes, but the size of the MS group is small and independent of the system size. To implement
the tracking service, the MS supports the following operations (restated as an interface
sketch after this list):

1. ADD – Takes a certificate signed by the trusted authority describing the node and adds the
node to the set of system members.

2. REMOVE – Takes a certificate signed by the trusted authority that identifies the node to
be removed, and removes this node from the current set of members.

3. FRESHNESS – Receives a freshness challenge; the reply contains the nonce and the
current epoch number signed by the MS.

4. PROBE – The MS sends probes to servers periodically. Servers respond with a simple ack
or, when a nonce is sent, by repeating the nonce and signing the response.

5. NEW EPOCH – Informs nodes of a new epoch. It carries a certificate vouching for the
new configuration, and the changes represent the delta in the membership.
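A hypothetical C# interface capturing the five operations; all names below are ours, since
the report does not prescribe a concrete API.

// Placeholder reply types for the sketch.
public class FreshnessReply  { public byte[] Nonce; public long Epoch; public byte[] MsSignature; }
public class ProbeReply      { public bool Ack; public byte[] SignedNonce; }
public class NewEpochNotice  { public long Epoch; public byte[] ConfigurationCertificate; public byte[] MembershipDelta; }

public interface IMembershipService
{
    void Add(byte[] certificateFromTrustedAuthority);    // admit the described node
    void Remove(byte[] certificateFromTrustedAuthority); // evict the identified node
    FreshnessReply Freshness(byte[] nonce);              // nonce + epoch, signed by the MS
    ProbeReply Probe(byte[] optionalNonce);              // plain ack, or signed nonce echo
    NewEpochNotice NewEpoch();                           // configuration certificate + delta
}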


DYNAMIC REPLICATION:

To prevent an attacker from predicting the new configuration, the epoch change proceeds as
follows (sketched in code after this list):

1. Choose a random number.

2. Sign the configuration using the old shares.

3. Carry out a resharing of the MS keys with the new MS members.

4. Discard the old shares.
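A high-level orchestration of these four steps; IShareHolder is our hypothetical placeholder
for the threshold-cryptography machinery, not an interface from the report.

using System.Security.Cryptography;

public interface IShareHolder
{
    byte[] ThresholdSign(byte[] message, byte[] randomness); // sign with the old shares
    void ReshareTo(IShareHolder newMembers);                 // re-share the MS key
    void Destroy();                                          // discard the old shares
}

public static class EpochChange
{
    public static byte[] Run(IShareHolder oldShares, IShareHolder newMembers,
                             byte[] newConfiguration)
    {
        // 1. Choose a random number the attacker cannot predict.
        byte[] random = new byte[32];
        new RNGCryptoServiceProvider().GetBytes(random);

        // 2. Sign the new configuration using the old shares.
        byte[] signature = oldShares.ThresholdSign(newConfiguration, random);

        // 3. Carry out a resharing of the MS keys with the new MS members.
        oldShares.ReshareTo(newMembers);

        // 4. Discard the old shares so compromised old replicas cannot forge
        //    configurations for later epochs.
        oldShares.Destroy();
        return signature;
    }
}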

IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned into a
working system. Thus, it can be the most critical stage in achieving a successful new system
and in giving the user confidence that the new system will work and be effective.

The implementation stage involves careful planning, investigation of the existing system
and its constraints on implementation, designing of methods to achieve the changeover, and
evaluation of the changeover methods.

Modules
1. Reliable Automatic Reconfiguration

2. Tracking membership Service

3. Byzantine Fault Tolerance

4. Dynamic Replication

These four modules were implemented following the designs described in Section 1.2 above.

SECURE COMPUTING
Businesses, governments, academic institutions, and individual users are becoming
increasingly interconnected through a variety of wired and wireless communication
networks and with a variety of computing devices. Concerns about the security of
communications, transactions, and wireless networks are inhibiting realization of benefits
associated with pervasive connectivity and electronic commerce. These concerns include
exposure of data on systems, system compromise due to software attack, and lack of user
identity assurance for authorization. The latter concern is exacerbated by the increasing
prevalence of identity theft.
In addition, as users become more mobile, physical theft is becoming a growing concern.
Users and IT organizations need the industry to address these issues with standards-based
security solutions that reduce the risks associated with participation in an interconnected
world while also ensuring interoperability and protecting privacy. Standardization will also
enable a more consistent user experience across different device types.
The Trusted Computing Group (TCG) formed in 2003 to respond to this challenge. The
purpose of TCG is to develop, define, and promote open, vendor-neutral industry
specifications for trusted computing. These include hardware building block and software
interface specifications across multiple platforms and operating environments.
Implementation of these specifications will help manage data and digital identities more
securely, protecting them from external software attack and physical theft. TCG
specifications can also provide capabilities that can be used for more secure remote access
by the user and enable the user’s system to be used as a security token.
This backgrounder will provide an overview of the threats being addressed by TCG,
the user and IT benefits that can be derived from the use of products based on TCG
specifications, and the structure and specifications of the organization.

The Threat of Software Attack

A critical problem being addressed by products developed based upon these specifications
is the increasing threat of software attack due to a combination of increasingly sophisticated
and automated attack tools [1], the rapid increase in the number of vulnerabilities being
discovered [1], and the increasing mobility of users. The large number of vulnerabilities is
due, in part, to the incredible complexity of modern systems. For example, a typical Unix®
or Windows® system, including major applications, represents on the order of 100 million
lines of code. Recent studies have shown that typical product-level software has roughly one
security-related bug per thousand lines of source code. Thus, a typical system will potentially
have one hundred thousand security bugs.
The reality of this problem can be seen in data on reported vulnerabilities and incidents
maintained by the CERT® Coordination Center (CERT) at the federally funded Software
Engineering Institute operated by Carnegie Mellon University [2]. For example, reported
vulnerabilities on average doubled each year from 2000 to 2002, and CERT reports similar
numbers of vulnerabilities for 2003 – 2006.

Secure Computing has introduced the industry's first Secure Hybrid Delivery
Architecture. Customers may now gain access to best-of-breed security — in any
configuration — Secure Managed Services, Secure Appliances, and Secure Virtual
Appliances.

Secure Hybrid Delivery Benefits:

• Easily fits into existing infrastructures while allowing for future growth
• Consistently and over time enables the most favorable operational efficiencies and
resource reductions
• Provides sound deployment options for energy optimization and Green IT

• Provides customers with the lowest possible total cost of ownership

The security software market has been rapidly eclipsed by appliances over the last 5 years.
Now, virtual environments and hosted solutions are showing strong growth and hybrid
solutions that enhance existing security infrastructure are looking to become the norm.
Secure Computing continues to evolve its offerings providing customers a flexible, hybrid
delivery model that includes products, virtualized configurations, and managed services.

Secure Hybrid Delivery Features:

• On-premises, virtualized, and hosted service delivery platforms
• Fully portable services across all delivery platforms
• Hybrid service deployment splitting services across platforms
• Common policy definition and administration
• Unified service offerings and pricing
• Unified reporting
Customers should not be forced to choose between different delivery models for their web,
messaging, and application security. Our Hybrid Delivery Architecture delivers portable
services across hosted, virtual, and hardware-based appliances, and provides customers
with the ability to mix and match delivery models based on their needs. With common
policy and reporting, the Secure Hybrid Delivery Model is designed to offer customers
solutions that best meet their current and future business needs, maximize their flexibility,
and help them optimize datacenter space.

2) SYSTEM ANALYSIS
INTRODUCTION TO SYSTEM ANALYSIS
System analysis is the process of gathering and interpreting facts, diagnosing problems, and
using the information to recommend improvements to the system. It is a problem-solving
activity that requires intensive communication between the system users and system
developers. System analysis, or study, is an important phase of any system development
process. The system is studied to the minutest detail and analyzed. The system analyst plays
the role of the interrogator and dwells deep into the working of the present system. The
system is viewed as a whole and the inputs to the system are identified. The outputs from
the organization are traced through the various processes.

System analysis is concerned with becoming aware of the problem, identifying the relevant
and decisional variables, analyzing and synthesizing the various factors, and determining an
optimal, or at least a satisfactory, solution or program of action. A detailed study of the
process must be made using various techniques like interviews, questionnaires, etc. The data
collected from these sources must be scrutinized to arrive at a conclusion. The conclusion is
an understanding of how the system functions. This system is called the existing system. The
existing system is then subjected to close study, and its problem areas are identified. The
designer now functions as a problem solver and tries to sort out the difficulties that the
enterprise faces. The solutions are given as proposals. The proposals are then weighed
against the existing system analytically, and the best one is selected. The proposal is
presented to the user for endorsement. The proposal is reviewed on user request and suitable
changes are made. This loop ends as soon as the user is satisfied with the proposal.

Preliminary study is the process of gathering and interpreting facts and using the information
for further study of the system. It is a problem-solving activity that requires intensive
communication between the system users and system developers. It involves various
feasibility studies, from which a rough picture of the system's activities can be obtained and
decisions about the strategies to be followed for effective system study and analysis can be
taken.

2.1 FEASIBILITY STUDY

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical requirements
of the system. Any system developed must not place a high demand on the available technical
resources; otherwise, high demands would be placed on the client. The developed system
must have modest requirements, as only minimal or null changes are required for
implementing this system.

SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a necessity.

The level of acceptance by the users solely depends on the methods that are employed to
educate the user about the system and to make him familiar with it. His level of confidence
must be raised so that he is also able to make some constructive criticism, which is
welcomed, as he is the final user of the system.

2.2 EXISTING SYSTEM:

In the existing system, replication enhances the reliability of Internet services by storing
critical data and preserving it against software errors. However, existing Byzantine-fault-
tolerant systems assume a static set of replicas. This limitation means the systems scale
poorly and cannot keep membership consistent, so they are not suited to long-lived systems.
We assume the existence of cryptographic techniques that an adversary cannot subvert: a
collision-resistant hash function, a public-key cryptography scheme, a forward-secure
signing key, and a proactive threshold signature protocol.
2.3 PROPOSED SYSTEM:

The proposed system has two parts. The first is a membership service (MS) that tracks and
responds to membership changes. The MS works mostly automatically and requires only
minimal human intervention; in this way we can reduce manual configuration errors, which
are a major cause of disruption in computer systems. Periodically, the MS publishes a new
system membership; in this way it provides a globally consistent view of the set of available
servers. The choice of strong consistency makes it easier to implement applications, since it
allows clients and servers to make consistent local decisions about which servers are
currently responsible for which parts of the service. The second part of our solution
addresses the problem of how to reconfigure applications automatically as system
membership changes. We present a storage system, dBQS, that provides Byzantine-fault-
tolerant replicated storage with strong consistency.

3) SYSTEM CONFIGURATION

3.1 HARDWARE SPECIFICATION

· System : Pentium IV 2.4 GHz
· Hard Disk : 40 GB
· Floppy Drive : 1.44 MB
· Monitor : 15" VGA Color
· Mouse : Logitech
· RAM : 512 MB

3.2 SOFTWARE SPECIFICATION

· Operating system : Windows XP
· Coding Language : C#.NET
· Database : SQL Server 2005
3.3 Software Environment

FEATURES OF .NET

Microsoft .NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily
and securely interoperate. There is no language barrier with .NET: there are numerous
languages available to the developer, including Managed C++, C#, Visual Basic and JScript.
The .NET framework provides the foundation for components to interact seamlessly,
whether locally or remotely on different platforms. It standardizes common data types and
communications protocols so that components created in different languages can easily
interoperate.

“.NET” is also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET
Server, for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the "execution engine" of .NET. It provides the environment within
which programs run. Its most important features are:

• Conversion from a low-level assembler-style language, called Intermediate Language
(IL), into code native to the platform being executed on.
• Memory management, notably including garbage collection.
• Checking and enforcing security restrictions on the running code.
• Loading and executing programs, with version control and other such features.

The following features of the .NET framework are also worth description:

Managed Code

Managed code is code that targets .NET and contains certain extra information - "metadata" -
to describe itself. Whilst both managed and unmanaged code can run in the runtime, only
managed code contains the information that allows the CLR to guarantee, for instance, safe
execution and interoperability.

Managed Data

With Managed Code comes Managed Data. The CLR provides memory allocation and
deallocation facilities, and garbage collection. Some .NET languages use Managed Data by
default, such as C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do
not. Targeting the CLR can, depending on the language you're using, impose certain
constraints on the features available. As with managed and unmanaged code, one can have
both managed and unmanaged data in .NET applications - data that doesn't get garbage
collected but instead is looked after by unmanaged code.

Common Type System


The CLR uses something called the Common Type System (CTS) to strictly enforce type
safety. This ensures that all classes are compatible with each other, by describing types in a
common way. The CTS defines how types work within the runtime, which enables types in
one language to interoperate with types in another language, including cross-language
exception handling. As well as ensuring that types are only used in appropriate ways, the
runtime also ensures that code doesn't attempt to access memory that hasn't been allocated
to it.

Common Language Specification


The CLR provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common Language
Specification (CLS) has been defined. Components that follow these rules and expose only
CLS features are considered CLS compliant.

THE CLASS LIBRARY


.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of
the namespace is called System; this contains basic types like Byte, Double, Boolean, and
String, as well as Object. All objects derive from System.Object. As well as objects, there
are value types. Value types can be allocated on the stack, which can provide useful
flexibility. There are also efficient means of converting value types to object types when
necessary.

The set of classes is comprehensive, providing collections, file, screen, and network I/O,
threading, and so on, as well as XML and database connectivity.

The class library is subdivided into several sets (or namespaces), each providing distinct
areas of functionality, with dependencies between the namespaces kept to a minimum.

LANGUAGES SUPPORTED BY .NET

The multi-language capability of the .NET Framework and Visual Studio .NET enables
developers to use their existing programming skills to build all types of applications and
XML Web services. The .NET framework supports new versions of Microsoft’s old
favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number
of new additions to the family.
Visual Basic .NET has been updated to include many new and improved language features
that make it a powerful object-oriented programming language. These features include
inheritance, interfaces, and overloading, among others. Visual Basic also now supports
structured exception handling, custom attributes and also supports multi-threading.

Visual Basic .NET is also CLS compliant, which means that any CLS-compliant language
can use the classes, objects, and components you create in Visual Basic .NET.

Managed Extensions for C++ and attributed programming are just some of the enhancements
made to the C++ language. Managed Extensions simplify the task of migrating existing C++
applications to the new .NET Framework.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++ for Rapid
Application Development”. Unlike other languages, its specification is just the grammar of
the language. It has no standard library of its own, and instead has been designed with the
intention of using the .NET libraries as its own.

Microsoft Visual J# .NET provides the easiest transition for Java-language developers into
the world of XML Web Services and dramatically improves the interoperability of Java-
language programs with existing software written in a variety of other programming
languages.

ActiveState has created Visual Perl and Visual Python, which enable .NET-aware
applications to be built in either Perl or Python. Both products can be integrated into the
Visual Studio .NET environment. Visual Perl includes support for ActiveState's Perl Dev
Kit.

Other languages for which .NET compilers are available include

• FORTRAN

• COBOL

• Eiffel
.NET Framework

[Figure: the .NET Framework stack: ASP.NET and Windows Forms with XML Web
Services at the top, layered over the Base Class Libraries, the Common Language Runtime,
and the Operating System.]

C#.NET is also compliant with the CLS (Common Language Specification) and supports
structured exception handling. The CLS is a set of rules and constructs that are supported by
the CLR (Common Language Runtime). The CLR is the runtime environment provided by
the .NET Framework; it manages the execution of the code and also makes the development
process easier by providing services.

C#.NET is a CLS-compliant language. Any objects, classes, or components created in
C#.NET can be used in any other CLS-compliant language. In addition, we can use objects,
classes, and components created in other CLS-compliant languages in C#.NET. The use of
the CLS ensures complete interoperability among applications, regardless of the languages
used to create them.

CONSTRUCTORS AND DESTRUCTORS:

Constructors are used to initialize objects, whereas destructors are used to destroy them. In
other words, destructors are used to release the resources allocated to the object. In C#.NET
this is done with the Finalize method, which is used to complete the tasks that must be
performed when an object is destroyed. The Finalize method is called automatically when
an object is destroyed and can be called only from the class it belongs to or from derived
classes.
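A minimal sketch of the pattern (in C# the finalizer is written with the ~ClassName syntax,
which the compiler maps onto Object.Finalize):

using System;

public class Connection
{
    // Constructor: initializes the object when it is created.
    public Connection()
    {
        Console.WriteLine("Connection opened.");
    }

    // Destructor (finalizer): invoked automatically when the garbage
    // collector destroys the object, releasing its resources.
    ~Connection()
    {
        Console.WriteLine("Connection resources released.");
    }
}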

GARBAGE COLLECTION

Garbage Collection is another new feature in C#.NET. The .NET Framework monitors
allocated resources, such as objects and variables. In addition, the .NET Framework
automatically releases memory for reuse by destroying objects that are no longer in use.

In C#.NET, the garbage collector checks for the objects that are not currently in use by
applications. When the garbage collector comes across an object that is marked for garbage
collection, it releases the memory occupied by the object.

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures


with the same name, where each procedure has a different set of arguments. Besides using
overloading for procedures, we can use it for constructors and properties in a class.
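A minimal illustration:

public class Calculator
{
    // Three procedures share one name but take different argument sets.
    public int Add(int a, int b)          { return a + b; }
    public int Add(int a, int b, int c)   { return a + b + c; }
    public double Add(double a, double b) { return a + b; }
}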

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can
handle multiple tasks simultaneously; we can use multithreading to decrease the time taken
by an application to respond to user interaction.
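A minimal sketch using System.Threading (illustrative only; the project's own pages do not
spawn threads):

using System;
using System.Threading;

public static class MultithreadingDemo
{
    public static void Main()
    {
        // The worker thread handles a task while the main thread stays responsive.
        Thread worker = new Thread(new ThreadStart(DoWork));
        worker.Start();

        Console.WriteLine("Main thread continues without waiting.");
        worker.Join(); // wait for the worker to finish before exiting
    }

    static void DoWork()
    {
        Console.WriteLine("Background task running...");
    }
}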

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and remove
errors at runtime. In C#.NET, we use Try…Catch…Finally statements to create exception
handlers. Using Try…Catch…Finally statements, we can create robust and effective
exception handlers to improve the performance of our application.
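For example, a minimal handler:

using System;

public static class ExceptionDemo
{
    public static int ParseCount(string text)
    {
        try
        {
            return int.Parse(text);   // may throw FormatException
        }
        catch (FormatException ex)
        {
            Console.WriteLine("Invalid input: " + ex.Message);
            return 0;                 // recover with a default value
        }
        finally
        {
            Console.WriteLine("Validation finished."); // always executes
        }
    }
}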

THE .NET FRAMEWORK


The .NET Framework is a new computing platform that simplifies application development
in the highly distributed environment of the Internet.

OBJECTIVES OF .NET FRAMEWORK

1. To provide a consistent object-oriented programming environment whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment and
guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of applications, such as Windows-based applications and Web-
based applications.

FEATURES OF SQL-SERVER

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server
2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis
Services. Analysis Services also includes a new data mining component. The Repository
component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000
Meta Data Services. References to the component now use the term Meta Data Services. The
term repository is used only in reference to the repository engine within Meta Data Services.
A SQL-SERVER database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two ways:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. There we can
specify what kind of data each field will hold.

Datasheet View
To add, edit, or analyze the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question that has to be asked of the data. Access gathers data that answers the
question from one or more tables. The data that makes up the answer is either a dynaset (if
you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the
latest information in the dynaset. Access either displays the dynaset or snapshot for us to
view, or performs an action on it, such as deleting or updating.

AJAX:

ASP.NET Ajax marks Microsoft's foray into the ever-growing Ajax framework market.
Simply put, this new environment for building Web applications puts Ajax at the front and
center of the .NET Framework.
Flow Chart

[Figure: flow chart. The server logs in and uploads files; the client logs in and sends a file
request to the server. The membership provider then checks the request: if membership
holds, the files are sent via the router and a report is sent; otherwise the file is not
transferred. The router checks the MS replica; on failure, membership is activated once
again through Byzantine fault tolerance and the file request is sent to the server again,
after which the file transfer completes.]
Activity Diagram

[Figure: activity diagram. The server and the client both log in; the client requests the files
to be sent and the server receives the request from the client. The server checks the
membership service and activates membership; the client receives the activation form and
the public key. The server then sends the files to the client via the router.]
Class Diagram

[Figure: class diagram with three classes. Server: upload files, receive request, check
membership service, activate membership, send public key, send files; operations send(),
upload(), send pk(), send files(). Client: request send, activate membership, receive public
key, download files; operation download(). Router: receive files, send file, check MS
replica, Byzantine fault tolerance; operations send(), MS replica().]
Use Case Diagram

[Figure: use case diagram.]

Sequence Diagram

[Figure: sequence diagram.]
DATA FLOW DIAGRAM

[Figure: data flow diagram, mirroring the flow chart above: client login and file request,
the membership-provider check, file transfer via the router, the MS replica check, and
Byzantine fault tolerance.]
TABLE DESIGN:

[Figures: table designs for Client Registration, Cloud Upload, Store File, and Client View.]

NORMALIZATION

It is a process of converting a relation to a standard form. The process is used to handle the
problems that can arise due to data redundancy, i.e., repetition of data in the database, to
maintain data integrity, and to handle the problems that can arise due to insertion, updating,
and deletion anomalies.

Decomposing is the process of splitting relations into multiple relations to eliminate
anomalies and maintain data integrity. To do this we use normal forms, or rules for
structuring relations.

Insertion anomaly: Inability to add data to the database due to the absence of other data.

Deletion anomaly: Unintended loss of data due to the deletion of other data.

Update anomaly: Data inconsistency resulting from data redundancy and partial update.

Normal Forms: These are the rules for structuring relations that eliminate anomalies.
FIRST NORMAL FORM:
A relation is said to be in first normal form if the values in the relation are atomic for every
attribute in the relation. By this we mean simply that no attribute value can be a set of values
or, as it is sometimes expressed, a repeating group.
Column Name    Data Type      Default Value
ipAddress      varchar(20)    Primary key
File           varchar(2000)
SECOND NORMAL FORM:

A relation is said to be in second normal form if it is in first normal form and it satisfies any
one of the following rules:

1) The primary key is not a composite primary key.

2) No non-key attributes are present.

3) Every non-key attribute is fully functionally dependent on the full set of the primary key.

Column Name    Data Type
Node1          varchar(10)
Node2          varchar(10)
Node3          varchar(10)
Node4          varchar(10)
Node5          varchar(10)

INPUT DESIGN
The input design is the link between the information system and the user. It comprises
developing specifications and procedures for data preparation, and the steps necessary to put
transaction data into a usable form for processing, which can be achieved by having the
computer read data from a written or printed document or by having people key the data
directly into the system. The design of input focuses on controlling the amount of input
required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process
simple. The input is designed in such a way that it provides security and ease of use while
retaining privacy. Input design considered the following things:

➢ What data should be given as input?

➢ How should the data be arranged or coded?

➢ The dialog to guide the operating personnel in providing input.

➢ Methods for preparing input validations and steps to follow when errors occur.

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process
and to show the management the correct direction for getting correct information from the
computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of
data. The goal of designing input is to make data entry easier and free from errors. The data
entry screen is designed in such a way that all the data manipulations can be performed. It
also provides record viewing facilities.

3. When the data is entered it will be checked for its validity. Data can be entered with the
help of screens. Appropriate messages are provided as and when needed so that the user is
never left confused. Thus, the objective of input design is to create an input layout that is
easy to follow.

OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs. In output design it is determined how the information
is to be displayed for immediate need as well as the hard copy output. It is the most important
and direct source of information to the user. Efficient and intelligent output design improves
the system's relationship with the user and helps in decision-making.

1. Designing computer output should proceed in an organized, well thought out manner; the
right output must be developed while ensuring that each output element is designed so that
people will find the system easy to use effectively. When analysts design computer output,
they should identify the specific output that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create documents, reports, or other formats that contain information produced by the
system.

The output form of an information system should accomplish one or more of the following
objectives:

❖ Convey information about past activities, current status, or projections of the future.

❖ Signal important events, opportunities, problems, or warnings.

❖ Trigger an action.

❖ Confirm an action.

SYSTEM STUDY

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth
with a very general plan for the project and some cost estimates. During system analysis the
feasibility study of the proposed system is carried out. This is to ensure that the proposed
system is not a burden to the company. For feasibility analysis, some understanding of the
major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are:

• ECONOMICAL FEASIBILITY
• TECHNICAL FEASIBILITY
• SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and
development of the system is limited, so the expenditures must be justified. The developed
system is well within the budget, and this was achieved because most of the technologies
used are freely available; only the customized products had to be purchased.

The technical and social feasibility of the system have already been discussed in Section 2.1
above.
6. SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every
conceivable fault or weakness in a work product. It provides a way to check the functionality
of components, sub-assemblies, assemblies, and/or the finished product. It is the process of
exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests, and each test type addresses a specific testing requirement.

TYPES OF TESTS

Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of
the application; it is done after the completion of an individual unit and before integration.
This is structural testing that relies on knowledge of the unit's construction and is invasive.
Unit tests perform basic tests at the component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path of a
business process performs accurately to the documented specifications and contains clearly
defined inputs and expected results.

Integration testing

Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components
were individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing
the problems that arise from the combination of components.

Functional test

Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions,
or special test cases. In addition, systematic coverage pertaining to identifying business
process flows, data fields, predefined processes, and successive processes must be
considered for testing. Before functional testing is complete, additional tests are identified
and the effective value of current tests is determined.

System Test
System testing ensures that the entire integrated software system meets requirements. It tests
a configuration to ensure known and predictable results. An example of system testing is the
configuration-oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.
White Box Testing

White box testing is testing in which the software tester has knowledge of the inner
workings, structure, and language of the software, or at least its purpose. It is used to test
areas that cannot be reached from a black box level.

Black Box Testing

Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box; you cannot "see" into it. The test provides inputs and responds to outputs without
considering how the software works.

6.1 Unit Testing:


Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted
as two distinct phases.

Test strategy and approach

Field testing will be performed manually, and functional tests will be written in detail.

Test objectives

• All field entries must work properly.

• Pages must be activated from the identified link.

• The entry screen, messages and responses must not be delayed.

Features to be tested

• Verify that the entries are of the correct format


• No duplicate entries should be allowed

• All links should take the user to the correct page.

Integration Testing
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company
level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

7. CONCLUSIONS AND FUTURE SCOPE

Conclusion
This paper presents a complete solution for building large-scale, long-lived systems that must
preserve critical state in spite of malicious attacks and Byzantine failures. We present a
storage service with these characteristics called dBQS, and a membership service that is part
of the overall system design but can be reused by any Byzantine-fault-tolerant large-scale
system.

The membership service tracks the current system membership in a way that is mostly
automatic, to avoid human configuration errors. It is resilient to arbitrary faults of the nodes
that implement it, and is reconfigurable, allowing us to change the set of nodes that
implement the MS when old nodes fail, or periodically to avoid a targeted attack. When
membership changes happen, the replicated service has work to do: responsibility must
shift to the new replica group, and state transfer must take place from old replicas to new
ones, yet the system must still provide the same semantics as in a static system. We show
how this is accomplished in dBQS. We implemented the membership service and dBQS.
Our experiments show that our approach is practical and could be used in a real
deployment: the MS can manage a very large number of servers, and reconfigurations have
little impact on the performance of the replicated service.

Future scope of application

This application can easily be implemented in various situations. We can add new features
as and when we require them. Reusability is possible as and when required in this
application. There is flexibility in all the modules.

The software is extendable in ways that its original developers may not expect. The
following principles enhance extensibility: hide data structures, avoid traversing multiple
links or methods, avoid case statements on object type, and distinguish public and private
operations.

Reusability:

Reusability is possible as and when required in this application; we can update it in the next
version. Reusable software reduces design, coding, and testing cost by amortizing effort over
several designs. Reducing the amount of code also simplifies understanding, which increases
the likelihood that the code is correct. We follow both types of reusability: sharing of newly
written code within a project, and reuse of previously written code on new projects.
SOURCE CODE

REGISTER:

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.IO;
using System.Net;
using System.Net.Mail;

public partial class reg : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
    string status = "wait";

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    // Insert the registration details into the creg table.
    // (Note: building SQL by string concatenation is vulnerable to SQL
    // injection; the parameterized style used in the upload page is safer.)
    protected void Button1_Click(object sender, EventArgs e)
    {
        con.Open();
        SqlCommand cmd = new SqlCommand("insert into creg values('" + TextBox1.Text + "','" +
            TextBox7.Text + "','" + TextBox2.Text + "','" + TextBox8.Text + "','" + TextBox9.Text +
            "','" + DropDownList1.Text + "','" + TextBox3.Text + "','" + TextBox4.Text + "','" +
            TextBox5.Text + "','" + TextBox6.Text + "','" + status + "')", con);
        cmd.ExecuteNonQuery();
        con.Close();

        // Clear the form after a successful insert.
        TextBox1.Text = "";
        TextBox7.Text = "";
        TextBox2.Text = "";
        TextBox8.Text = "";
        TextBox9.Text = "";
        DropDownList1.Text = "";
        TextBox3.Text = "";
        TextBox4.Text = "";
        TextBox5.Text = "";
        TextBox6.Text = "";
        RegisterStartupScript("msg", "<script>alert('Registered Successfully...!')</script>");
        Response.Redirect("clientlogin.aspx");
    }

    protected void ImageButton1_Click1(object sender, ImageClickEventArgs e)
    {
    }
}

CLIENT LOGIN PAGE:

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.IO;
using System.Net;
using System.Net.Mail;

public partial class clientlogin : System.Web.UI.Page
{
    SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void ImageButton1_Click(object sender, ImageClickEventArgs e)
    {
        // Response.Redirect(".aspx");
    }

    // Look up the user name and forward the client to the home page.
    protected void Button1_Click(object sender, EventArgs e)
    {
        con.Open();
        SqlCommand cmd = new SqlCommand("select UserName from creg where name = '" +
            TextBox1.Text + "'", con);
        //SqlCommand cmd1 = new SqlCommand("select status from userdetail where Username = '" + TextBox2.Text + "'", con);
        string uname = (string)cmd.ExecuteScalar();
        //string pass = (string)cmd1.ExecuteScalar();
        con.Close();

        //Session["username"] = TextBox1.Text;
        Response.Redirect("clienthome.aspx");
    }

    // Clear the login fields.
    protected void Button2_Click(object sender, EventArgs e)
    {
        TextBox1.Text = "";
        TextBox2.Text = "";
    }

    protected void Button3_Click(object sender, EventArgs e)
    {
        //Response.Redirect("reg.aspx");
    }

    protected void ImageButton1_Click1(object sender, ImageClickEventArgs e)
    {
        Response.Redirect("reg.aspx");
    }
}

UPLOAD FILE:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.IO;
using System.Net;
using System.Net.Mail;

public partial class uploadfile : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void TextBox1_TextChanged(object sender, EventArgs e)
    {
    }

    protected void TextBox2_TextChanged(object sender, EventArgs e)
    {
    }

    // Store the uploaded file in the storefile table, unless a document
    // with the same file name already exists.
    protected void Button1_Click(object sender, EventArgs e)
    {
        using (SqlConnection Con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]))
        {
            try
            {
                Con.Open();
                // Check for an existing document with this file name.
                SqlCommand cmd1 = new SqlCommand("select * from storefile where filename = @filename", Con);
                cmd1.Parameters.Add("@filename", SqlDbType.VarChar, 50).Value = TextBox3.Text;
                SqlDataReader dr1 = cmd1.ExecuteReader();
                if (dr1.Read())
                {
                    //lblmsg.Visible = true;
                    //lblmsg.Text = "Document already exist";
                }
                else
                {
                    dr1.Close();
                    string filename = Path.GetFileName(FileUpload1.PostedFile.FileName);
                    if (filename == "")
                    {
                        // lblmsg.Visible = true;
                        // lblmsg.Text = "select the file name";
                    }
                    else
                    {
                        // Read the uploaded file into a byte array.
                        Stream str = FileUpload1.PostedFile.InputStream;
                        BinaryReader br = new BinaryReader(str);
                        Byte[] size = br.ReadBytes((int)str.Length);
                        using (SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]))
                        {
                            using (SqlCommand cmd = new SqlCommand())
                            {
                                // Insert the file record with parameterized values.
                                cmd.CommandText = "insert into storefile(id,title,filename,providerlocation,filetype,filedata) values(@id,@title,@filename,@providerlocation,@filetype,@filedata)";
                                cmd.Parameters.AddWithValue("@id", TextBox3.Text);
                                cmd.Parameters.AddWithValue("@title", TextBox1.Text);
                                cmd.Parameters.AddWithValue("@filename", filename);
                                cmd.Parameters.AddWithValue("@providerlocation", TextBox2.Text);
                                cmd.Parameters.AddWithValue("@filetype", "application/word");
                                cmd.Parameters.AddWithValue("@filedata", size);
                                cmd.Connection = con;
                                con.Open();
                                cmd.ExecuteNonQuery();
                                con.Close();
                                Label6.Visible = true;
                                Label6.Text = "File Uploaded Successfully";
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                Label6.Visible = true;
                Label6.Text = "Error --> " + ex.Message;
            }
            finally
            {
                Con.Close();
            }
        }
    }

    protected void LinkButton1_Click(object sender, EventArgs e)
    {
        Response.Redirect("download.aspx");
    }
}

SEARCH PAGE :

using System;
using System.Collections.Generic;
using System.Linq;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.IO;
using System.Net;
using System.Net.Mail;

public partial class searchs : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void TextBox1_TextChanged(object sender, EventArgs e)
    {
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
        if (TextBox1.Text != null)
        {
            try
            {
                SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
                // Fetch the files whose title matches the search text;
                // the parameter avoids SQL injection.
                SqlDataAdapter da = new SqlDataAdapter("select id,title,filename,providerlocation,filetype,filedata from storefile where title = @title", con);
                da.SelectCommand.Parameters.AddWithValue("@title", TextBox1.Text);
                DataSet ds = new DataSet();
                da.Fill(ds);

                if (ds.Tables[0].Rows.Count <= 0)
                {
                    // Show the empty-result message.
                    Label2.Visible = true;
                    Label2.Text = "NO RECORDS AVAILABLE";
                }
                GridView1.DataSource = ds;
                GridView1.DataBind();
            }
            catch (Exception ex)
            {
                Label2.Visible = true;
                Label2.Text = "Error --> " + ex.Message;
            }
        }
    }

    protected void download_Click(object sender, EventArgs e)
    {
        // Identify the clicked row and read the file id from the grid's data
        // keys (assumes GridView1 declares DataKeyNames="id").
        LinkButton lnkbtn = sender as LinkButton;
        GridViewRow gvrow = lnkbtn.NamingContainer as GridViewRow;
        int fileid = Convert.ToInt32(GridView1.DataKeys[gvrow.RowIndex].Value.ToString());
        //string name, type;
        using (SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]))
        {
            using (SqlCommand cmd = new SqlCommand())
            {
                cmd.CommandText = "select id,title,filename,providerlocation,filetype,filedata from storefile where id=@id";
                cmd.Parameters.AddWithValue("@id", fileid);
                cmd.Connection = con;
                con.Open();
                SqlDataReader dr = cmd.ExecuteReader();
                if (dr.Read())
                {
                    // Stream the stored bytes back as a file attachment.
                    Response.ContentType = dr["FileType"].ToString();
                    Response.AddHeader("Content-Disposition", "attachment;filename=\"" + dr["FileName"] + "\"");
                    Response.BinaryWrite((byte[])dr["FileData"]);
                    Response.End();
                }
            }
        }
    }

    protected void GridView1_SelectedIndexChanged(object sender, EventArgs e)
    {
    }
}

DOWNLOAD FILE :

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.SqlClient;
using System.Collections.Generic;
using System.Data.OleDb;
using System.Web.SessionState;

public partial class download : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
    }

    protected void download_Click(object sender, EventArgs e)
    {
        // Identify the clicked row and read the file id from the grid's data keys.
        LinkButton lnkbtn = sender as LinkButton;
        GridViewRow gvrow = lnkbtn.NamingContainer as GridViewRow;
        int fileid = Convert.ToInt32(GridView1.DataKeys[gvrow.RowIndex].Value.ToString());
        //string name, type;
        using (SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]))
        {
            using (SqlCommand cmd = new SqlCommand())
            {
                cmd.CommandText = "select id,title,filename,providerlocation,filetype,filedata from storefile where id=@id";
                cmd.Parameters.AddWithValue("@id", fileid);
                cmd.Connection = con;
                con.Open();
                SqlDataReader dr = cmd.ExecuteReader();
                if (dr.Read())
                {
                    // Stream the stored bytes back as a file attachment.
                    Response.ContentType = dr["FileType"].ToString();
                    Response.AddHeader("Content-Disposition", "attachment;filename=\"" + dr["FileName"] + "\"");
                    Response.BinaryWrite((byte[])dr["FileData"]);
                    Response.End();
                }
            }
        }
    }
}

GRAPH PAGE :

using System;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Web.UI.DataVisualization.Charting;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // Populate the drop-down with the distinct client names once.
            string query = "select distinct name from clientview";
            DataTable dt = GetData(query);
            DropDownList1.DataSource = dt;
            DropDownList1.DataTextField = "name";
            DropDownList1.DataValueField = "name";
            DropDownList1.DataBind();
            DropDownList1.Items.Insert(0, new ListItem("Select", ""));
        }
    }

    protected void Chart1_Load(object sender, EventArgs e)
    {
    }

    protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
    {
        Chart1.Visible = DropDownList1.SelectedValue != "";
        // Count the views per client for the selected name; the value comes
        // from the drop-down list rather than free text.
        string query = string.Format(
            "select name, count(userid) from clientview where name = '{0}' group by name",
            DropDownList1.SelectedValue);
        DataTable dt = GetData(query);
        string[] x = new string[dt.Rows.Count];
        int[] y = new int[dt.Rows.Count];
        for (int i = 0; i < dt.Rows.Count; i++)
        {
            x[i] = dt.Rows[i][0].ToString();
            y[i] = Convert.ToInt32(dt.Rows[i][1]);
        }
        // Bind the name/count pairs to a 3D column chart.
        Chart1.Series[0].Points.DataBindXY(x, y);
        Chart1.Series[0].ChartType = SeriesChartType.Column;
        Chart1.ChartAreas["ChartArea1"].Area3DStyle.Enable3D = true;
        Chart1.Legends[0].Enabled = true;
    }

    private static DataTable GetData(string query)
    {
        // Run the query and return the result as an in-memory table;
        // SqlDataAdapter.Fill opens and closes the connection itself.
        DataTable dt = new DataTable();
        SqlCommand cmd = new SqlCommand(query);
        SqlConnection con = new SqlConnection(ConfigurationManager.AppSettings["ConnectionString"]);
        SqlDataAdapter sda = new SqlDataAdapter();
        cmd.CommandType = CommandType.Text;
        cmd.Connection = con;
        sda.SelectCommand = cmd;
        sda.Fill(dt);
        return dt;
    }

    protected void SqlDataSource1_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
    {
    }

    protected void SqlDataSource2_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
    {
    }
}
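The Chart control used above ships with .NET Framework 4.0 but still has to be registered in web.config before the page will compile. A typical registration looks roughly like the following; treat it as a sketch, since the exact Version attribute depends on the framework the project targets.

<system.web>
  <pages>
    <controls>
      <!-- Registration for the System.Web.UI.DataVisualization.Charting controls
           (assumes .NET Framework 4.0; adjust Version for other targets). -->
      <add tagPrefix="asp"
           namespace="System.Web.UI.DataVisualization.Charting"
           assembly="System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </controls>
  </pages>
</system.web>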

8. FORMS AND REPORTS


BIBLIOGRAPHY

Good teachers are worth more than a thousand books; we have them in our department.

References Made From:

1. User Interfaces in C#: Windows Forms and Custom Controls, by Matthew MacDonald.

2. Applied Microsoft .NET Framework Programming (Pro-Developer), by Jeffrey Richter.

3. Practical .NET2 and C#2: Harness the Platform, the Language, and the Framework, by Patrick Smacchia.

4. Data Communications and Networking, by Behrouz A. Forouzan.

5. Computer Networking: A Top-Down Approach, by James F. Kurose.

6. Operating System Concepts, by Abraham Silberschatz.

7. M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "Above the Clouds: A Berkeley View of Cloud Computing," University of California, Berkeley, Tech. Rep. UCB/EECS-2009-28, Feb. 2009.

8. "The Apache Cassandra project," http://cassandra.apache.org/.