
Achieving Secure, Scalable, and Fine-grained

Data Access Control in Cloud Computing

Abstract:
Cloud computing is an emerging computing paradigm in which
resources of the computing infrastructure are provided as services over the
Internet. This paper addresses data security and access control
when users outsource sensitive data for sharing on cloud servers. It
tackles this challenging open issue by, on one hand, defining and enforcing
access policies based on data attributes, and, on the other hand, allowing the data
owner to delegate most of the computation tasks involved in fine-grained data
access control to untrusted cloud servers without disclosing the underlying data
contents. The proposed scheme enables the data owner to delegate tasks of data
file re-encryption and user secret key update to cloud servers without disclosing
data contents or user access privilege information. We achieve this goal by
exploiting and uniquely combining techniques of attribute-based encryption
(ABE), proxy re-encryption, and lazy re-encryption. The proposed scheme also has
salient properties of user access privilege confidentiality and user secret key
accountability, and achieves fine-grainedness, scalability, and data confidentiality
for data access control in cloud computing. Extensive analysis shows that the
proposed scheme is highly efficient and provably secure under existing security
models.

Advantages
• Low initial capital investment
• Shorter start-up time for new services
• Lower maintenance and operation costs
• Higher utilization through virtualization
• Easier disaster recovery

Existing System:
Existing solutions apply cryptographic methods, disclosing
data decryption keys only to authorized users. These solutions inevitably
introduce a heavy computation overhead on the data owner for key distribution
and data management when fine-grained data access control is desired, and thus
do not scale well.

Proposed System:
In order to achieve secure, scalable and fine-grained access control
on outsourced data in the cloud, we utilize and uniquely combine the following
three advanced cryptographic techniques:

• Key Policy Attribute-Based Encryption (KP-ABE)
• Proxy Re-Encryption (PRE)
• Lazy re-encryption

Module Description:
1) Key Policy Attribute-Based Encryption (KP-ABE):
KP-ABE is a public key cryptography primitive for one-to-many
communications. In KP-ABE, data are associated with attributes, for each of which
a public key component is defined. The user secret key is defined to reflect the access
structure, so that the user is able to decrypt a ciphertext if and only if the data
attributes satisfy his access structure. A KP-ABE scheme is composed of four
algorithms, which can be defined as follows:
• Setup Attributes
• Encryption
• Secret key generation
• Decryption

Setup Attributes:
This algorithm defines the attribute universe and generates the system
public key and master key. The attribute universe, public key, and master
key are denoted as
Attributes: U = {1, 2, ..., N}
Public key: PK = (Y, T1, T2, ..., TN)
Master key: MK = (y, t1, t2, ..., tN)

Encryption:
This algorithm takes a message M, the public key PK, and a set of
attributes I as input. It outputs the ciphertext E with the following format:
E = (I, Ẽ, {Ei}i∈I)
where Ẽ = M·Y^s, Ei = Ti^s, and s is a random number.

Secret key generation:


This algorithm takes as input an access tree T, the master key MK, and
the public key PK. It outputs a user secret key SK as follows:
SK = {ski}
with one component ski for each leaf node i of the access tree T.

Decryption:
This algorithm takes as input the ciphertext E encrypted under the
attribute set I, the user’s secret key SK for the access tree T, and the public key PK.
It outputs the message M if and only if I satisfies T.
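
To make the division of labor among these four algorithms concrete, here is a minimal Java
interface sketch; every type and method name below is an illustrative placeholder, not the API
of an actual KP-ABE library:

import java.util.Set;

// Placeholder types standing in for the scheme's mathematical objects.
final class SystemPublicKey {}   // PK = (Y, T1, ..., TN)
final class MasterKey {}         // MK = (y, t1, ..., tN)
final class UserSecretKey {}     // SK = {sk_i}, one component per leaf of T
final class AbeCiphertext {}     // E = (I, ~E, {Ei})
final class AccessTree {}        // the user's access structure T
final class SystemKeys { SystemPublicKey pk; MasterKey mk; }

interface KpAbe {
    // Setup Attributes: fix the attribute universe U = {1, ..., N}; output PK and MK.
    SystemKeys setup(int numAttributes);
    // Encryption: encrypt message M under PK and an attribute set I.
    AbeCiphertext encrypt(byte[] message, SystemPublicKey pk, Set<Integer> attributes);
    // Secret key generation: bind a user key to an access tree T.
    UserSecretKey keyGen(AccessTree t, MasterKey mk, SystemPublicKey pk);
    // Decryption: recovers M iff the ciphertext's attribute set satisfies T.
    byte[] decrypt(AbeCiphertext e, UserSecretKey sk, SystemPublicKey pk);
}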

2) Proxy Re-Encryption (PRE):


Proxy Re-Encryption (PRE) is a cryptographic primitive in which a semi-
trusted proxy is able to convert a ciphertext encrypted under Alice’s public key
into another ciphertext that can be opened with Bob’s private key, without seeing
the underlying plaintext. A PRE scheme allows the proxy, given the proxy re-
encryption key rk_a↔b, to translate ciphertexts under public key pk_a into
ciphertexts under public key pk_b and vice versa.
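
As an illustration of the principle (a BBS98-style ElGamal construction with toy parameters,
not the exact scheme combined in this paper), the following Java program shows a proxy
translating a ciphertext from Alice to Bob without learning the plaintext:

import java.math.BigInteger;
import java.security.SecureRandom;

// Toy BBS98-style bidirectional proxy re-encryption over ElGamal.
// Tiny parameters for illustration only; a real deployment needs large primes.
public class PreDemo {
    static final BigInteger p = BigInteger.valueOf(467); // safe prime, p = 2q + 1
    static final BigInteger q = BigInteger.valueOf(233); // order of the subgroup
    static final BigInteger g = BigInteger.valueOf(4);   // generator of the order-q subgroup

    static BigInteger randomExponent(SecureRandom rnd) {
        BigInteger x;
        do { x = new BigInteger(q.bitLength(), rnd); }
        while (x.signum() <= 0 || x.compareTo(q) >= 0);
        return x;
    }

    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger a = randomExponent(rnd);              // Alice's private key
        BigInteger b = randomExponent(rnd);              // Bob's private key
        BigInteger m = BigInteger.valueOf(42);           // the plaintext

        // Encrypt under Alice's public key g^a: c = (m * g^r, (g^a)^r).
        BigInteger r = randomExponent(rnd);
        BigInteger c1 = m.multiply(g.modPow(r, p)).mod(p);
        BigInteger c2 = g.modPow(a, p).modPow(r, p);

        // The proxy holds rk = b/a mod q and transforms c2 without learning m.
        BigInteger rk = b.multiply(a.modInverse(q)).mod(q);
        BigInteger c2ForBob = c2.modPow(rk, p);          // now equals (g^b)^r

        // Bob decrypts: g^r = c2ForBob^(1/b mod q), then m = c1 / g^r.
        BigInteger gr = c2ForBob.modPow(b.modInverse(q), p);
        BigInteger recovered = c1.multiply(gr.modInverse(p)).mod(p);
        System.out.println(recovered.equals(m));         // prints true
    }
}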

3) Lazy re-encryption:
The lazy re-encryption technique allows Cloud Servers to aggregate the
computation tasks of multiple operations, such as:
• updating secret keys
• updating user attributes

System Requirements:

Hardware Requirements:

• System : Pentium IV 2.4 GHz.


• Hard Disk : 40 GB.
• Floppy Drive : 1.44 MB.
• Monitor : 15" VGA Colour.
• Mouse : Logitech.
• RAM : 512 MB.

Software Requirements:

• Operating system : - Windows XP.


• Coding Language : DOT NET
• Data Base : SQL Server 2005

Visual Data Mining of Web Navigational Data

Abstract:
The project aims at the development of integrated knowledge management tools
and techniques for communities of users in the web, that support a) the semi-
automatic organization of knowledge based on collaborative participation
mechanisms, b) a high degree of freedom to define the structure to be used for the
representation of knowledge, c) personalized retrieval and visualization of
knowledge by means of the automatic generation of documents adapted to user
needs and other runtime conditions, and d) the automatic recognition of users’
identity and characteristics, based on biometric analysis and data mining techniques.
The different modules of the environment to be developed will communicate with
each other through a shared ontology that captures the concepts and categories by
which the knowledge of a given domain is described.
The project follows the Semantic Web perspective, which proposes a better
organized and structured World Wide Web, by means of the introduction of explicit
semantic knowledge about available resources in the web, to facilitate resource
discovery, sharing and integration. The research proposed here starts from
previously developed tools and results reached by our research team, which has a
wide experience in the areas involved in this proposal. The techniques to be
developed in the current project will be validated with the development of a
corporate knowledge management application for the institution to which our team
belongs.
System Analysis:
Existing System:
Application of data analysis techniques to the World Wide Web, referred to as Web
analysis, has been the focus of several recent research projects and papers.
However, there is no established vocabulary, leading to confusion when comparing
research efforts. The term Web analysis has been used in two distinct ways. The
first, called Web content analysis in this paper is the process of information
discovery from sources across the World Wide Web. The second called Web usage
mining is the process of analysis for user browsing and access patterns. In this
paper we define Web mining and present an overview of the various researches
issues, techniques, and development efforts. We briefly describe web miner, a
system for Web usage mining, and conclude this paper by listing research issues.
Proposed System:
Existing visual web mining tools do not indicate which data mining algorithms
are used, nor do they provide effective graphical visualizations of the results obtained. Visual
Web Mining techniques can be used to determine typical navigation patterns in an
organizational web site. The process of combining Data analysis and information
visualization techniques in order to discover useful information about web usage
patterns is called visual web mining. The goal of this paper is to discuss the
development of a data analysis prototype called web patterns, which allows the user
to effectively visualize web usage patterns.
MODULES:
• Data Cleaning
• User Identification
• Session Identification
• Session Filter
• Data Summarization
Data Cleaning:
The data cleaning process removes requests for resources that are not analyzed, such as
jpg and gif images and robot requests, from the server log file; it takes the raw log data as
input and produces the cleaned log data as output. Data cleaning is the first step
performed in the Web usage mining process. Some low level data integration tasks
may also be performed at this stage, such as combining multiple logs, incorporating
referrer logs, etc. After the data cleaning, the log entries must be partitioned into
logical clusters using one or a series of transaction identification modules.
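
As a rough illustration of this cleaning step (the file name, patterns and filters below are
assumptions for the sketch; real log formats vary), the filtering could look like:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: drop image and robot requests from a raw access log.
public class LogCleaner {
    public static void main(String[] args) throws IOException {
        List<String> raw = Files.readAllLines(Paths.get("access.log"));   // assumed file name
        List<String> cleaned = raw.stream()
                .filter(l -> !l.matches("(?i).*\\.(jpg|jpeg|gif|png|css|js)(\\s|\\?).*")) // images
                .filter(l -> !l.contains("robots.txt"))                   // robot probes
                .filter(l -> !l.toLowerCase().contains("bot"))            // crude robot UA filter
                .collect(Collectors.toList());
        Files.write(Paths.get("cleaned.log"), cleaned);                   // cleaned log data
    }
}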
The goal of transaction identification is to create meaningful clusters of references
for each user. The task of identifying transactions is one of either dividing a large
transaction into multiple smaller ones or merging small transactions into fewer larger
ones. The input and output transaction formats match, so that any number of
modules can be combined in any order, as the data analyst sees fit. Once the domain-
dependent data transformation phase is completed, the resulting transaction data
must be formatted to conform to the data model of the appropriate data mining
task.
For instance, the format of the data for the association rule discovery task may be
different than the format necessary for mining sequential patterns. Finally, a query
mechanism will allow the user (analyst) to provide more control over the discovery
process by specifying various constraints. For more details, refer to the WEBMINER system.
User Identification:
The user identification process is used to identify each user from the cleaned log data.
It assigns a user id to each unique IP address. If, for the same IP address, the OS or
user agent differs, it assigns a different user id.
The behavioral data were obtained from the Web server access logs of both School
Web and the German Education Server in the form of a flat file containing one line
per page request. The requests were listed in the commonly used Extended Log
Format, which contains information on the host (IP address), the requested URL, the
date and time of the transaction, and its status or error code (e.g., 200/success,
404/file not Found 301/permanent redirect). Since log files give rise to a number of
uncertainties (e.g., Berendt & Spiliopoulou, 2000), the data needed to be cleaned
and prepared before analysis.
First, hits needed to be transformed into page impressions by filtering out, from the
Web server log files, all requests for GIF or other image files. Second, the
participants had been instructed to turn off their proxies, so their individual
computers’ current IP numbers appeared in the log files (IP numbers used in the
experiment were known). To segment the sequence of requests from one IP
number into sessions of different participants, Web pages controlling the experiment
were employed. These asked the participant to “log into” and “log out of” the
experiment. Finally, the participants were asked to empty their browser’s cache and
turn off caching prior to starting the experiment to ensure that the whole sequence
of pages seen by the user, including revisits to pages previously requested, was
recorded in the log file.
Session Filtering:
The session filtering process takes the session data as input and removes useless
sessions, i.e., sessions of length one. Thus, we obtained the full sequence of pages
visited on the School Web or German Education Server for each participant.
Intermediate visits to other servers could only be traced via the log file referrer field
if the return move was initiated by a hyperlink on the remote page or if a redirect
was employed. This was observed only seldom, because the experimental tasks did
not require searching for information outside the respective servers (if it did occur, it
was not recorded, because participants generally returned via the back button).
Measures: The following measures were used.
1. Extent of navigation. This was measured by the number of nodes and the number
of edges.
2. Search strategies. We chose the branching factor of the home page as our
indicator of how broadly a user searched, that is, of how much the overall strategy
resembled breadth-first search.
The branching factor equals the number of different new nodes, that is, branches
opened up directly after the home page. Repeat visits were not counted; that is, in a
path [home page, X, Y, home page, Y], only X counts toward the branching factor,
since this back-and-forth movement is interpreted as landmark use rather than as
the examination of new ways of answering the question. In contrast, a depth-first
navigation strategy explores a path that leads the user away from the start page
until the end of that path is reached, or at least until it becomes obvious that this
will not lead to the goal. We considered long paths from the home page to be
indicative of such a strategy and chose the average depth of exploration as our
measure of a depth-first strategy: the number of forward moves divided by the
number of branches emanating from the home page.
Session Identification:
The session identification process takes the user identity file as input and, for each
user id, extracts the pages accessed. If the time difference between subsequent page
accesses exceeds the threshold time, a new session is opened; otherwise the access is
added to the current session.
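
A compact Java sketch of this timeout rule, for one user's time-ordered accesses (the
30-minute threshold is an assumed value; the module would use whatever threshold is
configured):

import java.util.ArrayList;
import java.util.List;

// Sketch: split one user's time-ordered page accesses into sessions.
public class Sessionizer {
    static final long THRESHOLD_MS = 30 * 60 * 1000;  // assumed 30-minute timeout

    // timestamps: page-access times for one user id, sorted ascending
    static List<List<Long>> sessions(List<Long> timestamps) {
        List<List<Long>> sessions = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        for (long t : timestamps) {
            // open a new session when the gap to the previous access is too large
            if (!current.isEmpty() && t - current.get(current.size() - 1) > THRESHOLD_MS) {
                sessions.add(current);
                current = new ArrayList<>();
            }
            current.add(t);
        }
        if (!current.isEmpty()) sessions.add(current);
        return sessions;
    }
}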
Data Summarization:
The data summarization process extracts the fields of the session data and displays
the results graphically. It displays visits/day, visits/hour, top browsers, OS,
referrers, etc.
Web usage data is collected in various ways, each mechanism collecting attributes
relevant for its purpose. There is a need to pre-process the data to make it easier to
mine for knowledge. Specifically, we believe that issues such as instrumentation and
data collection, data integration and transaction identification need to be addressed.
Clearly, improved data quality can improve the quality of any analysis on it. A
problem in the Web domain is the inherent conflict between the analysis needs of
the analysts (who want more detailed usage data collected), and the privacy needs
of users (who want as little data collected as possible).
This has led to the development of cookie files on one side and cache busting on
the other. The emerging OPS standard on collecting profile data may be a
compromise on what can and will be collected. However, it is not clear how much
compliance to this can be expected. Hence, there will be a continual need to develop
better instrumentation and data collection techniques, based on whatever is possible
and allowable at any point in time. Portions of Web usage data exist in sources as
diverse as Web server logs, referral logs, registration files, and index server logs.
Intelligent integration and correlation of information from these diverse sources can
reveal usage information which may not be evident from any one of them.
SYSTEM REQUIREMENTS:
Hardware Interfaces:
• Processor Type : Pentium IV
• Speed : 2.4 GHz
• RAM : 256 MB
• Hard Disk : 20 GB
Software Interfaces:
• Operating System : Windows 2000
• Programming Package : .Net
• Server : IIS Server
• Tools : Flowchart .NET
Fuzzy Keyword Search over Encrypted Data in
Cloud Computing

Abstract:
As Cloud Computing becomes prevalent, more and more sensitive information is
being centralized in the cloud. Although traditional searchable encryption schemes allow a user
to securely search over encrypted data through keywords and selectively retrieve files of interest,
these techniques support only exact keyword search. In this paper, for the first time we formalize
and solve the problem of effective fuzzy keyword search over encrypted cloud data while
maintaining keyword privacy. Fuzzy keyword search greatly enhances system usability by
returning the matching files when users’ search inputs exactly match the predefined keywords,
or the closest possible matching files based on keyword similarity semantics when an exact
match fails. In our solution, we exploit edit distance to quantify keyword similarity and develop
two advanced techniques for constructing fuzzy keyword sets, which achieve optimized storage
and representation overheads. We further propose a brand-new symbol-based trie-traverse
searching scheme, where a multi-way tree structure is built up using symbols transformed from
the resulting fuzzy keyword sets. Through rigorous security analysis, we show that our proposed
solution is secure and privacy-preserving, while correctly realizing the goal of fuzzy keyword
search. Extensive experimental results demonstrate the efficiency of the proposed solution.

Algorithm / Technique used:


String Matching Algorithm

Algorithm Description:
Approximate string matching algorithms can be classified into two
categories: on-line and off-line. The on-line techniques, performing search without an index, are
unacceptable for their low search efficiency, while the off-line approach, utilizing indexing
techniques, makes search dramatically faster. A variety of indexing algorithms, such as suffix trees,
metric trees and q-gram methods, have been presented. At first glance, it seems possible for
one to directly apply these string matching algorithms to the context of searchable encryption by
computing the trapdoors character by character within an alphabet. However, this trivial
construction suffers from the dictionary and statistics attacks and fails to achieve the search
privacy. An instance M of the data type string-matching is an object maintaining a pattern and
a string. It provides a collection of different algorithms for computation of the exact string
matching problem. Each function computes a list of all starting positions of occurrences of the
pattern in the string.

System Architecture:

Existing System:
This straightforward approach apparently provides fuzzy keyword search over the encrypted files
while achieving search privacy using the technique of secure trapdoors. However, this
approach has serious efficiency disadvantages. The simple enumeration method in constructing
fuzzy key-word sets would introduce large storage complexities, which greatly affect the
usability.

For example, the following lists the variants after a substitution operation on the first
character of the keyword

CASTLE: {AASTLE, BASTLE, DASTLE, ..., YASTLE, ZASTLE}.


Proposed System:

Main Modules:

1. Wildcard – Based Technique

2. Gram - Based Technique

3. Symbol – Based Trie – traverse Search Scheme

1. Wildcard – Based Technique:


In the above straightforward approach, all the variants of the keywords have to be listed
even if an operation is performed at the same position. Based on this observation, we
propose to use a wildcard to denote edit operations at the same position. The wildcard-based
fuzzy set covers all keywords within the preset edit distance and solves this problem.

For example, for the keyword CASTLE with the pre-set edit distance 1, its wildcard based fuzzy
keyword set can be constructed as

S_CASTLE,1 = {CASTLE, *CASTLE, *ASTLE, C*ASTLE, C*STLE, ..., CASTL*E, CASTL*, CASTLE*}.
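
A small Java sketch of this construction for edit distance 1, following the pattern of the set
above (a '*' inserted at each position covers insertions; a '*' replacing each character covers
substitutions and deletions):

import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: wildcard-based fuzzy keyword set for edit distance 1.
public class WildcardFuzzySet {
    static Set<String> build(String keyword) {
        Set<String> set = new LinkedHashSet<>();
        set.add(keyword);                                // the exact keyword itself
        for (int i = 0; i <= keyword.length(); i++)      // '*' inserted at position i
            set.add(keyword.substring(0, i) + "*" + keyword.substring(i));
        for (int i = 0; i < keyword.length(); i++)       // '*' replacing character i
            set.add(keyword.substring(0, i) + "*" + keyword.substring(i + 1));
        return set;
    }

    public static void main(String[] args) {
        System.out.println(build("CASTLE"));  // 14 entries for a 6-letter keyword
    }
}

For a keyword of length l this yields only 2l + 2 entries, instead of the hundreds produced by
listing every concrete one-edit variant.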

Edit Distance:

a. Substitution
b. Deletion
c. Insertion

a) Substitution : changing one character to another in a word;

b) Deletion : deleting one character from a word;

c) Insertion: inserting a single character into a word.
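
These three operations define the classic Levenshtein edit distance, which can be computed
with the standard dynamic program; a compact Java version for reference:

// Classic dynamic-programming edit distance (substitution, deletion, insertion).
public class EditDistance {
    static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;   // delete a's prefix
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;   // insert b's prefix
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++) {
                int sub = d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1);
                d[i][j] = Math.min(sub, Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1));
            }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(distance("CASTLE", "CASLE"));  // prints 1 (one deletion)
    }
}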


2. Gram – Based Technique:

Another efficient technique for constructing the fuzzy set is based on grams. The gram of a
string is a substring that can be used as a signature for efficient approximate search. While
grams have been widely used for constructing inverted lists for approximate string search, we
use grams for the matching purpose. We propose to utilize the fact that any primitive edit
operation will affect at
most one specific character of the keyword, leaving all the remaining characters untouched. In
other words, the relative order of the remaining characters after the primitive operations is
always kept the same as it is before the operations.

For example, the gram-based fuzzy set S_CASTLE,1 for the keyword CASTLE can be constructed
as

{CASTLE, CSTLE, CATLE, CASLE, CASTE, CASTL, ASTLE}.
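
Since each element of this set is simply the keyword with at most one character deleted, the
construction for edit distance 1 reduces to a loop over deletion positions; a minimal Java sketch:

import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: gram-based fuzzy set for edit distance 1 -- the keyword plus every
// string obtained by deleting exactly one character.
public class GramFuzzySet {
    static Set<String> build(String keyword) {
        Set<String> set = new LinkedHashSet<>();
        set.add(keyword);
        for (int i = 0; i < keyword.length(); i++)
            set.add(keyword.substring(0, i) + keyword.substring(i + 1));
        return set;
    }

    public static void main(String[] args) {
        System.out.println(build("CASTLE"));
        // [CASTLE, ASTLE, CSTLE, CATLE, CASLE, CASTE, CASTL]
    }
}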


3. Symbol – Based Trie – traverse Search Scheme
To enhance the search efficiency, we now propose a symbol-based trie-traverse search
scheme, where a multi-way tree is constructed for storing the fuzzy keyword set over a finite
symbol set. The key idea behind this construction is that all trapdoors sharing a common prefix
may have common nodes. The root is associated with an empty set and the symbols in a trapdoor
can be recovered in a search from the root to the leaf that ends the trapdoor. All fuzzy words in
the trie can be found by a depth-first search.
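
A bare-bones Java sketch of such a multi-way trie (symbols are modeled as chars purely for
illustration; in the scheme they would be the symbols transformed from the fuzzy keyword
trapdoors):

import java.util.HashMap;
import java.util.Map;

// Sketch: multi-way trie in which trapdoors sharing a prefix share nodes.
public class SymbolTrie {
    private final Map<Character, SymbolTrie> children = new HashMap<>();
    private boolean endsTrapdoor;   // marks the node that ends a trapdoor sequence

    void insert(String symbols) {
        SymbolTrie node = this;
        for (char c : symbols.toCharArray())
            node = node.children.computeIfAbsent(c, k -> new SymbolTrie());
        node.endsTrapdoor = true;
    }

    boolean search(String symbols) {
        SymbolTrie node = this;
        for (char c : symbols.toCharArray()) {
            node = node.children.get(c);
            if (node == null) return false;   // prefix not present in the trie
        }
        return node.endsTrapdoor;
    }
}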

In this section, we consider a natural extension from the previous single-user setting to a
multi-user setting, where a data owner stores a file collection on the cloud server and allows an
arbitrary group of users to search over his file collection.
System Requirements:

Hardware Requirements:

• System : Pentium IV 2.4 GHz.


• Hard Disk : 40 GB.
• Floppy Drive : 1.44 MB.
• Monitor : 15" VGA Colour.
• Mouse : Logitech.
• RAM : 512 MB.

Software Requirements:

• Operating system : - Windows XP.


• Coding Language : DOT NET
• Data Base : SQL Server 2005
Conclusion:
1. In this paper, for the first time we formalize and solve the problem of supporting efficient yet
privacy-preserving fuzzy search for achieving effective utilization of remotely stored encrypted
data in Cloud Computing.

2. We design two advanced techniques (i.e., wildcard-based and gram- based techniques) to
construct the storage-efficient fuzzy keyword sets by exploiting two significant observations on
the similarity metric of edit distance.

3. Based on the constructed fuzzy keyword sets, we further propose a brand-new symbol-based
trie-traverse searching scheme, where a multi-way tree structure is built up using symbols
transformed from the resulting fuzzy keyword sets.

4. Through rigorous security analysis, we show that our proposed solution is secure and privacy-
preserving, while correctly realizing the goal of fuzzy keyword search. Extensive experimental
results demonstrate the efficiency of our solution.
Group Device Pairing based Secure Sensor
Association and Key Management for Body Area
Networks

1.ABSTRACT :

Body Area Networks (BANs) are a key enabling technology
in E-healthcare, such as remote health monitoring. An important security
issue during the bootstrap phase of a BAN is to securely associate a group of
sensor nodes with a patient and generate the necessary secret keys to protect
the subsequent wireless communications. Due to the ad hoc nature of the
BAN and the extreme resource constraints of sensor devices, providing secure,
fast, efficient and user-friendly secure sensor association is a challenging
task. In this paper, we propose a lightweight scheme for secure sensor
association and key management in BAN. A group of sensor nodes, having
no prior shared secrets before they meet, establish initial trust through
group device pairing (GDP), which is an authenticated group key
agreement protocol where the legitimacy of each member node can be
visually verified by a human. Various kinds of secret keys can be generated
on demand after deployment. The GDP supports batch deployment of
sensor nodes to save setup time, does not rely on any additional hardware
devices, and is mostly based on symmetric key cryptography, while
allowing batch node addition and revocation. We implemented GDP on a
sensor network testbed and evaluated its performance. Experimental
results show that GDP indeed achieves the expected design goals.
2. EXISTING SYSTEM :

Wireless body area networks (BANs) have emerged as an enabling
technology for E-healthcare systems, which will revolutionize the way of
hospitalization. A BAN is composed of small
wearable or implantable sensor nodes that are placed in, on or around a
patient’s body, which are capable of sensing, storing, processing and
transmitting data via wireless communications. In addition, a controller (a
hand-held device like a PDA or smart phone) is usually associated with the
same patient, which collects, processes,

and transmits the sensor data to the upper tier of the network for healthcare
records. In contrast to conventional sensor networks, a BAN deals with
more important medical information which has more stringent requirements
for security. Especially, secure BAN bootstrapping is essential since it
secures the very first step. In this paper, we focus on the secure sensor
association problem during BAN bootstrapping (before the BAN is actually
deployed).

A group of BAN devices
should be correctly and securely associated with an intended patient. In
particular, the sensor nodes must authenticate to each other and form a
group with the controller. Secret keys which only belong to the intended
group are generated, so as to protect the subsequent communications.
Since wireless communication is imperceptible to humans, during this
process it is desirable to let a user physically make sure that the devices
ultimately forming a group include, and only include, the intended devices
that s/he wants to associate (group demonstrative identification). Since the
time spent in BAN bootstrapping is a critical concern in many applications
(e.g., in EMS where 5 minutes may result in a difference between life and
death), the protocol must be fast while being user-friendly, i.e., involving
less human interactions. Moreover, overhead is another issue since the
medical sensor nodes are extremely resource-constrained.

3. PROPOSED SYSTEM :

We propose a novel protocol, group device pairing (GDP), for secure
sensor association and key management in BAN. A group of nodes and a
controller that may have never met before and share no pre-shared secrets
form a group securely to associate with the correct patient. For each
subgroup, GDP achieves authenticated group key agreement by simultaneously
and manually comparing the LED blinking patterns on all nodes, which can be
done within 30 seconds with enough security strength in practical
applications. GDP helps the user of a BAN to
visually make sure that the BAN consists only of those nodes that s/he
wants to associate with the patient.

We propose a novel scheme for secure sensor
association and key management in BAN. First, we put forward GDP, which
associates a group of BAN devices with a patient. GDP leverages device
pairing and group key agreement in a unique way, in that only one
simultaneous comparison of synchronous LED blinking sequences is
required

for a batch of at most 10 nodes, which lasts less than 30 seconds. GDP is
fast, efficient, user-friendly, and also error-proof. Second, GDP enables
efficient key management after network deployment. Multiple types of keys
can be derived on-demand based on the group key. Also, dynamic
operations, such as regular key updates, batch node addition and
revocation are supported naturally by GDP. Our scheme is mostly based
on symmetric key cryptography (SKC), thus having low communication and
computation overhead. Third, we implement GDP on a 10-node sensor
network testbed to evaluate its performance.
Experimental results show that group sensor association can be done
within 30 seconds with low overhead, and is intuitive-to-use.
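
As one plausible illustration of SKC-based on-demand key derivation from the GDP group key
(our sketch; the paper's actual derivation may differ), each key type can be derived by an
HMAC over a distinct label:

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: derive per-purpose keys from the GDP group key with HMAC-SHA256.
// The label strings are illustrative assumptions, not the paper's construction.
public class BanKeyDerivation {
    static byte[] deriveKey(byte[] groupKey, String label) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(groupKey, "HmacSHA256"));
        return mac.doFinal(label.getBytes(StandardCharsets.UTF_8));  // 32-byte derived key
    }

    public static void main(String[] args) throws Exception {
        byte[] groupKey = new byte[32];               // established by GDP
        byte[] pairwise = deriveKey(groupKey, "pairwise:node-3");
        byte[] broadcast = deriveKey(groupKey, "broadcast:epoch-1");
    }
}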

4.HARDWARE REQUIREMENTS:

• System : Pentium IV 2.4 GHz.


• Hard Disk : 40 GB.
• Floppy Drive : 1.44 MB.
• Monitor : 15" VGA Colour.
• Mouse : Logitech.
• RAM : 256 MB.

5.SOFTWARE REQUIREMENTS:

• Operating System : - Windows XP Professional.


• Front End : - Asp .Net 2.0.
• Coding Language : - Visual C# .Net.
Dynamic Authentication for Cross-Realm
SOA-Based Business Processes

Abstract:
Modern distributed business applications are embedding an increasing degree of
automation and dynamism, from dynamic supply-chain management, enterprise
federations, and virtual collaborations to dynamic service interactions across
organizations. Such dynamism leads to new challenges in security and
dependability. In Service-Oriented Architecture (SOA), collaborating services may
belong to different security realms but often need to be engaged dynamically at
runtime. If a cross-realm authentication relationship cannot be generated
dynamically at runtime between heterogeneous security realms, it is technically
difficult to enable dynamic business processes through secure collaborations
between services. A potential solution to this problem is to generate a trust
relationship across security realms so that a user can use the credential in the local
security realm to obtain the credentials to access resources in a remote realm.
However, the process of generating such kinds of trust relationships between two
disjoint security realms is very complex and time consuming, which could involve a
large number of extra operations for credential conversion and require
collaborations in multiple security realms. In this paper, we propose a new cross-
realm authentication protocol for dynamic service interactions. This protocol does
not require credential conversion or establishment of authentication paths.

Algorithm / Technique used:

Diffie -Hellman Algorithm.

Algorithm Description:
Diffie-Hellman key exchange offers the best of both worlds: it uses public key techniques to
allow the exchange of a private encryption key. Let's take a look at how the protocol works, from
the perspective of Alice and Bob, two users who wish to establish secure communications.

The Diffie-Hellman key exchange algorithm is used by two parties to create a session key.
The two parties go through a 4-step process to generate the key. In order for an attacker to obtain
the key, he/she must solve the discrete logarithm problem. Here are the steps:

1: Station A or Station B selects a large, secure prime number p and a primitive root a (mod p).
Both p and a can be made public.

2: Station A chooses a secret random x with 1 <= x <= p-2, and Station B selects a secret
random y with 1 <= y <= p-2.

3: Station A sends a^x (mod p) to Station B, and Station B sends a^y (mod p) to Station A.

4: Using the messages that they each have received, they can each calculate the session key K.
Station A calculates K by K congruent to (a^y)^x (mod p), and Station B calculates K by K
congruent to (a^x)^y (mod p).
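
The four steps translate directly into code; a toy Java demonstration with small numbers (a
real deployment would use a much larger prime):

import java.math.BigInteger;
import java.security.SecureRandom;

// Toy Diffie-Hellman run following the four steps above (small numbers for clarity).
public class DiffieHellmanDemo {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(23);   // step 1: public prime p
        BigInteger a = BigInteger.valueOf(5);    // step 1: primitive root a (mod p)
        SecureRandom rnd = new SecureRandom();

        // step 2: secret random x and y with 1 <= x, y <= p - 2
        BigInteger x = new BigInteger(16, rnd).mod(p.subtract(BigInteger.valueOf(3))).add(BigInteger.ONE);
        BigInteger y = new BigInteger(16, rnd).mod(p.subtract(BigInteger.valueOf(3))).add(BigInteger.ONE);

        BigInteger msgA = a.modPow(x, p);        // step 3: A sends a^x mod p
        BigInteger msgB = a.modPow(y, p);        // step 3: B sends a^y mod p

        BigInteger keyA = msgB.modPow(x, p);     // step 4: K = (a^y)^x mod p
        BigInteger keyB = msgA.modPow(y, p);     // step 4: K = (a^x)^y mod p
        System.out.println(keyA.equals(keyB));   // prints true
    }
}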

System Architecture:
Existing System:
In Service-Oriented Architecture (SOA), collaborating services may belong to
different security realms but often need to be engaged dynamically at runtime. If a
cross-realm authentication relationship cannot be generated dynamically at runtime
between heterogeneous security realms, it is technically difficult to enable dynamic
business processes through secure collaborations between services. A potential
solution to this problem is to generate a trust relationship across security realms so
that a user can use the credential in the local security realm to obtain the
credentials to access resources in a remote realm. However, the process of
generating such kinds of trust relationships between two disjoint security realms is
very complex and time consuming, which could involve a large number of extra
operations for credential conversion and require collaborations in multiple security
realms.
Proposed System:
In this paper, we propose a new cross-realm authentication protocol for
dynamic service interactions. This protocol does not require credential conversion or
establishment of authentication paths.
Clustering and Cluster-Based Routing Protocol
for Delay-Tolerant Mobile Networks

1.ABSTRACT :

This project presents a distributed clustering scheme and proposes a
cluster-based routing protocol for Delay-Tolerant Mobile Networks (DTMNs). The
basic idea is to distributively group mobile nodes with similar mobility
patterns into a cluster, which can then interchangeably share their resources
(such as buffer space) for overhead reduction and load balancing, aiming to
achieve efficient and scalable routing in a DTMN. Due to the lack of
continuous communications among mobile nodes and possible errors in the
estimation of nodal contact probability, convergence and stability become
major challenges in distributed clustering in DTMN. To this end, an
exponentially weighted moving average (EWMA) scheme is employed for
on-line updating of the nodal contact probability, with its mean proven to converge
to the true contact probability. Based on nodal contact probabilities, a set of
functions including Sync(), Leave(), and Join() are devised for cluster
formation and gateway selection. Finally, the gateway nodes exchange
network information and perform routing. Extensive simulations are carried
out to evaluate the effectiveness and efficiency of the proposed cluster-
based routing protocol. The simulation results show that it achieves higher
delivery ratio and significantly lower overhead and end-to-end delay
compared with its non-clustering counterpart.

2. EXISTING SYSTEM :
For intermittent connectivity among mobile nodes, especially
under low nodal density and/or short radio transmission range, Delay-
Tolerant Network (DTN) technology has been introduced to mobile
wireless communications, such as ZebraNet, Shared Wireless Info-Station
(SWIM), Delay/Fault-Tolerant Mobile Sensor Network (DFT-MSN), and
mobile Internet and peer-to-peer mobile ad hoc networks. DTN is
fundamentally an opportunistic communication system, where
communication links only exist temporarily, rendering it impossible to
establish end-to-end connections for data delivery. In such networks,
routing is largely based on nodal contact probabilities (or more
sophisticated parameters based on nodal contact probabilities).

Due to the lack of continuous communications among mobile
nodes and possible errors in the estimation of nodal contact
probability, convergence and stability become major challenges in
distributed clustering in DTMN.

3.PROPOSED SYSTEM :

The main goal of this project is to introduce clustering and cluster-based
routing in DTMN. The basic idea is to let each mobile node learn unknown
and possibly random mobility parameters and join together with other
mobile nodes that have a similar mobility pattern into a cluster.

An exponentially weighted moving average (EWMA) scheme is employed
for on-line updating of the nodal contact probability, with its mean proven to
converge to the true contact probability. Based on nodal contact
probabilities, a set of functions including Sync(), Leave(), and Join() are
devised for cluster formation and gateway selection. Finally, the gateway
nodes exchange network information and perform routing. Clustering has
long been considered an effective approach to reduce network overhead
and improve scalability. Various clustering algorithms have been investigated
in the context of mobile ad hoc networks. A DTN hierarchical routing (DHR)
protocol has been proposed to improve routing scalability. DHR is based on a
deterministic mobility model, where all nodes move according to strict,
repetitive patterns, which are known by the routing and clustering
algorithms. We investigate distributed clustering and cluster-based routing
protocols for Delay-Tolerant Mobile Networks (DTMNs). The basic idea is to
autonomously learn unknown and possibly random mobility parameters and
to group mobile nodes with similar mobility patterns into the same cluster.

We introduce our proposed clustering algorithm for DTMN, which
undergoes the following steps. First, each node learns direct contact
probabilities to other nodes. It is not necessary for a node to store contact
information of all other nodes in the network. Second, a node decides to join or
leave a cluster based on its contact probabilities to other members of that
cluster. Since our objective is to group all nodes with high pair-wise contact
probabilities together, a node joins a cluster only if its pair-wise contact
probabilities to all existing members are greater than a threshold γ. A node
leaves the current cluster if its contact probabilities to some cluster
members drop below γ. Load balancing is an effective enhancement to the
proposed routing protocol. The basic idea is to share traffic load among
cluster members in order to reduce the dropping probability due to queue
overflow at some nodes. Sharing traffic inside a cluster is reasonable,
because nodes in the same cluster have similar mobility pattern, and thus
similar ability to deliver data messages.
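
A condensed Java sketch of the EWMA update and the γ-threshold join/leave test described
above (the α and γ values are illustrative assumptions):

import java.util.Map;

// Sketch: EWMA contact-probability update and the gamma-threshold join test.
public class ContactClustering {
    static final double ALPHA = 0.3;    // assumed EWMA weight
    static final double GAMMA = 0.6;    // assumed clustering threshold

    // EWMA: p <- alpha * observation + (1 - alpha) * p, where the observation is
    // 1.0 if the pair was in contact during this interval and 0.0 otherwise.
    static double ewma(double p, boolean contacted) {
        return ALPHA * (contacted ? 1.0 : 0.0) + (1 - ALPHA) * p;
    }

    // A node joins a cluster only if its contact probability to EVERY existing
    // member exceeds gamma; it leaves when some probability drops below gamma.
    static boolean mayJoin(Map<String, Double> contactProbToMembers) {
        return contactProbToMembers.values().stream().allMatch(prob -> prob > GAMMA);
    }
}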

4.HARDWARE REQUIREMENTS:
• System : Pentium IV 2.4 GHz.
• Hard Disk : 40 GB.
• Floppy Drive : 1.44 MB.
• Monitor : 15" VGA Colour.
• Mouse : Logitech.
• RAM : 256 MB.

5.SOFTWARE REQUIREMENTS:

• Operating System : - Windows XP Professional.


• Front End : - Asp .Net 2.0.
• Coding Language : - Visual C# .Net.
A Distributed Protocol to Serve Dynamic Groups
for Peer-to-Peer Streaming
Abstract:

Peer-to-peer (P2P) streaming has been widely deployed over the Internet. A streaming
system usually has multiple channels, and peers may form multiple groups for content
distribution. In this paper, we propose a distributed overlay framework (called SMesh) for
dynamic groups where users may frequently hop from one group to another while the total pool
of users remains stable. SMesh first builds a relatively stable mesh consisting of all hosts for
control messaging. The mesh supports dynamic host joining and leaving, and will guide the
construction of delivery trees. Using the Delaunay Triangulation (DT) protocol as an example,
we show how to construct an efficient mesh with low maintenance cost. We further study
various tree construction mechanisms based on the mesh, including embedded, bypass, and
intermediate trees. Through simulations on Internet-like topologies, we show that SMesh
achieves low delay and low link stress.

Algorithm Used:

Aggregation and delegation algorithm, Delaunay Triangulation (DT) Protocol

Existing System:

In a P2P overlay network, hosts are responsible for packet replication and forwarding. A P2P
network only uses unicast and does not need multicast-capable routers. It is, hence, more
deployable and flexible. Currently, there are two types of overlays for P2P streaming: tree
structure and gossip mesh. The first one builds one or multiple overlay tree(s) to distribute data
among hosts. Examples include application-layer multicast protocols.

The existing networking infrastructure is multicast capable. Emerging commercial
video transport and distribution networks make heavy use of IP multicasting. However, there
are many operational issues that limit the use of IP multicasting to individual autonomous
networks. Furthermore, only trusted hosts are allowed to be multicast sources. Thus, while it is
highly efficient, IP multicasting is still not an option for P2P streaming at the user level.

Proposed System:

In such applications, as peers may dynamically hop from one group to
another, it becomes an important issue to efficiently deliver specific contents to peers. One
obvious approach is to broadcast all contents to all hosts and let them select the contents.
Clearly, this is not efficient in terms of bandwidth and end-to-end delay, especially for
unpopular channels. Maintaining a separate and distinct delivery overlay for each channel
appears to be another solution. However, this approach introduces high control overhead to
maintain multiple dynamic overlays. When users frequently hop from one channel to another,
overlay reformation becomes costly and may lead to high packet loss.

In this paper, we consider building a data delivery tree for each group. To reduce tree
construction and maintenance costs, we build a single shared overlay mesh. The mesh is
formed by all peers in the system and is, hence, independent of joining and leaving events in
any group. This relatively stable mesh is used for control messaging and guiding the
construction of overlay trees. With the help of the mesh, trees can be efficiently constructed
with no need of loop detection and elimination. Since an overlay tree serves only a subset of
peers in the network, we term this framework Subset-Mesh, or SMesh.

Modules:

1. Peer to Peer Network Module:


In a P2P streaming system, the server (or a set of servers) usually provides multiple
channels. A peer can freely switch from one channel to another. Peers in the same
group share and relay the same streaming content for each other.

2. Distributed partition detection Module:


We use Delaunay Triangulation (DT) as an example. We propose several techniques to
improve the DT mesh, e.g., for accurately estimating host locations and for distributed
partition detection. Based on the mesh, we study several tree construction mechanisms
to trade off delay and network resource consumption. When hosts join or leave, the mesh
automatically adjusts itself to form a new mesh. The trees on top of it will then accordingly
adjust their tree nodes and tree edges. Also note that in SMesh a host may join as many
groups as its local resources allow. If a host joins multiple groups, its operations in different
groups are independent of each other.

Fig 1: Distributed partition detection between two adjacent triangles of peer nodes

3. Dynamic Joining Host Module:

A joining host, after obtaining its coordinates, sends a MeshJoin message with its
coordinates to any host in the system. The MeshJoin message is then forwarded
along the DT mesh based on compass routing. Since the joining host is not a member of
the mesh yet, it can be considered a partitioned mesh consisting of a single host. The
MeshJoin message finally triggers the partition recovery mechanism at a particular host
in the mesh, which helps the new host join the mesh.

4. Path Aggregation for QoS Provisioning Module:


Two independent connections across domains A and B are set up, which leads to high
usage of long paths and hence high network resource consumption. Furthermore, in the
traditional DT protocol, a host may have many children. However, a host often has a
node stress threshold K for each group, depending on its resources. To address these
problems, we require that the minimum adjacent angle between two children of a host
should exceed a certain threshold T. Consider a source s and a host u in the network.
Once u accepts a child, u checks whether its node stress exceeds K or whether the
minimum adjacent angle between its children is less than T. If so, it selects a pair of
children with the minimum adjacent angle and delegates the child farther from the source
to the other. Note that after aggregation, the overlay tree is still loop-free because hosts
are still topologically sorted according to their distances from the source.
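
A schematic Java fragment of this check-and-delegate rule (Host and the geometry helpers are
stubs we introduce for illustration, not SMesh's actual API; K and T are the thresholds named
above):

import java.util.List;

// Sketch of the aggregation/delegation rule: when node stress exceeds K or two
// children sit closer than angle T, the farther child is delegated to the nearer one.
public class Delegation {
    static final int K = 8;            // assumed node-stress threshold
    static final double T = 15.0;      // assumed minimum adjacent angle (degrees)

    static void onChildAccepted(Host u, Host source) {
        List<Host> children = u.children();
        Host[] pair = minAdjacentAnglePair(u, children);   // children with smallest angle
        if (children.size() > K || angle(u, pair[0], pair[1]) < T) {
            Host far = fartherFrom(source, pair[0], pair[1]);
            Host near = (far == pair[0]) ? pair[1] : pair[0];
            u.delegate(far, near);     // re-parent 'far' under 'near'; tree stays loop-free
        }
    }

    // Stubs standing in for the real mesh geometry and host bookkeeping.
    interface Host { List<Host> children(); void delegate(Host child, Host newParent); }
    static Host[] minAdjacentAnglePair(Host u, List<Host> cs) { throw new UnsupportedOperationException(); }
    static double angle(Host u, Host a, Host b) { throw new UnsupportedOperationException(); }
    static Host fartherFrom(Host s, Host a, Host b) { throw new UnsupportedOperationException(); }
}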

System Specifications:

Hardware Requirements:
SYSTEM : Pentium IV 2.4 GHz
HARD DISK : 40 GB
FLOPPY DRIVE : 1.44 MB
MONITOR : 15 VGA colour
MOUSE : Logitech.
RAM : 256 MB
KEYBOARD : 110 keys enhanced.
Software Requirements:
Operating system : Windows XP Professional
Front End : Java Technology
Tool : Eclipse 3.3
Staying Connected in a Mobile Healthcare System:
Experiences from the MediNet Project

Abstract:
A mobile healthcare system consists of a number of components networked together,
including patients and their healthcare providers. It is important in this type of system
for the patient to remain connected at all times even if one of the communication
components fails. This paper discusses the design of the MediNet system and shows
how it seamlessly handles connectivity issues between patients and their mobile
phones, between the healthcare meters and mobile phones, and between mobile
phones and web server components. The overall goal behind our design strategies is to
continue providing a high level of service to the patient in the face of communication
problems leading to improved acceptability and trust of the system by patients.
Proposed System:
One of the main objectives of our research with MediNet was to design a system that
would give the user a feeling of connectedness. Connectedness is viewed from two
different perspectives in this paper. The first perspective defines it as being logically,
coherently or intelligibly ordered or presented. In order to achieve logical
connectedness, the system’s us- ability was addressed: the usability of the meters, the
usability of the patient interface and the usability of the healthcare providers and care-
givers interface. The assumption here is that if the user has this feeling of
connectedness, he or she will trust the system and use it again. The second perspective
is from a technical standpoint where connectedness is defined as the state of being
connected, where a connection is a relation between two things or events.
Fig: Proposed Architecture

There are many connections in an mHealth system. Figure 2 shows the connections
within the MediNet system, namely:
(1) Connection between the patient and the mobile phone,
(2) Connection between the patient, their healthcare meters, and the mobile phone,
(3) Connection between the patient’s phone and some networked resource e.g., server, and
(4) Connection between the patient and their healthcare providers and caregivers.
Security is a major concern in any healthcare system and in the mobile context it is no
different. In the case of patients sending private or sensitive data to a remote server, it
is important to guard this data against eavesdroppers or external parties who can use
the information against the person. To address this issue some systems have used a
patient id instead of the patient’s name on all records so as not to associate the
information with the identity of a specific person.
Modules:
There are six main modules in the E-Knowledge in Health Care System:
• Web Application or Web Portal Module
• Mobile Networked Environment Module
• Patient Module
• Doctor Module
• Administrator Module
• General User Module
1. Web Application OR Web Portal Development Model:
The Web portal is an application that can be used by the patient’s healthcare providers
and other authorized individuals to view the patient’s information. Data is displayed in
various formats such as graphical and tabular views of readings. Thus, a patient’s
doctor can monitor a patient remotely and can contact the patient if the need arises.
The Web portal can also be used by authorized persons (e.g., relatives and caregivers)
to monitor the health of a patient and provide advice and support as necessary from a
distance
2. Mobile Networked Environment module:
Mobile applications (J2ME) that depend on network services may experience some form
of disruption when there is a lack of connectivity. This disruption may occur at any time
and represents a burden to the user. Research has shown that users like the feeling
of “always being on” and may become demotivated if interruptions happen too frequently.
The effects are even more pronounced if disruptions occur when the user is performing
a key task such as receiving feedback from a healthcare provider. Mobile healthcare
applications must take this into consideration and ensure that fail-safe procedures are
in place so that the burden to the user is reduced or in the best case, the user is not
even made aware of the disruption.
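
One common fail-safe pattern matching this description is store-and-forward with retry:
readings are queued locally and flushed when connectivity returns. A hedged Java sketch
(class and method names are ours, not MediNet's):

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: queue readings while offline; flush them when the connection is back.
public class StoreAndForward {
    private final Queue<String> pending = new ArrayDeque<>();

    // Called for every new meter reading; never blocks the user on the network.
    void submit(String reading) {
        pending.add(reading);
        flush();
    }

    // Try to drain the queue; on failure, keep the data for the next attempt.
    void flush() {
        while (!pending.isEmpty()) {
            String reading = pending.peek();
            if (!trySend(reading)) return;   // still offline: retry later
            pending.remove();                // confirmed sent
        }
    }

    boolean trySend(String reading) {
        // Stub: would POST the reading to the server and report success/failure.
        return false;
    }
}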
3. PATIENT MODULE:
Everyone needs medical attention at some time, so we allow every user
to register freely at any time.
The user registers by specifying details such as his/her name,
contact number, address, and mail id. After validation, the user will receive
a message regarding his/her membership.
After registration the user can Login to this site with his/her Unique Id, which is
provided by E-Knowledge in Health Care System and password.
To get an appointment with a doctor, the user must pay a minimum
amount through credit card/account.
The doctor list is categorized by specialization (e.g., HEART, SKIN CARE,
CHILD CARE) and location (e.g., T.NAGAR, VALASARAWAKKAM,
KODAMBAKKAM), so that the user can easily find a doctor for
his/her treatment.
The user can fix an appointment with a particular doctor by specifying a time
convenient for them. After this, they will be shown a confirmation
message about the timing they preferred.
A patient visits the doctor at the time specified in the appointment. E-
Knowledge in Health Care System maintains the prescription given by the
doctor for future use. Patients can view their prescriptions at any time.
Patients can cancel their appointments within a time limit. The time limit is
about two hours from the time they registered their appointments, and
their money will be refunded.

4. DOCTOR MODULE: Via Mobile:


Administrator does Doctor registration.
Every Doctor will have their own unique Id and Password with which, they will
login to this site.
After they log into this site, they will see their main form. From there, by
choosing the link, they can see their appointments. He/she can see new
appointments as well as previous appointments.
After attending the patient, the Doctor selects a particular patient Id for
prescription. In the prescription form, the Doctor enters the details of the
prescription and gives it to the particular patient, specifying the patient's
condition, the kind of symptom, and the dosage of the medicine.
For some patients, the Doctor may need to recommend other Hospitals for
future treatments; for this, the Health Consulting System site provides another
option for Doctors. Here, the Doctor is redirected to the Hospital Recommendation Form.
In the Hospital Recommendation Form, the Doctor searches for a Hospital that will
be suitable for a particular Patient who needs Hospitalization for future
treatments.
After choosing the Hospital, the Doctor sends the details about the Patient to
the person in charge of the recommended Hospital through mail, and the
patient will also receive a mail about the Hospital preferred by the Doctor for
his/her future treatments.

5. ADMINISTRATOR MODULE: Via Mobile:


A genuine person from the Administrator side will collect information about the
Doctor, such as his/her qualification, specialization, address, etc., for registration.
After filtering out invalid data, the Doctor's details will be uploaded to the Health
Consulting System site.
Before uploading these details, the Administrator will send a mail to the Doctor's
e-mail ID with his/her Id and Password.
The Administrator can also add new Hospitals, as specified by the Doctors
while recommending Hospitals for patients' further or future treatments.
The Administrator can view the Account information and can also view the
suggestion (feedbacks) given by different users of this site.
The Administrator is the one, who updates latest Health Tips provided in this
site.
6. GENERAL USER MODULE:
General Users are those who have not registered on this site. They can view
general information about the Doctors.
They can view the Health Tips.
They can give their suggestions about this site.
They can register themselves and become members of the Health Consulting
System.
Conclusion:
This paper identified the need for a mobile healthcare system to continue to provide a
high level of service to its users in the face of communication failures. It described the
design strategies used in MediNet to ensure that patients continue to use the system
despite failures that may occur when transferring data between healthcare devices and
mobile phones and between mobile phones and web server components. The intention
is to keep the patient feeling connected at all times leading to improved acceptability
and trust of the system by patients.

System Configuration:

H/W System Configuration:-

Processor - Pentium –III


Speed - 1.1 GHz
RAM - 256 MB (min)
Hard Disk - 20 GB
Floppy Drive - 1.44 MB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA

S/W System Configuration:-


• Operating System : Windows 95/98/2000/XP
• Application Server : Tomcat 5.0/6.x
• Front End : HTML, Java, JSP, J2ME
• Wireless Toolkit : J2ME Wireless Toolkit 2.5.2 (or) Nokia 5100 SDK 1.0
• Scripts : JavaScript
• Server-side Script : Java Server Pages
• Database : MySQL
• Database Connectivity : JDBC
Multicast Multi-path Power Efficient Routing in
Mobile ADHOC networks

Aim:

The aim of this project is to maximize the network lifetime and
minimize the power consumption during source-to-destination route establishment. The
proposed work aims to provide efficient power-aware routing considering real and
non-real-time data transfer.

Abstract:

This project presents a measurement-based routing algorithm to load
balance intra-domain traffic along multiple paths for multiple multicast sources. Multiple paths
are established using application-layer overlaying. The proposed algorithm is able to converge
under different network models, where each model reflects a different set of assumptions about
the multicasting capabilities of the network. The algorithm is derived from simultaneous
perturbation stochastic approximation and relies only on noisy estimates from measurements.
Simulation results are presented to demonstrate the additional benefits obtained by
incrementally increasing the multicasting capabilities. The main application of mobile ad hoc
network is in emergency rescue operations and battlefields. This project addresses the problem
of power awareness routing to increase lifetime of overall network. Since nodes in mobile ad
hoc network can move randomly, the topology may change arbitrarily and frequently at
unpredictable times. Transmission and reception parameters may also impact the topology.
Therefore it is very difficult to find and maintain an optimal power-aware route.

EXISTING SYSTEM:

Existing work on multicast routing with power constraints, as reported in the
literature, proposes a solution to optimally distribute the traffic along multiple multicast
trees. However, the solution covers only the case when there is one active source in the
network. In addition, it is assumed that the gradient of an analytical cost function is
available, which is continuously differentiable and strictly convex. These assumptions
may not be reasonable due to the dynamic nature of networks.
Even though they approach the problem under a more general architecture,
practicality of these solutions is limited due to the unrealistic assumption that the
network is lossless as long as the average link rates do not exceed the link capacities.
Moreover, a packet loss is actually much more costly when network coding is employed
since it potentially affects the decoding of a large number of other packets. In addition,
any factor that changes the min-cut max-flow value between a source and a receiver
requires the code to be updated at every node simultaneously, which brings high level
of complexity and coordination.
PROPOSED SYSTEM:
This paper presents a distributed optimal routing algorithm to
balance the load along multiple paths for multiple multicast sessions. Our
measurement-based algorithm does not assume the existence of the gradient of an
analytical cost function and is inspired by the unicast routing algorithm based on
Simultaneous Perturbation Stochastic Approximation (SPSA). In addition, we address
the optimal multipath multicast routing problem in a more general framework than
having multiple trees. We consider different network models with different
functionalities.
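
For reference, the core SPSA step estimates the gradient from just two noisy cost
measurements under a simultaneous random perturbation of all coordinates; a minimal Java
sketch (the step sizes a and c are illustrative):

import java.util.Random;
import java.util.function.Function;

// Minimal SPSA step: estimate the gradient of a noisy cost from two
// measurements under a simultaneous +/- perturbation of all coordinates.
public class Spsa {
    static double[] step(double[] theta, Function<double[], Double> noisyCost,
                         double a, double c, Random rnd) {
        int n = theta.length;
        double[] delta = new double[n], plus = new double[n], minus = new double[n];
        for (int i = 0; i < n; i++) {
            delta[i] = rnd.nextBoolean() ? 1 : -1;   // Rademacher perturbation
            plus[i] = theta[i] + c * delta[i];
            minus[i] = theta[i] - c * delta[i];
        }
        double diff = noisyCost.apply(plus) - noisyCost.apply(minus);
        double[] next = new double[n];
        for (int i = 0; i < n; i++)
            next[i] = theta[i] - a * diff / (2 * c * delta[i]);  // per-coordinate estimate
        return next;
    }
}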
With this generalized framework, our goal is to examine the benefits observed by the
addition of new capabilities to the network beyond basic operations such as storing and
forwarding. In particular, we will first analyze the traditional network model without any IP
multicasting functionality where multiple paths are established using (application-layer) overlay
nodes. Next, we consider a network model in which multiple trees can be established. Finally,
look at the generalized model by allowing receivers to receive multicast packets at arbitrarily
different rates along a multicast tree. Such an assumption potentially creates a complex
bookkeeping problem since source nodes have to make sure each receiver gets a distinct set of
packets from different trees

We use REMiT (an algorithm for refining an energy-efficient source-based multicast tree) to
refine an existing multicast tree into a more energy-efficient one. As a distributed
algorithm, REMiT uses a minimum-weight spanning tree (MST) or single-source shortest path tree
(SSSPT) as the initial solution and improves the multicast tree energy efficiency by switching
some tree nodes from their respective parent nodes to new corresponding parent nodes. The
selection of the initial tree is dependent on the energy model.

Modules:

1. Route Discovery:
The first criterion in wireless medium is to discover the available routes and establish
them before transmitting. To understand this better let us look at the example below.
The below architecture consists of 11 nodes in which two being source and destination
others will be used for data transmission. The selection of path for data transmission is
done based on the availability of the nodes in the region using the ad-hoc on demand
distance vector routing algorithm. By using the Ad hoc on Demand Distance Vector
routing protocol, the routes are created on demand, i.e. only when a route is needed
for which there is no “fresh” record in the routing table. In order to facilitate
determination of the freshness of routing information, AODV maintains the time since
when an entry has been last utilized. A routing table entry is “expired” after a certain
predetermined threshold of time. Consider all the nodes to be in the position. Now the
shortest path is to be determined by implementing the Ad hoc on Demand Distance
Vector routing protocol in the wireless simulation environment for periodically sending
the messages to the neighbors and the shortest path.
In the MANET, the nodes are prone to undergo change in their positions. Hence the
source should be continuously tracking their positions. By implementing the AODV
protocol in the simulation scenario it transmits the first part of the video through the
below shown path. After few seconds the nodes move to new positions.

2. Route Maintenance:
The next step is the maintenance of these routes which is equally important. The
source has to continuously monitor the position of the nodes to make sure the data is
being carried through the path to the destination without loss. In any case, if the
position of the nodes changes and the source does not take note of it, then the packets
will be lost and will eventually have to be resent.
The modified proposed algorithm, restated as idiomatic Java-style code (g(a) is the
remaining-energy fraction of node a; powerCost, v and f′ are the cost terms of the
original formulation):

Node a = source;                              // A := S
double threshold = 0.5;                       // forward only via nodes with >= 50% energy
final double cutoff = 0.10;                   // never relax the bar below 10%
boolean delivered = false;
while (!delivered) {
    boolean progressed = false;
    while (!delivered && g(a) >= threshold) {
        Node b = a;
        // pick the neighbor of b minimizing pc(b,a) = powerCost(b,a) + v(s) * f′(a)
        a = minCostNeighbor(b);
        send(b, a);                           // forward the message one hop
        progressed = true;
        delivered = (a == destination);       // until A = D: destination reached
    }
    if (!delivered) {
        if (threshold > cutoff) threshold /= 2;   // halve the threshold and retry
        else if (!progressed) break;              // delivery failed
    }
}

3. Data Transmission:
The path selection, maintenance and data transmission are consecutive processes which happen
in split seconds in real-time transmission. Hence the paths allocated earlier are used for data
transmission: the first, second and third paths selected previously are each used in turn, and
in each case the data is transferred through the highlighted path.

4. Minimum-energy multicast tree Module:

Our main objective is to construct a minimum-energy multicast tree rooted at the source node.
We explore the following two problems related to energy-efficient multicasting in WANETs using
a source-based multicast tree: wireless multicast and the concept of wireless multicast
advantage. Because the problem of constructing the optimal energy-efficient
broadcast/multicast tree is NP-hard, several heuristic algorithms for building a source-based
energy-efficient broadcast/multicast tree have been developed recently. Wieselthier et al.
presented the BIP/MIP algorithm, which is a centralized source-based broadcast/multicast
tree-building algorithm.

System Configuration
H/W System Configuration
Processor - Pentium –III
Speed - 1.1 GHz

RAM - 256 MB (min)

Hard Disk - 20 GB

Floppy Drive - 1.44 MB

Key Board - Standard Windows Keyboard

Mouse - Two or Three Button Mouse

Monitor - SVGA

Software Requirements:-

Language : Java RMI, SWING, J2ME

Mobile toolkit : J2ME Wireless Toolkit 2.5.2

Development Tool : My Eclipse 3.0

O/S : WIN2000/XP
